Building my own NAS/Media center – Part 2 – Hardware

To select the hardware, I mostly followed indications from various sources. This blog got me started on the basics, together with the follow-up by the same author. Most of the open questions and potential troubles were solved through aggressive googling, but above all through the precious contributions of a friend of mine, Hans.

Here is a quick overview of the various components I bought, together with the unit price, the shop I used, and the general reasoning behind each choice.

Motherboard: Zotac Z68-ITX WiFi Supreme

Zotac Z68-ITX Supreme

Price: 110.99 EUR on salland.eu

This little mini-ITX motherboard has everything I need. Extremely compact and yet very powerful.

Zotac Z68-ITX Supreme Hardware

It supports Intel Core i3/i5/i7 processors, so it leaves my options open when it comes to processor choice. It has an on-board NVIDIA GPU (great for experiments with CUDA) and support for the on-chip Intel HD graphics provided by my Intel Core i3. This is commonly known as a hybrid graphics system, and it can be a source of trouble with Linux. More details will follow in subsequent posts.

There are plenty of connection options on the back panel.

Zotac Z68-ITX back panel

Notice the presence of many different video output ports. This will become important later on, during graphics setup. There are also plenty of USB ports, both on the panel and on-board, and an eSATA port.

Processor: Intel Core i3 3.3 GHz tray

Intel Core I3

Price: 107.99 EUR sold by salland.eu

A Core i3 is probably all I need. If I find it inadequate, the motherboard can take up to an i7. I probably erred in choosing a high clock frequency, as the higher it is, the more heat is dissipated. I was surprised by the shape and weight of the processor. The last time I held a processor in my hands, it was a bulky but light Pentium II 333 MHz cartridge. The i3 is small, tough and heavy.

I decided to take the tray version. The boxed version includes the cooler/fan (or so I’ve been told), but I was not sure of the cooler size, and I wanted a low profile, low noise one. Vertical space in the case is at a premium.

Processor cooling:  Scythe Shuriken Rev.2 – SCSK-1100

Scythe Shuriken Rev. 2

Price: 28.95 + 19.99 shipping = 48.94 GBP = 57.07 EUR sold by ADMI Limited

This processor fan is dead silent and low-profile, making it the perfect candidate for an HTPC case form factor.

Thermal paste: Arctic Silver 5 3.5g Thermal Paste

Arctic Silver

Price: 6.05 + 5.02 = 11.07 GBP = 12.91 EUR sold by ADMI Limited

This is the best thermal paste you can get, according to my friend Hans. The Scythe came with a small amount of unnamed paste, but I preferred to have my own Arctic Silver syringe.

Case: Techsolo TC-2200 Casing M-ATX HTPC 350 W Aluminium

Techsolo mediacenter

Price: 80.43 + 14.75 shipping = 95.18 GBP = 110.99 EUR from ALB Computer Germany via Amazon.co.uk

I chose this case for many reasons: first, it was cheap. Second, it included a power supply already sized appropriately for my needs. Third, its small form factor. There are not many drive bays available (only two: one 3.5″ and one 5.25″), but I am not planning to build a disk farm.

Techsolo TC-2200

The case is also quite stylish and comes with plenty of features, including a remote control (which I am not using at the moment). Overall, I consider this case excellent for my needs.

Memory: Crucial SODIMM 2 GB×2, 204-pin, DDR3 PC3-10600, CL=9, 1.5 V

Crucial SO-DIMM

Price: 27.40 + 7.02 = 34.42 GBP = 40.14 EUR sold by mopodo-uk

I did not have particular requirements for the RAM; I just wanted a decent amount. 8 GiB was probably too much. I opted for the same amount I have on my laptop, and I will upgrade later if I really need to. RAM is cheap, but less RAM is cheaper.

Hard drive: WD Caviar Green 3000 GB, SATA, 64 MB, 5400 RPM

WD Caviar Green

Price: 121.99 EUR from salland.eu

I chose a Western Digital Green with a whopping 3 terabytes of space. After being starved for space by the SSD, I don’t want to risk space exhaustion anymore.

Case fan: Scythe Glide Stream (black), 120 mm, 1000 RPM

Scythe Glide Stream

Price: 13.00 EUR from salland.eu

I needed a good fan to improve air circulation in the case. I chose the big, dead-silent Scythe Glide Stream, which fits perfectly over the bottom vent of the case.

Bluetooth dongle: Konig Micro USB

Bluetooth dongle

Price: 13.00 EUR from salland.eu

A simple Bluetooth dongle to connect a wireless mouse and keyboard. Also useful if I ever implement proximity detection, so that the media center wakes up when I come back home.

OS hard drive: Unnamed Fujitsu

Fujitsu

Price: Unknown. Recovered from my Mac after replacement.

This is the old disk I extracted from my Mac when I replaced it with an SSD. I swapped it out over worries about hardware failure, but apparently the disk is fine, and my woes at the time may have been due to battery issues. The disk is only 160 GB, which is enough for the OS and any additional software I might install.

The reason for the powerstrips will become clear in later posts.

Total price for the system and final remarks

Excluding the last item, the final price for the system is 588.08 EUR. The basic Mac mini is priced at 649.00 EUR, has a smaller hard drive (500 GB), no NVIDIA card, and less software flexibility. I am overall extremely pleased with the result.

As a final note, I am deeply satisfied with the online shop salland.eu. They have excellent products in stock, ship quickly, and provide good, fast email support. I can confidently recommend them, if you can deal with the site being in Dutch.

In the next post, I will start assembling the hardware.

Posted in Hardware.

Building my own NAS/Media center – Part 1 – Introduction

A few weeks ago, I took a look at my MacBook’s disk usage and got a bit worried. The great little piece of software Disk Inventory X made it extremely clear that my work directory was a bit too packed with pictures, and checking the total disk usage left me with an uneasy feeling of scarcity. Such is the life of the “lucky” owners of SSD disks: fast, small, and unreliable. In fact, I had had some worrying events on my laptop, like requiring multiple power cycles before the Apple logo would finally show, or getting stuck while installing updates. To be fair, it could also be the battery’s fault (which OSX is asking me to service), but these factors combined created a worry, and an itch I had to scratch.

Building my own Network Attached Storage system

Clearly I needed a NAS where I could put my stuff, keeping the laptop light, and access it through the local network. I checked around for interesting products, and found some: I really like Drobo, but to me the absolute best is the Synology product line. Yet the idea of having a dedicated product for nothing but storage started to feel a bit restrictive. I have a TV with an HDMI input. Why can’t I play Minecraft on it? The NAS project slowly became a NAS + “media center” project, and I embraced the complexity. I quickly realized it was impossible to find this kind of product off the shelf, so I started planning a self-assembled build.

The last time I built my own computer, the processor available was a Pentium II 333 MHz. It was as cool as it could get, with its cartridge-like shape and its holographic watermark. Despite having previous experience in hardware assembly, it took me a couple of days to build and even more to install, due to driver problems with Linux. Overall it was a good experience, but I was more interested in the software side. A few years later, I abandoned the Intel/Linux world and switched to PowerPC/OSX, and the rest is history. Things just work in the Mac world, and I like it that way in some circumstances, when my goal is to use a computer with minimal fuss. Yet I am not an over-zealous Mac user, especially after they decided to drift more and more towards markets I either don’t care about (iPhone stuff) or don’t support (App stores). I also recognize a chance for flexibility and ad-hoc tinkering when I see one, and I’m all for it.

“Stefano” – I hear you say – “Why don’t you buy a Mac mini with an additional disk and get over the problem? It’s a full computer, it’s small, it has OSX and it does all you need”. Indeed this is true, but it’s also more expensive. Moreover, the needs are different: I don’t want a closed system. I need a flexible, highly configurable system that allows me to do both network storage and general computing in any potential software direction, and to change the hardware when needed, adding new disks or upgrading old ones. Having some additional nice features, like an NVIDIA chip for some grotesque CUDA experiment, is a plus. But where could I start, with no experience in modern hardware?

Re-learning the world of consumer hardware

Not having a lot of hardware experience, I had to re-learn or catch up on a few things: processor types, socket types, RAM types, motherboard types, case types, and power supply requirements. For example, the processor world is confusing at best. Here are some of the questions I had:

  1. Is an LGA 1155 socket compatible with LGA 1156? No, but they are compatible as far as cooler mounting is concerned.
  2. How many processors are available today, and which one should I get? Too many. In addition to the plethora of i3/i5/i7 models, Intel has also revived the Pentium name for recent processors.
  3. Sandy Bridge, Ivy Bridge? OK, bored now.

Overwhelmed by the details, I asked a friend for help and checked some blogs. He gave me an extremely useful crash course in modern hardware, and pointed me to an extremely useful price-comparison website, tweakers.net. It’s in Dutch, but having spent a year in the most badass country in the world, I understand enough to use it. As for the blogs, an authentic trove of information came from this one. In fact, I almost built his configuration, with appropriate changes.

In the next part, I will detail the products I bought and their price.

Posted in Hardware.

The von Neumann method revisited: a letter from a reader

It’s always a source of extreme satisfaction when someone contacts you about a post you made, and even more so when the mail actually answers an open question I had about generalizing the von Neumann method. If you don’t recall what it’s about: it’s a way of getting a fair coin throw from an unfair coin. I wondered if it was possible to generalize it to any number of faces (a coin being a two-faced die), but left the issue unsolved and moved on.

A few days ago, I received a mail from Albert Rafetseder, which I copy here verbatim except for minor presentation adjustments. Thank you, Albert, for the interest and the cool research!

Hello Stefano,

by chance I came across your blog (via some Python PEP discussion you participated in), finding your post from August on the von Neumann method. I have pondered the question of a von Neumann die, too, and my solution is this: Observe that the von Neumann coin method can extract an unbiased coin flip from those strings of flips of length 2 that are permutations on the set S of all possible outcomes of the biased coin S = {0, 1}:

Permutations P(S) = {01, 10}.

There are Length(S)! possible permutations, i.e. the extracted unbiased “random gadget” is a coin again.

If you consider a die with S = {1, 2, 3, 4, 5, 6}, then P(S) = {123456, 123465, 123564, 123546, … }, and the extracted unbiased random gadget is a Length(S)! = 6! = 720 facet thing. Unless, of course, you simply group the permutations by some property to reduce the number of unbiased results, e.g. grouping all 120 of them that had a “1” in the first roll, and so on for every s in S.

By different grouping, I believe you could fabricate an unbiased random gadget for every divisor of 720, too; similarly, you could use a biased n-sided die rather than a six-sided one. And maybe you could increase the efficiency of the scheme by taking into account how the pips on opposing sides usually add up to seven ([that is:] probabilities of opposing sides of a real die are somewhat complementary, although they usually do not complement in an absolute sense, i.e. not q=1-p). This conjecture is problematic: first, I have nothing to its support other than that I think the physics would work this way in a real die. Second, this (sub-)complementary property does not hold in general for a simulated die as the probabilities for each side could be anything. Third, no joy for “dies” with an odd number of sides, obviously.

As it often goes, you and I are not the first people to ponder this question. Here are other people’s approaches: this one and especially this one, which lists a rich body of prior work as well.

Also, optimizations of von Neumann’s coin method exist that don’t throw away so many 00 or 11 results. [This one] has a multi-level method for extracting unbiased bits from longer strings of flips, indeed asking at the very end, “Now, how do you simulate an unbiased die with a biased die?” (Well, now we know.) This is a collection of further reading material on Stack Exchange. I haven’t read this one, but it seems to take up the efficiency/optimization idea too.

After stealing so much of your valuable time, allow me to mention one further paper that describes computations that can be done using random numbers of unknown probability distribution. I personally find late Philippe Flajolet’s work always worth a read.

With that, thank you for your blog post reminding me of this interesting topic (and stimulating me to write up my thoughts on it).

Best regards,
Albert.

I will hopefully explore this in more detail, with actual code, after my next series is finished. I built a NAS from scratch and have a long series of posts ready to ship starting next month. I want to keep the post chain uninterrupted, but I really wanted to publish your mail as soon as I could. For this reason, I couldn’t research your insights any further, but they are definitely worth exploring in future posts. Stay tuned!
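As a quick illustration of the grouping Albert describes (a sketch of my own, not his code): roll the biased die n times, accept only sequences that are permutations of all n faces, and group by the first roll. Since every permutation has the same probability (the product of all the face probabilities), the groups are equally likely.

```python
import random

def biased_die(probabilities, rng):
    """Roll a biased die: face i comes up with probability probabilities[i]."""
    r = rng.random()
    cumulative = 0.0
    for face, p in enumerate(probabilities):
        cumulative += p
        if r < cumulative:
            return face
    return len(probabilities) - 1  # guard against floating point leftovers

def fair_roll(probabilities, rng):
    """Generalized von Neumann: roll n times; if the outcomes form a
    permutation of all n faces, return the first roll, otherwise retry.
    Every permutation has probability p_0 * p_1 * ... * p_{n-1}, so
    grouping by the first face gives n equally likely groups."""
    n = len(probabilities)
    while True:
        rolls = [biased_die(probabilities, rng) for _ in range(n)]
        if sorted(rolls) == list(range(n)):
            return rolls[0]

# A heavily biased 3-sided die, yet the extracted rolls come out uniform.
rng = random.Random(42)
counts = [0, 0, 0]
for _ in range(30000):
    counts[fair_roll([0.5, 0.3, 0.2], rng)] += 1
print([round(c / 30000, 3) for c in counts])  # each close to 1/3
```

Note how wasteful the scheme is: with these probabilities, only 3! × 0.5 × 0.3 × 0.2 = 18% of the triples are accepted.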

Posted in Probability.

Old CentOS “4 is not a valid release or hasnt been released yet” error

If you have to handle an old CentOS 4 machine and you try to use yum to get some packages, you will get the error “4 is not a valid release or hasnt been released yet”. This is because CentOS 4 is end-of-life, and the default repo no longer provides updates. Yet you might still be interested in getting the latest packages released for the end-of-life release.

You can solve the problem by switching to the vault repository, as follows:

  1. open /etc/yum.repos.d/CentOS-Base.repo
  2. comment out all the mirrorlist entries
  3. uncomment and set all the baseurl entries to http://vault.centos.org/4.9/os/$basearch
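After the edit, the [base] section of CentOS-Base.repo ends up looking roughly like this (a sketch: section names and the exact commented-out mirrorlist line depend on your file):

```ini
[base]
name=CentOS-4.9 - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://vault.centos.org/4.9/os/$basearch/
```

The same change applies to the other sections (updates, extras, and so on), each pointed at the corresponding vault directory.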


Posted in Linux.

Git hangs at clone? It could be a Type of Service issue.

Today I was trying to download some code via git, and I ran into a strange problem. When I ran git clone, it simply reported cloning into ‘repo’ and got stuck there until timeout. Same thing for pull. I could not understand what was happening, and it took me a while to figure out, but here is the solution, at least to my problem. Add this to your .ssh/config:

Host *
  IPQoS 0x00

What was the problem? Here are the symptoms:

  • git worked on my NAS (Linux, connected to the router via cable), but not from my laptop (OSX, connected via WiFi)
  • ssh worked from the laptop. I could ssh to the NAS and to other external ssh machines.
  • However, I observed that interactive ssh behaved strangely when I ssh’d to an external machine. For example, if I asked for an ls, it kind of got stuck, and I had to press Enter every time to get further data. It was like interacting through a constant “more”.
  • Using GIT_TRACE, I found out that the ssh negotiation completed successfully, and that it got stuck at invoking git-upload-pack via ssh.

Google didn’t help. I changed versions of ssh and git, and checked my router for strange setups, but nothing solved it. Until I smelled something off about the way packets were exchanged. Don’t ask me how; I don’t know. I just had the feeling that the strange “press Enter to get more stuff” behavior had something to do with it. The fact that the laptop had the problem, while the NAS didn’t, made me think it was either an OS issue or a NAT configuration issue.

So I started checking the NAT angle, adding NAT to the Google query, and here I found a post by a Dane who had similar troubles, apparently caused by the router messing up due to Quality of Service. This sounded just about right: Quality of Service influences how packets are prioritized for routing. I kept searching until I found this other post, with the solution given above.

It was a fun hour.

Posted in Linux, MacOSX.

A raytracer in python – part 6: cameras

In the latest commit for the raytracer, I added cameras. The design changed so that the camera object is now responsible for rendering. Actual cameras are specializations of an abstract class BaseCamera, which holds common information about positioning. BaseCamera is then specialized into two concrete classes:

  1. PinholeCamera is a camera where rays are shot as diverging from a single point, called the eye_point. This allows perspective, which was not present previously, as rays were emerging from the ViewPlane pixels.
  2. LensCamera is a camera that simulates depth of field, that is, focus/out-of-focus. Contrary to the PinholeCamera, where everything is in focus, LensCamera allows selective focusing. Objects that happen to be on the “focal plane” are in focus, while objects outside it (either closer to or farther from the camera) show the less defined details proper of an out-of-focus object. To achieve this effect, we need the random sampling on a disk implemented in the previous post.

The following picture shows how LensCamera performs. A set of hemispheres is deployed along a line. The camera is above them, slightly angled and with a roll angle appreciable against the horizon. In all three cases, the orange central sphere is focused, as the focal plane has been set to fall on the sphere’s position. Note how the other objects are all in focus for the pinhole camera (left picture), which has no depth of field by construction, and become more out of focus as the lens size increases (1.0 in the center picture, 5.0 in the right one).

Focusing

From left to right, PinholeCamera, LensCamera with lens size 1.0, LensCamera with lens size 5.0

Other cameras are technically possible: the book goes further, deploying fisheye and stereoscopic cameras, but I am not interested in them. I think the pinhole and lens cameras are flexible enough for quality renderings and for my desire to learn.

One important feature of the camera system is that it requires the definition of local coordinates on the camera itself. The three vectors defining this set of coordinates, called u, v, w in the book, are obtained by building an orthonormal basis using the cross product between the observation vector (the vector between the “eye” of the camera and the look_at point) and an “up” vector, our default being in the same direction as the y axis. The cross product of these two vectors (observation and up) produces the third remaining vector of the orthonormal basis centered on the camera. However, if the camera looks straight up or straight down, the cross product is zero and we obtain a singularity, losing one degree of freedom (a condition also known as gimbal lock). The book proposes to detect this condition and treat it accordingly, either by overriding the specification and setting the vectors to an arbitrary, well-defined alternative, or by “juggling” the up vector out of alignment so that the third vector is still defined. I went for a third option: ignore the problem. I am not going to use gimbal-locked configurations for now, but it’s definitely an entry for the todo list.
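For reference, the basis construction described above can be sketched in a few lines of plain Python (the vector helpers and function names are mine, not necessarily the raytracer’s actual API):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    n = math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
    return (a[0] / n, a[1] / n, a[2] / n)

def camera_basis(eye, look_at, up=(0.0, 1.0, 0.0)):
    """Build the camera's (u, v, w) orthonormal basis.
    w points from look_at back towards the eye; u and v span the view
    plane. If the observation vector is parallel to up, cross() returns
    the zero vector and normalize() divides by zero: the gimbal lock
    case discussed above."""
    w = normalize(sub(eye, look_at))
    u = normalize(cross(up, w))
    v = cross(w, u)  # already unit length, since w and u are orthonormal
    return u, v, w

u, v, w = camera_basis(eye=(0.0, 2.0, 5.0), look_at=(0.0, 0.0, 0.0))
```

The eye and look_at values here are arbitrary example coordinates, chosen only to exercise the construction.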

With this post, I take a temporary break from the raytracing business. I may add optical effects such as reflections, refractions, materials, and lights, but the number of rays that must be propagated for these effects to show tends to be very high. I want to venture into CUDA, so I will switch my attention to CUDA programming from now on, integrate it with the raytracer later, then go back to light effects at a later stage. I will implement light effects first in Python, then use CUDA to achieve the same results. My aim is to have fun, test CUDA/C/Python integration, compare performance, and provide a fully Python raytracer with optional C/CUDA high-performance code for the same task. For CUDA tinkering, I will switch back to my old friend, the Mandelbrot set.

Posted in Python, Raytracing.

How to convert a QString to a unicode object in Python 2?

I had this problem to solve, and I tried to find the safest way. This program illustrates the solution:

from PyQt4 import QtCore, QtGui
import sys

app = QtGui.QApplication(sys.argv)
ed = QtGui.QLineEdit()

def editingFinished():
    # text() returns a QString, which is unicode-aware
    print type(ed.text())
    # toUtf8() returns a QByteArray: the unicode string encoded in utf-8
    print type(ed.text().toUtf8())
    # Since unicode() accepts a sequence of bytes, the safest and fully
    # controlled way of performing the transformation is to pass the encoded
    # sequence of bytes, and tell unicode() it is utf-8 encoded
    print unicode(ed.text().toUtf8(), encoding="UTF-8")

QtCore.QObject.connect(ed, QtCore.SIGNAL("editingFinished()"), editingFinished)
ed.show()

app.exec_()

So the solution is

unicode(qstring_object.toUtf8(), encoding="UTF-8")

Maybe there’s another, simpler way that is also safe, but for now this solution is good enough.

Posted in Python.

Fair throw from an unfair coin

Imagine you are given an unfair coin. You don’t know how unfair it is, nor which side (heads or tails) comes up more frequently than the other. How do you perform a fair throw with it?

Enter the von Neumann method. It’s really simple: throw the coin twice. If the results are the same, discard them; if the results are different, take the first one.

How does it work? It’s rather easy. Suppose the coin returns heads 90% of the time, and tails the remaining 10%. We give actual probabilities here for the sake of discussion, but you don’t need to know how unfair the coin is in order to apply this method. If you throw the coin twice:

  • the probability of throwing heads twice is 0.9 * 0.9 * 100 = 81%
  • the probability of throwing tails twice is, of course, 0.1 * 0.1 * 100 = 1%
  • the probability of throwing tails followed by heads is 0.1 * 0.9 * 100 = 9%
  • and finally, the probability of throwing heads followed by tails is 0.9 * 0.1 * 100 = 9%.

You can see that the probabilities of the mixed events (tails-heads and heads-tails) are the same. If you discard the homogeneous cases (heads-heads and tails-tails), accept only the mixed results, and pick one of the two throws as the result, you will obtain heads and tails with the same probability, 50/50: a fair coin. The only problem with this method is that you may have to discard a very large number of throws. The method will be slower to yield a result, but the result is guaranteed to be fair. In the limit of a completely unfair coin that returns the same face every time, you will never get a result. I wonder how this can be generalized to an unfair n-faced die.
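The method is only a few lines of code; a quick simulation (a sketch of mine, using a 90/10 coin) confirms the 50/50 behavior:

```python
import random

def biased_flip(p_head, rng):
    """One throw of the unfair coin."""
    return 'H' if rng.random() < p_head else 'T'

def fair_flip(p_head, rng):
    """Von Neumann's trick: throw twice, discard HH and TT,
    otherwise return the first throw."""
    while True:
        first = biased_flip(p_head, rng)
        second = biased_flip(p_head, rng)
        if first != second:
            return first

# Even with a 90/10 coin, the extracted flips come out 50/50.
rng = random.Random(0)
flips = [fair_flip(0.9, rng) for _ in range(20000)]
print(flips.count('H') / len(flips))  # close to 0.5
```

With this bias, only 2 × 0.9 × 0.1 = 18% of the pairs produce a result, which illustrates how many throws get discarded.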

Posted in Probability.

Random number generator hardware

Suppose you want to generate a large number of random dice rolls for a computer program. How do you do it? With a robotic dice-rolling machine, of course.

Why would anyone need such a device, I hear you ask? Well, software random number generators are technically not perfectly random, and if you have to reassure a crowd of people complaining about the randomness of your rolls, this solution is straightforward, creative, and a real pleasure to read about in its gory and unexpected technical issues.

Posted in Hardware, Probability, Statistics.

Why Gaia?

Choosing the name of my new blog was a problem I worked on for some time. I wanted to stay within my domain, forthescience.org, and I wanted the blog to be mostly about simple living, outdoor activities, beekeeping, and so on, while maintaining a scientific approach to the presentation of these topics. I therefore needed a central theme embracing their broad spectrum.

As humans, we are just the latest gimmick in a big show that started billions of years ago. Many species have come and gone during this time, and many more will in the future. We humans apparently have little control over the complexity of our environment, precisely because this complexity emerges from an extremely large set of interactions and feedbacks we have little knowledge of or control over. Additionally, these interactions can be extremely non-linear, meaning that they may either resist change, or run off wildly under even small perturbations of the factors they depend on.

I first heard about the Gaia hypothesis when I was 13, in a PC game called SimEarth. The central idea is that planet Earth can be considered a self-regulating “superorganism”, arising from the dependencies between the biota (plants, animals, bacteria, fungi, algae) and the inorganic environment in which they live (atmosphere composition, temperature, water level and acidity). This hypothesis, developed mainly by James Lovelock in the 70s, found more and more supporting evidence, and triggered a better understanding of the complexity of the Earth system. Experiments such as Biosphere 2 made rather clear how hard a stable, self-sustaining system is to establish, and how many subtle interactions are hidden and unexpected. This network of interactions, on a global scale, is what Lovelock calls “Gaia”.

I must admit I have read very little of Lovelock, and of what I read, I don’t fully agree with some of his personal opinions, in particular about the importance of computational modeling and the actions one can take. Yet the view of Gaia as a complex system of interactions is well established, factual, and verified by experiments both in the field and in computational models.

That said, I decided to explore a very small subsystem of these interactions at a personal level. Bees are important pollinators that allow plants to reproduce, and producers of useful substances such as honey, wax, and propolis; propolis and wax are used in woodworking, another dreamy interest of mine (dreamy because I lack the tools and the space); and growing an extremely tiny but pleasant part of my diet is a rewarding and relaxing activity, through which I learned the importance of soil characteristics, the different chemical and physical requirements of different species of plants, techniques such as hydroponics, aquaponics, and mushroom growing, and integrated cultivars that take advantage of dependencies among species to improve production. I also took great interest in Jared Diamond’s Collapse, and its account of Tikopia as a long-established, self-sustained islet in the Pacific.

Keeping bees, growing plants, building gardens, and working wood are difficult tasks that require knowledge of biology, mathematics, chemistry and physics, in addition to artistic and practical skills. This blog aims at exploring some of these topics, disentangling a tiny section of this snarl of interactions for fun and cultural profit.

Posted in Opinion, Personal.