Python and memory fragmentation

If you use CPython on 32 bit architectures, you may encounter a problem called memory fragmentation. It is more likely to happen on Windows for reasons that will soon be clear, but it is not Windows-exclusive. Nor is it an exclusively Python problem, but it tends to occur more often in CPython due to its intrinsic memory allocation strategy.

When you dynamically allocate memory in C, you do so in a particular area of the Virtual Memory: the heap. A requirement of this allocation is that the allocated chunk must be contiguous in virtual memory. CPython puts extreme stress on the heap: all objects are allocated dynamically, and when they are freed, a hole is left in the heap. This hole may be filled by a later allocation if the requested memory fits in the hole, but if it doesn't, the hole remains until something that fits is requested. On Linux, you can follow the VM occupation with this small python script

import sys
import subprocess

# one cell per megabyte of the 4 GB address space (16 rows x 256 columns)
mmap = [' '] * (16 * 256)
out = subprocess.check_output(["/usr/bin/pmap", "-x", "%d" % int(sys.argv[1])])
for i in out.splitlines()[2:-2]:
	values = i.split()[0:2]
	start = int(values[0], 16) / 2**20
	end = start + (int(values[1]) * 1024) / 2**20
	for p in xrange(start, end + 1):
		mmap[p] = '*'

for row in xrange(16):
	print hex(row) + " | " + "".join(mmap[row * 256:(row + 1) * 256])

On Windows, the great utility VMMap can be used to monitor the same information.

Given the above scenario, the Virtual Memory space can eventually become extremely fragmented, depending on the size of your objects, their order of allocation, whether your application alternates between allocating large chunks of memory and small python objects, and so on. As a result, you may not be able to perform a large allocation, not because you are out of memory, but because you are out of contiguous memory in your VM address space. In a recent benchmark I performed on Windows 7, the largest contiguous chunk of memory available was a meager 32 megabytes (ow!), which means that despite the free memory being around 1 gigabyte, the biggest chunk I could request was only 32 megabytes. Anything bigger would make malloc() fail.
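You can reproduce this kind of measurement yourself. Here is a rough sketch (not the benchmark from the post) that binary-searches for the largest single malloc() the C runtime will grant, using ctypes; each trial block is freed immediately:

```python
import ctypes

libc = ctypes.CDLL(None)  # on Windows, ctypes.cdll.msvcrt would be used instead
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

def largest_contiguous_block(limit=2**31):
    """Binary search for the biggest size malloc() accepts, in bytes."""
    lo, hi = 0, limit
    while lo < hi - 1:
        mid = (lo + hi) // 2
        p = libc.malloc(mid)
        if p:
            libc.free(p)
            lo = mid          # allocation succeeded, try bigger
        else:
            hi = mid          # allocation failed, try smaller
    return lo

print(largest_contiguous_block() // 2**20, "MB")
```

On a fragmented 32 bit process this converges to the largest hole; on a 64 bit machine it simply converges to the search limit.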

Additional conditions that can make the problem worse are dynamic library binding and excessive threading. The first invades your VM address space, and the second needs a stack for each thread, putting additional unmovable barriers throughout the VM and reducing the real estate available for contiguous blocks. See for example what happens with 10 threads on Linux

(gdb) thread apply all print $esp

Thread 10 (Thread 10606): $5 = (void *) 0x50d7184
Thread 9 (Thread 10607): $6 = (void *) 0x757ce90
Thread 8 (Thread 10608): $7 = (void *) 0x7b69e90
Thread 7 (Thread 10609): $8 = (void *) 0x7d6ae90
Thread 6 (Thread 10618): $9 = (void *) 0x8a4ae90
Thread 5 (Thread 10619): $10 = (void *) 0xb22fee90
Thread 4 (Thread 10620): $11 = (void *) 0xb20fde90
Thread 3 (Thread 10621): $12 = (void *) 0xb1efce90
Thread 2 (Thread 10806): $13 = (void *) 0xb2ea31c4
Thread 1 (Thread 0xb7f6b6c0 (LWP 10593)): $14 = (void *) 0xbffd1f3c

Fragmentation is eventually made irrelevant by a 64 bit architecture, where the VM address space is huge (for now ;) ). Yet, if you have a 32 bit machine and a long running python process that is juggling large and small allocations, you may eventually run out of contiguous memory and see malloc() fail.

How to solve it? I found this interesting article that details the same issue and provides some mitigation techniques for Windows, because Windows is kind of special: on Windows, the 4 gigabytes of address space are divided into two parts of 2GB each, the first for the process, the second reserved for the kernel. If you must stay on 32 bits, your best bet is to gain an additional gigabyte of VM with the following recipe (only valid for Windows 7)

  1. run bcdedit /set IncreaseUserVa 3072 as administrator, then reboot the machine.
  2. mark your executable with EditBin.exe you_program.exe  /LARGEADDRESSAWARE

With this command, the VM subdivision is set to 3GB+1GB, granting one additional gigabyte to your process. This improves the situation, and sometimes it's enough. If it is not, and you still need to work on 32 bit machines, then you are in big trouble: you have to change your allocation strategy and be smarter about handling the fragmentation within your code.

Posted in Python, Windows. No Comments »

Nitpicking on python properties

I like python properties, I really do. Properties allow you to convert explicit setters and getters into lean code while keeping control of your operation. Instead of this

state.setMode(EditorMode.INSERT)
current_mode = state.mode()

with properties you can write the much more pleasant

state.mode = EditorMode.INSERT
current_mode = state.mode

In both cases, if it weren't for the magic of properties, you would be accessing a member directly, but if you create your class like this

class State(object):
    def __init__(self):
        self._mode = EditorMode.COMMAND

    @property
    def mode(self):
        return self._mode

    @mode.setter
    def mode(self, value):
        self._mode = value

the two decorated methods are called automatically. This gives you a chance to validate the passed value, for example, or to use smart tricks like caching when getting the value is slow: basically the same tricks you would obtain with traditional accessor methods, but with a better syntax and without violating encapsulation.
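A hypothetical sketch of what those decorated accessors enable (EditorMode is simplified to plain strings here): the setter validates the value before accepting it.

```python
class State(object):
    VALID_MODES = ("COMMAND", "INSERT")

    def __init__(self):
        self._mode = "COMMAND"

    @property
    def mode(self):
        return self._mode

    @mode.setter
    def mode(self, value):
        # validation happens transparently on every assignment
        if value not in self.VALID_MODES:
            raise ValueError("unknown mode: %r" % (value,))
        self._mode = value

state = State()
state.mode = "INSERT"       # runs the setter, including validation
assert state.mode == "INSERT"
```

Client code keeps the plain-attribute syntax while the class retains full control.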

That said, I noticed a few minor things with properties that are good to point out.

1. Refactoring

Imagine that you find "mode" too poor a name, and you decide to use editor_mode instead. If you had accessor methods, you would refactor setMode() to setEditorMode(). If any part of your code still called setMode(), you would get an exception.

With a property, the same refactoring implies a change from state.mode to state.editor_mode. However, if other parts of your code still use state.mode = something, you will not get an exception. In fact, it's perfectly valid Python: the assignment silently creates a new, unrelated attribute. This can produce hard-to-find bugs.
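A small sketch of the hazard: the property has been renamed to editor_mode, but an old call site still assigns to .mode, and nothing complains.

```python
class State(object):
    def __init__(self):
        self._mode = "COMMAND"

    @property
    def editor_mode(self):
        return self._mode

    @editor_mode.setter
    def editor_mode(self, value):
        self._mode = value

state = State()
state.mode = "INSERT"                   # no error: creates a brand-new attribute
assert state.editor_mode == "COMMAND"   # the real state never changed
```

The stale assignment succeeds silently, which is exactly why this refactoring bug is hard to find.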

2. No callbacks

While you can store or pass around state.setEditorMode as a callback, you can't achieve the same effect with a property, at least not trivially. And no, you can't use a lambda, because assignment is a statement, which is forbidden in a lambda.
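One workaround sketch is to wrap the assignment in a function: setattr is an ordinary function call, so it is allowed where a bare assignment is not.

```python
import functools

class State(object):
    def __init__(self):
        self.editor_mode = "COMMAND"   # plain attribute here; could be a property

state = State()
# a storable, passable "setter" for the property-style attribute
set_mode = functools.partial(setattr, state, "editor_mode")

set_mode("INSERT")                     # usable wherever a callback is expected
assert state.editor_mode == "INSERT"
```

It works, but it is noticeably less direct than just handing over a bound setter method.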

3. Mocking

You can certainly mock a property, but it requires a bit more care. Nothing impossible, but when you learn the mock module you have to go that extra bit further if you want to cover properties.
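The extra bit of care, sketched with the modern unittest.mock incarnation of the mock module: a property must be patched with PropertyMock on the class, not on the instance.

```python
from unittest import mock

class State(object):
    @property
    def mode(self):
        return "COMMAND"

# patch on the CLASS with new_callable=PropertyMock
with mock.patch.object(State, "mode", new_callable=mock.PropertyMock) as mocked:
    mocked.return_value = "INSERT"
    assert State().mode == "INSERT"

assert State().mode == "COMMAND"   # original behavior restored outside the patch
```

Forgetting either detail (class-level patching, or PropertyMock itself) is the usual stumbling block.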

4. Soft-returning set operation details

Sometimes you might want your setter to return state information about the set operation, one trivial example being True or False depending on whether the operation was successful. You can certainly throw an exception for this specific case, but your mileage may vary depending on the specifics of your problem and what "looks better" for your design. A property gives you no way to return a value from a set.
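If the set operation must report status, a plain setter method keeps the return channel open; a property setter's return value is silently discarded. A minimal sketch:

```python
class State(object):
    def __init__(self):
        self._mode = "COMMAND"

    def set_mode(self, value):
        """Return True on success, False on rejection (soft failure)."""
        if value not in ("COMMAND", "INSERT"):
            return False
        self._mode = value
        return True

state = State()
assert state.set_mode("INSERT") is True
assert state.set_mode("BOGUS") is False
```

Whether a boolean return or an exception fits better is, as the post says, a design call.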

5. Only one value at a time

If your setters are connected to some notification system, you might want to set multiple values at once and trigger a single notification. Once again, it's a minor problem: you can add a special property accepting a tuple. For example, if you have values v1 and v2 on class Foo, you could have something like

class Foo(object):
    def __init__(self):
        self._v1 = None
        self._v2 = None

    @property
    def v1(self):
        return self._v1

    @v1.setter
    def v1(self, value):
        self._v1 = value
        # notify listeners

    @property
    def v2(self):
        return self._v2

    @v2.setter
    def v2(self, value):
        self._v2 = value
        # notify listeners

    @property
    def values(self):
        return (self._v1, self._v2)

    @values.setter
    def values(self, value_tuple):
        self._v1, self._v2 = value_tuple
        # notify listeners

f = Foo()
f.values = (1, 2)

6. Magic!

There's some kind of magic behind properties that you can't perceive when you read client code. For example, code like this

myobj.my_foo = 5

generally makes you think a simple assignment is taking place, but this is not the case if my_foo is a property. Maybe a naming convention could disambiguate? I am not a fan of the strict PEP-8 requirements on the naming of methods, so one could potentially decide on

myobj.myProperty = 5
myobj.my_member_var = 3

I am thinking out loud here, and I don't have a strong opinion on this issue. That said, properties are definitely cool and will make your interfaces much more pleasant, so I definitely recommend their use, if no other constraints prevent you from doing so.

Posted in Python. No Comments »

Building my own NAS/Media center – Part 9 – BIOS update and final decorations

The final assembly is now performing its task amazingly well, except for one annoying issue: the fans are too noisy. I decided to explore the issue to conclude the setup. In particular, I observed that the fans always ran at full speed after wake-up. That sounded like a BIOS problem, so I checked for BIOS upgrades on the ZOTAC website. Turns out I was right, and there is indeed a fix for the issue I encountered. I had to flash the BIOS, an operation that generally makes me nervous, because if it goes wrong, the risk of bricking the board is real.

I downloaded the update and followed the instructions I found on the XBMC website. Briefly, it's a matter of installing freedos on a USB stick with unetbootin, which is available in the Ubuntu repository. Then I copied over the contents of the downloaded BIOS upgrade package and followed the readmes and pdf guides I found inside. The flashing operation didn't go smoothly (I got some strange behaviors which unfortunately I didn't write down), but in the end the BIOS was flashed and the fan problem after wake-up went away.

The second operation I performed was to carve a good fan grid just above the processor fan. The case is aluminum so it’s rather easy to drill, but unfortunately I only had a Dremel with no drill press, and the handmade drilling ended up rather poorly executed. One mistake I made was to drill from the outside, so I ended up with a lot of scratches on the external paint. I should have done it from the inside. After drilling, the external case was full of aluminum dust, and I took extra care of removing every drill imperfection and dust with some light sandpaper and canned air. You definitely don’t want conductive debris on your electronics.

The resulting, poorly executed grid left me a bit disappointed, so I wanted to add some creative flair to the case. I decided to carve the Three Wolf Moon, a picture I honestly like, with the fan grid serving as the moon. To do so, I printed a thresholded picture of the wolves on a piece of paper and attached it to the case.


Then, I started carving the outline with a sewing needle inserted into a mechanical pencil.


I would have preferred to transfer the drawing with carbon paper, but unfortunately I wasn't able to find any. After transferring the outline, I shaded the result, adding proper hair-like scratches. Here is a sample of the procedure

I then scratched the moon and added a few stars with the Dremel. I can’t say I am an artist, but I had a good time doing it, and the result is better than I expected.


The full assembly


This post closes the series, although I will publish one more post immediately after this one as an aggregating index. I am currently working on two other long-term projects, this time software related.

Posted in Hardware. Tags: , , . No Comments »

Building my own NAS/Media center – Index

Posted in Hardware, Ubuntu. Tags: , , . No Comments »

Building my own NAS/Media center – Part 8 – Other Software

The next step (after managing to get out of minecraft) is to install some other fundamental software I may need. Here is a quick list of the little things that make the Linux box jump out of the home appliance/NAS realm and into serious business.


As a developer, I find git extremely useful. Additionally, I keep some important configuration files on github, so it makes sense to install git. From the ubuntu software center it's just a couple of clicks away.


To install vim, I go again through the software center. I also download my vim configuration files from github.


XBMC was easy to install via the ubuntu software center. The software is extremely good, and I found it packed with plugins to get easily on youtube. The presentation of and access to the movies is a bit disappointing (just a directory listing; I would have preferred a set of scrolling screenshots for each movie, but maybe I am using it wrong). Overall, I am extremely satisfied with XBMC. Premium quality software indeed.

I installed a lot of plugins, allowing me to watch euronews, browse my pictures, and listen to my favorite songs and radio. XBMC is definitely recommended, its only problems are the tendency to get stuck once in a while (nothing a ssh+kill can’t fix), the abundance of broken plugins that I had to fix by hand (but they are in python, so it’s easy) and a general tendency to be reliant on mouse action and the right button click.


Chrome was obtained from the google website as a .deb. Once downloaded, clicking the package brings the package manager up, and provides an easy installation step. I am a bit disappointed with the overall quality of the fonts for the browser.


A great planetarium program. A toy for most people, but I find it wonderful to browse the current night sky and watch the artificial satellites moving above my head from my comfy chair.


Useful to monitor the temperature of my processors. I posted a lot of diagrams in this regard, and I keep monitoring the temperature during intensive tasks, as I want to be sure the hardware does not suffer in its current enclosure.

Posted in Ubuntu. Tags: , , , . Comments Off »

Building my own NAS/Media center – Part 7 – Minecraft and PS3 SixAxis

How can I possibly live without minecraft? Downloading the Sun JRE is easy. From the Oracle website, I got the JRE-7u21, downloaded a .tar.gz file, and unpacked it in /opt/oracle (just to keep it consistent with /opt/google/chrome)

To download minecraft, I logged into my premium account (heh) and downloaded the software, a straightforward operation, but when I finally started the game, all I got was a black screen with the following error message

Exception in thread "Thread-3" java.lang.UnsatisfiedLinkError:
/home/sbo/.minecraft/bin/natives/ wrong ELF class: ELFCLASS32 
(Possible cause: architecture word width mismatch)

Apparently, this is due to an incorrect LD_LIBRARY_PATH setup and outdated libraries. I fixed it by adding the following line to ~/.bashrc

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"/opt/oracle/java-7-oracle/lib/amd64/"

and updating the LWJGL library as indicated here.

To add the minecraft icon, it was easy: I simply followed this post on askubuntu.

Setting up the Playstation 3 controller for minecraft

Playing minecraft with mouse and keyboard is fine if you are sitting close to the screen with a table in front of you. If you are playing from a sofa, you need a controller, and I already have one: the PS3 controller. The task now is to set up the Linux box to make the two communicate.

The PS3 controller is a real cool toy: it sports a huge number of proportional buttons, gyroscopes, and joysticks. It’s a completely programmable, very standard object that talks USB and Bluetooth. Props to Sony for producing such a technological jewel, and for once not going proprietary. That said, I did some research and got somewhere, although it’s far from easy. Technically, the kernel recognizes the controller as soon as plugged in. dmesg says

[182796.359735] usb 2-1.1: new full-speed USB device number 16 using ehci_hcd
[182796.469769] usb 2-1.1: New USB device found, idVendor=054c, idProduct=0268
[182796.469775] usb 2-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[182796.469778] usb 2-1.1: Product: PLAYSTATION(R)3 Controller
[182796.469781] usb 2-1.1: Manufacturer: Sony
[182796.688727] sony 0003:054C:0268.0011: Fixing up Sony Sixaxis report descriptor
[182796.714451] input: Sony PLAYSTATION(R)3 Controller as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.1/2-1.1:1.0/input/input31
[182796.714909] sony 0003:054C:0268.0011: input,hiddev0,hidraw0: USB HID v1.11 Joystick [Sony PLAYSTATION(R)3 Controller] on usb-0000:00:1d.0-1.1/input0

but this is not enough to use the sixaxis out of the box. The following instructions focus on USB communication, as my computer has no bluetooth at the moment. I installed three pieces of software, two of them (QtSixA and jstest-gtk) from the Ubuntu software center. I recommend installing those two first, because they bring in a lot of useful dependencies.

First I tried QtSixA. I got it to see the controller, but no buttons were available. I could see the controller though, so it was a step in the right direction.

Then, I tried jstest-gtk. This recognized the controller as a joystick on /dev/input/js0. I performed the calibration moving the controller here and there, pressing all the buttons and moving all the joysticks. This step is particularly important to associate the raw values for each “axis” (as the control channels are called) coming from the driver to the values to send to the applications (in an interval apparently between -32767 and 32768). After the step was completed, I could see the actual values fluctuate in jstest-gtk as I moved the controller around.
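Under the hood, each event read from /dev/input/js0 is an 8-byte struct (struct js_event in linux/joystick.h). A minimal decoder sketch, with a hand-packed sample event standing in for a real device read:

```python
import struct

# struct js_event: __u32 time (ms), __s16 value, __u8 type, __u8 number
JS_EVENT_FORMAT = "IhBB"
JS_EVENT_AXIS = 0x02

def parse_js_event(data):
    """Decode one raw 8-byte joystick event into a dict."""
    time_ms, value, etype, number = struct.unpack(JS_EVENT_FORMAT, data)
    return {"time": time_ms, "value": value, "type": etype, "number": number}

# A full-left deflection of axis 0 would arrive roughly like this:
sample = struct.pack(JS_EVENT_FORMAT, 1234, -32767, JS_EVENT_AXIS, 0)
event = parse_js_event(sample)
assert event["value"] == -32767 and event["number"] == 0
```

In a real reader you would loop over 8-byte chunks from the open device file; the "axis" values jstest-gtk shows are exactly the signed 16-bit value field.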

Screenshot from 2013-05-24 20:22:26

Now that I have a correct driver setup, I need to configure the mapping between joypad events and actual game-useful events (such as mouse movements and key presses). For this, I downloaded qjoypad. This application is not available in the ubuntu software center, so I had to compile and install it at the command line. Once ready, I had to run it with the --notray option, because with the icon in the tray I didn't get any popup to configure the pad. This produces a nasty floating window with a joystick in it, but at least it works. QJoypad must be kept running while I play, being the one responsible for mapping pad events to keyboard/mouse events.

qjoypad is a bit complex to configure, but it does its job and it’s highly configurable. I created a new layout called “minecraft” and then, for each axis, configured the events. For axis 1 (the left-right movement of the left analog stick) I wanted to have the player movement, so I set in the popup dialog as follows

Screenshot from 2013-05-24 20:25:15

Note the blue and red markers in the dialog. When you move the analog stick, a grey area increases. Hitting the blue marker makes it enter in a “fire” area, where the event is produced, in this case, the A letter is sent. I put the blue marker very high so that I don’t get accidental movements if I just touch the analog stick.

The red marker becomes important for proportional movements, such as looking around (assigned to the right analog stick, Axis 3 and 4). In this case, I want to send mouse events. The blue marker is considered the zero; the red marker is considered the highest value for mouse action. I placed my markers as follows

Screenshot from 2013-05-24 20:25:58

so that I don’t get accidental movement by just brushing the stick (blue marker), and I get full speed a bit before hitting the maximum excursion with the stick (red marker). I had to reduce the mouse sensitivity, and mapped the excursion linearly (that is, the farther I move the stick off-center, the faster the visual panning goes, with speed being in a smooth, progressive association to the stick displacement).

Full configuration took me a while, and I had to test it for practicality in minecraft, because some actions must be accessible at the same time (for example, jumping and digging, jumping and moving, and so on). I am still refining the associations. Obviously, the select is a good button for inventory (it’s kind of a standard in PS3 games), as well as the two sticks for movement and look-around. The arrows buttons left and right could be used to cycle the in-hand item, although this prevents inventory change while moving. A good alternative could be the shoulder trigger buttons, but they are not easy to reach and have a bad tactile feedback, so I am trying not to use them. Shoulder buttons can be used for digging or jumping.

After a bit of trials, I came up with this redundant association. It’s still experimental, but I really enjoy being able to set the digging action sticky by pressing square.

Joystick 1 {
 Axis 1: dZone 16768, +key 40, -key 38
 Axis 2: dZone 13492, +key 39, -key 25
 Axis 3: gradient, dZone 6167, maxSpeed 10, tCurve 0, mouse+h
 Axis 4: gradient, dZone 5589, maxSpeed 10, tCurve 0, mouse+v
 Button 1: key 26
 Button 3: key 65
 Button 4: key 9
 Button 6: mouse 5
 Button 8: mouse 4
 Button 9: mouse 4
 Button 10: mouse 5
 Button 11: mouse 3
 Button 12: mouse 1
 Button 14: key 9
 Button 15: key 65
 Button 16: sticky, mouse 1
}

Trying Bluetooth

With the addition of a Bluetooth dongle, I started playing with the possibility of using the controller as a wireless device. I followed this page, but despite my best efforts, I failed to pair the controller and the computer. I gave up on this because it's not that important, and I can play with the cable just fine.

Posted in Hardware, Ubuntu. Tags: , . Comments Off »

Building my own NAS/Media center – Part 6 – NAS Software, WakeOnLAN

At the moment, I have a stable system and I can start installing the stuff I need. So let’s start.


The main role of the NAS is, obviously, to be a NAS, and this means being accessible from my laptop for storage. With this comes the choice of which service/protocol to use: NFS (mostly used in the Unix world), SMB (windows), AFP (Apple), or less common alternatives such as SSHFS and iSCSI. The choice is not necessarily exclusive: the same files or directories can be exported through all or some of these channels simultaneously, but keeping it simple, which one should I use?

I am a Mac user, and I am looking for a solution that works well for both Linux (as a server) and Mac (as a client). I also have a Windows laptop, which adds another requirement. The Mac technically supports both NFS and SMB, although its native protocol is AFP. When it comes to performance, NFS is the winner. SMB is slower, and can't take advantage of multithreading, but in all fairness I don't really care too much. Another option is SSHFS, which is basically sftp turned into a filesystem: all the traffic is encrypted and it's relatively straightforward to set up. Finally, there's iSCSI, something I may try out later, but unfortunately iSCSI is not supported by the Mac unless you use this iSCSI initiator software. A bit pricey, but I might be tempted to try it just for fun.

Trying out SMB

I am the only user of the system, and I started with what I thought was the fastest to set up: SMB. I followed this excellent guide, and configured a directory "Media" where I will keep all my media files. It took only a few minutes. On the OSX side, however, it turned out to be a bit more tricky. I thought the computer would appear in the "Network" entries automatically, but I was wrong. Instead, I had to connect manually using "Go -> Connect to Server" and specifying smb://ip-address/ in the dialog. Easy enough, but I honestly don't understand why the computer does not appear in my Network. My other laptop (with ubuntu 12.04 as well) is found by my Mac immediately, and I didn't do anything special beyond a basic install.

After playing for a while with the shares, I realized the interaction was weird. Occasionally, the machine would actually appear in the "Shares" section of the Finder, then disappear, then appear again, with no apparent relation to my actions on the server or on the Mac. Occasionally, I was also unable to connect at all. I browsed around, and asked around, but apparently the only way to make the system discoverable by the Mac's auto discovery was to change the avahi configuration, adding the following smb.service

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_smb._tcp</type>
    <port>445</port>
  </service>
</service-group>

Still, this didn't fix the occasional hiccups of the service, which kept returning an "unable to connect to the share" error message now and then, along with generally strange behavior in what gets exported to a guest and how. Although I am sure there is a solution to my problems, I gave up on SMB and moved on to SSHFS.

Trying SSHFS

On paper, SSHFS is trivial: an sftp session is established to the remote host, and the content is made available as a filesystem via FUSE. This makes access to the remote NAS extremely transparent, secure (the traffic is encrypted), and easy to set up. The problem, however, is that if the connection is suspended (for example, when I put my laptop to sleep) it won't resume when I wake the laptop up again. A minor problem, but still…

Going with AFP

After so much experimenting, I decided to try out AFP support and worry about other computers later on. There is an excellent, albeit old, tutorial on using AFP on a Linux box, another much shorter one, and a blog post. I went with the first one, to learn something new.

To enable AFP and serve files, I followed the tutorial step by step: I installed netatalk (version 2.2.1 gets installed, so it should support encryption) and configured /etc/default/netatalk with


Then added the home and share volume to /etc/netatalk/AppleVolumes.default

:DEFAULT: options:upriv,usedots
~/ "$u home" cnidscheme:cdb
/home/Share/Media allow:sbo cnidscheme:cdb options:usedots,upriv

And finally putting the following line in /etc/netatalk/afpd.conf (and it should be the only line in the file)

- -tcp -noddp -uamlist uams_dhx2.so -nosavepassword -advertise_ssh

It is important that you use uams_dhx2.so, NOT uams_dhx.so, otherwise you will not be able to log in. Also, I created an empty /etc/netatalk/afppasswd file, just to be sure. Apparently, Ubuntu does not ship the afppasswd command to administer this file. The authentication info should come from /etc/passwd anyway.

I restarted the daemon (service netatalk restart); then it was time to configure avahi to advertise the service. As in the case of SMB, it's a matter of creating a proper XML file. I created /etc/avahi/services/afp.service

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
</service-group>

And restarted avahi with service avahi-daemon restart. The service immediately appeared in my Finder, but I could not access my Media directory, getting the error message "Something wrong with the volume's CNID DB". These problems were solved by following the last part of this blog post: I removed all the .AppleDB and .AppleDouble directories in my exported dirs, then changed the cnidscheme to dbd.

AFP now works like a charm, except for a couple of things. First, it comes with a range of caveats, the most striking being its handling of symbolic links. I don't know how hard this can hit me, but I don't think it's a problem at the moment. Second, there might be problems using the Share for anything mac-related, such as the iPhoto library and similar. These tools rely on metainformation supported by the native HFS+ file system but not by the underlying EXT4 filesystem my Ubuntu box uses. I don't know whether HFS+ support on linux would solve this, but for now I'll just transfer my iPhoto library there as-is and see what happens. Worst case scenario, I can create an HFS+ volume as suggested in the post and mount it remotely from the Mac.


A quick note on UPnP/DLNA servers. My TV technically supports this protocol, so I tried to install a DLNA server, mediatomb. There's a good guide on this blog. After installing it and trying it out, I decided to remove it for various reasons. The first is that mediatomb is not really intuitive, nor particularly polished in its setup: the configuration file is overly complex, and setting up the file database is counterintuitive and requires web access. Additionally, since the daemon scans the directories without any understanding of the actual content, you end up with a lot of spurious entries, such as the thumbnails of iPhoto. Also, UPnP is very slow: accessing my photo directory took 5 minutes and, although I hope it's cached, I don't find the whole thing deserving of attention when I can use XBMC.


It would be very convenient to be able to turn on the system from my laptops, in case I am in bed, feeling lazy, and need to access some files. For this purpose there's Wake-on-LAN, a mechanism that allows you to turn on a system remotely by sending a "magic packet" to its ethernet adapter. For this to work, both the BIOS and the ethernet adapter must support it. When the system is off, the network adapter keeps listening for the magic packet; when it receives one, it asks the BIOS to power up the system.
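The magic packet format itself is simple enough to sketch in a few lines of Python (the MAC address below is a placeholder, not my NAS's):

```python
import socket

def magic_packet(mac):
    """6 bytes of 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    # WoL packets are conventionally sent as a UDP broadcast on port 7 or 9
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(magic_packet(mac), (broadcast, port))
    sock.close()

pkt = magic_packet("00:11:22:33:44:55")
assert len(pkt) == 102   # 6 header bytes + 16 * 6 MAC bytes
```

This is essentially what tools like WoM do behind their one-click interface.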

Technically, WiFi Wake-on-LAN does exist, but it's not supported by my motherboard. This is unfortunate, as I had done everything over WiFi until now. I bit the (minor) bullet and connected the system via ethernet cable.

I found a couple of good tutorials on how to setup WoL (here, and here), and I will detail the main points. First, I entered the BIOS, and in Advanced -> ACPI settings I set Wake On Lan to Enabled. While I was there, I also set “Restore from AC Power Loss” to “Power Off”. I don’t want my computer to start up arbitrarily if there’s a power outage, and this solves the minor inconvenience of having it power up when I turn on the PSU.

For the network adapter side, I needed to boot Linux and do some tinkering. This post describes how. I installed ethtool and inspected the current setup with ethtool eth0

Supports Wake-on: pumbg
Wake-on: g

So it appears that Wake-on-Lan is already enabled, and it will be triggered by the magic ethernet packet (option “g”).

To test it from my Mac, I downloaded WoM. This tiny but amazing program allows you to send the WoL packet with just a click. I took note of the IP address and ethernet address of my NAS, configured WoM appropriately, and turned off the NAS. With a click on “Wake”, I was able to start it successfully, and I am now a happy nerd.

Posted in Hardware, Linux, MacOSX. Tags: , , , . Comments Off »

Building my own NAS/Media center – Part 5 – Make Ubuntu 12.04 play nice on NVIDIA

Following the adventures in OS setup from last time, here is a step-by-step guide to make Ubuntu 12.04 play nice on a nvidia card setup:

  1. Plug your screen on the sockets associated to the nvidia card.
  2. Get into the BIOS -> chipset -> Display configuration -> Initiate graphic adapter. Set it to PEG/PCIEx4
  3. Boot the ubuntu 12.04 installation disk or usb key
  4. Install the system normally.
  5. reboot into the new system
  6. Login. Open a terminal. Ignore any popup, and close any software manager emerging. Do
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get install linux-image-generic-lts-quantal linux-headers-generic-lts-quantal
  7. Tea break. When the three commands are completed, reboot.
  8. Back into the system. Open a terminal
  9. Go into System settings -> Additional drivers. Select and activate the “version current” of the NVIDIA driver.
  10. Do not reboot. Instead, open a terminal, do
    cat /etc/X11/xorg.conf

    If it contains something like

    Section "Device"
           Identifier "Default Device"
           Option "NoLogo"    "True"

    this is wrong, and you have to issue the following command

    sudo nvidia-xconfig

    Disregard any VALIDATION ERROR messages. Check if the new xorg.conf file contains (among a lot of other things) something like this

    Section "Device"
           Identifier "Device0"
           Driver     "nvidia"
           VendorName "NVIDIA Corporation"
  11. Reboot.
  12. Now you are using the Nvidia drivers. Test them with the following from a terminal
    sudo apt-get install mesa-utils
    glxgears

    You should get a nice spinning gearset. Close the glxgears window.

  13. Edit the /etc/rc.local as root with the following command
    sudo vi /etc/rc.local

    and add the following line before the "exit 0"

    /usr/bin/nvidia-smi -pm 1
  14. Reboot. You are done.

The magic line nvidia-smi -pm 1 deserves some explanation. The problem with the corrupted screen and the inability to get back to lightdm after logout is due to the fact that X releases the nvidia driver. With -pm 1 you force the module to be persistent and not be unloaded. The reason X did not start is probably a race condition between the instantiation of the driver and the access by the new X session started by lightdm after logout.
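For reference, the resulting /etc/rc.local would look roughly like this (a sketch; your file may contain other lines, the only requirement is that the nvidia-smi call comes before the final exit 0):

```shell
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel.

# Enable persistence mode so the nvidia kernel module stays loaded
# even when X releases the driver (e.g. at logout).
/usr/bin/nvidia-smi -pm 1

exit 0
```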

Posted in Hardware.

Building my own NAS/Media center – Part 4 – OS setup

As soon as I plugged the NAS into the electric socket, something was fishy. Apparently it starts up without any signal from the power button. The only way to turn it off is to use the PSU switch. This is worth more investigation, but for now I will move on to setting up the BIOS and installing Ubuntu.

Downloading Ubuntu

I downloaded the Ubuntu 12.04 LTS ISO image from the website. Nothing peculiar here, except for the fact that I need to install from USB key, as I don’t have a CD reader. I followed the instructions from the Ubuntu website and obtained a USB memory stick which I plugged into the USB port of the NAS.

First light and BIOS setup

It was a nice surprise to see that, once I started the NAS and turned on the monitor, Ubuntu was already booting. Apparently the BIOS is already configured to boot from USB stick, which is a plus, but I think it deserves some tinkering just because of the nature of the project. Also, I want to see if the CPU temperature is appropriate. I might have had some troubles due to the difficult seating of the pins.

Apparently, the CPU temperature is a bit weird. It keeps rising from 52 C to a stable 55 C. The CPU fan has four pins, so it’s adaptive and can be controlled via the BIOS. I tinkered with the BIOS values to make the fan work harder and see how the temperature changed with time, but I didn’t get any appreciable difference in the fan speed. The CPU fan is working at 1000 RPM, and the system fan at 4000. I discovered later on that while in the BIOS, the CPU never enters low-power mode, so it’s normal for the temperature to increase.


Nevertheless, I might have to monitor the situation. According to the Intel website, the maximum temperature (Tcase) for the Core i3 is 69.1 C. The conductive paste has a break-in period, which should lower the temperature by 2-5 C. This might be the reason why I have readings that are higher than what I feel comfortable with. For now, the temperature is not above the limit and I can try to install a basic Ubuntu.

A quick note on noise. The PSU and the processor fans are extremely silent. The GPU fan is instead quite loud in the high-pitch range. Putting the lid on the case doesn’t really help. I’ll address this problem later.

Installing Ubuntu

The system boots naturally from the USB key. I am impressed at the ease of installation of Ubuntu. Literally 3 clicks and the system is installed. I didn’t bother doing excessive configuration of the partitions, I just let the installer choose for me. The 160 GB disk is probably oversized for the task of keeping the system, and I might consider replacing it with a small SSD or even a plain USB key. I don’t expect a lot of stuff to go on the system partition. The bulk of the space will be provided by the additional hard drives.

After the installation completed, I rechecked the BIOS CPU temperature monitor, which was now at 48 degrees and again increasing. As I said, staying in the BIOS makes the CPU work hard, thus the increase in temperature. I tried monitoring the CPU from the system, and it stays rather cool, in the 30 C range. This implies that my setup is apparently fine.
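To watch the CPU temperature from the running system without rebooting into the BIOS, the kernel's thermal sysfs interface is handy; it reports millidegrees Celsius. A small sketch (the thermal_zone0 path is an assumption and varies by machine; I use a canned sample reading here instead of the live file):

```shell
# On a live system you would read the value like this:
#   cat /sys/class/thermal/thermal_zone0/temp
# The kernel reports millidegrees C, so divide by 1000.
sample=30500    # canned sample reading, in millidegrees
awk -v t="$sample" 'BEGIN { printf "CPU temp: %.1f C\n", t/1000 }'
```

The lm-sensors package (the sensors command) gives friendlier labeled output, including per-core readings.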

Troubles with Nvidia and Intel HD cards

The nvidia drivers introduced some trouble. I installed the nvidia-current from the software center, but even after reboot I didn’t have a GLX display. I tried running nvidia-xconfig, but it trashed the X window config file. After rebooting, I was left in text mode. I struggled with the problem for a while, until I found a (partial) solution.

The first thing to note is that there are two graphic cards available: the Nvidia (called PEG in the BIOS) and the integrated Intel (called IGD). On Linux, using the command lspci -vv reported two entries with VGA capabilities.  The idea at the hardware level is that the Intel is a low-consumption graphic card. When it needs accelerated rendering, it delegates it to the nvidia card and multiplexes the data. This is called hybrid graphics, and it’s not supported fully by Linux (the usual drill). Some efforts are being made, but they are not for the faint of heart or those who just want something that works (you know, like on the Mac).

Long story short, as Linux can’t use the hybrid mechanism, you have to choose, and the presence of two cards causes trouble, because by default, Linux might use the integrated Intel. Some options are possible, namely to use vga_switcheroo, but it works only if you use the nouveau driver (an opensource implementation of the nvidia drivers). I wanted to use the closed source nvidia ones, making this solution not feasible. As far as I’ve been told, the opensource drivers are still too radioactive to be worthy of consideration.

The second point is that there are four video sockets on the board: two DVI, one DisplayPort, and one HDMI. One DVI (the top one) and the DisplayPort are connected to the Nvidia card. The remaining DVI and the HDMI are connected to the Intel. When I started the system the first time, I plugged in the DVI, but then I realized there was no sound. I wrongly thought DVI didn’t carry the sound signal, so I connected the HDMI to the monitor (my TV). Unfortunately, this meant that X windows always ran on the Intel, making all efforts to use the Nvidia drivers fruitless (GLX not available when running an OpenGL program, nothing available under the nvidia-settings program, and so on).

To solve this, I “switched off” the Intel graphics chip. You can’t technically turn off the integrated Intel, but what I did is the best you can do to make it irrelevant to Linux. From the BIOS, as suggested here, I chose the option Initiate Graphic adapter and selected PEG/PCIEx instead of PEG/IGD. Additionally, I connected the second DVI port (the nvidia one) to the monitor via a DVI-HDMI adapter cable. Apparently, this works: lspci now reports only one VGA card, the nvidia, and the drivers now install and work correctly. glxgears runs accelerated at 60 frames per second. I found suggestions to blacklist the driver as an additional line of defense, but as far as lspci tells me, I am running full nvidia now. For additional reference, here is another Ubuntu page that might address additional problems some Google visitor may face.

Unfortunately, I don’t get any sound, and I assumed it was because DVI does not carry sound information. There’s plenty of useful information on the topic of audio in this forum, where it says that it’s technically possible to get audio out of the (Nvidia) DisplayPort with a DP->HDMI cable. If you want to know more about DVI, I suggest this excellent article from tomshardware. I thought about feeding sound through a proper DVI+Audio Out -> HDMI cable, when I realized I was probably using a different audio output. I changed it in the Ubuntu system configuration and I am now getting audio.

I have two problems left, one minor, one major: the first, minor one is that the noise of the GPU fan is extremely annoying; the second, major one is that I can’t log out. If I do, the system hangs. I’ll address these two problems one after another.

Changing Fan behavior from the BIOS

To make the fans more silent, I simply changed the appropriate values for temperature startup and running speed in the BIOS. I settled for a maximum 80% fan speed (the “fan duty” value). This reduces the noise considerably, while keeping the temperature within an acceptable maximum.

Problems with lightdm at logout

Solving this problem was much harder: I quickly realized I could not perform a sane logout once I started using the nvidia card. When I got out of the X session, instead of going back to lightdm, I got a strange garbled text-only display.


I followed the advice from user jazztickets, and tried various proposed solutions in the ubuntu forums, such as adding an entry for udev to lightdm configuration, checking other users in similar conditions, dropping lightdm for gdm in the hope that it was a lightdm issue, adding a sleep 5 just before the exec lightdm in /etc/init/lightdm.conf, and also putting DEVPATH=*card0 instead of card0 in the “start on” entry of the same file. None of these worked.

After plenty of reboots and no clear clue, I managed somehow to stop the X server, get the garbled text, and get access to the text consoles, something that was not possible before. Magic, or chance, I’ll never know. In any case, I noted that I could not restart X manually due to the following error

NVIDIA: could not open the device file /dev/nvidia (Input/output error)

That gave me a hint. I tried to add vmalloc=256M in grub. Didn’t work, and I then found it’s only for 32 bit machines anyway. I also tried to remove xorg.conf and recreate it. No luck.

I started reinstalling Ubuntu again and again, trying different options. It took me 3 days and a lot of googling, but I finally found the sequence of events that works, which will be the subject of the next post.

Formatting and encrypting the big hard drive

The big 3-terabyte hard drive is correctly seen at boot, and its entry is present in dmesg

[ 2.402539] scsi 1:0:0:0: Direct-Access ATA WDC WD30EZRX-00M 80.0 PQ: 0 ANSI: 5
[ 2.402606] sd 1:0:0:0: Attached scsi generic sg1 type 0
[ 2.402621] sd 1:0:0:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
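The two sizes in that last dmesg line are consistent with each other: 5860533168 logical blocks of 512 bytes each, expressed once in decimal terabytes and once in binary tebibytes. A quick check with awk (the kernel truncates rather than rounds, hence its 2.72 TiB):

```shell
awk 'BEGIN {
    bytes = 5860533168 * 512                  # logical blocks x block size
    printf "%.2f TB / %.2f TiB\n", bytes / 1e12, bytes / 2^40
}'
```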

In order to partition and format it, I had to use parted, since fdisk apparently does not support disks this big. I found this post extremely useful, and I will give here just the essence of the operation

(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) unit TB
(parted) mkpart primary 0.00TB 3.00TB
(parted) print
Model: ATA WDC WD30EZRX-00M (scsi)
Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  3.00TB  3.00TB               primary

Now that I have a partition, it is time to bring truecrypt into play. I want to encrypt my disk for security: in case of theft, my data would otherwise be easily accessible, and I don’t really like this. I will use truecrypt to encrypt the whole partition, then mount it manually when needed by typing in the password. Although it appears to be a bit annoying, it has to be done only once, when the computer boots up. I plan to keep the machine asleep when I’m not using it, and this should not compromise the truecrypt mount. If the computer gets stolen, it must necessarily be unplugged, and the data won’t be accessible afterwards.

Once again, another blogger provides me with an easy (though outdated) guide to achieve my goal. Truecrypt is easily installed from the website. The downloadable .tar.gz package extracts to an executable script that, when run, performs the installation graphically. After the installation completed, I had a “truecrypt” application in my application menu. Then I created the encrypted partition with

truecrypt -t -c /dev/sdb1

and answered the questions. I chose my preferred encryption and hash method, my password, and ext4 as the filesystem. The operation ran through the night.

Unfortunately, in the morning I got a 100% completion but also an I/O error. Since the disk was new, I started worrying about a potentially faulty drive. I checked the kernel logs with dmesg and got plenty of messages of this kind

[168480.771645] sd 1:0:0:0: [sdb] Unhandled error code
[168480.771647] sd 1:0:0:0: [sdb]  
[168480.771647] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[168480.771648] sd 1:0:0:0: [sdb] CDB: 
[168480.771649] Read(10): 28 00 c1 cf 57 d8 00 00 08 00
[168480.771653] end_request: I/O error, dev sdb, sector 3251591128

Bad news. I ran badblocks, but that turned out to be a very bad idea. The program ran for 9 hours with the CPU at max power, getting only to 50% of the task, and my other hard drive filled up with kernel error messages. I was able to recover the situation by killing the process on a dying system and freeing some space. I am not convinced it’s a hardware issue, and before returning the disk, I decided to do some investigation.

My first suspect is temperature. While the processor gets pretty hot (maximum 70 degrees, around 65 in operation) despite the big fan, the WD drive was rather cool, so I’d exclude temperature as a potential troublemaker.

The second suspect is the power supply. Is 350 W enough? I both asked on Superuser and checked the eXtreme power supply calculator. For my configuration, around 170 W should be enough. I have plenty of wiggle room when it comes to power, and other components would fail if power were an issue.

At this point, I decided to install gsmartcontrol, and run the SMART diagnostics. Right after the incident, the drive was “Unknown”. After the reboot, it is correctly detected. All tests are successful. What gives, then?

I decided to do nothing and ignore the error. I have the suspicion the problem is a kernel issue, and if it doesn’t bother me in ordinary use, I won’t really care. What I did was to put the drive on the SATA II channel (instead of the SATA III), change the cable, and simply do a quick truecrypt creation with

truecrypt -t -c --quick /dev/sdb1

This prevents encryption of the free space. I am not that paranoid. The result is that the filesystem is created and formatted instantly, and I can successfully mount it with

truecrypt /dev/sdb1 /media/test

I decided to trim down the reserved space for root to 1%, since this is not a boot disk

sudo tune2fs -m 1 /dev/mapper/truecrypt1
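The ext4 default reserves 5% of the blocks for root, which on a disk this size is a lot of space to give up; rough arithmetic on what dropping it to 1% gets back:

```shell
# 4% of a ~2.7 TB filesystem reclaimed by going from the 5% default to 1%
awk 'BEGIN { printf "%.0f GB freed\n", 2.7e12 * (0.05 - 0.01) / 1e9 }'
```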

And finally, I have an encrypted filesystem on a really, really big disk.

Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/truecrypt1  2.7T  201M  2.7T   1% /media/test

I also set the scheduler from “deadline” (the default) to “noop” first and “cfq” later. I had a lot of problems rsyncing a large external hard drive to the NAS: the whole computer froze, sshd dropped connections, and the processor temperature started rising. I simply did

echo "noop" >/sys/block/sdb/queue/scheduler

and the problems stopped, although I am not sure the scheduler was the real source of it.
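Note that writing to /sys like this does not survive a reboot. One way to make the scheduler choice persistent is a udev rule; this is a sketch that matches the disk by model so it does not depend on the device staying /dev/sdb (the file name and model string are assumptions, check yours with cat /sys/block/sdb/device/model):

```
# /etc/udev/rules.d/60-scheduler.rules (sketch)
ACTION=="add|change", KERNEL=="sd?", ATTRS{model}=="WDC WD30EZRX-00M", ATTR{queue/scheduler}="noop"
```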

I confirm that both the temperature and the disk are acceptable and working. I stressed both the disk and the processor for hours, filling the disk completely with randomly generated data. No problems at all, so I’m starting to have some personal doubts about the “deadline” scheduler, at least with my current setup.

Posted in Hardware, Ubuntu.

Building my own NAS/Media center – Part 3 – Assembling

Before I start, please keep in mind that all images are linked to a high resolution version, in case you want to see more details. For the impatient, here is a pic of the final assembly


Preparing the case

To start my adventure in assembling, I opened and prepared the case. There’s plenty of space, for now.


The case has three bottom intakes (one covered by the hard drive brace), and one potential intake on the back. I say potential, because this is actually the space for the PCIe cards, and the plate can be removed with some minor hacking. I am not planning to use any PCIe card, so at least in theory, I can put a fan in there.

The case came well equipped with cables. The connectors you see for the front panel are for the frontal USB ports and the IR receiver/front panel buttons. The case came with a remote, so I assume it’s possible to generate USB keyboard events and hook them to specific software actions. Unfortunately, my motherboard comes with only one 5-pin plug for USB2, so I had to choose between the frontal USB panel and the IR receiver. I chose the former.

One possible solution to this dilemma is that the motherboard also provides USB3 pins. These pins accommodate a 2-port USB3 connector, shaped to fit the PCIe reserved area in the back. I could, in principle, keep this connector inside the case, then use this USB to PIN adapter to plug the IR receiver in. It’s a side project I will attack if I have time. For now, I need to assemble the motherboard in place.

Putting the processor in the socket

The Zotac Z68 ITX motherboard is very cute but packed with action.

Zotac Z68-ITX Supreme Hardware

I removed the red warning label, and opened the CPU socket by moving the lever on the side. The number of pins found in a modern processor is mind-bending.


Once I unpacked the processor, fitting it required some care. First, I had to be sure it was correctly aligned (see red arrows). Then I had to clean off the inevitable fingerprint smudges that ended up on it while I fumbled around trying to find the correct alignment.


All that was left to do was to close the metal brace and put the lever back in the slot. It required some “Damn I’m going to break it” strength, but it fitted nicely.


Assembling the processor fan and cooling

The fan and the cooling system came preassembled. I had a lot of different mounting braces available, and I had to choose the proper one.


Unfortunately, I had to use the push pins, the black and white plastic things you can see in the picture. These things are evil, and I’ll get into that in a moment.

Before that, I had to apply the heat paste. I visited the Arctic Silver website, where they have clear instructions for the various processors. For my i3, the vertical strip method is the best choice. I added a small amount (a bit more than a grain of rice) to the cooling plate


and spread it thin and uniform with a plastic spatula. This fills any gaps and provides a better grip for what comes next.


Now it’s time for the processor. I spread a uniform line in the appropriate direction, 2-3 mm thick.


and I can now put the cooling system on. Here come the push pins. These things are extremely hard to put on, especially the last one. Supposedly, you push them and with a click they are set. In practice, I had to push so hard on the last two that I almost broke my thumb. Removing them is even worse, because you have to rotate the upper part, but the pressure makes it hard to rotate and you don’t have enough leverage. If you add to this the cramped space of a mini-ITX board, the evening ends in tears, swearing, and more than 30 minutes of failed attempts to seat the last pin. The bastard is pictured here


Assembling the RAM

Adding the two RAM modules is easy. Just open the white locks


align the modules in the sockets, putting the notch in the right place, and push down vertically with some force. The white locks will close automatically. Naturally, I started from the most difficult slot to reach, that is, the one near the processor. The CPU cooler prevents easy access to this slot, but it’s a minor inconvenience and everything slides into place with relative ease.

Adding the Hard Drives

For testing and initial boot, I did not use the large hard drive, but an old spare I had lying around. I originally replaced it due to some worries about its condition, so I might just discard it, but I used it for a while and it’s not giving me any problems. I just plugged in the SATA cable and the power cable. Initially, I let it hang loose, but when I started the final assembly I used Powerstrips to lock it into a reasonable position.


For the big hard drive, I used the 3.5-inch slot provided by the case. The remaining 5.25-inch slot is currently empty, and technically made to accept a DVD player, but I am keeping the option open for a second hard drive.


Final result

Apparently, jumpers went the way of the dodo and the dinosaurs. The motherboard has only one jumper, to reset the BIOS. Nothing to be done in this regard.

I set my fan to blow into the case from the bottom, so as to keep a higher pressure inside (which reduces dust accumulation). Hot air escapes from the back through the grid. This setup seems to be fine, but if I find additional problems, I can add a second fan to the PCIe grid with some hacking.

When it comes to electrical consumption, the computer in standby (suspended) consumes 2 W, probably to keep the ethernet powered up for Wake-on-Lan. When turned on and idle, the consumption clocks in at 50 W, which rises to 60 W when watching a movie. Playing Minecraft drains up to 100 W.
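Those figures put the always-on cost into perspective. A back-of-the-envelope estimate for the 50 W idle case, assuming the machine ran 24/7:

```shell
# 50 W drawn continuously for a year, expressed in kilowatt-hours
awk 'BEGIN { printf "%.0f kWh/year\n", 50 * 24 * 365 / 1000 }'
```

In practice, suspending at 2 W for most of the day cuts that dramatically, which is exactly what the Wake-on-Lan setup is for.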

Here is again the picture of the final assembly. Now it’s time for software configuration, the subject of the next post.


Posted in Hardware.