Useful Links: December 2009

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Script to back up a MySQL database (e.g. for WordPress blogs)

Jeff Atwood’s recent blog failure has probably served as a timely reminder to website owners across the ‘net – and I’m pleased to see that Jeff got his site back (some of us would be less lucky).  At the time, I tweeted this:

“RT @edbott: Every webmaster’s nightmare: ^MW Feeling sorry for @codinghorror and thanking @alexcoles for my backups”

To which the response from my hosting provider came back as “er… what backups?!”.  Thankfully, that was an attempt at humour (either that or Alex had forgotten that he’d helped me set some backups last summer – it helps to be on first name terms with the guys who look after your Internet presence)!

In common with many blogs, this site runs on self-hosted WordPress and the key to its existence is a MySQL database.  Themes, etc. can be recreated… the data cannot (at least, not easily).

This is where the following script comes in.  Run as a cron job on my host’s server, I have it set up to make a copy of my WordPress database and e-mail it to me (which means I have archive copies in my Google mail).  It took some tweaking (by Alex) to run without errors and I’d like to link to its original source, but the link I have appears to be dead – I’ve left the original author’s comments intact though – hopefully he won’t mind:

#! /usr/bin/php -q
<?php

// configure your database variables below:
$dbhost = 'localhost'; // Server address of your MySQL Server
$dbuser = 'db_username'; // Username to access MySQL database
$dbpass = 'db_password'; // Password to access MySQL database
$dbname = 'db_name'; // Database Name

// Options you may optionally configure

$use_gzip = "no"; // Set to "no" if you don't want the files sent in .gz format
$remove_sql_file = "yes"; // Set this to "yes" if you want to remove the .sql file after gzipping. "yes" is recommended.
$remove_gzip_file = "yes"; // Set this to "yes" if you want to delete the gzip file also. I recommend leaving it as "no"

// Configure the path that this script resides in on your server.

$savepath = "/home/username/wpbackup"; // Full path to this directory. Do not use a trailing slash!

$send_email_w_file = "yes"; // Do you want this database backup sent to your email? Fill out the next 2 lines
$to = "emailaddress"; // Who to send the emails to
$from = "emailaddress"; // Who should the emails be sent from?
$senddate = date("j F Y");
$subject = " WordPress backup"; // Subject of the email to be sent.
$headers = "From: $from";
if ($send_email_w_file == "yes") {
    $message = "Attached is a MySQL dump of sitename dated $senddate."; // Brief message.
} else {
    $message = "Your MySQL database has been backed up and placed in $savepath."; // Brief message.
}

$use_ftp = "no"; // Do you want this database backup uploaded to an ftp server? Fill out the next 4 lines
$ftp_server = "localhost"; // FTP hostname
$ftp_user_name = "ftp_username"; // FTP username
$ftp_user_pass = "ftp_password"; // FTP password
$ftp_path = "/"; // This is the path to upload to on your ftp server!

// Do not modify below this line! It will void your warranty!

class zipfile
{
    var $datasec = array(); // array to store compressed data
    var $ctrl_dir = array(); // central directory
    var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; // end of central directory record
    var $old_offset = 0;

    function add_dir($name)
    {
        // adds "directory" to archive - do this before putting any files in directory!
        // $name - name of directory... like this: "path/"
        // ...then you can add files using add_file with names like "path/file.txt"
        $name = str_replace("\\", "/", $name);

        $fr = "\x50\x4b\x03\x04";
        $fr .= "\x0a\x00"; // ver needed to extract
        $fr .= "\x00\x00"; // gen purpose bit flag
        $fr .= "\x00\x00"; // compression method
        $fr .= "\x00\x00\x00\x00"; // last mod time and date

        $fr .= pack("V", 0); // crc32
        $fr .= pack("V", 0); // compressed filesize
        $fr .= pack("V", 0); // uncompressed filesize
        $fr .= pack("v", strlen($name)); // length of pathname
        $fr .= pack("v", 0); // extra field length
        $fr .= $name;
        // end of "local file header" segment

        // no "file data" segment for a path

        // "data descriptor" segment (optional but necessary if archive is not served as file)
        // a directory entry has a zero crc and zero sizes
        $fr .= pack("V", 0); // crc32
        $fr .= pack("V", 0); // compressed filesize
        $fr .= pack("V", 0); // uncompressed filesize

        // add this entry to array
        $this->datasec[] = $fr;

        $new_offset = strlen(implode("", $this->datasec));

        // ext. file attributes mirrors MS-DOS directory attr byte, detailed
        // at…s/Q125/0/19.asp

        // now add to central directory record
        $cdrec = "\x50\x4b\x01\x02";
        $cdrec .= "\x00\x00"; // version made by
        $cdrec .= "\x0a\x00"; // version needed to extract
        $cdrec .= "\x00\x00"; // gen purpose bit flag
        $cdrec .= "\x00\x00"; // compression method
        $cdrec .= "\x00\x00\x00\x00"; // last mod time & date
        $cdrec .= pack("V", 0); // crc32
        $cdrec .= pack("V", 0); // compressed filesize
        $cdrec .= pack("V", 0); // uncompressed filesize
        $cdrec .= pack("v", strlen($name)); // length of filename
        $cdrec .= pack("v", 0); // extra field length
        $cdrec .= pack("v", 0); // file comment length
        $cdrec .= pack("v", 0); // disk number start
        $cdrec .= pack("v", 0); // internal file attributes
        $ext = "\x00\x00\x10\x00";
        $ext = "\xff\xff\xff\xff";
        $cdrec .= pack("V", 16); // external file attributes - 'directory' bit set

        $cdrec .= pack("V", $this->old_offset); // relative offset of local header
        $this->old_offset = $new_offset;

        $cdrec .= $name;
        // optional extra field, file comment goes here
        // save to array
        $this->ctrl_dir[] = $cdrec;
    }

    function add_file($data, $name)
    {
        // adds "file" to archive
        // $data - file contents
        // $name - name of file in archive. Add path if you want

        $name = str_replace("\\", "/", $name);
        //$name = str_replace("\\", "\\\\", $name);

        $fr = "\x50\x4b\x03\x04";
        $fr .= "\x14\x00"; // ver needed to extract
        $fr .= "\x00\x00"; // gen purpose bit flag
        $fr .= "\x08\x00"; // compression method
        $fr .= "\x00\x00\x00\x00"; // last mod time and date

        $unc_len = strlen($data);
        $crc = crc32($data);
        $zdata = gzcompress($data);
        $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
        $c_len = strlen($zdata);
        $fr .= pack("V", $crc); // crc32
        $fr .= pack("V", $c_len); // compressed filesize
        $fr .= pack("V", $unc_len); // uncompressed filesize
        $fr .= pack("v", strlen($name)); // length of filename
        $fr .= pack("v", 0); // extra field length
        $fr .= $name;
        // end of "local file header" segment

        // "file data" segment
        $fr .= $zdata;

        // "data descriptor" segment (optional but necessary if archive is not served as file)
        $fr .= pack("V", $crc); // crc32
        $fr .= pack("V", $c_len); // compressed filesize
        $fr .= pack("V", $unc_len); // uncompressed filesize

        // add this entry to array
        $this->datasec[] = $fr;

        $new_offset = strlen(implode("", $this->datasec));

        // now add to central directory record
        $cdrec = "\x50\x4b\x01\x02";
        $cdrec .= "\x00\x00"; // version made by
        $cdrec .= "\x14\x00"; // version needed to extract
        $cdrec .= "\x00\x00"; // gen purpose bit flag
        $cdrec .= "\x08\x00"; // compression method
        $cdrec .= "\x00\x00\x00\x00"; // last mod time & date
        $cdrec .= pack("V", $crc); // crc32
        $cdrec .= pack("V", $c_len); // compressed filesize
        $cdrec .= pack("V", $unc_len); // uncompressed filesize
        $cdrec .= pack("v", strlen($name)); // length of filename
        $cdrec .= pack("v", 0); // extra field length
        $cdrec .= pack("v", 0); // file comment length
        $cdrec .= pack("v", 0); // disk number start
        $cdrec .= pack("v", 0); // internal file attributes
        $cdrec .= pack("V", 32); // external file attributes - 'archive' bit set

        $cdrec .= pack("V", $this->old_offset); // relative offset of local header
        // echo "old offset is ".$this->old_offset.", new offset is $new_offset";
        $this->old_offset = $new_offset;

        $cdrec .= $name;
        // optional extra field, file comment goes here
        // save to central directory
        $this->ctrl_dir[] = $cdrec;
    }

    function file()
    {
        // dump out file
        $data = implode("", $this->datasec);
        $ctrldir = implode("", $this->ctrl_dir);

        return
            $data.
            $ctrldir.
            $this->eof_ctrl_dir.
            pack("v", sizeof($this->ctrl_dir)). // total # of entries "on this disk"
            pack("v", sizeof($this->ctrl_dir)). // total # of entries overall
            pack("V", strlen($ctrldir)). // size of central dir
            pack("V", strlen($data)). // offset to start of central dir
            "\x00\x00"; // .zip file comment length
    }
}
//--------- end class

$date = date("Ymd-Hi");
$filename = "$savepath/$dbname-$date.sql";
passthru("/usr/bin/mysqldump --opt -h$dbhost -u$dbuser -p$dbpass $dbname >$filename");

$zipfile = new zipfile();

// add the subdirectory ... important!
$zipfile->add_dir($savepath);

// add the binary data stored in the string 'filedata'
$file6 = fopen($filename, 'rb');
$filedata = fread($file6, filesize($filename));
fclose($file6);
//$filedata = "(read your file into $filedata)";
$zipfile->add_file($filedata, $filename);

// OR instead of doing that, you can write out the file to the local disk like this:
$filename3 = $savepath."/".$dbname.".".$date."_sql.tar.gz";
$fd = fopen($filename3, "wb");
$out = fwrite($fd, $zipfile->file());
fclose($fd);

//$zipline = "tar -czf ".$dbname.".".$date."_sql.tar.gz ".$dbname."-".$date.".sql";
//echo shell_exec($zipline);

if ($use_gzip == "yes") {
    $filename2 = $filename3;
} else {
    $filename2 = "$savepath/$dbname-$date.sql";
}

if ($send_email_w_file == "yes") {
    $fileatt_type = filetype($filename2);
    $fileatt_name = "".$dbname."-".$date."_sql"; // change to _sql.tar.gz if gzip file

    // Read the file to be attached ('rb' = read binary)
    $file = fopen($filename2, 'rb');
    $data = fread($file, filesize($filename2));
    fclose($file);

    // Generate a boundary string
    $semi_rand = md5(time());
    $mime_boundary = "==Multipart_Boundary_x{$semi_rand}x";

    // Add the headers for a file attachment
    $headers .= "\nMIME-Version: 1.0\n"."Content-Type: multipart/mixed;\n"." boundary=\"{$mime_boundary}\"";

    // Add a multipart boundary above the plain message
    $message = "This is a multi-part message in MIME format.\n\n"."--{$mime_boundary}\n"."Content-Type: text/plain; charset=\"iso-8859-1\"\n"."Content-Transfer-Encoding: 7bit\n\n".
    $message."\n\n";

    // Base64 encode the file data
    $data = chunk_split(base64_encode($data));

    // Add file attachment to the message
    $message .= "--{$mime_boundary}\n"."Content-Type: {$fileatt_type};\n"." name=\"{$fileatt_name}\"\n"."Content-Disposition: attachment;\n"." filename=\"{$fileatt_name}\"\n"."Content-Transfer-Encoding: base64\n\n".
    $data."\n\n"."--{$mime_boundary}--\n";

    // Send the message
    $ok = @mail($to, $subject, $message, $headers);
    if ($ok) {
        echo "Database backup created and sent! File name: $filename2";
    } else {
        echo "Mail could not be sent. Sorry!";
    }
}

if ($use_ftp == "yes") {
    $ftpconnect = "ncftpput -u $ftp_user_name -p $ftp_user_pass -d debsender_ftplog.log -e dbsender_ftplog2.log -a -E -V $ftp_server $ftp_path $filename2";
    shell_exec($ftpconnect);
    echo "$filename2 was created and uploaded to your FTP server!";
}

if ($remove_sql_file == "yes") {
    exec("rm -r -f $filename");
}

if ($remove_gzip_file == "yes") {
    exec("rm -r -f $filename2");
}
?>

All you need to do is: save this on a web server somewhere; edit the variables at the top of the script; and set up a cron job to run it on whatever schedule suits.

For many people, that should be enough to ensure that their WordPress installation is safe from any catastrophe (note: no warranty is implied and I’m not responsible if things do go wrong for you).  For me though, I have an additional consideration as my images are not in the database… maybe that will be the subject of a future blog post.  In the meantime, I can sleep soundly knowing that the database behind my site is backed up regularly.
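For readers who would rather not maintain the PHP class above, the same dump-and-mail idea can be sketched in a few lines of Python.  This is a hypothetical illustration, not the script I actually run – the function names, paths, and addresses are all placeholders:

```python
# Hypothetical sketch of the backup flow: run mysqldump, then wrap the dump
# in a MIME message ready to send.  Schedule it from cron, e.g.:
#   0 3 * * * /usr/bin/python3 /home/username/wpbackup/backup.py
from datetime import datetime
from email.message import EmailMessage


def build_dump_command(host, user, password, dbname, outfile):
    """Build the mysqldump invocation as an argument list (avoids shell quoting)."""
    return ["mysqldump", "--opt", f"-h{host}", f"-u{user}", f"-p{password}",
            dbname, f"--result-file={outfile}"]


def build_backup_mail(sender, recipient, dbname, dump_bytes):
    """Wrap the SQL dump in a message that smtplib.SMTP.send_message() can send."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"{dbname} backup {datetime.now():%Y%m%d-%H%M}"
    msg.set_content(f"Attached is a MySQL dump of {dbname}.")
    msg.add_attachment(dump_bytes, maintype="application", subtype="sql",
                       filename=f"{dbname}.sql")
    return msg
```

In use, you would run the dump with `subprocess.run(build_dump_command(...), check=True)`, read the resulting file, and hand the message to `smtplib` – which mirrors what the PHP script does with `passthru()` and `mail()`.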

Controlling who sees which Twitter status updates (tweets)

These days, many of the items that would once have warranted a blog post end up as one of my bookmarks on delicious (posted monthly to this blog in a “useful links” post, with the help of Postalicious) or on my Twitter stream (and I will start to blog the more useful tweets here soon) but I have to admit that, for a long-time tech blogger, I’m still a bit of a newbie when it comes to social media.

I’ve learned a lot in the last 24 hours though – last night I retweeted (the old way – using RT and adding a comment marked with ^MW – rather than using Twitter’s new retweet function) and the original author thought I was trying to make them look bad.  If getting a message into 140 characters is difficult (even harder than avoiding unintentional emotion in e-mail, which is an essentially emotionless medium), getting a comment into the 20 or 30 characters left on someone else’s tweet is really tough.  Thankfully, that was all resolved with a few more tweets this morning, but I noticed that some of the messages I saw were using .@username and that got me wondering what the . is for.

Googling . and @ is not easy, so I asked @brynmorgan, who I’d seen use this method, and he explained that the . broadcasts his message to all of his followers – because, unless someone also follows me, they don’t see a normal @ reply.  I’d never quite understood whether my followers saw all of the responses that I posted to @username and now it makes sense.  I thought that might be useful for others so, assuming I have understood this correctly:

  • @username will direct a tweet to a specific user and if someone follows both the sender and the specified username, they will see the message.
  • .@username will direct a tweet to a user and all of the sender’s followers will see the message.
  • D username is a direct message between two Twitter users (i.e. a private message).
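The rules above can be modelled in a few lines of code (a hypothetical sketch only – Twitter’s actual delivery logic is more involved, and the function name is mine):

```python
# Hypothetical model of who sees a tweet under the three rules above.
def sees_tweet(text, sender, viewer, follows):
    """follows: dict mapping each user to the set of accounts they follow."""
    if text.startswith("D "):
        # direct message: only the named recipient sees it
        recipient = text.split()[1]
        return viewer == recipient
    if text.startswith(".@"):
        # .@username: broadcast to all of the sender's followers
        return sender in follows.get(viewer, set())
    if text.startswith("@"):
        # plain @reply: a follower must also follow the mentioned user
        mentioned = text[1:].split()[0]
        return (sender in follows.get(viewer, set())
                and mentioned in follows.get(viewer, set()))
    # ordinary tweet: visible to anyone following the sender
    return sender in follows.get(viewer, set())
```

For example, if carol follows only alice, she would not see alice’s “@bob hello”, but she would see alice’s “.@bob hello”.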

Thanks to Bryn for educating me and, by the way, there’s some interesting commentary on social media over on the Brynovation blog.

Writing a macro to e-mail a worksheet from an Excel workbook

I spent most of today trying to catch up with my expenses (in order to plug the rather large hole in my bank balance before Christmas).  I work for a large IT company, and to say that our systems are antiquated would be an understatement: I understand that my expense claims are entered onto a computer system prior to payment but, before that, I have to complete an Excel spreadsheet and mail a hard copy, with receipts attached, to a team that processes them (which is corporate-speak for wasting my time, their time, and my manager’s time to quibble over minor infringements).  Recently a new level of bureaucracy was added to the process and, before snail-mailing the hard copy to be processed, I also have to e-mail a soft copy to my manager for approval, using a pre-defined format for the mail subject header.

You can probably tell by the tone of this post that I’m no fan of this process.  I understand from contacts at Microsoft, for example, that their system is entirely electronic, although paper receipts do also need to be submitted for audit purposes and I can’t see why we couldn’t do something similar.  Still, it could be worse: when I worked for a major fashion design, marketing and retail organisation a few years back, they insisted that I staple each receipt to a sheet of A4 paper first…

Anyway, after messing up the process a couple of times today and recalling messages with incorrectly formatted subjects, I decided that it’s something that should be automated.  I’ve never written any Visual Basic for Applications (VBA) before, but, armed with a few code snippets from the web, I managed to write a macro (the whole thing took me about 30 minutes to pull together and debug):

Sub SendToManager()
' SendToManager Macro
' Macro to send completed expense worksheet to one's Manager
' Keyboard Shortcut: Ctrl+Shift+M

    ' Create a new workbook containing the right-most sheet and send it as an attachment
    Sheets(Sheets.Count).Copy

    With ActiveWorkbook
        .SendMail Recipients:="", _
            Subject:="Expenses for approval: " & Range("C8").Value & ", " & _
            Range("O8").Value & ", " & Format(Range("O9").Value, "Long Date") & _
            ", " & Format(Range("Q48").Value, "Currency")
        .Close SaveChanges:=False
    End With

End Sub

The code is here for anyone who might find something similar useful… I’m sure it will need modification to suit someone else’s requirements, but the basic idea is this: create a copy of the right-most worksheet in the Excel workbook (I create a new sheet for each claim, working left to right…), send it to the specified recipient (one enhancement might be to prompt for a recipient name), and build the subject line from data in the sheet so that it eventually reads “Expenses for approval: name, employee number, claim date, claim value”.  Simple really.
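For what it’s worth, the subject-line format can be expressed as a tiny helper (a hypothetical illustration – the field order comes from the cells C8, O8, O9 and Q48 that the macro reads, and the function name is mine):

```python
# Hypothetical helper mirroring the macro's subject line:
# "Expenses for approval: name, employee number, claim date, claim value"
def expense_subject(name, employee_number, claim_date, claim_value):
    """claim_date is a datetime.date; claim_value is a number in pounds."""
    return (f"Expenses for approval: {name}, {employee_number}, "
            f"{claim_date:%d %B %Y}, \u00a3{claim_value:,.2f}")
```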

Here are a few links that helped me out in doing this:

Migrating infrastructure services to a Windows Server 2008 R2 computer

Having built a low-power server to run my home infrastructure, I need to get moving on decommissioning the old virtual machines so I can turn off the Dell PowerEdge 840 that runs them.

The first step was to migrate the Active Directory Domain Services from my existing Windows Server 2003 R2 SP2 installation to the new Windows Server 2008 R2 machine:

  1. On the 2003 DC, insert the 2008 R2 DVD and open a command prompt.
  2. Locate adprep.exe (or adprep32.exe if running on a 32-bit architecture – I was already running 64-bit) in the \support\adprep folder (note the changed file location between Windows Server 2008 and 2008 R2 – it used to be in \sources\adprep) and run the following commands (there’s more detail on these actions in Daniel Petri’s article on Windows Server 2008 ADprep):
    • adprep /forestprep (on the schema master for the forest)
    • adprep /domainprep (on the infrastructure master for each domain, after making sure that the domain is in at least Windows 2000 native mode)
    • adprep /domainprep /gpprep (on the infrastructure master for each domain)
    • adprep /rodcprep (if read only domain controllers are required in a Windows Server 2003 forest)
  3. After this, I ran dcpromo.exe on the new server, promoting it to a domain controller in the existing forest/domain, electing to make the server a DNS server and a Global Catalog server at the same time.
  4. With the new server running as a DC, I needed to transfer the FSMO roles.  I did this by following the advice in Microsoft knowledge base article 324801: register schmmgmt.dll; run the Active Directory Schema snap-in and transfer the Schema Master role; run Active Directory Domains and Trusts and transfer the Domain Naming Master role; then run Active Directory Users and Computers and transfer the RID Master, PDC Emulator and Infrastructure Master roles.  Incidentally, even though I did this with the GUI tools, Adam Bell outlines a much smarter method to transfer FSMO roles using PowerShell.
  5. After checking that the new server’s DNS service successfully performed simple and recursive lookups (on the Monitoring tab in the DNS Server properties) then switching the new server’s primary DNS server to use itself (rather than the old DC), I ran dcpromo.exe on the 2003 server to demote it to a normal domain member, before ensuring that all computers were using the new (2008 R2) DNS server and removing the role from the 2003 computer.
  6. With Active Directory and DNS migrated, the last major service to move was DHCP (I do have some other services – TFTP, a phone directory web service, etc. – running on another server, but they are really application services, and this post concentrates on the infrastructure).  The move is pretty straightforward (details can be found on the Microsoft Enterprise Networking team blog) and involves a couple of commands – one to export from the 2003 R2 server and another to import on the 2008 R2 server:
    • netsh dhcp server export filename all
    • netsh dhcp server import filename all
  7. After confirming that the DHCP service was running on the target with all entries transferred, I stopped the DHCP Server service on the source (net stop "DHCP Server"), and renewed a client IP address (by starting up a PC, but running ipconfig /renew would have been as effective) to check that renewals worked before restarting the service (net start "DHCP Server"), deauthorising the original DHCP server and removing the DHCP role.
  8. If I were using the Encrypted File System, or if the server were a Terminal Services Licensing server, there would be some extra steps (not in my case).
  9. Finally, with all services removed from the 2003 machine, I shut it down, deleted the virtual machine files from the host server, and removed the computer account from Active Directory, which can now have the forest and domain functional levels raised if necessary, as I’m completely free from legacy domain controllers.

Intel pairs its Atom with a more efficient chipset for low power PCs

Last week I wrote about my attempts to build a low-power infrastructure server for my home network (part 1 and part 2).  After I’d put all the pieces together, I saw a tweet from Jeff Atwood, highlighting a review of a new Intel board that uses a mobile 945GSE chipset instead of the power-sapping 945GC – allowing a much lower-power unit to be built (albeit just with an Atom N270 – not the dual-core 330 that I used).  Something like this would make a great PC for home use – although it’s never going to be a screamer for games, it should be pretty good for web surfing, e-mail, and other everyday uses.

John Craddock whiteboards DirectAccess and Active Directory Recycle Bin

I spent yesterday evening at an Active Directory User Group (ADUG) meeting where John Craddock presented on IPv6 and DirectAccess (repeating two of John’s TechEd Europe 2009 sessions).  I have to admit that, at times, I struggled to keep up as the technology went deep, but it was extremely worthwhile.  I hope to find the time to write some blog posts summarising some of the key points from the evening; however John also has two 10-minute videos on the Dutch site NGN, in which he “whiteboards” DirectAccess and Active Directory Recycle Bin.  These are worth a few minutes of your time to get a quick overview of two new technologies in Windows Server 2008 R2.

Useful Links: November 2009

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Building a low-power server for 24×7 infrastructure at home: Part 2 (assembly and initial configuration)

Yesterday I wrote about how I’d been looking to create a server that didn’t consume too much power to run my home infrastructure and finally settled on a mini-ITX solution.  This post continues the theme, looking at the assembly of the unit, installation of Windows Server, and finally, whether I achieved my goal of building a low-power server.

As I commented previously, it’s been at least 10 years since I built a PC from scratch and it’s still a minefield of connectors and components.  I took the Travla C158 case and Intel D945GCLF2 board that I had purchased and added 512MB of DDR2 RAM and a 250GB Seagate Barracuda (ST3250620NS) that were not being used in any of my other machines.  I didn’t fit an optical drive, electing to use a USB-attached one for setup (more on that in a moment) and the case also has a slot for a card reader, which I really should consider filling (or blanking off).

With all the components ready, this is the process I followed:

  1. Open the top cover of the case.
  2. Remove the media drive and hard drive holders.
  3. Fix the hard disk to its holder and refit.
  4. Fit the gasket that surrounds the various ports (supplied with the motherboard) to the case.
  5. Fit the motherboard and PCI riser.
  6. Fit a blanking plate for the (unused) PCI card slot.
  7. Install some DDR2 memory in the motherboard’s single memory slot.  Unfortunately the module that I used does not have a low-enough profile to allow the media drive holder to be refitted, so I’ll be looking for some more (512MB isn’t much for a modern operating system anyway).
  8. Connect the case fan to the jumper on the motherboard.
  9. Connect the side panel audio ports to the motherboard (the labelling on the connectors did not match Intel’s instructions for the motherboard, but I followed Gabriel Torres’ Hardware Secrets article on installing frontal audio plugs – sound is not really a concern for me on a server).
  10. Connect the front panel connectors to the motherboard, using the pattern shown in the instructions (noting that the case I selected doesn’t have a reset button, so pins 5 and 7 are not connected).
  11. Connect the side panel USB ports to the motherboard (single jumper).
  12. Connect both the power jumpers (2×2 and 2×10) to the motherboard.
  13. Connect the SATA hard drive power and data cables (the data cable was supplied with the motherboard, along with an IDE cable that I did not use).
  14. Install the mounting kit, ready to fix the PC to the wall of my office (I also considered placing it in the void between the downstairs ceiling and upstairs floor… but decided it wasn’t really necessary to bury the machine inside the fabric of the house!).
  15. Check that the BIOS configuration jumper block is set to pins 1 and 2 (normal) and refit the top of the case, then boot the PC.
  16. Press F2 to enter the BIOS configuration utility and change the following values:
    • Set the date and time (on the Main screen).
    • Under Boot Configuration on the Advanced screen, enable the System Fan Control.
    • On the Power screen, set the action After Power Failure to Power On.
    • On the Boot screen, ensure that Boot USB Devices First is enabled.
  17. Connect a DVD drive and boot from a Windows setup DVD.

I did manage to boot from my DVD drive once; however I had left the wrong DVD in the drive and so I rebooted.  After rebooting I was unable to get the PC to boot from the external DVD drive (a Philips SPD3900T).  I tried different USB ports, I changed BIOS options, I even reset the BIOS jumper to pins 2 and 3 (which provides access to some extra settings in the BIOS) but nothing worked, so I configured a USB thumb drive to install Windows Server 2008 R2 and that booted flawlessly.  I later found that Windows didn’t recognise the DVD drive until I had reset its power (which may also have resolved my issues in a pre-boot environment); however it’s all a bit odd (I hadn’t previously experienced any issues with this external DVD drive), and I do wonder if my motherboard has a problem booting from USB-attached optical media.

The Windows Server setup process was smooth and all of my devices were recognised (although I did need to set the screen resolution to something sensible), leaving just the configuration of the operating system and services (adding roles, etc.).

With Windows Server 2008 R2 running, I decided to take a look at the power usage on the server and it seems to tick over at around 35W.  That’s not as low as I would like (thanks to the Intel 945GC chipset – the CPU itself only needs about 8W) but it’s a lot better than running my Dell PowerEdge 840 all day.  There are some other steps I can take too – I could potentially reduce hard disk power consumption by replacing my traditional hard drive with an SSD, as the Barracuda pulls about 9W idle and 12W when seeking (thanks to Aaron Parker for that suggestion).  It may also be that I can do some work with Windows Server to reduce its power usage – although putting a server to sleep is probably not too clever!  A brief look at the energy report from powercfg.exe -energy indicates that the USB mass storage device may be preventing processor power management from taking place – and sleep is disabled because I’m using a standard VGA driver (vgapnp.sys).  Microsoft has written a white paper on improving energy efficiency and managing power consumption in Windows Server 2008 R2, and a blog post from the Windows Server Performance team looks at adjusting processor P-states.  It may be some time before I reach my nirvana of a truly low-power infrastructure server, but I’ll write about it if and when I do – and 35W is still a lot better than 100W.

Building a low-power server for 24×7 infrastructure at home: Part 1 (hardware selection)

A couple of years back, I bought myself a small server with the intention that it would replace many of the various PCs I was using at home.  As a result, I did decommission some older hardware and it allowed me to consolidate the infrastructure I use for my home office network, testing and development onto one Hyper-V host, supplemented by an assortment of client devices (my netbook, my kids’ netbook, my wife’s laptop, the living room Mac, my MacBook, my work notebook) and a couple of Netgear ReadyNAS Duos (one as always-on storage, the other as an iSCSI target).  The trouble is that, even though the clients may be shutdown or in a low-power state when not in use, the server runs 24×7 consuming something in the region of 100W of power – generally just so that one virtual machine is available with my DHCP, DNS, TFTP (for IP phone configuration) and basic web services (e.g. phone directory).

I decided to build a low-cost, low-power, PC to run as a small server and provide these services, along with Active Directory.  After reading Jeff Atwood’s post on ultra-low-power PCs, I was inspired; however I wasn’t sure that I’d get Windows Server running on a non-Intel chip (it might be possible, but I wasn’t going to risk it) and even though Linux could potentially offer the rest of the services I needed, a lot of my work uses Microsoft software, and I wanted to run Active Directory.

I tweeted to see if anyone had any advice for me – and there were some good suggestions – including Garry Martin’s identification of an Acer Aspire Revo nettop for around £144 at eBuyer.  That got me thinking: I wasn’t sure about the Revo’s form factor – but if I could get myself a mini-ITX wall-mountable case and a suitable motherboard, then use a spare disk and memory from my “box of PC bits”, I could probably build something for a similar price.

I should explain that I haven’t “built” a PC for about 10 years (maybe more) because there are so many decent OEM deals around, but I started to look at what was available (as highlighted in Jeff’s post).  I did also consider a Fit PC 2 (low power and Intel-based) but it uses the Z-series Atom processors, which provide Intel VT (great for Windows Virtual PC) but are not 64-bit, so are not suitable for Windows Server 2008 R2 (or for Hyper-V – although virtualisation is not something I’m looking to do with this server).  Eventually, I settled on a Travla C158 case coupled with an Intel D945GCLF2 board, which has an Intel Atom 330, giving me a dual-core x64 CPU (actually, it’s like two Atom N270s on the same socket) with hardware DEP (but not VT).  There were other options with lower power consumption, but they would have involved using non-Intel CPUs (and I was unable to confirm that Windows Server will run on a Via Eden, for example); this Atom 330/Intel D945GCLF2 combination is still providing plenty of computing power for my purposes (William Henning’s review on neoseeker is worth a read).

I ordered the equipment yesterday evening at 19:00 (the site says “place UK orders before 7.30PM GMT for same-day despatch”) and a courier delivered it to me before 11 today.  I did end up buying some power connectors because it wasn’t clear whether they would be required (I left a comment in the order notes asking to remove them if not, but that was ignored), and there was some confusion because I received order and stock confirmations last night but the dispatch notice wasn’t sent until today (making me think I’d missed the next-day slot).  Even with the communications issues, that sort of turnaround is fantastic – many online stores need orders by mid-afternoon for same-day dispatch.

With the hardware in place, the next step was assembly – I’ll save that for a future blog post (along with some real values for power consumption) whilst I turn my attention to configuring the software…