Creating an iSCSI target on a Netgear ReadyNAS

A few months ago, I wrote that I was looking for an iSCSI target add-on for my Netgear ReadyNAS Duo. I asked whether such an add-on was available on Netgear’s ReadyNAS community forums; however, it seems the forums are not a true indication of what is possible, as the moderators are heavily biased towards what Netgear supports rather than what can be done. Thanks to Garry Martin, who pointed me in the direction of Stefan Rubner’s ReadyNAS port of the iSCSI Enterprise Target Project, I now have a ReadyNAS acting as an iSCSI target.

I have a lot of data on my first ReadyNAS and, even though I backed it all up to a new 1.5TB drive in my server (which will eventually be swapped into the ReadyNAS as part of the next X-RAID upgrade), I wasn’t prepared to risk losing it, so I bought a second ReadyNAS to act as an iSCSI target for serving virtual machine images. In short, don’t run this on your ReadyNAS unless you are reasonably confident at a Linux command prompt and you have a backup of your data. This worked for me but your mileage may vary – and, if it all goes wrong and takes your data with it, please don’t blame me.

First up, I updated my ReadyNAS to the latest software release (at the time of writing, that’s RAIDiator version 4.1.6). Next, I enabled SSH access using the Updates page in FrontView with the EnableRootSSH and ToggleSSH add-ons (note that these do not actually install any user interface elements: EnableRootSSH does exactly what it says, and when it’s complete the root password will be set to match the admin password; ToggleSSH will enable/disable SSH each time the update is run).

The next step was to install the latest stable version (v0.4.17-1.0.1) of Stefan Rubner’s iSCSI target add-on for ReadyNAS (as for EnableRootSSH and ToggleSSH, it is simply applied as an update in FrontView).

With SSH enabled on the ReadyNAS, I switched to using a Mac (as it has a Unix command prompt, complete with an SSH client) but any Unix/Linux PC, or a Windows PC running something like PuTTY, will work too:

ssh root@ipaddress

After changing directory to /etc (cd /etc), I checked for an existing ietd.conf file and found that there was an empty one there, as ls -al ie* returned:

-rw-r--r--    1 admin    admin           0 Dec  3  2008 ietd.conf

I renamed this (mv ietd.conf ietd.conf.original) and downloaded a pre-configured version with wget http://readynasfreeware.org/gf/download/docmanfileversion/3/81/ietd.conf before editing the first line (vi ietd.conf) to change the IQN for the iSCSI target (a vi cheat sheet might come in useful here).
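For anyone curious, the downloaded file follows the usual iSCSI Enterprise Target layout – something along these lines (the IQN shown here is just a placeholder; the point of editing the first line is to change it to something unique for your network):

```
Target iqn.2009-01.com.example:readynas.iscsi-0
        Lun 0 Path=/c/iscsi_0,Type=fileio
```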

As noted in the installation instructions, the most important part of this file is the Lun 0 Path=/c/iscsi_0,Type=fileio entry. I was happy with this filename, but it can be edited if required. Next, I created a 250GB file to act as this iSCSI LUN using dd if=/dev/zero of=/c/iscsi_0 bs=10485760 count=25600. Beware: this takes a long time (I went to the pub, came back and wrote a good chunk of this blog post while it was still chugging away – just over 4 hours in total; it’s possible to get some idea of progress by watching the amount of free space reported in FrontView).
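It’s worth sanity-checking those dd parameters before committing to a multi-hour run – the block size and count multiply out as follows:

```shell
# bs=10485760 bytes is 10 MiB per block; 25600 blocks should give 250 GiB.
bs=10485760
count=25600
total=$((bs * count))
echo "$total bytes"                          # 268435456000
echo "$((total / 1024 / 1024 / 1024)) GiB"   # 250
```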

At this point, I began to deviate from the installation notes. Attempting to run /etc/init.d/rfw-iscsi-target start failed, so I rebooted the ReadyNAS; when I checked the Installed Add-ons page in FrontView, I saw that the iSCSI target was already running, although the target was listed as NOT_FOUND and clicking the Configure Targets button seemed to have no effect. (I later found that was an IE8 issue – the button produced a pop-up when I ran it from Safari on my Mac and would presumably have worked in another browser on Windows too.)

I changed the target name to /c/iscsi_0, saved the changes, and restarted the ReadyNAS again (just to be sure, although I could have restarted the service from the command line), checking that there was a green light next to the iSCSI target service in FrontView (also running /etc/init.d/rfw-iscsi-target status on the command line).

ReadyNAS iSCSI Target add-on configuration

With the target running, I switched to my client (a Windows Server 2008 computer) and ran the iSCSI Initiator, adding a portal on the Discovery tab (using the IP address of the ReadyNAS box and the default port of 3260), then switching to the Targets tab and clicking the Refresh button. I selected my target and clicked Log On…, waiting with bated breath.

Windows iSCSI initiator Discovery tab

iSCSI target exposed in Disk Management

There were no error messages – everything seemed to be working – so I switched to Server Manager and saw a new 250GB unallocated disk in Disk Management, which I then brought online and initialised.

Finally, I updated /etc/rc6.d/S99reboot to include /etc/init.d/rfw-iscsi-target stop just before the line that says # Save the last 5 ecounters by date.
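After that change, the relevant part of the shutdown script looks something like this (the surrounding lines are elided and the explanatory comment is mine – only the stop command and the existing comment line matter):

```
[...]
# Stop the iSCSI target before the volumes are unmounted
/etc/init.d/rfw-iscsi-target stop

# Save the last 5 ecounters by date
[...]
```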

ReadyNAS Duo with no available disk space

My new Netgear ReadyNAS Duo was delivered at lunchtime today. This is the second ReadyNAS Duo I’ve bought and the first is happily serving media and other files to my home network, whereas this one is intended to be hacked so that it can become an inexpensive iSCSI target (I hope).

I bought the RND2000 (i.e. the model with no disks installed) as I already have a spare 500GB disk from the first ReadyNAS (which has since been upgraded to use a pair of 1TB Seagate Barracudas) and Netgear’s current offer of a free hard drive will allow me to make this single disk one half of a RAID 1 mirror. After setting up the device via the web interface (FrontView), I discovered that the disk was detected as full, with 0MB (0%) of 0GB used.

There were no options to erase the disk, but I had previously been using this disk in a Windows Server computer so I mounted it on a Windows PC where it was recognised and I was able to delete the existing partition. After putting the disk back into the ReadyNAS, the RAIDar utility showed that it was creating a volume (eventually I could see that Volume C: had been created, with 0% of 461GB used) although it seems that I had also wiped my configuration along with the NTFS partition (that was straightforward enough to set up again).

RAIDar showing new volume creation on a ReadyNAS

Now I have the ReadyNAS up and running it’s time to have a go at setting it up for iSCSI… watch this space.

Microsoft makes Storage Server 2008 (including the iSCSI software target) available to MSDN and TechNet subscribers

I was doing some work yesterday with the Microsoft iSCSI target software and noticed a post on Jose Barreto’s blog indicating that Windows Storage Server 2008 is now available to TechNet and MSDN subscribers. Previously it was for OEMs only (or you could extract the iSCSI Target from an evaluation copy of Storage Server), but this will help out IT administrators looking to set up an iSCSI target using software only (alternatives are available, but they are not free – at least not the ones that support persistent reservations, which are needed for Windows Server 2008 failover clustering).

Now, if only I could get an add-on for my Netgear ReadyNAS Duo to support iSCSI…

Connecting to an iSCSI target using the Microsoft iSCSI Initiator

I’ve spent a good chunk of today trying to prepare a clustered virtual machine demonstration using the Microsoft iSCSI Initiator and Target.

I’ve done this before but only on training courses and I was more than a little bit rusty when it came to configuring iSCSI. It’s actually quite straightforward and I found Jose Barreto’s blog post on using the Microsoft iSCSI software target with Hyper-V a very useful guide (even though the Hyper-V part was irrelevant to me, the iSCSI configuration is useful).

The basic steps are:

  • Install the iSCSI Target (server) and Initiator (client) software (the Initiator is already supplied with Windows Vista and Server 2008).
  • Set up a separate network for iSCSI traffic (this step is optional – I didn’t do this in my demo environment – but it is recommended).
  • On the client(s):
    • Load the iSCSI Initiator, and answer yes to the questions about starting the iSCSI Service automatically and about allowing iSCSI traffic through the firewall.
    • Examine the iSCSI Initiator properties and make a note of the initiator name on the General page (it should be something like iqn.1991-05.com.microsoft:clientdnsname).
    • On the Discovery page, add a target portal, using the DNS name or IP address of the iSCSI Target and the appropriate port number (default is 3260).
  • On the iSCSI server:
    • Create a new target, supplying a target name and the identifiers of all the clients that will require access. This is where the IQNs from the initiators will be required (you can also use DNS name, IP or MAC address but IQN is the normal configuration method).
    • Add one or more LUN(s) to the target (the Microsoft implementation uses the virtual hard disk format for this, so the method is to create a virtual disk within the iSCSI Target console).
    • Make a note of the IQN for the target on the Target properties and ensure that it is enabled (checkbox on the General page).
  • Back on the client(s):
    • Move to the Target properties page and refresh to see the details of the new target.
    • Click the Log on button and select the checkbox to automatically restore the connection when the computer starts.
    • Bring the disk online and initialise it in Disk Management, before creating one or more volume(s) as required.

After completing these steps, the iSCSI storage should be available for access as though it were a local disk.
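On Windows, the client-side steps above can also be scripted with the iscsicli utility that ships with the Microsoft iSCSI Initiator – a rough sketch, where the portal address and target IQN are placeholders to be substituted for your own:

```
iscsicli QAddTargetPortal 192.168.0.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:servername-target1-target
```

QLoginTarget creates a one-off session; for a connection that survives a reboot, use the Log on dialogue’s “automatically restore” checkbox (or iscsicli’s persistent login equivalent).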

It’s worth noting that the Microsoft iSCSI Target is not easy to come by (unless you have access to a Windows Storage Server). It is possible to get hold of an evaluation copy of Storage Server though and Jose explains how to install this in another blog post. Alternatively, you can use a third party iSCSI software target (it must support persistent reservations) or, even better, use a hardware solution.

NetBooks, solid state drives and file systems

Yesterday, I wrote about the new NetBook PC that I’ve ordered (a Lenovo IdeaPad S10). In that post I mentioned that I had some concerns about running Windows 7 on a PC with a solid state drive (SSD) and I wanted to clarify something: it’s not that Windows 7 (or any other version of Windows) is inherently bad on SSD, it’s just that there are considerations to take into account when making sure that you get the most out of a solid state drive.

Reading around various forums it’s apparent that SSDs vary tremendously in quality and performance. As a consequence, buying a cheap NetBook with a Linux distro on it and upgrading the SSD to a larger device (the Linux models generally ship with lower capacity SSDs than their more expensive Windows XP brethren) is not necessarily straightforward. Then there’s the issue of form factor – not all SSDs use the same size board.

Another commonly reported issue is that NTFS performance on an SSD is terrible and that FAT32 should be used instead. That rings alarm bells with me because FAT32 does not include any file-level access control lists and has a maximum file size of 4GB – so it’s no good for storing DVD ISOs (not that you’ll fit many of those on the current generation of SSDs, and most NetBooks don’t ship with an optical drive anyway).

The reason for poor NTFS performance on SSDs may be found in a slide deck from the 2008 Windows Hardware Engineering Conference (WinHEC), where Frank Shu, a Senior Program Manager at Microsoft, highlights:

  • The alignment of NTFS partition to SSD geometry is important for SSD performance in [Windows]
    • The first Windows XP partition starts at sector #63; the middle of [an] SSD page.
    • [A] misaligned partition can degrade [the] device’s performance […] to 50% caused by read-modify-write.
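That misalignment claim is easy to check with a little arithmetic (assuming a 4 KiB flash page, which is typical but varies by device):

```shell
# The classic Windows XP partition start (sector 63) is not a multiple of
# a 4 KiB flash page, so clusters straddle page boundaries and every write
# becomes a read-modify-write.
echo "$((63 * 512)) byte offset"             # 32256
echo "$((63 * 512 % 4096)) byte remainder"   # 3584 - misaligned
echo "$((2048 * 512 % 4096)) byte remainder" # 0 - a 1 MiB-aligned start is fine
```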

It sounds to me as if those who are experiencing poor performance on otherwise good SSDs may have an issue with the partition alignment on their drives (whilst SSDs come in a smaller package, are resistant to shock and vibration, use less power and generate less heat than mechanical hard drives, SSD life and performance vary wildly). Windows 7 implements some technologies to make best use of SSD technology (read more about how Windows 7 will, and won’t, work better with SSDs in Eric Lai’s article on the subject).

In addition, at the 2007 WinHEC, Frank Shu presented three common issues with SSDs:

  • Longer setup time for command execution.
  • SSD write performance.
  • Limited write cycles for NAND flash memory (100,000 write cycles for single-level cell (SLC) devices and 10,000 write cycles for multi-level cell (MLC) devices).

(He also mentioned cost – although this is dropping as SSDs become more prevalent in NetBooks and other PC devices aimed at highly-mobile users).
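Those write-cycle figures translate into surprisingly long lifetimes under ideal wear-levelling – a back-of-envelope sketch (the 16GB capacity and 10GB/day write rate are assumptions for illustration only, and real wear-levelling is never perfect):

```shell
# Endurance estimate assuming perfect wear-levelling: total writable data
# is capacity x cycles; divide by a daily write volume for a lifetime.
capacity_gb=16
writes_per_day_gb=10
echo "$((capacity_gb * 100000 / writes_per_day_gb)) days (SLC)"  # 160000
echo "$((capacity_gb * 10000 / writes_per_day_gb)) days (MLC)"   # 16000
```

Even the MLC figure is decades, which suggests that write amplification and poor wear-levelling, rather than the raw cycle limit, are what kill cheap drives early.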

In short, SSD technology is still very new and there are a lot of factors to consider (I’ve just scratched the surface here). I’m sure that in the coming years I’ll be putting SSDs in my PCs but, as things stand at the end of 2008, it’s a little too soon to make that jump – even for a geek like me.

Incidentally, Frank Shu’s slide decks on Solid State Drives – Next Generation Storage (WinHEC 2007: WNS-T432) and Windows 7 Enhancements for Solid-State Drives (WinHEC 2008: COR-T558) are both available on the ‘net and worth a look for anyone considering running Windows on a system with an SSD installed.

Netgear ReadyNAS: low-cost RAID storage for the consumer

A few months back I was looking into how to solve my home data storage issue (huge photo collection, huge iTunes library, increasing use of digital file storage, big disaster waiting to happen) and I thought about buying a Drobo. At least, I did until my friend Garry Martin said something to the effect of “that looks expensive for what it is… what about a Windows Home Server?”

Although I was initially a fan of WHS, meeting some of the guys who produce it last November left me with more uncertainty than confidence – and what confidence remained was shattered when I realised that they had managed to take a perfectly stable Windows Server with NTFS and produce a data corruption issue when accessing files directly across the network. The issue may be obscure – and it’s been patched now – but the fact that it was produced by messing around with an otherwise stable file system, to allow it to do things that it shouldn’t be able to, makes it no less alarming.

Whilst the Drobo is undoubtedly a really neat solution, it’s also more than I need. My real requirements are: RAID; at a low price point; preferably with a decent (Gigabit Ethernet) network connection (the Drobo is just storage – for network attachment an additional DroboShare device is required); running independently of my server (i.e. an appliance); solidly built; and good-looking on the desk. What I found at BroadbandBuyer.co.uk was a Netgear ReadyNAS Duo – basically a 2-disk RAID 1, or X-RAID, NAS box (Netgear bought Infrant Technologies last year and it’s actually Infrant’s technology, rebadged as a Netgear device). The RND2150 I bought came with just a single 500GB (Seagate Barracuda) disk, but it was less expensive for me to reallocate that disk elsewhere and buy two 1TB Barracuda 7200.11 disks (ST31000340AS) than to buy a larger ReadyNAS (go figure). The ReadyNAS was about £230 (with a mail-in offer for an iPod Shuffle – received just a few days later), the disks were about £90 each (or they were last week – now they’re down to £78 as larger disks come on stream), and the 500GB disk will match the others in my server if I want to add some internal RAID there sometime. At just under £400 all-in it wasn’t cheap, but the TrustedReviews and Practically Networked write-ups were positive and I decided to go for it (there’s also a “definitive guide” to the ReadyNAS Duo on the ReadyNAS community site, which is great for a rundown of the features but is probably not a particularly objective review).

Netgear ReadyNAS Duo

Once I got the ReadyNAS home, I realised how solidly built it is, and how much value it includes. In addition to all the usual file protocols (CIFS, NFS, AFP, FTP, HTTP(S) and rsync), the ReadyNAS has a variety of additional server functionality (streaming media and discovery services, BitTorrent client, photo-sharing, etc. – which can be extended by accessing it directly as a Linux box), a thriving community and excellent Mac support (even providing a widget for Mac OS X to monitor the box). In fact, the only downside I’ve found so far is the lack of Active Directory support in the low-end ReadyNAS Duo (higher-specification devices can join a domain; one version of firmware I had on my RND2150 let me do so, but promptly left the web management interface inaccessible, resulting in the need to back up the data, perform a factory reset, and then copy the data back on again).

For small and medium businesses, there are higher-end ReadyNAS devices with more drive space and additional functionality but the ReadyNAS Duo is the one with the low price point.

Having expanded my ReadyNAS to 2x1TB disks (I was initially sceptical of the expansion process but, having done it, I’m pretty impressed and will write a separate post on the subject), my new storage regime will use the ReadyNAS for all onsite storage, periodically backed up to a separate USB disk for offsite storage. In addition, I’ll continue to back up my entire MacBook hard disk to a Western Digital Passport drive (which I can use to boot the system when, not if, the primary disk goes belly-up), with an additional copy of the iTunes library and photos on the ReadyNAS, and Mozy to provide backup in the cloud for work in progress (at least until Live Mesh has a Mac client and increased storage capabilities). In the meantime, my server will continue to be used primarily for virtual machines, and any essential data from the VMs will be copied to the ReadyNAS.

For many people, a single disk backup (e.g. USB hard disk) may suffice (even if it does represent a risk in terms of disaster recovery) and I’ll admit that this solution is not for everyone – but, for anyone with a lot of data hanging around at home and who doesn’t want the hassle of maintaining a full Windows or Linux server, the ReadyNAS appliance is worth considering, with expandable RAID providing expansion capabilities as well as peace of mind.

High volume, low cost, portable hard disk

When I bought my MacBook, I immediately upgraded the hard disk to a 320GB model (I generally avoid Western Digital, but I decided to risk it this time on the basis that as long as the data is backed up then everything should be OK).

Ever since then, I’ve been looking for a suitable USB-powered hard disk to back the MacBook up. I wanted a good-looking portable unit but upgrading the disk to match (or exceed) the internal disk was going to be problematic from a power and cooling perspective. Then I walked into PC World yesterday and saw a 320GB Western Digital My Passport Essential hard disk for £99.99. Unfortunately they only had the 320GB size in Midnight Black (my MacBook is white), so I paid a little bit more for an Arctic White one from dabs.com.

Even though the drive supports Windows and Macintosh computers (and, although it doesn’t say so, it should work with any other PC operating system that can load the appropriate USB drivers), the supplied software is only for Windows. I moved the software to another disk and connected the drive to my Mac, where I reformatted it using HFS+ and a GUID partition table (the drive was supplied as FAT32 – which is great for device portability but does have some limitations on file size – and with a master boot record (MBR)). As it happens, that step was not necessary because my chosen backup software erased the disk.

After running Carbon Copy Cloner, my Mac hard disk contents were duplicated onto the external disk and I could breathe a sigh of relief, safe in the knowledge that when (not if) the internal hard disk fails, at least I have a copy to work from.

Carbon Copy Cloner

There’s just one point to note about the cloning process… on my 2.2GHz MacBook with 4GB of RAM, the cloning operation started out by taking around 4 minutes per GB. With just short of 300GB to transfer, that’s 20 hours, so I didn’t pay too much attention to the progress bar (which indicated that the clone was about 25% complete after about 12 minutes) – it just happens that the operating system and applications (at the front of the disk) comprise lots of small files, whereas my data (written later) includes a lot of large media files. Even as the progress bar slowed to a crawl, the file transfer rate seemed to improve and the operation finally completed in about 6 hours and 40 minutes. Subsequent backups should be faster as they will be incremental.
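For what it’s worth, the arithmetic behind those two figures (sizes rounded for illustration):

```shell
# 4 minutes per GB over ~300 GB predicts a 20-hour clone; the actual run
# took about 400 minutes (6h40), i.e. a much healthier average rate.
echo "$((300 * 4 / 60)) hours predicted"    # 20
echo "$((300 * 1024 / 400)) MB/min actual"  # 768
```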

IDE/SATA to USB cable for temporary disk access

After last week’s near-catastrophe when one of my external hard disks failed, I found that the disk itself was still serviceable and it was just the enclosure that had inexplicably stopped working. So, I put the disk into an identical enclosure that had been sitting on the shelf since my previous data storage nightmare, only to find that this enclosure had also failed (whilst not even being used).

IDE/SATA to USB cable

Although I have recovered the data to another drive, my new MacBook has yet to arrive, so I wanted to hook the disk back up to the original machine and sync my iPhone with iTunes (I was running out of podcasts to listen to in the car). Tonight I bought an IDE/SATA to USB 2.0 cable from Maplin, allowing me to connect 2.5″ or 3.5″ IDE (PATA) or SATA disks to the USB port on any computer without a caddy.

It doesn’t look pretty and I wouldn’t recommend using it for too long as the drive gets very hot but it will certainly suffice as a temporary measure and the ability to support either PATA or SATA drives means that the cable should continue to be useful for a while yet.

When, oh when, will I learn to take proper backups?

Apparently (according to my wife), I’ve been a bit stressy today. Justifiably, I’d say: I have an exam tomorrow that I haven’t finished preparing for; one or both of my kids has woken me up at least once a night (or early in the morning) since I-can’t-remember-when; and this morning I walked into the office to see that the nice blue light on my external hard disk (the one with my digital photos and my iTunes library) was red.

The disk was still spinning and the Mac that it was attached to still appeared to have a volume called “External HD” but any application that attempted to access the volume locked up. So I shut down the computer, switched off the external hard disk, turned it all back on again and… saw the familiar blue light appear but without any sign of the disk spinning up. Arghhh!

Toshiba PX1223E-1G32 320GB External Hard Disk

This was the second time this had happened to me with this model of external hard disk. “Bloody Western Digital disks”, I thought… but I didn’t have time to investigate further – I had three practice exams to do today, a conference call about the Windows Server 2008 launch and the usual deluge of e-mail to process – so I turned the disk enclosure off again and left it until, once the kids were in bed, I took the disk out of the enclosure and put it into another PC.

Imagine my relief when it spun up – and, after installing MacDrive on the PC (the disk is formatted with HFS+) and rebooting, I could see my data. Woohoo!

I’m currently in the process of copying all of the data to the only volume I have left with enough free space. Unfortunately, the machine I’m using for recovery only has a 100Mbps NIC and, seeing as Windows Server 2008 says I’m getting 10.5MB/sec (i.e. around 84Mbps), I think the network’s doing pretty well – but the process of copying almost 300GB of data will take most of the night. Then, once everything is safely recovered, I’ll run some diagnostics on the disk and work out whether it’s the disk or the enclosure that’s gone belly-up. In the meantime, I’m withdrawing my earlier recommendations for the Toshiba PX1223E-1G32 (320GB 7200 RPM external USB 2.0 hard drive with 8MB data buffer).
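A quick sanity check on those transfer numbers (decimal units, rounded for illustration):

```shell
# 10.5 MB/s is about 84 Mbps on a 100 Mbps NIC, and ~300 GB (~300,000 MB)
# at roughly 10 MB/s takes in the region of 8 hours - most of a night.
echo "$((105 * 8 / 10)) Mbps"            # 84
secs=$((300000 / 10))                    # seconds at ~10 MB/s
echo "$((secs / 3600)) hours, roughly"   # 8
```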

Does anybody know where I can get an iSCSI storage device with decent RAID capabilities for not too much cash?

Secure online backup from Mozy

Mozy logo

A few weeks back I was discussing backups with a couple of my colleagues. I’ve commented before that, despite nearly losing all of my digital photos and my entire iTunes library, I’m really bad at backing up my data (it’s spread across a load of PCs and I almost never have it in a consistent enough state to back up). I had thought that Windows Home Server would be my saviour, but Microsoft rejected my feedback requests for Mac and Linux client support so that won’t work for me. Besides which, I should really keep an offsite copy of the really important stuff. One of my colleagues suggested that I joined in a scheme he was setting up with remote transfers between friends (effectively a peer-to-peer network backup) but then another recommended Mozy.

Mozy? What’s that?

For those who haven’t heard of it (I hadn’t, but it does seem to be pretty well known), Mozy is an online backup service from Berkeley Data Systems (who were purchased by EMC last week). Available as a free service with 2GB of storage (and an extra 256MB per referral – for both the referrer and the new customer – my referral code is L8FPFL if anyone would like to help out…), as a paid service with unlimited storage for $4.95 a month (per computer), or as a professional service for $3.95 a month (per computer) plus $0.50 per GB of storage, it seems there’s an option for everyone – although it is worth understanding the differences between Mozy Home and Mozy Pro.

With client support for most Windows versions (including Vista) and Mac OS X (still in beta), data is protected by the Mozy client application using 448-bit Blowfish encryption (with a user-supplied key or one from Mozy) and then transferred to the Mozy servers over an HTTPS connection using 128-bit SSL. Upload speeds are not fast on my ADSL connection and there is some impact on performance, but I can still use the web whilst I’m uploading in the background (in fact, I have a backup taking place as I’m writing this post). Also, once the first backup has taken place, Mozy only copies changed blocks, so subsequent backups should be faster. The only problem that I found (with the Mac client – I haven’t tried on Windows yet) was that it uses Spotlight searches when presenting backup sets, so if you have recently had a big clearout (as I did before backing up), the size of each backup set may be out of date (Apple support document 301562 offers some advice to force Spotlight to re-index a folder).
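That changed-block behaviour is easy to picture with a toy example – split a file into fixed-size blocks, hash each one, and re-send only the blocks whose hashes changed (the file names, contents and 8-byte block size here are purely illustrative; real implementations use much larger blocks):

```shell
# Two versions of a 24-byte file differ by one byte in the middle block;
# hashing 8-byte blocks shows that only that block needs re-uploading.
printf 'AAAAAAAABBBBBBBBCCCCCCCC' > v1.bin
printf 'AAAAAAAABBBBXBBBCCCCCCCC' > v2.bin
for off in 0 8 16; do
  h1=$(dd if=v1.bin bs=1 skip=$off count=8 2>/dev/null | md5sum | cut -d' ' -f1)
  h2=$(dd if=v2.bin bs=1 skip=$off count=8 2>/dev/null | md5sum | cut -d' ' -f1)
  if [ "$h1" = "$h2" ]; then echo "block @$off: skip"; else echo "block @$off: upload"; fi
done
```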

I should highlight that backup is only half the story – the Mozy client has a simple interface for browsing files and selecting those that need to be restored. There’s also a web interface with browsing based either on files or on backup sets and the Mozy FAQ suggests that Mozy can ship data for restoration using DVDs if required (for a fee).

Whilst Mozy has received almost universal acclaim, not everyone likes it. For me it’s close to perfect – an offsite copy of my data. It doesn’t do versioning, and it assumes that if I delete a file then after 30 days I won’t want it back; I think that’s fair enough – if I have a catastrophic failure I’ll generally know about it and can restore the files within that month. As for versioning, why not keep a local backup with whatever controls are considered necessary and use Mozy as the next tier in the backup model? The final criticism concerns Mozy’s potential to access files – that’s purely down to the choice of key. Personally, I’m happy with the idea that they can (in theory) see the pictures of my kids and browse the music/videos in my library – and, if I wasn’t, I could always use my own private key to encrypt the data.

I’m pretty sure that I’ll be moving to the paid MozyHome product soon but I wanted to try things out using MozyFree. Based on what I’ve seen so far, using Mozy will be money well spent.