Monday, April 18, 2011

Learning about low-end systems, the hard way

Apologies for the delayed (and rambling) update.  Have been very busy.  Following is an update on my experiments with installing various virtualization technologies.  The common theme: the video card on the older box isn't recognized by any of the installers.  I believe this is related to the removal of the frame buffer as a default device on many install disks; Ubuntu is only now adding it back.

The issue with Proxmox 1.7 turned out to be the video driver. The built-in video on the motherboard wasn't recognized by Proxmox. I got around this by putting the hard drive in a newer computer (have I said that I really like BlacX?), installing there, and moving the drive back to the original computer.

CentOS 5.5 just doesn't like my boxes, either of them. The install (net or DVD based) completes successfully but, upon reboot, hangs when udev starts up. I'm probably missing a boot option or two. Again, it's more work than I care to do at this point.
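
If I ever revisit it, the usual first step is to boot with fewer frills so the hang point is visible, then try some generic workaround parameters.  The kernel version and root device below are just the stock CentOS 5.5 defaults, not something I've verified on these boxes, and the extra options are untested guesses on my part:

    # in /boot/grub/grub.conf, drop "rhgb quiet" so the udev messages stay visible,
    # then experiment with generic workarounds such as noapic / acpi=off
    kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 noapic acpi=off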

XenServer 5.6.1 installs nicely on the older hardware. One drawback is that the official management program (XenCenter) requires Windows to run. A decent alternative appears to be Open XenCenter. If I end up using this, I'll need to figure out how to load ISOs onto the server, as there's no upload tool like the one vSphere has.
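
For the record, the workaround I've seen suggested (haven't tried it on 5.6.1 yet) is to create a local ISO storage repository from the XenServer console and just scp the ISO files into it.  The directory path below is only an example:

    mkdir -p /var/opt/xen/iso_import
    xe sr-create name-label="Local ISOs" type=iso content-type=iso \
        device-config:location=/var/opt/xen/iso_import device-config:legacy_mode=true
    # copy ISOs into /var/opt/xen/iso_import (scp works), then rescan the SR:
    xe sr-scan uuid=<sr-uuid>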

Which brings me to a side topic: management software. One of the drawbacks of most commercial hypervisors is that you need Windows to run some sort of management software. For an all-Unix shop, this can have drastic effects on production networks (think of the infrastructure required to support that one Windows box). Fortunately, a number of non-Windows management options are available:

solution         advantages                       disadvantages
home grown       easy to customize                must be customized for each install;
                                                  extremely limited feature set without
                                                  a large investment of time
vSphere          I'm familiar with it             requires a Windows box;
                                                  requires moderately powerful hardware
XenCenter        similar to vSphere in function   requires a Windows box
Open XenCenter   doesn't require Windows          somewhat limited feature set

What each needs most:

solution         feature
vSphere          a non-Windows version of vSphere
XenCenter        a non-Windows version of XenCenter
Open XenCenter   a built-in means for uploading ISOs into local storage

The delay in posting was mostly caused by a hardware failure.  I'd been wanting to move the house ESXi server off of the main box and run it on a smaller system.  For this purpose, I had purchased an eMachines EL-1352-07e.  It's a 64-bit, dual-core AMD system with 4GB of memory and a 300 gig hard drive.  I successfully modified the ESXi install disk (I'm getting good at this) and moved the VMs onto the new server.

To be on the safe side, I didn't erase anything from the old server, deciding to run the new server for three days, just in case of a failure.  Three days went by without a hiccup, so I downloaded and installed Fedora 14 (on the old box) with the idea that I would experiment with KVM.  That's when karma stepped in.  When I attempted to connect to the new server with the vSphere client, the connection would time out.  Checking the console, I discovered that it was frozen.

My only recourse was to hold the power button in to trigger a hard reboot.  The system returned to normal operation.  About two hours after that, the console froze again.  Then again, after about 30 minutes.  This time, the system complained about a corrupted file system and PSOD'd.

After a couple hours of panic (I'd erased the old server, the new one had a bad file system, and the last backup was done over a month ago), I remembered that ESXi sets up a number of partitions on the hard drive (the OS is separate from the datastore). I started researching what could be done to pull the VMs off of the corrupted disk.
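
The useful detail is that the datastore lives on its own partition, so pulling the drive and attaching it to a Linux box (the BlacX again) lets you see it directly.  Something like this, with the device name being whatever the drive shows up as:

    # the VMFS datastore shows up as partition type "fb" (VMKCORE is "fc")
    fdisk -l /dev/sdb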

The short version of a two-week-long story is that the VMs are now running on the old server, without any loss of data.  The month-old backup was not needed.  Along the way, I discovered a number of tools that aided in the recovery or just made things interesting:
  • vmfs-fuse, part of vmfs-tools, allows you to mount VMFS-formatted hard drives under Linux
  • qemu-img allows you to convert VMs to other formats (not just QEMU's own)
  • vde provides a distributed (even over the Internet) soft switch
For a while, I had the VMs running under KVM on my workstation. vmfs-fuse allowed me to mount the original datastores and qemu-img allowed me to convert the VMs to QCOW2 format. However, qemu-img could not fold ESXi's snapshots into the conversion, so the converted images only contained data even older than the backup.
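
For anyone wanting to do the same, the rough sequence looked like the following.  Treat it as a sketch: the device name, paths, and VM name are placeholders, not what's actually on my drives.

    # mount the VMFS datastore partition found with fdisk (vmfs-tools mounts it read-only)
    vmfs-fuse /dev/sdb3 /mnt/vmfs

    # convert a guest's VMDK to QCOW2 for use under KVM
    qemu-img convert -f vmdk -O qcow2 /mnt/vmfs/myvm/myvm.vmdk /var/lib/libvirt/images/myvm.qcow2

    # a vde soft switch with a KVM guest plugged into it
    # (the "kvm" wrapper name is distro-specific; on Fedora it's qemu-kvm)
    vde_switch -s /tmp/vde.ctl -daemon
    kvm -m 1024 -drive file=/var/lib/libvirt/images/myvm.qcow2 -net nic -net vde,sock=/tmp/vde.ctl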

So, for now, the VMs are back on the old server, running under ESXi. They'll stay there at least until the "Build an Open Source Cloud Day" on the Friday of SouthEast LinuxFest (SELF). Hopefully, I'll learn a bit more about deploying/managing Xen servers (Xen appears to be the hypervisor currently supported by CloudStack) then.
