Note: This is going to be both technical and long. If you find it heavy going, feel free to skip it, unless you run an Ubuntu Linux system other than the current development version (13.10, Saucy Salamander) with an ext4 filesystem. In that case, upgrade your copy of e2fsprogs immediately to version 1.42.8 or above. You can obtain it here; extract it into /sbin/ and follow the instructions in the INSTALL file. You should also dig out any liveCD/liveUSB media you have hanging around that fit this description and re-burn them with fresh software.
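If you're not sure which version you're running, checking takes one command. This is a sketch, not part of the original post; the `dpkg-query` line assumes a Debian/Ubuntu system.

```shell
# Print the version of the e2fsck you actually have on the path
# (anything below 1.42.7 carries the offline-resize bug discussed here):
e2fsck -V

# On Debian/Ubuntu, the packaged version (assumption: dpkg is present):
dpkg-query -W e2fsprogs
```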
So, I lost my /home partition, and with it the master boot record and all the data in it. Sucks. This threw me into a world of pain: the best part of a week’s worth of systems administration and increasingly brutal anxiety. Fortunately, I could retrieve some stuff from the cloud and some from backups, but far from all of it. Of course, this happened the week the office migrated its mail server, so there was a nontrivial risk that I wouldn’t be back online in time to redownload all my work e-mail before Fasthosts threw it away. Jesus, what a week.
What happened? The narrative is as follows. I was very short of disk space in /home and wanted to export a lot of photos from a camera. So I decided to shrink my Windows partition, where there was plenty of space, and re-allocate the freed space. For reasons, the partition table has the Windows partition on sda2, then Linux swap on sda6, and then the Linux /home on sda5, so the sequence of operations would be: shrink /windows/ by 3GB, move /swap left, move /home left, and grow /home by 3GB. I began by using the Windows partitioner to shrink the Windows partition, and then rebooted the computer from an Ubuntu 12.04 LiveUSB stick.
I ran GParted, dealt with the swap, and then began the move of /home. As I had always been taught that you *never* edit a Linux filesystem online, and GParted is designed to make that difficult, I did the move offline. Partway through the operation, GParted and the entire system locked up hard: the screen froze, none of the TTYs were accessible, SysRq didn’t do anything useful, restarting X didn’t work, and no hard disk or network activity was observable. With considerable misgivings, I forced a reboot.
On restarting GParted, I chose to run e2fsck on the volume. Per the documentation, e2fsck should start by recovering the journal and replaying the journalled actions. It took all night to run, which came as no surprise, as I had used fsck before. In the morning it reported the filesystem as clean, resized, and showing 30GB of free space rather than the 5 or so there should have been. Clearly, something very bad had happened.
It was around this time I learned that there was a new version of e2fsprogs (and therefore of e2fsck). Thanks, Duane! Running it told me that the superblocks were corrupt. Using testdisk, I was able to find valid backups. Using the backup superblocks, e2fsck seemed to fix a lot more errors, but eventually, partway through listing multiply-claimed blocks, it began to print the number 16,777,215 over and over again. Left to its own devices, it would do this until the terminal program ran out of memory and the kernel killed e2fsck. Exploring the filesystem with debugfs, I found that the mystery number came up in various places. Around this time, various people noted that it’s 2^24-1, which sounded significant. Thanks, all of you.
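For anyone facing the same thing, the backup-superblock recovery goes roughly like this. This is a sketch run against a scratch image file so no real disk is at risk; the image size, block size, and block numbers here are made up for the demo, and on a real machine you'd point the last two commands at your damaged partition instead.

```shell
# Build a small throwaway ext4 image (16MB, 1k blocks) to practise on:
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
mkfs.ext4 -q -F -b 1024 "$img"

# Simulate a trashed primary superblock (at 1k block size it lives in
# block 1, i.e. bytes 1024-2047):
dd if=/dev/zero of="$img" bs=1024 seek=1 count=1 conv=notrunc 2>/dev/null

# mke2fs -n prints where the backup superblocks would be WITHOUT
# writing anything; with 1k blocks the first backup is at block 8193.
mke2fs -n -F -b 1024 "$img"

# Point e2fsck at the backup, then verify with a normal forced check:
e2fsck -y -b 8193 "$img"
e2fsck -f -y "$img"
rm -f "$img"
```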
I redirected the e2fsck output to a file, which reached 499MB before I stopped it, and used a grep pipeline to find the last inode (2937350) before it began to pour forth the 16,777,215s. I told debugfs to dump its contents to a file, which reached 28GB rather quickly. When I asked on the linux-ext4 mailing list, the primary developer of ext4, Theodore Ts’o, suggested I run “debugfs stat <2937350>“. He also very kindly offered to look at the fs metadata, something hindered by Virgin Media’s war on large uploads of any kind at the behest of Peter Mandelson. (Seriously, guys, you can come out now, he’s gone!) Eventually, sftp worked.
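If debugfs is new to you, it can be driven non-interactively with `-R`, which runs a single command and exits. A sketch on a scratch image, since poking a real device is the whole hazard here; the root inode `<2>` stands in for the interesting `<2937350>` on the broken filesystem.

```shell
# Throwaway ext4 image to poke at safely:
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
mkfs.ext4 -q -F "$img"

# -R runs one debugfs command; angle brackets mean "by inode number".
# stat prints the inode's type, times, link count and block map:
debugfs -R 'stat <2>' "$img"
rm -f "$img"
```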
The dumped file turned out to contain tens of thousands of blocks with the id 16777215, and the same number written into their content. Killing the dump and searching again (by this point I’d taken a low-level image of the broken fs) showed that an identical inode existed 8 inodes further on, and then more at the same interval. The clones all had the same, crazily wrong, modification times. I realised at this point that these symptoms were very, very similar to those described here, and even more similar to those described here.
If your ext4 filesystem has the resize_inode flag set, it is capable of being resized online. Mine had it set. As far as I know, this is because the Ubuntu 12.10 installer made it that way. In e2fsprogs < 1.42.7, there was a bug that could wreck the filesystem if you did an offline, i.e. unmounted, resize. Ubuntu packages 1.42.5 right up to 13.04, the last time I checked. The bug is not a problem if you’re doing an online, i.e. mounted, resize. But most Linux users are strongly trained to NEVER EVER EVER touch a mounted filesystem, some tools (like GParted) are designed to enforce this, and documentation and support resources in general will tell you so over and over again.
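Checking whether your own filesystem carries the flag is quick. Again a sketch on a scratch image (on a real system you'd point `dumpe2fs -h` at the partition, e.g. /dev/sda5); note that current mke2fs sets resize_inode by default, which is rather the point.

```shell
# Make a throwaway ext4 image with default features:
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
mkfs.ext4 -q -F "$img"

# -h dumps just the superblock header; the feature line shows whether
# resize_inode is among the enabled features:
dumpe2fs -h "$img" 2>/dev/null | grep 'Filesystem features'
rm -f "$img"
```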
This is what James Reason would call the Swiss cheese accident model. A succession of issues, minor in themselves, coincided. Had any one of them not cooperated, nothing would have happened, and safety would have been maintained. But the holes in the cheese lined up and disaster struck. Ubuntu’s installer offered ext4 and (silently) set the flag; GParted insisted on not using the flag; Ubuntu didn’t keep e2fsprogs up to date and still doesn’t; the resize2fs bug bit me.
So, in the best Reason tradition, what have we learned?
Recommendation 1: back up your stuff
Obviously, back up your stuff, yadda yadda. Unfortunately, backing up takes effort that doesn’t have a vaguely imminent payoff and uses time that other people want from you. The various superduper one-click solutions put all your stuff in Amazon EC2’s Elastic Block Store, the bit of EC2 that fails most often, and do you really want to put more of your stuff in the NSA cloud? I should do more backup, but it’s not the solved no-brainer people make it out to be.
Recommendation 2: The menace in your sysadmin bag
Here’s something more interesting. Precisely because I intended to do a supposedly “safe” offline resize, I didn’t boot the installed system; I used a LiveUSB stick, so as to avoid mounting the /home partition. But LiveUSB sticks are usually read-only, and therefore they begin to go out of date as soon as they’re made. This goes double for LiveCDs.
You can keep persistent information on an Ubuntu LiveUSB stick, but it’s pretty awful. After putting off recovering my data and deciding to recover the computer first, I had a hard time with reinstalls from a stick with a substantial (16GB stick, 5-2GB casper) persistent partition. It is so slow as to be unusable, and I suspect various problems I had were down to this.
So it’s unlikely that any live media you’ve got lying around are even close to up to date. Even if they are updateable, you’ll probably only use them when something breaks or you need to investigate someone’s virusy Windows XP box. Unmanaged zombie code is lurking on those devices. In this case, the installed distro had the bug too, but it should make you think.
Recommendation 3: Ubuntu, about that package
I filed a bug.
I asked on a list and the guy who made ext4 answered. I don’t think the same kind of support is available for, say, NTFS. Be afraid of your liveUSB stick. Meanwhile, testdisk sees all my files but can’t copy files out of an ext4; photorec might do it, but the fact they’re encrypted might be annoying.