My First btrfs Failure and the State of Ubuntu's btrfs Support


I had my first btrfs file system failure this week. The root/home btrfs volume on the Intel X25-M SSD in my laptop became unmountable on Sunday night. It had been functioning just fine with no signs of any problems, but it wouldn’t mount after a reboot. I’ve really been expecting this sort of thing to happen. There has been quite a lot of improvement in btrfs’s stability since I formatted this partition.

I tried to run an fsck from the console of the Ubuntu installer, but it failed. I don’t recall the error message, but after the error came up, all CPU and disk activity ceased. I wasn’t terribly interested in recovering the file system anyway; I run a daily backup to an ext4 partition on my laptop’s second hard drive.

Installing a fresh copy of Ubuntu and restoring my data and configuration only took about two hours. That includes the time it took to download and burn a Xubuntu 11.04 installation disc.

I also keep a backup of the output of dpkg --get-selections. You can pretty much just feed that back into dpkg --set-selections, but on this restore I chose not to do that. I’ve been accumulating a lot of package cruft over the last few years, and I figured this was a good excuse to do some spring cleaning.
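For anyone unfamiliar with this trick, the backup-and-restore cycle looks roughly like this (the file name is just an example):

```shell
# On the old install: save the list of installed packages.
dpkg --get-selections > package-selections.txt

# On the fresh install: mark everything in the list for installation,
# then let apt actually install whatever is missing.
sudo dpkg --set-selections < package-selections.txt
sudo apt-get dselect-upgrade
```

The nice part is that the selections file is plain text, so pruning cruft is just a matter of deleting lines before feeding it back in.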

The state of btrfs support in Ubuntu’s installer

I am very impressed with the way Ubuntu is handling btrfs. I was worried that the installer would still mount the top-level volume as /, but it doesn’t. It creates @ and @home subvolumes and mounts them as / and /home. I like this convention of using the @ sign; I was previously using a subvolume called root for / and one called home for /home.
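The resulting /etc/fstab entries look something like this (the UUID is a placeholder for your actual btrfs partition):

```
# /etc/fstab -- mount the @ and @home subvolumes rather than the top level
UUID=<btrfs-partition-uuid>  /      btrfs  defaults,subvol=@      0  1
UUID=<btrfs-partition-uuid>  /home  btrfs  defaults,subvol=@home  0  2
```

Because / and /home are just subvolumes, the top level of the file system stays clean and every snapshot lands beside them instead of underneath the running root.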

This saved me a lot of recovery time. I was expecting to have to play a game of “musical snapshots” to get things organized like this. Organizing things this way makes it easier to boot off of a previous snapshot.

I did not let the installer format my btrfs partition, though. I wanted to correct a mistake I felt I made the last time I formatted that partition. mkfs.btrfs defaults to using mirrored metadata. I think that is just a waste of performance and write cycles on an SSD. I’ve already had two instant, total SSD failures, and that seems to be a very common failure mode for solid-state drives. I don’t think I’m likely to have to deal with bad sectors inside the metadata areas.
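Opting out of mirrored metadata is a single flag at format time. A sketch, with /dev/sdX standing in for the SSD partition:

```shell
# -m single: keep one copy of metadata instead of the mirrored default.
# WARNING: destroys everything on the target partition.
mkfs.btrfs -m single /dev/sdX
```

On a spinning disk the mirrored default is cheap insurance against bad sectors; on an SSD that fails all at once, the second copy mostly just doubles metadata writes.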

I don’t know if the Ubuntu installer would have created the file system with mirrored metadata. I didn’t think it was worth taking the time to find out.

I’m not giving up on btrfs

I don’t think I have had a complete and total non-hardware-related file system failure in over ten years. That failure was with a reiserfs file system, and it almost completely put me off of ever using reiserfs again. I have only used reiserfs once since then, and that was to save space with tail packing.

I knew what I was getting into with btrfs, and I know things will continue to get better. I’m not willing to give up on having regular snapshots to pull files out of. I’ve gotten into the habit of just editing my “only” copy of config files without making a quick backup copy first. It is just easier to yank a copy out of a recent snapshot on the occasions when I actually do screw something up than it is to make backups that I don’t end up needing.
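Taking a snapshot and pulling a file back out is about as simple as it sounds. A sketch, assuming the top level of the btrfs file system is mounted at /mnt/btrfs (paths and names are illustrative):

```shell
# Take a read-only snapshot of the @home subvolume, named by date.
sudo btrfs subvolume snapshot -r /mnt/btrfs/@home /mnt/btrfs/@home-$(date +%Y%m%d)

# Recovering a clobbered config file later is just a copy out of the snapshot.
cp /mnt/btrfs/@home-20110501/pat/.bashrc ~/.bashrc
```

Snapshots are cheap copy-on-write clones, so taking one before editing costs almost nothing in either time or disk space.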

I’m also extremely happy that I can boot off of snapshots of previous installs. I wasn’t terribly happy with my upgrade to Ubuntu 11.04. I always take a manual snapshot before big updates like that, so it was trivial to boot off of my 10.10 snapshot. I was then able to chroot into the 11.04 volume and run some apt-get upgrades in there so I could watch for an update to a package relevant to the issues I was having.
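After booting the 10.10 snapshot (by pointing GRUB’s rootflags=subvol= at it), getting into the 11.04 volume is an ordinary chroot. A sketch with illustrative device and mount-point names:

```shell
# Mount the 11.04 root subvolume and bind in the virtual file systems.
sudo mount -o subvol=@ /dev/sda1 /mnt/natty
sudo mount --bind /dev  /mnt/natty/dev
sudo mount --bind /proc /mnt/natty/proc
sudo mount --bind /sys  /mnt/natty/sys
sudo chroot /mnt/natty /bin/bash

# Inside the chroot, apt works against the 11.04 install:
apt-get update && apt-get upgrade
```

This lets you keep a broken install patched and watch for the fix to land, all while running day to day from the known-good snapshot.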
