Self-Hosted Cloud Storage: Changing the Plan and Scaling Back

| Comments

A few weeks ago I decided on precisely how I was going to implement my own self-hosted cloud storage, and I even wrote up a blog entry describing what I was planning to do. I discovered Seafile the very next day, and I knew that I was going to have to throw my entire plan out the window.

Out of all the options I’ve investigated so far, Seafile is the most Dropbox-like, and it can also be configured to keep a history of changes to your files. That means it can be used as a replacement for a significant portion of my backup plan.

Scaling back

I did make an attempt at syncing my entire home directory to Seafile. It was an epic failure. Seafile seemed to grow exponentially slower as it indexed more and more of my data. It took about six hours to index the first 19 GB. Eight hours later, and it was barely past the 20 GB mark.

Seafile slows down as the number of files in the library grows. My home directory has over 275,000 files, which seems to be about 200,000 files too many. Most of the solutions I investigated seem to have similar problems, so syncing my entire home directory is probably not going to be a viable option.

My Seafile libraries

Seafile stores your data in “libraries”, and it lets you sync existing directories up to these libraries. I ended up creating a library for each of the directories that I currently back up on a daily basis, and I also created a more generic library that I called “Seafile”. I’m using the “Seafile” library much like I’ve been using my “Dropbox” directory.

These have been syncing for over a week now, and except for a few problems, I am very happy with the results. None of these libraries have more than about 25,000 files.

Some things to watch out for with Seafile

I ran into two problems, but they are both easy to rectify.

Don’t auto-sync directories that change constantly

I am currently syncing approximately 6.5 GB of data. Four or five days after moving this data into my Seafile libraries, I nearly ran out of space on my Seafile virtual server. Seafile was eating up over 60 GB of space, and it had used a comparable amount of bandwidth.

I was syncing my Pidgin log files and configuration, and also my ~/.config directory. Most of the bandwidth and history space was being used by my constantly updating Pidgin logs and Google Chrome’s cookie database. When I disabled auto-sync on these libraries, the Seafile server dropped from a constant 200 to 300 KB per second of network usage to zero.

This was not a good permanent option, though. I definitely wanted to back up these two directories. Seafile’s command line tools don’t seem to have a way to force a sync on these libraries. I was able to cheat my way around that. I stole the URL from the “sync now” link in the local web interface, and I am querying it every 6 hours from a cron job.

seafile-manual-sync.sh cron job
#! /bin/bash

## dot-config
curl 'http://127.0.0.1:13420/repos/operation/?repo=61811848-0afd-45a2-982e-009193ac1a8e&op=sync'

## dot-purple
curl 'http://127.0.0.1:13420/repos/operation/?repo=09e7bda0-950f-4bc2-acb1-f8cfc5752010&op=sync'
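
The crontab entry driving it is nothing fancy; something like this works, assuming the script is saved as /home/user/bin/seafile-manual-sync.sh (the path is just an example):

## force a Seafile sync of both libraries every six hours
0 */6 * * * /home/user/bin/seafile-manual-sync.sh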

Server side garbage collection

You have to periodically stop the Seafile server and manually run the garbage collector. That was the only way I could get rid of the 60 GB of cruft my server had collected. If I were still accumulating 10 GB of useless data every day, this might be problematic.

It now seems to be growing at a much more acceptable rate, though, so I’m not too worried about it. I’ll keep an eye on things for a month or so; then I’ll have a much better idea of how much bandwidth and disk space Seafile is going to eat up on me.

Performance

I didn’t think to run a stopwatch until I was syncing my final directory. It was 5 GB of photos. I kept an eye on the throughput numbers in Seafile’s local web interface. It would sit up at 4.5 MB per second for a while, and then it would idle for a while. Sometimes it would sit in between somewhere for a while.

I do know that the entire process, from indexing to uploading all of my photos, averaged out to around 2.5 MB per second.

My Seafile control script

I put together a little script to make starting, stopping, and garbage collecting a little easier and safer. I was worried that I would have to run the garbage collector repeatedly, and I wanted to make sure I wouldn’t accidentally do that with the service running.

~/bin/seafile.sh on my Seafile server
#! /bin/bash

SEAFILE=/home/user/haiwen/seafile-server-1.7.0

## raise the open file limit before starting the server
ulimit -n 30000

case "$1" in
  stop)
    cd $SEAFILE
    ./seafile.sh stop
    ./seahub.sh stop
  ;;
  start)
    cd $SEAFILE
    ./seafile.sh start
    ./seahub.sh start-fastcgi
  ;;
  gc)
    ## the garbage collector can only run while the server is stopped
    cd $SEAFILE
    ./seafile.sh stop
    ./seahub.sh stop
    cd seafile
    export LD_LIBRARY_PATH=./lib:${LD_LIBRARY_PATH}
    ./bin/seafserv-gc -c ../../ccnet -d ../../seafile-data
  ;;
esac

My installation is running behind nginx for SSL support. If you’re not running behind a separate web server, you will need to change ./seahub.sh start-fastcgi to ./seahub.sh start.

It’s Great to Be Using a Mechanical Keyboard Again

| Comments

Late last year, I went shopping with my friend Brian to help him pick out a mechanical keyboard. We tried our local Fry’s, but they didn’t have a very good selection, so we hopped on 75 and drove down to Microcenter. They had a much better selection. We got to try out keyboards with almost every variety of Cherry switch, except the Cherry MX Clear switches, and even one keyboard with ALPS switches.

Brian didn’t end up taking home a keyboard that day, but he did decide which Cherry switches he preferred. He did some more research, thought about it for a few days, and he ended up ordering a nice keyboard with Cherry MX Black switches.

My Model M proudly sitting on my desk

I was hoping to like Cherry’s blue switches

Shopping for mechanical keyboards, and trying out Brian’s new keyboard had me really missing my old IBM Model M keyboard. The blue switches from Cherry are supposed to feel pretty similar to the buckling spring switches in the Model M, but they are supposed to be a little less stiff and a lot quieter.

They are certainly quieter, but I thought they felt absolutely terrible. The buckling springs just feel so much smoother. The transition at the mid-point of the key press on the Cherry MX Blue switches feels really awful. They just feel way too crunchy.

The Model M

I used my Model M for nearly a decade, right up until around 2006 when I retired my last desktop computer. I certainly missed it, but it was much simpler using a laptop at two different desks without having to dock it at two sets of keyboards and monitors.

Things changed a bit when I bought my current laptop. This laptop spends most of its time connected to a keyboard, mouse, and two monitors. I actually used my old Model M keyboard with this laptop for the first week. The cheap PS/2 to USB dongle that I had wasn’t up to the task, though. I’d see the keyboard LEDs flash every so often when the dongle reset itself, and some important Emacs key combinations just weren’t possible.

The two choices I had at the time were to either order a better USB dongle or unpack the Dell Bluetooth keyboard that I had in the closet. I opted to give the Bluetooth keyboard a shot, and I used it for three years.

It was a terrible keyboard. Just like all rubber dome keyboards, it felt pretty darn sloppy. Its space bar was also extremely loud; even louder than the Model M. I can’t believe I used it for three years. On the up side, at least it looked nice not having any wires on my desk.

My “new” Model M keyboard

I’m pretty sure the keyboard Chris is using right now is my old Model M. I was tempted to take it back for continuity’s sake, but I’d hate to steal her keyboard. Instead, I am using my spare Model M, and it isn’t perfect. The cord has a tear in the jacket, it has two wrong key caps, and one key cap is completely missing.

My Model M Keyboard's birthday, April 17, 1993

I saw three Model M keyboards when I was visiting my parents, so replacement key caps aren’t too far away. I’m thinking that I’d like to lop off the cord, stick the USB dongle inside the keyboard, and convert this Model M to USB. Then I could buy or make a nice cloth jacketed cable for it.

Wrong cap on the 3 key
Wrong cap on the decimal, no cap on the 3
Cut jacket near the base of the cable

Speaking of USB dongles, I ended up buying the big, beefy dongle recommended by ClickyKeyboards.com. It is quite gigantor, but it is doing a splendid job. The keyboard hasn’t reset once, and every useful key combination that I can think to try has worked. I couldn’t ask for anything more.

Wireless Model M?

I keep talking about building a wireless Model M keyboard, but I just haven’t had enough gumption to take on the project. It turns out that the controller from a particular Logitech wireless keyboard for the Wii has a matrix that almost perfectly matches the matrix in the Model M! The only mismatch is the slash and asterisk on the number pad; they are transposed. I could certainly live with that.

I saw this hack in a forum post somewhere, but I can’t seem to find it anywhere. It even had diagrams showing the layout of the matrices of both keyboards.

The thing that has been holding me back is the lack of availability of the correct Logitech keyboard. It hasn’t been manufactured in years, and it is pretty hard to find. Especially at a reasonable price.

At the moment, I am just thrilled to be typing this on a proper keyboard.

My Model M keyboard in front of my custom dual monitor stand

The State of Hard Drive Manufacturers in 2013

| Comments

I just had the displeasure of ordering four SATA hard drives. They’re going to find a home in a 1U server that I’m planning to purchase in the next couple of weeks. I ordered the hard drives early because I need one or two large, empty drives so I can make some major partitioning changes to my home file server.

I had some simple requirements for the drives in this new server. They had to be large, reasonably fast, RAID friendly, and they needed to fit in the 1U chassis. It would have been nice to be able to use some server-grade 10K or 15K RPM disks, but they just don’t have the capacity I need, and they are quite a bit more expensive. The best fit for this server is 2 TB or 3 TB 7200 RPM desktop drives.

In the distant past

In the latter days of IDE hard drives, I almost exclusively used drives manufactured by Western Digital. I had reasonable luck with them, and their RMA process was very convenient. They didn’t make you jump through hoops to prove that you had a bad drive, and they would cross ship your replacement drive for free.

I once had a drive in my home desktop machine’s RAID 5 fail, and I picked up a Maxtor 160 GB IDE drive as an emergency replacement. The drive worked fine for a few months, but then it started dropping out of the RAID due to read and write errors.

Maxtor wouldn’t RMA the drive unless I ran their diagnostic software. Since the drive was plugged into a 3ware RAID controller, their software couldn’t see the drive. I had to move it to one of the motherboard’s IDE ports. Their software miraculously claimed to have fixed the drive. I had to repeat this process three more times over the next several months before it finally admitted the drive was bad. This made me really start to appreciate Western Digital’s RMA policy.

Early Western Digital SATA drives were also quite handy, since they had both SATA and legacy IDE-style power connectors. In those days, most power supplies only had one or two SATA power connectors, so this was terribly convenient.

I continued to use Western Digital’s SATA drives right up until about three or four years ago. I picked up some of their 1 TB, 7200 RPM hard drives to use in a Linux software RAID 10. Those drives liked to randomly drop out of the RAID due to read timeouts, which seems to still be a common problem with many of Western Digital’s drives.

Switching to Seagate

I replaced those 1 TB Western Digital drives with equivalent models from Seagate, and I’m still using most of those drives today. I did have to RMA one of them, though. The process isn’t quite as slick as Western Digital’s used to be, but it wasn’t insane like Maxtor’s either. Seagate’s drives also came with five-year warranties, which beat Western Digital by two years.

Things are looking pretty ridiculous this year

Seagate doesn’t seem to have any drives with a five-year warranty anymore. The drives in the capacities and speeds I was shopping for all come with one-year or two-year warranties. Here’s where it gets stupid: drives with the same model number can have either a one-year or a two-year warranty. They may even have a different number of platters and heads, and entirely different performance characteristics.

In the future

I did buy four Seagate drives today, but I’m not sure I will be buying any more drives from Seagate in the near future. I’m not too enthusiastic about the shorter warranty periods. I have four or five 1 TB Seagate drives in the room with me here, and many of them will still be under warranty after the warranties on these new drives expire.

I’m even less happy about the fact that I have no idea what sort of drives are going to arrive at my door later this week. I don’t have any solid data, but the Internet is leading me to believe that I may get some combination of two-platter and three-platter drives. The two-platter drives are slower, and they may not be as well made as the three-platter drives.

It sure feels like a conundrum.

Two year warranty Seagate drives

Update: The drives arrived today. If I am decoding the serial numbers correctly, one is the superior 6 head model. The other three are the slower two-platter, four-head model. All four were made in Thailand, and I am pretty sure that is good news.

Update: According to Backblaze’s statistics, Seagate drives may not have been the best choice.

Self-Hosted Cloud Storage: The Plan

| Comments

Update: Shortly after writing this entry, I discovered Seafile. I had to scale my plans back. Most of the software I looked at did not scale well to the hundreds of thousands of files in my home directory, especially when some of those files are changing several times each minute. I have been using Seafile for a few weeks, and so far I am very happy with the results.

Almost two months ago, I looked into the idea of replacing Dropbox. No single piece of software was able to meet my needs. Since then, I’ve put some thought into what my goals are. It looks like I will be able to meet the majority of my requirements by combining two or three different pieces of software.

The goals

I want to synchronize the contents of my 25 GB home directory between multiple locations. I would like to be able to replace my existing backup plan with one of these synced copies. That means at least one location has to have file versioning or snapshotting. If I want to be able to eliminate my existing backup plan, at least one of these copies has to be stored securely off-site.

I need a convenient way to share files. I can easily share entire directories using Dropbox, even if the recipient only has a web browser.

I am most definitely addicted to Dropbox’s Android app. Automatically uploading photos is extremely handy. I will most definitely need to be able to replicate this functionality.

They all fall short of the goals

I could most likely meet all of these goals using Dropbox. The problem is that Dropbox encrypts your data on the server side, and they have all the keys needed to look at your data.

SparkleShare and ownCloud can each meet the first two goals, but not without problems. You can encrypt your SparkleShare repository on the client-side, but then you won’t be able to share files. ownCloud has the automatic photo uploading covered on Android, but there are no plans to add client-side encryption to ownCloud.

BitTorrent Sync can easily and securely move my data around, but the data it moves around is stored unencrypted on each end. That means that you have to put a lot of trust in each endpoint.

My actual plans

I should be able to accomplish most of my goals with a combination of BitTorrent Sync and a snapshot-capable file system, like ZFS or btrfs.

Sync and back up my home directory

To start things off, I plan to synchronize my laptop’s entire home directory with my home file server. This will require some effort, and it will be a bit of a challenge. My file server does not currently have the capability of taking snapshots, and I don’t power it up 24/7 anymore.

I plan to have the server wake up long enough to sync up with my laptop once each day. After the syncing is finished, the server will record a snapshot and shut itself down. The hard drives will only have to spin for a few minutes each day, and it will still be easy to keep my home office cool during the upcoming Texas summer.
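
A rough sketch of the post-sync part of that nightly job, assuming the synced copy lives in a ZFS dataset named tank/home; the sleep is only a placeholder for a real "is the sync finished?" check:

#! /bin/bash

## placeholder: give BitTorrent Sync time to catch up with the laptop
sleep 30m

## record today's snapshot of the synced home directory
zfs snapshot tank/home@$(date +%Y-%m-%d)

## power the server back down until tomorrow
shutdown -h now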

Around seven years ago, I replaced a small laptop and a bulky desktop computer with just a single laptop. I haven’t had a desktop computer since then, and I don’t know if I ever will again. Late last year, my friend Brian very nearly convinced me to build a desktop computer for myself. Having good, solid sync capability for my entire home directory would make it much easier to revert back to using two devices again.

Sharing files

I had originally planned to use ownCloud for this, but I thought of a simpler solution. I am going to create a directory called “Public” on my laptop, and I will use BitTorrent Sync to push it to a web server. That should cover nearly all of my random file sharing needs.

BitTorrent Sync has its own built-in, secure file-sharing system, but the recipient needs to have the BitTorrent Sync client installed in order to receive the files. This option isn’t ideal, but I wouldn’t be surprised if it ends up coming in handy every now and again.

Camera uploads

This is the other reason I had originally planned on supplementing BitTorrent Sync with ownCloud. The folks over at BitTorrent Labs have had an ARM build of their client available since launch, and there’s already an app in the Google Play store to install it for you.

I can probably get away with syncing the photos directly to a web server. The Titanium Backup backups of my phone and tablet are probably using most of my Dropbox space, and I can just sync those backups elsewhere as well.

Something is missing

That something is the off-site backup. I don’t have an ideal solution for this part of the problem yet. I will leave my existing rdiff-backup and Duplicity backups in place until I figure out a better plan.

What is the first step?

I will finally be home, in my own office, sitting in my own chair at my own desk in just under one week. When I get there, one of the first things I’ll have to do is find some spare disks to temporarily hold all the data on my file server. Then I can tear down the existing RAID 6 array and replace it with a ZFS RAID-Z2 volume.

That is actually the hardest part. The rest should be easy, so stay tuned!

Related: “Self-Hosted Cloud Storage Solutions: ownCloud vs. SparkleShare vs. BitTorrent Sync vs. Seafile” (/2013/04/self-hosted-cloud-storage-solution-owncloud-vs-sparkleshare.html)

zsh-dwim: Small Update For apt and dpkg

| Comments

I’ve been trying to remember to implement items from my zsh-dwim notes in my spare time. I managed to implement some apt and dpkg helpers this week.

add-apt-repository -> apt-get update

Every six months I upgrade Ubuntu. I always end up having to add at least one or two PPA repositories, using the add-apt-repository command, every single time I upgrade. Just about every add-apt-repository command is followed by an apt-get update. This transformation was definitely a no-brainer.
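
In other words, a command like the first one below (the PPA name is just a made-up example) gets turned into the refresh that always has to follow it:

add-apt-repository ppa:example/some-ppa    ## press control-u, and zsh-dwim swaps it for:
apt-get update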

which -> dpkg -S -> apt-cache show

This one was a little less obvious, but probably just as useful. Often enough, I’m interested in finding out exactly which package the file returned by the which command belongs to. That usually means either typing dpkg -S and pasting in the path, or recalling the previous command from history and manually adding the dpkg -S and the required backticks.

That just seems like too much work. Now I just hit control-u, and the which command is converted directly into a correctly formatted dpkg -S command.

I ended up taking it one step further. Hitting control-u on a dpkg -S command will transform the command into a slightly buggy apt-cache show $(dpkg -S ) command. I’m not terribly happy with how the end result turned out; it ends up generating a rather verbose command.
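
As a hypothetical example, with zsh standing in for whatever binary I’m curious about, the chain of transformations produces commands along these lines:

which zsh                                ## prints the path to the zsh binary
dpkg -S $(which zsh)                     ## reports which package owns that path
apt-cache show $(dpkg -S $(which zsh))   ## the generated follow-up; verbose and a little buggy, as noted above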

You can find zsh-dwim at GitHub.

Invest in a Quality Office Chair

| Comments

Buy a good, comfortable, sturdy chair now rather than later

An Aeron would have been a pretty big expense for me fifteen years ago when I bought my first office chair. Just a few years later, though, the price wouldn’t have been too big of a deal. I should have bought one a decade ago. I’d have been sitting more comfortably the entire time, and in the long run it would have been less expensive than buying a new, cheap chair every few years.

Not everyone finds the Aeron to be a comfortable chair. Fortunately there are plenty of other good chairs to choose from. I’d definitely recommend spending some time sitting in them before you buy one, though.

My Progression of Chairs

At some point when I was in school, probably shortly before starting high school, my parents bought me a wooden office chair like this one. I remember them saying they got it at a yard sale for five or ten bucks. I would guess that I started sitting in that chair sometime around 1990, and I continued to sit in that chair through most of high school. I don’t remember it being uncomfortable, but very little is uncomfortable when you’re still under 140 pounds.

At some point that chair finally broke, and I upgraded to another wooden office chair, like this one. This one once belonged to my grandfather, and it is still at my parents’ house today.

I replaced it with a random, cushioned, high back executive style chair at some point after I got out of high school, most likely sometime around 1997. Over the next decade or so I bet I bought a half dozen of these chairs, each for between $150 and $250. They weren’t well made, and I often left them behind or gave them away when I moved.

My actual, current workspace, 2009-present
Just like my first chair, 1990-1994
Much like my grandfather's chair, 1994-1997
Random, cheap high back chair, 1997-2009
Same cobalt blue pellicle as my first Aeron, 2009

The story of my Aeron Chairs

I finally convinced myself to buy an Aeron chair in the latter part of 2009. I ordered a used “Cobalt Classic,” fully adjustable model through a vendor on eBay for around $450 shipped. The chair was in pretty good shape, and the label said it was manufactured in 1997. The sides where the arms meet the chair were pretty scuffed up, presumably from twelve years of adjusting, and it was a bit squeaky when leaning back.

About a year later, my father had a heart attack. I knew I’d be spending some time up there, so I drove the 900 miles up to my parents’ house with the Aeron in the back seat. I was up there for about two months, and in that time we ended up ordering another Aeron for my father, and I left my “Cobalt Blue” Aeron behind for my mother.

Shortly before I left to return home I found another chair on eBay for myself. This one had the more common “Carbon Classic” color scheme, which is a black chair with a dark gray weave in the pellicle. This one was “new”; I believe it was some sort of floor model or something, fully adjustable, and has the newer levered locking mechanisms on the arms.

I have been visiting my parents again for the last six weeks or so, this time for my father’s back surgery. I am in my old bedroom, at my 20-year-old desk, sitting in that same “Cobalt Blue” Aeron chair. I’m not exactly sure why, but it no longer has a squeak when I lean back.

I still miss the blue chair a bit. It was a very nice-looking chair; the “Carbon Classic” is pretty boring in comparison. I’m pretty certain it is the same color as Sheldon Cooper’s Aeron chair.

Why the Aeron chair? Aren’t there better chairs now?

I’m certain there are better chairs available now. I can’t say I’ve ever sat in any of them, though. The major selling point of the Aeron was the fact that I’d spent some time sitting in one in the past and I already knew I liked it. After that, the price and build quality of a used Aeron made a lot of sense.

My “Carbon Classic” Aeron chair has been in my possession now for almost four years. It looks and feels exactly the same as it did when I first got it. I expect to be able to say the same thing ten years from now.

Some people dislike the Aeron’s pellicle seat. More specifically, they don’t like the hard edge at the end of the seat. Some people say it cuts off the circulation in their legs. I’m a pretty heavy guy, and this has never been a problem for me. In fact, I don’t think I’ll ever buy an office chair with a cushion again.

The breathable pellicle seat is more comfortable on warmer days, and it even helps prevent swamp ass.

shstack: Persistent and Easy to Use Stacks Shared Between Shell Sessions

| Comments

This little Perl program has been sitting in my ~/bin directory for almost a year now. It has quite a few features that I don’t use very often, and some of those rarely used features seem to have a lot of bugs. Even so, I figured that it was time to throw it up on GitHub.

Where did shstack come from?

Fairly often, I get into a situation where I am performing different parts of the same task in two or more terminal windows. A lot of times I wish I had easier access to path and file names in the other window. I thought I’d write up a little script to automate the process.

I very quickly realized that it would be handy to be able to work with more than one file at a time, so emulating pushd and popd made a lot of sense. Since I was already planning to store the stacks in a file, I decided that they might as well be persistent.

What can you do with shstack?

I store the DNS names of some important servers in a stack called servers. I can use shstack’s built-in for command to ping them all to see if they are alive.
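
Roughly speaking, that looks something like this; the push subcommand name is illustrative, but the %servers% placeholder is how shstack’s for command substitutes each item on the stack:

## add a couple of hosts to the servers stack (subcommand name illustrative)
s push servers web1.example.com
s push servers web2.example.com

## ping every entry on the stack once
s for "ping -c 1 %servers%"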

Why reinvent the for loop?

I have two reasons. The first is shstack’s “safe” for loop (sfor). The sfor loop works like a regular for loop, but it removes each item from the stack when the command is completed successfully. You might use this if you wanted to rsync a directory to multiple servers. That way, if there are any errors, you can easily fix the problem and run the command against the problematic servers again. I didn’t end up using this as often as I expected, though.
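
Assuming sfor is invoked the same way as the regular for command, that rsync case might look something like this; each server stays on the stack until its rsync exits cleanly:

s sfor "rsync -a /some/directory/ %servers%:backups/"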

I also wanted to be able to write very simple commands that used items from multiple stacks. In this example, I’m using rsync to back up three directories to two different remote servers:
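
(Sketched from memory, with hypothetical stacks named backupDirs and servers, and assuming shstack runs the command once for every combination of items when more than one stack appears.)

s for "rsync -a %backupDirs% %servers%:backups/"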

Real world example: one of my backup scripts

I use Duplicity to make a backup of important parts of my home directory once a day. I have a script that runs right after this to copy those new backups out to a couple of servers out on the Internet. The script pulls those server names from a stack called rsyncDestinations:

My rsync backup script using shstack
#! /bin/bash

SRC=/mnt/platter/Backups/duplicity

## pull in the ssh-agent environment set up by keychain
source ~/.keychain/zaphod-sh

## push the latest Duplicity backups to every server in the rsyncDestinations stack
s for "echo Syncing $SRC to %rsyncDestinations%;
       nice rsync -va --partial --delete --progress $SRC %rsyncDestinations%
      "

The future of shstack

Shstack definitely has some bugs. Some of them are just cosmetic, while others are pretty serious. I was going to record a screen cast showing off how sfor works, but I immediately noticed that sfor doesn’t correctly handle file names with spaces. I was pretty surprised by this because I’m usually pretty good about making sure my code is space friendly.

Obviously, I’d like to fix some of these bugs. I’m sure I’ll find others along the way too. I’d also like to write some documentation. I don’t think the help screen is verbose enough to actually explain how to work it.

zsh-dwim: Transformations for the Find Command

| Comments

Not long after I got zsh-dwim up and running, I went mining in my old Zsh and Bash history files looking for potential transformation candidates. I thought I had already implemented them all, but I was doing some spring cleaning and came across some of my notes. It turns out that I was wrong. There were still quite a few ideas in there that I hadn’t implemented yet.

One of the more interesting ideas in there was related to the find command. Each time I write anything more complicated than the simplest find command line, I usually follow the same careful workflow:

  • Run the command with no -exec option to make sure the correct files are chosen
  • Add the -exec option, but echo the commands first
  • Remove the echo, and run it one last time
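
For a hypothetical cleanup of stale log files, that progression looks something like this:

find . -name '*.log' -mtime +30
find . -name '*.log' -mtime +30 -exec echo rm {} \;
find . -name '*.log' -mtime +30 -exec rm {} \;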

As you can see in the video, that’s precisely what this transformation does. If it isn’t already there, hitting control-u will add the -exec echo {} \; to the command and put the cursor in front of the curly bracket. If the -exec is already followed by an echo, then the echo is removed.

For good measure, one more press of control-u will convert the -exec to a -print0 | xargs -0.
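
Continuing the same hypothetical example, that final press leaves something roughly like this on the command line:

find . -name '*.log' -mtime +30 -print0 | xargs -0 rm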

There is a tiny new feature in zsh-dwim

I wanted to be able to automatically place the cursor directly before the curly bracket, but I didn’t have an existing mechanism to do that. The function that places the cursor after a successful transformation now sets the cursor position to the value of the _dwim_cursor variable, if it is defined.
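
This isn’t the actual zsh-dwim code, just a rough zsh sketch of the idea: a transformation stores the desired offset in _dwim_cursor, and the cursor-placement function honors it via ZLE’s CURSOR variable if it is set.

_dwim_place_cursor() {
  if [[ -n "$_dwim_cursor" ]]; then
    ## jump to the position the transformation asked for
    CURSOR=$_dwim_cursor
    unset _dwim_cursor
  fi
}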

Why Do I Run Ubuntu?

| Comments

I’m a bit bored. I’m sitting here watching my laptop download packages for my slightly early upgrade to Ubuntu 13.04. I figured this was as good a time as any to write about why I run Ubuntu.

I started my Linux journey sometime around 1996 with Slackware 3.0 on my old 40 MHz AMD 386 computer. A few years later, I tried SuSE for a little while. By the end of the decade I had finally found Debian, and I ran it on all my desktops, laptops, and servers.

The servers all ran Debian stable, the desktops and laptops usually ran Testing. As excellent as Debian is, there were problems in both of these cases.

My old Debian desktop from around 2003

Debian’s release cycle was glacial at the time. Running Debian stable on a server was quite convenient as long as the release was only a year or two old, but it became harder and harder to install more modern software on those servers after that. At that point, you had to either manually upgrade some libraries, or you had to run the testing or unstable branch on your server.

On the desktop, Debian’s stable release was almost always too outdated to use. The unstable branch was regularly broken, and the testing branch didn’t always work either. Sometimes you’d run an apt-get upgrade and be stuck with a partially unusable system.

Debian with a predictable release cycle

Ubuntu showed up in 2004 and gave me a solution to all of these problems. They were taking regular snapshots of Debian’s unstable branch, polishing it up, and releasing it every six months. For me, this solved both of Debian’s problems. I didn’t have to wait three or more years for new stable releases, and I didn’t have to take my chances running a branch that could be broken at any time.

Not Really Into Pokemon

Folks often look surprised when I tell them that I run Ubuntu. When they ask me why I run Ubuntu, I have always said that for my purposes, Ubuntu is Debian with a six-month release schedule. I don’t use Unity, and I don’t really use a full desktop environment, either. I use most of XFCE, but I replace the window manager with Sawfish.

Ubuntu isn’t as Debian as it used to be

My canned response has been getting less accurate over time. Ubuntu has been drifting farther and farther away from Debian with every release. Most people complain about the changes Canonical has been making to the default desktop environment, but I couldn’t care any less about those changes than I already do.

It is the deeper, system-level changes that worry me more, but they haven’t done anything to actually scare me away. Upstart has come close, though.

I won’t be surprised if I end up looking for a better alternative in a few years. I also won’t be surprised if that alternative ends up being Debian.

Pat's desk in 2011

The upgrade to 13.04 is finished

As far as I’m concerned, this was an excellent upgrade. After the required reboot, almost everything looks exactly the same as it did before—everything but my weather applet, which seems to have switched itself over to the metric system.

My BIOS is Limiting My CPU Clock Speed!

| Comments

I’ve been playing quite a bit of Killing Floor the last few weeks on my aging HP DV8T laptop. The other night, I noticed that the frame rate was just horrible on a couple of maps. I adjusted all the quality settings just about as low as they would go, and I turned down the resolution quite a bit. The frame rate just didn’t improve at all.

So, I decided to investigate. The first thing I did was open up i7z to see if my Core i7 720QM processor’s turbo boost feature was being properly utilized. Many kernel upgrades ago, I had a weird performance issue with Team Fortress 2 under Wine. If I left everything alone, i7z would show all four cores being active at around 1.7 GHz. If I manually disabled two or three cores in the kernel, the processor would run at 2.4 or 2.8 GHz and Team Fortress 2 would run much better.

This was a kludge that helped work around a deficiency in Linux’s process scheduler. I didn’t use this hack for long, and I didn’t really expect to need it today. What I actually found was more interesting, and much more disappointing.

i7z with BIOS limiting the clock rate

As you can see from the screenshot, my processor was stuck at about 1.4 GHz! I poked around quite a bit, trying to figure out what was going on. I tried echoing 1600000 into /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq, but it just wouldn’t go over 1466000.

It didn’t take long for me to notice the bios_limit file in the cpufreq directory:

/sys/devices/system/cpu/cpu0/cpufreq/bios_limit at 1466000
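
Checking the limit is just a matter of reading that file; mine reports 1466000:

cat /sys/devices/system/cpu/cpu0/cpufreq/bios_limit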

The BIOS is limiting me to 1.4 GHz, but why is it doing that? It seems that the BIOS doesn’t think my aftermarket power supply is doing a good enough job. Interestingly enough, if I unplug the power cable, it goes right back up to full speed while running on battery power.

You can force the kernel to ignore the warning from the BIOS by writing a 1 to /sys/module/processor/parameters/ignore_ppc. I figured this would be a safe work-around, since the laptop can run at full speed on battery power anyway. If I were having a thermal issue instead, this might not be safe for the hardware.

i7z and openssl benchmark running correctly

Killing Floor is running much better now at 2.4 to 2.8 GHz!

How did I fix it?

I added these lines to my /etc/rc.local file:

## tell the kernel to ignore the BIOS's frequency cap
echo 1 > /sys/module/processor/parameters/ignore_ppc

## raise the scaling limit back to the 1.6 GHz base clock on every core
for x in /sys/devices/system/cpu/cpu[0-7]/cpufreq/; do
  echo 1600000 > ${x}scaling_max_freq
done

Did the power supply go bad? What caused it to begin failing?

My father had to have back surgery last month, so I flew up and I’ve been staying here in my parents’ 110-year-old house while he recovers. I am writing this article in my childhood bedroom while sitting at the desk that was originally home to our Laser 128, a most excellent Apple IIc compatible computer.

The only source of light in this room is a lamp with a CFL bulb. The bulb was flickering quite a bit, so I replaced it. The replacement was also flickering by the next day.

My laptop is plugged into the same outlet, and if you listen carefully, you can actually hear its power brick clicking. The clicks of the power supply and the flicker of the light are in sync. I wouldn’t be surprised if there is some sort of problem with the power coming out of this outlet.

I borrowed a friend’s Kill A Watt power meter to see if it would give me any insights. The only thing even remotely odd is that it is reading 127.1 volts. That’s higher than any outlet that I’ve plugged my own Kill A Watt into. The Internet seems to think that is an acceptable number, but my parents sure do seem to wear out electronics faster than anyone else I know.