zsh-dwim: Small Update For apt and dpkg

I’ve been trying to remember to implement items from my zsh-dwim notes in my spare time. I managed to implement some apt and dpkg helpers this week.

add-apt-repository -> apt-get update

Every six months I upgrade Ubuntu, and I always end up having to add at least one or two PPA repositories with the add-apt-repository command. Just about every add-apt-repository command is followed by an apt-get update. This transformation was definitely a no-brainer.
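
The transformation itself couldn’t be simpler. Hitting control-u on an add-apt-repository command just swaps it out, roughly like this (the PPA name here is hypothetical):

$ sudo add-apt-repository ppa:example/some-ppa
# press control-u, and the line becomes:
$ sudo apt-get update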

which -> dpkg -S -> apt-cache show

This one was a little less obvious, but probably just as useful. Often enough, I’m interested in finding out exactly which package the file returned by the which command belongs to. That usually means either typing dpkg -S and pasting in the path, or recalling the previous command from history and manually adding the dpkg -S and the required backticks.

That just seems like too much work. Now I just hit control-u, and the which command is converted directly into a correctly formatted dpkg -S command.

I ended up taking it one step further. Hitting control-u on a dpkg -S command will transform the command into a slightly buggy apt-cache show $(dpkg -S ) command. I’m not terribly happy with how the end result turned out; it ends up generating a rather verbose command.
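
For illustration, the whole chain looks roughly like this; vim is just a stand-in, and the exact command zsh-dwim generates may differ a bit:

$ which vim
# control-u:
$ dpkg -S `which vim`
# control-u again, producing the verbose form:
$ apt-cache show $(dpkg -S `which vim`)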

You can find zsh-dwim at GitHub.

Invest in a Quality Office Chair

Buy a good, comfortable, sturdy chair now rather than later

An Aeron would have been a pretty big expense for me fifteen years ago when I bought my first office chair. Just a few years later, though, the price wouldn’t have been too big of a deal. I should have bought one a decade ago. I’d have been sitting more comfortably the entire time, and in the long run it would have been less expensive than buying a new, cheap chair every few years.

Not everyone finds the Aeron to be a comfortable chair. Fortunately there are plenty of other good chairs to choose from. I’d definitely recommend spending some time sitting in them before you buy one, though.

My Progression of Chairs

At some point when I was in school, probably shortly before starting high school, my parents bought me a wooden office chair like this one. I remember them saying they got it at a yard sale for five or ten bucks. I would guess that I started sitting in that chair sometime around 1990, and I continued to sit in that chair through most of high school. I don’t remember it being uncomfortable, but very little is uncomfortable when you’re still under 140 pounds.

At some point that chair finally broke, and I upgraded to another wooden office chair, like this one. This one once belonged to my grandfather, and it is still at my parents’ house today.

I replaced it with a random, cushioned, high back executive style chair at some point after I got out of high school, most likely sometime around 1997. Over the next decade or so I bet I bought a half dozen of these chairs, each for between $150 and $250. They weren’t well made, and I often left them behind or gave them away when I moved.

My actual, current workspace, 2009-Present · Just like my first chair, 1990-1994 · Much like my grandfather’s chair, 1994-1997 · Random, cheap high back chair, 1997-2009 · Same cobalt blue pellicle as my first Aeron, 2009

The Story of My Aeron Chairs

I finally convinced myself to buy an Aeron chair in the latter part of 2009. I ordered a used “Cobalt Classic,” fully adjustable model through a vendor on eBay for around $450 shipped. The chair was in pretty good shape, and the label said it was manufactured in 1997. The sides where the arms meet the chair were pretty scuffed up, presumably from twelve years of adjusting, and it was a bit squeaky when leaning back.

About a year later, my father had a heart attack. I knew I’d be spending some time at my parents’ house, so I drove the 900 miles up there with the Aeron in the back seat. I was up there for about two months, and in that time we ended up ordering another Aeron for my father, and I left my “Cobalt Blue” Aeron behind for my mother.

Shortly before I left to return home I found another chair on eBay for myself. This one had the more common “Carbon Classic” color scheme, which is a black chair with a dark gray weave in the pellicle. This one was “new”; I believe it was some sort of floor model or something, fully adjustable, and has the newer levered locking mechanisms on the arms.

I have been visiting my parents again for the last six weeks or so, this time for my father’s back surgery. I am in my old bedroom, at my 20-year-old desk, sitting in that same “Cobalt Blue” Aeron chair. I’m not exactly sure why, but it no longer squeaks when I lean back.

I still miss the blue chair a bit. It was a very nice-looking chair; the “Carbon Classic” is pretty boring in comparison. I’m pretty certain it is the same color as Sheldon Cooper’s Aeron chair.

Why the Aeron chair? Aren’t there better chairs now?

I’m certain there are better chairs available now. I can’t say I’ve ever sat in any of them, though. The major selling point of the Aeron was the fact that I’d spent some time sitting in one in the past and I already knew I liked it. After that, the price and build quality of a used Aeron made a lot of sense.

My “Carbon Classic” Aeron chair has been in my possession now for almost four years. It looks and feels exactly the same as it did when I first got it. I expect to be able to say the same thing ten years from now.

Some people dislike the Aeron’s pellicle seat. More specifically, they don’t like the hard edge at the end of the seat. Some people say it cuts off the circulation in their legs. I’m a pretty heavy guy, and this has never been a problem for me. In fact, I don’t think I’ll ever buy an office chair with a cushion again.

The breathable pellicle seat is more comfortable on warmer days, and it even helps prevent swamp ass.

shstack: Persistent and Easy to Use Stacks Shared Between Shell Sessions

This little Perl program has been sitting in my ~/bin directory for almost a year now. It has quite a few features that I don’t use very often, and some of those rarely used features seem to have a lot of bugs. Even so, I figured that it was time to throw it up on GitHub.

Where did shstack come from?

Fairly often, I get into a situation where I am performing different parts of the same task in two or more terminal windows. Much of the time, I wish I had easier access to the paths and file names sitting in the other window. I thought I’d write a little script to automate the process.

I very quickly realized that it would be handy to be able to work with more than one file at a time, so emulating pushd and popd made a lot of sense. Since I was already planning to store the stacks in a file, I decided that they might as well be persistent.

What can you do with shstack?

I store the DNS names of some important servers in a stack called servers. I can use shstack’s built-in for command to ping them all to see if they are alive.
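
Judging by the backup script later in this post, a session looks roughly like this; I’m guessing at the exact push syntax, so treat it as a sketch:

# put a few servers on a stack named "servers" (hypothetical syntax)
s push servers mail.example.com
s push servers web.example.com

# run the command once per item; %servers% expands to each entry in turn
s for "ping -c 3 %servers%"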

Why reinvent the for loop?

I have two reasons. The first is shstack’s “safe” for loop (sfor). The sfor loop works like a regular for loop, but it removes each item from the stack when the command is completed successfully. You might use this if you wanted to rsync a directory to multiple servers. That way, if there are any errors, you can easily fix the problem and run the command against the problematic servers again. I didn’t end up using this as often as I expected, though.
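
Here is a sketch of that rsync scenario; the sfor invocation is my own guess at the syntax:

# hypothetical: each server is removed from the stack only after
# its rsync exits successfully, so failures stay behind for a retry
s sfor "rsync -a ./site/ %servers%:/var/www/site/"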

I also wanted to be able to write very simple commands that used items from multiple stacks. In this example, I’m using rsync to back up three directories to two different remote servers:
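
The original example didn’t survive here, but based on the %stack% placeholders used in the backup script below, it presumably looked something like this:

# hypothetical reconstruction: three directories on one stack,
# two servers on another; the command runs once per combination
s for "rsync -a %directories% %servers%:backups/"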

Real world example: one of my backup scripts

I use Duplicity to make a backup of important parts of my home directory once a day. I have a script that runs right after this to copy those new backups to a couple of servers out on the Internet. The script pulls the server names from a stack called rsyncDestinations:

My rsync backup script using shstack
#! /bin/bash

# where Duplicity drops the local backups
SRC=/mnt/platter/Backups/duplicity

# load the ssh-agent environment saved by keychain
source ~/.keychain/zaphod-sh

# run the rsync once for each server on the rsyncDestinations stack
s for "echo Syncing $SRC to %rsyncDestinations%;
       nice rsync -va --partial --delete --progress $SRC %rsyncDestinations%
      "

The future of shstack

Shstack definitely has some bugs. Some of them are just cosmetic, while others are pretty serious. I was going to record a screencast showing off how sfor works, but I immediately noticed that sfor doesn’t correctly handle file names with spaces. I was pretty surprised by this because I’m usually pretty good about making sure my code is space-friendly.

Obviously, I’d like to fix some of these bugs. I’m sure I’ll find others along the way, too. I’d also like to write some documentation. I don’t think the help screen is verbose enough to actually explain how to use it.

zsh-dwim: Transformations for the Find Command

Not long after I got zsh-dwim up and running, I went mining in my old Zsh and Bash history files looking for potential transformation candidates. I thought I had already implemented them all, but while doing some spring cleaning I came across some of my notes. It turns out that I was wrong: there were still quite a few ideas in there that I hadn’t implemented yet.

One of the more interesting ideas in there was related to the find command. Each time I write anything more complicated than the simplest find command line, I usually follow the same careful workflow:

  • Run the command with no -exec option to make sure the correct files are chosen
  • Add the -exec option, but echo the commands first
  • Remove the echo, and run it one last time

As you can see in the video, that’s precisely what this transformation does. If it isn’t already there, hitting control-u will add the -exec echo {} \; to the command and put the cursor in front of the curly bracket. If the -exec is already followed by an echo, then the echo is removed.

For good measure, one more press of control-u will convert the -exec to a -print0 | xargs -0.
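
Put together, the whole progression looks something like this; the example is my own, and the exact spacing zsh-dwim produces may differ:

find /var/log -name '*.gz'
# control-u adds the -exec and parks the cursor before the {};
# type the command you actually want to run:
find /var/log -name '*.gz' -exec echo rm {} \;
# control-u strips the echo once the output looks right:
find /var/log -name '*.gz' -exec rm {} \;
# one more control-u swaps the -exec for xargs:
find /var/log -name '*.gz' -print0 | xargs -0 rm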

There is a tiny new feature in zsh-dwim

I wanted to be able to automatically place the cursor directly before the curly bracket, but I didn’t have an existing mechanism to do that. The function that places the cursor after a successful transformation now sets the cursor position to the value of the _dwim_cursor variable, if it is defined.
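
In other words, a transformation can now leave a cursor position behind for that function to pick up. Here is a minimal sketch of the idea, not the actual zsh-dwim source:

# inside a transformation: rewrite the buffer and remember where
# the cursor should land (here, just before the "{}")
BUFFER="$BUFFER -exec echo {} \\;"
_dwim_cursor=$(( ${#BUFFER} - 5 ))

# inside the cursor-placement function:
if [[ -n "$_dwim_cursor" ]]; then
  CURSOR=$_dwim_cursor
  unset _dwim_cursor
fi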

Why Do I Run Ubuntu?

I’m a bit bored. I’m sitting here watching my laptop download packages for my slightly early upgrade to Ubuntu 13.04. I figured this was as good a time as any to write about why I run Ubuntu.

I started my Linux journey sometime around 1996 with Slackware 3.0 on my old 40 MHz AMD 386 computer. A few years later, I tried SuSE for a little while. By the end of the decade I had finally found Debian, and I ran it on all my desktops, laptops, and servers.

The servers all ran Debian stable, while the desktops and laptops usually ran testing. As excellent as Debian is, there were problems in both of these cases.

My old Debian desktop from around 2003

Debian’s release cycle was glacial at the time. Running Debian stable on a server was quite convenient as long as the release was only a year or two old, but it became harder and harder to install more modern software on those servers after that. At that point, you had to either manually upgrade some libraries, or you had to run the testing or unstable branch on your server.

On the desktop, Debian’s stable release was almost always too outdated to use. The unstable branch was regularly broken, and the testing branch didn’t always work either. Sometimes you’d run an apt-get upgrade and be stuck with a partially unusable system.

Debian with a predictable release cycle

Ubuntu showed up in 2004 and gave me a solution to all of these problems. They were taking regular snapshots of Debian’s unstable branch, polishing it up, and releasing it every six months. For me, this solved both of Debian’s problems. I didn’t have to wait three or more years for new stable releases, and I didn’t have to take my chances running a branch that could be broken at any time.

Not Really Into Pokemon

Folks often look surprised when I tell them that I run Ubuntu. When they ask me why, I have always said that for my purposes, Ubuntu is Debian with a six-month release schedule. I don’t use Unity, and I don’t really use a full desktop environment, either. I use most of XFCE, but I replace the window manager with Sawfish.

Ubuntu isn’t as Debian as it used to be

My canned response has been getting less accurate over time. Ubuntu has been drifting farther and farther away from Debian with every release. Most people complain about the changes Canonical has been making to the default desktop environment, but I couldn’t care any less about those changes than I already do.

It is the deeper, system-level changes that worry me more, but they haven’t done anything to actually scare me away. Upstart has come close, though.

I won’t be surprised if I end up looking for a better alternative in a few years. I also won’t be surprised if that alternative ends up being Debian.

Pat's desk in 2011

The upgrade to 13.04 is finished

As far as I’m concerned, this was an excellent upgrade. After the required reboot, almost everything looks exactly the same as it did before—everything but my weather applet, which seems to have switched itself over to the metric system.

My BIOS is Limiting My CPU Clock Speed!

I’ve been playing quite a bit of Killing Floor the last few weeks on my aging HP DV8T laptop. The other night, I noticed that the frame rate was just horrible on a couple of maps. I adjusted all the quality settings just about as low as they would go, and I turned down the resolution quite a bit. The frame rate just didn’t improve at all.

So, I decided to investigate. The first thing I did was open up i7z to see if my Core i7 720QM processor’s turbo boost feature was being properly utilized. Many kernel upgrades ago, I had a weird performance issue with Team Fortress 2 under Wine. If I left everything alone, i7z would show all four cores being active at around 1.7 GHz. If I manually disabled two or three cores in the kernel, the processor would run at 2.4 or 2.8 GHz and Team Fortress 2 would run much better.

This was a kludge that helped work around a deficiency in Linux’s process scheduler. I didn’t use this hack for long, and I didn’t really expect to need it today. What I actually found was more interesting, and much more disappointing.

i7z with BIOS limiting the clock rate

As you can see from the screenshot, my processor was stuck at about 1.4 GHz! I poked around quite a bit, trying to figure out what was going on. I tried echoing 1600000 into /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq, but it just wouldn’t go over 1466000.

It didn’t take long for me to notice the bios_limit file in the cpufreq directory:

/sys/devices/system/cpu/cpu0/cpufreq/bios_limit at 1466000
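
You can read the cap straight out of sysfs:

$ cat /sys/devices/system/cpu/cpu0/cpufreq/bios_limit
1466000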

The BIOS is limiting me to 1.4 GHz, but why is it doing that? It seems that the BIOS doesn’t think my aftermarket power supply is doing a good enough job. Interestingly enough, if I unplug the power cable, it goes right back up to full speed while running on battery power.

You can force the kernel to ignore the warning from the BIOS by writing a 1 to /sys/module/processor/parameters/ignore_ppc. I figured this would be a safe work-around, since the laptop can run at full speed on battery power anyway. If I were having a thermal issue instead, this might not be safe for the hardware.

i7z and openssl benchmark running correctly

Killing Floor is running much better now at 2.4 to 2.8 GHz!

How did I fix it?

I added these lines to my /etc/rc.local file:

# tell the kernel to ignore the performance cap reported by the BIOS
echo 1 > /sys/module/processor/parameters/ignore_ppc

# raise the frequency ceiling back up on every core
for x in /sys/devices/system/cpu/cpu[0-7]/cpufreq; do
  echo 1600000 > $x/scaling_max_freq
done

Did the power supply go bad? What caused it to begin failing?

My father had to have back surgery last month, so I flew up and I’ve been staying here in my parents’ 110-year-old house while he recovers. I am writing this article in my childhood bedroom while sitting at the desk that was originally home to our Laser 128, a most excellent Apple IIc compatible computer.

The only source of light in this room is a lamp with a CFL bulb. The bulb was flickering quite a bit, so I replaced it. The replacement was also flickering by the next day.

My laptop is plugged into the same outlet, and if you listen carefully, you can actually hear its power brick clicking. The clicks of the power supply and the flicker of the light are in sync. I wouldn’t be surprised if there is some sort of problem with the power coming out of this outlet.

I borrowed a friend’s Kill A Watt power meter to see if it would give me any insights. The only thing even remotely odd is that it is reading 127.1 volts. That’s higher than any outlet that I’ve plugged my own Kill A Watt into. The Internet seems to think that is an acceptable number, but my parents sure do seem to wear out electronics faster than anyone else I know.

Self-Hosted Cloud Storage: ownCloud vs. SparkleShare vs. BitTorrent Sync vs. Seafile vs. Pydio - 2013 Edition

Update: This post has gotten pretty obsolete. I’ve written a newer, more thorough cloud-storage comparison post. There may still be some useful information in the post you are currently looking at, but my advice is to skip ahead to the newer post.

In the days before Dropbox, roughly five years ago, just about the only way to share files and store them remotely was to use a file server. When you saved a file, you had to wait for it to be written to your file server. If the file server was outside of your local area network, this might have taken quite a while.

Dropbox made this simple.

Why bother hosting your own Dropbox style service?

I don’t really trust Dropbox. If you sync a file that another user has already uploaded, it finishes uploading almost instantly. That deduplication is a red flag: it means you aren’t the only person holding keys that can decrypt your data.

Maybe you trust Dropbox and their competitors. Even so, we have no way to be certain that our data is being safely encrypted, and we have no way of knowing exactly who holds keys that can unlock it.

Self-hosting might be less expensive

The choices are pretty slim, but there are some virtual private server providers out there that offer backup-oriented servers. These are always very light on memory and CPU, but they usually offer quite a bit of disk space. Plans like these can sometimes be found for rates a bit better than what Dropbox charges.

I am going to be cheating a little here, though. I have my own server hardware sitting out there in a data center, and there are terabytes of free disk just waiting to be used up.

Hosting your files out in a random data center may not be ideal for everyone. It seems like almost everyone these days has a pretty reliable and speedy Internet connection at home, so you could always build your own NAS and keep your “cloud” storage at home.

What will I be syncing?

If I am going to take the time to set up and maintain my own self-hosted, file-syncing cloud storage service, I am going to need to store enough data up there to make it worth the effort. The home directory on my laptop holds roughly 30 GB of data, and my music collection takes up a similar amount of space.

I’m not certain that I care about storing my music up there. I hardly ever listen to music these days, and Google Music already does a great job of letting me listen to my music on my phone, tablet, or computer.

SparkleShare

SparkleShare was the first solution I looked at. One of the things that I find very interesting about this project is that it stores all your data in a Git repository. That means it ought to be very easy to quickly and efficiently replicate your data to multiple servers. It looks like SparkleShare will also let you easily access the version history of a file, and it will even automatically merge changes made to text files from multiple locations.

SparkleShare’s client side encryption doesn’t look ideal, though. File names are not encrypted on the server, and you can only set up a single password, which can never change. This isn’t a deal breaker, but it would be nice to have a more configurable encryption system.

The SparkleShare client for Android is at a very early stage of development. All it can do at this point is download files.

Update: I’m crossing SparkleShare off my list due to this bug report. SparkleShare’s design doesn’t allow it to properly sync directories that contain Git repositories. This makes it completely unusable for my purposes.

ownCloud

ownCloud is a much more mature project than SparkleShare. I remember hearing about it on a SourceTrunk podcast episode a couple of years ago. If my memory serves me correctly, at that time there was no Dropbox-style sync client; you could only mount the server as a network drive using WebDAV. This deficiency went away at some point in the last couple of years, though.

ownCloud seems to have a much richer set of features than SparkleShare. ownCloud’s web interface has photo galleries and a built-in music player. You can also connect the ownCloud server to external data sources, like Dropbox, Amazon S3, or Google Drive.

The ownCloud Android client is much more advanced than the SparkleShare client. Much like the Dropbox Android client, it is able to automatically upload photos as you take them.

I’m pretty sure that ownCloud’s encryption plugin won’t meet my needs. It looks like ownCloud encrypts your data on the server side. If your ownCloud server is compromised, then all your data is at risk.

BitTorrent Sync

BitTorrent Labs just released the first alpha of their BitTorrent Sync application today. For my purposes, BitTorrent Sync has one huge advantage over SparkleShare and ownCloud: it was designed from the ground up with security and encryption in mind. If all you’re really interested in is secure and efficient file synchronization, then BitTorrent Sync might be a good fit.

Unfortunately, you must have complete trust in every machine you are syncing between. With BitTorrent Sync, the data is transferred securely, but the files are stored in their natural, unencrypted state.

BitTorrent Sync uses the cloud in the most literal sense. You don’t need a centralized server. You just need to make sure that at least one of the other devices is also connected to the Internet in order for your data to sync.

Seafile

Seafile is looking like it might be the best option for my purposes. It has client side encryption, so I don’t have to have complete trust in my host. The Seafile server has a rather spiffy looking web interface that gives you easy access to old versions of files, deleted files, and lets you view the differences between different revisions of the same file.

Seafile does have some minor restrictions on file names, but they shouldn’t be nearly as problematic as SparkleShare’s restrictions. Seafile has an Android client, but it looks pretty basic.

Earlier today, I set up Seafile server, and I installed the client on my laptop. I asked it to push my entire home directory up to the server. I came back to check on it 12 hours later, and it had seemingly finished indexing about 23 GB of data. I checked on it a few hours later, and it didn’t seem to have made any significant progress.

I found a handful of responses on the Seafile forums implying that Seafile doesn’t work so well with “large” libraries, and it sounds like file count is the primary problem and not file size. My home directory takes up 32 GB and contains 274,568 files.

Pydio and Pydio Sync

Pydio reminds me of ownCloud because it has an extremely featureful web interface. It has an add-on called Pydio Sync that provides Dropbox-style file synchronization. The Pydio Sync page states that it can scale to 20-30k files and tens of gigabytes of data.

It looks like Pydio Sync uses the rsync protocol, so file synchronization should be fast and efficient. Unfortunately, Pydio Sync does not appear to support client-side encryption of your data. It will only protect your data while it is in transit.

Is there a winner?

This is a difficult question to answer. This blog entry started out as a cursory comparison of just two cloud storage solutions. I’ve already added two more since then. There are others that I didn’t feel were usable for me, and others that I just haven’t found yet.

That said, Seafile has client-side encryption, and it is the most Dropbox-like of the bunch: it lets you sync multiple clients up with a centralized server. That server has a web interface, and it allows you to share files and folders.

In my mind, Seafile would be the clear winner, if it could just manage to handle hundreds of thousands of files without breaking a sweat. Even with that limitation, I feel that it would be a good direct replacement for Dropbox.

Are you already hosting your own cloud storage? Do you have concerns regarding the security of services like Dropbox?

More Powerful Zsh Incremental History Search Using Zaw

Zaw is a Zsh widget that works much like Emacs’s Helm or anything.el modules. I don’t use Helm or anything.el, and I don’t actually make full use of Zaw in that fashion, either. The only piece of Zaw that I do use is its excellent history search. I bound Zaw’s zaw-history function to control-r instead of the default built-in history-incremental-search-backward.

What makes Zaw’s history search better than Zsh or Bash’s built-in history search?

The default incremental history search is very handy, but it is also very rigid. You can keep flipping back through all the matching commands in your history, but you only get to search for a single substring. Zaw lets you search for multiple substrings, each separated by a space.

Back in the decade when I still used Bash, there were innumerable situations where I wished that I had exactly this kind of functionality. The most common cases were probably trying to find an rsync or scp command with a particular server as the target. With the default incremental search, the best you could do was search for the server name or the rsync command, and then flip through results until you found the one you were looking for.

Sometimes you luck out, and it is only a few entries away. Other times, you end up tapping that control-r over and over for what seems like eternity.
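
With zaw-history, one query containing both substrings narrows the list immediately. A hypothetical example:

# zaw-history query: rsync server2
# the list instantly shrinks to the matching entries:
rsync -va --partial ~/backups/ server2:backups/
rsync -a --delete /var/www/ server2:/var/www/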

Zaw is a bit too visually intrusive

With its default settings, Zaw takes up quite a bit of vertical terminal real estate. If you primarily operate in standard-size 80x24 terminals, like I do, Zaw will end up pushing almost everything up and off of your screen. I ended up limiting mine to just three lines. This way, it isn’t nearly as intrusive, and I still get a bit more information than I would from the default incremental history search.

“Better” keys for moving through search results

I needed to bind a few extra keys to make Zaw behave more like the function it was replacing. I bound control-r to Zaw’s down-line-or-history. This is intuitively backwards, but it perfectly fits my muscle memory.

I also needed a key to complement control-r and move through the results in the opposite direction. Since control-s is usually bound to history-incremental-search-forward, it seemed like the natural choice.

Editing the command before executing

This is another situation where I had to appease my muscle memory. When using the default incremental search, I always hit control-e to accept the result and move the cursor to the end of the line. By default, the same action with Zaw is accomplished by hitting alt-enter.

I’m just too old to retrain myself. I ended up binding control-e to Zaw’s accept-search command.

~/.zprezto/modules/zaw/init.zsh
source $HOME/.zprezto/modules/zaw/external/zaw.zsh
bindkey '^R' zaw-history
bindkey -M filterselect '^R' down-line-or-history
bindkey -M filterselect '^S' up-line-or-history
bindkey -M filterselect '^E' accept-search

zstyle ':filter-select:highlight' matched fg=green
zstyle ':filter-select' max-lines 3
zstyle ':filter-select' case-insensitive yes # enable case-insensitive search
zstyle ':filter-select' extended-search yes # enable multi-substring matching

zsh-dwim Now Works on Mac OS X and FreeBSD

A few weeks ago, I received my first pull request on the zsh-dwim GitHub repository. Much to my surprise, it was a change to make zsh-dwim work on Mac OS X. I made a mistake that I’ve made a few times in the past, the sort of mistake that goobers me up once every five years or so.

Never assume that UNIX command line tools are compatible with the awesome GNU versions

I encountered this “problem” on my first few Sun Solaris servers. It was painfully obvious there during interactive shell sessions. I was constantly hitting command line switches that didn’t exist. I’ve always preferred the long form “double dash” command line arguments over their single letter equivalents, but at the time those were seemingly nonexistent on Solaris.

This time, though, my problem was that the BSD version of the sed command doesn’t support the -r command line switch like the GNU version does. This is required to support the regular expression capture groups that most of zsh-dwim’s transformations use.
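
For example, a capture-group substitution written for GNU sed’s -r switch fails outright on the BSD sed that ships with OS X and FreeBSD, which spells the same option -E:

# GNU sed (Linux): extended regexes with capture groups via -r
echo 'which vim' | sed -r 's/^which (.*)/dpkg -S `which \1`/'

# BSD sed (OS X, FreeBSD): -r is an illegal option; use -E instead
echo 'which vim' | sed -E 's/^which (.*)/dpkg -S `which \1`/'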

The solution

The solution in Lepht’s pull request was to use the gsed command if it was available. This worked great, so I immediately accepted the pull request. I didn’t give it much thought again until I was updating the documentation.

I originally decided to use sed for the regular expression substitutions in zsh-dwim because sed is very lightweight, and the regular expression substitution engine built into zsh didn’t feel up to the challenge. I also figured that everyone would have it installed.

That last part was very important. I realized as I was updating the documentation that I didn’t want anyone to have to go through the trouble of installing additional software to use zsh-dwim. I talked to Lepht about the idea of swapping out sed for perl, since pretty much every UNIX flavor has Perl installed already.

Changes to zsh-dwim related to sed

I think we came up with a pretty reasonable solution. zsh-dwim will now use sed if it supports the -r argument. If not, it will check and see if gsed is installed. Failing that, it will just use perl. That means it should use the most efficient regular expression engine available, and most people won’t have to think about it at all.
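
I haven’t reproduced the actual code here, but the fallback order described above amounts to something like this sketch:

# rough sketch of the detection logic, not the actual zsh-dwim source
if echo | sed -r 's/x/y/' >/dev/null 2>&1; then
  _dwim_sed="sed -r"        # the default sed is GNU sed
elif (( $+commands[gsed] )); then
  _dwim_sed="gsed -r"       # GNU sed installed alongside BSD sed
else
  _dwim_sed="perl -pe"      # Perl is everywhere; fall back to its s///
fi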

Many thanks to Lepht for pointing out my oversight!

Getting Solarized Working with emacsclient in both GUI and Terminal

Some time last year, I decided it was finally time to try out a new color theme in my terminals and text editor. I had been a long-time user of the old, standard “Linux console” colors. Those were the colors they gave me when I installed Slackware 3.0 in 1996, and they were pretty much the same as the ANSI colors on my 8088-based MS-DOS machine back in the late 1980s. I guess I had been clinging to those rainbow colors for over twenty years.

Solarized

I’m not sure exactly what finally made me give up my old color theme and give Solarized a try, but I do remember listening to The Changelog’s interview with the creator of Solarized, Ethan Schoonover. I set my terminal emulator to the Solarized Dark colors, installed the Solarized Emacs theme, and forced myself to stick with it long enough to get used to it.

That wasn’t easy, though. For many, many weeks, every time I opened a terminal I would think to myself, “this doesn’t look right!” These days, I am more likely to say that when I happen to see a gray-on-black terminal. I’m going to call that progress.

emacsclient and Solarized

I’ve had a lot of trouble getting my Solarized terminals and emacsclient to play nicely together. When I first set things up, it was just awful. Opening Emacs in a terminal would make the colors go weird in any existing Emacs GUI instance and vice versa. It was just a mess.

I did manage to straighten that out pretty well, but things still weren’t quite right. Everything looked fine in the GUI, but most of the colors were pretty far off in the terminal. I do most of my heavier editing in the GUI, so the “wrong” colors were pretty easy to tolerate. Those bright red comments in the terminal were pretty horrible, though:

Some very wrong colors · Correct colors

Finally making it work

First of all, I had to set up the correct colors in my terminal. I use Gnome Terminal, so I used Sigurd Gartmann’s scripts to configure my Gnome Terminal with the Solarized colors. I have tried running emacsclient in other terminals that aren’t configured with the Solarized colors, and the result looks very bad.

The hardest part was finding a version of the Solarized color theme for Emacs 24 that actually works well in a terminal. Michael Olsen’s fork of the Solarized color theme is working perfectly for me.