Ten Weeks with Seafile


I started using Seafile over two months ago for all of my self-hosted cloud storage needs. Enough time has passed that I figured I should report back on how things have been going. Seafile has been working quite well for me so far. I’ve hit a few snags, but they’re all minor, and they’ve all been pretty easy to work around.

How much data are we storing?

I have about 9 GB worth of files stored in Seafile, and Chris has around 24 GB up there. She’s beating me by a pretty wide margin, but she has her music collection stored up there. I’ll probably think about doing the same thing some day, but my music collection is ancient, mostly ripped from CDs, and is terribly unorganized.

Users on my Seafile server

I’m not entirely sure how Seafile determines the amount of storage space everyone is using. The web interface says we’re using a total of 32 GB between us. Even after manually running the garbage collector a few times, the server’s drive has over 40 GB of data. I’m guessing the extra 8 GB is taken up by our file revision history; most of our libraries are configured with 90 days of history.
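If you want to check the on-disk number yourself, du against the server’s data directory is the quickest sanity check. The path below is an assumption about a typical install, not necessarily where your deployment keeps its data:

```shell
# Compare Seafile's reported usage against what's actually on disk.
# /opt/seafile/seafile-data is a guess at a common install location;
# set SEAFILE_DATA to point at your own deployment.
SEAFILE_DATA="${SEAFILE_DATA:-/opt/seafile/seafile-data}"
if [ -d "$SEAFILE_DATA" ]; then
    du -sh "$SEAFILE_DATA"
else
    echo "no seafile-data directory at $SEAFILE_DATA"
fi
```

The difference between this number and what the web interface reports should roughly be your revision history plus whatever compression and deduplication are saving.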

My libraries are scattered all over the place, but Chris only has one, so hers is much easier to measure. Her local Seafile library is 31 GB, which is quite a bit larger than what Seafile is reporting for her. I’m going to hazard another guess here, and I’m going to say that compression and deduplication are saving her some space. I wouldn’t be surprised if she has more than just a few gigabytes worth of duplicate songs and photos.
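You can get a rough feel for how much duplicate data a library holds with nothing but checksums. This is only a sketch against throwaway files, and Seafile deduplicates at the block level, so it will find more savings than a whole-file comparison like this:

```shell
# Rough whole-file duplicate check using md5 checksums.
# The files here are throwaway examples, not a real library.
mkdir -p /tmp/dedupe-demo
echo "same song" > /tmp/dedupe-demo/a.mp3
cp /tmp/dedupe-demo/a.mp3 /tmp/dedupe-demo/b.mp3
echo "different song" > /tmp/dedupe-demo/c.mp3
# Print every file whose checksum appears more than once:
find /tmp/dedupe-demo -type f -exec md5sum {} + | sort | uniq -D -w 32
```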

Are these discrepancies alarming?

Not at all. Most of my Seafile libraries are replicated to my laptop, and everything seems to match up just fine. Accounting for used space in the presence of compression and deduplication is a very hard problem. I’m happy as long as my data is replicating correctly.

How much bandwidth does Seafile use?

I figured my bandwidth usage for August would be pretty typical, and I was hoping to get a screenshot of my Seafile virtual server’s bandwidth graph to show off here. It was going great for the first three weeks or so. It was up at around 4 GB of traffic, and probably on its way to closing out the month with less than 6 GB of bandwidth consumed.

Bandwidth use is a little higher than expected

Then Chris’s new computer arrived, and she finally got around to moving all of her stuff up to the Seafile server. This upload pushed the bandwidth consumption for August up over 30 GB, and totally ruined the graph on me.

I ended up taking a screenshot of September instead. The image above covers September 1 through September 8, and we’ve already used up over 18 GB of data transfer. I’m not sure if Chris is just meaner to the Seafile server than I am or what, but we’ll be going way over the 6 GB I was expecting to see last month.

The data I have so far is pretty inconclusive, and I expect that everybody’s bandwidth use will vary quite a bit anyway. I’ll post some updated bandwidth numbers after I have a few more months to collect data.

Is it faster than Dropbox?

I have fast server hardware on a fast, reliable Internet connection in a data center that is less than 10 ms away—Seafile is significantly faster than Dropbox for me, and usually has no problem maxing out my 35 megabit-per-second FiOS connection.

My 35/35 FiOS SpeedTest results

Symbolic links are broken

I keep my Zsh configuration in a Git repository. There are a few symlinks in there, one of which points at my ~/bin directory. That one in particular ended up replicating in a pretty strange way, and I wish I had paid more attention to what happened. I somehow ended up with infinitely recursing bin directories that looked something like this:

Near Infinite Recursion
wonko@zaphod:~/.zprezto/modules/persist/files/bin/bin/bin/bin$ cd bin
wonko@zaphod:~/.zprezto/modules/persist/files/bin/bin/bin/bin/bin$ cd bin
wonko@zaphod:~/.zprezto/modules/persist/files/bin/bin/bin/bin/bin/bin$ cd bin
wonko@zaphod:~/.zprezto/modules/persist/files/bin/bin/bin/bin/bin/bin/bin$

I manually cleaned up the receiving end of that sync, and it has been syncing correctly ever since.
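The cleanup amounted to deleting the first spurious nested bin, which takes everything below it along. Here is the idea against a scratch directory; the paths are illustrative, not my actual ~/.zprezto tree:

```shell
# Simulate the runaway nesting, then collapse it by removing the
# first spurious nested copy (everything below it goes with it).
mkdir -p /tmp/seafile-demo/bin/bin/bin/bin/bin
touch /tmp/seafile-demo/bin/my-script.sh
rm -rf /tmp/seafile-demo/bin/bin
ls /tmp/seafile-demo/bin
```

Double-check what you are about to delete before running the rm -rf, of course; with a sync client still running, the bad directories can come right back.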

I had another directory in there that was a symlink to one of its siblings. Seafile ended up syncing that one like it was two different directories. I’m pretty sure I was able to use a git checkout command to fix that. I should have been taking notes instead of just randomly trying out different ways to fix the problem!

There are replies to bug reports on GitHub that imply that symlink support will eventually be added to Seafile.

Git index files are often in conflict

I could probably list off a ton of reasons why synchronizing Git repositories between multiple computers is a bad idea. Git can push and pull for a reason. I tend to store lots of config files in Git repositories, though, and I’d like to have them synced up. When I’m on the road, I don’t want to be looking for a change that I forgot to commit, either.

I’m not a very advanced Git user, and I’m not entirely sure what is happening here. If I commit a change to my Zsh configuration, that change will be cleanly synchronized to my laptop, but I will end up with a Seafile conflict entry for the .git/index file.

I’m not exactly certain what is causing this, and it hasn’t actually caused any noticeable problems, but it does worry me a bit. Each copy of the ~/.zprezto repository seems to be identical before I start editing. Seafile should be synchronizing them every time I save a file.

I expect that the index file gets modified as I run various Git commands, but I would expect that my more recent, local copy should be pushed to the other machines. I’m not sure why Seafile even notices a conflict.
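My best guess is that this is exactly what is happening. Even a “read-only” command like git status opportunistically refreshes the stat cache in .git/index and may write the file back out, so the index can change on disk without any commit. A quick experiment in a throwaway repository shows the effect:

```shell
# Show that git status can rewrite .git/index even when no file
# content has changed -- plenty for a sync tool to flag as a change.
repo=/tmp/git-index-demo
rm -rf "$repo"
mkdir -p "$repo"
cd "$repo"
git init -q
echo hello > file
git add file
git -c user.email=demo@example.com -c user.name=demo commit -q -m init
before=$(md5sum .git/index | cut -d ' ' -f 1)
sleep 1
touch file                  # change only the timestamp
git status > /dev/null      # "read-only", but it may refresh the index
after=$(md5sum .git/index | cut -d ' ' -f 1)
if [ "$before" = "$after" ]; then
    echo "index unchanged"
else
    echo "index rewritten"
fi
```

If the index gets rewritten on two machines at nearly the same time, a sync tool that knows nothing about Git has no choice but to call it a conflict.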

I’m the only person editing these files. This may be problematic if multiple people are editing files in a Git repository in the same Seafile library. That would be a bad idea even without this problem, and it would probably be a good way to lose some edits.

Update: One of the Seafile developers, JiaQiang Xu, was nice enough to address some of my concerns in the comments. I will have to do some testing to figure out exactly what is going on here.

Watch out for colons in filenames!

I knew that Seafile doesn’t sync files with colons, question marks, or dollar signs in their names. That didn’t stop me from wasting 20 minutes trying to figure out why a screenshot file with an actual time stamp in its name wasn’t syncing.

I don’t really care that Windows doesn’t support these characters in a filename. I haven’t used Windows in years. I’d really like to see this restriction removed, but it isn’t by any means a show stopper.
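In the meantime, a find pattern will flag the offenders before you spend twenty minutes wondering why a file isn’t syncing. The character class below covers the restrictions as I understand them:

```shell
# Flag filenames Seafile will refuse to sync (colons, question
# marks, dollar signs -- my understanding of the restriction).
mkdir -p /tmp/colon-demo
touch "/tmp/colon-demo/Screenshot 2013-09-08 12:34:56.png"
touch "/tmp/colon-demo/notes.txt"
find /tmp/colon-demo -name '*[:?$]*'
```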

Update: JiaQiang Xu says that this restriction is going to be removed.

Seafile encryption and the NSA

I’ve been relatively paranoid about my data for a long time, and the documents that Edward Snowden leaked have brought quite a few people up to my level of paranoia.

I don’t actually know exactly how secure Seafile’s client-side encryption is, and I don’t know if it is even implemented correctly. If you’re paranoid enough, there is definitely a major flaw in Seafile’s encryption implementation.

If you want to access your encrypted data from a web browser, then your password will be sent up to the server and stored there for an hour. If you want to use encryption and the web interface, then you are required to trust the server. You’re also required to send your encryption passphrase up to the server at least once in order to set up an encrypted library.

My Seafile virtual server runs on a piece of hardware that I own. I am not too worried that my password was potentially stored in plain text in my server’s memory for a short time when I set up each of my libraries. If I were leasing a server, virtual or otherwise, I’d be more than a little concerned about this.

You’ll also need to send your encryption password up to the Seafile server if you want to access your file revision history. I find this even more problematic.

Even so, it is still a major step up compared to Dropbox’s security.

Update: JiaQiang Xu mentioned that they are working on a new encryption scheme for Seafile that won’t require sending the password up to the server. He also tells me that a new Qt-based Seafile client is in the works, and this new client will be able to create new encrypted libraries without having to send your passphrase up to the server.

Are you having as much success with Seafile as I am?

QNIX QX2710 IPS LCD Monitors and Linux


Update: I got a bill from FedEx today for $37.20 to cover the customs fees. The invoice has no account number, so apparently I can’t pay it on their website. I sure wish they had told me that before I spent twenty minutes bumbling around their website and setting up an account.

$37.20 is less than I would have paid in sales tax locally, so that seems pretty fair to me. That brings the total for these two beautiful monitors up to $651.00.

Customs fees

Update: It is now July of 2015, and one of these QNIX QX2710 monitors has failed. I’m still very happy that I decided to buy these monitors almost two years ago, and I just bought another QX2710 to replace the dead monitor. I did briefly investigate some other options before ordering the replacement, but buying another QX2710 was still the best choice for me.

A couple of days ago, I upgraded from a pair of 21.5” 1920x1080 TFT LCD monitors to a pair of 27” 2560x1440 IPS LCD monitors. I’ve really wanted to make this upgrade ever since a friend of mine bought a QNIX QX2700 late last year, but my laptop didn’t have the dual link DVI-D port that is required to drive these inexpensive 1440p monitors from Korea. Since building my new Linux desktop, I no longer have this limitation, and I just couldn’t resist this upgrade.

Two QNIX QX2710 monitors under a yardstick

Why the QNIX QX2710?

The least expensive 1440p monitor I could find was the Asus PB278Q 27-inch 1440p monitor for $553.99. It only ended up costing me about $60 more to purchase a pair of 27-inch QNIX QX2710 monitors. This was a much better value for me, but there are some caveats.

The more expensive monitors, like the Asus PB278Q, can be connected using HDMI 1.4, DisplayPort, or dual-link DVI, and they can display just about any resolution you choose. The QNIX QX2710 and other similar Korean monitors, like the X-Star DP2710, can only be used with a dual-link DVI-D port, and they must be driven at their native 2560x1440 resolution. They lack the hardware to scale other input resolutions up to their native resolution.

My Nvidia GTX 460 has a pair of dual-link DVI-D ports, so this wasn’t a problem for me.

Update: I’ve upgraded to an Nvidia GTX 970 and overclocked both QX2710 monitors to 120 Hz.

I also ended up choosing a glossy screen. Glossy screens usually have a better contrast ratio than matte screens, and I have total control of the lighting in my office, so there’s never anything behind me to glare on the screens.

The QX2710 was not exactly plug and play on Linux

This actually surprised me a bit. This is the first time in over a decade that I’ve plugged a monitor into one of my Linux machines without immediately seeing a picture.

The problem seems to be that the EDID information returned by the monitor is completely invalid. Since Xorg didn’t know what the monitor was capable of, it ended up defaulting to an output resolution of 1024x768. The QX2710 didn’t seem to know what to do with this, and it seemed to run through a series of full screen test patterns.

I thought it would be as simple as adding a “modeline” for 2560x1440, just like we often had to do in the good old days. It was a tiny bit more persnickety than that, though. I also had to add a few options telling Xorg to ignore the EDID info.

My x.org Configuration File
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 1.0  (buildmeister@builder58)  Fri Mar 12 02:12:40 PST 2010

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
EndSection

Section "Files"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "QNIX"
    ModelName      "2710"
    HorizSync       88.8
    VertRefresh     59.5
    # 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
    Modeline "2560x1440"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync
    DisplaySize 597 336
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option "NoLogo"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
        Modes "2560x1440"
    EndSubSection
    Option "UseEDID" "False"
    Option "UseEDIDDPI" "False"
    Option "UseEDIDFreqs" "False"
    Option "ExactModeTimingsDVI" "True"

## Metamode for single QX2710 (2560x1440)
#   Option "metamodes" "DFP-0: 2560x1440 +0 +0"

## Metamode for dual QX2710 (5120x1440)
    Option "metamodes" "DFP-0: 2560x1440_60 @2560x1440 +0+0, DFP-2: 2560x1440_60 @2560x1440 +2560+0"
EndSection

I’m not entirely certain whether all of the EDID options and ExactModeTimingsDVI are required. I went through a few iterations of trial and error based on information from a number of sources. I tried various “modelines” that I found on the Internet, but I ended up using one generated by the cvt command-line tool.

cvt command output
wonko@zaphod:~$ cvt 2560 1440 60
# 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
Modeline "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync
wonko@zaphod:~$

Everything is up and running smoothly, and my desktop resolution is now 5120x1440. This is a huge step up from the pair of 1024x768 LCD panels I had back in 2001.

My old Debian machine

Making use of the new screen real estate

I don’t use a tiling window manager, but I have some tiling scripts set up for my preferred window manager, Sawfish. With the old monitors, I usually had each monitor split up into two asymmetric columns. The narrow column on the left might contain Pidgin chat windows or terminal windows, and the large column on the right might contain something like a web browser or a text editor.

The old 3840x1080 layout
The new, spacious 5120x1440 layout

I’m completely amazed by how much room I have now. I decided to split each monitor up into three columns—one wide column down the middle with a narrow column to each side. I had to trim the width of the columns a tiny bit to make room for the new narrow column, but even the narrow columns are wide enough for an 80-column terminal window.

Those narrow columns are almost tall enough to stack three 80x24 terminal windows on top of each other—they only miss by a single row for each window. It is close enough for me to be very excited, though, because I can fit a giant Emacs window in the center column with three 80x23 terminal windows on each side.

This is so much better than trying to find a legible font that would let me squeeze four 80x24 terminal windows on a single 15” 1024x768 monitor.

Where to buy? Do I need a pixel perfect guarantee?

There seem to be plenty of QNIX monitors available through Amazon, but you’ll save a little money if you buy them through eBay. I bought mine on eBay from someone called dream-seller. There was a delay in shipping, though. They told me there was some pretty bad weather over there due to a typhoon, and Google sure seemed to agree with them. Once the monitors were shipped, though, they arrived in less than a week.

Some of the QNIX monitors are sold with a “pixel perfect guarantee”. I thought it was well worth the extra $20 to $30 to get that guarantee. The LG panels used in these monitors are from the batches that Apple, Dell, and Asus didn’t want to use in their monitors, so there may be quite a few dead pixels in these batches.

Many of the listings without a guarantee said the monitors may have up to five stuck or dead pixels. If I had one dead pixel, I wouldn’t be able to stop staring at it. I don’t know what I’d do if there were five!

I also imagined a warehouse with two stacks of monitors, with perfect monitors in one pile and monitors with dead pixels in the other. I knew exactly which stack I wanted my new monitors to come out of.

There’s more work to be done

These monitors were just too big for my dual-monitor stand. I hung them up there briefly, but it just wasn’t going to work out, and they seemed to be sitting up much higher than my old displays. I felt very small looking up at them.

The new monitors are on their stock bases for now, which is a little weird for me. My old monitors were mounted about five inches above the surface of the desk. Sitting on their own, these monitors are only a little over two inches above the surface of the desk. I feel like I’m looking down at them.

My old monitors on their stand

I’m planning on buying some unistrut and modifying my existing monitor stand to fit these gargantuan monitors. They really are quite large. They’re set up at about a 45-degree angle, or is it a 135-degree angle? I guess it depends on where you measure it from. They butt right up against each other, and together they measure over 45 inches from end to end.

Update: QNIX QX2710 and Ubuntu 14.04 (Trusty Tahr)

I had a little bit of trouble with my QNIX monitors after upgrading to Ubuntu 14.04. Whenever the screensaver would power down the screens, they would almost immediately power back up. This wouldn’t be so bad if they weren’t also being set to an unsupported mode that made them draw ugly, bright, green vertical lines down the right side of the screen.

I didn’t notice it the first time it happened, and I ended up with burned-in vertical lines that took about ten minutes to fade away. The problem turned out to be the new light-locker program, which hands control of the display back to lightdm when the screen is locked.

I opted for the easy fix. I just uninstalled light-locker, and everything is working just fine now.

wonko@zaphod:~$ sudo apt-get remove light-locker

Update – July, 2014: Dead pixel scare

I thought I found a dead pixel on one of the monitors. I found it within two inches of a mote of dust that seems to be stuck between the panels. I took a picture to post here, and in that close up, the dead pixel doesn’t look so dead. It looks like a little more than half of two different pixels are being blocked out.

Dead Pixel or Dust?

I have no idea how dust finds its way inside of an LCD panel, but this isn’t the first time I’ve had this happen. A large chunk of something managed to work its way into one of my old laptop’s LCD panels.

I still plan to debezel these monitors, but I thought I should post this finding first. The dust was definitely not there when these monitors arrived last year, and I don’t want anyone thinking I did something stupid while I was removing the bezels!

I can only assume that the dust dropped into place while I was setting up my new monitor stand. Maybe it will shake loose when I’m taking them down to take the bezels off.

Update – August, 2014: Faux dead pixel update

Earlier today, I checked to see if that pixel sized dust mote was still there. I found a bright white window, and moved it over to the location of the dust mote. It was still there. I don’t know why, but I decided to give it a good solid thwap with my finger. The dust immediately vanished!

I must have worked it loose, and it fell out of sight. The smaller speck of dust is still there, and I couldn’t dislodge it the same way. It seems much more out of focus and fuzzy than the other one, so I’m assuming it is between the LCD and the backlight, while the bigger one was in front of the LCD panel. Maybe it will work its way loose on its own.

Budget Linux Workstation Build - 2013


I recently built a new Linux workstation to replace my laptop, and that got me thinking that it was probably about time to upgrade Chris’s laptop as well. I did put an SSD in her very old Dell Inspiron 6400 last year, and that did breathe a bit of new life into it, but having only about 3.2 GB of usable RAM was really starting to feel cramped.

She hasn’t taken her laptop out of the house in over a year, so I figured it was safe to move her back to using a desktop computer again. She doesn’t need as much horsepower as I do, but I wanted to make sure that her new machine would be a significant upgrade.

Geekbench score for the FX-4130

I thought it might be worthwhile to someone if I published this parts list. All the hardware is working just fine under Ubuntu 13.04, and this combination of hardware also managed to eke out one of the higher Geekbench scores for an FX-4130.

The Parts List

Optional:

Total cost: $493 ($517 with upgraded CPU cooler)

Even though I had to buy a new video card for this machine, the price was still over $200 less than my own 8-core machine. That’s a pretty significant savings, and for her purposes this machine won’t be noticeably slower, either.

Why an FX-4130 CPU?

Chris doesn’t often perform tasks that would benefit from the eight cores of an FX-8350—single core performance is much more important for her. The FX-8350 only outperforms the FX-4130 by roughly 15% in single threaded tasks. That’s a very reasonable performance hit to take in order to save $90.

I’m not a very advanced gamer, but none of the games I play seem to use more than three or four cores. The FX-4130 should work nearly as well for gaming as the FX-8350.

Taking a chance on a different motherboard

There was a combo deal on the motherboard and CPU when I ordered these parts last week. I ended up paying about $30 less than what I listed up above. I’ve had good luck with Gigabyte motherboards in the past, and even if it ended up running a bit slower, that would be alright.

I’m actually quite pleased with this motherboard, even if the chipset is getting pretty old now.

Is 8 GB of RAM enough?

I think 8 GB is a little on the low side, but it is definitely enough to get by. Her computer has been up for a couple of days now, and she’s barely touching swap and has plenty of cache.

8 GB is a bit tight, but still pretty comfortable
wonko@chris-desktop:~$ free
             total       used       free     shared    buffers     cached
Mem:       8158820    4293420    3865400          0     208300    2359672
-/+ buffers/cache:    1725448    6433372
Swap:     31249404       3584   31245820
wonko@chris-desktop:~$

If you’re primarily a gamer, you probably won’t notice much improvement today with anything beyond 8 GB of RAM. That is likely to change in the future, but hopefully memory prices will come back down by then.

32 GB feels quite cavernous
wonko@zaphod:~$ free
             total       used       free     shared    buffers     cached
Mem:      32896640   11416564   21480076          0     884124    6201292
-/+ buffers/cache:    4331148   28565492
Swap:     33554428          0   33554428
wonko@zaphod:~$

The Raidmax RX-500AF power supply

The Topower power supply that I used in my computer seems solid enough, but its pair of small fans are pretty loud. The Raidmax power supply has less capacity than the Topower 650, but its single large fan is much quieter. The Raidmax is also less expensive.

Don’t tell Chris, but I ended up stealing her Raidmax RX-500AF, and I put the loud Topower ZU-650W in her new machine. My office is noticeably quieter now!

Some benchmarks

I went into quite a bit of detail on the benchmarks I ran in my last workstation build entry. I’m going to try not to be as long-winded this time.

Benchmark/Test             FX-4130   FX-8350
Geekbench v2               7249      12981
Geekbench v3 single-core   1998      2313*
Geekbench v3 multi-core    5518      11870*
Kernel Compile -j 1        10:59     8:46*
Kernel Compile -j 4        3:38      2:54
Kernel Compile -j 8                  1:36

The FX-8350 machine is only about 15% to 25% faster in the tests that stress only a single core and about twice as fast when they are maxing out all of their cores. The FX-4130 is one generation older and 400 MHz slower than the FX-8350, so I think the FX-4130 is doing a bit better than I would have expected.
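Putting a number on that single-core gap is just a matter of converting the -j 1 compile times from the table to seconds:

```shell
# Single-job kernel compile: 10:59 (FX-4130) vs 8:46 (FX-8350).
awk 'BEGIN {
    fx4130 = 10 * 60 + 59   # 659 seconds
    fx8350 = 8 * 60 + 46    # 526 seconds
    printf "FX-8350 is %.0f%% faster at -j 1\n", (fx4130 - fx8350) / fx8350 * 100
}'
```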

There are some asterisks in that table next to some of the FX-8350’s scores. My FX-8350 is currently overclocked by 200 MHz, and I don’t have scores recorded anywhere for those tests at stock speeds. Those numbers may be a little inflated, but they aren’t all that far off.

The Nvidia GT 640

I was planning on replacing my Nvidia GTX 460 with the new GT 640. I didn’t research this much at all. It was one of the better-looking cards at the price point I was aiming for. It has more CUDA cores than my GTX 460, but they run at a somewhat slower speed. Some quick math in my head said that would probably make it a wash as far as performance was concerned.

The GT 640 I ordered also has twice as much memory as my old GTX 460. Rumor has it that the extra video RAM is helpful with higher-resolution displays, and I’m waiting for a pair of QNIX 2710 2560x1440 displays to arrive. My own math and logic didn’t quite jibe with that rumor, but I didn’t think a video RAM upgrade that was effectively free would hurt.

It turns out that the GT 640 is quite a bit slower than the GTX 460. The first few games I loaded up were getting similar frame rates, so I figured I did a good job. The next day, though, I noticed some significant stuttering in Paranautical Activity, especially in the boss rooms.

The biggest difference between the GT 640 and the GTX 460 is memory bandwidth. The memory in the GTX 460 is clocked quite a bit higher and the memory bus is twice as wide. This difference in memory bandwidth doesn’t make for a noticeable change in frame rate in every game, but sometimes that difference is hard to ignore.

The GT 640 is a fine video card for the price, though. It may be slower than the GTX 460, but it costs less and uses significantly less power. My GTX 460 requires TWO PCIe power connectors, and the GT 640 doesn’t require any. That means the GT 640 makes for a cooler computer that costs a bit less in electricity to run.

The verdict

I’m very happy with this budget-conscious Linux workstation build. When it comes to common office productivity or gaming, this FX-4130 desktop is almost as fast as my FX-8350 desktop. That’s perfect, because those tasks are exactly what this new machine is intended for.

It ended up costing me about 30% less to build this FX-4130 based machine than it would have cost to build another FX-8350 rig, and Chris won’t even notice the difference.

Does DDR3-1600 or DDR3-1866 Provide a Measurable Performance Increase Over DDR3-1333?


I had one bad stick of DDR3-1333 among the parts I recently used to build my new Linux workstation. That bad stick of RAM had me running many, many passes of MemTest86. I was surprised to see that MemTest86 was telling me that my RAM has quite a bit higher throughput than my FX-8350’s L3 cache.

This piqued my curiosity. Is DDR3-1066 faster than my L3 cache? If it is faster, then am I seeing any benefit at all from my DDR3-1333? DDR3-1600 should be faster still. Would I see any improvement if I overclocked the RAM to DDR3-1600?

DDR3-800 is much slower than the FX-8350's L3 cache
DDR3-1066 is probably just about fast enough, as it nearly matches the FX-8350's L3 cache speed
DDR3-1333 is significantly faster than the FX-8350's L3 cache - I wouldn't pay for anything faster
DDR3-1600 tested at almost double the bandwidth of DDR3-800

After the replacement stick of RAM arrived, I ran MemTest86 and Geekbench 2 at all those speeds. The results weren’t surprising, but I am happy to report that all four of my 8GB Corsair XMS DDR3-1333 DIMMs overclocked to DDR3-1600. They survived three long nights worth of MemTest86 passes, and they’re still happily running in their overclocked state, but is there really any value in leaving my RAM overclocked?

The Geekbench results

I ran Geekbench 2 at each of the four speeds between DDR3-800 and DDR3-1600. These are the results:

DDR3-   Total Score   Int     Float   Mem Perf   Mem B/W   MemTest   L3 Cache
800     13767         11452   23736   5298       3924      7644      10500
1066    13900         11216   23870   5634       4982      10114     10500
1333    14025         11178   24071   5690       5507      12225     10500
1600    14244         11273   24511   5678       5841      13746     10500

I didn’t know exactly what the numbers would look like, but I have to say that the results were very predictable. Faster memory has been providing similar results for decades. Within reason, as long as you have enough CPU cache, it doesn’t really matter how fast your RAM is.

Just look at the huge bandwidth gap between DDR3-800 and DDR3-1600: DDR3-1600 has almost 50% more bandwidth. Yet there’s less than a four-percent difference in their overall Geekbench scores.
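Both of those percentages fall straight out of the table above, comparing the MemTest86 bandwidth column and the total Geekbench score for DDR3-800 against DDR3-1600:

```shell
# Bandwidth gain vs overall score gain, DDR3-800 -> DDR3-1600,
# using the numbers from the results table.
awk 'BEGIN {
    printf "memory bandwidth gain: %.1f%%\n", (5841 - 3924) / 3924 * 100
    printf "overall score gain:    %.1f%%\n", (14244 - 13767) / 13767 * 100
}'
```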

What about gaming performance?

I found a lot of anecdotes regarding gaming performance and memory speed on the Internet. Most claims use phrases like “some games” and “7 frames per second” without saying what sort of frame rates they started from. If you’re only getting 30 frames per second, a seven-frame-per-second increase is huge. If you’re already breaking 100 frames per second, that same increase is tiny. These anecdotes are almost always missing that important context.
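The same absolute gain means very different things depending on the baseline:

```shell
# Seven extra frames per second, relative to two different baselines.
awk 'BEGIN {
    printf "7 fps on a 30 fps baseline:  %.0f%% faster\n", 7 / 30 * 100
    printf "7 fps on a 100 fps baseline: %.0f%% faster\n", 7 / 100 * 100
}'
```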

I didn’t use any proper science here, but I can say that I certainly didn’t see a noticeable increase in frame rate when I overclocked my RAM to 1600.

It would have cost me an extra $30 per DIMM if I had bought DDR3-1866 instead of DDR3-1333. That would have added $120 to the cost of my workstation build. If you want better gaming performance, then I say spend that money on a better video card.

The verdict

I’ve always said that if I have the choice between buying faster memory or more memory, I will always choose more memory.

This data probably only directly applies to AMD’s Bulldozer and Piledriver CPUs, but this line of thinking has been generally correct at least all the way back to the first 256 KB SIMMs I replaced in the early nineties.

My three-year-old Core i7 laptop gets better memory and cache scores in MemTest86 and Geekbench 2, so it is quite possible that Intel hardware squeezes a bit more performance out of faster memory. Even with Intel hardware, I bet you’d still get much better bang for the buck putting that $120 towards a better CPU or video card.

I’m going to keep my RAM running at 1600. It isn’t speeding things up much at all, but it also isn’t hurting anything.

My Backup Strategy for 2013 - Real Time Off-Site Backups


The automated backup strategy that I’ve been using for the past three or four years was very solid, and it served me very well. I was using rdiff-backup to perform daily backups of my entire home directory to local storage, and I was using Duplicity to create and upload encrypted backups of my most important data to a remote server every day.

This was very stable and very bandwidth-efficient, but each backup only ran once a day, and rdiff-backup and Duplicity are both very heavy on I/O and CPU.

Moving my backups to the cloud

I started exploring various self-hosted cloud storage solutions a few months ago. After much consideration, I decided to use Seafile as my cloud storage solution. I had to rearrange my home directory a little to get everything important synced up to my Seafile server, but once I did, I realized that I didn’t need my old backup plan anymore!

Seafile library details

A good backup plan needs to include history. It might take days or even weeks to notice a corrupt or missing file, so it is important to be able to go back into your backup history and recover that file. Seafile definitely has me covered here. Seafile lets me set the number of days of history I want to hang on to for each individual library. My rdiff-backup and Duplicity backups only recorded a snapshot of my files once each day, while Seafile does so continuously throughout the day as files change. I consider that a very big win.

A good backup plan also needs to move your files off-site, and Seafile has me covered here, too. Changes to my files are pushed up to my server almost immediately after I save them. My computer could get fried right now, and I would probably only lose this paragraph.

Sketch of my backup locations

Seafile doesn’t just give me a backup of my data, though. It is also keeping my important data synchronized between my laptop and my desktop. That not only gives me an extra full copy of my current data, but it also means that I don’t have to remember to take files with me when I leave the house.

Don’t forget to back up your self-hosted cloud!

In my opinion, a proper cloud-based storage solution has redundancy. Of the self-hosted cloud storage solutions that I checked out, the only ones that offered any real redundancy didn’t have client-side encryption. That made them pretty useless for me.

I don’t care if my Seafile server becomes unavailable for a while, but it is very important to me that I don’t lose all of the history that it is storing for me. I have some existing infrastructure in place to help keep my Seafile server safe.

  • My Seafile virtual server is stored on a RAID 10 to help protect against hardware failures
  • All my virtual servers are backed up to another local server every night
  • Those nightly backups are then synced off-site
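The off-site step in that list is just a one-way mirror of the nightly backup directory. A minimal sketch of the idea, assuming rsync over SSH (the host and paths here are illustrative, not my actual setup):

```shell
# Mirror the nightly backup directory to another machine over SSH.
# Host and paths are made up for illustration; --delete keeps the
# mirror from accumulating files that no longer exist locally.
SRC=/backups/nightly/
DEST=backup@offsite.example.com:/backups/nightly/
echo rsync -a --delete "$SRC" "$DEST"   # drop the echo to actually run it
```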

This means that two copies of my current data exist at almost any given time: one on my local machine, and one on the Seafile server. I get even more redundancy if my laptop is powered up. It also means that my backup history is stored in three places: on the Seafile server, on the local backup server, and on the off-site backup server.

I’m probably doing a better job of managing my backups than most corporations at this point.

My Seafile libraries

How safe is Seafile?

I’ve only been running Seafile for a little more than a month so far. I haven’t had any data loss, and I’ve been able to pull files out of my history. One month isn’t a long time. I’ll definitely report back regularly to let everyone know how things are going.

So far, though, things are looking pretty good!

My Friend Brian is Rebuilding an Arcade Cabinet

| Comments

My friend Brian decided that he wants to build his own arcade cabinet. He’s been poking around on Craigslist for a couple of weeks trying to find a decent shell to rebuild. He finally found one that he liked, so we pulled the seats out of my minivan and drove to Arlington to pick it up.

Brian's empty arcade cabinet shell

I think he did alright. The cabinet was already in such a state that we had absolutely no idea what game it used to be. In fact, we thought that it might have started life as a scratch-built MAME cabinet.

We were wrong about that. Going by the shape and dimensions of the cabinet, and the original holes on the underside of the control panel, we are pretty certain that it is a Main Event cabinet. It is already so far gone from its original state that we don’t have any reservations about cutting into it and converting it to a MAME cabinet.

I’m pretty excited. It has been quite a few years since I finished up my own arcade cabinet build, and I haven’t had the opportunity to work on anything like this since. I think it is going to be a lot of fun helping to bring this cabinet to life!

My Home Office - 2013

| Comments

The view from my chair The view from behind my shoulders The view from below

The desk – Bush Series A – Left L-Bow

I’ve been using this desk for a long time, probably since around 2006. It isn’t made of proper wood, but it is quite heavy-duty. The three legs are 1” sheets of laminated particle board, and the surface is a heavy, 1”-thick slab of laminated MDF. My desk is quite large, and it doesn’t fit through doorways when fully assembled.

The desk is roughly 60” wide by 44” deep. It is big enough that I can just rotate my chair to the right if I need to work with my laptop, and back to the left to use my desktop. I tend to focus primarily on one monitor at a time, but the primary display changes depending on what I am doing. I’m pretty fluid in this regard, but I do almost all of my text editing and gaming on the right-hand monitor.

When I switch my focus from one monitor to the other, I rotate my chair and move the keyboard over so it is centered on that display. It is more comfortable to keep your neck straight most of the time.

I upgraded the desk’s grommet by adding a powered USB hub and some power outlets. I use the USB hub to charge my phone, and it comes in handy for flash drives and USB hard drives. The power outlets are plugged into my UPS, and they come in handy all the time.

The surface of the desk is very smooth, so I don’t need to use a mouse pad. It has taken a bit of a beating over the last half-dozen years, so there are some scratches and dings.

It is shaped to fit in a corner, but I have it in the wrong corner. It made sense to rearrange things in here a bit after I connected a wall-mounted LCD TV to my arcade cabinet. I can now easily see both of my monitors and the TV from where I’m sitting.

The dual monitor stand

Harley is happy to be blocking Torchlight

I built this monitor stand three years ago. The combination of PVC and steel pipe is pretty unusual, and not without its problems. The initial design ended up being a failure, and I ended up wandering around Lowe’s trying to come up with a way to salvage it.

It turned out that the screw holes in ¾” PVC j-hooks match up perfectly with VESA mounting points. The rest of the design fell into place from there. I didn’t end up having to glue any of the PVC; friction was enough to hold it in place. I only had to tweak it once every few months, but that wasn’t really much of a problem.

That is, until my new friend Harley showed up. He likes to attack mouse pointers, rub up against the monitors, and shimmy his way underneath them. He doesn’t mess them up as often as he used to, but I still have to straighten things out every few weeks.

I’m very happy with the way the steel pipe flange bolts so solidly to the desk. I’d like to keep that and upgrade the rest of the mount using unistrut, but I’m going to put that off until my next monitor upgrade.

The mechanical keyboard

The keyboard is an IBM Model M PS/2 keyboard. This particular one happens to have been born in 1993. I have some plans for this keyboard. I’d like to at least replace the old, coiled cable with a nice fabric-jacketed cable. I also miss the volume control knob on my old Bluetooth keyboard. I might try to add a nice analog volume knob using an Arduino.

For now, though, I’m just happy to be using a mechanical keyboard again.

The computer

FX-8350 Linux workstation

I just recently built a new Linux workstation to replace my aging laptop. It is an AMD FX-8350 with 32 GB of RAM, a 128 GB Crucial M4 SSD, and an older, power-hungry NVidia GTX 460 video card. I am also mirroring a pair of 7200 RPM 1 TB drives that I had lying around; they are an excellent replacement for the second 500 GB drive that was in my laptop.

The video card might seem a little outdated and out of place in this machine, but it is more than fast enough to meet my needs. It is at least 10 times faster than the NVidia GT 230M that was in my laptop, and it runs nearly every game I play at well over 100 frames per second with all the settings maxed out. I won’t bother upgrading it until I want to play a game that won’t run smoothly.

The headset

I’m using a Monoprice gaming headset. It’s one of the few gaming headsets that not only has a removable microphone, but a microphone that isn’t obnoxiously huge. It also happens to be just big enough for my oversized cranium, which is a nice bonus. It’s solid black and very plain looking, which is exactly what I was hoping to find.

They’re the only set of huge, closed, monitor-style headphones that I’ve ever owned, so I’m not sure how useful my opinion is. They’re comfortable enough that I don’t mind wearing them for an hour or more at a time. They sound pretty good to me, and they’re louder than I’d ever need. They also happen to be reasonably priced, which certainly doesn’t hurt.

The Aeron chair

I love my Aeron chair. I’ve only had it for about four years so far, and I really wish I’d bought one sooner. Everyone should invest in a quality office chair. I should have bought an Aeron at least ten years ago. I would still have it today, and it would still be under warranty.

The three Aeron chairs that I’ve bought are all used or refurbished. One of them was manufactured in 1997. That one has a very slight wobble to it, but is otherwise in very good shape. Computer hardware becomes outdated pretty quickly. Quality office furniture lasts much longer, and is worth investing in sooner rather than later.

The UPS – an old APC 650

I’ve had this UPS for over a decade. I bought it used off of eBay shortly after I moved to Irving, TX in 2001. We used to get a lot of brownouts that would cause my computer to reboot, and a UPS was a good fix for that. My APC 650 is on its third or fourth replacement battery, and I won’t be surprised if it is still running in ten more years.

The cocktail arcade cabinet and television

Front view 3/4 view One of the early whiteboard diagrams

The arcade cabinet is one of my favorite pieces of hardware in my office. My brother-in-law John and I probably spent nearly 100 hours building it, and we’re both very proud of how it turned out.

It is not just a fun machine for playing classic video games. It also has room for seven hard drives, so I use it as my home file server. I also started using it as a media player now that it has a wall-mounted TV connected to it.

I’ve already written many thousands of words about the build process. Here are some useful entry points into that series of articles:

Whiteboards

I love whiteboards, and I am a big fan of using melamine sheets as inexpensive whiteboard material. My previous home office had 84 square feet of whiteboard. The layout of my current home office just didn’t have room for all three of my old whiteboards because the arcade cabinet and desk both block quite a bit of wall space.

The larger whiteboard The smaller whiteboard

I now have one 4’ by 8’ and one 4’ by 5’ whiteboard, and that is still quite a lot of square footage. The nice thing about having a lot of whiteboard space is that you don’t have to erase things as often. I bet we spent the better part of a year drawing out all sorts of arcade cabinet designs before it evolved into what we actually built. There were often at least two generations of design on the walls at any given time, and it was nice to have enough space to draw a life-size diagram of the tabletop and controls.

zsh-dwim: New Transformations for watch, modprobe, and dpkg

| Comments

I haven’t had a chance to update zsh-dwim in quite a while. I had a tiny spark of inspiration for a new transformation tonight, so I went ahead and implemented it. I also picked two other easy ideas off of my to-do list to implement, since the new idea was so tiny.

Tonight’s new idea

I was manually installing a couple of packages using dpkg. There were a couple of missing dependencies, so I immediately knew that I’d have to run apt-get -f install to pull those dependencies down. It struck me right away that this was exactly the sort of case I was thinking of when I created zsh-dwim.

When dpkg --install fails due to missing dependencies, it exits with a status of 2. Now, if you hit control-u when that happens, zsh-dwim will be smart enough to give you back the apt-get -f install that you are looking for.
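The status check itself is trivial. A standalone sketch of the idea (this is not zsh-dwim’s actual code, just an illustration of keying off the exit status):

```shell
# dpkg --install exits with status 2 on errors such as unmet
# dependencies, so a follow-up command can be suggested automatically.
suggest_fixup() {
  # $1 is the exit status of the previous dpkg --install
  if [ "$1" -eq 2 ]; then
    echo "apt-get -f install"
  fi
}

suggest_fixup 2   # simulating a failed install; prints: apt-get -f install
```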

watch –> watch -n 10 –> watch -n 30 –> watch

I don’t know about you, but adjusting the interval of the watch command is something I do quite often, and I rarely remember to make the change until I’ve already punched in the entire command. With this new transformation I can just hit control-u repeatedly to cycle through various intervals.
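The transformation is just a small ring of rewrites. A rough standalone illustration of the cycle (again, not zsh-dwim’s internals):

```shell
# Map the current command line to the next interval in the ring:
# watch -> watch -n 10 -> watch -n 30 -> watch -> ...
cycle_watch() {
  case "$1" in
    "watch")       echo "watch -n 10" ;;
    "watch -n 10") echo "watch -n 30" ;;
    *)             echo "watch" ;;
  esac
}

cycle_watch "watch"         # prints: watch -n 10
cycle_watch "watch -n 30"   # prints: watch
```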

modprobe –> modprobe -r –> modprobe

I’m not entirely certain how useful this one will be, but it was on my list. It was also very easy to implement, and I find these circular transformations interesting. We’ll see how it works out, assuming I even remember that it is there.

My Linux Workstation Build - 2013

| Comments

Inaccurate picture: My desk actually looks like a war zone after surviving this workstation build!

The motivation

I’ve been thinking about upgrading all year. I almost did back in January, and I wish I had decided to do it then. Memory prices have just about doubled since then. I’ve been using a laptop as my primary workstation since 2006, and I wasn’t sure if I wanted to switch back to a desktop. Since I’ve had some recent success using Seafile to synchronize a huge chunk of my home directory, I figured I could get away with using two different computers again.

I’ve had my current laptop for over three years now, so I figured it was time for an upgrade. It is fast enough for almost everything I do, but I’ve been playing more games lately and the video card just isn’t fast enough. I have the graphic settings turned way down in most games.

Some of the games I play, like Team Fortress 2, run at perfectly acceptable frame rates. I’m not so lucky in other games. Upgrading to a desktop gives me a guaranteed video card upgrade for free.

The goal

History and Moore’s law made me think that I’d be able to build a reasonably priced Linux desktop that was at least three or four times faster than my laptop. I was very mistaken. Moore’s law doesn’t seem to be translating directly into performance like it used to.

Doubling the memory and processor speed of my Core i7-720QM laptop turned out to be pretty easy to achieve, and I didn’t have to break the bank to do it.

Parts List

Optional:

Total cost: $715

A slightly faster equivalent to the video card I used would be an Nvidia 650. That would bring the total cost up to about $825.

All the devices on this motherboard are working just fine for me on Ubuntu 13.10. I don’t have any USB 3 devices, though, so I am unable to confirm whether they are working correctly or not. I can confirm that the USB 3 ports on the rear panel are working just fine with USB 2 devices.

This is neater than any computer I've ever built

The processor and motherboard

I’ve had my sights on the AMD FX-8350 for quite a while now. To get a processor from Intel with comparable performance you’ll end up paying around $100 more, and you’ll also have to pay more for the motherboard. I could have spent $150 to $200 more on Intel parts for about a 15% performance boost, but that didn’t seem like a good value.

Should the FX-8350 really be referred to as an eight-core processor? Probably not. It sounds to me like it has eight nearly complete integer units and four complete floating-point units. I’d like to do a bit of testing to see exactly how close to complete those eight integer units actually are, but for now, I am going to say that the FX-8350 is more like an eight-core processor than a four-core processor with hyper-threading, at least as far as integer operations are concerned.

When my friend Brian built his FX-8350 machine, he ended up using a motherboard with the 990 chipset. At that time, the motherboards with the 970 chipset weren’t shipping with BIOS support for the FX-8350. This isn’t a problem anymore, so I was able to choose a less expensive motherboard.

I opted for the MSI 970A-G43. I chose this motherboard because it was one of the least expensive 970-based boards from a manufacturer I trusted. I’m much more impressed than I thought I was going to be. I knew before placing my order that it had six SATA 3 ports and room for 32 GB of RAM. When I opened the box, I was surprised to see solid capacitors on the board. I’ve never actually had a capacitor failure, but it was still nice to see.

Start of memtest86+ run: Not sure why memtest+ is reporting DDR1600

32 GB of memory

I definitely went overboard on memory. My laptop has 16 GB of memory, and almost 10 GB of that is usually being used as disk cache. I could function relatively comfortably most days with only 8 GB. It doesn’t matter, though. Memory is still cheap enough that it made sense to me to max it out, even though it is almost twice as expensive as it was late last year.

The NZXT case

This NZXT Source 210 is the second case from NZXT that I have seen. They are both quiet and well made. They lack some features, though. I usually prefer cases with easier-to-access 3.5” drive bays rotated 90 degrees, but I’m willing to live without them at this price point.

The Topower ZU-650W power supply

The power supply was another pleasant surprise. The spare video card I have has a pair of PCIe power connectors that need to be populated, so I wanted to find a power supply that could meet that requirement. The Topower ZU-650W just happened to be on sale while I was placing my order, and I am lucky that it was.

All of its cables are wrapped in sleeves, so it is easier to manage that potential rat’s nest of wires. I was also surprised to see that it came with five Velcro cable ties. The Topower has one feature that really surprised me: a “turbo fan switch.” I haven’t had any sort of “turbo” button on any of my computers in 20 years!

The solid-state drive

I didn’t have to buy the 128 GB Crucial M4 SSD. I simply moved it from my laptop into the new machine, and it booted right up. I included the price in the parts list to help paint a more complete picture of the build.

The video card

Late last year, a friend of mine built himself a new computer very similar to this one. He donated his old video card to me for use in my arcade cabinet. I couldn’t use it in the arcade cabinet because it requires two PCIe power connectors, and the power supply in the arcade table only has one.

This card is an NVidia GTX 460. With its 336 CUDA cores, this card should be around ten times faster than the mobile NVidia card in my laptop. This should be fast enough for the foreseeable future. It is doing a fine job running all the games I have. That isn’t too surprising, since the games I play are all pretty old.

I’m getting 100 to 150 frames per second in Team Fortress 2 with some antialiasing enabled and all the rest of the settings maxed out. Killing Floor still drops down below 60 frames per second when things get busy; I’m pretty sure the Linux port is just buggy. Maps like “Abusement Park” that were nearly unplayable on my laptop are running just fine now, though. I think this video card will keep me happy for quite a while.

If I did have to buy a video card today, I would choose the NVidia 650 Ti. It is at a very nice point on the price/performance curve, and I’ve seen the Nvidia 650 Ti run Team Fortress 2 at 2560x1440 with all the settings maxed out. That is more than fast enough for my own purposes. You could save a little money with the NVidia 650, but it has half as many cores as the 650 Ti, so the bang for the buck isn’t as good.

Benchmarks

I was very interested in seeing just how far I’ve come from my laptop. I tried to come up with a few benchmarks that would help gauge just how much of a real world performance increase I would see.

Geekbench – Laptop: 5,950 FX-8350: 12,981

Geekbench score at stock clock rate

Geekbench is a pretty good benchmark of CPU performance, and I relied on Geekbench’s results browser very heavily while I was shopping. I was really hoping to triple my laptop’s processor performance, but I quickly learned that the required hardware was pretty expensive.

My first-generation, quad-core i7 laptop manages a Geekbench score of 5953. I didn’t want to bother upgrading unless I could double that score. I was a little worried, though, because the scores for the FX-8350 cover a range from 9500 to 13,500. I was hoping to reach 13,000.

This wide range of scores for the FX-8350 was my primary motivation for this write-up. I have no idea what is wrong with those poor FX-8350 machines that are scoring under 10,000, and I was a bit worried that I would be down there with them. I’m happy to be able to report that the FX-8350, paired with the very reasonably priced MSI 970A-G43, performs very well.

I was happy to see that the parts I chose for my new Linux desktop were able to pull off a Geekbench score of 12,981. That’s close enough to 13,000 for me, and it is better than the majority of scores for FX-8350 machines. This is definitely good enough for now, but I tried out a small bump in CPU multiplier, and that brought the score up to 13,649.

Linux kernel compile time – Laptop: 3:48 FX-8350: 1:40

I ran this test on both machines entirely in RAM on a tmpfs file system. This seemed fairer: my drives are encrypted, and I didn’t want the laptop’s lack of encryption acceleration to be a factor.

I ran make defconfig && /usr/bin/time make -j 12 on a fresh copy of version 3.10 of the Linux kernel. I did some testing way back when I bought this laptop, and determined that 12 jobs was pretty close to ideal. I did make a run with -j 16 on the FX-8350, but I saw no improvement.

The laptop completed the task in 3:48, while the FX-8350 took only 1:40. That’s 2.28 times faster than the laptop and is in line with the Geekbench results.
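That 2.28x figure comes straight from the raw times, for anyone checking the arithmetic:

```shell
# 3:48 vs 1:40, converted to seconds.
laptop=$((3 * 60 + 48))   # 228 seconds
fx8350=$((1 * 60 + 40))   # 100 seconds
awk "BEGIN { printf \"%.2fx faster\n\", $laptop / $fx8350 }"   # prints: 2.28x faster
```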

Note: You don’t need more jobs than you have cores when compiling from a RAM disk, since the compiler never has to spend any time waiting on the disk. A make -j 8 gives virtually identical results in this test.

1080p h.264 encoding with Handbrake – Laptop: 9.03 FPS, FX-8350: 26.71 FPS

I haven’t actually transcoded much video in the last two years, but I’ve had to wait on this kind of job often enough that this seemed like a useful test. To save some time, I encoded chapter 15 of the Blu-ray “Up” using my slightly modified “high profile” settings in Handbrake. Chapter 15 is roughly four and a half minutes long, so I didn’t have to spend too much time waiting for results.

My new FX-8350 workstation is almost three times faster than my laptop in this case. The laptop only managed 9 frames per second, while the FX-8350 pulls off 26.7 frames per second. That’s fast enough to transcode a 24-frame-per-second Blu-ray movie in real time, even using these “high profile” settings.

openssl speed aes – Laptop: 60 MB/s, FX-8350: 210 MB/s

This test is a bit flawed. I was hoping to see how much advantage the AES acceleration instructions would give the FX-8350, but the openssl package that ships with Ubuntu doesn’t support them. The FX-8350 still manages to pull numbers that are over three times faster than my old laptop.
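If you want to check whether your own CPU advertises hardware AES support, and how much the accelerated path helps, here is a quick sketch. The cpuinfo flag and the -evp form of the benchmark are standard, but treat this as a starting point rather than a definitive procedure:

```shell
# The Linux kernel exposes AES-NI support as the "aes" cpuinfo flag.
if grep -qw aes /proc/cpuinfo 2>/dev/null; then
  echo "CPU advertises AES-NI"
fi

openssl version

# To measure the difference, compare the two code paths:
#   openssl speed aes-128-cbc        # generic software implementation
#   openssl speed -evp aes-128-cbc   # EVP path, uses AES-NI when available
```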

Laptop:
 type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
 aes-128 cbc      76225.62k    82501.14k    85311.70k    85539.62k    84862.54k
 aes-192 cbc      65285.86k    69284.52k    71094.54k    70744.36k    71078.61k
 aes-256 cbc      56213.70k    59583.91k    61326.30k    60588.61k    60864.64k

FX-8350:
 type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
 aes-128 cbc     112766.76k   119554.97k   123908.45k   280502.03k   285362.86k
 aes-192 cbc      95632.70k   100571.26k   103522.55k   238100.82k   242368.13k
 aes-256 cbc      82652.83k    86610.07k    88366.59k   207808.64k   210090.55k

SSD and whole disk encryption performance

These are the bonnie++ results, including my previous benchmarks from an older article:

Version 1.03c                  ------Sequential Output------- --Sequential Input--  --Random-
                               -Per Chr- --Block-- -Rewrite-- -Per Chr- --Block---  --Seeks--
Machine                   Size K/sec %CP K/sec %CP  K/sec %CP K/sec %CP K/sec  %CP  /sec %CP
Laptop, M4, no encryption  16G   954  98 183205 20 105877   9  4858  99 327088  16  4306 120
Laptop, M4, aes            16G   604  94 152764 16  63475   6  4064  98 145506   6  2380  46
FX-8350, M4, aes            6G   642  99 178581 28  82598  25  3124  99 300500  15  4234 104

The sequential block output has mostly caught back up with the Crucial M4’s unencrypted speed. Sequential input very nearly caught up. I am a little bit disappointed in that, though. I was expecting the SATA 3 ports on my new MSI 970A-G43 motherboard to allow the read speeds to surpass the SATA 2 limited 320 MB/s. I must still be hitting a decryption bottleneck, even with the AES-NI kernel module loaded.

I’m still very pleased with my Crucial M4. Its price and performance are both good, and the drive is still performing well after nine months of hard use and random benchmarks.

The verdict

The new hardware easily exceeded my performance goals. The grunt-work tasks that I usually have to wait for are running two or three times faster than before, which should save me quite a bit of time.

All the games I play are running faster, and they look better to boot. The video card I have is fast enough for now, and it is nice to know that I’m just a video card upgrade away from having a pretty powerful gaming machine.

I have to say that I am very pleased with this build. I now have more performance than I actually need, and I feel that I got plenty of bang for the buck.

Update 2013-07-16: I have two new pieces of information. I ended up with one bad stick of RAM. I have it boxed up and ready to ship back. I’m not in a hurry to find a UPS drop box, though, because 24 GB is still more than I need.

I also swapped out the stock AMD heat sink and fan combo, which is quite loud. I replaced it with an Arctic 7 CPU cooler. It was very reasonably priced, and it doesn’t get nearly as loud as the stock cooler does when it spins up to full throttle. The computer isn’t as quiet as my laptop, but that isn’t surprising. The PSU fans are now the loudest thing in there, and one of the spare hard drives I stuck in there is surprisingly noisy.

My First Arduino

| Comments

Update: Everything arrived. I was a little worried about the push buttons. The USPS had them marked as delivered on Friday, but they weren’t in the mailbox until Monday. Other than that, everything arrived without incident. I just wish I had ordered the resistors earlier. They arrived with the last group of items, and everything else was useless without them!

My custom made Arduino starter kit

I finally took the plunge and bought an Arduino development board. I ended up ordering a knock-off Arduino Uno board through eBay for $12.99 shipped. The board arrived a few days ago, but I don’t have anything useful to hook up to it yet.

I ordered an assortment of various electronics components, most of which are shipping out of Asia. This was significantly less expensive than buying locally or buying something like the Arduino Starter Kit. I just have to be patient, and I have to hope everything successfully makes the journey across the Pacific.

My list of components so far

From various eBay sellers:

  • Arduino Uno clone – $12.99
  • Breadboard + 65 Jumpers – $6.18
  • 100 micro push button switches – $4.43
  • 10 10k OHM NTC Thermistors – $2.50

From dx.com:

Total cost: $46.76

I didn’t buy everything that is in the Arduino Starter Kit, but I think I made some pretty good selections. I don’t need any of the motors or the LCD display for any projects that I have in mind, and I have some random DC motors, potentiometers, and laser pointer parts in my junk box.

Some project ideas

When you have a hammer, everything looks like a nail. Every problem I see lately looks like something that can be fixed with an Arduino. I don’t expect to build all of these things, but here is a list of some of the things I’ve come up with so far:

I also have a few potential uses for an Arduino in my keyboard:

  • RGB LEDs for notifications
  • an analog volume control knob
  • a low-level “panic” button (sometimes games turn off my external monitors on me)

I may have chosen the wrong Arduino

I’m pretty sure that it would have been better to go with an Arduino Nano. The Nano plugs directly into a breadboard, and it looks like it would be quite a bit more convenient for prototyping. I will stick with what I have for now, but I won’t be surprised if I add an Arduino Nano to my toolbox in the near future.

My first attempt at programming the Arduino

An Arduino is rather boring all by itself. It was sitting on my desk for quite a few hours before I learned that the board has an integrated LED connected to pin 13. Shortly after I figured that out, I had it programmed to blink out an S.O.S. on that LED: