Back in January, I purchased a used Canon XSi DSLR. It was a huge step up over the camera on my phone, and thanks to my friend Andy, it didn’t take long to start learning about things like f-stop, shutter speed, and ISO. I am far from an expert, but I feel like I’m quickly working my way past novice status.
I had a lot of fun with the Canon 18-55mm kit lens that came with my camera, but my research told me it wasn’t a great lens. I read about the “Nifty Fifty”—the f/1.8 50mm Canon prime lens. I asked Andy about this 50mm “fixed-zoom” lens, and he explained to me that I shouldn’t say “fixed-zoom” ever again. He did confirm that the 50mm prime was a great idea, and since it was so inexpensive, I ordered one immediately.
In fact, I was so pleased with my Canon 50mm f/1.8, that on the day my lens arrived, I immediately ordered one for my friend Brian.
Unlike zoom lenses, prime lenses have a fixed focal length. This makes prime lenses much simpler, smaller, and cheaper. Zoom lenses with a fast f-stop of f/2.8 start at around $500, while the faster Yongnuo 50mm f/1.8 only costs around $60.
The Canon 18-55mm kit lens is almost twice as long as either of my prime lenses. They may not be as small as Canon’s pancake lens, but my camera feels quite a bit less bulky when I’m using a prime.
While the size, fast f-stop, and price of the prime lenses are all great, not having zoom is sometimes problematic. I take a lot of great shots with my 35mm and 50mm lenses, but I can’t always stand in an appropriate location. A fast zoom lens may cost five or ten times as much as either of my primes, but there are definitely circumstances where I would have appreciated that zoom.
That said, a prime lens is an extremely inexpensive way to take some amazing shots.
Canon EF 50mm f/1.8
The “Nifty Fifty” is a fantastic lens. The f/1.8 is very handy. A lower f-number means a wider aperture, and a wider aperture means more light and more bokeh. My 18-55mm kit lens has an f-stop range from f/3.5 to f/5.6. Unfortunately, the kit lens can only reach f/3.5 when zoomed out to its widest angle. I can rarely zoom out that far, so most of the time I was operating at f/4.0 or f/5.6.
Being able to shoot at f/1.8 was like using a brand-new camera. I can shoot without the flash indoors with the “Nifty Fifty,” whereas my kit lens would never work without a flash. I also really enjoy the bokeh at f/1.8—that’s the big out-of-focus area in front of and behind the focus of the shot. I like not having to worry about the backgrounds of my photos being perfectly neat and tidy!
Knowing what I know now, I wouldn’t buy the Canon 50mm lens. Yongnuo makes a 50mm f/1.8 prime lens that sells for less than half the price of the Canon. I haven’t had the opportunity to try the Yongnuo 50mm, but I’m extremely pleased with my Yongnuo 35mm prime lens. I expect the Yongnuo 50mm f/1.8 to be similarly well made.
Is a full-frame 50mm lens the right choice for a crop-sensor DSLR?
I have a lot of fun with my 50mm prime lens. So much fun, that I almost completely stopped using the kit lens. It isn’t perfect, though. Since I can’t zoom with the lens, I have to zoom with my feet. On my low-end, crop-sensor DSLR, the 50mm lens has an apparent focal length of 80mm—that means I have to stand even farther away than with the 18-55mm kit lens when it is at full zoom. Sometimes that isn’t even possible!
I researched other prime lenses. I was tempted by the Canon EF-S 24mm f/2.8 pancake lens. At $150, I thought it was on the expensive side—especially considering that f/2.8 is one full stop slower than the “Nifty Fifty.” I also want to avoid buying any more EF-S lenses, because they won’t work on a full-frame DSLR, and I’d like to keep my upgrade options open.
Yongnuo EF 35mm f/2.0
When I saw the Yongnuo 35mm f/2.0, I purchased it almost instantly. The f-stop is almost as good as my 50mm f/1.8, and it is under $100—less than one quarter the price of the Canon 35mm f/2! How could I go wrong with this lens?
The Yongnuo 35mm is by far my favorite lens in my bag—it has barely been off my camera since it arrived, and I’ve taken over 1,000 shots with it already. I take photos with the Yongnuo 35mm that are every bit as good as the shots I got with the Canon 50mm, but it is usually much easier to frame my shots with the 35mm.
When I used to use the “Nifty Fifty,” I found myself trying to get farther away, and I was backing into things all the time. Sometimes I’d run into something like a table, and I’d be able to walk around it to get my shot. Other times, though, I’d back into a wall—there’s no shooting through a wall.
I’m extremely interested in acquiring a fast zoom lens with a big aperture. I’m considering either the Tamron 28-75mm f/2.8 or the Tamron 24-70mm f/2.8. One of these lenses is quite inexpensive used, but it lacks image stabilization. Both lenses have an f-stop of f/2.8 when wide open or zoomed in, which seems intriguing. It may have been a better value to buy a used Tamron 28-75mm zoom lens instead of a pair of brand-new prime lenses.
What do you think? Do you enjoy your prime lenses? Will I be as happy with an f/2.8 zoom as I am with my f/1.8 and f/2.0 prime lenses?
UPDATE: Everything in this blog post is still relevant today, but I’ve since upgraded to 40-gigabit Infiniband cards. When I bought the used 20-gigabit Infiniband gear, it was already 10 years old. Today, the 40-gigabit stuff is 10 years old, and it costs about what I paid for the 20-gigabit hardware. You should skip the 20-gigabit Infiniband hardware and go straight to 40-gigabit.
Gigabit Ethernet ought to be fast enough for anybody, right? It usually is, at least for me. Heck, even Wi-Fi is fast enough for most of my local streaming and Internet needs. I am impatient every now and again, and when I’m impatient, I think about what I might do to upgrade my home network.
My friend Brian decided to add 10 Gigabit Ethernet to his home network. I would enjoy having a faster link between my desktop computer and virtual machine server, and I didn’t want Brian to leave me behind in the dark ages, so I decided to give InfiniBand a try.
As it just so happens, used InfiniBand hardware is fast and inexpensive. It isn’t just inexpensive—it’s downright cheap! I bought a pair of 2-port 20-gigabit InfiniBand cards and a pair of cables on eBay for less than $70. That’s $20 per card and $15 per cable.
If that’s not fast enough for you, 2-port 40-gigabit cards don’t cost a lot more. I figured I’d save a few bucks on this experiment, since I don’t have fast enough disks to outpace a single 20-gigabit port. In fact, the RAM in my little server probably can’t keep up with a pair of 20-gigabit ports!
I decided to be optimistic, though. I bought two cables just in case 20 gigabits wasn’t enough.
I bought InfiniBand cards with CX4 ports. The cards aren’t always labeled well on eBay, though. Most of the CX4 Mellanox-brand cards will have XTC in their model number. My Mellanox cards were rebranded as HP, so they don’t have those helpful part numbers. Luckily for us, pictures of the cards make it terribly obvious.
You can buy used InfiniBand gear at Amazon or eBay. I usually prefer the convenience of Amazon, but the prices are quite a bit better at eBay.
How is InfiniBand different than 10 Gigabit Ethernet?
10 Gigabit Ethernet is another incremental upgrade to the Ethernet protocol, and it works with TCP/IP in exactly the manner you’d expect. You can run IP over InfiniBand, but that isn’t what it is designed for—you lose the advantages of InfiniBand’s Remote Direct Memory Access (RDMA). I knew before ordering my InfiniBand cards that this might cost me some performance, but I didn’t know how much.
I can run iSCSI over RDMA, and these InfiniBand cards should be fast enough that I wouldn’t be able to tell if an SSD were plugged into the local machine or the server on the other side of the InfiniBand cable.
Even though I don’t currently have a need for iSCSI, I did initially plan to test its performance. However, you need to install all sorts of third-party OFED packages to make use of iSCSI. I don’t need any of those packages to use IP over InfiniBand (IPoIB), so I didn’t think it was worthwhile to pollute my desktop and homelab server with extra cruft.
Performance is much better than Gigabit Ethernet, but I have a bottleneck somewhere
IP over InfiniBand (IPoIB) was easy to set up. The InfiniBand network interfaces default to datagram mode, which was extremely slow for me. Putting the interfaces in connected mode greatly improved my speeds. I am able to push around 5.8 or 6.5 gigabits per second between my little server and my desktop computer, depending on which direction the traffic is flowing.
I had no idea how much network performance I’d be giving up to the extra memory-copying requirements of IPoIB. 6.5 gigabits per second is well short of the theoretical maximum speed of 16 gigabits per InfiniBand port, but it is already a HUGE improvement over my Gigabit Ethernet ports.
At first, I had assumed my speed limitation was caused by the low-power CPU in my KVM server—I certainly didn’t choose it for its performance. During all my early testing, iperf’s CPU utilization was always precisely 50%. While writing this blog, however, I’m seeing 55% utilization when testing in one direction and 74% in the other.
I’ve also tested RDMA and rsocket performance, which should both be much faster. The RDMA benchmark tools were no faster than my IPoIB iperf tests, and my librspreload.so tests using iperf yielded identical performance as well. Those LD_PRELOAD tests with librspreload.so were definitely working correctly, because there was no traffic over the IPoIB link during the test.
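For anyone who wants to reproduce the comparison, this is roughly how I ran it. The iperf commands are ordinary; the rsockets run just preloads librspreload.so on both ends. The library path below is an assumption, so check where your distribution's rdma packages actually install it.

# Plain IPoIB test: server on the KVM host, client on the desktop.
iperf -s                          # on the server
iperf -c 10.42.253.1              # on the client

# rsockets test: preload librspreload.so on BOTH ends so the same iperf
# traffic rides over RDMA instead of IPoIB (library path is an assumption).
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/rsocket/librspreload.so iperf -s
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/rsocket/librspreload.so iperf -c 10.42.253.1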
[ 1.709385] mlx4_core: Mellanox ConnectX core driver v2.2-1 (Feb, 2014)
[ 1.709449] mlx4_core: Initializing 0000:02:00.0
[ 4.001247] mlx4_core 0000:02:00.0: PCIe BW is different than device's capability
[ 4.001301] mlx4_core 0000:02:00.0: PCIe link speed is 2.5GT/s, device supports 2.5GT/s
[ 4.001356] mlx4_core 0000:02:00.0: PCIe link width is x4, device supports x8
These results led me to dig deeper. Before ordering these InfiniBand cards, I knew I remembered seeing an empty PCIe 16x slot in both my desktop and my KVM server. As it turns out, both of those slots are only 4x PCIe slots. Uh oh!
According to lspci, my Mellanox MT25418 cards appear to be PCIe 2.x devices, so they should be capable of operating at 5.0GT/s—that’s 16 gigabits per second. Unfortunately, they’re running at half that speed. This easily explains my 6.5-gigabit limit, but I had to dig deeper to figure out why these cards identify as PCIe 2.x while operating at PCIe 1.x speeds.
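If you want to check your own slots and cards, lspci reports both what the card can do and what it actually negotiated. Substitute your card's bus address for mine:

sudo lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap is what the card supports (x8); LnkSta is what it negotiated (2.5GT/s, x4).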
The Mellanox site says that the MT25418 cards are PCIe 1.x and the MT26418 cards are PCIe 2.x. As far as I can tell, the Mellanox cards with model numbers that contain an 18 or 28 are PCIe 1.x, while model numbers containing 19 or 29 are PCIe 2.x.
Unfortunately, the faster cards cost four times as much. If you don’t have 8x or 16x PCIe slots available, and you need to double your performance, this might be a worthwhile investment. For my purposes, though, I am extremely pleased to have a 6.5-gigabit interconnect that only cost $55. That’s roughly 800 megabytes per second—faster than most solid-state drives.
My extra InfiniBand cable was completely useless for me. It took me a while to learn that my cards just aren’t capable of maxing out even a single DDR InfiniBand port in my machines. Of course, before testing anything at all, the first thing I tried to do after a successful ping was attempt to bond my two InfiniBand ports.
You can’t do it. IPoIB interfaces aren’t Ethernet devices, and the Linux bonding driver only supports them in active-backup mode. You can use the channel bonding interface to set up automatic failover, but you can’t use channel bonding to increase your IPoIB bandwidth.
This led to a second problem.
You can’t attach KVM virtual machines to IPoIB interfaces
The Linux bridges that KVM relies on can only attach to Ethernet-style network devices, and an IPoIB interface doesn’t qualify. This seemed like an easy problem for an old-school network engineer like myself. I figured I’d just need to create a new bridge device on the KVM server with a new subnet. Then I’d just need to route from the IPoIB subnet to the new virtual subnet. Easy peasy, right?
It should have been easy, but I spent days trying to make it work. I knew I had to set those giant InfiniBand MTUs of 65520 on all these interfaces, but I just couldn’t get decent speeds when routing. At first, I was getting 5 or 6 gigabits per second in one direction, but I wasn’t even hitting DSL speeds in the opposite direction.
It was easy enough to fix, and I’m more than a little embarrassed to tell you what the problem was: I had missed an important interface. I just couldn’t set the MTU on my KVM bridge to 65520; it kept failing. As it turns out, you can’t set the MTU of a bridge device to 65520 unless an interface with an equally large MTU is already attached to the bridge.
When my virtual machine on that bridge starts up, it creates a vnet0 device and immediately attaches it to the bridge. Once that device is created, you can set its MTU to 65520, and then you’ll be allowed to set the MTU of the bridge to 65520. Then everything works as expected.
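In other words, the order of operations matters. Here is a minimal sketch of the manual fix, assuming a hypothetical bridge named br-ib and the vnet0 device that libvirt creates:

# Fails while nothing is attached to the bridge:
ip link set dev br-ib mtu 65520

# Start the guest so libvirt creates vnet0 and attaches it to the bridge, then:
ip link set dev vnet0 mtu 65520
ip link set dev br-ib mtu 65520   # succeeds now that vnet0 is attached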
I am losing some performance here. I can “only” manage 5.6 or 4.6 gigabits per second between a virtual machine and my desktop computer. That’s still four or five times faster than my Gigabit Ethernet network, so I can’t complain.
How do I configure these IPoIB and KVM interfaces?
Documenting my new network and KVM configuration would probably double the size of this blog post, so I’m going to do a separate write-up on that soon. I will summarize things here, though!
You need an InfiniBand subnet manager. I’m under the impression that one of these may already be running on your InfiniBand switch, assuming you have one. I’m only connecting these two machines, so I don’t have a switch. I’m running opensm on my KVM server. I don’t believe I had to do anything to configure opensm after installing it.
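If you’re on a Debian or Ubuntu machine like mine, getting the subnet manager going should only take one command, and the package starts the service for you:

sudo apt-get install opensm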
Setting up the link between the two physical hosts was extremely easy, and it only required a few extra lines in my /etc/network/interfaces file to put the IPoIB links into connected mode. Other than that, they look just like any other network device.
auto ib1
iface ib1 inet static
address 10.42.253.1
netmask 255.255.255.0
pre-up modprobe ib_ipoib
pre-up echo connected > /sys/class/net/ib1/mode
mtu 65520
I am not routing any traffic between my InfiniBand subnet and my old Ethernet subnet. It isn’t necessary, since both machines on my InfiniBand network are also on the Ethernet network.
The verdict
I’m quite pleased with my InfiniBand experiment. It may have a few quirks that can be avoided if you use 10 Gigabit Ethernet instead, but InfiniBand costs quite a bit less, especially if you have enough machines that you need to use a switch—there are plenty of 8-port InfiniBand switches on eBay for under $100!
Had I known ahead of time that the InfiniBand cards I chose only supported PCIe 1.x, I would have spent more to upgrade to faster cards. If I had done that, though, this would probably be a much less interesting blog post. I can’t really complain about the performance I’m getting, either. All the disks on my KVM host machine are encrypted, and its CPU can only process AES at about seven gigabits per second. These cards are still just about fast enough to push my hardware to its limits.
I convinced my friend Brian to build a heavy-duty, dual-processor homelab server using a pair of 8-core, 16-thread Xeon processors. I don’t really have a need for a beast like that in my home office, but I haven’t built anything like that just for the fun of it in a long time. I may end up building a similar machine later this year to add to my InfiniBand network!
Are you using InfiniBand or 10 Gigabit Ethernet at home? Do you have questions? Leave a comment or stop by and chat with us on our Discord server!
I get annoyed when I resize Emacs, and the window becomes narrower than 80 columns. I’ve had a solution to this problem floating around my head for years, but I just haven’t run into this situation often enough to implement an automated solution. On those rare occasions, it’s just been easy enough to hit a key to change the font size.
Narrow Emacs frames have become more annoying for me recently, so I figured it was about time to create an elisp function that would set the font size based on the width of my Emacs frame. In my case, this turned out to be quite simple. I’m not a fan of tiling window managers—I’ve tried a few, but they always felt uncomfortable.
I attacked this problem from the other direction. I use the Sawfish window manager, and I have some handy customizations that divide my monitors into columns. These columns aren’t of equal width. My 1080p laptop has a narrow column on the left and a wide column on the right. The 1440p monitors on my desktop have a wide column in the center with a narrow column on either side.
These columns are roughly the same width on my desktop and laptop—the desktop just gets an extra column on each monitor. The narrow columns automatically tile vertically, and they’re just wide enough to fit an 80-column terminal window with a comfortable font size.
I have convenient key bindings to push the focused window into any of these columns on either monitor, but I also have some automation that moves new windows into appropriate places—Firefox in the wide column, Thunderbird in the narrow column on the left, and Pidgin chat windows get stacked in the narrow column on the right.
On my desktop, Emacs goes in the wide column in the center of my second monitor. It is usually flanked by several terminal windows. Emacs and my terminal windows both use the Solarized color theme and Inconsolata for their font, but I prefer Emacs to have a larger font size. This is fine, until I decide to push Emacs into one of the narrow columns. Sometimes I actually have two Emacs frames open at the same time. Other times I’m reviewing proofreading notes that I’ve received from my editor—LibreOffice often requires the wider column for the editing notes to be large enough to read.
At my usual font size, the Emacs frame isn’t much more than 60 characters wide in my narrow columns. As I said, this is easy enough to fix manually, but I’m sick of doing things manually. If my office lights can shut off automatically when Steam launches a game, then surely Emacs can change my font size automatically as well!
My use case is quite simple, since I only need to use two different font sizes. I hooked a function into Emacs that runs any time the window configuration is changed. If the frame is less than 76 characters wide, it scales down the font. Under any other circumstances, it sets the font to the default size.
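Here’s a minimal sketch of that function in Emacs Lisp. The face heights are assumptions; substitute whatever your normal and small font sizes are.

(defun my/frame-width-font ()
  "Use a smaller default face when the frame is narrower than 76 columns."
  (if (< (frame-width) 76)
      (set-face-attribute 'default (selected-frame) :height 90)
    (set-face-attribute 'default (selected-frame) :height 120)))

(add-hook 'window-configuration-change-hook #'my/frame-width-font)

window-configuration-change-hook fires more often than strictly necessary, but the function is cheap enough that I never notice it running.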
Late last year, I bought a used Canon DSLR. It was an excellent purchase, and I’ve had tons of fun using it to learn more about photography. The DSLR is a lot bigger and bulkier than my phone, but the photos I can shoot with the DSLR just look so much better than anything I’ve ever shot with a camera phone.
There is one thing that has annoyed me about my DSLR since day one—the camera strap. The strap is invaluable when I take my camera out with me—but it is way too bulky to fit in my pocket! The neck strap that comes with every DSLR quickly becomes uncomfortable, and when I’m shooting at home or on a tripod the strap just gets in the way!
My research quickly told me that a shoulder strap is the way to go, and there’s a huge selection to choose from!
I quickly figured out what to avoid
My first instinct was to look at the cheapest shoulder straps I could find. Camera straps are just fabric belts with hooks on the end. How bad could a cheap one be?
As it turns out, a cheap strap can be an extremely bad idea! I read several horror stories about the cheaply made hooks on some straps breaking and resulting in hundreds or thousands of dollars’ worth of camera gear crashing to the ground!
The Altura Quick-Release Shoulder Strap
I looked at quite a few camera straps, but my search kept bringing me back to the Altura Quick-Release Strap. The reviews are good, it has a lot of useful features, and it is only about $20. These are the features that interested me:
Sturdy quick-release clip
Steel bracket that attaches to the tripod mount
Pocket in the strap for SD cards and batteries
Optional two-point connection to camera
The Altura shoulder strap comes with a steel connector plate that attaches to the bottom of your camera using a big, sturdy tripod screw. The plate has a hole on one end that fits the quick-release clip, and a slot at the other end that can fit a regular camera strap.
The strap comes with both the quick-release clip and a regular camera strap connector. If you’re very worried about your expensive and heavy camera and lens coming loose, you can attach the normal-style camera strap to the body, and attach the quick-release to the tripod plate. It would be amazingly difficult to accidentally separate the camera from the strap in this configuration!
I’m extremely pleased with the quality of the quick-release clip. It is made from sturdy metal, and the hooks on either side of the clip overlap by almost half an inch when closed. I don’t think my camera is ever going to come loose, and it would be impossible for me to accidentally clip on the camera incorrectly.
I use my camera around the house most of the time. Around here, I always have somewhere to put the camera down, so the neck strap is just a nuisance. My new shoulder strap spends most of its time in my camera bag.
The heavy-duty bracket that came with the strap has found a permanent home on the bottom of my Canon DSLR. It doesn’t get in the way very often, and with it in place, it only takes me a few seconds to clip the strap back on and head out the door with my camera. This is extremely convenient!
The large bolt that attaches the bracket to the camera has a threaded hole in the bottom, and it can be connected directly to your tripod. When I read the product description, I thought this sounded scary. After using this bolt, though, I am very confident that it isn’t going to snap if I attach it to my tripod.
I don’t think I’d be likely to use this feature. The bracket on my tripod has a lot of surface area that comes in contact with the camera. At the very least, I’d worry that I’m losing some stability. Even worse, I’d worry about the plastic bracket on the tripod giving way.
It only takes a few seconds to disconnect the bracket, so I don’t think this is a big deal.
My only complaint
All the videos I’ve watched show people reaching across their bodies to lift up the camera with their left hand. I prefer to reach for the camera with my right hand. That means I have to attach the bracket facing in the opposite direction.
The battery door is blocked when you attach the bracket this way. It would be nice if this wasn’t the case, but it really doesn’t cause much trouble—it only takes a few extra seconds to loosen the bracket. It would be simple enough for the manufacturer to correct this problem—they just need to add a second slot for the tripod-mounting bolt.
Conclusion
I took my new shoulder strap to the Texas Pinball Festival last weekend. That was the first time I carried the camera with the strap for several hours, and it did an excellent job! I could almost completely forget that I was carrying my camera while playing pinball, but it was still sitting right there at my side ready to take photographs.
I don’t think I could play pinball with my camera on a neck strap!
Last year, I built a reasonably fast and energy-efficient server to host KVM virtual machines in my office. I included a pair of 250 GB Samsung EVO 850 solid-state drives. That gives me a fast RAID 1 for the operating system, and I was hoping to use the rest of the available space for dm-cache.
dm-cache (a.k.a. lvmcache) is a block device caching layer that was merged into the mainline Linux kernel back in April of 2013. It is a “hotspot” cache that lets you use fast SSDs to cache reads and writes to your slower, old-school spinning media.
When I set up my little server, I installed Ubuntu 14.04 LTS. This seemed like a good idea, since I didn’t want to have to touch the host operating system very often. Unfortunately, the dm-cache tools that ship with the 2014 version of Ubuntu aren’t very advanced—you had to do your own sector-level math when setting up your cache.
I tried to get dm-cache going anyway, but I didn’t do a very good job. It seemed to be caching, but my spinning disks were being written to at a constant 1 MB/s for days, and my benchmarks didn’t show any improvement. This seemed like a failure, so I turned off dm-cache.
The Ubuntu 16.04 release is now only a few months away, so I decided to upgrade early to properly test out dm-cache.
I haven’t figured out whether to call it dm-cache or lvmcache. When I started researching this last year, I don’t remember seeing a single mention of lvmcache. This year, you almost exclusively use lvm commands to set up and control your dm-cache.
I don’t think you can go wrong using either name. It is dm-cache in the kernel, and lvmcache in user space.
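Whatever you call it, the setup today is all lvm commands. This is a rough sketch of what it looks like with my volume group; the SSD device name and the sizes are assumptions based on my own layout:

# Carve the cache data and metadata LVs out of the SSD mirror (device name is an assumption)
lvcreate -L 200G -n kvm_cache raid10_crypt_vg /dev/md1
lvcreate -L 1G -n kvm_cache_meta raid10_crypt_vg /dev/md1

# Combine them into a cache pool, then attach the pool to the existing kvm volume
lvconvert --type cache-pool --poolmetadata raid10_crypt_vg/kvm_cache_meta raid10_crypt_vg/kvm_cache
lvconvert --type cache --cachepool raid10_crypt_vg/kvm_cache raid10_crypt_vg/kvm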
What was I hoping to get out of dm-cache?
I had a feeling dm-cache wouldn’t meet all of my expectations, but I thought it was worth giving it a shot. Here are some of the benefits I was hoping to see when using dm-cache.
Faster sequential reads and writes
Faster random reads
Much faster random writes
Power savings from sleeping disks
Performance improvements
dm-cache is a hotspot cache—much like ZFS’s l2arc and zil. These hotspot cache technologies may not improve your synthetic benchmark numbers, as my friend Brian recently discovered. At first, I didn’t find any performance improvements either, but this was my own fault.
When I upgraded from Ubuntu 14.04 to 16.04, I didn’t notice that I was still running the old 3.19 kernel. There’s been a lot of dm-cache progress since then, and I wasn’t able to get the smq cache policy to work with the older kernel. Once I upgraded the kernel and switched to smq, things improved dramatically.
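Switching policies doesn’t require rebuilding anything; lvchange can flip the live volume over to smq. A one-liner, using my volume names:

lvchange --cachepolicy smq raid10_crypt_vg/kvm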
As you can see from the benchmarks, smq has been a really big win for me. The write speeds are 75% faster than the uncached disks. In fact, the cached writes are almost as fast as the solid-state drives. The read performance is even more impressive—the cached mirror is 50% faster than the solid-state drives alone!
-------------------------------------------------------------------------
LVM [2.02.133(2)] cache report of found device /dev/raid10_crypt_vg/kvm
-------------------------------------------------------------------------
- Cache Usage: 41.1% - Metadata Usage: 1.6%
- Read Hit Rate: 85.5% - Write Hit Rate: 94.7%
- Demotions/Promotions/Dirty: 0/971904/6
- Feature arguments in use: writeback
- Core arguments in use : migration_threshold 2048 smq 0
- Cache Policy: stochastic multiqueue (smq)
- Cache Metadata Mode: rw
- MetaData Operation Health: ok
My cache is over 200 GB. I haven’t even managed to fill half of that, and I’m certain it would work adequately at a fraction of that size. My virtual machines occupy about 2 TB, but the vast majority of that data is taken up by backups and media on my NAS virtual machine. The data that gets accessed on a regular basis easily fits in the cache.
Most people don’t care about spinning down the hard drives in their servers. My little homelab virtual machine server is very idle most of the time. Spinning down a couple of 7200 RPM hard disks will only save me six or seven watts, but I think it’d be a nice bonus if I could make that happen.
I’ve tried cheating as much as I can, but I just can’t get dm-cache to stop writing to the cached media. I see hours go by throughout the day with no reads on the magnetic media, but with the default smq settings, there is a slow but constant stream of writes to the cached disks. Most of those writes seem to be caused by OpenHAB’s log file.
I’ve tried cheating by setting the smq’s migration_threshold to zero—low values also seem to work. This often silences the writes to the cached disks, so at least I’m on the right track. Sometimes, the cached disks still see several small writes each minute.
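For reference, this is the knob I’ve been fiddling with; the same lvchange mechanism handles the policy’s settings:

lvchange --cachesettings 'migration_threshold=0' raid10_crypt_vg/kvm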
Even when I manage to get dm-cache to keep the cached disks in a very idle state, something is still preventing my disks from going to sleep—even when they haven’t been accessed in over an hour! Google tells me the usual culprits are smartctl, smartd, or hddtemp. I’ve ruled all of these out. I can manually put the disks into standby with hdparm -y, and they will stay asleep for hours.
It would have been nice to get the drives to spin down. They don’t use much power—probably less than 1 kWh every six months—but they are the noisiest thing in my office. Fortunately for me, they don’t need to be in standby to be quiet.
Why not skip the ancient spinning disks?
Solid-state drives are nice, but they’re still small and expensive. At a minimum, I would need a pair of 2 TB SSDs in my homelab server. Those 2 TB solid-state drives cost over $600 each. One of those drives costs almost as much as my entire server, and my server has twice as much storage.
The majority of the data my virtual machine server touches regularly fits very comfortably in my 200 GB dm-cache—it has a read cache hit rate of 84.9%. That means my server almost always has the responsiveness and throughput of solid-state drives, while still retaining the benefits of the large, slow, cheap mechanical disks.
I used to have an SSD and a mechanical drive in my desktop, but I had to decide which data to store on each drive. The best part about dm-cache is that I don’t have to worry about that, because dm-cache manages it for me!
Was dm-cache worth the effort?
For a virtual machine host, it was definitely worth the effort. My KVM host is performing much better since enabling dm-cache—almost as well as if I’d only used solid-state disks for my purposes! This is a great value to me, and I can easily and inexpensively add more rotating disks in the future to expand my storage capacity.
I could have saved $50 to $100 if I used smaller solid-state drives, but I’m pleased with my decision to use the 250 GB Samsung EVO 850 SSDs. They have a larger RAM cache than the 120 GB model, but they are still rated for a write load of 41 GB per day—the 500 GB Samsung EVO is rated for 82 GB per day. I won’t be needing the extra endurance of the 500 GB model, and the 120 GB model is just too small for me to make use of anywhere else in the future.
When I bought my own 3D printer, some of the first things I printed were small 3D-printer upgrades. I find the idea of using a machine to create parts to upgrade itself fascinating. When we bought a pair of FlashForge Creator Pro 3D printers at TheLab.ms, I knew it wouldn’t be long before we’d find a need for some simple and necessary upgrades.
Filament alignment bracket
A better filament guide was definitely a requirement for one of our printers. No matter what adjustments we made, the left extruder’s filament would loop behind the spool and get caught up on the spool holder. This was less of a problem on the second printer, but it did happen at least once there. Fortunately, it was easy to fix.
There are quite a few filament-guide parts on Thingiverse, but most of them require support material. I do my best to avoid support material.
Makersome’s filament guide does a great job, and it prints very well without any supports. On our printers, though, the filament-guide tubes fit loosely in the Makersome guide, and they end up moving around during printing. This was easily fixed with some small zip ties.
The spool holders that ship with the FlashForge Creator Pro are designed to precisely fit the huge diameter holes in the spools of the filament sold by FlashForge. My favorite spool holder on Thingiverse so far is the Shaxon Spool Holder. Just like the awesome spool holders for my own 3D printer, these print on their sides and require no supports.
Totally by coincidence, the first spool of filament we bought from Fry’s was Shaxon brand filament, and it did fit the holder perfectly. This spool holder is small enough to fit all but one of our spools. That spool is from Hatchbox, and it very nearly fits.
The spool holders I use at home are ridiculously thin compared to the models available to fit the FlashForge, but they are sturdy enough to hold two spools without any trouble. One of these days, I’ll design an even more universal spool holder for our FlashForge printers.
The FlashForge Creator Pro ships with a BuildTak sheet installed. When it works, it works surprisingly well. BuildTak seems to work better at higher temperatures, but those high bed temperatures would make our 0.1mm prints warp around the edges.
Try as we might, we just couldn’t get any of our white ABS filament to stick to the BuildTak. It was just too problematic.
I’ve been printing on glass at home for two years, so I figured we should give it a try over at TheLab.ms. I stopped by the Lowe’s in Plano, TX, and had them cut some cheap 2mm glass into 9” by 6” sheets for me. I was able to get six of those and a pair of fresh 8” x 8” sheets for my printer at home for about $15.
ABS sticks beautifully to glass with a bit of hairspray, and the hairspray wipes off quickly after soaking it in water. We don’t actually clean the glass very often—usually only if you need a perfectly smooth finish on the bottom of your print. Most of the glass plates have been in use for two months now, and most of them have never been cleaned.
Next time, I am going to spend a little more on the glass. We have to be careful of the rough, sharp edges. The prints pop right off the glass after a few minutes in the freezer, but at least one member at TheLab.ms has gotten impatient. They pried a large print off while it was still warm and pulled a chunk right out of the middle of the glass.
With the BuildTak, we had to wait for things to cool down before removing a print. With the glass, you can take your print out of the printer immediately. That means the next person in line doesn’t have to wait as long to start their print!
This one is definitely not a necessity. My own printer’s Logitech C270 camera has been at TheLab.ms ever since we unboxed the first FlashForge Creator Pro back in December. I certainly haven’t missed it, but cameras are much more useful in a public space.
The camera brackets I designed aren’t perfect, but they’re a big upgrade over the IKEA arm mount we were using up until now. The arm mount is awesome—I use two at home, but they’re not well suited to the setup we have at TheLab.ms. They take up a lot of room, and they’re too easy to bump out of alignment. They also see a lot of glare if you point them through the acrylic covers on the printers.
The new camera mount corrects most of these problems, but the angle isn’t perfect yet. The cameras aim at the print job almost parallel to the glass print surface, so they pick up a lot of glare from the glass during the first few inches of a print. This isn’t ideal.
I think mounting the cameras on the printer’s handles is a good idea, but I think I can improve the execution quite a bit. Stay tuned for updates!
When they asked me to run the 3D-printing department over at TheLab.ms makerspace, I knew right away that we had to use OctoPrint. I’ve been using OctoPrint to run my MakerFarm Prusa i3 printer at home, and I have been very pleased with it. OctoPrint presents itself as a web interface, which is a fantastic fit for an open community space like TheLab.ms.
OctoPrint’s web interface is handy enough around the house—keeping an eye on my 3D printer from the living room is nice. It is even more useful at TheLab.ms, where you can keep an eye on your print job from Marinara Pizza!
So far, we have two 3D printers at TheLab.ms—a pair of FlashForge Creator Pro 3D printers. There is an OctoPrint distribution for the Raspberry Pi called OctoPi. OctoPi is awesome. Just about everything you need to get started is configured out of the box—even the webcam!
Raspberry Pis are cheap, and so are MicroSD cards. It wouldn’t be a big deal to buy two of them. I’m looking into the future, though. I don’t want to have to buy a new Raspberry Pi each time we buy a 3D printer. I also don’t want to have to configure another Raspberry Pi, or add another network drop. I’ve spent most of my career building and maintaining servers. I’d much rather maintain one Raspberry Pi.
OctoPrint isn’t very resource intensive, but it isn’t built to control multiple printers. However, it wasn’t difficult to run multiple instances of OctoPrint on different ports on the same Raspberry Pi. This isn’t surprising, because all it needs to do is send gcode over a serial port.
Running a 3D print simultaneously on both printers while OctoPrint broadcasts a pair of 480p video feeds doesn’t even use 20% of the CPU power of our Raspberry Pi 2.
You need to run a separate instance of OctoPrint for each 3D printer. OctoPrint defaults to running on port 5000, so I just incremented the port for the second instance and told it to use a different directory to store its configuration. At TheLab.ms, we call our printers badger1 and badger2, so we set up those hostnames in DNS. Since the OctoPi distribution hides OctoPrint and mjpeg-streamer behind HAProxy, all I had to do was add those hostnames to the HAProxy configuration and point them to the correct ports.
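Here’s roughly what that looks like on our Raspberry Pi. The paths and hostnames are ours, the exact OctoPrint invocation depends on your version, and the HAProxy lines are only a sketch of the hostname matching:

# Second OctoPrint instance with its own configuration directory and port
# (older 1.2.x installs take the same flags without the "serve" subcommand)
~/oprint/bin/octoprint serve --basedir /home/pi/.octoprint2 --port 5001

# In the HAProxy frontend, route by hostname to the matching backend:
#   acl is_badger2 hdr_beg(host) -i badger2
#   use_backend octoprint2 if is_badger2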
You can share some of the directories that live inside each OctoPrint instance’s configuration using symlinks. Since our printers are identical, sharing the uploads directory makes a lot of sense—it’s nice to be able to come back a couple days later and print another copy on the other printer. We’re also sharing the timelapse directory.
There’s nothing preventing either instance of OctoPrint from connecting to either printer’s serial port. I’ve gotten around this by creating some udev rules to assign unique names to the printers based on their serial numbers. Badger1 is set to use /dev/ttyBadger1, and badger2 is set to use /dev/ttyBadger2. This prevents people at TheLab.ms from accidentally connecting to the wrong printer.
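The rules themselves are short. The vendor ID and serial numbers below are placeholders; pull the real values from udevadm info for each printer:

# /etc/udev/rules.d/99-3dprinters.rules (IDs and serials are placeholders)
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", ATTRS{serial}=="BADGER1_SERIAL", SYMLINK+="ttyBadger1"
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", ATTRS{serial}=="BADGER2_SERIAL", SYMLINK+="ttyBadger2"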
I don’t know much about mjpeg-streamer. It looks like you should be able to stream multiple cameras from a single mjpeg-streamer process, but I wasn’t able to make it work. Instead, I ran a second copy of mjpeg-streamer on the Raspberry Pi—just like I did with OctoPrint.
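Starting the second camera is just a matter of pointing another mjpg_streamer process at a different video device and HTTP port. The device names, resolution, and ports here are assumptions:

mjpg_streamer -i "input_uvc.so -d /dev/video0 -r 640x480 -f 10" -o "output_http.so -p 8080" &
mjpg_streamer -i "input_uvc.so -d /dev/video1 -r 640x480 -f 10" -o "output_http.so -p 8081" &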
I need to tweak udev to assign each camera a unique device name, but I haven’t gotten around to that yet.
What’s next?
I thought about writing this blog post as a step-by-step guide, but I’d like to eventually automate the creation of new OctoPrint instances on our OctoPi server. I want to be able to run a single command to set up a new instance of OctoPrint and mjpeg-streamer, set up HAProxy to point at those new instances, and configure OctoPrint to point at the new camera.
I’m planning to work on this next month, but we’ll see. At the very least, I better have it working before TheLab.ms needs to purchase a third 3D printer!
I was talking to my friend Don from M2M Circuits last week, and he was telling me about his Maker Modem project. I told him it sounded interesting, and I wanted to try it out. He got me together with Jesse, the other brain behind M2M Circuits, a few days later.
They explained how the Maker Modem works and what makes it different from other existing solutions, and they handed me one of their prototypes. I am a little late to the party here, so I had to use one of the first-generation prototypes. The new prototypes use a different power connector, and they have more indicator LEDs. For my purposes, though, the first-generation prototype is identical to the newer ones.
Jesse from M2M Circuits explained the problem that started the journey towards their Maker Modem. His father spends hours driving around his farm taking measurements from wells. Jesse accompanied him on one of these trips, and he immediately realized that he could save his father a bunch of time and effort if they automated this process.
He tried the cheap 3G modems you can get from China, but he just couldn’t get a signal out on the farm—even with a giant antenna. Verizon is the only game in town in very rural places, and Verizon doesn’t allow these devices on their network.
I was more than a little skeptical about the idea of a modem designed specifically to be used with Verizon, but Jesse’s story convinced me that there is a very real need for an easy-to-use 3G or 4G modem with Verizon’s seal of approval. If you need to get your “Internet of Things” way out into the middle of nowhere, there’s no better solution than the Maker Modem.
What is M2M?
The guys at M2M Circuits explained to me that “M2M” stands for “Machine to Machine,” and they told me a bit about Verizon’s “M2M” pricing. I thought it sounded outrageous, so I looked up the pricing myself.
Verizon’s pricing starts at $9 for ONE MEGABYTE. Holy potatoes, that’s expensive!
With the Maker Modem, it is extremely easy to send and receive SMS messages. This will be a huge savings on your M2M data plan, and simple text messages work great for tasks like periodic sensor logging or two-way communication with your remote microcontroller.
I sent SMS messages using an Arduino!
It is ridiculously simple to talk to the Maker Modem from an Arduino. You can connect the Maker Modem to your computer using the USB port, and this worked great for manually testing the device. The modem also has TTL serial pins, which makes it easy to connect it to an Arduino.
All I had to do was connect ground, transmit, and receive to my Arduino Nano, and I was up and running! From here, it was a simple matter to send the appropriate AT commands to the Maker Modem to send myself a text message.
MakerModemTest
void setup() {
  // Talk to the Maker Modem over the hardware serial port.
  Serial.begin(115200);
  delay(3000);
}

void loop() {
  Serial.println("AT+CMGF=1");                // Put the modem in SMS text mode
  delay(1000);
  Serial.println("AT+CMGS=\"+8678675309\"");  // Start a message to this number
  delay(1000);
  Serial.println("Test message from Arduino Nano via Maker Modem from M2MCircuits.com!");
  delay(500);
  Serial.write(0x1A);                         // Ctrl+Z terminates and sends the message
  delay(500);
  Serial.println("");
  delay(60000);                               // Wait a minute before sending another
}
Receiving text messages didn’t look much more complicated, and M2M Circuits has an Arduino library for the Maker Modem to abstract all of this away from the end user. I didn’t have a lot of spare time to dig much deeper, though.
M2M Circuits just launched a Kickstarter campaign to bring their Maker Modem project to fruition. Getting their Maker Modem certified for use on Verizon’s network is very costly, so they will need your help.
Last year, I decided my blog needed a camera upgrade. I’ve been using my smartphone to take pictures for the blog for a long time, and my most recent phone upgrade was actually a huge downgrade as far as the camera was concerned. I wanted a camera with big glass and a big sensor, so a DSLR was the obvious choice.
I didn’t know how committed to photography I was going to be, so I decided it would be prudent to purchase an older model DSLR. I am quite pleased with this plan. I was able to buy a used 2008 model Canon XSi DSLR and a camera bag chock-full of goodies for less than the cost of either a Canon EOS Rebel t5i DSLR or the Canon EOS Rebel t6i DSLR.
The Canon t5i and Canon t6i have a lot of features that are missing on my older Canon XSi, but I’d be rather surprised if I could manage to take better photographs with either of the newest models. I just don’t have the skills, abilities, or knowledge.
NOTE: I already had an unbranded 60” tripod. It isn’t the AmazonBasics tripod, but it is very similar.
As I’m writing this, the Canon Rebel t5i costs about $650 at Amazon. That puts my entire used kit more than $100 below the price of a brand-new t5i. I’m sure the price of the Canon t5i will drop. I’ve seen it on sale as low as $450.
UPDATE: I don’t own the Sigma 18-250mm EF lens, but I wish I had known about it before I bought my gear. It covers the entire range of the Canon kit lens and my telephoto lens combined, yet it doesn’t look all that much bigger than the kit lens. I understand it is far from a perfect lens, but at about $350, it looks like a good compromise.
I probably should have bought a DSLR body and the Sigma lens. My camera bag would be lighter, and it would make my lens choice easier when I’m leaving the bag at home!
Why the Canon Rebel XSi?
The Canon XSi (a.k.a. 450D) is the first “Upper-Entry” level Canon DSLR to use an SD card—the earlier models use Compact Flash. I have tons of SD and MicroSD cards around the house, and the XSi didn’t cost much more than the older models. That made the XSi a good starting point.
The next model released after the XSi is the Canon t1i. When I was shopping a few months ago, used Canon t1i DSLRs cost quite a bit more than the Rebel XSi. However, as I’m writing this, there are several used Canon t1i cameras on Amazon that are only $10 or $20 more than the XSi models. My Canon XSi can’t record video, while the t1i can. I’d definitely be willing to pay a few extra dollars for a Canon t1i.
Shallow depth of field is awesome
I knew I made the right choice just minutes after unpacking my “new” camera. I popped the battery in, turned on the camera, and took off the lens cap. As I was doing this, one of our cats was on my desk watching me. So I did what any good cat owner would do—I snapped his picture a few times.
I opened the photo on the computer, and I was simply amazed by what I’d already done. His face was in focus, his body was starting to get blurry, and my monitors were completely out of focus. Aside from all the clutter in the shot, that was one of the most professional photos I had ever taken, and all I did was point and shoot!
Forget about megapixels…
Any camera can take decent photos in sunlight. I wanted to be able to take great pictures in my dimly lit home office. That means I needed a big sensor and big pieces of glass—the tiny lens on a camera phone just doesn’t compare at all. You’ll be able to take much better photos with a 6-megapixel, 1” wide APS-C sensor than a 30-megapixel camera phone.
…It’s all about the lens
The 18-55mm zoom lens that came with my camera is versatile. I’m able to use it right here at my desk to photograph the projects I’m working on, and it was quite usable when taking pictures of trebuchets at SlingFest. It may be good enough for either use, but it isn’t ideal.
I recently read a blog post titled “A Quick Guide to Understanding Your Canon Digital Rebel XTi.” One of the things the author wrote about at length was Canon’s 50mm “Prime” lens. The lens sounded interesting, and it is inexpensive. I asked my friend Andy what he thought about it, and he didn’t have to say very much at all to convince me that this was exactly the lens I needed.
The stock 18-55mm lens has an F-stop of f/3.5-5.6. The Canon 50mm fixed lens goes all the way to f/1.8. That means the camera can open the aperture much wider. This allows more light to get in, and it lets you capture an even shallower depth of field.
This EF lens is meant to be used with a full-frame DSLR, so the effective field of view with the APS-C sensor in my Rebel XSi is equivalent to an 80mm lens. Since I can’t adjust the zoom with this lens, my feet now provide the zoom function. I usually have to back away from my desk to take a picture of my projects now, but that shallow depth of field makes the inconvenience totally worthwhile!
I haven’t had my Canon 55-250mm telephoto lens long. I do most of my photography indoors, but I realized very quickly how useful a telephoto lens would have been at SlingFest last year.
These pictures of my Space Invaders are pretty boring. I’m going to attempt to plan an outing to put my telephoto lens to good use. That way I can report back here with some better photos!
If you follow me on Twitter, you already know I can be quite boring. There’s a very high probability I will snap a few photos of my morning latte and tweet one of the better shots. These photos may be boring and repetitive, but I’ve been learning a lot in my attempts to improve them.
One of my biggest problems has always been lighting. No matter what I do to my office lighting, I have a very hard time composing a photo where the brown-and-white foam of the latte isn’t washed out or over-exposed.
The remotely controlled flash from Altura was an easy fix for this problem. I plugged the flash-control unit into the hotshoe of my Rebel XSi, and I set the flash down on my desk and pointed it directly away from the delicious coffee. This lit up the mug without casting too much light on top.
I couldn’t do anything like this with the built-in flash, and I’m absolutely certain I’ll find more cool uses for the remote controlled flash as time goes on.
The cube is a difficult subject for my limited photography skills. With a wide open aperture, the various internal reflections are completely out of focus. I knew setting the camera to a high f/stop could bring everything into focus, but there was no way I could hold the camera steady enough.
I happened to be carrying my tripod that night, so I decided to try putting it to good use. I took quite a few pictures with higher f/stop settings and longer exposures. The final shot was taken at f/8.0 with a 15-second exposure. I can’t wait to take some similar shots of the completed project!
I’m pleased with my decision to purchase a used Canon DSLR. It was a no-brainer when my plans only involved buying the camera—the Canon t5i would have cost me three times as much. After buying a camera bag and filling it with lenses, though, that gap narrows quite a bit. Even so, I’d still prefer to save that money and buy even more lenses!
If I decide to upgrade in a few years, every single item in my camera bag will work with my new camera—as long as it is another Canon with an APS-C sensor, of course. Who knows, maybe I’ll end up with a used Canon t6i in a few years!
I’m quite pleased with our pair of FlashForge Creator Pros so far. We had the first one unboxed and printing in less than half an hour, and our initial prints were quite nice. Much nicer than anything I managed to print with my MakerFarm printer in my first few weeks.
It didn’t take long to get OctoPrint up and running on a Raspberry Pi, and it wasn’t too difficult to fire up a second instance of OctoPrint on the same Pi to control the other printer. OctoPrint doesn’t use up all that much CPU on the Raspberry Pi, even when broadcasting a live video feed and saving images to create a timelapse. I bet we can run at least six or eight 3D printers using a single Raspberry Pi!
Printing on glass is the way to go
The FlashForge Creator Pro comes with a BuildTak printing surface, and we used that for a couple of weeks. It works fairly well, but neither spool of white ABS filament that shipped with our printers would adhere to it very well. Also, I’ve read that the BuildTak is only good for 50 to 100 hours of 3D printing. That would be a lot of sticker changing at TheLab.ms!
I’ve been printing on glass at home for years, so I eventually broke down and had half a dozen 9” x 6” sheets of 2.5mm glass cut at our local Lowe’s. All our ABS filament sticks to glass and hairspray quite nicely, and the glass has improved the efficiency of our 3D-printing workflow at TheLab.ms. With the BuildTak, we had to wait for our prints to cool and remove them from the printer before the next person in line could start their print. Now we just pull the glass out as soon as our print is done, and the next person in line can jump right in and get started.
My favorite part about 3D printing with ABS on glass is how easy it is to remove your print. We just pop the glass in the freezer, and most parts pop right off in about two minutes.
I’m not the only one using the 3D printers at TheLab.ms!
Our new friend j5mc stopped at our makerspace on his journey. He’s traveling around the country visiting as many makerspaces as he can. He is blogging about his journey, and he plans on compiling his data into a makerspace guidebook.
It was a lot of fun having him at TheLab.ms, and he was kind enough to leave something behind for us. He designed and 3D printed an “Open / Nope” sign for us, and we display it proudly on our window. I affectionately refer to it as our “Nopen” sign.
William has been 3D printing almost nonstop since I showed him how to operate the machines. He’s printed a working crescent wrench, a tiny green TARDIS, and a really cool Storm Trooper Buddha. I can’t wait to see what he prints next, and I hope he prints another Buddha now that we’ve got the 0.1mm settings tuned in on the FlashForge printers!
Andy has been making good use of the printers as well. He’s been working hard to modify an existing Makedo hinge from Thingiverse to be more compatible with existing Makedo pieces. The hinge is neat because it prints as a single piece, but it can still pivot at the joint.
Andy also designed a bracket to help attach an IKEA lamp to an IKEA bed using OpenSCAD. His bracket is one of our tallest 3D prints at TheLab.ms so far!
Shortly after the 3D printers arrived at TheLab.ms, I began looking for something quick, easy, and useful we could print to give away to visitors. The best thing I found was the KeyStand at Thingiverse. Unfortunately, the KeyStand wasn’t capable of holding up my 6” clown phone.
I recreated the design using OpenSCAD. I changed some of the angles to move the kickstand behind the phone, and I added “TheLab.ms” to the side of the key. This is the first part we managed to print using both extruders.
I am teaching classes on operating the 3D printers at TheLab.ms
I’ve been giving some one-on-one instruction here and there, but I just completed our first semi-official 3D printing class at TheLab.ms last night. I gave very little notice that the class was going to happen—less than 24 hours’ notice. I figured that would keep the attendance down, and give me an opportunity to figure out exactly what people need to know.
Even on such short notice, nine people attended the class. We managed to take four or five people from the point of zero 3D-printing experience to successfully printing their first object. In my humble opinion, I’d say the first class was a huge success.
I’ll definitely be scheduling these classes at regular intervals, and I’d like to teach some classes on using OpenSCAD, too.
If you’re a member of TheLab.ms makerspace in Plano, and you’d like to learn how to use our 3D printers, please stop by one of the classes! If you’re not a member, but you’d like to learn how to operate a 3D printer, you’re still welcome to attend one of our classes! We’d be happy to show you around the space and tell you all about the projects we’re working on.