I’ve been referring to this as the buddy system for a while, though I might be abusing the term a little. Last year, I bought a Raspberry Pi and a 14 TB hard drive for around $280. I loaded it up with my favorite cloud synchronization software and dropped it off at my friend Brian Moses’s house.
Using the buddy system for my cloud storage needs has a few advantages over Dropbox or Google Drive. I’m currently eating up about 5 TB of storage, and that would cost me $300 per year with Google, so I am already saving money. I also get to be a little extra paranoid. My data is encrypted on the client. That encrypted data is stored on an encrypted file system. The only way to access my Pi is through my Tailscale network.
I’ve been thinking a lot about the buddy system over the last year. I have a lot of things to say, so I figured it was time for a blog post!
Some of these problems can be mitigated, but there are so many bad things that could happen if you can’t trust your friend!
From my end of this bargain, I don’t feel the need to trust Brian all that much. There are enough layers of encryption, firewalls, and VPNs in the way to make me feel comfortable. Not to mention the fact that my data isn’t really all that sensitive. I wouldn’t leave a copy of my data sitting at someone’s house if I weren’t comfortable with a burglar walking off with it.
Brian is most definitely assuming more liability here. I could be snooping around his network. I could even be downloading illegal content and getting him in trouble with his ISP. I can imagine even worse things I could do from there.
A mesh VPN like Tailscale makes the buddy system easy
I definitely wouldn’t have gone back to hosting my own Seafile server without Tailscale. Tailscale is a zero-configuration mesh VPN. All of my devices running Tailscale are connected directly to each other via WireGuard VPN connections. That means I can always hit my Raspberry Pi at Brian’s house no matter where my desktop, laptop, or phone are currently sitting.
To keep things secure, I configured the firewall to block everything except the Tailscale daemon’s port. My Raspberry Pi is nearly invisible on Brian’s home network. The only way to connect to the ssh or Seafile services is through my Tailscale network.
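I won’t reproduce my exact rules here, but a minimal sketch with ufw looks something like this. The 41641/udp port is tailscaled’s default WireGuard port; yours may differ:

```shell
# Deny all inbound traffic on the physical interface, then poke a hole
# for tailscaled. Everything else (ssh, Seafile) rides inside the VPN.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow in on tailscale0   # trust traffic arriving over the mesh
sudo ufw allow 41641/udp          # tailscaled's default WireGuard port
sudo ufw enable
```

With rules like these, a port scan of the Pi from Brian’s LAN comes up empty, which is exactly the point.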
I’ve known Brian for 25 years. We shared an apartment for a couple of those years. Brian knows that I won’t try to get him into any trouble.
Requiring this level of trust puts a lot of constraints on the buddy system. Wouldn’t it be nice to be able to safely open this network up? Instead of just Brian and me trading servers, it’d be awesome if we had a third acquaintance. I colocate with Brian, he colocates with our acquaintance, and our acquaintance colocates with me. We could turn it into a bit of a ring, or we could double up on our off-site copies of our data.
We could host those servers in our routers’ DMZs. That’ll keep our friends away from the stuff on our local networks. Then we can firewall off the DMZ so only Tailscale’s UDP port can pass. Our friends would have to use a Tailscale exit node to keep the software on their Raspberry Pis up to date, and I could definitely live with that.
Is that safe enough? Probably. Maybe. Assuming your router and firewall aren’t buggy, I imagine the only thing you really have to worry about at this point is non-Tailscale connections going out on the Tailscale port. Are you paranoid enough to worry about this?
Here’s my problem with locking this down. If your supposed friend is industrious enough, I bet they’re going to find a way to sneak out of their jail and potentially get you into trouble. If they’re going to try to circumvent your security, are they really your friend?
In any case, I think putting even a trusted friend in your DMZ is a fine idea, but I don’t think I want to put someone’s device on my home Internet connection unless I trust that they’re not going to abuse it.
Brian and I definitely talked about this. It would be easy for Brian to spin up an empty virtual machine for me and map me a slice of his NAS, and it would be just as easy for me to do the same thing.
This plan brings a little more risk along for the ride, but Brian knows I won’t be attempting to break out of my VM, and I don’t expect Brian to attempt this either.
Even so, I still decided against this. Giving me two power outlets and a switch port for my Raspberry Pi and USB hard drive is easy. Brian didn’t have to set up a guest machine for me. He didn’t have to install an OS in a virtual machine. He didn’t have to set up any special networking rules. He just had to plug in my Raspberry Pi.
The Pi didn’t add much to the cost over the USB hard drive, it is easier to repair if something goes wrong, and it puts most of the burden on me. Not only that, but you can send a setup like mine to someone less skilled. I could mail a Pi and a USB hard drive to my father in Pennsylvania, and I bet he could plug it into his gear.
If you have a lot of data, the money you save will start adding up pretty quickly. I did a bad job of tracking exactly when my hypothetical Google Drive bill would have crossed from $200 to $300 per year, but I would be paying for my second year soon, and I will be ahead by more than $300. My data is growing, so I could very well be saving $400 next year.
If you trust your friend, this is easy. If you had to call your friend and explain a situation about a dead body, and the first thing he’d do is ask how many shovels to bring, you have little to worry about. Your biggest risk is that a third party hacks into that poor little Raspberry Pi.
I’d bet I could trust most people I know if I stuck their Raspberry Pi in a DMZ and locked their server down to only allow Tailscale’s UDP ports out.
Conclusion
I had a lot of reasons for writing this blog rolling around in my head. The first was that I wanted to make sure that if any of my acquaintances asked me to host their Raspberry Pi at my house, I would be able to articulate exactly why that would be risky for me. I also wasn’t exactly certain of how I felt about the possible risks and how much effort it would take to mitigate them.
For a variety of reasons, I am not physically prepared to lock down someone else’s Pi on a VLAN in its own DMZ. If I were, I’m beginning to think that I’d be willing to do this. Maybe. Before I wrote this blog, I was completely against the idea, but I am warming up to it.
What do you think? Is it worth swapping servers with a buddy to save $200, $300, or more every single year? Is it worth accepting the risk that a friend might download dozens of terabytes of movies and force your ISP to drop you as a customer? Do you feel you could mitigate that risk well enough to not worry about it?
There were two problems sneaking up on me this year. I was rapidly approaching the storage limits of my hosting provider, and there was a huge chunk of my video data that I wasn’t syncing, because I didn’t have anywhere near enough space available.
This is my little #RaspberryPi Seafile server with its 14 TB drive. The Pi's case is affixed with 3M Dual Lock, and there is some sticky-back Velcro keeping the cables tidy.
In February, I took inventory of my total storage requirements. If my memory is correct, I was using right around 3.2 TB. The options for syncing that much data to a Dropbox-style service are all rather costly. Dropbox was $120 per year for 2 TB of storage, and Google Drive was $100 for the same. That would have made my annual bill either $200 or $240.
NOTE: Google Drive sync only works with third-party clients on Linux. It would be more accurate for me to compare my costs to Dropbox, since that is the service I would actually use instead of Seafile. I’m OK with comparing to Google Drive pricing, though, because their prices are lower, and that makes things more of a challenge on my end!
I expected to save money, but I didn’t expect it to happen so soon!
In fact, I completely missed the point where I crossed into the black. I just had it in my head that if I had gone with Google Drive, I would have paid $200 back in February, and I would have had to pay another $200 after the first year was over. I knew that once I made it to February 2022, I would have paid back my initial investment and been ahead by about $110.
I forgot to take into account the fact that my data is growing. I checked earlier this month, and my Raspberry Pi’s hard drive is 41% full. That’s just over five terabytes.
NOTE: I did a bit of housekeeping and garbage collection while writing this post, but I’m still over 4 TB!
I imagine I flew past the 4 TB mark during the summer. If I were using Google Drive, I would have gotten a warning that I was running out of space, and I would have had to do some work to pay for more storage.
This means I’ve already paid for my Raspberry Pi and 14 TB hard drive, and I’ve even managed to put about $10 in my pocket!
You can’t really buy 6 TB of storage on Google Drive
At least, I don’t think you can. Even if you want to spend $300 per year on storage, they won’t let you put that much data on a single account. I’m pretty sure you’d have to do something goofy like set up three separate accounts. That would be yucky.
A Dropbox business plan supposedly has unlimited data, but they charge $20 per month per user, and there is a three-user minimum. That’s $720 per year. If you have two friends, and you need to sync a boatload of data, that might be a good deal. It isn’t a good fit for me. I don’t have dozens of terabytes of data to sync!
Using Google Drive as my pricing benchmark seems like a good compromise. Google’s sync client isn’t great, and they don’t support Linux. Dropbox’s sync client is excellent, and it does work on Linux, but I feel like using their pricing makes my setup seem like too good a value. There are cheaper services from smaller companies, but they aren’t all that much cheaper.
My Seafile server’s job is to synchronize my files. That’s just about the only feature I truly expect out of the software or rely on.
It pushes copies of every change I make to my desktop, laptop, and NAS. If this is your goal, Seafile isn’t the only game in town. Syncthing does this job very well, and it is a much simpler piece of software. The last time I tested Nextcloud, it wasn’t up to the task of syncing my quantity of files, but that was 9 years ago. I expect their situation has improved tremendously in the meantime!
The Seafile client encrypts my data locally. The server doesn’t have the keys to access my files. This is extremely important to me. This keeps me safe if anyone on the Internet manages to hack into my Raspberry Pi. Nextcloud might be able to do this now, but I don’t believe Syncthing is up to the job yet.
Seafile also keeps historical copies of all my files. I have most of my Seafile libraries set to keep three months of history. A new version of every file is recorded just about every time I hit save. There are probably already 18 revisions of this blog post on my server.
Seafile is at the heart of my backup and disaster recovery plan
The first thing my backup plan attempts to mitigate is hardware failure. Seafile doesn’t sync changes instantly, but it isn’t far off. I can save a file on my desktop, and the update will usually show up on my laptop in about 30 seconds. If I have that file open in Emacs on both machines, it will even automatically refresh itself.
If the SSD in my desktop melted right now, I could walk across my office, open up my laptop, and pick up where I was a couple of minutes ago. Nearly instantaneous backups are absolutely amazing.
The next disaster I am defending against is the natural kind. What if my home office floods? What if my house burns down? What if there’s a burglar?!
If I’m lucky, I get to walk out the door with my laptop. If I can’t, there will be a full copy of all my data six miles away on my Seafile server. It would be nice to have more distance between me and my off-site backup, but I’m willing to live with this.
The last problem boils down to software failure. What if my new Emacs extension decides to write garbage into all my Markdown files? What if I accidentally delete my home directory? What if I get hit with ransomware that encrypts and then deletes all my important files?
The bummer here is that these changes will be happily synced to all my computers. The good news is that the history will still exist on the Seafile Pi. It’ll take a good number of clicks, but it won’t be too much trouble to restore.
My implementation of a 3-2-1 backup
What is 3-2-1 backup? It means you should have at least three copies of your data. Your data should be on at least two different mediums. At least one of those copies needs to be at a remote location.
In the old days, we usually wanted the second medium to be tape. These days, most people are happy enough making sure the second medium is just a different device. It is important to make sure that second device is out of band.
What do I mean by out of band? If your backup is on an always-connected USB hard drive or a file share on your NAS, then your accidental rm -rf might delete your backup, or that piece of ransomware might take your backup right along with it! If you can get to a file with your file manager, it is quite likely not safe from accidents, viruses, or ransomware.
I have four copies of most of my data: on my desktop, laptop, NAS, and the Raspberry Pi. The copy on the Seafile server can only be accessed via the Seafile client using its own protocol. I wouldn’t be surprised if a dedicated attacker could exploit this connection somehow, but a random piece of malware or an accidental rm -rf certainly isn’t going to have a way to delete anything in the server’s history.
Am I really $10 ahead? Will I really save another $300 by 2023?
This first year involved a bit of financial risk. I don’t know what sort of warranty I have on my Pi or 14 TB Seagate hard drive, but the odds of that hard drive dying are a good bit higher than zero. Having to pay for a new replacement hard drive would have set me back quite a bit on this journey.
Please don’t forget that time is money. I invested some number of hours of my time in February in the hopes that I will be able to save $200 or $300 each year for the next two or three years.
How much time did it take to get my Pi server up and running? I didn’t keep track, and it took me a lot longer because I was documenting the process in various ways as I was going.
If we ignore the blogging aspect of things, I will be surprised if I spend anywhere near 8 hours between installing and maintaining my Seafile Pi during the first two or three years.
It is probably important to note here that I’m comparing pricing to other services based on the storage I am actually using. I don’t have 6 TB of cloud storage. I have 14 TB. That would be more like $700 per year from Google Drive.
Should you be hosting your own cloud storage?
Most people shouldn’t. Most people fit in the free or nearly free tiers of Dropbox or Google Drive. The savings really start to become clear as your storage needs approach or exceed 2 TB.
Self-hosted cloud file synchronization isn’t exactly a drop-in replacement for a service provider. There are some advantages that I appreciate very much, but there are also a number of important disadvantages.
Dropbox is making sure your data is replicated to multiple locations. Google is making sure there are backups and redundancy. Even if that weren’t the case, they are doing work that would take you at least a few hours.
Maybe I have to do a little work, but I also know my files aren’t leaving the computers that I own. I don’t have to worry about Google restricting my files.
Dropbox’s unlimited business plan is potentially a really good value at $720 per year, but I can buy a lot of storage for my Raspberry Pi server with that $720. That could be something approaching 60 TB, and as long as those drives don’t fail, that would be $720 back in my pocket next year.
The important thing to note here is that I am quite willing to suffer through a cloud storage outage. If Brian’s 3D printer catches on fire and takes out my Raspberry Pi, I can order a new Pi and hard drive from Amazon and have it here in two days. I have a copy of the MicroSD card that I can pop into a fresh Pi in a few seconds, and I’ll be resyncing my Seafile libraries in no time.
I wouldn’t have done this without Tailscale and the buddy system
The worst part about hosting my first Seafile server was that it was exposed to the Internet. I had to watch for Debian or Seafile security alerts like a hawk. I had to drop whatever I was doing and get to work patching things when updates were available. That was a lot of work!
I’m hosting Seafile on my Tailscale mesh VPN network. My Raspberry Pi’s firewall is blocking just about everything on the Ethernet port except Tailscale, so there are only four or five computers that are even able to connect to my server.
There’s only one downside to this for me, but it isn’t a big one. I can’t use Seafile’s built-in file-sharing abilities. Tailscale lets you share individual machines with other Tailscale users, and I have shared my Seafile server with both my wife and my co-host on The Create/Invent Podcast.
My Raspberry Pi is colocated at Brian Moses’s house. It is sitting underneath his Prusa MK3 3D printer, and it is connected to his gigabit Internet connection. This is essentially free for me, though I of course offered to plug a similar device into my network whenever he gets around to it.
If you had asked me to implement this same setup half a decade ago, I would have been hesitant. I used to have a stack of USB 2.0 drives in a RAID 5 that I would plug into my Core Duo laptop for backups and extra storage, and those drives were more than a little flaky.
In that same era, hard drives were getting pretty awful. Every time I’d buy a stack of hard drives to build a RAID, I usually had a failed disk within the first couple weeks, and it was normal for a disk to fail about once a year after that.
Things have changed. Two of the 4 TB drives in my virtual machine server are six years old, the other two aren’t all that much fresher, and they’re all still happy. Until we flipped the breaker last week to shut down Brian’s office and install a fancy old Wink Relay smart switch, my Raspberry Pi server had an uptime of over 180 days with no weird USB errors or hiccups.
My investment has paid for itself, and I absolutely won’t be surprised if the hard drive lasts long enough to pay for itself three or four times over.
NOTE: It is important to take measures to avoid wearing out your Pi’s microSD card. You can use raspi-config to enable an overlay filesystem, which is a fantastic idea. I just made sure to disable as much logging as I could, since logging is where 99% of the writes come from.
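A minimal sketch of the logging side, assuming you can live with logs vanishing at reboot: mounting the chattiest directories as tmpfs in /etc/fstab knocks out most of those microSD writes.

```
# /etc/fstab additions: keep logs and temp files in RAM instead of on
# the microSD card. Logs disappear at reboot, which is fine on this box.
tmpfs  /var/log  tmpfs  defaults,noatime,size=64m   0  0
tmpfs  /tmp      tmpfs  defaults,noatime,size=128m  0  0
```

The overlay filesystem in raspi-config is the more thorough option, since it makes the whole root filesystem read-only.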
The math looks better if you buy a smaller hard drive!
My 14 TB drive isn’t quite the largest USB hard drive available, but it is far from the smallest. I didn’t need to go this large. An 8 TB drive would have cost less, and it would probably take me three years to fill it up, but the 14 TB drives usually have the lowest cost per terabyte. It is also better to have extra space than too little space!
The 6 TB version of my Seagate USB drive was on sale for $104 during the first week of December. That and the Pi would have cost me around $160. That’s 50% more storage for $35 less than the annual price of 4 TB of storage from Google.
Not only that, but sometimes the best computer is the one you already have. There’s an old Intel Core i3 laptop with 8 GB of RAM in my closet. I could have used that instead of the Pi, and I would have saved about $55. Not only that, but every laptop comes with a built-in UPS!
You don’t have to do exactly what I’ve done. There are so many ways to accomplish the same thing! You can use a smaller drive. You can use an older, weaker Pi. You can run Syncthing or Nextcloud, or you could even use zfs send to push backups to the Pi.
NOTE: All of this is a little weird during the current supply-chain conundrum. It is difficult to buy a Raspberry Pi 4 for the price I paid less than a year ago, and I have no idea when that will get better. I was particularly excited that you could build a 14 TB micro server for about 20% more than the cost of the disk. It is still neat today that you can do it for less than 33% more than the cost of the disk, but the more that gap increases, the less cool it is!
I couldn’t be more pleased with my off-site Raspberry Pi cloud storage server. I have convenience. I have peace of mind. I have security. I am maintaining my privacy. I’m saving money. What more could I ask for?
I only have one disappointment. I bought a Pi with 2 GB of RAM for my Seafile project, but I swapped it with the 4 GB Pi from my Pi-KVM build. Neither task requires more than a few hundred megabytes of RAM, and I figured I’d eventually find something else to host off-site on my Tailscale-accessible Raspberry Pi server.
My Seafile Pi is using less than 300 megabytes of RAM right now, and I haven’t come up with anything to host out there. Maybe you can give me some suggestions?!
What do you think? Am I crazy for not just paying Dropbox for my sync and remote storage needs? Do you expect I’ll be doubling my investment in savings by this time next year? Are you already doing something similar, or do you have plans to build something similar? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
When I started co-hosting The Create/Invent Podcast last year, I started casually shopping for macro pads or a Stream Deck. We were recording our interviews locally with OBS Studio, and on the rare occasion that we did a live stream, we were pushing the stream to YouTube via OBS as well.
The most common and basic use for a Stream Deck here would be assigning each camera or scene in OBS to a different key, so you can focus the live stream on the right person.
Then we started using Riverside.fm to record our interviews, and they also have functionality for streaming our interviews to YouTube. That’s when I stopped investigating macro pads, because there really isn’t much I can control while live streaming with Riverside.fm.
Then my friend and Create/Invent Podcast co-host Jeremy Cook designed a macro pad, and now I have absolutely no idea how I survived without one.
Jeremy has been sending me prototype PCBs since he got started, but I don’t want you to think I’ve been just getting free toys as an incentive to post about them on my blog. Jeremy has only sent me printed circuit boards. I’ve had to buy keys. I’ve had to buy keycaps. I’ve had to buy Arduino Pro Micro boards, and I’ve even physically broken and had to replace a couple of those!
The secret to acquiring cheap Cherry MX key switches and keycaps!
I’m not sure this is truly a secret. I’ve been posting cheap mechanical keyboard deals on Butter, What?! for years. Some of those mechanical keyboards have hot-swappable switches. That means you can just pull them right out of the board, and you don’t have to desolder anything.
I’ve noticed that the E-YOOSO 87-key keyboard goes on sale all the time, it has hot-swappable key switches that just pull right out, and the switches and caps are the sort that work with LEDs! At its usual $26 price tag, it is a good deal. There seems to be a coupon code every week that pushes it down under $20, and I got mine for $18. I was bummed out taking it apart, because it seems like a fine keyboard for its price.
At the price I paid, it winds up being under $0.20 per switch with keycap. The bummer is that you can’t really use all the keycaps, because each row is a different shape. You can see the problem in the picture of my CNC pendant macro pad. I thought it was more important to have the letters X, Y, and Z on those keys than to have the correct shape!
If you and your friends are looking to save some money, maybe pooling together to make a handful of macro pads would be a good value! There are enough switches on the E-YOOSO keyboard to build 10 macro pads, and I think Jeremy’s pricing on the PCB is a really amazing value. Maybe you have a friend who is proficient at soldering. You could have him do all the soldering work in exchange for a free macro pad!
Who knows. I’m sure you and your friends can figure something out.
You can do a lot with the sample Arduino code Jeremy has posted so far, and I imagine there will be more of that available before the Kickstarter ends. I’m pushing the limits quite a bit further, but not in a clean way that’s ready to be easily consumed.
What I’ve strung together here are layers of kludges. It works, and it is a good proof of concept, but some parts need replacing and others need quite a lot of smoothing over. I’m going to include all my code in this blog.
Before we get to how I’m doing it, here is a list of the interesting things I’ve been doing:
I can send commands from the PC to the pad via USB serial
I can update the color of individual LEDs from the PC
I can update the color, direction, and speed of “spinners” on the OoberLights Micro
I am adjusting key LEDs based on the state of their function (red for speakers, green for headphones)
I am changing LED colors based on the focused window title
I am changing key function based on active or present windows
Different numbers of button presses trigger different functions
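The serial command format behind the first two items is my own invention, not something that ships with Jeremy’s sketch, so treat the command name and device path below as placeholders. From the Linux side, it’s nothing more than text over a USB serial port:

```shell
# Hypothetical one-line protocol: "led <index> <r> <g> <b>".
# My Arduino sketch would parse a line like this and repaint one pixel.
led_cmd() {
  printf 'led %d %d %d %d\n' "$1" "$2" "$3" "$4"
}

# Hooking it up to the pad (the device name varies; mine is ttyACM0):
#   stty -F /dev/ttyACM0 115200 raw
#   led_cmd 3 0 255 0 > /dev/ttyACM0   # key 3 glows green
```

Keeping the protocol this dumb means any scripting language on the PC can drive the lights.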
In what way is all of this a hack? There’s a script I run in the background that is calling out to xdotool once a second to check window titles. There’s another script doing math on the output of the ifconfig command to show me upload and download speeds on my OoberLights Micro board.
I have the keys and knob on the JC Pro Macro set up to output oddball key combinations like control-alt-F12. I forward those keys on to a script that checks the active window title to decide which real key to pass back on using xdotool. You might turn the knob right, the pad generates control-alt-F12, my window manager passes that on to my custom script, the script verifies that DaVinci Resolve isn’t active, and your turn of the knob winds up sending a volume-up keycode.
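Here is a trimmed-down sketch of that dispatcher. The window-title check is the interesting part; the key names are just the ones I happen to use, and the xdotool plumbing is commented out so the decision logic stands alone:

```shell
#!/bin/sh
# Decide what a knob turn should do based on the focused window title.
# Keeping the decision in a pure function makes it painless to extend.
knob_action() {
  case $1 in
    *"DaVinci Resolve"*) echo "Right" ;;                # nudge the play head
    *)                   echo "XF86AudioRaiseVolume" ;; # plain volume knob
  esac
}

# The real script feeds in the active window title and replays the key:
#   key=$(knob_action "$(xdotool getactivewindow getwindowname)")
#   xdotool key "$key"
```

Adding a new application is one more case branch, which is exactly why I haven’t bothered flashing alternate layouts onto the Arduino.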
This is absolutely the wrong way to do this, but it is the most convenient to set up, and I can make changes to things effortlessly. I feel like the right way to do this is to set up alternate layouts on the macro pad, then just send a command to the macro pad to change layouts when appropriate. If I did that, though, I’d be flashing the Arduino over and over again.
Can you be more specific about the unique things you’re using your macro pad for?
Of course I can!
I like Jeremy’s idea of being able to have custom modes for different applications. This is similar to layers in the QMK firmware. I just didn’t want to have to remember to switch modes, so I am adjusting the keys based on window title.
Most of the time, the knob is just a regular volume knob. If I’m currently editing video in DaVinci Resolve, the knob nudges the play head forward or back by a single frame. There’s also a button I can hit that toggles the knob between one frame at a time and one second at a time, and the color of the light on that key tells me which mode I’m in.
I took that a step farther with my OBS Studio keys. Those keys become active any time OBS Studio is open—it doesn’t even have to be the active window! For lack of a better idea, those keys glow red, yellow, and green, just like a traffic light. They’re pretty simple. They’re just tied to three different scenes in OBS Studio.
Controlling my studio lighting with Home Assistant and the macro pad
There’s some good home automation going on in my office. When I’m not in here, the lights are off. If I’m at my computer or in my recliner, the basic lighting turns on.
I also have video lighting at my desk and workbench. I don’t have a good way to automatically detect when those should be used, so I have to manually switch to those Home Assistant scenes. I was doing this in the web user interface, but now I have it bound to my macro pad.
When I first set this up on the original prototype macro pad, I was using three different keys. That seemed quite wasteful, and it encouraged me to work a little harder.
I now have four lighting modes bound to a single key. If you click it once, it turns off my overrides and the automation takes over. That sets the key to a green light.
Two clicks gives me a red light and my single podcasting light, three clicks goes cyan and gives me the brightest lighting possible, and four clicks goes purple and only lights up my workbench. This is a lot like double-clicking a mouse, but I was sure to put these in the order I most use them!
The lighting overrides in Home Assistant are more complicated than I’d like them to be. It is a Node Red loop that keeps checking a global variable, and I’m setting that global variable by calling hass-cli via the macro pad.
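Stripped of the Node-RED loop, my side of this boils down to turning a click count into a mode name and handing it to hass-cli. The input_select entity name here is an assumption for illustration:

```shell
#!/bin/sh
# Map 1-4 clicks on the lighting key to an override mode. Unknown
# counts fall back to handing control to the automation.
lighting_mode() {
  case $1 in
    2) echo "podcast"   ;;  # red: single podcasting light
    3) echo "brightest" ;;  # cyan: everything on full
    4) echo "workbench" ;;  # purple: workbench only
    *) echo "automatic" ;;  # green: automation takes over
  esac
}

# The macro pad's script then pokes the (hypothetical) global that the
# Node-RED loop watches:
#   hass-cli service call input_select.select_option \
#     --arguments entity_id=input_select.lighting_override,option="$(lighting_mode 2)"
```

Polling a global variable from Node-RED is clunky, but it keeps all the mode logic in one place on the PC side.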
A button to toggle between headphones and speakers
This one bums me out a bit because I don’t like doing things that the system should be able to do for me. For a long time, I used a set of JBL Elite 700 Bluetooth headphones, and for a short time I connected my Bose QuietComfort 35 Gaming headset via Bluetooth. That was automated for me. When I turned the Bluetooth headphones on, my computer switched to them. When I shut them off, my computer switched back to the speakers. Easy!
Now I’m plugging my Bose headset in to reduce latency, and there’s no way for the computer to puzzle out whether or not I have my headphones on, so I mapped a two-handed, three-fingered key combination to toggle my audio output. It is something I wouldn’t accidentally hit, but something I could hit in the middle of a video game if I decide to switch outputs.
I mapped some JC Pro Macro pad keys to OBS for next week's live stream. The scene keys are only active and lit red/yellow/green when OBS is open.
I'm not sure that red, green, and yellow are the appropriate colors for this. pic.twitter.com/LbR0ykxJyp
One of the first things I did when configuring my JC Pro Macro pad was assign a button to swap audio outputs. I had no idea how much nicer this would be!
Sure, it takes two hands to put my headphones on, but it is still nice having an entire hand free while changing audio outputs. I can continue to adjust the fit of my headset. I can pick up my phone to check messages. If the game I’m playing loads quickly, I can already be mousing around the menus. It just feels better.
Not only that, but I’m setting the color of the headphone button based on the audio output. If the light is green, things are safe, and my output is going to the headphones. That means I won’t accidentally pick up speaker noise on my mic while podcasting. Isn’t that cool?!
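On a PulseAudio desktop, the toggle itself only takes a couple of lines. The sink names below are stand-ins; `pactl list short sinks` will tell you yours:

```shell
#!/bin/sh
# Flip the default audio output between two known sinks. These names
# are examples; substitute whatever `pactl list short sinks` reports.
SPEAKERS="alsa_output.pci-0000_00_1f.3.analog-stereo"
HEADSET="alsa_output.usb-Bose_Headset-00.analog-stereo"

other_sink() {
  if [ "$1" = "$SPEAKERS" ]; then
    echo "$HEADSET"
  else
    echo "$SPEAKERS"
  fi
}

# The key's script swaps the default sink (and could report the new
# state back to the pad to recolor the LED):
#   pactl set-default-sink "$(other_sink "$(pactl get-default-sink)")"
```

You may also want to loop over `pactl list short sink-inputs` and move running streams to the new sink, since older PulseAudio versions don’t move them automatically.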
I’m only scratching the surface with DaVinci Resolve
As soon as I attached a big old knob to the rotary encoder, I knew that this had to be made to work with my video editor. I only have two functions set up here.
In the default mode, one click of the knob moves the play head a single frame. When I toggle the second mode, each click moves the play head one second. This makes it easy to fine-tune a cut down to exactly the right frame, even when the timeline is zoomed out too far to be that precise. It also allows you to spin the knob quickly to zip around at higher speed.
I watched a video explaining how to use the real DaVinci Resolve keyboard, and so far I haven’t been smart enough to replicate its fancier functionality. With their keyboard, you can hold a key while turning the knob to do things like move the connecting point between two clips forward or backward along the timeline.
I’d love to be able to do that, and Resolve has an extreme number of functions that you can map to keyboard controls, but I can’t for the life of me figure out how to replicate their own keyboard’s trim function!
I love my network traffic meter!
When I got the first prototype of the JC Pro Macro pad in my hands, I immediately noticed the four solder pads on the right side behind the keys. I thought it would be a neat place to wire up some OoberLights, but we didn’t have a small enough OoberLights board for that. We do now!
The new OoberLights Micro boards are about the size of the base of a Cherry MX keycap. There are three rings of RGB LEDs. The outer ring has 12 pixels, the middle ring has 8 pixels, and the center ring is just a single pixel. I think blinkenlights are cool, but what can you do with these pixels?!
This might be the first time I've used any of my #OoberLights for their intended purpose! The blue and green LEDs are spinning at an RPM that is related to upload and download speed on my Ethernet interface! #Arduino #RaspberryPi pic.twitter.com/p1vOCVvX0o
I’m using a shell script, math, and my spinner Arduino code to display rotating bandwidth meters on my JC Pro Macro. A green pixel spins clockwise to show my download traffic, and a blue pixel spins counterclockwise to show upload traffic.
It isn’t perfect. I have a bug that makes the pixel jump in any counterclockwise spinner when the speed changes, and getting the speed mapping right for a bandwidth meter has been challenging. I wasn’t sure what to do when there is nearly zero traffic, so the spinners always turn at an extremely slow rate. When they stand still, they look too boring!
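The sampling half of that shell script is easy to sketch. This is a hypothetical reconstruction rather than my actual script; the one-step-per-MB/s scale, the floor, and the cap are all assumptions:

```shell
#!/bin/sh
# Map a one-second byte delta onto a spinner speed step (1..100).
speed_for_bytes() {
  bps=$1
  speed=$(( bps / 1000000 ))                    # roughly one step per MB/s
  if [ "$speed" -lt 1 ]; then speed=1; fi       # floor: never stand still
  if [ "$speed" -gt 100 ]; then speed=100; fi   # cap near gigabit line rate
  echo "$speed"
}

# The deltas come from the kernel's interface counters, e.g.:
#   rx1=$(cat /sys/class/net/eth0/statistics/rx_bytes); sleep 1
#   rx2=$(cat /sys/class/net/eth0/statistics/rx_bytes)
#   speed_for_bytes $(( rx2 - rx1 ))
```

The floor is what keeps the rings turning slowly at near-zero traffic instead of standing still.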
This is most definitely just the tip of the iceberg. I have an OoberLights Micro board plugged into a Raspberry Pi, and I’ve managed to add a clean mounting bracket to my favorite 3D-printed Pi case. It won’t be long before I have network spinners and disk usage pie charts on my Raspberry Pi Seafile server!
I’m using a macro pad as a pendant for my Shapeoko CNC
This deserves an entire blog post of its own, but I want to code up some improvements first.
My Shapeoko XXL CNC machine is plugged into a Raspberry Pi Zero W running CNCjs. One of my old laptops is sitting next to the Shapeoko. That laptop connects to the CNCjs web interface, and I sometimes use the laptop to make CAM and design adjustments between cuts.
Much of the time, though, I don’t even power up the laptop. I have an old Nexus 7 Android tablet out there connecting to the CNCjs tablet interface. This works great, but you have to actually touch the screen in the correct places, and that means you have to look at the tablet.
A common thing to do is jog the Z-axis down 0.1 mm at a time until it touches a piece of paper. That’s how you know the tool is just above your material. If you’re using the touch screen, you can’t see the tool, the paper, and the button. You also can’t feel the buttons.
Using a macro pad as a pendant is fantastic. I get the tool close, choose the Z-axis, and then set the jog speed to 0.1 mm. Every click of the wheel will move the router down 0.1 mm, and it is extremely easy to feel the wheel without looking. I can keep my eye right on the tool and never have to worry about touching the wrong part of the screen!
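Under the hood, each click of the wheel just has CNCjs send a tiny relative move. The 0.1 mm step from the paper-touch routine looks like this in standard G-code (the exact macro wiring is my guess at a typical setup):

```gcode
G91        ; switch to relative positioning
G0 Z-0.1   ; step the tool down 0.1 mm
G90        ; back to absolute positioning
```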
I’ve done a bad job, and I have been pretty slow at writing this blog post. Jeremy’s Kickstarter campaign for the JC Pro Macro 2 is already way more than half over. By the time Google search starts sending many of you fine readers in my direction, it will probably already be over!
My JC Pro Macro keyboard is soldered together and working. I don't have any RGB LEDs on hand, so I left those off. I hope that's OK.
At the time I am writing this, you can only get the 8-key version of the JC Pro Macro pad on Kickstarter. The original 5-key version is available on Tindie, but I’m sure the 8-key version will be stocked on Tindie after the Kickstarter orders have been fulfilled.
I like the original, smaller version of the macro pad. Five keys is actually a pretty good fit for use at my desk, but I’ve grown attached to using the light-up keys as functional indicators, and the original pad lacks those lights.
Five keys is probably just enough to use as a pendant for my CNC router, but the extra keys and the lights make it easy to see which axis I’m about to drive and just how fast it is going to move.
I don’t really have a conclusion. I feel like I’m only just scratching the surface of what my macro pad can do for me. All my software is a hack, and I’m slowly working on reining that in to make things a little less wonky.
I’ve gotten a little derailed from the macro pad stuff by our OoberLights Micro boards. You can’t imagine how excited I am to see an actual, working, spinning bandwidth meter attached to my macro pad.
What I really want to know is what you think! Are you using a macro pad or a Stream Deck? Which macro pad are you using? Does the JC Pro Macro look like a good fit for you? Is there a better macro pad I should be using?! Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
I’m not new at this. As expected, I didn’t have any real problems. Things are certainly built on a more fragile foundation this time, but reliability has still been great.
Eight years ago, my Seafile server lived in a datacenter in downtown Dallas with redundant links to the Internet. I don’t remember how fast those connections were, but they were faster than the gigabit Ethernet adapter in my old 1U servers. My server was built like a tank, had redundant power supplies, and my data lived on a small RAID 10 array.
My new Seafile server is a Raspberry Pi 4 with a single 14 TB USB hard drive, and it lives on my friend Brian’s home network. He has a symmetric gigabit fiber connection from Frontier. I have the same ISP, but I have a 200-megabit symmetric fiber link.
Sometime in my new Seafile server’s first month of service, it completely disappeared. I couldn’t ping it. The Tailscale admin interface said it hadn’t checked in since the night before. Brian couldn’t ping it either, but that’s to be expected, because just about everything on my Seafile server is blocked on the local interface, including ICMP packets. The only way in is via Tailscale.
Brian power-cycled it for me, and everything came right back up. It hasn’t happened again since.
In an effort to keep my Pi’s microSD card going, I have disabled just about everything that writes to the root file system. This includes disabling just about every ounce of logging, so even if there would normally be a trail to follow, I wouldn’t have anything to look at.
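I won’t list my exact changes, but this is the flavor of them. A generic Raspberry Pi OS example: keeping the systemd journal in RAM, which eliminates the biggest routine writer:

```ini
# /etc/systemd/journald.conf: logs live in RAM and vanish on reboot,
# which is exactly the trade-off described above
[Journal]
Storage=volatile
```

Disabling swap and mounting the root filesystem with noatime are the other usual pieces of this puzzle.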
I have to manually restart Seafile when there’s a power outage
My Raspberry Pi isn’t plugged into a UPS at Brian’s house. If the power blips, my Pi reboots.
The root file system isn’t encrypted, so Linux boots back up without a problem, and it immediately connects to my Tailscale network. The 14 TB external hard drive is encrypted, and it needs me to enter a passphrase to unlock it. If there’s a power outage, I have to ssh in and run a script that mounts the encrypted file system and starts Seafile.
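The script itself is only a few lines. This is a hypothetical version; the device node, mapper name, mount point, and Seafile install path are all placeholders:

```shell
#!/bin/sh
# Unlock the encrypted data drive and bring Seafile back up after a reboot.
# Every name and path below is a placeholder, not my real configuration.
unlock_and_start() {
  cryptsetup open /dev/sda1 seafile-data || return 1   # asks for the passphrase
  mount /dev/mapper/seafile-data /mnt/seafile || return 1
  /mnt/seafile/seafile-server-latest/seafile.sh start || return 1
  /mnt/seafile/seafile-server-latest/seahub.sh start
}
```

After a power blip, I ssh in over Tailscale and run something like this once. Until then, Linux and Tailscale are already up, but the data stays locked.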
This should happen less often now, because Brian invested in a Tesla solar and Powerwall setup. If there’s another outage any time soon, I will be quite surprised!
Why am I hosting my own cloud storage and file-sync service?
First, there’s the problem that I’m running Linux. Google Drive doesn’t have an official sync client, and Dropbox has been doing goofy things with their Linux client.
Then there’s the paranoia factor. In almost every IT department I’ve ever worked in, I have had the ability to read your email. I’ve never wanted to, and I always thought it was creepy when management wanted to check an employee’s email history. In every one of those IT departments, there has always been at least one person who was EXCITED to tell you that they can read your email. They thought it was awesome.
Last time I checked, Dropbox has the ability to decrypt your data. I have no idea how Google Drive works. These are big companies, and my brain immediately imagines the clones of my old coworkers that are excited about being able to poke around in our data. If you saw the glee in their eyes, you wouldn’t want them nosing around in your files.
Then there’s cost.
Let’s talk about cloud storage pricing!
I’m currently up at 4.4 TB of data on my Seafile server. That includes my data, my wife’s data, and some episodes of The Create/Invent Podcast. Well over 3 TB of that data is my own.
Google’s largest storage plan is 2 TB for $99.99 per year. I don’t think they’ll let you stack two plans to get to 4 TB, but if they did, I guess it would be $200 per year.
Dropbox’s individual plans are 2 TB for $119 per year or 3 TB for $199 per year. I don’t fit into either of these plans, but at least I am close!
I can move up to a Dropbox business plan, but the minimum number of users is three. That puts it at 5 TB for $450 per year, or you can pay $720 for unlimited storage.
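It helps to normalize those prices to dollars per terabyte per year. A tiny helper using the numbers quoted above (integer math, so pennies are dropped):

```shell
#!/bin/sh
# per_tb TERABYTES DOLLARS_PER_YEAR -> whole dollars per TB per year
per_tb() {
  echo $(( $2 / $1 ))
}

per_tb 2 100   # Google 2 TB plan           -> 50
per_tb 3 199   # Dropbox 3 TB individual    -> 66
per_tb 5 450   # Dropbox business (3 users) -> 90
```

The more storage you need, the worse the per-terabyte pricing gets, which is backwards from how hard drives work.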
NOTE: I cheated a bit here! I had a 4 GB Raspberry Pi 4 here as part of my Pi-KVM kit. The whole Pi-KVM setup only uses about 200 megabytes of RAM. Seafile fits quite nicely in 2 GB of RAM, but my Seafile Pi is up and running 24/7, and it is located off-site. I figured I may as well put the 4 GB Pi out there, since there’s a chance I might decide to host something else on there!
Addendum to the note: I took a peek, and my Seafile server was using 1.8 GB of RAM after about six weeks of uptime. I restarted Seafile a few days ago, and it is sitting at around 300 MB of RAM. There must be a leak of some sort. If you’re on a small Pi, you might want to schedule Seafile to restart every once in a while!
A fresh #RaspberryPi and 14 TB USB hard drive arrived today. I have Raspbian installed, Tailscale is up and running, and Seafile seems to be working now.
I’ve been using Google’s storage pricing for arithmetic that justifies my choices because they’re a little cheaper than Dropbox. I am aware that Google won’t actually let me stack a pair of 2 TB plans onto my account, and I’m mostly ignoring the fact that I am well on my way to needing more than two plans’ worth of storage.
I am six months into my experiment, and the gamble is well on its way to paying off. Not having to pay Google has saved me $100, so I am more than 1/3 of the way to paying off my hardware.
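Here is the same gamble as arithmetic, using the hypothetical $200-per-year stacked Google plans against roughly $280 of hardware:

```shell
#!/bin/sh
# Months until the hardware pays for itself, rounded up.
break_even_months() {
  hardware=$1
  per_year=$2
  echo $(( (hardware * 12 + per_year - 1) / per_year ))
}

break_even_months 280 200   # -> 17 months
```

Seventeen months to break even, and every month after that is savings.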
Comparing self-hosted to Google Drive or Dropbox is difficult
If you sign up for Dropbox, you don’t have to do any real work yourself, and that is awesome. You just install their client, and everything starts syncing. It is absolutely fantastic, and this has a HUGE value!
Then you read stories about people getting locked out of their Google accounts. Sometimes it is an absolute nightmare getting things straightened out. I don’t know how to measure the risk of losing my data to something like this, but I’d imagine it is infinitesimally small.
I am quite confident that having to fight to get my Google account turned back on even once would feel like it cost me hundreds of dollars of time, effort, sanity, and frustration.
We hope that Google and Dropbox are doing a good job replicating our data, but it is pretty opaque to us. We have no idea what is really going on up there.
I know for a fact that my Seafile server has no redundancy. Even so, that server is an integral part of my backup plans. Seafile is configured to keep all my data for 90 days. If I save a jpeg 20 times today, each version will be available on the server—assuming I’m not saving faster than the files can sync!
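That 90-day window is a server-side setting. From memory, it amounts to something like this in seafile.conf; double-check the Seafile manual for the current syntax:

```ini
# seafile.conf: keep 90 days of file history for every library
[history]
keep_days = 90
```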
I know that safety net can disappear instantly. The most important data on my Seafile server is synced to my desktop, my NAS, and my laptop. If the Seafile server disappeared right now, I would still have three copies of that data. The NAS even has a few snapshots.
I don’t have three copies of all my data outside the Seafile server. The video I regularly record is just too big to fit on my desktop’s SSDs or even my laptop’s second hard drive. I sync the current year’s video files to my laptop, but the previous several years just won’t fit. I’m keeping it simple here, but this paragraph could easily be turned into a 1,500-word blog post.
Maybe. Sort of. What I’m really going to be giving up on is the centralized file server. I’ve been slowly working toward this for at least the last eight years, but the tiny SSDs we had back then were holding me back, and I really did need bigger volumes than a single hard drive could give me, so I needed to build RAID 6 or RAID 10 arrays.
Things are different now. I can easily fit a year’s worth of the content I generate on a single SSD. My laptop is pretty beefy, so it has room for an NVMe drive along with a 2.5” hard drive.
There are four hard drives in my server. I bought them a long time ago, so they’re small, but they cost $150 each. Today I could shuck two 10 TB drives to stick in my server and desktop, and grab a 4 TB or 5 TB 2.5” drive for my laptop. Then I could sync every bit of my data to three different drives on three completely different machines in my house, and I’d still have a fourth copy with full history on a Raspberry Pi off-site.
I enjoy the idea of replicating my data almost instantly among a redundant array of inexpensive computers.
It is nice that I don’t have to move entirely in one direction. I’m straddling the fence between centralized and decentralized storage today, but the drives will start failing as my NAS ages. When they do, I’ll likely just find myself accidentally sitting fully on the other side of this fence!
I’m using the buddy system for my off-site backup and storage, and so should you!
Every good backup plan includes backing up your data at a second location. More locations would be even better, but I’ll settle for just the one.
There’s a copy of my data on my NAS, desktop, and laptop. If I drop the laptop, I won’t lose anything, but if the house burns down, I’d be in big trouble! That’s why my Seafile server is hosted on a Raspberry Pi at Brian Moses’s house.
Uptime isn’t critical. If the power went out at Brian’s house for a few days, that would be OK. If either of us were having issues with our Internet connections, that would be fine. I don’t need to spend extra to host my Seafile service in a real datacenter with redundant connections to the Internet and on-site power generation.
My little Pi server is sipping about as much power as a 100-watt-equivalent LED light bulb, so I’m not much of an imposition. I’m also more than willing to return the favor.
You should think about finding a buddy to swap Raspberry Pis with, but it should definitely be someone you trust not to exploit your Internet connection for nefarious purposes!
Having cloud file syncing is living in the future
In the old days, if you wanted to share files, you would use a centralized file server. Everyone working on a project would map a share on that NAS, and they’d access the files on that remote machine. If that server lived on the other side of the world, it might feel quite slow working with the files. It might only take a few extra seconds to open a document or spreadsheet, but working with a remote database might be quite slow.
When using a sync service, every time a file is changed, that change is pushed to your local machine. When I am editing video files for The Create/Invent Podcast, they are already on my local SSD, so I am always editing a local file. All the video files are on my desktop and laptop, so I can work on them anywhere. Even if I’m on a slow 3G cellular connection.
Most of the work I do, like this blog post, is stored in text files, and I commit those files to Git repositories. The trouble with this is that I have to remember to commit my changes. Sometimes, those changes aren’t really ready to be committed and pushed to the server.
If I forget to push my changes, and I walk out the door with my laptop, it can be challenging to continue my work. Tailscale will let me easily sneak back in to fix this mistake, but what if I don’t have Internet access on my laptop?
This used to be a pretty common scenario, but I’m rarely completely without an Internet connection. With Seafile, I don’t have to worry. My laptop is up and running right now. As long as it takes me at least 30 seconds or so to walk away from my desk and pack up my laptop, this blog post I’m working on right now will automatically be synced to my laptop. I can ride to the park, open my laptop, and I won’t have to wait to work.
I realize that I’m repeating a lot of what I already said six months ago. I’m trying to emphasize the most important bits while adding as much new information as I can. One of those important bits is Tailscale.
I stopped hosting my own Seafile server because I was sick of rushing to keep my software updated. If a security flaw was patched in Nginx or Seafile, I had to rush as quickly as I could to get my server updated. It was sitting out there facing the entire Internet. Anyone could be poking at it.
I was already using Tailscale for a few months before I decided to host my own Seafile server again. I knew I wasn’t going to put the new server on the Internet. I knew it was going to only be accessible on my Tailscale network.
Tailscale is a zero-config mesh VPN. You install the Tailscale client on two or more machines, log into your Tailscale account, and all those machines can talk directly to each other over point-to-point Wireguard VPN connections. Tailscale is ridiculously easy to set up, it is reliable, and the pricing is fantastic for us home users.
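Getting a machine onto the tailnet really is nearly zero-configuration. The function wrapper here is just for illustration; the commands themselves are the standard ones from Tailscale’s Linux install instructions:

```shell
#!/bin/sh
# One-time setup on a fresh Pi. Run the three commands by hand if you
# prefer; the function is only a convenient bundle.
join_tailnet() {
  curl -fsSL https://tailscale.com/install.sh | sh &&
  sudo tailscale up &&   # prints a login URL to authenticate the device
  tailscale ip -4        # shows the machine's stable 100.x.y.z address
}
```

Once a device has authenticated, it keeps its Tailscale address no matter what network it roams onto.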
My Tailscale machines can talk to my Raspberry Pi no matter where they are located. My Seafile server is at Brian’s house in Texas, while I could be on hotel WiFi in New York with my phone connected to T-Mobile 5G. All three machines can ping each other directly.
Tailscale also lets you share machines with other Tailscale users. My wife has her own Tailscale network that includes her laptop, desktop, and phone. I’ve shared my Seafile server and our Home Assistant server with her. She can sync all her files, and she can check the thermostat when she’s away from home. How cool is that?!
I’ve also shared my Seafile server with my Create/Invent Podcast co-host, because neither of us have enough spare room on our Google Drive accounts to hold much more than a single episode of the show. I just export my work, it lands in our shared library, and it starts syncing right away.
Should you be hosting your own cloud storage and cloud sync service? Maybe. Especially if you have to store and sync more than 2 TB of data. Don’t forget that your time is valuable, and setting up a Pi and keeping all your software up to date will add up to at least several hours. Maybe you’ll find that tedious, maybe you’ll find it enjoyable.
I wholeheartedly believe hosting my own Seafile server on a Pi using Tailscale was the right choice for me. My Pi’s hard drive is encrypted. Seafile encrypts every block on the client side. My network traffic is encrypted inside Tailscale’s WireGuard tunnels. I know my colocation provider has no interest in breaking into my stuff.
I am saving money. I’ll be saving more money as my storage needs keep growing. I’m pleased with my increased level of privacy.
What do you think? Did I make the right choice by hosting my own cloud storage and sync? Should I be spending $450 per year on Dropbox instead, or was spending about $280 on this hardware the smarter move? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
Brian snagged me a handful of the original CloudFree smart plugs for Christmas. I immediately opened them up, attached them to my network, and I put various lights around my office under the control of Home Assistant.
The exciting thing about the CloudFree devices is that they don’t require the servers in the cloud to operate. They arrive at your door preloaded with the open-source Tasmota firmware. You just have to configure them with the hostname and credentials of your Home Assistant server, and they will automatically be detected.
I wanted more, but I was bummed out that they were out of stock.
They’re in stock now!
When we talked to Kenny from CloudFree on The Create/Invent Podcast, he told us that the version 2 plugs are a huge upgrade. He’s discontinued the original switches, and he was waiting patiently for a big shipment of the new switches to arrive from overseas. They’re shipping now, and I’m quite excited.
The new switches cost a bit more, but I think they’re worth it. They’re rated for 15 amps instead of just 10. They have power monitoring just like a Kill-A-Watt power meter. Their new form factor also allows you to plug two plugs into the same outlet.
What am I doing with my six CloudFree smart plugs?!
My own Home Assistant setup is still quite dumb. I’m collecting lots of data, but I haven’t automated anything yet. It is currently just a glorified remote control with zero automation.
I’m hoping to use the power monitoring to help in automating my Rancilio Silvia espresso machine. If it is drawing power more often, that means I’m making a latte. I should be able to use Node Red to use that information to create a virtual sensor in Home Assistant that keeps track of the last time I made coffee.
The idea being that I can automatically power down the espresso machine an hour or so after using it. Then Home Assistant can power it back up when I wake up the next day. Figuring out when I’m awake will be its own can of worms.
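The auto-off half of that plan is the easy part. Here is a sketch of what it might look like as a Home Assistant automation; every entity name is made up:

```yaml
# Hypothetical automation: turn the espresso machine off once its
# heater has drawn (nearly) no power for a full hour.
automation:
  - alias: "Espresso machine auto-off"
    trigger:
      - platform: numeric_state
        entity_id: sensor.espresso_power
        below: 5            # watts reported by the CloudFree plug
        for: "01:00:00"
    action:
      - service: switch.turn_off
        target:
          entity_id: switch.espresso_machine
```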
Bonus feature of the CloudFree v2 smart plug
This doesn’t seem terribly surprising, but I was excited when I realized that you don’t need any home automation infrastructure or even a WiFi access point to connect to in order to make use of the CloudFree plug’s power metering.
An unconfigured Tasmota device announces itself as a WiFi access point. You connect to that access point with a phone or computer to configure the plug. You would normally put in your home’s access point name and password, then you might put in the information to access your Home Assistant server.
The main page of the web interface shows power usage details. Just about everything you might use a Kill-A-Watt meter for is on this page.
How awesome is that? I might have to keep one of these in my laptop bag. Not only can you meter power, but you can of course still toggle the switch over WiFi. Have you ever had to holler across a building to have someone power cycle a device for you?
Power metering isn’t configured optimally from the factory
This isn’t a big deal. The plugs use Tasmota’s default telemetry reporting settings. That means they send data to Home Assistant every 60 seconds.
When I installed my first CloudFree plug on my Rancilio Silvia, it didn’t seem to be reporting any information at all. Then I noticed the kilowatt hour meter was indeed slowly climbing. The trouble was that my espresso machine’s heater doesn’t run long enough to be active when data is uploaded to Home Assistant.
My quick Google-fu told me to change the TelePeriod setting. The minimum is 10 seconds, and this certainly did the job. I was seeing my espresso machine jump to 800 watts every time it turned on.
This worked, but it wasn’t the correct answer. Adam from the Local Bytes store told me to check out Tasmota’s PowerDelta setting. This doesn’t use a timer. Telemetry is sent to Home Assistant whenever the power usage increases by a set number of watts. I set mine to 5 watts.
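Both knobs are set from the Tasmota console. One caveat on units: on recent Tasmota firmware, PowerDelta values up to 100 are percentages, and an absolute watt threshold is written as the wattage plus 100, so check the docs for your version. A 5-watt threshold would look something like this:

```
TelePeriod 10
PowerDelta 105
```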
Now there is almost no delay between the espresso machine heating up and Home Assistant knowing about it.
What’s next?
Kenny Stier from CloudFree told us how he uses the power metering to make his deep fat fryer smarter. He is able to power up the fryer from his phone before he heads home, and the metering on the CloudFree smart plug lets Home Assistant tell him when the oil is up to temperature. That’s pretty slick!
I’m going to work on putting together the last few pieces of my own puzzle here. In my old OpenHAB setup, I used a combination of data to determine whether I was sleeping or not. Are my computers idle? Is my phone charging? Has the Fire TV been used in the last hour? Is my phone screen off? Is my phone at home? If all of this was true, I was probably asleep.
My charging habits are a bit different these days, so I’m going to need to figure out how to determine when my espresso machine needs to turn on in the morning. Once I do that, I’ll just need to figure out when to have Home Assistant turn it back off.
What do you think? Are you using power metering to aid in your home automation actions? Are you using Tasmota? Are you flashing your own plugs, or are you paying a few extra dollars to let CloudFree do it for you? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
I’ve only had my GVM RGB20W light for a day or so. I don’t have a ton to tell you about it, but as soon as I got the light in my hand I was able to answer a question that didn’t seem to be asked anywhere on the Internet. That seems like a good enough reason for a quick write-up.
Let’s just get that question out of the way. How do they get a high CRI rating out of RGB LEDs?!
The answer is that they don’t. There are three sets of LEDs in here: daylight, warm, and RGB. When in correlated color temperature (CCT) mode, the daylight and warm LEDs are mixed to provide the correct color temperature. When in RGB mode, the daylight and warm LEDs go dark.
When they say this is a 20-watt light, I am assuming the total output at 100% in either mode reaches around 20 watts. So it should be a 20-watt bicolor LED in CCT mode or a 20-watt RGB LED. Maybe. Probably.
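You can get a feel for the mixing with back-of-the-napkin interpolation. This is not how GVM calibrates anything, since real lights use measured lookup tables, but it shows the idea of blending a 3200K bank and a 5600K bank:

```shell
#!/bin/sh
# What percentage of the warm (3200K) bank approximates a target color
# temperature, with the daylight (5600K) bank making up the rest?
mix_warm_pct() {
  target=$1
  echo $(( (5600 - target) * 100 / (5600 - 3200) ))
}

mix_warm_pct 3200   # -> 100 (all warm)
mix_warm_pct 4400   # -> 50
mix_warm_pct 5600   # -> 0 (all daylight)
```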
I didn’t actually set out to buy this light. My friend Brian Moses wanted one of these lights, but they come in packs of two at Amazon. He asked if I wanted to split a pack, and I said sure.
For around $125 you get two lights, two 9-volt power supplies, and two mediocre light stands. I think it was a pretty reasonable value.
NOTE: Twitter’s compression demolishes the video quality, but at least I can still tell that the lighting is better!
Ever since signing up to be co-host of The Create/Invent Podcast, I have been trying my best to improve my video recording setup here at my desk.
I was bouncing a random 100-watt-equivalent LED bulb in an IKEA Tertial arm lamp off of a card then onto my face. This left me looking a bit green, so I upgraded that to a 100-watt-equivalent high-CRI Cree bulb. That helped a lot with my white balance, but my camera still needed to correct towards magenta.
The GVM light is most definitely an upgrade. I’m struggling a bit to figure out where to mount it and how to diffuse it, but white balancing my Sony ZV-1 with a gray card is working great. It isn’t pushing towards magenta at all. It is dead center, and that is fantastic.
I was eyeballing the much more expensive GVM 80-watt chip-on-board (COB) light with a soft light dome for $160. Mostly because I really want a nice light dome.
The body of the big GVM light is probably 8” to 12” deep, and the dome is another 18” or 24” deep. There’s just not enough room between me and the wall for that light.
Even if I could squeeze it in, I would have to take it down between shoots, and I really want something more permanent. I would feel uncomfortable with that giant light dome looming over me while I’m working.
Is it really 20 watts?!
I can’t tell you for sure. I don’t have equipment to measure this.
Here’s what I can tell you. The Cree bulb I’m replacing claims to be an 18-watt bulb, and Cree is a reputable LED manufacturer.
The GVM light definitely seems brighter, so I have no reason not to believe them. It is probably a 20-watt light.
Conclusion
I don’t have a ton to say. It seems like the GVM RGB20W is a fine video light, but I haven’t used it long enough to tell you much else.
Are you using the GVM RGB20W video light? Are you using other lights from GVM? Did I make a good choice with this light? Should I have bought something completely different? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
Am I excited about the Steam Deck? I’m pretty sure I am! I’m not quite excited enough to order one—gaming on the go just isn’t something I do all that often.
The problem with the Steam Deck is me. I’m quite boring these days! I don’t commute to an office. I don’t visit clients. I don’t spend a week working away from home in a hotel and fly home on weekends. I don’t spend a month at a time away from home. I do most of my work right here in my home office.
I used to do all of these things, and that has the gears in my brain turning. If you carried a Steam Deck in your laptop bag, what could you do with it besides play games? It is just an x86 laptop. You’re supposed to be able to install Windows on it, so I’m assuming we can boot it from external media.
The Steam Deck is a low-end laptop in a different form factor. It has a smaller screen and built-in gamepads, and it lacks a keyboard and mouse. My quick searches say that it costs about the same as most four-core Ryzen laptops with similar specs. Some comparable laptops cost more, some cost less.
Finding more uses for what you carry might not be about the money
I feel like this needs to be near the front of the post. If you’re flying to a new hotel every week doing consulting work, the cost of the gear you carry in your bag probably isn’t your primary concern.
You have a limited amount of space in your carry-on bag. You are probably trying to cram as much functionality in that laptop bag as you can manage.
If money is no object, I could try to stuff my laptop, a Steam Deck, a Nintendo Switch, and an extra laptop into my backpack. They may not all fit, or it may get ridiculously heavy, but I can certainly try.
What I’m trying to do here is figure out what roles a Steam Deck in my laptop bag might play. Can it add more value than just gaming? Can it replace hardware I am already carrying today?
Low-hanging fruit
The Steam Deck is supposed to work with pretty much any USB-C docking station. There are big, fancy, more expensive docking stations. There are plenty of options under $35, and there are also simple USB-C to HDMI adapters for around $13.
That means it will be easy to plug your Steam Deck into the TV at your hotel. That will let you play local video files directly off the Steam Deck or a thumb drive. You’ll also be able to use Netflix, Hulu, and friends from a web browser.
I’d be excited if this meant I didn’t have to pack a Fire TV stick in my backpack, but the USB-C dock will be roughly the same size and price. I guess the advantage here is that I could use the dock with my laptop as well. You might already be carrying a dock anyway!
This is the idea I am most excited about. The part of the job I am doing right at this moment doesn’t require a lot of hardware. All I need is a screen, a keyboard, and a text editor and I can be writing blogs all day long. The screen doesn’t even have to be that great.
I have an old, cheap Bluetooth keyboard here that isn’t much bigger than my Nintendo Switch. I wouldn’t be surprised if it is smaller than the Steam Deck. I could throw the keyboard and the Steam Deck in my small laptop bag when I take a ride to the park.
I could sit in the shade and play some Dead Cells, then stop at a picnic table and write a blog post.
Maybe this setup is just enough to have with me in an emergency. Having a real Linux box with me to troubleshoot problems would be a step up from just having my Android phone. It wouldn’t be as comfortable to work on as a real laptop, but the only real problem would be comfort.
Boot from a USB SSD or run virtual machines under SteamOS
If this were my Steam Deck, I’d want to avoid messing with the host operating system as much as possible. I’ve been a Linux server guy for more than 20 years, so I’m not afraid to tinker with things, and I expect to be able to reverse any changes I make. I’m assuming SteamOS’s kernel will ship with everything required to run KVM and QEMU.
If you’re new to Linux and worried about breaking your gaming handheld, you might want to boot your alternate configuration from a USB SSD or hard disk.
Why would I want to run virtual machines on my Steam Deck?
This is the first thing that came to mind when I saw that the Steam Deck ships with 16 GB of RAM. How many virtual machines can I cram on this thing, and what should they be doing?!
Maybe you are a C or C++ programmer using distcc, and you would benefit from having four extra cores nearby compiling your code. Maybe you are a 3D artist, and your work would go faster if you could farm out your Blender rendering to your Steam Deck.
I might set my Steam Deck up with Davinci Resolve so it could render YouTube videos for me while I edit the next one on my laptop.
Do these need to live inside virtual machines? Not necessarily, but it would be nice to separate them from future SteamOS updates that might break their configurations.
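For the distcc case, the setup is pleasantly small. This is a sketch assuming distcc is installed on both machines, and the hostname `steamdeck` and the subnet are placeholders:

```shell
# On the Steam Deck (or its VM): run the distcc daemon and allow
# machines on your LAN to submit compile jobs.
distccd --daemon --allow 192.168.1.0/24

# On your laptop: list the compile helpers (local cores plus up to
# four jobs on the Deck), then hand the compiler off to distcc.
export DISTCC_HOSTS="localhost steamdeck/4"
make -j8 CC="distcc gcc"
```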
How about a build, test, or demo environment for software development?
What might a developer set up here? Virtual machines where they can run Docker, Kubernetes, or k3s? Groups of virtual machines for each client?
Maybe you boot a different SSD for each client, or keep a separate encrypted partition for each client on a single external SSD. You could boot your Steam Deck off that SSD and bring up the correct set of machines for the client at hand, and if you don’t have your Steam Deck available, you could probably boot those same virtual machines on your laptop.
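Keeping each client on its own encrypted partition could be as simple as one LUKS volume per client. The device path and the `client-acme` mapper name below are placeholders for this sketch, and `luksFormat` destroys whatever is already on the partition:

```shell
# One-time setup per client -- THIS ERASES /dev/sdb1:
sudo cryptsetup luksFormat /dev/sdb1

# Unlock and mount before a work session; VM images live in here.
sudo cryptsetup open /dev/sdb1 client-acme
sudo mount /dev/mapper/client-acme /mnt/acme

# When you're done, lock it back up:
sudo umount /mnt/acme
sudo cryptsetup close client-acme
```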
I imagine most folks would do this sort of work in the cloud, but maybe that just doesn’t fit your workflow. Maybe you already have a squadron of virtual machines running on your laptop.
If you do, maybe the Steam Deck would let you offload some of that to another device to free up resources on your laptop.
I already carry a tiny HDMI-to-USB dongle in my bag for my Pi-KVM. Will this work with my Steam Deck? I don’t see why not!
This is definitely a ridiculously small niche of a use case, but I’d find it handy. I use my Sony ZV-1’s HDMI output as a webcam for live streaming. If you’re some sort of YouTuber, you probably packed at least one camera anyway, but why would you want to connect it to your Steam Deck?
I wouldn’t mind being able to live stream to YouTube remotely without bringing my admittedly gigantic laptop—the Steam Deck is probably less than ¼ the size.
Every once in a while I wish my live-streaming gear wasn’t plugged into my desktop PC. Maybe I’d like to shut down the PC to eliminate the fan noise. Maybe I’d just like to not worry about software on the PC goobering up my live stream. Maybe plugging the camera into the Steam Deck would just help with cabling logistics.
I know there aren’t many people who would do this, but I’d be excited to have the option.
Could I use the Steam Deck as a tablet?
I am guessing it would be super awkward, but it might work in a pinch?
I use a 7” Android tablet as a book reader. Using a color theme like Solarized Dark is quite pleasant when reading with the lights out. Could I leave the tablet at home and read science fiction books on my Steam Deck?
If the software existed, the Switch would do a fantastic job at this. All you’d have to do is pop the controllers off.
The controllers are integrated into the Steam Deck, and they’re much wider than Switch controls. Valve has been pretty quiet about the dimensions and weight of the Steam Deck, but I’m willing to bet it will make the Switch look petite.
If I were stuck in a hotel room at 2:00 a.m. itching to read a few chapters, would I be able to do it on the Steam Deck? Can I surf Reddit, Imgur, and Twitter? Can I read Hacker News? The answer to all these questions is definitely yes, but would I be comfortable doing so? That’ll be the question!
Will you do anything with your Steam Deck besides gaming?!
I want to hear from you. Are you planning on buying your own Steam Deck? Are you just going to use it as a game console? Are you going to use it to augment your other computing devices when you travel? Will you use your Steam Deck as a small laptop for writing blog content?
I’m excited about this! I’ve had my Gotway Tesla V2 for just over a year now, and I’ve been riding for about four months longer than that. It is easily one of the best toys I’ve ever bought. An electric unicycle (EUC) is a good fit for where I live, and it gets more useful every day. The EUC is a hobby all by itself, but it also does a great job supporting my FPV drone-flying hobby.
Not long after we bought this house, I ordered an inexpensive folding electric scooter. It was fine for what I used it for. I’m two houses away from Plano’s amazing paved bike paths. There’s probably a dozen miles of trail to ride without even having to cross a street. It was fun to take my laptop or FPV drone backpack out riding with me.
The scooter wasn’t portable. It sounds portable because it folds. Fifty pounds doesn’t sound too heavy, but it is when you’re trying to heft an awkwardly shaped 4’-long tube with handlebars into the back of your tiny SUV. I wanted a personal electric vehicle (PEV) that I could easily take anywhere.
My friend Alex told me I should get a OneWheel XR, but they’re about $1,800. I didn’t even know if I could ride one. Then I discovered the existence of EUCs. I happened to see a refurbished InMotion V5F for $399 shipped. Could I ride a unicycle? Would I ever use it? At that price, it seemed worth finding out!
The InMotion V5F has as much power and range as a OneWheel XR, and nearly as much top speed, at a fraction of the price. It was a good deal, and it got me hooked. I’ve been lending the V5F out to friends so they can learn to ride, but it came back last month with a dead battery. I suspect there’s just one or two dead 18650 cells in the pack, but I haven’t opened it up to check.
Why did I upgrade to the Gotway Tesla V2?
To tell you the truth, I didn’t really outgrow my InMotion V5F. I would have been content to ride it for at least a year, but my friends made that difficult for me! My friend Tanner bought a Gotway MSX Pro, and my wife upgraded to an Aventon Pace 350 e-bike. Both of these easily go faster than my InMotion V5F’s 17 mph top speed. I was getting warning beeps all the time when I rode with those two!
My scooter had me shying away from buying one of the biggest wheels with huge batteries. The V5F was less than 25 pounds. [Tanner’s MSX][mxp] was 55 pounds. Do I really want to spend even more money on a heavier wheel? If I have a 40 mph top speed, am I going to want to go that fast? Probably. Eventually.
There are so many choices of wheel between my InMotion V5F and Tanner’s then top-of-the-line MSX Pro. Which one should you choose?
A discount at eWheels on the Gotway Tesla V2 nudged me in that direction. It has almost three times as much battery and nearly double the top speed of my little InMotion V5F, and it only weighs 42 pounds. I believe it was on sale for $1,350, and it shipped almost immediately. Tanner’s top-of-the-line wheel would have cost $650 more, and it wouldn’t have shipped for a month or two!
How much do I ride the Tesla?
Not as much as I’d like! When I had around 700 miles on the odometer back in March, I was joking that I’d like to put another 700 miles on it before I’d had the wheel for a year. While this was absolutely within the realm of possibility, I expected I might actually hit 1,000 miles. I didn’t even manage that! There’s something just shy of 850 miles on the odometer right now.
I do the majority of my riding on our local bike trails. There’s a fantastic picnic table in a secluded spot quite a ways from any parking about four miles from here. I like to take my laptop out there and write blogs for an hour. The vast majority of my rides wind up being 8-mile rides because of this.
The weather has been keeping me home. A lot of days were just a little too windy and chilly to be sitting outside at my laptop. We’ve had a lot of heavy rain, but we’ve also had a ton of days with light rain.
The rain washes fine silt onto the bike path, and that stuff is ridiculously slippery, and every puddle could have a layer of that stuff at the bottom. If it rains today, I probably won’t want to ride tomorrow. If it rains twice in a week, I’m unlikely to get out.
The pandemic has been restrictive
It is so easy to throw the Tesla in the car. Now that my friends and I are fully vaccinated, it won’t be long before we start driving out to places and going for rides. That will be fun and exciting!
Not only that, but there are a handful of places I can ride to that are within 5 miles of home. Many of these places are just off the bike trail, so I barely have to deal with cars and traffic. I’m looking forward to hitting up the donut shop, Napoli’s pizza, and the frozen yogurt shop.
The pandemic isn’t stopping me from stopping at the donut shop. The trouble is that they close at noon, and I am not a morning person.
We eat pizza from Napoli’s almost every week. I don’t even know if they’re open for indoor dining. I assume and hope that they are not, but when it is time for that, I look forward to heading over there with my laptop on hot afternoons so I can work in some air conditioning while eating delicious pizza!
The yogurt shop seems icky during a pandemic, because it is a self-serve shop. As far as I know, that is still the case right now. I’d feel much better about one or two employees touching all the toppings and ice cream dispensers. Knowing that all the folks that can’t even figure out how to put their mask over their nose are touching the gummy bears and peanut butter cups creeps me out.
Everything will be getting better, and I’ll be excited to use my wheel for all these activities I had in mind when I ordered my first EUC before the pandemic!
Was the Tesla V2 the right choice?
The Tesla V2 really does sit in the sweet spot for me. I do my best to not ride faster than about 20 mph, because I don’t wear enough gear for that to be safe, so the Tesla still has speed that I can grow into. I rarely ride farther than 12 miles on a single day, and I’ve gotten 34 miles out of a single charge, so the range is quite reasonable for me.
At its price point, the Tesla is an amazing wheel.
Do I ever wish I had more wheel?
Yes. Most definitely. My Tesla is fantastic on the buttery-smooth bike paths, it does just fine on the rougher streets, and it does well enough when I need to take a shortcut off road to avoid obstacles or people.
I’ve put a couple of miles on Tanner’s Veteran Sherman. I always joke that it really does ride like a tank. The first time I tried it, I rode to the end of my street, but I couldn’t turn it around in our cul-de-sac. I leaned and twisted as hard as I felt comfortable, but it just didn’t want to turn. I felt the same way when I got on my Tesla after spending months on the tiny InMotion V5F.
The Sherman just goes where you point it. It is big and heavy, and that knobby tire just wants to stay on course. Every time I put a few miles in on streets instead of on the bike path, I kind of wish I was riding a Sherman. I’d feel more confident, and I know it’ll handle any surprise bumps or small potholes better than my Tesla.
I’m constantly tempted by the KingSong S18!
When I bought my Tesla V2, the KingSong S18 was just about to be open for preorders. It was a tempting wheel at the time, but I’m so glad I chose the Tesla. I would have been waiting months for the S18 to ship, and the quality of the first batch was pretty awful.
It sounds like the current revision of the KingSong S18 has ironed out most of the problems. It sounds like a fun, agile wheel, and I would love to have a suspension.
On paper, the KingSong S18 is basically a Tesla V2 with a suspension, though Wrong Way says the S18 goes quite a bit farther on a charge! The extra $500 gets you a bit more battery and power, a little more top speed, a suspension, and a wider tire. I would pay the extra $500 just for the suspension. The rest is a nice bonus!
If something happened to my Tesla today, I would almost definitely order a KingSong S18 to replace it.
There is no way I would buy an S18 to park next to my Tesla. They’re just too similar. If I were going to own two wheels, I’d be looking at something like a Sherman or Gotway RS19. Something big. Something with more range. Something that just plows through grass and rough terrain.
Among other minor changes, the Begode T3 has a hollow motor and a bigger battery. In theory, the hollow motor should be a big upgrade since it allows for a much bigger bearing. In practice, people have been having trouble with bearings failing in hollow-motor wheels. We’ll see how that goes.
I wouldn’t mind having 50% more battery, but I worry that this is going to push the price too high. I’m waiting for the Tesla V3 to show up at eWheels before I decide whether I think it is a good value. At $1,350 the Tesla V2 has been an easy choice. If that gets pushed to $1,550 or more, then I’d be more tempted to pay a little more for an 18” or 19” wheel.
The changes make sense for Gotway/Begode. The Gotway MCM5 v2 specs and pricing make more sense if they also push the specs of the Tesla up a few notches.
I haven’t even told you how the Tesla has held up!
It has held up way better than the InMotion V5F! I can’t really blame InMotion for that. I dropped my V5F A LOT while I was learning. At least four other people besides myself practiced on my V5F, and I am assuming they all dropped it quite a bit. It is scuffed, bruised, and a bit cracked. I wouldn’t be horribly surprised if the drops are the reason it is no longer holding a charge.
I have dropped my Tesla a few times, though never in the spectacular ways my V5F has crashed. There are a few scuffs on the Tesla, but no cracks. The dipping pedals have scraped the ground a bit, but overall it is in fine shape.
I haven’t done a range test since the first few weeks I had it, but I did do a pretty long ride last month. I clearly remember being impressed that the battery was still working well, but I can’t find photographic evidence to back up my memory. I recall riding around 15 miles and still having about 70% charge remaining.
I am quite confident that I can still [get 34 miles on a single charge][gtrt].
The Tesla forced me to upgrade my safety gear
When I rode my little 14” V5F, I usually only wore my cheap bicycle helmet. I used to average about 10 mph with short bursts up to 15 mph. I was always able to jump off and jog to a stop when things went wrong.
Then I bought the Tesla. My average speeds quickly climbed to 15 mph with short bursts up to 20 mph. These days I have EUC World set to give me warning beeps at 21 mph. Most of my rides average 15 mph still, because I do have to spend time navigating around other people, but I’m bursting up to 25 mph, and I’m pretty good at staying just 1 mph short of the beeps.
I started wearing my wrist guards as soon as I acquired the Tesla. When I noticed that my average speeds were getting to 15 mph, I knew I needed a better helmet. There’s a much higher chance of falling flat on my face now, so I wound up buying a Bell Super 3R helmet.
I tried a cheap motorcycle helmet, but visibility and airflow are both poor. The Bell cost three times as much, but it looks like a block of Swiss cheese. I get plenty of airflow even if I’m moving slow, the face hole is quite large, and I have that sturdy guard protecting my chin.
I’m in Texas. We’ll be over 100° most days every summer, and it definitely isn’t a dry heat. The Bell Super 3R is worth every penny.
It has only been a year, and you can’t buy a Tesla V2 anymore!
I wish my experiences could be a little more relevant to you. I don’t think the Tesla V2 was released more than a year before I bought mine, and as of a few months ago, stock of the last of the V2 models seems to be running out everywhere.
I’m excited that I can tell you that I’ve had a good year with my Tesla V2, and that it has treated me so well. I’m disappointed that you can’t directly repeat my success. Sure, the new Tesla seems like it will be a fantastic wheel, but I’m not expecting it to be the right wheel for someone that would be eyeballing the Tesla V2!
As near as I can tell, it is difficult to choose a bad wheel. There are some obvious gotchas to watch out for. You’ll probably hate lugging a 77-pound Veteran Sherman up three flights of stairs every day, and commuting on a tiny MTen3 will probably be uncomfortable.
These wheels are all made by companies in China that are looking to cut every corner they can get away with. Some wheels have been catching fire, though the problem doesn’t seem to be nearly as rampant as the hoverboard debacle everyone probably remembers.
What are the plans for the next year?!
First of all, I am hoping to not upgrade to something bigger and better this year. New toys are always tempting. I’d love a suspension. I’d enjoy a bigger, faster, even more stable wheel. More importantly, though, I would like a properly waterproof wheel. Not necessarily something I can submerge, but something that won’t be likely to cut out on me in the rain. Maybe in a year or two there will be a nice suspension wheel that checks this box.
I am just going to continue to ride. I look forward to riding to the donut shop. I look forward to writing blogs while eating pizza. I look forward to finding new places to ride. In other words, I am looking forward to doing all the things that the pandemic has prevented me from doing!
What do you think? Are you riding a Gotway Tesla V2? Are you looking to buy something similar? Have you been riding for a while? Are you a casual EUC user like me, or are you putting a ton of miles on your wheel? Do you think I need to invest in the gear that will allow me to safely ride faster, or should I just stay casual? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
I have had my Sony ZV-1 for nearly four months now. It has been a fantastic little camera. It is doing the jobs I expected it to do, and it is doing them quite well.
I considered waiting another month or two before writing this follow-up blog post. I’ve been very much underutilizing my poor ZV-1. We’ve had month-long medical emergencies, cold, rain, a long cat-related medical emergency, and then snow, power outages, and extremely cold weather here in Plano!
I certainly expected that I’d haul the camera and an FPV miniquad out with me on some rides on my Gotway Tesla electric unicycle and talk at the camera about something interesting by now. It just hasn’t happened. I’m confident that it will.
Under ideal conditions, the C920 isn’t a bad video camera. I noticed this last week when Brian and I tested the functionality of Riverside.fm for podcasting. Brian’s Logitech C922 didn’t look bad, but something looked off. When I analyzed his locally recorded video, I noticed that quite a few frames wound up being doubled.
Does the Twitter butcher a screenshot of a video as badly as it does a video clip? I'm betting that it doesn't! That's the Logitech C920 tuned as best I can get it on the left, and the Sony ZV-1 on the right. pic.twitter.com/1AI7YiU2ME
I don’t seem to drop any frames. The ZV-1 really shines under poor lighting conditions, but when I light myself well, the C920 really isn’t all that far behind.
Except for the inconsistent frame rate. That’s just awful.
Adjusting settings while recording a podcast is challenging!
I record all my podcasts using manual exposure settings. I want to get as close to a 1/60 shutter as I can. I also want as much blurring of the background as I can get, so I want to make sure the ZV-1 is wide open at f/1.8. That leaves me with control of the ISO, so it isn’t difficult to tune things in.
The only problem is that the camera is looking right at me. Sure, the screen is flipped around, but I have to reach around the other side to work the limited number of buttons and knobs on the Sony ZV-1. Yes, up is always up, but left and right are reversed. I often move cursors in the wrong direction or hit the wrong button. I’m glad I can get this stuff set up before we start recording an interview!
I have my new Sony ZV-1 plugged into my HDMI doodad, and it is working with OBS. I'm still making a few adjustments. Is this scary!? pic.twitter.com/ZV0PzfA22G
I did finally manage to get the Android app working. It certainly helps with this, but connecting an app to make these adjustments brings its own kind of pain. You have to find the app, make sure the camera is in the right mode to accept a connection, and even then you might try two or three times before it works. Even when everything works correctly, this still takes quite a bit of time.
Firmware update turns the Sony ZV-1 into a USB webcam?
As soon as I heard about this firmware update, I downloaded it immediately. Could I really use my Sony ZV-1 as a webcam and skip my HDMI-to-USB dongle?!
Sort of. It works. With the new firmware, the ZV-1 does indeed show up as a plain old UVC webcam as long as you activate the correct options in the menu. It most definitely isn’t a replacement for using a USB dongle.
The ZV-1 is limited to 720p via USB. It claims to run at 30 frames per second, but the output sure doesn’t look like it. It is an upgrade in picture quality over the Logitech C920, but the ZV-1 has that same sort of stuttering frame rate problem over USB.
It is worth spending $20 on an HDMI cable and a USB dongle to get smooth 1080p or 4K video out of your Sony ZV-1.
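On Linux, it’s easy to confirm exactly what the camera is offering over USB. This assumes the `v4l-utils` package is installed, and `/dev/video0` is a guess you should match against the device listing:

```shell
# List capture devices; the ZV-1 (or an HDMI-to-USB dongle) should
# show up here as a plain UVC device:
v4l2-ctl --list-devices

# Show every resolution and frame rate the device claims to support:
v4l2-ctl --device /dev/video0 --list-formats-ext
```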
It depends. Just a few weeks ago my friend Alex sold his Sony RX100 IV to my friend Brian, and Alex snagged a used Sony ZV-1 off Craigslist. They both got extremely good deals, and I am most definitely envious.
I got to do a good amount of research when Brian was trying to figure out if the older RX100 would be a good fit for him. The Sony ZV-1 is really an RX100 VIII. That’s four hardware revisions newer than Brian’s used RX100 IV.
NOTE: That’s Brian talking to the camera about Tailscale using his RX100 IV.
Brian wants to use his RX100 as a webcam and as his top-down camera in his recording studio. His older RX100 has a comparable lens and exactly the same 1” sensor as my Sony ZV-1. The RX100 IV also has good eye-tracking autofocus.
It is entirely possible that 5 years of image-processing improvements mean my ZV-1 makes better use of that sensor, but at least we know they’re working with the same hardware.
I’m quite confident the RX100 will work just as well as my ZV-1 would for his intended use case. I don’t want to tell you how much money Brian saved.
I’m not saying you should buy a used RX100 IV. It may not be the best value. For all I know, you can find newer cameras in the RX lineup for less. The MK4 just happened to be the one that traded hands within my circle of friends recently!
What are the advantages of the ZV-1 over previous RX100 models?
The Sony ZV-1 has some really nice upgrades over the previous models, especially if you’re planning on doing the talking-to-the-camera shtick. The built-in microphone is much improved, and the screen has more comfortable articulation if you want to film yourself. Most of the RX100 series even lacks a microphone input.
Are these features worth $150 to $300 or more compared to one of the previous RX100 models? For me, the answer is easily a yes!
The older the RX100, the bigger the difference in price, but the difference in performance grows wider with age, too. It is easy to start pointing at improvements on the spec sheet to find $150 or more extra value in the ZV-1.
I told Brian that I’d be buying that RX100 IV just to keep it mounted in my office if he didn’t buy it. Even if I did buy that camera, I would still want my Sony ZV-1 to take with me for recording on the go.
I don’t know that I have a lot to add to this part of the discussion since my first few days with the camera. I don’t like using the ZV-1 for photography. It doesn’t have a viewfinder. If I’m going to hold a camera up in my hands without being able to help hold it steady with my face, I might as well just use my phone.
That said, the Sony ZV-1’s dynamic range is much better than I ever expected! My aging full-frame Canon 6D is supposed to have somewhere around 12 stops of dynamic range. I did some pretty simple tests in my office. I shot very underexposed pictures on both the ZV-1 and 6D and pushed the raw photos up to a reasonable exposure.
The nightly builds of Darktable seem to understand the raw files from my Sony ZV-1. Twitter will likely overcompress both of these. I'm hoping the difference between the jpg off the camera and the processed RAW will still be obvious. pic.twitter.com/ihqnqeXof9
My tests were terrible, and it didn’t feel like a fair comparison. The ZV-1 seemed like it could be pushed even brighter from underexposure than the Canon 6D, but the 6D has much nicer color science.
I really only have one actual photo that I’ve taken with the Sony ZV-1. At the time I took this photo of a hot air balloon, I didn’t have a recent enough version of Darktable to edit the raw file, so I just published the jpeg on Instagram.
I’ve since upgraded Darktable, and I can edit the ZV-1 raw files. I definitely feel like it is worth shooting raw on this camera. This isn’t a heavily edited photo. The extra blue in the sky and the shade of green of the grass was probably my choice, but the raw file really let me recover detail in the shadows so you can see the texture of the grass. The jpeg from the camera just didn’t have that possibility.
Did I buy the right camera? Should I have paid more for a Sony a6600?!
I’m doing a bad enough job at taking my Sony ZV-1 with me. I haven’t gotten any vlogging done over the last four months except for a couple minutes of test footage. If I’m not going to manage to take my tiny ZV-1 out with me, I would never manage to take an APS-C body and lens out on the road!
This is like comparing apples to oranges, but at least they’re both fruits. There’s a lot of overlap in functionality, and there’s a lot that the Sony ZV-1 can do nearly as well as the Sony a6600.
Both cameras are going to do well when I can control the lighting. Sure, a nice lens on the a6600 would improve my podcasts a bit, but would that be worth spending three or four times as much money? Maybe someday!
The a6600 without a lens is larger and weighs nearly twice as much as the Sony ZV-1. Add even a pancake lens to the a6600 and it just won’t fit in my front pocket. Having the option to drop the Sony ZV-1 in my pocket is awesome, and when I do, I am carrying a camera, lens, and reasonable microphone with me!
When I bought the Sony ZV-1, I was asking myself if I would continue to use the Osmo Pocket for vlogging. I wish I had figured out the answer to that question over the last four months. I’m sure I’ll have a better answer over the coming months, but I’m going to give you the facts.
I did some vloggy test footage with the Osmo Pocket, the Freewill wide angle lens, and a mic clown nose as a wind blocker. If you could see how windy it was, you would be extremely impressed by these results. pic.twitter.com/sr7Nt0YF15
I’m excited that I can fit the ZV-1 in my pocket, but that’s not the whole story. There’s only so much you can do without some sort of tripod. You most definitely can’t hold the ZV-1 up in front of you and talk to the camera without a small tripod!
This is where I start to ask questions. Why carry two things when the Osmo Pocket is smaller than either the tripod or the ZV-1?
If I take the awesome Sony ZV-1 out with me on a journey where I want to talk to the camera, I have to bring a little tripod.
But the DJI Osmo Pocket is an entire vlogging camera that I can point right at my face without using accessories, and it is smaller than most any tripod! pic.twitter.com/g5xZRZWEOk
I’ve tried vlogging with a phone. Every time you turn on the phone you have to unlock it, find your camera app, flip the camera around, then hit record. I can take the Osmo Pocket out with one hand and be recording myself in less than 5 seconds, and I don’t even have to look at the thing to do it.
I don’t have an answer. I expected I would have posted at least two or three vlogs with the ZV-1 by now. I’ve been doing a bad job and it hasn’t happened yet!
Yes. We’re already to the conclusion, or at least the conclusion so far. I’m doing a bad job. I should have used the Sony ZV-1 for more work over the last four months, but life has gotten in the way.
I will try to do a better job utilizing my new camera over the coming months, but so far it has been doing its primary job of replacing my webcam quite flawlessly. It is a huge upgrade, and I’m happy to have it!
What do you think? Did I write my update post on the Sony ZV-1 too early? Am I underutilizing it? Am I missing out on some important features, or do you think I bought entirely the wrong camera?! Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
I’m not new to the world of Infiniband. I bought a pair of 20-Gigabit Mellanox Infiniband cards in 2016 to connect my desktop PC to my virtual machine host. It has been chugging along just fine, though I’ve never gotten a full 20 gigabits per second out of this setup. We’ll talk more about that later.
My posts about my upgrade from gigabit Ethernet to Infiniband have always been solidly in my top ten posts according to my analytics, though lately they’ve only just barely held on to that spot. That got me thinking that my 20-gigabit setup is getting long in the tooth, and it might just be time for an upgrade!
No. I do not need to upgrade. When files are cached in RAM on my NAS virtual machine, my 20-gigabit hardware can hit about 700 megabytes per second over NFS. That’s more than twice as fast as it can pull data off the hard disks or SSD cache. I’m most definitely not going to notice an upgrade to faster hardware.
That’s not the only reason to upgrade. You fine folks shouldn’t be buying 20-gigabit hardware any longer. The 40-gigabit gear is nicer, and it costs about what I paid for used 20-gigabit gear in 2016. I don’t like recommending things I’m not using myself, so an upgrade was definitely in my future.
My server side's Infiniband card has 8x PCIe 2.0 lanes available. I don't have either an 8x slot or PCIe 3.0 4x slot available in my desktop. Either of those would just about double my speeds, but I'm not willing to move my GPU out of its 16x PCIe 3.0 slot! pic.twitter.com/IqlsB5ztod
My old gear wasn’t limited by the Infiniband interface. It was limited by the card’s PCIe interface and the slots I have available in my two machines.
My server side has a 16x PCIe 2.0 slot available and my desktop has a 4x PCIe 2.0 slot. I knew I wouldn’t hit 40 gigabits per second with the new hardware, but as long as they negotiated to PCIe 2.0 instead of PCIe 1.1, my speeds would surely double!
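The arithmetic backing that up is simple. PCIe 1.x and 2.0 both use 8b/10b encoding, so only 80% of the raw signaling rate carries data, and bandwidth scales with lane count. A quick sketch:

```python
# Back-of-the-envelope usable PCIe bandwidth. The 8b/10b encoding
# factor applies to PCIe 1.x and 2.0; PCIe 3.0+ uses 128b/130b.
def pcie_gbps(gt_per_s, lanes, encoding=8 / 10):
    """Usable bandwidth in gigabits per second."""
    return gt_per_s * lanes * encoding

# Old card stuck at 2.5 GT/s in the desktop's 4x slot:
print(pcie_gbps(2.5, 4))  # 8.0 Gb/s -- the ceiling I was hitting
# Same slot negotiating PCIe 2.0 speeds at 5 GT/s:
print(pcie_gbps(5.0, 4))  # 16.0 Gb/s -- roughly double
```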
I had a lot of confusion about PCIe specifications!
When I wrote about my 20-gigabit Infiniband cards in 2016, I claimed that the 8 gigabits per second I was seeing was a limit of the PCIe bus. I was correct, but in rereading that post and looking at my hardware and the dmesg output on driver initialization, I was confused!
Mellanox claims my old 25408 cards are PCIe 2.0. When the driver initializes, it claims the cards are PCIe 2.0, but the driver also says they’re operating at 2.5 GT/s. That’s PCIe 1.1 speeds.
This isn’t relevant to the 40-gigabit or 56-gigabit hardware, but I think it is worth clearing up. All the cards in Mellanox’s 25000-series lineup follow the PCIe 2.0 spec, but half of the cards only support 2.5 GT/s speeds. The other half can operate at PCIe 2.0’s full speed of 5 GT/s.
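You don’t have to dig through dmesg to see what a card actually negotiated. `lspci` will show both what the card is capable of and what it is currently running at; the `03:00.0` address below is a placeholder you’d get from the first command:

```shell
# Find the Mellanox card's PCI address:
lspci | grep -i mellanox

# LnkCap is what the card supports; LnkSta is what it negotiated.
# "2.5GT/s" means PCIe 1.1 signaling, "5GT/s" means PCIe 2.0.
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
```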
Do you really need 10-, 40-, or 56-gigabit network cards?
We are starting to see a lot of 2.5 gigabit Ethernet gear becoming available at reasonable prices. This stuff will run just fine on the older Cat-5E cables running across your house, and 8-port switches are starting to go on sale for less than $20 per port.
Not only that, but inexpensive server motherboards are showing up with 2.5 gigabit Ethernet ports. The awesome little power-sipping [Topton N5105 NAS motherboard][ttp] that Brian Moses is selling in his eBay store has four 2.5 gigabit Ethernet ports.
Is 2.5 gigabit fast enough? It is faster than most SATA hard disks. It isn’t quite as fast as the fastest SATA SSDs, but 2.5 gigabit Ethernet is definitely playing the same sport.
It is a good bump in speed over gigabit Ethernet, and it is rather inexpensive.
[I Am Excited About the Topton N5105 Mini-ITX NAS Motherboard!][ttb]
You might want to look at 56-gigabit Mellanox cards
I looked at 56-gigabit Infiniband cards. I bought two, but I made a mistake. I accidentally ordered a pair of HP FlexibleLOM cards for only $25. FlexibleLOM is very close to being PCIe, but the pinout isn’t compatible and the form factor isn’t quite right. Actual PCIe 56-gigabit cards cost $80 on eBay.
I don’t know why I ordered FlexibleLOM cards. I think I was just super excited about 56-gigabit Infiniband cards for only $13 each. Don’t make my mistake.
No Infiniband upgrade for me today! I don't remember noticing that these aren't standard form factor PCIe cards when I ordered, but they sure are!
If this weren't for a blog post, I'd just make them work, but I'm going to reorder correct hardware. pic.twitter.com/xxJ1G1fzA9
NOTE: FlexibleLOM to PCIe adapters exist, and they might be a really good value, since you can get two FlexibleLOM 56-gigabit cards for $25 compared to $150 or more for a pair of PCIe cards. They didn’t seem easy to source, so I opted to go the easy route.
I wound up downgrading to 40-gigabit Mellanox ConnectX-3 PCIe cards. The 56-gigabit cards won’t run Infiniband any faster for me because my available PCIe slots are the real bottleneck here. If you’re running Infiniband, this will likely be true for you as well, and you can save yourself $80 or more.
If you want to run super fast Ethernet using these cards, it might be worth spending a few extra dollars. My 40-gigabit cards can only operate at 10 gigabits per second in Ethernet mode. The 56-gigabit Mellanox cards can operate as 40-gigabit Ethernet adapters.
Ethernet is easier to configure than Infiniband, especially if all you’re interested in is IP networking. I was hoping to test this out, because 40gbe would simplify my setup quite a bit. I opted to save the $80 and just continue routing to my virtual machines.
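For reference, flipping a Mellanox mlx4 card between its Infiniband and Ethernet personalities is done through sysfs. This is a sketch assuming the in-kernel mlx4 driver; the PCI address is my card's, so substitute your own:

```shell
# Check the current protocol of port 1 (ib, eth, or auto):
cat /sys/bus/pci/devices/0000:01:00.0/mlx4_port1

# Switch port 1 to Ethernet mode (as root):
echo eth > /sys/bus/pci/devices/0000:01:00.0/mlx4_port1
```

Remember that on my 40-gigabit ConnectX-3 cards this only buys you 10-gigabit Ethernet; the 56-gigabit cards are the ones that can do 40gbe.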
Did I mention that this is all used enterprise-grade hardware?
I’m not encouraging you to buy brand new Infiniband cards. You’ll pay at least twice as much for a single card as it would cost you to connect three machines with dual-port Infiniband cards from eBay.
The 20-gigabit Infiniband cards I bought in 2016 were already 10 years old when I started using them. The 40-gigabit cards I just installed are probably around 10 years old as well.
Can I run Infiniband across my house?
Not easily. I’m using a 1-meter QSFP+ cable to directly connect one Infiniband card to another. My desktop computer and KVM host both live in my office and they sit right next to each other. These QSFP+ cables can only be about 3 meters long.
Oh good. The Mellanox QSFP+ cable I ordered has a sticker that says FDR on it. That's an indication that I ordered the correct cable to match the pair of 56-gigabit FDR Infiniband cards that will be arriving Monday! pic.twitter.com/sMwCbcsY0k
If you need a longer run, you have to use fiber. I’m seeing some 50’ lengths of fiber with QSFP+ modules on each end for around $70. There are QSFP+ transceiver modules for $30. You’d have to find your own compatible fiber to plug into those modules.
What if I need to connect more than two machines?!
The vast majority of Infiniband cards on eBay have two ports. That’s enough ports to directly connect three machines. This is what my friend Brian did with his 10-gigabit Ethernet setup. In practice, our configurations are pretty similar. I just have one fewer machine on my super-fast network.
My desktop and VM server live on two different networks. They’re both connected to my home’s gigabit-Ethernet network, and they’re both plugged into my tiny Infiniband network. The Infiniband network has its own subnet, and I’m using the hosts file on my desktop to make sure the Infiniband connection is used to connect to any virtual machines that need super high-speed connections. This is especially important with my NAS virtual machine.
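The hosts-file trick is just a couple of extra entries. The addresses below are the ones from my Infiniband subnet, but the hostnames are made up for illustration:

```
# /etc/hosts on the desktop: resolve the KVM host and the NAS VM via
# the Infiniband subnet instead of the gigabit Ethernet network
10.42.253.1    kvm kvm.example.lan
10.42.252.2    nas nas.example.lan
```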
What if I need to connect more than THREE machines, Pat?!
You could install even more Infiniband cards, but I wouldn’t recommend it.
There are a few small 40-gigabit QDR Infiniband switches, but most are huge 36-port beasts. They’re not expensive. Some are as low as $150, but most are closer to $250.
This is quite a bump in cost compared to plugging three machines directly into one another with dual-port cards, but using an Infiniband switch also simplifies the network configuration considerably. It is still a fraction of the price of 10gbe over CAT-6 cable.
Why are you using Infiniband?
It just sounds cool, doesn’t it? It often starts fun conversations too. When people are chatting about network stuff, and you mention that you run Infiniband at home, folks are often surprised. More often than not they’ve never even heard of Infiniband. You also get to say goofy things like, “To Infiniband and beyond!”
For me, Infiniband makes my NAS feel like a local disk installed in my desktop. The virtual machine host where my NAS VM lives isn’t exactly high-end these days, and I didn’t build it to saturate a 10-gigabit connection. It has a pair of mirrored 250 GB Samsung 850 EVO SSDs and four 4 TB 7200 RPM hard disks in a RAID 10. The SSDs are the boot volume and are also being used as lvmcache for the hard disks.
I usually see read and write speeds in the 300 megabyte-per-second range. Small random writes get propped up by the SSD cache, but most of what I hit the NAS for involves video editing. The storage in my cameras is much slower than this, and my disks are rarely the bottleneck when editing video.
The fastest disks in my server are the 850 EVO SSDs, and their top benchmarked speed is somewhere around 350 megabytes per second. The spinning RAID 10 probably tops out around there too. My disk access wouldn’t be any faster if they were installed directly in my desktop.
This just means I have room to grow. I could upgrade to faster solid-state drives for my lvmcache and triple the count of disks in my RAID 10, and I would still have a bit of extra room on the network. That’s awesome!
What can you do with Infiniband that you can’t do with Ethernet?
Infiniband supports Remote Direct Memory Access (RDMA). This allows memory to be copied between hosts without much CPU intervention.
The most common use of RDMA is in conjunction with iSCSI devices. iSCSI normally operates over TCP/IP. When using iSCSI on Infiniband, the IP stack is bypassed and memory is transferred directly from one machine to another. This reduces latency and increases throughput.
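I'm not running iSER here, but for the curious, the rough shape of it looks like this. The target name and portal address are placeholders, and this assumes the libibverbs tools and open-iscsi are installed:

```shell
# Verify that RDMA-capable devices are visible to userspace:
ibv_devinfo | grep -E 'hca_id|state'

# Tell open-iscsi to use the iSER transport for an existing target:
iscsiadm -m node -T iqn.2021-01.lan.example:storage -p 10.42.253.1 \
  -o update -n iface.transport_name -v iser
```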
If you’re connecting virtual machines to a Storage Area Network (SAN), this may be of interest to you.
I’ve really only ever used iSCSI to say that I’ve done it and to tell people how easy it is to do. I’m not interested in setting things up here at home to rely on iSCSI and a separate storage server.
How do I set up Infiniband on Linux?
Everything I wrote about setting up Infiniband in 2016 works today. Sort of. A few weeks ago I upgraded my KVM host from Ubuntu 16.04 to 18.04 and then immediately to 20.04. One of those upgrades decided to rename my Infiniband interfaces.
```
[ 11.960168] mlx4_core 0000:01:00.0 ibp1s0: renamed from ib0
[ 11.975603] mlx4_core 0000:01:00.0 ibp1s0d1: renamed from ib1
```
This goofed up my configuration in /etc/network/interfaces. Not only that, but the old network configuration using /etc/network/ has been deprecated in favor of NetworkManager.
I’m still using the old-style configuration on the server, and it works fine. All I did was pull the old 20-gigabit cards, install the new 40-gigabit cards, and all my configuration was just working on my first boot.
If you have a fresh install of Ubuntu 20.04 or any other distro that is using NetworkManager, I have to imagine that it is much easier to just use NetworkManager.
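On a NetworkManager system, a static IPoIB interface can be set up with a single `nmcli` command. This is a sketch using my interface name and addressing; swap in your own:

```shell
# Create and bring up an IPoIB connection. Connected transport mode is
# what allows the 65520-byte MTU discussed later in this post.
nmcli connection add type infiniband con-name ipoib ifname ibp1s0 \
  infiniband.transport-mode connected infiniband.mtu 65520 \
  ipv4.method manual ipv4.addresses 10.42.253.1/24

nmcli connection up ipoib
```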
There are two solutions for running regular network traffic over Infiniband. There’s Ethernet over Infiniband (EoIB), which runs at layer 2, and there’s IP over Infiniband (IPoIB), which runs at layer 3. EoIB is not in the mainline Linux kernel, while IPoIB is. IPoIB just works out of the box.
I wanted to avoid using EoIB because it requires installing software from Mellanox. What if I want to upgrade my desktop to a bleeding edge kernel that Mellanox doesn’t support? What if there’s a conflict between my Nvidia driver and the Mellanox EoIB driver? I don’t want to deal with any of that.
That created a new problem. Since IPoIB runs on layer 3, I can’t just bridge virtual machines to that device. Bridging happens at layer 2. This means I am forced to route from the Infiniband interface to my virtual machines.
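The routed setup boils down to two pieces: forwarding on the KVM host, and a route on the desktop. The subnets and addresses here are the ones from my own network, so adjust to taste:

```shell
# On the KVM host: forward packets between the IPoIB interface
# and the virtual machines' subnet
sysctl -w net.ipv4.ip_forward=1

# On the desktop: reach the VM subnet through the KVM host's
# Infiniband address
ip route add 10.42.252.0/24 via 10.42.253.1
```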
I touched on this a bit earlier when I mentioned that 56-gigabit Mellanox cards could also be used as 40-gigabit Ethernet devices. If you want to use drivers in the mainline kernel AND be able to plunk your virtual machines onto a bridged interface, it may well be worth spending the extra cash on 56-gigabit cards. The Ethernet drivers will have no trouble with this.
This is already a long blog post. I wrote about my adventures in getting IPoIB to work well with the 20-gigabit Infiniband cards, and the configuration hasn’t changed. There are some gotchas in there, for sure.
You need to get your MTU up to 65520. If any interface in the chain is stuck at the default of 1500, you might experience extremely slow speeds to your virtual machines. I had a persnickety interface hiding on me.
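Connected transport mode is what allows IPoIB to use the big MTU in the first place. A sketch, run as root, using my interface name:

```shell
# Switch the IPoIB interface to connected mode, then raise the MTU:
echo connected > /sys/class/net/ibp1s0/mode
ip link set dev ibp1s0 mtu 65520

# Verify it took. Check every device in the chain to the VMs, too --
# one interface stuck at 1500 is enough to tank your throughput:
ip link show ibp1s0 | grep mtu
```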
Even with everything configured correctly, you’re going to lose a little throughput when routing. On the 20-gigabit Infiniband hardware, I was losing roughly one gigabit per second when talking to the virtual machines. I’m doing better with the 40-gigabit gear, so your mileage may vary here.
How fast is the new 40-gigabit Infiniband hardware?
This is the part I’ve been waiting for ever since I pulled the trigger on the new Infiniband cards. Here’s what I know.
I tend to see 300 megabytes per second when connected to my NAS VM with my old 20-gigabit Infiniband hardware. That’s about three times faster than gigabit Ethernet, and it is pretty much the top speed of my solid-state and hard drives. This isn’t going to be improved, which is a bummer.
Let’s start with what the logs say when the driver initializes the Infiniband cards:
Desktop:

```
[ 16.270996] mlx4_core 0000:06:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:02:04.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
```

KVM server:

```
[ 9.313679] mlx4_core 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x8 link at 0000:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
```
If I had a PCIe 3.0 slot with 8 lanes available on each end, my maximum speeds would be around 64 gigabits per second. I’d need both ports to reach speeds like that!
In the server, there are 8 PCIe 2.0 lanes available giving us up to 32 gigabits per second. My desktop has 4 PCIe 2.0 lanes available, which is my limiting factor here. The only faster slot in my desktop is the 16x PCIe 3.0 slot where my Nvidia GPU lives. I’m just going to have to live with a 16-gigabit top speed.
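The arithmetic behind those dmesg numbers is easy to check. PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, so each lane carries 4 gigabits per second of real data. PCIe 3.0 signals at 8 GT/s with the much leaner 128b/130b encoding:

```shell
# Usable PCIe bandwidth = transfer rate * encoding efficiency * lane count
pcie2_x4=$(awk 'BEGIN { printf "%.0f", 5 * 8 / 10 * 4 }')      # my desktop slot
pcie2_x8=$(awk 'BEGIN { printf "%.0f", 5 * 8 / 10 * 8 }')      # my server slot
pcie3_x8=$(awk 'BEGIN { printf "%.1f", 8 * 128 / 130 * 8 }')   # what the card could do
echo "desktop: $pcie2_x4 Gb/s, server: $pcie2_x8 Gb/s, card: $pcie3_x8 Gb/s"
```

That lines up with the 16.000 and 32.000 Gb/s figures above, and comes out within rounding of the 63.008 Gb/s the driver reports for an 8-lane PCIe 3.0 link.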
Next up is the iperf benchmark. This will give me a more realistic top speed including all the IP network overhead.
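For the record, these numbers come from plain old iperf run against the Infiniband addresses, with no special tuning:

```shell
# On the KVM host (10.42.253.1), start the listener:
iperf -s

# On the desktop, run the test against the host's Infiniband address:
iperf -c 10.42.253.1
```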
KVM Server as host:

```
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.42.253.1 port 5001 connected with 10.42.253.2 port 45530
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  14.3 GBytes  12.3 Gbits/sec
```

KVM Server as client:

```
------------------------------------------------------------
Client connecting to 10.42.253.2, TCP port 5001
TCP window size: 6.01 MByte (default)
------------------------------------------------------------
[  3] local 10.42.253.1 port 36662 connected with 10.42.253.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  14.9 GBytes  12.8 Gbits/sec
```
The old cards topped out at 6.53 gigabits per second. The new cards are nearly twice as fast!
When routing to my NAS virtual machine, my iperf tests would run about 700 megabits per second slower compared to testing directly against the KVM host. I was super hyped up when I saw the new numbers!
NAS as host:

```
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.42.252.2 port 5001 connected with 10.42.253.2 port 37304
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  14.1 GBytes  12.1 Gbits/sec
```

NAS as client:

```
------------------------------------------------------------
Client connecting to 10.42.253.2, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 10.42.252.2 port 59246 connected with 10.42.253.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  14.9 GBytes  12.8 Gbits/sec
```
This is awesome! I’m losing zero to a couple of hundred megabits per second to my extra hop. That’s a big improvement!
iperf is fun. I get to throw around gigantic numbers that I can point at excitedly. That’s great, but I’m more interested in what these numbers mean for me on a day-to-day basis. What kind of speeds can my NFS server reach?
I tried to reproduce these tests for this post, but priming and forcibly dropping caches wasn’t cooperating. They’re all going too fast! Here’s the tweet with my original NFS tests:
As with the 20-gigabit cards, I'm getting about 270 megabytes per second on an uncached read over NFS. The 40-gigabit cards have brought a RAM cached read up to 1.2 gigabytes per second. A locally cached dd on my desktop is right where it always was at 9+ GB/s. pic.twitter.com/mO33YVeCGk
My unprimed, mostly uncached test copy of a 4 GB DVD image ran at 272 megabytes per second. That’s right around my usual speeds. It is limited by the SSD cache and the rather small number of ancient 4 TB mechanical drives in the server.
Then I dropped my local caches and transferred the same file again. There’s more than enough RAM in the NAS virtual machine to hold the entire DVD image in cache, so I should be testing the maximum throughput of my NFS server. You can see that I’m hitting 1.1 or 1.2 gigabytes per second. I’ve seen it hit 1.3 gigabytes per second just as often, so my NFS server is hovering right around the 10-gigabit-per-second mark. That’s not bad!
The most I’d ever seen out of the old 20-gigabit hardware over NFS was around 700 megabytes per second.
The last dd command winds up testing the local cache on my desktop. That can move the file at nearly 10 gigabytes per second. Isn’t it neat being able to move a file across the network at even 10% the speed of RAM?!
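If you want to run the same three tests yourself, it's just `dd` with the page cache dropped between runs. The mount point and file name here are placeholders:

```shell
# 1) Mostly uncached read over NFS: drop caches on BOTH client and server first
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/mnt/nas/dvd-image.iso of=/dev/null bs=1M

# 2) Drop only the client's cache, leaving the file warm in the server's RAM,
#    to measure the NFS server's maximum throughput
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/mnt/nas/dvd-image.iso of=/dev/null bs=1M

# 3) Run it again immediately to read from the local page cache
dd if=/mnt/nas/dvd-image.iso of=/dev/null bs=1M
```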
This is pretty much what I’d predicted and exactly what I was hoping for. My Infiniband network speed has just about doubled. That’s fun!
I’m not going to notice a difference in practice. My disks were my bottleneck before, and I knew they would continue to be my bottleneck after the upgrade.
I’m actually maxing out my available PCIe slots. That’s exciting! Not only that, but my network is actually truly faster than Brian’s 10-gigabit Ethernet. That’s even better!
For most home NAS builds, the gigabit Ethernet interface is the bottleneck. My tiny Infiniband network is rarely going to be using more than 25% of its capacity. I can grow into a lot more hard drives and faster SSD cache before I saturate this 40-gigabit hardware!
Conclusion
I’m pleased to be able to say that I feel the same way about the 40-gigabit Infiniband hardware as I did about the 20-gigabit hardware five years ago. At around $100 to connect two machines, it really is an inexpensive performance boost for your home network.
It may not have been a wise investment of time, effort, and $100 for me. I’m not going to see any real advantage over my old gear. If you’re already running 10gbe or 20-gigabit Infiniband, you’re probably in the same boat, and there isn’t much reason to upgrade. If you’re investing in faster-than-gigabit hardware for the first time, I think you should skip that stuff and go straight to 40-gigabit Infiniband or even 56-gigabit Infiniband cards that can do 40-gigabit Ethernet.
What do you think? Do you need to be able to move files around at home faster than the 100 megabytes per second you’re getting out of your gigabit Ethernet network? Is 40-gigabit Infiniband a good fit for you, or would you rather pay double for 40-gigabit Ethernet cards? Are you glad I paid for a useless upgrade just to publish my findings? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!