Is It Time For You to Set Up Tailscale ACLs?


If you’re a lone Tailscale user like me, there’s a good chance that you have no pressing need to set up Tailscale’s access control lists (ACLs). Until quite recently, I didn’t feel there was much reason to lock anything down.

Pretty much every computer I own has been running Tailscale for more than a year now. They could all ping each other. In fact, most of them are on the same LAN, and they could ping each other before I had Tailscale. Tailscale already locked them down a bit more thoroughly for me. Why lock them down any more?

Then I started using Tailscale SSH

As soon as I started enabling Tailscale SSH, I needed to set up some access controls. I wanted to emulate my previous setup.

My desktop and two laptops had their own SSH private keys, and their matching public keys were distributed to all my other machines. That meant these three computers could connect to any computer I own.

"ssh": [
        // I don't actually use this rule anymore!
          "action": "accept",
          "src":    ["tag:workstation"],
          "dst":    ["tag:server", "tag:workstation"],
          "users":  ["autogroup:nonroot", "root"],

I gave those three devices a tag of workstation, and I stuck a server tag on everything else. Then I set up an ssh rule in Tailscale to allow any workstation to ssh into any server or any other workstation.
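One detail worth noting: tags have to be declared in the tagOwners section of the policy file before you can apply them to devices or reference them in rules. Mine would have looked something like this (using autogroup:admin as the owner is my assumption — yours might list specific users instead):

```
"tagOwners": {
    "tag:workstation": ["autogroup:admin"],
    "tag:server":      ["autogroup:admin"],
},
```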

So far, so good. This configuration happens on Tailscale’s Access Controls tab, but it isn’t in the acls section of the file. At this point, my Tailnet was still wide open.
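That wide-open state comes from Tailscale’s default policy, which is a single rule accepting everything:

```
"acls": [
    // The default rule: every device can reach every port on every other device.
    {"action": "accept", "src": ["*"], "dst": ["*:*"]},
],
```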

I got worried when I added my public web server to my Tailnet

I have a tiny Digital Ocean droplet running nginx, and I always said I should install Tailscale out there. My web server droplet had been running an outdated operating system for a while, so I knew I would be creating a fresh VPS at some point.

I finally did that. I spun up one of the new $4 per month droplets, copied my nginx config over, and installed Tailscale. I am super excited about this because it means I don’t even have to have an ssh port open to the Internet on my web server.

However, this means that a scary server, one that I don’t personally own, sitting out there listening for connections on the Internet, is connected directly to my Tailnet. Yikes!

Tagging all your machines for use in ACLs is hard!

It isn’t hard because you have to click on every machine to add tags. It is challenging because choosing names for your tags is easy to goof up!

My original decision that a workstation would connect to a server and never the other way around was too simple. It wasn’t the right way for me to break things down, and as I started adding more tags, I wasn’t able to easily set things up the way I wanted.

I’ve been doing my best to make sure my Tailscale nodes don’t have any services open on their physical network adapters. My workstations are mostly locked down well, and I moved things like my Octoprint virtual machine behind the NAT interface of KVM instead of being bridged to my LAN.

Even so, I have two servers at home that need to be accessible from outside my Tailnet. My NAS shares video to my Fire TV devices just in case I need to watch Manimal, and I have lots of unsafe devices around the house that need to connect to my Home Assistant server.

This seemed easy. I immediately tagged my NAS, my Home Assistant server, and my public web server with a tag of dmz.

What was the problem with this?

I want my workstations to be able to see everything. I want my servers to be able to communicate with each other, but I don’t want my servers in the dmz to be able to connect to my internal servers or workstations.

This all seemed simple and smart until I realized that everything in my dmz already had a server tag. I also very quickly realized that my Home Assistant server listening to my LAN is much less threatening than my web server listening to the public Internet. One of those should be on an even more restricted tag!

Where did I actually land?

I have four main tags now:

  • workstation
  • server-ts
  • server-dmz
  • server-external

My personal workstations can connect to anything. Machines tagged server-ts can connect to machines tagged server-ts and server-dmz, while the server-dmz servers can only talk to other server-dmz machines.

  "acls": [
    {"action": "accept", "src": ["tag:workstation"],   "dst": ["*:*"]},
    {"action": "accept", "src": ["tag:server-ts"],     "dst": ["tag:server-ts:*", "tag:server-dmz:*", "autogroup:internet:*"]},
    {"action": "accept", "src": ["tag:server-dmz"],     "dst": ["tag:server-dmz:*"]},
    {"action": "accept", "src": ["tag:blogdev"],      "dst": ["tag:blogprod:22"]},
    {"action": "accept", "src": ["nas"],              "dst": ["seafile:*"]},
    {"action": "accept", "src": ["autogroup:shared"], "dst": ["tag:shared:22,80,443"]},

These are all of my ACLs as of this writing. There are a couple of more specific rules in there that I haven’t talked about yet.

There’s a rule there that allows one of my virtual machines here at home to publish content to my public web server.

My NAS is in the dmz, so I had to give it its own rule to allow it to connect to my Seafile Pi. My NAS syncs extra copies of some of my data for use as a local backup!

I goobered up my exit nodes!

I am more than a little embarrassed by how many times I had to go back and forth between desks to figure out why the exit node on my GL.iNet Mango stopped passing traffic to the Internet.

The Mango had a tag that allowed it to access the exit node. If I took that tag away, it couldn’t ping the exit node. I’d add it back, and while it could ping the exit node, it couldn’t route any farther. If I dropped the original ACL that leaves everything wide open, the Mango could route traffic just fine. What was going wrong?!

It seems like I had this idea in my head that Tailscale’s ACLs only applied to Tailscale nodes and addresses. I didn’t immediately realize that I had to explicitly allow access to the Internet or even other subnets I might be routing!

{"action": "accept", "src": ["tag:server-ts"], "dst": ["tag:server-ts:*", "tag:server-dmz:*", "autogroup:internet:*"]},

I just had to add autogroup:internet to the allowed destinations for the appropriate tag. Duh!

Don’t think too hard before implementing your ACLs

This is especially true if you are down here at my scale with a couple dozen nodes and only a few shared nodes. Just drop some tags on things and set up some access controls that allow nodes access to what they need.

You probably won’t set things up optimally. I know I didn’t on my first try, and I am already seeing things I’d like to do differently. Even if my initial attempt left things more open than I might like, it was still a huge win just because it blocked my public web server from connecting to the rest of my Tailnet. Any other improvements are minor by comparison.

If money and other people’s livelihoods are on the line, maybe you should spend some time having meetings and planning things out on whiteboards. It only takes a few seconds to switch back to the single default ACL that leaves your Tailnet wide open, so if you do find a problem, you can at least revert your changes quickly and easily!

Tailscale SSH is affected by Tailscale network ACLs!

This seems obvious, but I wasn’t positive that it would be the case! Tailscale seems to always make the best possible default choices, and that got me thinking that Tailscale’s own SSH server might ignore the network ACLs as long as the connection was allowed in the ssh section of the access control configuration.

This does not seem to be the case. If you want to use Tailscale SSH, then your networking ACLs have to allow it. To be clear, I think this was the correct thing for Tailscale to do.
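In practice, that means a working Tailscale SSH setup needs two rules: a network ACL that allows the connection to port 22, and an entry in the ssh section that grants the actual login. Here is a minimal sketch using my tags (the exact split is my assumption about the smallest working configuration):

```
"acls": [
    // Network layer: allow the TCP connection to port 22.
    {"action": "accept", "src": ["tag:workstation"], "dst": ["tag:server-ts:22"]},
],
"ssh": [
    // SSH layer: allow the actual login as a non-root user.
    {"action": "accept", "src": ["tag:workstation"], "dst": ["tag:server-ts"], "users": ["autogroup:nonroot"]},
],
```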

Shared nodes are allowed access by default

I wasn’t sure about this. The default single ACL just has one line that allows everyone access to everything. The first thing you do when designing your own ACLs is delete that entry. At that point nobody has access to anything, so I assumed I would need to add a line similar to this:

{"action": "accept", "src": ["autogroup:shared"], "dst": ["tag:shared:*"]},

We tested this, and it turned out not to be necessary, but I figured it would be a good idea to lock down my shared nodes just a bit, so I wound up using this ACL:

{"action": "accept", "src": ["autogroup:shared"], "dst": ["tag:shared:22,80,443"]},

It is a bit lazy. Three people need access to ports 80 or 443 on the Seafile server, and Brian needs SSH access to rsync files to his blog. It gets the job done.

I did test out removing ports 80 and 443 from this ACL, and I watched the connections on my Seafile server. All the Tailscale IP addresses that I didn’t own dropped off the netstat list, and when I put those ports back in the ACLs, everyone connected back up immediately.

I am sure the documentation explains this, but I doubt I am the only one who likes to see things work in practice just to make sure!

Forgetting you have Tailscale ACLs configured makes troubleshooting a real challenge!

This happened to me yesterday! A friend sent me a GL.iNet GL-AXT1800 router to help him get his identical router to pass local traffic through a Tailscale exit node.

I installed the ancient OpenWRT Tailscale package, replaced the binaries with the official Tailscale static ARM binaries, ran tailscale up, and it gave me the URL to open to authenticate this new node. Everything went smoothly, except I couldn’t ping any of my other Tailscale devices!

Derp. Since I didn’t remember to put any tags on my new Tailscale device, it wasn’t matching any of my Tailscale ACLs, so it couldn’t actually connect to anything!

This was a simple mistake, but I walked back and forth between two desks and rebooted the GL.iNet router at least twice before remembering that I even configured any Tailscale ACLs in the first place!


If you’re just a home gamer like I am, you probably don’t need to worry about Tailscale ACLs. If you have one or more nodes on your Tailnet that have services running on the open Internet, you may want to lock things down a bit. It would be a real bummer if someone managed to crack open your public web server, because they might be able to ride Tailscale past your other routers and firewalls.

One of the awesome things about Tailscale is that I have absolutely no idea what you’re doing with it. You might just be one person sharing a Minecraft server with some friends. You might be sharing a couple of servers with business partners like I am. You might even be managing a massive and complicated Tailnet at a giant corporation.

You and your Minecraft server probably don’t need to worry about ACLs, but if you are in a position where you should be thinking about tightening up your access controls, I hope my thoughts have been helpful!

The OpenWRT Routers from GL.iNet Are Even Cooler Than I Thought!


I have had my little GL.iNet Mango router for about two years now. It was an impulse buy. It was on sale for less than $20 on Amazon, and I just couldn’t pass it up. It was exciting for me to learn that there is a manufacturer that ships OpenWRT on their routers, and I really wanted to mess around with one.

I rarely use my Mango router. It lives in my laptop bag. If I ever need a WiFi extender, it is there. If my home router fails, it would be my emergency spare. My Mango is a Tailscale subnet router, so if I am ever away from home and need to reach an odd device via my Tailscale network, then I can. It is pretty awesome!
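Setting up the subnet router side of that is a single flag. This is a sketch that assumes the Mango is still using GL.iNet’s default 192.168.8.0/24 LAN, and the advertised route still has to be approved in the Tailscale admin console:

```shell
# Advertise the Mango's own LAN to the rest of the Tailnet
tailscale up --advertise-routes=192.168.8.0/24
```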

I bought a new laptop a few months ago, and I have been tidying up my various laptop bags. I realized that I hadn’t updated OpenWRT on my Mango in two years, and my Tailscale install is just as old. It seems like it is time to update things!

I had some problems along the way, and I managed to lose all access to my Mango. It could hand out DHCP addresses. It could route traffic. It wouldn’t respond to pings, HTTP, or SSH.

I am really excited that I had problems, because I learned that the GL.iNet routers are even more awesome than I thought!

NOTE: I didn’t really have any problems with my Mango! Something weird was happening on my Windows 11 laptop.

There’s more available than just the stock firmware!

When I could no longer ping my Mango router, I first tried resetting to factory defaults. That didn’t work. Then I tried re-flashing the latest firmware, and it still didn’t work.

Then I noticed that GL.iNet supplies several different firmware images for their routers. There’s the stock image with their own GUI called Domino. There’s another that skips Domino and just has the official OpenWRT LuCi GUI. Then there’s a third firmware that routes all your traffic through the Tor network. How cool is that?!

I flashed the LuCi-only firmware, and my Mango started working correctly. All the official GL.iNet firmware images for the Mango are based on OpenWRT 19.07.08. That’s not too bad. The OpenWRT folks are still updating version 19, but the first release of version 21 happened last year.

You can definitely download a version 21 build or a release candidate of version 22 for the Mango directly from the OpenWRT site.

Should I just run LuCi, or do I want the Domino GUI?

I love LuCi. If I were permanently installing a GL.iNet router in my home I would most definitely skip GL.iNet’s Domino GUI. I would most likely be installing that release candidate of OpenWRT 22.03 just to avoid a major upgrade in the near future.

My Mango doesn’t have a permanent home. It is a tool that lives in my laptop bag. There’s a very good chance that I might let a friend borrow it. The Domino GUI is WAY more friend-friendly than LuCi!

The Domino GUI also makes some difficult things as easy as clicking a button.

The GL.iNet GUI has a simple dialog to allow you to use another WiFi network as your WAN port. It has an equally simple dialog to configure the Mango as a WiFi repeater.

Either of those configurations would require dozens of clicks in OpenWRT’s LuCi GUI, and Domino even lets you tie those configuration settings to a physical switch on the router.

I definitely want the Domino GUI on my toolkit’s router.

Should I have bought a higher-end GL.iNet router?

Two really cool things came into my life at about the same time two years ago: the GL.iNet Mango and Tailscale. The Mango only has three or four megabytes of free disk space, and the Tailscale static binaries add up to more than 20 megabytes. One cool thing doesn’t fit on the other cool thing!

Two years ago, the only way to get Tailscale onto an OpenWRT router was to install it manually. Now you can just install it with the OpenWRT package manager, and that is awesome!

I cheated and put the Tailscale binary on a USB flash drive when I set things up two years ago. It’d be nice to not have to do this, but in a way, I am pleased with this configuration.

What if I loan my Mango to a friend? What if they’re less than trustworthy? I can just pop the USB drive out! All the Tailscale configuration and keys live on that drive. If they don’t have that, they can’t access my Tailnet.

I am pretty sure the OpenWRT Tailscale package will work on the Mango

The Tailscale package is only around 2.8 megabytes. That would nearly fit on a fresh Mango router with the stock GL.iNet firmware!

The GL.iNet firmware is running OpenWRT 19, and there don’t seem to be any Tailscale packages in the OpenWRT 19 repositories. Even if you could squeeze the package in, you won’t find an official OpenWRT build to install.

I did notice that when I installed the clean OpenWRT 19 image from GL.iNet, there was around 7 megabytes of free space. That’s plenty of room to install the Tailscale package!

You should be in good shape if you download the latest version of OpenWRT for your Mango straight from the OpenWRT site. It sure looks as though you’ll have enough room, and the packages will be in the repository for you to install right away.

I didn’t want to give up the Domino GUI. Being able to connect to the router and click a few buttons to switch modes between routing, repeating, and other things is ridiculously handy.

How do I run Tailscale on the Mango if the Mango doesn’t have enough storage?

I have been arguing with myself for five minutes about how much information to include in this section. A step-by-step guide would make this blog way too long, and a 10,000’ overview seems too broad. Let’s see if I can land in a good spot near the middle.

I mostly repeated what I did to install Tailscale on my Mango in 2020, but I made room on the diminutive SanDisk flash drive for Ventoy. I also cleaned things up so I can modify the Tailscale startup job without logging in to the Mango.

Ventoy is occupying the first two partitions on my USB drive, so I added a small ext3 filesystem as the third partition. This has a copy of my script, the state file for Tailscale, and it is where I unpacked the Tailscale mipsle package. For the convenience of future upgrades, I created a symlink pointing to the current version of Tailscale. This is the root directory of the ext3 filesystem:

pat@zaphod:~$ ls -l /mnt/sda3
total 17744
drwx------ 2 root root    16384 Jul 24 16:49 lost+found
lrwxrwxrwx 1 root root       25 Sep 18 06:52 tailscale -> tailscale_1.31.71_mipsle/
drwxr-xr-x 3 root root     4096 Jul 18 12:58 tailscale_1.28.0_mipsle
drwxr-xr-x 3 root root     4096 Sep 15 22:54 tailscale_1.31.71_mipsle
-rw------- 1 root root     1418 Sep 18 07:05 tailscale.state
-rwxr-xr-x 1 root root      676 Sep 18 07:12
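That symlink is the whole upgrade mechanism: unpack a new release next to the old one and repoint the link, and the startup job picks up the new build on the next boot. Here is a self-contained sketch of the flow, using a temporary directory in place of /mnt/sda3:

```shell
# Stand-in for the flash drive's ext3 filesystem
root=$(mktemp -d)
mkdir "$root/tailscale_1.28.0_mipsle" "$root/tailscale_1.31.71_mipsle"

# The startup job always launches $root/tailscale/tailscaled,
# so an upgrade is just repointing the symlink at the new directory.
ln -sfn tailscale_1.28.0_mipsle "$root/tailscale"
ln -sfn tailscale_1.31.71_mipsle "$root/tailscale"
readlink "$root/tailscale"   # -> tailscale_1.31.71_mipsle
```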

This is my startup script:

#! /bin/sh

# Not sure if the sleep is necessary!
sleep 10

/mnt/sda3/tailscale/tailscaled -state /mnt/sda3/tailscale.state > /dev/null 2>&1 &
# Make sure my bootable USB partition is unmounted cleanly
/bin/umount /mnt/sda2
/bin/umount /mnt/Ventoy

To make this work, I used the advanced settings tab to add this one line to the end of OpenWRT’s startup script:

(sleep 15; /mnt/sda3/ ) &

This could all be better, but it works. I did have to sign in once via ssh to run tailscaled and tailscale up manually so I could authorize the Mango on my Tailnet.

The various sleep commands sprinkled around are just laziness. You can probably guess why each of them exist.

I purposely chose to store the tailscale.state file on the flash drive. If I loan out my Mango to a friend, I might not want them connecting to my Tailscale network. If I pop the flash drive out, they won’t have any of the data needed to make a connection.

My GL.iNet Mango can’t use Tailscale as an exit node

And I am not sure exactly why! Tailscale routes packets without issue. I have this node configured as a Tailscale subnet router for its own local subnet. That seems to work correctly, so it is able to route packets from WiFi clients to nodes on my Tailnet.

I was hoping to be able to have the Mango route traffic through an exit node. That way, a FireTV or AppleTV or something similar could watch American Netflix from Ireland, but it isn’t cooperating with me.

At first I tried tailscale up --exit-node=seafile, but that immediately cut off all access to local clients connected to the Mango. I was able to ssh in via Tailscale and verify that the Mango was using the exit node.

I updated that command to tailscale up --exit-node=seafile --exit-node-allow-lan-access, and my Mango’s local devices were able to talk to the Mango again, but they weren’t able to pass traffic any farther than the Mango.

I am close, but not quite close enough!

UPDATE: I got my Mango routing properly through an exit node just a few hours after publishing this blog! This should most likely get a proper write-up, but here’s the short answer. I added the tailscale0 interface as an unmanaged interface in the LuCI interface and made sure it was attached to the WAN firewall group. I am guessing this let the OpenWRT NAT rules do their thing!
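For reference, the /etc/config equivalent of those LuCI clicks looks roughly like this. This is a sketch: on OpenWRT 19 the interface option is spelled ifname rather than device, and your firewall file will already have a wan zone to add the list line to:

```
# /etc/config/network -- declare tailscale0 as an unmanaged interface
config interface 'tailscale'
        option proto 'none'
        option device 'tailscale0'

# /etc/config/firewall -- attach the new interface to the existing wan zone
config zone
        option name 'wan'
        list network 'tailscale'
```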

What else can I do with my 32 gigabyte Tailscale USB drive?!

When I tested the viability of running Tailscale on a USB flash drive, I used a drive I had on hand. It was an extremely large drive in the physical sense. Once I knew it was working, I bought the smallest SanDisk Cruzer Fit that I could find. It was 32 GB, which was nearly 32 GB more storage than I needed!

While I was redoing things this week, I decided that I should find a use for the rest of that space. I installed Ventoy and a whole mess of bootable disk images. Ventoy should let the drive boot on both UEFI and legacy BIOS systems. Ventoy’s installation script even had an option to leave some space on the end of the drive, so I added a little 512 megabyte ext3 partition for OpenWRT to use.

My little Ventoy drive has images for:

  • Memtest86
  • FreeDOS
  • Ubuntu 22.04 installer
  • Xubuntu 22.04 installer
  • Windows 10 installer
  • Windows 11 installer

None of this is terribly exciting. I only boot up a computer with a USB drive once every few years now, but I did have to make several USB boot drives over the last few months. I had to reinstall Windows 10 on a laptop with a dead NVMe. I had to install Xubuntu 22.04 on my desktop when I upgraded to an NVMe. I had to run Memtest86 when I bought new RAM a few weeks ago.

I wish I thought to set this up sooner!

I should be carrying an identical bootable drive in my laptop bag, but I figure it can’t hurt to have spare boot images squirrelled away in my travel router’s USB port!


I think I made the correct choice by continuing to use the stock GL.iNet firmware on my Mango. If this were my permanent home router, it would be way more valuable having an extra 10 megabytes of flash for packages, but this isn’t my home firewall. This is a Swiss Army Knife that I keep in my laptop bag.

Being able to quickly configure the Mango to be a router using a wired connection, a router using WiFi, or a WiFi extender is so much more valuable in my laptop bag! Why can’t I do this easily with stock OpenWRT? Is there a package I don’t know about?!

How Much RAM Do You Need in 2022?


I probably wouldn’t have given this much thought if I didn’t have a stick of RAM fail on me last year. I don’t know that I can remember another time when a stick of RAM that passed a long Memtest86+ test failed on me, and this was a total failure. The machine locked up and wouldn’t boot until I found and removed the bad stick.

Four sticks of RAM, One is Dead!

I couldn’t figure out whether I could do an advance RMA of my memory, and Corsair wanted me to RMA all four sticks as a set. I didn’t want to deal with downtime, and I didn’t want to buy RAM to hold me over while I waited, so I figured I’d limp along with this single-channel 24 GB configuration until it caused problems.

Running while short 8 GB really didn’t cause problems. Everything that I do fit pretty well into 24 GB of RAM. Even so, I bought a faster pair of inexpensive 16 GB DDR4-3200 DIMMs a few weeks ago, so I am back at 32 GB, back to a dual-channel configuration, and my slightly overclocked Ryzen 1600’s RAM is running at 2933 instead of 2666.

Some benchmarks are quite a bit faster with dual-channel RAM, but I’m not noticing the extra 8 GB.

I’ve always bought as much RAM as possible

Within reason. There are always diminishing returns, but any extra RAM will be used for disk caching. For the last two or three decades, disks have been slow. Really slow. Especially when it comes to random access.

A 7200 RPM disk can perform between 100 and 200 random reads or writes per second. That was true for my 13 GB IBM Deskstar drives twenty years ago, and is true even for the latest 18 TB 7200 RPM drives today. The heads in a mechanical disk have to wait until the data they need passes underneath. Any given point on the disk only passes under the read head 120 times each second.
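That 120 falls straight out of the spindle speed, and it puts a hard floor under random access times. A quick sanity check with shell arithmetic:

```shell
rpm=7200
echo $((rpm / 60))          # rotations per second: 120
# On average the head waits half a rotation for its data, in milliseconds:
echo $((60000 / rpm / 2))   # about 4 ms of rotational latency alone, before any seek time
```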

In those days, extra RAM was the only thing hiding those slow seek times.

pat@zaphod:~$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       9.0Gi       1.6Gi       917Mi        20Gi        20Gi
Swap:           23Gi        93Mi        23Gi

My memory usage today is definitely higher than it was a decade ago, but I have always bought enough RAM to make sure at least 50% would be used for disk cache.

My last two workstations have had 32 GB of RAM

I am doing my best to think back to all past desktop computers. The timeline is pretty fuzzy, but it seems like I approximately doubled the memory each time I have upgraded. I almost listed each machine off here, but that feels unnecessary.

My FX-8350 had 32 GB of RAM, which was double the RAM in the giant HP laptop that it replaced. I put 32 GB into the Ryzen 1600 when I built it in 2017, and I left it at 32 GB when I replaced the RAM last month.

What’s different today?

NVMe and SSD drives are really fast!

We’ve needed to cache our disks with RAM for decades because our disks had been stuck at 200 random I/O operations per second. My first SSD could manage more than 5,000 I/O operations per second, and my new NVMe can handle 500,000 operations per second.

We don’t have to spackle over slow disks any longer. If I push my machine to the point where it only has a couple of gigabytes of free RAM to use as disk cache, it doesn’t matter. I won’t notice the difference.

While I am just sitting here writing this blog, my machine is using around nine gigabytes of memory for actual work. If I fire up a game, that will likely eat up another 10 or 12 gigabytes.

While I was limping along with only 24 GB in my machine, this was never a problem. Running a game might bring me down to just several gigabytes of RAM for disk cache, yet I didn’t notice any sort of slowdowns or stutters when I would switch back and forth between my game and productivity tasks.

My SATA SSD was fast enough!

I forgot to mention that the vast majority of the months where I was limping along with 24 GB of RAM happened before I upgraded to an NVMe. My 280-megabyte-per-second Crucial SSD that could only manage a few tens of thousands of I/O operations per second was plenty fast enough for me to never notice when I was down to just a couple of gigabytes of memory available for caching.

In the old days before solid-state drives, this would never have worked out. In the days when I only had eight gigabytes of RAM, my workstation would have felt like it was struggling if I only had a gigabyte or two of free RAM available for caching. If I didn’t have half my RAM available for cache, I would have been shopping for a memory upgrade!

The future we are living in is fantastic.

Your mileage may vary!

I was getting by just fine with 24 gigabytes, and I bet I could just barely squeak by with just 16 gigabytes of memory, but I wouldn’t want to bother trying. I definitely wouldn’t want to give up dual-channel RAM, but if it were possible to buy 12-gigabyte DIMMs, I might have enjoyed having a dual-channel 24-gigabyte setup!

I’m using a handful of gigabytes of memory for Firefox, Thunderbird, Discord, and various other programs. The stuff that is normally running eats up around 9 gigabytes of memory.

The heaviest things I run are Davinci Resolve and games, but never at the same time. I don’t have enough GPU memory for that.


In the old days, I would have at least one or two virtual machines running on my workstation. Today, I have a separate server in my office handling that job.

It used to be handy having my virtual machines on my laptop in the days when my only workstation was my laptop. It was awesome having everything with me at home, at the office, or on the road.

I get more value today having those virtual machines on a dedicated box. I don’t want Home Assistant or Octoprint to reboot just because my Nvidia driver goobers things up forcing me to reboot my desktop!

Besides which, it isn’t 2008 anymore. I don’t have to hope a coffee shop has WiFi. I don’t have to wiggle through an ssh tunnel to get to my data at home or in the office. I can share my phone’s 500-megabit Internet connection and connect to my machines around the world using Tailscale, and they’ll work just like they do when I’m at home.

You might need more memory for those tasks that I don’t have!

If you’re running a mess of virtual machines on your workstation, then you probably already know how much memory you need for those to comfortably fit. If the VM disk images live on an SSD or NVMe, maybe those machines don’t need as much RAM allocated as you think they do.

Those virtual machines are still computers, even if they are sharing processors and disks. Just like my desktop PC, our virtual machines don’t need to rely on cache memory nearly as heavily as they did before we had NVMe drives. The old rules of thumb from the days of slow mechanical disks just don’t apply anymore.

If you’re running make -j 32 on your 16-core Ryzen 5950X, you know you might need a lot of memory just to support all those compiler tasks, but it almost definitely isn’t a big deal if your whole source tree doesn’t stay cached all day. Your NVMe can touch hundreds of thousands of files every second without breaking a sweat!

Is swapping to an NVMe fast?!

I spent a week unscientifically messing with various swap and dirty page settings. I figured that Apple must be leaning on fast NVMe swap and paging to make their 8 GB M1 MacBook Air a usable machine. If they can do it, maybe I can force Linux to dip deeper into swap.

I was able to get about 5 or 6 GB onto my swap partition. When I did, things usually acted just fine. I couldn’t even tell you that my machine was swapping.

Every once in a while, though, things would get really goofy. The whole machine would just grind to a halt without much warning. I never timed it, but if I walked away, it would usually have worked itself out by the time I got back.

There’s probably somewhere between my current settings and the problematic settings that would work alright. Maybe the defaults would push me a few gigabytes into swap if I disabled half my RAM.

This was fun to experiment with a bit, but not worth spending a real amount of time working on.


It is better to err on the side of caution. If you need to round your memory requirements to the nearest pair of sticks of RAM, you should definitely round up instead of down. If you’re like me, and you think you can get by with 24 GB of RAM, then you had better buy 32 GB!

For decades I always made the same choice. If you asked me to choose between more memory or faster memory, I would always choose more memory. It wasn’t always a problem you could spend your way out of. Sometimes DIMMs with double the capacity were only available in slower speeds. Sometimes your chipset only supports faster RAM speeds with two DIMMs and not four.

My FX-8350 build in 2013 had 32 GB of RAM. My Ryzen 1600 build from 2017 has 32 GB of RAM. If I upgrade to a Ryzen 7800X, it will also have 32 GB of RAM. Before my FX-8350, every major computer upgrade I have gone through has at least doubled my RAM. This feels weird, but it is also amazing and awesome!

Six Months of lvmcache on My Desktop


I admit it. It hasn’t quite been six full months since I put a fresh NVMe into my desktop machine and turned on lvmcache. I am nearly three weeks short of that date as I am writing this sentence, but it will probably be another week before I finish this blog, and if I wait any longer I might miss the target by a few months!

I believe I only have good news to report. I’ve torn down, rebuilt, or reconfigured the cache at least three times: once when I installed the NVMe drive, once when I split my slow storage volume into two pieces, and again when I replaced the ancient 4 TB drive with a 12 TB drive.
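Those tear-downs are less dramatic than they sound, because lvmcache can be detached and reattached without touching the origin volume. Here is a sketch using my volume group’s name; the cache_slow cache volume name is my invention for illustration:

```shell
# Flush dirty blocks and detach the cache, leaving zaphodvg/slow intact
lvconvert --splitcache zaphodvg/slow
# Reattach, using an LV on the fast drive as the cache volume
lvconvert --type cache --cachevol cache_slow zaphodvg/slow
```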

Here’s the tl;dr!

The cache is fantastic. It works well enough to cache all the games that I play, and they load just as fast as they would if they were installed directly on the NVMe.

I am no longer the least bit concerned about wearing out my flash storage. It sure looks like I won’t run out of write endurance until 10 years after Samsung’s 5-year warranty expires.

I split my slow storage into two separately cached volumes

I did some math a few weeks after setting up my lvmcache. My lvmcache partition on the NVMe is 300 GB, and I process around 200 GB of video files each month. That much is just fine.

Quite a few of my Steam games are over 100 GB in size.

Testing says that the video files I am working on do indeed wind up nearly 100% cached. If we oversimplify the way lvmcache works, and we assume that the cache will be smart enough to always evict the older video files that I won’t be working on in the near future, this only leaves me enough room in cache for a single game.

LVM [2.03.11(2)] cache report of given device /dev/mapper/zaphodvg-slow
- Cache Usage: 99.9% - Metadata Usage: 6.6%
- Read Hit Rate: 66.2% - Write Hit Rate: 56.7%
- Demotions/Promotions/Dirty: 27129/27165/0
- Feature arguments in use: metadata2 writeback no_discard_passdown
- Core arguments in use : migration_threshold 8192 smq 0
  - Cache Policy: stochastic multiqueue (smq)
- Cache Metadata Mode: rw
- MetaData Operation Health: ok

LVM [2.03.11(2)] cache report of given device /dev/mapper/zaphodvg-churn
- Cache Usage: 85.2% - Metadata Usage: 24.4%
- Read Hit Rate: 50.1% - Write Hit Rate: 3.1%
- Demotions/Promotions/Dirty: 0/183205/0
- Feature arguments in use: metadata2 writethrough no_discard_passdown
- Core arguments in use : migration_threshold 8192 smq 0
  - Cache Policy: stochastic multiqueue (smq)
- Cache Metadata Mode: rw
- MetaData Operation Health: ok

The math says I should have used 300 GB for my operating system and 700 GB for the cache. Resizing the encrypted root filesystem and juggling everything around felt like too much effort, so I just set up a separate cache on my old Crucial 480 GB SATA SSD.
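If we keep the oversimplified eviction model from above, the sizing arithmetic works out like this. This is just a quick sketch; the numbers come from the paragraphs above, and the assumption that the video working set stays fully cached is the same oversimplification.

```python
# Back-of-the-envelope cache sizing from the paragraphs above.
# Assumes the monthly video working set stays fully cached and
# games compete for whatever cache space is left over.

cache_gb = 300               # current lvmcache partition on the 1 TB NVMe
video_working_set_gb = 200   # video files processed in a typical month
big_game_gb = 100            # quite a few Steam games are this size

leftover_gb = cache_gb - video_working_set_gb
print(leftover_gb)                     # 100 GB left over for games
print(leftover_gb // big_game_gb)      # room for only 1 large game

# The roomier 700 GB cache suggested above would leave space for several
# large games alongside the video working set.
roomier_cache_gb = 700
print((roomier_cache_gb - video_working_set_gb) // big_game_gb)  # 5
```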

I call my cached volumes slow and churn

I probably need better names. The slow volume’s name has been grandfathered in, but the churn volume’s name is rather appropriate.

One of those volumes is where I churn through data. I dump 100 GB of video onto that volume at a time, work on it for a few weeks, then I dump another 100 GB of video. This just keeps happening every few weeks. This sounds a bit like churning, doesn’t it?!

The churn volume is an 8 TB slice of my new 12 TB hard drive, and it is cached by the old 480 GB SSD. That SSD is plenty fast enough to handle the 50 and 100 megabit per second video files our Sony ZV-1 and Sony A7S3 cameras record.

The slow volume got its name because it just isn’t fast like the NVMe. This is where my Steam library lives. I have installed just over 2.3 TB of games that are being cached by a 300 GB partition on my 1 TB NVMe.

Is my mid-range NVMe going to survive being a cache?!

I was going to tell you that it depends on who you ask, but both the conservative and the pessimistic answers are positive!

My 1 TB Samsung 980 has a 5-year warranty with a guarantee of 320 TB of written data. I am right around 10 TB of writes after six months. That means Samsung thinks I will make it 10 years past their warranty period.

Percentage Used:                    1%
Data Units Read:                    22,669,873 [11.6 TB]
Data Units Written:                 19,392,238 [9.92 TB]
Host Read Commands:                 188,764,027
Host Write Commands:                308,581,800

The data in the SMART report says I have only used 1% of my writes. If that’s correct, then this NVMe will outlive me.

I assume that my writes have slowed considerably. I had to drop the lvmcache every time I resized my slow and churn volumes. That means 1 TB of those writes had to happen just to refill the cache.

I’m also no longer passing 300 GB of video files through the NVMe’s cache partition every month. My old Crucial SSD is bearing that weight now.

That little old SSD is doing a good job. It spent eight years as the primary storage device in my desktop computer, and SMART says it has 33% of its life remaining. The data sheet says the Crucial SSD is rated for 72 TB of writes, so it will probably make it through the next couple of years!
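The endurance math for the NVMe can be sketched out like this. The TBW figure and the observed writes come from the paragraphs above; assuming the write rate stays steady is my own simplification.

```python
# Rough write-endurance projection using the numbers above.
# Assumes the write rate observed over the first six months stays steady.

tbw_rating_tb = 320   # warrantied writes for the 1 TB Samsung 980
written_tb = 10       # writes observed after six months
months_elapsed = 6
warranty_years = 5

tb_per_year = written_tb * 12 / months_elapsed
years_to_exhaust = tbw_rating_tb / tb_per_year

print(tb_per_year)                        # 20.0 TB written per year
print(years_to_exhaust)                   # 16.0 years to hit the rating
print(years_to_exhaust - warranty_years)  # 11.0 years past the warranty
```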

The conundrum of two caches

Is it better for me to have two separate caches? Would I be better off with one 800 GB cache instead of a 300 GB cache and a 480 GB cache? It is complicated, and I just can’t make up my mind about how I feel about this.

I do know for certain that I would much rather have both of these caches on my NVMe. If they both lived on the NVMe, I would simply adjust the proportions between them.

On one hand, dealing with a single storage volume can be much more convenient. When I installed my 12 TB hard drive, I had to decide how much space I needed for my Steam volume and how much space I would need for video files.

If I made the wrong choices, I will have to shrink one volume and extend the other in a year or two. I will have to disable and recreate both lvmcache caches to make that happen.

On the other hand, having two different caches handling two different kinds of data is a much more effective use of cache space. My Steam games that I play regularly tend to just stay put in the cache, and it doesn’t matter how long the older videos stay in their separate cache, because they won’t be pushing games out!

If I had one unified cache, I bet it would take a month or more for old videos to get demoted. It wouldn’t surprise me if that meant I’d have 300 GB of unnecessary video that I’ve already finished editing clogging up my cache at any given time.

One could argue that I could have sidestepped that problem by buying a 2 TB NVMe and using a bigger cache, but that doesn’t eliminate the issue. It just makes it a lot smaller, right? Besides, the goal was to save money by buying less flash storage!

I’m not running Linux! Is there some way I can do this on Windows?!

Yes. Maybe. Most likely.

My friend Brian Moses has been watching me talk about lvmcache for ages, and he’s been watching me post screenshots full of cache data for just as long on our Discord server. When he built his new gaming PC, he did some research and wound up buying a copy of Primocache.

I don’t think he’s run much in the way of benchmarks, and if he has, he hasn’t posted the results of those tests. I asked Google about Primocache for gaming, and the first hit is Leonard Putra’s video showing side-by-side footage of a few games loading with and without Primocache.

Primocache seems to be doing the job for Leonard. Three out of four games had a 98% cache hit rate. Just Cause 4 had a slightly lower hit rate, and it didn’t load much faster.

Some games just don’t benefit from faster disks. Most of the games I tested load just as quickly from my SATA SSD as they do from my much faster NVMe.

I have no first-hand experience with Primocache, but it certainly looks like it is worth checking out.


This sort of caching is only a Band-Aid. In five years we will likely have more NVMe storage than we know what to do with.

In the meantime, I am excited to have lvmcache available. I only have a 1 terabyte NVMe, but I have 2.4 terabytes of games installed. How awesome is that?!

Are you thinking about using a solid-state disk cache in front of a slow disk on your desktop or workstation? Are you already caching your workstation with lvmcache or something similar? How is it working out for you? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

So Many Tailscale Exit Nodes!

| Comments

I don’t know how I managed to notice this, because I almost never open the Google Play store on my phone, but I did open it a few nights ago, and there was a Tailscale update waiting. I clicked the update button, and I think I might have had to open Tailscale to fire the VPN connection back up.

That’s when I noticed a menu option to enable using my phone as an exit node. What?! My phone is set to install Tailscale beta releases. This says it is a release candidate, so I guess this feature has been hiding on my phone for a little while already.

Of course I had to try it out. It works just fine. This did make me realize that I have yet to set up any exit nodes on my Tailnet, so it must be time to put exit nodes on all the things.

I set up an exit node on one of my virtual servers in the house, my Android phone, and on my Raspberry Pi server at Brian Moses’s house.

Then I got an email telling me that I paid $4.26 for the month for my Digital Ocean droplet that runs the Nginx server for several of our blogs. Why didn’t I think to enable the droplet as an exit node?! It is an exit node now.

What is an exit node? Why would you need one?

An exit node is how you get yourself some of the functionality of something like NordVPN or Private Internet Access for free. Once a machine is configured to be an exit node, any other machine on your Tailnet can force all their Internet traffic through that node.

Exit Nodes Everywhere!

What if you’re on your laptop at Starbucks and want to make sure the barista who owns the WiFi can’t snoop on your traffic? What if the network in your hotel is blocking access to YouTube? What if you’re in Ireland and want to watch shows that are only on American Netflix?

You just click on your Tailscale icon, choose the exit node option, and choose which exit node you want to route this computer’s Internet traffic through. All your traffic will flow through an encrypted WireGuard connection from your laptop in Ireland to your other computer in Plano, TX, and from there it will travel the unencrypted Internet to Netflix.

Tailscale does the right thing again

It wasn’t until the next morning that I worried I had committed an offense! It seemed sensible to turn on at least one exit node at every physical location where I have a Tailscale node, and one of those nodes is my Seafile server at Brian Moses’s house.

I remembered that I am sharing the Seafile Pi with Jeremy Cook and my wife. Neither of these are nefarious characters that I would expect to abuse Brian’s Internet connection, but I certainly hadn’t thought about this, and I most definitely didn’t want to abuse my free colocation facility!

I didn’t need to worry. Tailscale does the right thing. If you activate an exit node after you’ve already shared the node, they won’t have access to the exit node. Not only that, but you can’t give your friends access to the exit node after the fact without their knowledge.

Tailscale Sharing Dialog

You have to send them a new share invite with the exit node enabled. I verified this by having Brian check to see if my Seafile server showed up in his list of available exit nodes.


Tailscale exit nodes are neat. Sometimes you need Netflix to think you’re in a different country. Sometimes you want to hide your traffic from Starbucks or your employer. Sometimes you just need to test that your website is working as expected from another physical location. A Tailscale exit node can cover all these situations and more.

I am not sure when I will need an exit node on an Android phone, but I am excited that I have the option, and I am excited about the idea of repurposing old Android hardware. You can run Octoprint on a phone using Octo4a, someone has set up a backup server on their old cracked Android phone, and now you can throw Tailscale on a cheap old phone from your junk drawer and leave an exit node behind anywhere you want. That’s awesome!

What I Learned Selling the 3D Printed Soda Adapter for Six Months on Tindie

| Comments

Putting the SodaStream Terra adapters up for sale in my Tindie store was an accident. My friend Alex designed the adapter. He got busy with real life and didn’t want to deal with the hassle of selling them on Etsy any longer, so he asked if we would like to take over.

Chris had just started setting up her Etsy store the week before, and she only had one item for sale. The timing seemed good, and he was selling one or two adapters every day. It seemed like a good way to get some initial sales onto her store, so we took on the task of printing and selling 3D printed soda adapters.

There was some lag between Alex running out of stock and us adding the item to Chris’s store, so there were immediately a bunch of orders. Chris paid for labels and shipped those out, then more orders came in, and she paid for labels and shipped those out.

Then Etsy closed her store. Etsy didn’t say why. Etsy didn’t respond to her emails. The store is gone, and Chris never got paid for the inventory she shipped out. It was quite a bummer.

So we dropped the item on my existing Tindie store.

tl;dr I just want a SodaStream Terra adapter!

I am no longer selling the adapters. As has been the case for most of the time the adapters have been in my store, you can download the 3D model of the soda adapter and print your own.

The harder part is acquiring the rubber o-rings. They’re easy to get in quantities of 100 or 200, and they’re easy to get in assortments of hundreds of o-rings. The trouble we’ve had with the assortments is that not all assortments are measured the same way!

I have a whole mess of o-rings left over. You can find the correct o-rings in my Tindie store.

In my opinion, you should skip the 3D printed adapter and get the metal soda adapter from Amazon. I’ve been recommending this in my Tindie store since the product became available. It is a much more robust solution!

Why was I hesitant to sell the adapter?

Alex called me up one day and explained that he wanted to use his 3D printer to make an adapter to connect the old-style SodaStream CO2 canisters to the SodaStream Terra. I told him it was a bad idea, and that it couldn’t be done.

We drove to Target, bought a SodaStream Terra, and got to measuring. We had a basic part designed and printed in a couple of hours. It didn’t work, because SodaStream designed the new fitting to be difficult to connect to. Even though the adapter worked for Alex, I suspect SodaStream’s purposefully convoluted engineering has been trouble for some of our customers.

It took him a few iterations to get the air directed to the correct places, but he did get it working.

Just because it was working doesn’t mean it is a good solution. I’ve been designing 3D printed parts for eight years. I know that 3D prints are weakest along their layer lines. I know PLA and PLA+ aren’t the ideal material to stand up to this sort of pressure.

Seeing it work and hearing that his customers were excited about using their adapters helped ease my concerns here.

There’s also the fact that SodaStream made it difficult to adapt to their connector on purpose. I could write 2,000 words about this part alone!

The failure rate is just too high

I sold 240 adapters over roughly six months. I’ve issued refunds or sent replacements for around 30 orders. Why are they failing so often? Let’s start with the problems that may qualify as user error.

More than a few people have managed to cross-thread the adapter. If you are at all mechanically inclined, it is really obvious that this is about to happen. It is also pretty difficult to do accidentally, but if you do, most people are plenty strong enough to destroy PLA+ threads.

At least a few people seem to have trouble trimming the plunger to the correct length. It is a bummer that the plunger has to be trimmed to account for different bulk CO2 kits.

We suspect that many failures happen because the customer doesn’t screw the adapter on tight enough. If you don’t compress the large o-ring enough to make a good seal, CO2 can escape. Once the CO2 starts escaping, it has a much larger surface area to push up against.

This ties in with another problem. Some folks have most definitely managed to tighten the adapter way too much! The adapter is only 3D printed PLA+, so a person is definitely strong enough to break things. Especially if they put a wrench on it!

There’s no good way to document this for the average customer. Saying, “You have to tighten it enough, but don’t tighten it too much!” just isn’t terribly helpful.

There is also a good chance that some people’s SodaStreams are just built to slightly different tolerances than the machine Alex designed the adapter against. If the machining on Alex’s unit leaned towards the tighter side of the tolerances, then there’s a good chance that folks with machines leaning towards the looser side would have leaks.
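The failure rate from the sales numbers at the top of this section works out like this. A quick sketch; the counts are the rough figures quoted above.

```python
# Failure-rate arithmetic from the rough sales numbers above.

adapters_sold = 240           # total sold over roughly six months
refunds_or_replacements = 30  # orders that needed a refund or replacement

failure_rate = refunds_or_replacements / adapters_sold
print(f"{failure_rate:.1%}")      # 12.5% of orders had a problem
print(f"{1 - failure_rate:.1%}")  # 87.5% worked as delivered
```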

Mitigating the weaknesses of 3D printing

At first, I was 3D printing with the default PrusaSlicer profiles just like Alex. As the failures came in, I started making tweaks.

Alex tried increasing the infill percentage, but that doesn’t make parts all that much stronger. I started by adding as many perimeters as would fit. Then I started slowly increasing the temperature and extrusion multipliers.

Hotter plastic tends to have better layer adhesion, at least up to a point, but it leads to stringier prints. I’d rather the adapters work than attempt to completely avoid stringing.

The slightly higher extrusion multiplier also helps keep gaps out of the layers, which helps with adhesion. I doubt either of these changes make a huge difference, but every inch counts!

The increased extrusion multiplier also has the side effect of making the tolerances a bit tighter. That means the small o-ring is tighter in its slot, and the plunger pushes on it just a little harder. That ought to make it less likely to leak. The correct way to tighten up the tolerances would be editing the model, but that wasn’t really my goal. It was just a happy accident.

Why not try a different material?

This is where we get to the fundamental problem of Pat selling soda adapters.

I don’t drink soda. I am not a soda enthusiast. I am not excited about SodaStream machines. Printing with a very different material would require testing, tweaking, and more testing.

If this were my hobby, I would be diving right in. It isn’t my hobby, though, so I am just not excited about pushing the design into new materials.

Especially now that the all-metal soda machine adapter is available. There’s no beating that solution with plastic!

Since PLA+ works more than 80% of the time, I am confident that nylon would survive more than 99% of the time. Nylon is a pain to print with an FDM 3D printer, though. I sure don’t want to be doing that every day!

Expectation vs. reality

Most customers found my Tindie listing by way of Alex’s video about his adapter design. Alex’s video is pretty positive. He is proud of the work he did, as he should be, and he made those videos before a significant number of people got adapters in their hands.

I’ve tried to keep a balanced product description on Tindie. I don’t hide that there are failures. I made sure to point everyone towards the solid metal bulk CO2 adapter.

I believe most people understood what they were ordering, but I think at least a few people were expecting some sort of unicorn to arrive in their mailbox.

Why continue to sell the plastic adapter when the metal adapter exists?!

I expected that I would be discontinuing the product as soon as the metal adapter was in stock. Surely everyone is using the 3D-printed adapter for bulk setups, right?!

Some people definitely continued to use the 3D-printed adapter for bulk CO2. A few people ordered adapters before messaging me to ask which bulk-CO2 kit they should buy! I told them they shouldn’t use the 3D-printed adapter for that, and that they should order the parts that match the all-metal adapter instead. If they told me that’s what they wanted to do, I refunded their money.

Most of my customers just want to be able to plug the SodaStream canisters from ALDI into their SodaStream Terra. They’re the reason I decided to keep on selling these adapters.


There aren’t any soda adapters in my Tindie store, but things are still chugging along. I am still cutting carbon-fiber ducts on the CNC pretty regularly, and I added a new carbon-fiber backpack hacking item to my store recently. I am pretty excited about those no-sew backpack straps, but I don’t have a good way to put them in front of the people who would want to use them. I don’t even have a good name for them!

I am sorry to see the extra revenue go. The extra money has actually made a real difference for us this year, but the ratio of happy to unhappy customers just isn’t high enough for me to feel comfortable. I am much happier selling over-engineered carbon-fiber doodads than plastic bits that have to stand up to 1,200 PSI!

Do You Need to Buy The Fastest NVMe?

| Comments

Do you want the easy answer? No! You almost definitely do not need the fastest NVMe available. Most of us probably won’t even notice the difference between the slowest and the fastest NVMe drives.

NOTE: The XPG Gammix S70 isn’t literally the absolute fastest NVMe available, but it is definitely very near the top of the list, and it is the super-fast drive I most often see good deals on. Even if you manage to exhaust the S70’s large write cache in one go, it is still quite fast, and it often goes on sale for $100 per terabyte.

I am also absolutely certain that there is someone out here with a very particular use case that would truly benefit from 7 GB per second reads or writes. Most of us don’t even have software that can keep that up for more than a fraction of a second.

I don’t have a budget! I am just going to buy the fastest thing!

If you truly have no budget, then you should absolutely buy what makes you happy. Most of us who say we don’t have a budget are still making choices based on price.

The price-to-performance graph for any piece of hardware in your computer tends to look like a hockey stick. The price of a component usually increases pretty linearly from the low-end to very nearly the high end, but it usually takes a sharp turn about 80% or 90% of the way to the end of the graph. You might have to pay three times as much to go from 80% to the very top of the performance graph.

This may not even be worth writing about because the fastest NVMe drives only cost twice as much as the no-name cheap drives. The cheapest no-name NVMe deal I’ve seen so far was $55 per terabyte, while some of the fastest NVMe drives go on sale for around $110 per terabyte. The middle-of-the-road drives with good warranties from reputable manufacturers are usually between $80 and $90 per terabyte.

I wonder how much cheaper these will be while you are reading this in the future?!

This isn’t anywhere near as big a jump as going from the biggest Ryzen CPU to the smallest Threadripper. Even so, if my words mean you can move $60 from your NVMe to a slightly faster CPU or GPU, then it was worth my time!
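A quick sketch of the price-per-terabyte spread using the sale prices quoted above:

```python
# Price-per-terabyte comparison from the sale prices mentioned above.

cheap_per_tb = 55     # no-name budget NVMe deals
midrange_per_tb = 85  # reputable drives with good warranties ($80 to $90)
fastest_per_tb = 110  # top-end drives like the XPG Gammix S70 on sale

capacity_tb = 1
print((fastest_per_tb - midrange_per_tb) * capacity_tb)  # 25 dollars saved per TB
print((fastest_per_tb - cheap_per_tb) * capacity_tb)     # 55 dollars vs the bargain bin
```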

I can’t max out my lower-end Samsung 980 NVMe

I can run a benchmark or spam some zeroes over the drive with dd and hit several gigabytes per second. I am running LUKS on top of my NVMe, and that layer of AES encryption seems to have me capped out at around 1.6 gigabytes per second. I haven’t found a use case that will register anywhere near that much bandwidth while monitoring with dstat.

I can hit these big numbers if I copy a huge directory of files from the NVMe to itself. This isn’t something most people do all that often.

The Internet says my drive will run out of write cache if I can write 300 gigabytes as fast as the drive can write. I don’t have any external sources that can supply data that fast. In practice, my 40-gigabit Infiniband network tops out at 13 gigabits per second because it is limited by my desktop computer’s PCIe slots. That’s roughly as fast as my encryption can go, but the drives on my server can only sustain about 60% of that under the very best conditions.

The most data I ever dump onto my computer comes from my cameras. It is normal for me to have one or two nearly full 64 GB microSD cards after filming. This could potentially fill up 1/3 of my Samsung NVMe’s write cache, but those cards only read at about 20 megabytes per second.

I edit lots of video, but that never needs more than 100 or 150 megabytes per second of disk bandwidth.

I’ve been monitoring game loading times from my lvmcache. I haven’t found a game that has a bottleneck on disk operations, and I have yet to see a number higher than 180 megabytes per second in dstat while loading a game or new level in a game.

It is nice that my NVMe can manage hundreds of thousands of IOPS. That’s at least ten times more than my old SATA SSD, but my usual tasks don’t go any faster after my upgrade.

dstat doesn’t tell the whole story

Just because I am only seeing 180 megabytes per second in dstat doesn’t mean that I’m not benefiting from the 1.6 gigabytes per second or more that my NVMe is capable of providing. dstat is giving me a snapshot of my throughput at intervals of one second.

During that full second, whatever game was loading had read 180 megabytes from the disk. Odds are that this happened in a little over 100 milliseconds. My old SATA SSD would have also read 180 megabytes during that same second, but it would have taken nearly 500 milliseconds.

This improved latency is nice, and if software is blocked while waiting for that data, then hundreds of milliseconds saved here and there would add up to actual seconds. Something that took 20 seconds to load on the SATA SSD might now take 17 seconds.
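Here is that arithmetic as a quick sketch. The 1.6 GB/s effective NVMe figure comes from the LUKS discussion earlier; the 400 MB/s effective SATA figure is an assumption I picked to match the "nearly 500 milliseconds" above.

```python
# Why dstat's one-second snapshots hide the latency win.
# Effective throughputs are assumptions based on the surrounding text:
# ~1.6 GB/s for the LUKS-encrypted NVMe, ~400 MB/s for the old SATA SSD.

burst_mb = 180     # data a game read during one dstat interval
nvme_mb_s = 1600
sata_mb_s = 400

nvme_ms = burst_mb * 1000 / nvme_mb_s
sata_ms = burst_mb * 1000 / sata_mb_s

print(nvme_ms)            # 112.5 ms on the NVMe
print(sata_ms)            # 450.0 ms on the SATA SSD
print(sata_ms - nvme_ms)  # 337.5 ms saved per burst, if the game blocks on I/O
```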

The game loading times that I have managed to check don’t show such improvements. These games are likely still busy computing something important while waiting for more data.


I am certain that some of you reading this will actually benefit from a top-of-the-line NVMe. There are most definitely workflows out there that can benefit from 7 gigabyte per second reads and writes. I haven’t run into one myself yet, and I’d bet that the majority of you won’t either.

When I upgraded from a mechanical disk that topped out at 200 IOPS to the ancient Intel X25-M with its 5,000 IOPS, it was an absolute game changer. Everything seemed to load from disk instantaneously. Upgrading to the next SATA SSD with 50,000 IOPS didn’t feel much different, and neither does this NVMe with 500,000 IOPS.

We need some pretty serious changes in our hardware, operating systems, and software to really take advantage of the difference between 50,000 and 500,000 IOPS. Until then, we can definitely save a few bucks by skipping the upgrades to the fastest NVMe drives on the market.

How Do You Add Straps to a Bag When You Don’t Know How to Sew?

| Comments

The really short answer is that you can stop by my Tindie store and grab one of my new no-sew Velcro strap doodads. I am not the first to have this idea. I am absolutely positive that you can buy something similar from Amazon. I have seen them before, but no matter what I put into the search box today, I just can’t find them!

This idea has been in the back of my mind for years, but I had no reason to execute on it. That is until I bought a small shoulder bag to fit my new 14” laptop. I take this bag on rides to the park, and I thought it would be handy if I could use one of my drone battery straps to secure a bottle of water to my bag, and I’ve been cutting carbon fiber and iterating ever since.

Why are these carbon fiber strap plates so small?

The first set that I cut wasn’t small. It was a 30 mm square with a pair of slots to slide the Velcro strap through. This is probably the more aesthetically pleasing design.

Attaching things to my bookbag is only one of my problems. The other is that I generate quite a lot of waste carbon fiber, but it is almost entirely lengths of narrow 300 mm and 400 mm strips. I’ve always thought it would be nice to find something useful to put in this space.

It is nice that I can cut useful items out of the remnants of carbon fiber that would eventually wind up in the trash. I can also add a whole mess of these pieces to the end of my regularly scheduled cooling duct job.

Using the wasted carbon fiber plate still isn’t free

In my mind, I’ve been thinking of these Velcro strap holder parts as being completely free. There is some cost involved in turning trash into treasure!

It takes an extra 30 minutes to cut these strap holder pieces out of the edges of the 400mm x 300mm plate. That’s nearly as long as it takes to cut 15 ducts out of the rest of the sheet.

I tend to think that there’s no cost in running the Shapeoko. It isn’t like I stand there and stare at the machine for the entire hour, but I do have to keep an eye on things. I am also wearing out endmills a lot faster—carbon fiber is MEAN to endmills!

Why do these things even work?!

What if you poked a hole in the canvas-like material of your backpack, pushed an M3 screw through the hole, and tightened a string to that bolt? You wouldn’t be able to hang much weight on that string before it would tear its way through the fabric.

We are poking two holes, and we are sandwiching the fabric between two ridiculously sturdy pieces of carbon fiber plate. We are spreading the weight out across a much larger area, and I am assuming that the pressure from the plates being squeezed together is preventing the fabric from fraying.

The carbon fiber really is ridiculously strong. This is exactly how we strap 220-gram LiPo batteries to our 750-gram FPV drones. We can smash into a tree at 100 mph, and the carbon fiber and Velcro strap won’t break. The weakest link here is going to be the material of your bag.

Even if you can sew, this seems like a tricky problem!

I have bags with integrated straps. When they are sewn directly to the bag, the stitching covers a lot of surface area! There are usually at least three inches across the top and bottom of the length of the strap, then there’s some pattern across the strap near the edges. The straps are thick, and so is the bag. This requires a lot more force than sewing a button on a shirt!

I have other bags with a loop of fabric to thread your beefy strap through. Those loops of fabric are always three or more inches long, and the sewing looks just as difficult as sewing the strap directly to the bag.

I can’t sew, but I can cut two tiny holes.

How much weight can this safely hold?!

I don’t know, but I am excited to find out!

I have been bouncing around riding my electric unicycle with my small AmazonBasics DSLR backpack. I sometimes have my 700-gram FPV freestyle drone strapped on, other times I have a 16.9-oz bottle of water or my smaller 280-gram drone.

A couple of pounds surely isn’t a problem. I’ve been a little more concerned about the extra force that will be applied when I ride off a curb or bounce around through a lumpy field. I haven’t seen any damage to the fabric so far. I will be keeping an eye on things, and I sure hope you will as well!

I thought about applying a bit of E6000 glue between the plates. I’ve used it on fabric before with decent results. I want to be able to test new parts, and glue would make it difficult to swap things out!

Why are the main plates asymmetrical?

I am attempting to put less pressure on the Velcro strap. My first test mount was a pair of identical square pieces. I didn’t have any spacers. Once I bolted it down, the Velcro strap was effectively locked in place.

The carbon fiber spacers that I include in the kit should alleviate most of that pressure. However, I do have one plate bolted through the padded section of my laptop bag. Once you cinch the plates together, the padding pushes up against the outside plate with quite a lot of force.

Having the connecting bar run across the edge will allow you to have those bars on opposite edges. I hope this helps alleviate that pressure in these sorts of circumstances!

How do you install the plates?

I am pretty certain this needs to be a video. It is hard to film inside a bookbag, so you will probably see me on video installing a strap on an old T-shirt in the near future.

The idea is that you poke two holes in your fabric. I have been using the reamer tool on the back of my Swiss Army knife for this.

I start by poking the first hole, then assembling the sandwich of plates and spacers using the M3 screw. I don’t tighten the first side down all the way. I want to be able to swivel it around to make sure I can install the plates as straight as possible.

Once I have the plate rotated where it needs to be, I stick the reamer tool right through the second hole in the carbon fiber. I usually can’t poke a big enough hole with the carbon in the way, but this gets me the start of a hole in precisely the correct position. I can move the carbon out of the way, then expand the hole to the correct size.

I do my best to keep the hole in the fabric as small as possible. The less damage you do to the bag, the better it will hold up in the long run. That said, I have accidentally punched some holes much bigger than I wanted. It hasn’t caused a problem so far, but we’ll see how that goes in the future!

Do these have to be carbon fiber?

Definitely not. My test parts were 3D-printed in PLA+, and they would have done the job for me just fine. I’d be a little concerned that the PLA would get soft on the passenger seat of my car if I left it out in the Texas sun in July, but I would also be concerned for the safety of my laptop in that kind of heat.

I picked up a spool of black ABS filament to use for the plastic parts. I have had a 3D-printed ABS part on my dashboard for almost three years. It isn’t as red as it used to be, but it hasn’t deformed in any way. I figure ABS will be a reasonable plastic to use for any plastic accessories.

I need to clean up the OpenSCAD code a bit before I upload the files. I accidentally goobered up some of the parameters. They are cutting at the size I want, but if you try to adjust them, things get wonky!

None of us are planning on crashing our bookbags the same way I crash FPV drones. I bet most of us just want to strap a small tripod or drone to our backpack. A few pieces of PLA will be able to handle that!

I have a plastic accessory!

I didn’t add it to the Tindie listing yet, but I have added a 3D-printed arch on my Pi-KVM case. The arch piece is bolted to one of the carbon fiber backing plates, and it lets me loop in a carabiner so I can hook it to another bag.

I am not sure that I needed a loop like this on my Pi-KVM kit. Sure, I can quickly strap it to my small laptop bag now, but will I ever really need to? Wouldn’t I just throw my laptop in my big laptop bag and stuff the Pi-KVM kit inside with it? There’s plenty of room!

What’s next?

Probably more plastic accessories. The carbon fiber is awesome because it is ridiculously strong and so easy to machine, but it limits me to flat parts. I could make use of the third dimension, but I would need to buy much thicker carbon plates, and the extra work would absolutely obliterate my endmills.

I am planning on printing a TPU cover that can be used on the inside of the bag. My small laptop bag has two M3 lock nuts pointed right at the side of my laptop. I’d much rather my laptop get jostled into a rubbery print than two steel nuts!

What do you think?!

Is this a good idea? Why can’t I figure out what someone would search for if they wanted this sort of product? What other 3D-printed accessories do I need to work on?

Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Bought a Pick and Place: I Have No Idea What I’m Doing!

| Comments

These are some of my favorite blogs to write. I bought a thing, and I have no idea how it works. I don’t get to write many of these, because I rarely jump directly into the deep end of the swimming pool.

When I bought a 3D printer, the only relevant experience I had was goofing around in a pirated copy of 3D Studio Max for MS-DOS sometime around 1994. When I bought my Shapeoko CNC, I was at least a bit more prepared. I model all my 3D-printed parts using OpenSCAD, and I still get to model my 2D parts for the CNC in OpenSCAD. When I bought an electric unicycle, I was more than a little worried that I’d never figure out how to ride it.

I am almost completely unprepared for the LumenPNP. I have never designed a printed circuit board (PCB). I have never soldered tiny surface mount components. The OpenPNP software looks rather terrifying! At least the LumenPNP looks kind of like someone built my Shapeoko XXL out of 3D printer parts, and all three machines speak G-code!

What is a pick and place machine?!

It is a robot! If you’ve seen a 3D printer or a CNC router, then a pick and place will look familiar to you. They’re all CNC machines, and the biggest difference between the three is the tool that they carry. A 3D printer’s tool extrudes plastic, a CNC router’s tool cuts away material, while a pick and place machine’s tool is a vacuum nozzle that picks up and drops off tiny components.

Why would you want to move components around? We have a tiny board with 21 LEDs that we want to manufacture in bulk. We order PCBs from PCBWay. The PCB has the wiring to connect each LED in the correct order, but someone has to install those 21 LEDs in the correct place.

We can pay PCBWay to do this, and if you buy enough units the price is quite reasonable. I can do it by hand, but this is not a good use of my time. I am way too slow.

We decided to buy a robot kit from Opulo to pick the LEDs and place them on the boards for us!

How far along are we?!

The open-source LumenPNP kit arrived a little over a month ago. I had the machine assembled and moving around on its own in less than a week. OpenPNP has control of the machine. The cameras are probably calibrated. My machine can very precisely locate its home position using the camera.

I have a stack of 2x5 panels of our OoberLights Micro PCBs on hand. I have 1,000 LEDs. I just have to put some time into following Opulo’s documentation.

How difficult was the LumenPNP kit to assemble?

Putting the LumenPNP kit together was mostly painless, and the documentation was nearly perfect. I’ve never assembled a genuine Prusa MK3, but I have put together older unofficial Prusa i3 printers. I have also assembled a Shapeoko XXL kit.

The LumenPNP fits somewhere in the middle. 3D printers can be a pain to assemble because they are so compact. The LumenPNP is so spread out that you rarely have to do any contortions to get your hands into position to work on something.

It shares that quality with the giant Shapeoko XXL, but the LumenPNP has the advantage that it weighs almost nothing in comparison. I can easily lift the LumenPNP above my head or carry it around the house with one hand. When I move my Shapeoko, I make sure I have help!

It took me the better part of a week to finish the assembly, but I wasn’t working hard. I’d sit down, usually work my way through one entire section of the build guide, then walk away. If I got a bit frustrated, or if my back bothered me from bending over to reach things, I would just take a break.

What’s next for us?

Do you mean aside from more procrastination?!

I am certain that there’s more calibration to work on. My backlash compensation suggests that I didn’t get my pixels per millimeter correct in my camera calibration. If I’m interpreting things correctly, my backlash might still be off by 0.1 mm or so.

I don’t think that’s great, but our Neopixel LEDs are about 2 mm across. I expect these chonky boys to get pulled into place when we reflow the solder if they’re only off by 0.1 mm.

Then I have to learn how to make OpenPNP work with our design files. I think I’m expecting that to be more work than assembling the machine, but I am hoping to be surprised!

Why did we buy a pick and place?

We could have had PCBWay do 100% of the work. Why on Earth would we do this part ourselves! This could be a 2,500 word blog of its own, so I will attempt to keep this to a few paragraphs.

The trouble is in the quantities. For me to be able to sell you an OoberLights Micro module for around $10 or $12, we really need the cost per unit to be down under $3. That means we’d have to buy 1,000 units in a single batch. Then the Butter, What?! team has to hope that enough folks will be interested in buying OoberLights modules.

I’d rather have a $1,200 LumenPNP build, $200 in worthless PCBs, and reels with 21,000 unused Neopixels than have a box with $2,400 in unsold OoberLights Micros. I could get most of my money back from the LEDs, and I bet the pick and place would be easy to sell.

What if we want to make a slightly larger board with one more ring of LEDs? If we are relying on PCBWay to do all the work for us, then we have to buy about 1,000 units of those boards. That is a financial risk.

Having the LumenPNP on hand will allow us to be much more agile. We already have LEDs on hand. We can just spend another $100 on a stack of larger PCBs, pick and place a few panels, and see how it goes. No need to spend $2,000 to $3,000 to see if people are interested in buying the larger panels.

This is awesome.

Final thoughts

We still have a long way to go, but I am super excited! We have a new robot. We have all the things we need to feed it materials to create exciting parts for us. All we have to do is convince me to keep plugging away at it!

I am excited that we didn’t have to spend $2,500 and hope that an order of 1,000 OoberLights Micro boards would arrive in working condition, and I am even more excited that we don’t have to hope and pray that we sell 1,000 boards. I am excited to see how this all works out for us!

I am Using Tailscale SSH, and Maybe You Should Too!

| Comments

I don’t know if I qualify as an early adopter of Tailscale. My first blog post about it was in September of 2020, and that post said I had Tailscale installed on a few machines for months before I really started using and relying on it. I do know that I very much rely on Tailscale, and I don’t know what I would do without it.

I woke up at around 9:00 p.m. That was really late! I saw dozens and dozens of tweets about Tailscale SSH in my Twitter timeline, and it was the number-two post on Hacker News. I read a little about what it did, and my very first thought was, “Uh-oh! This is scary!” Then I thought it was neat. Then I thought it was scary again.

I made a latte. I played some video games. Then I started setting up Tailscale SSH.

What is Tailscale?

Tailscale is a mesh VPN. You install the Tailscale client on all of your computers, servers, and devices. Then each device will attempt to make a Wireguard VPN connection directly to every other device on your network. It is like having your own virtual LAN, but it can be spread out over the entire planet.

It is also stupid easy to set up. If you aren’t dilly-dallying, I bet it takes less than two minutes to get a new machine onto your Tailnet.
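That two-minute claim is barely an exaggeration. On most Linux machines, getting onto a Tailnet amounts to something like this (the script URL is Tailscale’s official installer, and the exact steps vary a little by platform):

```shell
# Install the Tailscale client, then bring this node onto your Tailnet.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# A browser window opens to authenticate the machine. That's it!
```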

One of my favorite features of Tailscale is their machine-sharing option. I depend on this pretty heavily. It is how I collaborate on blogs with Brian Moses, and it is how I share dozens of gigabytes of Create/Invent Podcast video files with Jeremy Cook.

Why am I brave enough to use Tailscale SSH?

I was worried about this idea at first. If you could hijack my SSH keys or agents, you could wreak havoc on the Butter, What?! media empire! You could post nonsense on all of my sites. You could do horrible things to me.

Do I really want to trust a third party with this piece of my security?

The truth is that I already do. I have the Tailscale client installed on a couple dozen machines. Those clients now have SSH servers built right in, but they don’t need SSH servers to be a security risk. Tailscale has had the ability to run arbitrary code on my boxes for years.

I was aware of that when I first decided to use Tailscale. I am downloading executable binary images from a third party. I don’t know what’s inside. I decided that I was going to trust the Tailscale company, and I still do.

As long as there’s an SSH server hiding inside every one of my Tailscale clients, why shouldn’t I use it?!

What advantages will I see over distributing SSH keys manually?

If you’re using Tailscale but still relying on password authentication for SSH, I think you should just turn on Tailscale SSH. I don’t feel terribly safe even having password authentication turned on. You’d be better off ditching the passwords and letting Tailscale handle your SSH authentication for you.

First, I am bad at rotating SSH keys. I use a separate SSH private key on each of my workstations. The key on my new 2-in-1 laptop is as fresh as the laptop. The key on my old laptop is from 2021. The key on my desktop is scary old. I haven’t distributed the new laptop’s public key to all my servers yet. I figured I’d sit down and generate a new key for my desktop and kill two birds with one stone.

That is the second problem. Distributing new keys is a pain in the neck. I have one machine that runs Windows 11 now. Some are physical machines. Some aren’t powered on all the time. Some are virtual machines here at home, while others are virtual servers on the Internet.

I usually build a new authorized_keys file with all my current keys, then run a loop that uses scp to drop it in place on each device. If something goes wrong on a device I can’t easily access, it can be a real pain in the tuchus.
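My loop is nothing fancy. Here’s a sketch of the idea: collect every current public key into one authorized_keys file, then push it to each machine. The key material and hostnames below are stand-ins, and I’ve left the scp as an echoed dry run so you can see what it would do before letting it loose:

```shell
#!/bin/sh
set -e

# Stand-in key material so this sketch is self-contained. In real life,
# these are the .pub files collected from each of my workstations.
mkdir -p keys
echo "ssh-ed25519 AAAA...desktop" > keys/desktop.pub
echo "ssh-ed25519 AAAA...laptop"  > keys/laptop.pub

# Assemble one fresh authorized_keys from every current public key.
cat keys/*.pub > authorized_keys

# Push it to each machine. Drop the 'echo' to actually run the copy.
for host in desktop laptop nas web-server; do
    echo scp authorized_keys "$host":.ssh/authorized_keys
done
```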

Nearly every single server I have is running Tailscale, and each Tailscale client has its own private key for Wireguard. Why do I need to maintain and distribute more keys?

Tailscale SSH authorizes machines instead of users

Tailscale doesn’t know that 18 people have shell accounts on a server. Tailscale just knows that this server and my desktop machine are both on my Tailnet. If you enable Tailscale SSH on both devices, then any of those 18 people would be able to SSH to my desktop computer!

NOTE: I should verify this. I immediately set the action value to accept. If I had used the check action instead, Tailscale would make me re-authenticate in a browser before allowing the connection. I imagine this would save me, but I SSH so often that it would be quite annoying!
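For reference, this is roughly what that stricter rule would look like, as I understand Tailscale’s docs. Treat it as a sketch rather than something I have actually tested; the tags match my setup, and checkPeriod controls how often you get sent back to the browser:

```json
"ssh": [
    {
        // "check" forces a fresh browser authentication before the
        // session is allowed; "checkPeriod" controls how often.
        "action":      "check",
        "checkPeriod": "12h",
        "src":         ["tag:workstation"],
        "dst":         ["tag:server", "tag:workstation"],
        "users":       ["autogroup:nonroot", "root"],
    },
],
```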

I have split my Tailnet into two different tags to remedy this situation. My desktop and both laptops are now tagged with workstation, and almost everything else is tagged with server. The machines with the workstation tag are computers that no one besides me has credentials for.

"ssh": [
    {
        "action": "accept",
        "src":    ["tag:workstation"],
        "dst":    ["tag:server", "tag:workstation"],
        "users":  ["autogroup:nonroot", "root"],
    },
],

I set the SSH ACL to allow connections from any workstation to any server or workstation. It seems to work as expected. Now all I need to do is start enabling Tailscale SSH on more devices!
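Enabling it on each device is just a flag on the client. This is roughly what I run; the hostname at the end is a stand-in for whatever your machines are called:

```shell
# Bring the node up with Tailscale's built-in SSH server enabled.
sudo tailscale up --ssh

# On a server, also advertise the tag that my ACL matches on.
sudo tailscale up --ssh --advertise-tags=tag:server

# Then, from a workstation, plain old ssh just works.
ssh myserver
```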

From a practical standpoint, is this much different than managing SSH keys yourself?

I don’t know about you, but when I store an SSH private key on a server with multiple users, it automatically feels like that key is compromised.

What if someone else has root? They can steal the key. What if I accidentally goofed up the permissions? What if someone has access to the backups? What if someone manages to connect to the ssh-agent?

These are the sorts of keys I would have out in the world for something like a backup job, or to run jobs that publish sites to production. I try to give them as little access as possible.

Tailscale will let me continue to do that, but in a centralized location. I can set ACLs that say ScaryHost1 can only connect to ScaryHost2 as one particular user, and I won’t even have to log in to either host to grant or revoke that access.
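Here is a sketch of the kind of rule I mean. The tag names and the backup user are hypothetical; I’d tag the real ScaryHost machines in the admin console first, and I haven’t actually deployed this exact rule:

```json
"ssh": [
    {
        // Only the tagged backup host may connect, and only as one user.
        "action": "accept",
        "src":    ["tag:scaryhost1"],
        "dst":    ["tag:scaryhost2"],
        "users":  ["backup"],
    },
],
```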

Centralized auth management will be awesome!

I am aware that there are systems built specifically to address this issue. I’ve never had much interest in working to implement them, because my SSH keys were already created, secured, and distributed. I never have to start from scratch. I usually just have to sneak one new key into a handful of places every few years.

I am already using Tailscale. Tailscale is already running on 95% of my machines. It doesn’t take much to install Tailscale on the rest.

At that point I am inches away from never having to manage SSH keys again!

What about shared nodes?

Shared nodes are the bee’s knees. Truth be told, I am more than a little uncomfortable with giving Tailscale so much power over my network. In my opinion, machine sharing is the biggest reason to use Tailscale’s service instead of hosting your own Headscale network.

I can click on a machine in my admin console, and it will give me a URL to send to a friend. They click on the URL, sign up for Tailscale, and they’ll be pinging my shared server in 5 minutes. If they already use Tailscale, they’ll be pinging my machine in seconds.

The documentation says that shared nodes don’t support Tailscale SSH, but it doesn’t say what happens if it is enabled. I had some guesses, but I didn’t have an easy way to try it for myself this week, so I figured I should ask. If you use Tailscale SSH on a shared node, anyone who you have shared the machine with will just fail to authenticate.

I am pretty sure Brian pushes to a Git repo on our Butter, What?! staging server.

Should I just continue to use SSH keys for this dev server? Should I turn on Tailscale SSH for my convenience, then make Brian use a different port? Either will work fine, but it looks like I won’t be completely eliminating my SSH keys anytime soon!

Tailscale SSH fits well with how I’ve been using Tailscale

In days gone by, I had a handful of services exposed on the Ethernet and Wi-Fi interfaces on my desktop and laptop. I don’t own many devices that aren’t now running Tailscale, so I’ve been locking more and more things down.

My desktop and laptop don’t have open ports on their physical interfaces. If I take my laptop to a coffee shop or our local makerspace, there’s nothing open for anyone to try to poke at. Except for SSH. I am always brave enough to have passphrase-only SSH open to the world.

I am only just now realizing that I can lock down my NAS, my Octoprint server, and my CNCjs server in the same way. I don’t connect to those from any devices that aren’t already part of my Tailnet!

Tailscale SSH feels like another step on this journey.

I am finally enabling MagicDNS!

This is why I assume I am an early adopter of Tailscale. I was relying on Tailscale for so long that I had things set up before MagicDNS arrived. I have a gigantic hosts file on each of my workstations instead!

I need at least two or three servers to be listed in those hosts files anyway. My virtual machine host, my NAS, and my desktop are connected with 40-gigabit Infiniband. They have to bypass Tailscale to reach anything more than a small fraction of their maximum speed.
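The hosts file trick is simple enough: point the shortname at the Infiniband interface’s address instead of the Tailscale one. The names and addresses here are made up for illustration:

```
# /etc/hosts on my workstation (example addresses)
#   nas     bare-metal Infiniband link, bypasses Tailscale
#   vmhost  same story   nas   vmhost
```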

I am relying less on my NAS today, and I am working towards eliminating the need for this Infiniband link. Eliminating that hosts file and upgrading to MagicDNS will take me a step farther down that road.


I am looking forward to seeing where Tailscale SSH goes from here.

What do you think? Am I crazy for relying on Tailscale SSH? Or is this one of the best decisions I have ever made? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!