So Many Tailscale Exit Nodes!

I don’t know how I managed to notice this, because I almost never open the Google Play store on my phone, but I did open it a few nights ago, and there was a Tailscale update waiting. I clicked the update button, and I think I might have had to open Tailscale to fire the VPN connection back up.

That’s when I noticed a menu option to enable using my phone as an exit node. What?! My phone is set to install Tailscale beta releases. This says it is a release candidate, so I guess this feature has been hiding on my phone for a little while already.

Of course I had to try it out. It works just fine. This did make me realize that I have yet to set up any exit nodes on my Tailnet, so it must be time to put exit nodes on all the things.

I set up an exit node on one of my virtual servers in the house, my Android phone, and on my Raspberry Pi server at Brian Moses’s house.

Then I got an email telling me that I paid $4.26 for the month for my Digital Ocean droplet that runs the Nginx server for several of our blogs. Why didn’t I think to enable the droplet as an exit node?! It is an exit node now.

What is an exit node? Why would you need one?

An exit node is how you get yourself some of the functionality of something like NordVPN or Private Internet Access for free. Once a machine is configured to be an exit node, any other machine on your Tailnet can force all their Internet traffic through that node.
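On a Linux machine, offering a node as an exit node boils down to a couple of commands. This is a minimal sketch assuming the Tailscale CLI is installed; the node still has to be approved as an exit node in the Tailscale admin console afterwards.

```shell
# Allow the kernel to forward packets on behalf of other Tailnet devices.
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Offer this machine to the rest of the Tailnet as an exit node.
sudo tailscale up --advertise-exit-node
```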

Exit Nodes Everywhere!

What if you’re on your laptop at Starbucks and want to make sure the barista who owns the WiFi can’t snoop on your traffic? What if the network in your hotel is blocking access to YouTube? What if you’re in Ireland and want to watch shows that are only on American Netflix?

You just click on your Tailscale icon, choose the exit node option, and choose which exit node you want to route this computer’s Internet traffic through. All your traffic will flow through an encrypted Wireguard connection from your laptop in Ireland to your other computer in Plano, TX, and from there it will travel the unencrypted Internet to Netflix.
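On Linux, the equivalent of that menu click is a single flag. The 100.x.x.x address below is a hypothetical Tailscale IP; substitute one of your own exit nodes.

```shell
# Route all of this machine's Internet traffic through the chosen exit node.
sudo tailscale up --exit-node=100.64.0.42

# Pass an empty value to stop using the exit node.
sudo tailscale up --exit-node=
```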

Tailscale does the right thing again

It wasn’t until the next morning that I worried I had committed an offense! It seemed sensible to turn on at least one exit node at every physical location where I have a Tailscale node, and one of those nodes is my Seafile server at Brian Moses’s house.

I remembered that I am sharing the Seafile Pi with Jeremy Cook and my wife. Neither of these are nefarious characters that I would expect to abuse Brian’s Internet connection, but I certainly hadn’t thought about this, and I most definitely didn’t want to abuse my free colocation facility!

I didn’t need to worry. Tailscale does the right thing. If you activate an exit node after you’ve already shared the node, the people you shared it with won’t have access to the exit node. Not only that, but you can’t give your friends access to the exit node after the fact without their knowledge.

Tailscale Sharing Dialog

You have to send them a new share invite with the exit node enabled. I verified this by having Brian check to see if my Seafile server showed up in his list of available exit nodes.

Conclusion

Tailscale exit nodes are neat. Sometimes you need Netflix to think you’re in a different country. Sometimes you want to hide your traffic from Starbucks or your employer. Sometimes you just need to test that your website is working as expected from another physical location. A Tailscale exit node can cover all these situations and more.

I am not sure when I will need an exit node on an Android phone, but I am excited that I have the option, and I am excited about the idea of repurposing old Android hardware. You can run Octoprint on a phone using Octo4a, someone has set up a backup server on their old cracked Android phone, and now you can throw Tailscale on a cheap old phone from your junk drawer and leave an exit node behind anywhere you want. That’s awesome!

What I Learned Selling the 3D Printed Soda Adapter for Six Months on Tindie

Putting the SodaStream Terra adapters up for sale in my Tindie store was an accident. My friend Alex designed the adapter. He got busy with real life and didn’t want to deal with the hassle of selling them on Etsy any longer, so he asked if we would like to take over.

Chris had just started setting up her Etsy store the week before, and she only had one item for sale. The timing seemed good, and he was selling one or two adapters every day. It seemed like a good way to get some initial sales onto her store, so we took on the task of printing and selling 3D printed soda adapters.

There was some lag between Alex running out of stock and us adding the item to Chris’s store, so there were immediately a bunch of orders. Chris paid for labels and shipped those out, then more orders came in, and she paid for labels and shipped those out.

Then Etsy closed her store. Etsy didn’t say why. Etsy didn’t respond to her emails. The store is gone, and Chris never got paid for the inventory she shipped out. It was quite a bummer.

So we dropped the item on my existing Tindie store.

tl;dr I just want a SodaStream Terra adapter!

I am no longer selling the adapters. As has been the case for most of the time the adapters have been in my store, you can download the 3D model of the soda adapter from printables.com and make your own.

The harder part is acquiring the rubber o-rings. They’re easy to get in quantities of 100 or 200, and they’re easy to get in assortments of hundreds of o-rings. The trouble we’ve had with the assortments is that not all assortments are measured the same way!

I have a whole mess of o-rings left over. You can find the correct o-rings in my Tindie store.

In my opinion, you should skip the 3D printed adapter and get the metal soda adapter from Amazon. I’ve been recommending this in my Tindie store since the product became available. It is a much more robust solution!

Why was I hesitant to sell the adapter?

Alex called me up one day and explained that he wanted to use his 3D printer to make an adapter to connect the old-style SodaStream CO2 canisters to the SodaStream Terra. I told him it was a bad idea, and that it couldn’t be done.

We drove to Target, bought a SodaStream Terra, and got to measuring. We had a basic part designed and printed in a couple of hours. It didn’t work, because SodaStream designed the new fitting to be difficult to connect to. Even though the adapter worked for Alex, I suspect SodaStream’s purposefully convoluted engineering has been trouble for some of our customers.

It took him a few iterations to get the air directed to the correct places, but he did get it working.

Just because it was working doesn’t mean it is a good solution. I’ve been designing 3D printed parts for eight years. I know that 3D prints are weakest along their layer lines. I know PLA and PLA+ aren’t the ideal material to stand up to this sort of pressure.

Seeing it work and hearing that his customers were excited about using their adapters helped ease my concerns here.

There’s also the fact that SodaStream made it difficult to adapt to their connector on purpose. I could write 2,000 words about this part alone!

The failure rate is just too high

I sold 240 adapters over roughly six months. I’ve issued refunds or sent replacements for around 30 orders. Why are they failing so often? Let’s start with the problems that may qualify as user error.

More than a few people have managed to cross-thread the adapter. If you are at all mechanically inclined, it is really obvious that this is about to happen. It is also pretty difficult to do accidentally, but if you do, most people are plenty strong enough to destroy PLA+ threads.

At least a few people seem to have trouble trimming the plunger to the correct length. It is a bummer that the plunger has to be trimmed to account for different bulk CO2 kits.

We suspect that many failures happen because the customer doesn’t screw the adapter on tight enough. If you don’t compress the large o-ring enough to make a good seal, CO2 can escape. Once the CO2 starts escaping, it has a much larger surface area to push up against.

This ties in with another problem. Some folks have most definitely managed to tighten the adapter way too much! The adapter is only 3D printed PLA+, so a person is definitely strong enough to break things. Especially if they put a wrench on it!

There’s no good way to document this for the average customer. Saying, “You have to tighten it enough, but don’t tighten it too much!” just isn’t terribly helpful.

There is also a good chance that some people’s SodaStreams are just built to slightly different tolerances than the machine Alex designed the adapter against. If the machining on Alex’s unit leaned towards the tighter side of the tolerances, then there’s a good chance that folks with machines leaning towards the looser side would have leaks.

Mitigating the weaknesses of 3D printing

At first, I was 3D printing with the default PrusaSlicer profiles just like Alex. As the failures came in, I started making tweaks.

Alex tried increasing the infill percentage, but that doesn’t make parts all that much stronger. I started by adding as many perimeters as would fit. Then I started slowly increasing the temperature and extrusion multipliers.

Hotter plastic tends to have better layer adhesion, at least up to a point, but it leads to stringier prints. I’d rather the adapters work than attempt to completely avoid stringing.

The slightly higher extrusion multiplier also helps keep gaps out of the layers, which helps with adhesion. I doubt either of these changes make a huge difference, but every inch counts!

The increased extrusion multiplier also has the side effect of making the tolerances a bit tighter. That means the small o-ring is tighter in its slot, and the plunger pushes on it just a little harder. That ought to make it less likely to leak. The correct way to tighten up the tolerances would be editing the model, but that wasn’t really my goal. It was just a happy accident.

Why not try a different material?

This is where we get to the fundamental problem of Pat selling soda adapters.

I don’t drink soda. I am not a soda enthusiast. I am not excited about SodaStream machines. Printing with a very different material would require testing, tweaking, and more testing.

If this were my hobby, I would be diving right in. It isn’t my hobby, though, so I am just not excited about pushing the design into new materials.

Especially now that the all-metal soda machine adapter is available. There’s no beating that solution with plastic!

Since PLA+ works more than 80% of the time, I am confident that nylon would survive more than 99% of the time. Nylon is a pain to print with an FDM 3D printer. I sure don’t want to be doing that every day!

Expectation vs. reality

Most customers found my Tindie listing by way of Alex’s video about his adapter design. Alex’s video is pretty positive. He is proud of the work he did, as he should be, and he made those videos before a significant number of people got adapters in their hands.

I’ve tried to keep a balanced product description on Tindie. I don’t hide that there are failures. I made sure to point everyone towards the solid metal bulk CO2 adapter.

I believe most people understood what they were ordering, but I think at least a few people were expecting some sort of unicorn to arrive in their mailbox.

Why continue to sell the plastic adapter when the metal adapter exists?!

I expected that I would be discontinuing the product as soon as the metal adapter was in stock. Surely everyone is using the 3D-printed adapter for bulk setups, right?!

Some people definitely continued to use the 3D-printed adapter for bulk CO2. A few people ordered adapters before messaging me to ask which bulk-CO2 kit they should buy! I told them they shouldn’t use the 3D-printed adapter for bulk setups, and that they should order the parts that match the all-metal adapter. If they told me that’s what they wanted to do, I refunded their money.

Most of my customers just want to be able to plug the SodaStream canisters from ALDI into their SodaStream Terra. They’re the reason I decided to keep on selling these adapters.

Conclusion

There aren’t any soda adapters in my Tindie store, but things are still chugging along. I am still cutting carbon-fiber ducts on the CNC pretty regularly, and I added a new carbon-fiber backpack hacking item to my store recently. I am pretty excited about those no-sew backpack straps, but I don’t have a good way to put them in front of the people who would want to use them. I don’t even have a good name for them!

I am sorry to see the extra revenue go. The extra money has actually made a real difference for us this year, but the ratio of happy to unhappy customers just isn’t high enough for me to feel comfortable. I am much happier selling over-engineered carbon-fiber doodads than plastic bits that have to stand up to 1,200 PSI!

Do You Need to Buy The Fastest NVMe?

Do you want the easy answer? No! You almost definitely do not need the fastest NVMe available. Most of us probably won’t even notice the difference between the slowest and the fastest NVMe drives.

NOTE: The XPG Gammix S70 isn’t literally the absolute fastest NVMe available, but it is definitely very near the top of the list, and it is the super-fast drive I most often see good deals on. Even if you manage to exhaust the S70’s large write cache in one go, it is still quite fast, and it often goes on sale for $100 per terabyte.

I am also absolutely certain that there is someone out there with a very particular use case that would truly benefit from 7 GB per second reads or writes. Most of us don’t even have software that can keep that up for more than a fraction of a second.

I don’t have a budget! I am just going to buy the fastest thing!

If you truly have no budget, then you should absolutely buy what makes you happy. Most of us who say we don’t have a budget are still making choices based on price.

The price-to-performance graph for any piece of hardware in your computer tends to look like a hockey stick. The price of a component usually increases pretty linearly from the low-end to very nearly the high end, but it usually takes a sharp turn about 80% or 90% of the way to the end of the graph. You might have to pay three times as much to go from 80% to the very top of the performance graph.

This may not even be worth writing about because the fastest NVMe drives only cost twice as much as the no-name cheap drives. The cheapest no-name NVMe deal I’ve seen so far was $55 per terabyte, while some of the fastest NVMe drives go on sale for around $110 per terabyte. The middle-of-the-road drives with good warranties from reputable manufacturers are usually between $80 and $90 per terabyte.

I wonder how much cheaper these will be while you are reading this in the future?!

This isn’t anywhere near as big a jump as going from the biggest Ryzen CPU to the smallest Threadripper. Even so, if my words mean you can move $60 from your NVMe to a slightly faster CPU or GPU, then it was worth my time!

I can’t max out my lower-end Samsung 980 NVMe

I can run a benchmark or spam some zeroes over the drive with dd and hit several gigabytes per second. I am running LUKS on top of my NVMe, and that layer of AES encryption seems to have me capped out at around 1.6 gigabytes per second. I haven’t found a use case that will register anywhere near that much bandwidth while monitoring with dstat.
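My quick-and-dirty dd test looks something like this. It’s a rough sketch rather than a proper benchmark; conv=fdatasync forces the data to disk before dd reports its number, so the page cache doesn’t inflate the result, and the file path is just an example.

```shell
# Write 1 GB of zeroes and flush it to disk; dd prints its throughput when done.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=1024 conv=fdatasync

# Clean up the test file.
rm /tmp/ddtest.bin
```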

I can hit these big numbers if I copy a huge directory of files from the NVMe to itself. This isn’t something most people do all that often.

The Internet says my drive will run out of write cache if I can write 300 gigabytes as fast as the drive can write. I don’t have any external sources that can supply data that fast. In practice, my 40-gigabit Infiniband network tops out at 13 gigabits per second because it is limited by my desktop computer’s PCIe slots. That’s roughly as fast as my encryption can go, but the drives on my server can only sustain about 60% of that under the very best conditions.

The most data I ever dump onto my computer comes from my cameras. It is normal for me to have one or two nearly full 64 GB microSD cards after filming. This could potentially fill up 1/3 of my Samsung NVMe’s write cache, but those cards only read at about 20 megabytes per second.

I edit lots of video, but that never needs more than 100 or 150 megabytes per second of disk bandwidth.

I’ve been monitoring game loading times from my lvmcache. I haven’t found a game that has a bottleneck on disk operations, and I have yet to see a number higher than 180 megabytes per second in dstat while loading a game or new level in a game.

It is nice that my NVMe can manage hundreds of thousands of IOPS. That’s at least ten times more than my old SATA SSD, but my usual tasks don’t go any faster after my upgrade.

dstat doesn’t tell the whole story

Just because I am only seeing 180 megabytes per second in dstat doesn’t mean that I’m not benefiting from the 1.6 gigabytes per second or more that my NVMe is capable of providing. dstat is giving me a snapshot of my throughput at intervals of one second.

During that full second, whatever game was loading had read 180 megabytes from the disk. Odds are that this happened in a little over 100 milliseconds. My old SATA SSD would have also read 180 megabytes during that same second, but it would have taken nearly 500 milliseconds.
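The arithmetic is easy to sketch. The 1,600 MB/s figure is the LUKS-limited ceiling I measured earlier; the 400 MB/s SATA figure is an assumption for illustration.

```shell
# How long it takes to read 180 MB at each drive's sustained rate.
awk 'BEGIN {
  mb = 180
  printf "NVMe (1600 MB/s): %d ms\n", mb / 1600 * 1000
  printf "SATA (400 MB/s):  %d ms\n", mb / 400 * 1000
}'
```

Either way, the whole read finishes well inside dstat’s one-second sampling window, which is why both drives report the same 180 megabytes per second.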

This improved latency is nice, and if software is blocked while waiting for that data, then hundreds of milliseconds saved here and there would add up to actual seconds. Something that took 20 seconds to load on the SATA SSD might now take 17 seconds.

The game loading times that I have managed to check don’t show such improvements. These games are likely still busy computing something important while waiting for more data.

Conclusion

I am certain that some of you reading this will actually benefit from a top-of-the-line NVMe. There are most definitely workflows out there that can benefit from 7 gigabyte per second reads and writes. I haven’t run into one myself yet, and I’d bet that the majority of you won’t either.

When I upgraded from a mechanical disk that topped out at 200 IOPS to the ancient Intel X25-M with its 5,000 IOPS, it was an absolute game changer. Everything seemed to load from disk instantaneously. Upgrading to the next SATA SSD with 50,000 IOPS didn’t feel much different, and neither does this NVMe with 500,000 IOPS.

We need some pretty serious changes in our hardware, operating systems, and software to really take advantage of the difference between 50,000 and 500,000 IOPS. Until then, we can definitely save a few bucks by skipping the upgrades to the fastest NVMe drives on the market.

How Do You Add Straps to a Bag When You Don’t Know How to Sew?

The really short answer is that you can stop by my Tindie store and grab one of my new no-sew Velcro strap doodads. I am not the first to have this idea. I am absolutely positive that you can buy something similar from Amazon. I have seen them before, but no matter what I put into the search box today, I just can’t find them!

This idea has been in the back of my mind for years, but I had no reason to execute on it. That is, until I bought a small shoulder bag to fit my new 14” laptop. I take this bag on rides to the park, and I thought it would be handy if I could use one of my drone battery straps to secure a bottle of water to my bag. I’ve been cutting carbon fiber and iterating ever since.

Why are these carbon fiber strap plates so small?

The first set that I cut wasn’t small. It was a 30 mm square with a pair of slots to slide the Velcro strap through. This is probably the more aesthetically pleasing design.

Attaching things to my bookbag is only one of my problems. The other is that I generate quite a lot of waste carbon fiber, but it is almost entirely lengths of narrow 300 mm and 400 mm strips. I’ve always thought it would be nice to find something useful to put in this space.

It is nice that I can cut useful items out of the remnants of carbon fiber that would eventually wind up in the trash. I can also add a whole mess of these pieces to the end of my regularly scheduled cooling duct job.

Using the wasted carbon fiber plate still isn’t free

In my mind, I’ve been thinking of these Velcro strap holder parts as being completely free. There is some cost involved in turning trash into treasure!

It takes an extra 30 minutes to cut these strap holder pieces out of the edges of the 400 mm x 300 mm plate. That’s nearly as long as it takes to cut 15 ducts out of the rest of the sheet.

It is tempting to think that there’s no cost in running the Shapeoko. It isn’t like I stand there and stare at the machine for the entire hour, but I do have to keep an eye on things. I am also wearing out endmills a lot faster—carbon fiber is MEAN to endmills!

Why do these things even work?!

What if you poked a hole in the canvas-like material of your backpack, pushed an M3 screw through the hole, and tightened a string to that bolt? You wouldn’t be able to hang much weight on that string before it would tear its way through the fabric.

We are poking two holes, and we are sandwiching the fabric between two ridiculously sturdy pieces of carbon fiber plate. We are spreading the weight out across a much larger area, and I am assuming that the pressure from the plates being squeezed together is preventing the fabric from fraying.

The carbon fiber really is ridiculously strong. This is exactly how we strap 220-gram LiPo batteries to our 750-gram FPV drones. We can smash into a tree at 100 mph, and the carbon fiber and Velcro strap won’t break. The weakest link here is going to be the material of your bag.

Even if you can sew, this seems like a tricky problem!

I have bags with integrated straps. When they are sewn directly to the bag, the stitching covers a lot of surface area! Usually at least three inches or more across the top and bottom of the length of the strap, then there’s some pattern across the strap near the edges. The straps are thick, and so is the bag. This requires a lot more force than sewing a button on a shirt!

I have other bags with a loop of fabric to thread your beefy strap through. Those loops of fabric are always three or more inches long, and the sewing looks just as difficult as sewing the strap directly to the bag.

I can’t sew, but I can cut two tiny holes.

How much weight can this safely hold?!

I don’t know, but I am excited to find out!

I have been bouncing around riding my electric unicycle with my small AmazonBasics DSLR backpack. I sometimes have my 700-gram FPV freestyle drone strapped on, other times I have a 16.9-oz bottle of water or my smaller 280-gram drone.

A couple of pounds surely isn’t a problem. I’ve been a little more concerned about the extra force that will be applied when I ride off a curb or bounce around through a lumpy field. I haven’t seen any damage to the fabric so far. I will be keeping an eye on things, and I sure hope you will as well!

I thought about applying a bit of E6000 glue between the plates. I’ve used it on fabric before with decent results. I want to be able to test new parts, and glue would make it difficult to swap things out!

Why are the main plates asymmetrical?

I am attempting to put less pressure on the Velcro strap. My first test mount was a pair of identical square pieces. I didn’t have any spacers. Once I bolted it in place, the Velcro strap was effectively locked in place.

The carbon fiber spacers that I include in the kit should alleviate most of that pressure. However, I do have one plate bolted through the padded section of my laptop bag. Once you cinch the plates together, the padding pushes up against the outside plate with quite a lot of force.

Having the connecting bar run across the edge will allow you to have those bars on opposite edges. I hope this helps alleviate that pressure in these sorts of circumstances!

How do you install the plates?

I am pretty certain this needs to be a video. It is hard to film inside a bookbag, so you will probably see me on video installing a strap on an old T-shirt in the near future.

The idea is that you poke two holes in your fabric. I have been using the reamer tool on the back of my Swiss Army knife for this.

I start by poking the first hole, then assembling the sandwich of plates and spacers using the M3 screw. I don’t tighten the first side down all the way. I want to be able to swivel it around to make sure I can install the plates as straight as possible.

Once I have the plate rotated where it needs to be, I stick the reamer tool right through the second hole in the carbon fiber. I usually can’t poke a big enough hole with the carbon in the way, but this gets me the start of a hole in precisely the correct position. I can move the carbon out of the way, then expand the hole to the correct size.

I do my best to keep the hole in the fabric as small as possible. The less damage you do to the bag, the better it will hold up in the long run. That said, I have accidentally punched some holes much bigger than I wanted. It hasn’t caused a problem so far, but we’ll see how that goes in the future!

Do these have to be carbon fiber?

Definitely not. My test parts were 3D-printed in PLA+, and they would have done the job for me just fine. I’d be a little concerned that the PLA would get soft on the passenger seat of my car if I left it out in the Texas sun in July, but I would also be concerned for the safety of my laptop in that kind of heat.

I picked up a spool of black ABS filament to use for the plastic parts. I have had a 3D-printed ABS part on my dashboard for almost three years. It isn’t as red as it used to be, but it hasn’t deformed in any way. I figure ABS will be a reasonable plastic to use for any plastic accessories.

I need to clean up the OpenSCAD code a bit before I upload the files to printables.com. I accidentally goobered up some of the parameters. They are cutting at the size I want, but if you try to adjust it, things get wonky!

None of us are planning on crashing our bookbags the same way I crash FPV drones. I bet most of us just want to strap a small tripod or drone to our backpack. A few pieces of PLA will be able to handle that!

I have a plastic accessory!

I didn’t add it to the Tindie listing yet, but I have added a 3D-printed arch on my Pi-KVM case. The arch piece is bolted to one of the carbon fiber backing plates, and it lets me loop in a carabiner so I can hook it to another bag.

I am not sure that I needed a loop like this on my Pi-KVM kit. Sure, I can quickly strap it to my small laptop bag now, but will I ever really need to? Wouldn’t I just throw my laptop in my big laptop bag and stuff the Pi-KVM kit inside with it? There’s plenty of room!

What’s next?

Probably more plastic accessories. The carbon fiber is awesome because it is ridiculously strong and so easy to machine, but it limits me to flat parts. I could make use of the third dimension, but I would need to buy much thicker carbon plates, and the extra work would absolutely obliterate my endmills.

I am planning on printing a TPU cover that can be used on the inside of the bag. My small laptop bag has two M3 lock nuts pointed right at the side of my laptop. I’d much rather my laptop get jostled into a rubbery print than two steel nuts!

What do you think?!

Is this a good idea? Why can’t I figure out what someone would search for if they wanted this sort of product? What other 3D-printed accessories do I need to work on?

Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Bought a Pick and Place: I Have No Idea What I’m Doing!

These are some of my favorite blogs to write. I bought a thing, and I have no idea how it works. I don’t get to write many of these, because I rarely jump directly into the deep end of the swimming pool.

When I bought a 3D printer, the only relevant experience I had was goofing around in a pirated copy of 3D Studio Max for MS-DOS sometime around 1994. When I bought my Shapeoko CNC, I was at least a bit more prepared. I model all my 3D-printed parts using OpenSCAD, and I still get to model my 2D parts for the CNC in OpenSCAD. When I bought an electric unicycle, I was more than a little worried that I’d never figure out how to ride it.

I am almost completely unprepared for the LumenPNP. I have never designed a printed circuit board (PCB). I have never soldered tiny surface mount components. The OpenPNP software looks rather terrifying! At least the LumenPNP looks kind of like someone built my Shapeoko XXL out of 3D printer parts, and all three machines speak G-code!

What is a pick and place machine?!

It is a robot! If you’ve seen a 3D printer or a CNC router, then a pick and place will look familiar to you. They’re all CNC machines, and the biggest difference between the three is the tool that they carry. A 3D printer’s tool extrudes plastic, a CNC router’s tool cuts away material, while a pick and place machine’s tool is a vacuum nozzle that picks up and drops off tiny components.

Why would you want to move components around? We have a tiny board with 21 LEDs that we want to manufacture in bulk. We order PCBs from PCBWay. The PCB has the wiring to connect each LED in the correct order, but someone has to install those 21 LEDs in the correct place.

We can pay PCBWay to do this, and if you buy enough units the price is quite reasonable. I can do it by hand, but this is not a good use of my time. I am way too slow.

We decided to buy a robot kit from Opulo to pick the LEDs and place them on the boards for us!

How far along are we?!

The open-source LumenPNP kit arrived a little over a month ago. I had the machine assembled and moving around on its own in less than a week. OpenPNP has control of the machine. The cameras are probably calibrated. My machine can very precisely locate its home position using the camera.

I have a stack of 2x5 panels of our OoberLights Micro PCBs on hand. I have 1,000 LEDs. I just have to put some time into following Opulo’s documentation.

How difficult was the LumenPNP kit to assemble?

Putting the LumenPNP kit together was mostly painless, and the documentation was nearly perfect. I’ve never assembled a genuine Prusa MK3, but I have put together older unofficial Prusa i3 printers. I have also assembled a Shapeoko XXL kit.

The LumenPNP fits somewhere in the middle. 3D printers can be a pain to assemble because they are so compact. The LumenPNP is so spread out that you rarely have to do any contortions to get your hands into position to work on something.

It shares that quality with the giant Shapeoko XXL, but the LumenPNP has the advantage that it weighs almost nothing in comparison. I can easily lift the LumenPNP above my head or carry it around the house with one hand. When I move my Shapeoko, I make sure I have help!

It took me the better part of a week to finish the assembly, but I wasn’t working hard. I’d sit down, usually work my way through one entire section of the build guide, then walk away. If I got a bit frustrated, or if my back bothered me from bending over to reach things, I would just take a break.

What’s next for us?

Do you mean aside from more procrastination?!

I am certain that there’s more calibration to work on. My backlash compensation suggests that I didn’t get my pixels per millimeter correct in my camera calibration. If I’m interpreting things correctly, my backlash might still be off by 0.1 mm or so.

I don’t think that’s great, but our Neopixel LEDs are about 2 mm across. I expect these chonky boys to get pulled into place when we reflow the solder if they’re only off by 0.1 mm.

Then I have to learn how to make OpenPNP work with our design files. I think I’m expecting that to be more work than assembling the machine, but I am hoping to be surprised!

Why did we buy a pick and place?

We could have had PCBWay do 100% of the work. Why on Earth would we do this part ourselves?! This could be a 2,500-word blog post of its own, so I will attempt to keep this to a few paragraphs.

The trouble is in the quantities. For me to be able to sell you an OoberLights Micro module for around $10 or $12, we really need the cost per unit to be down under $3. That means we’d have to buy 1,000 units in a single batch. Then the Butter, What?! team has to hope that enough folks will be interested in buying OoberLights modules.

I’d rather have a $1,200 LumenPNP build, $200 in worthless PCBs, and reels with 21,000 unused Neopixels than have a box with $2,400 in unsold OoberLights Micros. I could get most of my money back from the LEDs, and I bet the pick and place would be easy to sell.

What if we want to make a slightly larger board with one more ring of LEDs? If we are relying on PCBWay to do all the work for us, then we have to buy about 1,000 units of those boards. That is a financial risk.

Having the LumenPNP on hand will allow us to be much more agile. We already have LEDs on hand. We can just spend another $100 on a stack of larger PCBs, pick and place a few panels, and see how it goes. No need to spend $2,000 to $3,000 to see if people are interested in buying the larger panels.

This is awesome.

Final thoughts

We still have a long way to go, but I am super excited! We have a new robot. We have all the things we need to feed it materials to create exciting parts for us. All we have to do is convince me to keep plugging away at it!

I am excited that we didn’t have to spend $2,500 and hope that an order of 1,000 OoberLights Micro boards would arrive in working condition, and I am even more excited that we don’t have to hope and pray that we sell 1,000 boards. I am excited to see how this all works out for us!

I am Using Tailscale SSH, and Maybe You Should Too!

| Comments

I don’t know if I qualify as an early adopter of Tailscale. My first blog post about it was in September of 2020, and that post said I had Tailscale installed on a few machines for months before I really started using and relying on it. I do know that I very much rely on Tailscale, and I don’t know what I would do without it.

I woke up at around 9:00 p.m. That was really late! I saw dozens and dozens of tweets about Tailscale SSH in my Twitter timeline, and it was the number-two post on Hacker News. I read a little about what it did, and my very first thought was, “Uh-oh! This is scary!” Then I thought it was neat. Then I thought it was scary again.

I made a latte. I played some video games. Then I started setting up Tailscale SSH.

What is Tailscale?

Tailscale is a mesh VPN. You install the Tailscale client on all of your computers, servers, and devices. Then each device will attempt to make a Wireguard VPN connection directly to every other device on your network. It is like having your own virtual LAN, but it can be spread out over the entire planet.

It is also stupid easy to set up. If you aren’t dilly-dallying, I bet it takes less than two minutes to get a new machine onto your Tailnet.

One of my favorite features of Tailscale is their machine-sharing option. I depend on this pretty heavily. It is how I collaborate on blogs with Brian Moses, and it is how I share dozens of gigabytes of Create/Invent Podcast video files with Jeremy Cook.

Why am I brave enough to use Tailscale SSH?

I was worried about this idea at first. If you could hijack my SSH keys or agents, you could wreak havoc on the Butter, What?! media empire! You could post nonsense on patshead.com, butterwhat.com, and creativitycast.com. You could do horrible things to me.

Do I really want to trust a third party with this piece of my security?

The truth is that I already do. I have the Tailscale client installed on a couple dozen machines. Those clients now have SSH servers built right in, but they don’t need SSH servers to be a security risk. Tailscale has had the ability to run arbitrary code on my boxes for years.

I was aware of that when I first decided to use Tailscale. I am downloading executable binary images from a third party. I don’t know what’s inside. I decided that I was going to trust the Tailscale company, and I still do.

As long as there’s an SSH server hiding inside every one of my Tailscale clients, why shouldn’t I use it?!

What advantages will I see over distributing SSH keys manually?

If you’re using Tailscale, but you’re just using password authentication with SSH, I think you should just turn on Tailscale SSH. I don’t feel terribly safe even having password authentication turned on. You’d be better off ditching the passwords and letting Tailscale handle your SSH authentication for you.

First, I am bad at rotating SSH keys. I use a separate SSH private key on each of my workstations. The key on my new 2-in-1 laptop is as fresh as the laptop. The key on my old laptop is from 2021. The key on my desktop is scary old. I haven’t distributed the new laptop’s public key to all my servers yet. I figured I’d sit down and generate a new key for my desktop and kill two birds with one stone.

That is the second problem. Distributing new keys is a pain in the neck. I have one machine that runs Windows 11 now. Some are physical machines. Some aren’t powered on all the time. Some are virtual machines here at home, while others are virtual servers on the Internet.

I usually build a new authorized_keys file with all my current keys, then run a loop that uses scp to drop it in place on each device. If something goes wrong on a device I can’t easily access, it can be a real pain in the tuchus.
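That scp loop is simple enough, but here is a rough Python sketch of the same idea. The host names and paths are hypothetical stand-ins, and the dry-run default just prints the commands instead of actually copying anything:

```python
# A sketch of the authorized_keys distribution loop described above.
# Host names are hypothetical -- substitute your own machines.
import subprocess
from pathlib import Path


def build_authorized_keys(pubkey_paths):
    """Concatenate the current public keys into one authorized_keys blob."""
    return "".join(Path(p).read_text() for p in pubkey_paths)


def push_authorized_keys(blob, hosts, dry_run=True):
    """Write the blob to a temp file and scp it to each host."""
    tmp = Path("/tmp/authorized_keys.new")
    tmp.write_text(blob)
    for host in hosts:
        cmd = ["scp", str(tmp), f"{host}:.ssh/authorized_keys"]
        if dry_run:
            print(" ".join(cmd))  # show what would run without copying
        else:
            subprocess.run(cmd, check=True)
```

The pain point in the next paragraph still applies: if scp fails on a machine you can’t easily reach, no script saves you.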

Nearly every single server I have is running Tailscale, and each Tailscale client has its own private key for Wireguard. Why do I need to maintain and distribute more keys?

Tailscale SSH authorizes machines instead of users

Tailscale doesn’t know that 18 people have shell accounts on a server. Tailscale just knows that this server and my desktop machine are both on my Tailnet. If you enable Tailscale SSH on both devices, then any of those 18 people would be able to SSH to my desktop computer!

NOTE: I should verify this. I immediately set the action value to accept. If I didn’t do that, it would check that my client is authenticated via the browser. I imagine this would save me, but I SSH so often that it would be quite annoying!

I have split my Tailnet into two different tags to remedy this situation. My desktop and both laptops are now tagged with workstation, and almost everything else is tagged with server. The machines with the workstation tag are computers that no one besides me has credentials for.
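Tags don’t exist until you declare them in the ACL policy file, so there is also a tagOwners section involved. This is just a sketch of what that part might look like; using autogroup:admin as the owner is an assumption on my part, and you could list specific users instead:

```
"tagOwners": {
    "tag:workstation": ["autogroup:admin"],
    "tag:server":      ["autogroup:admin"],
},
```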

"ssh": [
    {
        "action": "accept",
        "src":    ["tag:workstation"],
        "dst":    ["tag:server", "tag:workstation"],
        "users":  ["autogroup:nonroot", "root"],
    },
],

I set the SSH ACL to allow connections from any workstation to any server or workstation. It seems to work as expected. Now all I need to do is start enabling Tailscale SSH on more devices!

From a practical standpoint, is this much different than managing SSH keys yourself?

I don’t know about you, but when I store an SSH private key on a server with multiple users, it automatically feels like that key is compromised.

What if someone else has root? They can steal the key. What if I accidentally goofed up the permissions? What if someone has access to the backups? What if someone manages to connect to the ssh-agent?

These are the sort of keys I would have out in the world for something like a backup job, or to run jobs that publish sites to production. I try to give them as little access as possible.

Tailscale will let me continue to do that, but it will let me do it in a centralized location. I can set ACLs that say ScaryHost1 can only connect to ScaryHost2 as one particular user, and I won’t even have to log in to either host to allow that to happen or to revoke the access.
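In practice, a rule like that would look something like this sketch. Everything here is hypothetical: the scaryhost tags and the backup username are stand-ins for whatever the real job needs:

```
"ssh": [
    {
        "action": "accept",
        "src":    ["tag:scaryhost1"],
        "dst":    ["tag:scaryhost2"],
        "users":  ["backup"],
    },
],
```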

Centralized auth management will be awesome!

I am aware that there are systems built specifically to address this issue. I’ve never had much interest in working to implement them, because my SSH keys were already created, secured, and distributed. I never have to start from scratch. I usually just have to sneak one new key into a handful of places every few years.

I am already using Tailscale. Tailscale is already running on 95% of my machines. It doesn’t take much to install Tailscale on the rest.

At that point I am inches away from never having to manage SSH keys again!

What about shared nodes?

Shared nodes are the bee’s knees. Truth be told, I am more than a little uncomfortable with giving Tailscale so much power over my network. In my opinion, machine sharing is the biggest reason to use Tailscale’s service instead of hosting your own Headscale network.

I can click on a machine in my admin console, and it will give me a URL to send to a friend. They click on the URL, sign up for Tailscale, and they’ll be pinging my shared server in 5 minutes. If they already use Tailscale, they’ll be pinging my machine in seconds.

The documentation says that shared nodes don’t support Tailscale SSH, but it doesn’t say what happens if it is enabled. I had some guesses, but I didn’t have an easy way to try it for myself this week, so I figured I should ask. If you use Tailscale SSH on a shared node, anyone who you have shared the machine with will just fail to authenticate.

I am pretty sure Brian pushes to a Git repo on our Butter, What?! staging server.

Should I just continue to use SSH keys for this dev server? Should I turn on Tailscale SSH for my convenience, then make Brian use a different port? Either will work fine, but it looks like I won’t be completely eliminating my SSH keys anytime soon!

Tailscale SSH fits well with how I’ve been using Tailscale

In days gone by, I had a handful of services exposed on the Ethernet and Wi-Fi interfaces on my desktop and laptop. I don’t own many devices that aren’t now running Tailscale, so I’ve been locking more and more things down.

My desktop and laptop don’t have open ports on their physical interfaces. If I take my laptop to a coffee shop or our local makerspace, there’s nothing open for anyone to try to poke at. Except for SSH. I am always brave enough to have passphrase-only SSH open to the world.

I am only just now realizing that I can lock down my NAS, my Octoprint server, and my CNCjs server in the same way. I don’t connect to those from any devices that aren’t already part of my Tailnet!

Tailscale SSH feels like another step on this journey.

I am finally enabling MagicDNS!

This is why I assume I am an early adopter of Tailscale. I was relying on Tailscale for so long that I had things set up before MagicDNS arrived. I have a gigantic hosts file on each of my workstations instead!

I need at least two or three servers to be listed in those hosts files anyway. My virtual machine host, my NAS, and my desktop are connected with 40-gigabit Infiniband. They have to bypass Tailscale to reach even a small fraction of their maximum speed.

I am relying less on my NAS today, and I am working towards eliminating the need for this Infiniband link. Eliminating that hosts file and upgrading to MagicDNS will take me a step farther down that road.

Conclusion

I am looking forward to seeing where Tailscale SSH goes from here.

What do you think? Am I crazy for relying on Tailscale SSH? Or is this one of the best decisions I have ever made? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Using Open-Shell-Menu and ExplorerPatcher to Clean Up Your Windows 11 Start Menu

| Comments

I have been using Windows on my 2-in-1 laptop for a little over a month so far. My hope was to be able to skip installing Linux on this machine. I figured I’d be able to mostly treat this thing like I would an overpowered Android tablet. For the most part this is working out well!

One of my complaints about Windows 11 is that the interface doesn’t allow for much in the way of customization. Sure, you can change the color theme, but the buttons and icons have to be where Microsoft decides they should be.

This wouldn’t be a big deal if my Asus Vivobook Flip 14, as seen used by Billy Butcher in season 3 of The Boys, was just an ordinary laptop. Who cares what the start menu looks like when you are sitting at a keyboard? You just tap the Windows key, start typing the name of what you want to run, and hit enter. I don’t know about you, but I barely even look at the start menu when doing that.

I find myself folding that keyboard back and out of the way every single day, and every time I do, the shortcomings of the Windows 11 start menu become extremely apparent.

ThisIsWin11 introduced me to StartAllBack and Files

I pretty quickly found a handy open-source tool called ThisIsWin11. It has a list of tweaks and registry hacks and things that it can apply to Windows 11 for you. Some of these tweaks disable telemetry, some enable features that were available in File Explorer in Windows 10, and it can disable things like Microsoft Teams and Skype.

ThisIsWin11 is also a lazy interface to let you install some handy software. Under their “Best Apps for Windows 11” heading are two really nice software packages: StartAllBack and Files.

StartAllBack lets you customize your taskbar and start menu, and it is fantastic. I even went to their web site and was all set to plunk down my $5 for a license, but then I read about what happens if you don’t pay. It says it will start nagging you on every boot, and that the license is tied to serial numbers of your motherboard and boot drive.

Ugh! That sounds like the nonsense we had to deal with in the nineties. I didn’t want to support that, but since I knew what I wanted to be able to do, it was easy for a quick Google search to point me to Open-Shell-Menu.

Before we talk about that, though, you should absolutely check out the Files file manager. Files is a clean, feature-packed replacement for File Explorer, and it is open-source.

Open-Shell-Menu

Open-Shell-Menu is free and open-source. It seems as though it will let you tweak nearly everything related to the start menu that StartAllBack can tweak and maybe even a little more. StartAllBack has a much nicer interface.

Open-Shell-Menu doesn’t quite work correctly on Windows 11 unless you also install ExplorerPatcher. The combination of these two open-source software packages should give you most of the functionality of StartAllBack.

What can you do with Open-Shell-Menu? You can make your start menu look more like Windows 2000 or Windows 7. You can have a much longer list of frequently used programs on your start menu. You can also configure exactly what you want on the second column of the start menu.

Why did I modify my start menu?

I am guessing that I use my Asus 2-in-1 as a tablet roughly 20% of the time, and I can’t even guess how many times the default Windows 11 start menu has made me grumpy!

There are dozens of things pinned to my start menu that I will never use, and there are a few things that just don’t want to show up even when I pin them. The recommendations rarely have anything I want to click, and I have been finding myself wishing I could add at least one more folder to click on next to my downloads, videos, and pictures.

My Start Menu

It seems like I regularly have to tap the start button, tap in the search box, type two or three characters into the on-screen keyboard, then select the thing I want. I shouldn’t have to tap six times to open Into the Breach.

This is still problematic! When I open Into the Breach the most recent program on my start menu is Steam. That is absolutely not what I want, but it does make sense, because every Steam game on the start menu is just a link to a Steam URL. I sort of avoided the problem by adding a link to the Steam menu to the second column.

My farting around while working on my recent Windows 11 blog posts has gotten me to a point where I have a lot of junk in my recent and frequent programs list. I am sure that will sort itself out over time.

Conclusion

If you want to customize your Windows 11 start menu, you want to do it the easiest way possible, and you don’t mind paying $5 for something that ties itself to your motherboard and NVMe, then you should definitely check out StartAllBack. If you want something that you can install anywhere and take your configuration with you, you should check out Open-Shell-Menu.

What do you think? Am I doing the right thing by using Open-Shell-Menu? Do you prefer StartAllBack? Did I miss something even better? Let me know in the comments, or stop by the Butter, What?! Discord server to tell me about it!

Windows 11 Start Menu and Quick Settings Opening Slowly

| Comments

I bought an Asus Vivobook Flip 14 2-in-1 laptop last month. It isn’t a speed demon, but with an 8-core Ryzen 7 5700u, 16 GB of RAM, and an NVMe that can push several gigabytes per second, it is definitely not a slow machine.

This is why I was surprised by how slow some parts of my Windows 11 experience have been, but only intermittently. Sometimes I would tap the Windows key to bring up the start menu, and it wouldn’t show up right away. Other times it would show up instantly, but I’d start typing and it would just sit there.

Sometimes it’d let me type enough to see the program I want to run, then I’d hit enter, and it would look as if nothing happened. I could click on the menu item that says open, but nothing would open. Then five or ten seconds later, something would happen.

Sometimes I would touch my Wi-Fi or battery icon in the task bar, and it would take several seconds for the quick-settings panel to slide in.

The problem is that this happens at all. This is core functionality that I’d expect would be quite optimized. How does this ever happen on a mid-range computer? The intermittent part has made the problem difficult to solve, but I think I finally have!

tl;dr

I disabled memory compression. How do you do it? Open PowerShell with admin privileges and run:

Disable-MMAgent -mc

You can turn memory compression back on with this command:

Enable-MMAgent -mc

Should you do this? I don’t know for sure, but it can’t hurt to try, especially if you have plenty of RAM and a good, fast NVMe. You can check whether memory compression is currently enabled by running Get-MMAgent.

What wasn’t the problem?

I am reasonably certain that this tends to happen most often shortly after waking the laptop up. I am a Linux professional of some sort. At least, I assume I still count as a professional. I am pretty ignorant of precisely what Microsoft has been up to during most of the last decade.

I’m pretty sure sleeping isn’t just sleeping. I’m pretty sure Windows dumps your RAM to storage just in case the battery dies during sleep. I’m pretty sure that when you shut down your computer, that is just a fancy version of hibernation. The trouble is I am only pretty sure about so many things.

The first thing I tried was telling Windows to run the laptop in performance mode instead of power-save mode. Since the problem only happens some of the time, I had to run like this for a couple of days. I thought it was helping, and it might have been helping a bit, but there were still times when I saw a lot of latency when I banged on the start menu.

The Asus Vivobook laptop was noticeably warmer. It wasn’t hot, but I could tell it wasn’t cool anymore. Not a huge deal, but heat is a waste of battery, especially since the laptop didn’t really feel any snappier.

Why disable memory compression?

It was hard to get good answers. In fact, I don’t think I got good answers. Intuition says I should give this a try, and it worked.

I don’t know what compression algorithm Microsoft is using, but I hope it is something fast with poor compression ratios. Something like lz4 decompresses at about 2 GB per second on a fast machine, and I bet it goes even slower on my laptop. Why waste CPU cycles and time compressing RAM when you can swap it to your NVMe just as quickly or even more quickly?

There’s a ton of cargo cult comments around the Internet talking about disabling memory compression and disabling the SysMain service (formerly Superfetch). There’s just as much cargo cult around telling people not to disable either one.

They say the current implementation of Superfetch understands that you have fast SSD or NVMe storage. I have no information about whether or not you should disable it, but I haven’t. It doesn’t seem like something that should be necessary today, but I also haven’t attempted to measure this.

If I understand correctly, my Ryzen 5700u will be stuck at around 2 GHz for at least several seconds any time the CPU gets hammered. This is a trick that keeps the clock speed from hitting 4 GHz when something needs to burn a bunch of CPU cycles for half a second.

My guess is that after sleeping, idling, or playing Into the Breach for two hours, the pages of RAM with the start menu and quick-settings panel wind up getting compressed. When my laptop is in power-save mode, it only goes up to 2 GHz, the RAM takes twice as long to decompress, and that is what I am waiting on. If you’re on a fast desktop computer, or your laptop is plugged in, you are much less likely to even notice this issue.

So far, my guess seems to be correct, because I haven’t seen this latency since disabling memory compression.

I wasn’t just having trouble with my start menu

Steam has been weird. It is already open. I can see it in my taskbar, so I click on it to bring the window into focus. It appears instantly.

Except it may as well be a screenshot. I can’t click anything, and when it does this, it takes way longer than the start menu to start responding.

Someone smarter than me would have opened Task Manager to see what was happening. It never occurred to me. If this were any of my Linux machines, I would have been watching htop, dstat, and peeking at dmesg hoping for a hint.

This is a micro optimization solving an extremely specific problem!

I hate latency. I am the sort of person that expects to be able to hit a couple of keys and have a terminal window show up with a ready-to-use shell in less than 200 ms. That was what I expected on a laptop from 2009. We should be doing much better in 2022.

I am sure Microsoft has collected mountains of data about this. I imagine that in most cases, especially with a processor that isn’t trying to sip power, that whatever time we lose waiting for the start menu is made up several times over by the next program loading faster. I am guessing that my situation is an edge case for them, and a bit of delay the first time I hit my start button isn’t a big deal to them.

It is a big deal to me.

I can count three hippopotamuses before Windows 11 can open Windows Terminal and show me my Ubuntu 22.04 zsh prompt. I know there’s an extra layer of virtual machine in there, but my laptop in 2011 could open Gnome Terminal and show me a shell prompt in less than 150 ms.

Conclusion

I don’t know if my reasoning is correct, but the results make me believe my thinking is sound. If you have a slow CPU with a fast NVMe, I bet disabling memory compression on Windows 11 will do you some good. My Ryzen 5700u isn’t exactly slow, but it definitely pretends to be slow when in power-save mode, which is what it should be doing!

What do you think? Are you disabling memory compression? Is disabling compression giving you improvements that you can see? Should I be completely disabling SysMain? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Games to Play on Your Windows 11 Tablet

| Comments

When I ordered my Asus Vivobook 2-in-1, I immediately began to scour my voluminous Steam library looking for touch-friendly games. I was initially disappointed, and in some ways I still am. Many of the games are just simple mobile games on a bigger screen, and some games that do have mobile ports like Prison Architect won’t work without the second mouse button and a keyboard.

I haven’t found a good list of proper touchscreen games for Windows 11, so I figured I should make one. I thought /r/surfacegaming would be promising, but 99% of what they post there is just mouse-and-keyboard games running on a Surface device. There are curated lists of touch-friendly games on Steam, but they’re riddled with games that just don’t quite work.

I have tested every game on this list. Below the list are my mini-reviews of the games. If a game doesn’t have a review, that means I’ve only played enough to make sure the first level works.

These are just my findings so far. I plan to update the list as I find more games, and I will definitely be adding reviews for games as I start to actually play them!

tl;dr Just show me the list!

Into the Breach

Into the Breach is easily my favorite game on this list. I have played 12 hours within the first four days of owning the game.

Into the Breach sits in some sort of Goldilocks zone between simplicity and complexity. I certainly don’t want to play You Must Build a Boat on a 14” screen, but it was fantastic on my phone. I also don’t think I want to play something as hugely complex as Civilization 6 on a large tablet. Into the Breach is just right.

So far, this is the only game I have found that sits well inside this Goldilocks zone. What else should I be playing?!

Kingdom Rush

Kingdom Rush is a port of a mobile tower defense game. I remember trying this out on Android about eight phones ago. I didn’t expect to enjoy an overgrown mobile game, but it is definitely more fun than I expected.

I’ve played for about six hours so far. Into the Breach is consuming all my time for now, but I expect I will get back to Kingdom Rush at some point. It is nice being able to see the entire map on screen, and there’s significantly more strategy involved than I ever would have imagined.

Mini Metro

Mini Metro is very much a mobile game. I first played it on my old 12” Windows tablet with an underpowered Atom processor. That 2-in-1 couldn’t play any real games, but it played Mini Metro just fine, and it was a lot of fun!

Mini Metro is a game that you should probably be playing on a phone. The interface is quite simple, and you certainly don’t need a giant display. Even so, it is fun watching people move around on a subway map that’s as big as a sheet of paper!

Monster Train

I don’t have a ton to say about Monster Train. It was in the Humble Choice bundle this month alongside Deathloop. Monster Train is highly rated, and it sure looked like the sort of game that would be touch-friendly, so I gave it a try!

I don’t really know what I’m doing, but I played through the first level with the touch screen, and everything worked perfectly.

Remnants of the Precursors

I haven’t played this one yet, but I think it deserves a couple of paragraphs. Remnants of the Precursors is an open-source game that is a modernization of the MS-DOS game Master of Orion. I used to enjoy playing Master of Orion 2, so when I learned of this game, I installed it immediately.

Remnants of the Precursors is a complicated strategy game. It isn’t going to hold your hand and teach you how to play. I fired it up, started a new game, and then made sure I could click around on things with my finger. It seems to work fine, but I haven’t had an opportunity to sit down and learn a brand-new strategy game yet.

This sits outside of the tablet-friendly Goldilocks zone.

The Banner Saga 1, 2, and quite possibly 3?!

I don’t expect that I will be playing The Banner Saga games, but I can’t help but leave a few words here, because the style makes these games look amazing!

I played enough of The Banner Saga 2 to start a game and make a few moves on the map. They’ve certainly made sure that these games are touchscreen-friendly!

I was excited to see the animation. It reminds me of the Don Bluth animated movies from my childhood like The Secret of NIMH or even his animation from the Dragon’s Lair arcade game. The artists for The Banner Saga games have done a fantastic job!

Holy Potatoes! A Weapon Shop?!

There are four Holy Potatoes! games, and I own two of them. They seem to go on sale for $0.99 every once in a while. That’s fun!

I had to fire up Holy Potatoes! A Weapon Shop?! on my tablet to give it a try. It looks neat, but I haven’t even completed the tutorial. I’ll give it a try one of these days when I’m looking to waste some time.

I’ve managed to craft and sell my first weapon without using the mouse. Pinch-to-zoom doesn’t work, and zooming is definitely a thing in this game. I had no trouble zooming the map with the scroll wheel, though it doesn’t let me zoom out very far at all. I have not played enough to know if this will be problematic later in the game.

It does let me move around my tiny map by dragging my finger.

Games that just won’t work

Some of these are really disappointing. FTL, Prison Architect, and Invisible, Inc. each have iOS ports, and Prison Architect is also available on Android. If these games can already work with no keyboard and no right mouse button on iOS, then the developers have already solved the difficult problem. Microsoft should throw a few dollars at these folks to remedy the situation!

Some of them are just dumb. Fae Tactics seemed like it was going to work, except it doesn’t detect a touch as a click. You have to double-tap everything. I noped my way out of there quite fast! This seems to be a common problem.

Games like Cities: Skylines are probably way too complicated and intricate to play with just a touch screen.

Why isn’t my favorite game on the list?!

I am only listing games I have actually tested, and I am doing my best to avoid buying new games. There are already 1,950 games in my Steam library, and I want to get a chance to play more of them. I don’t want to buy more games that I won’t play or won’t have time to play.

Where am I looking for games beside Steam?

I have been wanting to comb through the Itch.io charity bundles that I’ve bought. I know for sure that I bought both the Bundle for Ukraine and Racial Justice and Equality bundle. There are literally thousands of games in these collections.

Thank goodness you can visit randombundlegame.com to filter the list by bundle, genre, and many other properties. I know for certain that there are awesome games in here. If we’re lucky, a few games from Itch.io are both awesome and tablet-friendly!

Conclusion

I hope you’ve found my collection of touch-friendly tablet PC games helpful. I am certain it isn’t complete, but I am confident that everything that has made it onto the list is properly playable and most likely enjoyable to play on a touch screen!

I will be working to expand the list. I am always playing games, and I even sometimes manage to start playing new games, so I expect the list will naturally expand in the future.

Do you think I’ve left out any awesome games that work well on a touch-screen tablet PC? Is your favorite game already on the list, or did I completely miss it! Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Using lvmcache for Gaming and Video Editing - What I Have Learned So Far

| Comments

I’ve been running and living with my lvmcache experiment on my desktop for more than a month now. Everything is working out about as well as I had hoped, and that means they’re doing even better than I truly expected. In fact, things are sometimes performing better than the measurements would suggest that they should. Doesn’t that seem counterintuitive?

Why use lvmcache on your desktop or workstation?

SSDs and NVMe drives can be expensive. Not only that, but they only get so large. NVMe pricing at up to 2 TB is pretty good, then you wind up paying a bit of a premium to get to 4 TB. If you need more storage, it will either cost a lot more, or you need to buy multiple drives. Then you need more M.2 slots.

I decided to put a fast lvmcache in front of a slow hard drive. I bought a 1 TB Samsung 980 NVMe for $100. I set aside 300 GB of that to use as an lvmcache in front of a 4 TB 7200 RPM hard drive I already had on hand.
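
A setup like this only takes a few LVM commands. Here is a rough sketch of how I would build it today; the device names (`/dev/nvme0n1p2`, `/dev/sda`), the volume group name `vg0`, and the logical volume names are my assumptions, not the exact commands from my machine, and the `--cachevol` syntax requires a reasonably recent LVM:

```shell
# Put both the slow disk and a chunk of the NVMe into one volume group.
pvcreate /dev/nvme0n1p2 /dev/sda
vgcreate vg0 /dev/sda /dev/nvme0n1p2

# The big, slow data volume lives entirely on the 4 TB hard drive.
lvcreate -L 3.6T -n data vg0 /dev/sda

# Carve a 300 GB cache volume out of the NVMe and attach it as a cache.
lvcreate -L 300G -n datacache vg0 /dev/nvme0n1p2
lvconvert --type cache --cachevol datacache vg0/data
```

The cache can be detached later with `lvconvert --uncache vg0/data` if you change your mind, which makes this a pretty low-risk experiment.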

NOTE: Because so much of my bulky data is stored on the slow disk under the lvmcache, I am only using 142 GB on my root/home volume on the NVMe. Since large files are always going to live on the cached volume, I don’t expect to use much more space. It would have been safe to set aside 700 GB for caching, and I wish I had done that!

The hope has been that my games would load faster, and my video editing experience in Davinci Resolve would still be smooth even when the files are stored on the slow hard drive. I don’t want to have to move large games to a bigger disk. I don’t want to shuffle around games that I am not playing often any longer. I don’t want to have to move my current Resolve project to the NVMe and move it off when I am finished.

I want lvmcache to handle the shuffling for me.

lvmcache is impossible to benchmark

That is probably an exaggeration, but it is definitely difficult to benchmark lvmcache well. Everything you read or write passes through Linux’s in-memory disk cache, which skews your numbers, and lvmcache behaves nothing like that page cache.

Since lvmcache is a hot-spot cache, it tries to capture data that is accessed frequently, and it tends to skip caching sequential writes. That means you might dump 50 gigabytes of video footage from your camera, and it is possible that not a single byte will wind up in cache. You may not even edit that footage for a few days, so it definitely won’t wind up in the cache right away.

How do you benchmark something like this?

I don’t really want a benchmark. I just want to get a glimpse of what the cache is doing for me, and I want to see if I can measure any improvements or see if there are any delays.

I tweaked the awesome lvmcache-statistics script so that it works more like top. That will let me monitor cache hit and miss rates. I’ve been running that alongside dstat to watch how much data is moving around on each of the physical disks. This has definitely been interesting!
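
If you don’t want to modify a script, LVM can report the same hit and miss counters directly. This is a sketch of the kind of monitoring I’m describing; the volume name `vg0/data` is an assumption, and the disk names passed to dstat will differ on your machine:

```shell
# Poll the cache hit/miss counters straight from LVM every few seconds.
watch -n 5 'lvs -o lv_name,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses vg0/data'

# In another terminal, watch per-disk throughput on the NVMe and the hard drive.
dstat -d -D nvme0n1,sda
```

Watching both at once is how I can tell whether a game load or a Resolve session is being served from the NVMe or from the spinning disk.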

Game loading tests

Games are a huge part of why I want to use lvmcache on my desktop. I have so many games that are each between 50 and 150 gigabytes. I don’t hop between games. I am usually playing one single player and one multiplayer game during any given week or month. Most of this can drop out of cache, and it would be fine.

We’ll talk about how this might relate to cache sizing later on.

I already talked about this in the first lvmcache post last month, but I’ll summarize it again here. The first thing I tested was firing up Sniper Ghost Warrior Contracts on my new lvmcache setup. I wasn’t smart enough to use a stopwatch, but the game plays a cut scene while loading, and you can’t skip the cut scene until loading is complete.

The first time I started the game, I got to see way more of the cut scene than I ever did when playing the game on my SATA SSD. I quit the game, dropped my RAM caches, and tried again. This time it loaded faster. I repeated, and it was precisely as fast as loading from my old SSD.
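
Dropping the RAM caches between runs is what keeps the page cache from lying to you in tests like this. This is the standard trick, and it needs root:

```shell
# Flush dirty pages to disk first, then drop the page cache, dentries,
# and inodes so the next game load actually has to hit the disks.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
```

This does not touch lvmcache itself, which is exactly what you want: the lvmcache contents survive, so you’re measuring the SSD cache rather than your RAM.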

I wasn’t seeing any numbers higher than around 150 megabytes per second on dstat, so I wanted to make sure lvmcache wasn’t a bottleneck. I copied the game directly to my NVMe and tried again, and it loaded in precisely the same amount of time.

Before writing this blog today, I figured I should load the same game today. Steam says I haven’t played it in about six weeks. I’ve dumped hundreds of gigabytes of video since then, and I have edited two Create/Invent Podcast interviews and two Butter, What?! Show live streams.

I fired up Sniper Ghost Warrior Contracts and watched the dstat output. I was excited to see that there were rather long stretches of time when only the NVMe was being touched while on the way to the main menu. When I loaded my progress, though, things slowed down.

Yes, data was being pulled from the cache, but not much. It didn’t take all that much longer to load. If I were smarter, I would have used a stopwatch! It was probably an extra 10 or 15 seconds.

That doesn’t sound bad at all, right? And I am confident that it will go faster if I decide to play again tomorrow, because dstat showed plenty of writes to the NVMe during the load, so more data was being cached.

Video editing is both awesome and interesting

We record one live stream and one podcast interview every month. My live streams with Brian Moses generate a large volume of video. We each record ourselves on our Sony cameras, and I record my desktop using OBS Studio. It is easy for us to break 100 GB of footage in 60 to 90 minutes. The Create/Invent Podcast interviews are recorded using Riverside.fm, so the video is more heavily compressed but still of quite good quality.

When I dump video off the cameras, I don’t see much lvmcache action going on. As soon as I add them to a timeline in Davinci Resolve, things start to get interesting. Resolve reads through every video file to generate an audio waveform to display on the timeline. During this part of the process I see reads on the slow disk and a similar amount of writes to the NVMe!

That means it is mostly ignoring the large video files when I dump them to disk. This is either because lvmcache doesn’t think a single write is worth promoting to the cache, or it doesn’t bother caching the writes because the slow disk is plenty fast enough to keep up with my SD card. It is probably some combination of both.

As soon as Davinci Resolve starts scanning those video files, the lvmcache will start getting primed. Isn’t that cool?!

What kind of disk performance do you need to edit video?

The answer to this question will be different for everyone. It will depend on how many videos you have playing simultaneously on your timelines and the bitrate of those videos. The highest bitrate videos that I edit 90% of the time are the 50-megabit files from my Sony ZV-1. Sometimes I edit 100-megabit and 150-megabit files from a Sony a7S3.

You really don’t need much disk performance to play back video files like these. My new 2-in-1 ultrabook laptop can composite three of my Sony ZV-1 files stored on a Samba share over WiFi on the same timeline and play them back without dropping any frames. Things get hairy when you want to scrub through the timeline quickly, but it can handle basic editing tasks this way in a pinch.

The quickest and easiest test I could think of doing was playing back one of my videos faster and faster to watch how much I/O bandwidth would be required. I learned that playing back some footage of myself at 8-times normal speed only eats up 50 or 60 megabytes per second of bandwidth. I could have done math to figure this out, but doing it in real life also showed me that I seem to start dropping frames at 16-times or 32-times playback speed.
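
The math behind that observation is simple enough to sanity-check in the shell. A 50-megabit-per-second file needs about 6 MB/s at normal speed, so 8x playback needs about 50 MB/s:

```shell
# Convert a video bitrate in megabits/s to required disk bandwidth in
# megabytes/s at a given playback speed multiplier. The numbers here
# match my ZV-1 footage; swap in your own bitrate.
bitrate_mbit=50
speed=8
echo "$(( bitrate_mbit * speed / 8 )) MB/s"
```

Running it with my numbers prints `50 MB/s`, which lines up with what I saw on dstat during the 8x playback test.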

NOTE: Knowing this number is handy for me, because it tells me that my spare SATA SSD can outpace my CPU and GPU by nearly an order of magnitude.

My slow hard disk has at least three times more throughput than this. It could almost definitely keep up if there were two of these videos on a timeline, and it might be able to manage with three, but I am not confident of that. Video playback is very sequential disk access, and that is something spinning media is good at, but Resolve is going to be requesting blocks from three different video files. Those seeks will slow things down!

I had a chance to do a more realistic test this week with fresh footage from the Create/Invent Podcast, but disk performance isn’t my problem when editing podcast episodes. I wind up building stacks of timelines on top of timelines when I set up various split-screen views. When I put those in a multicam bin, the multicam preview eats up CPU and GPU.

The important thing for me is that I had a chance to see first hand that my lvmcache would easily be up for the job.

This really simple playback-speed test has been a handy yardstick for understanding how much disk throughput I need. I know how fast my 5 GHz 802.11ac WiFi is in my office, so it isn’t much of a surprise that my laptop can play back three video files on the same timeline in Resolve. It also isn’t a surprise that my CPU and GPU are the bottleneck when trying to play video at 32-times speed. As I alluded to earlier, throughput isn’t always enough.

If your video files are stored on a slow network share or a 7200-RPM disk, you will certainly notice a slight delay every time you click a new point on your timeline, while on a solid-state drive you will be able to jump around instantly.

Is one cache enough? How big does your cache need to be?

This is another part of the puzzle where I have more questions than answers. Let’s walk through some of what I have been pondering. Maybe my needs will help you figure out what will work for your own situation.

I have a 300 GB lvmcache volume on my 1 TB NVMe. Hindsight tells me I should have at least doubled that, and I most definitely should have realized that when I set this up.

I process between 100 and 150 gigabytes of video each month. That means I pull the files off of cameras, spend a week or two editing those files in Davinci Resolve, export 10 or 20 GB for YouTube, then I probably won’t touch those files again.

I have games that range in size from 20 gigabytes all the way up to 140 gigabytes. I really only need to fit two or three of those on the SSD cache at any given time.

I don’t have enough cache space to hold much more than one or two games and a month’s worth of video. Sure, the cache doesn’t have to hold every single piece of a 150 gigabyte game to be useful, but I have already seen that editing two live streams and podcasts in six weeks managed to push most of a 50 gigabyte game out of my cache.
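
The back-of-the-envelope arithmetic makes the squeeze obvious. Using my own rough numbers (a month of footage plus the two games I’m actively playing), the working set already overflows a 300 GB cache:

```shell
# Rough working-set sizing, all values in GB. These are my estimates,
# not measurements: a month of video plus two active games.
video=150
big_game=140
small_game=50
cache=300
total=$(( video + big_game + small_game ))
echo "working set: ${total} GB, cache: ${cache} GB"
```

That prints a 340 GB working set against a 300 GB cache, which matches what I’m seeing in practice: something always has to get evicted.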

Part of the problem here is that I don’t understand how lvmcache makes its decisions, and I don’t have any way to influence them. The way I see it, I have two choices.

I could use a bigger cache. That would give Team Fortress 2 and Sniper Ghost Warrior Contracts more time to show lvmcache that they need to be cached, and lvmcache would have more time to notice that I haven’t touched the video files from March in the last two months.

The other option is to split my data and use two caches. The video files are ephemeral. I am going to work on them for a few weeks and never touch them again. If I do have to revisit those files, they can weasel their way back into the cache. The video storage volume is almost like a scratch drive. I imagine that I only have to make sure the cache on my scratch volume can hold at least one project’s worth of data. Everything else could be devoted to the long-term volume’s cache.

I am leaning towards trying out the second option. I fear that I will outgrow a single large cache and just wind up right back where I am today, and adding a second cache will require a lot less work. I still have my 480 GB Crucial SATA SSD installed. It is more than fast enough to cache the big disk full of video files!
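
Adding that second cache would look much like the first one. This is a sketch under my assumptions: the SATA SSD shows up as `/dev/sdb1`, the video scratch volume is `vg0/video`, and I leave a little headroom on the 480 GB drive:

```shell
# Add the spare SATA SSD to the existing volume group.
pvcreate /dev/sdb1
vgextend vg0 /dev/sdb1

# Build a cache volume on the SATA SSD and attach it to the video
# scratch volume, independent of the NVMe cache on the other volume.
lvcreate -L 440G -n videocache vg0 /dev/sdb1
lvconvert --type cache --cachevol videocache vg0/video
```

Each cached volume gets its own cache device this way, so a big video dump can never evict game data from the NVMe cache.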

My current lvmcache configuration is still quite good!

Is it really a big deal that video editing will push Borderlands 3 out of my lvmcache? I will just have to sit through a longer loading time once or twice. That seems like a small price to pay to save me from spending time micromanaging which games live on the NVMe and which games live on slow storage.

My setup doesn’t have to be perfect to be fantastic. Neither does yours!

You need to have good backups!

I don’t enjoy writing this section. It is always important to have good backups. When relying on a mechanical drive, it is even more important. The statistics from the most recent Backblaze report say that their drives have an average annual failure rate of 1.22%, while some models fail three or four times more often.

Yes. Solid-state drives can also fail, but they don’t fail as often. Mechanical disks sure seem to be doing much better than they were a decade ago, but they have lots of moving parts.

Seafile will start uploading my huge video files within moments of copying them off my camera. I am pretty safe as long as I don’t immediately wipe the SD card.

Why not store my root and home volumes on the slow disk?

I would like my computer to still be useful when my slow drive inevitably fails. I wouldn’t trust a fresh spinning drive to last, and I most definitely don’t expect the six-year-old 4 TB drive I am using today to last. I wouldn’t be surprised if it fails tomorrow!

I am only using 130 gigabytes on my NVMe. I don’t expect that to grow significantly.

I wish I had more control over lvmcache

I keep thinking about how I would like to be able to tune or give hints to my lvmcache. It would be nice if there were an equivalent to the handy eatmydata tool, which ignores flush requests.

It’d be neat to be able to run a command like dontcachethis cp /media/sdcard/video*.mp4 /lvmcached/area/over/here to make sure I don’t pollute the cache with an obviously large and useless write. Maybe there could be a similar tool to inform the cache that what I am about to do is of high value and should be given more priority in the cache.

Then I thought about it some more. Using lvmcache isn’t really a solution. It is a Band-Aid. I am using lvmcache because 8, 16, or 32 terabyte NVMe drives are too expensive or just don’t exist yet. In five years this may be completely irrelevant to almost everyone.

I don’t need better tools for managing my lvmcache. I just need my lvmcache to work until I outgrow it!

Conclusion

I am extremely pleased with my lvmcache setup on my desktop. The games I play load quickly. My video editing workflow is smooth. Best of all, I don’t have to shuffle files around between my NAS, my slow local hard drive, and my local NVMe. For the most part, everything just works!

This is one of the final steps on the road to eliminating my NAS. I am just waiting for the old 4 TB hard disk in my desktop or one of the 4 TB hard disks in my server to fail. Eventually each will have a 14 TB drive of its own, and my RAID and my Infiniband network will go away. I could save $100 or more if my aging disks can survive long enough before I am forced into an upgrade!

What do you think? Are you planning to use lvmcache on your workstation? Are you already using lvmcache to solve the same problem? Or are you using lvmcache on your servers? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!