Is The $6 Z.ai Coding Plan a No-Brainer?

I’m not going to make you wait until the end to learn the answer. I’m going to tell you what I think right in the first paragraph. I believe you should subscribe to the Z.ai Coding Lite plan even if you only write a minuscule amount of code every month. This is doubly true if you decide to pay for a quarter or a full year in advance at 50% off.

I’m only a week and seven million tokens deep into my 3-month subscription, but I’m that guy who only occasionally writes code. I avoided trying out Claude Code because I knew I would never get $200 worth of value out of a Claude Pro subscription. I also now know that I could have paid for a full year of Z.ai for less than the cost of two months of Claude Pro.

OpenCode with Z.ai

I saw a Hacker News comment suggesting that GLM-4.6 on Z.ai’s coding plan is about 80% as good as Claude Code. I don’t know how to quantify that, but OpenCode paired with GLM-4.6 has been doing a good job for me so far. Z.ai claims that you get triple the usage limits of a Claude Pro subscription, but what does that even mean in practice?

Let’s start with the concerns!

Z.ai is based in Beijing. Both ethics and laws are different in China than they are in the United States or Europe, especially when it comes to intellectual property.

I’m not making any judgments here. You can probably guess just how much concern I have based on the fact that I’m using the Z.ai Coding Plan while working on this blog post. I just think this is important to mention. Do you feel better or worse about sending all your context to OpenAI, Anthropic, or Z.ai?

Are the limits actually more than twice as generous as Claude Pro?

I assume that the statement is true. The base Claude Pro subscription limits you to 45 messages during each 5-hour window, while the Z.ai Coding Lite plan has a 120-message limit in the same window. That is nearly three times as many messages, but are these actually equivalent?

I haven’t managed to hit the limit on the Coding Lite plan. The fact that I haven’t hit the limit should be a good indicator of how light a user I am!

I suspect that this is one of those situations where your mileage may vary. We know that Claude Opus is a more advanced model than GLM-4.6. Opus is more likely to get things right the first time and may need fewer iterations to reach a correct result.

I’d bet that they’re comparable most of the time, and you really do get nearly three times as much work out of Z.ai’s plan, but I would also assume there are times when you might eat through some extra prompts trying to zero in on the correct results.

I’m not sure that an accurate answer to this question matters, since Claude subscriptions cost three or six times as much.

What have I done with OpenCode and Z.ai?

My Li’l Magnum! gaming mouse project is written in OpenSCAD. I have a simple build script that should have been a Makefile, but instead it is a handful of for loops that run sequentially. This wasn’t a big deal early on, but now I am up to three variations of eight different mice. Running OpenSCAD 24 separate times is taking nearly four full minutes.

Instead of converting this to a Makefile, I decided to ask OpenCode to make my script parallel. OpenCode’s first idea was to build its own job manager in bash. I said, “No way! We should use xargs to handle the jobs!” GLM-4.6 agreed with me, and we were off to the races.
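My real build script is more involved, but the shape of what OpenCode and I ended up with looks roughly like this sketch. The file names, variant list, and openscad arguments below are placeholders, not my actual code:

```shell
#!/bin/sh
# Hedged sketch of the xargs job runner; the real script differs.
# Each (source, variant) pair becomes one openscad command line,
# and xargs runs several of them at once.
build_all() {
    mkdir -p build
    for scad in *.scad; do
        for variant in standard light; do
            # NUL-separated command strings keep xargs from
            # re-parsing the embedded quotes
            printf '%s\0' "openscad -o 'build/${scad%.scad}-${variant}.stl' -D 'variant=\"${variant}\"' '${scad}'"
        done
    done | xargs -0 -P "$(nproc)" -I CMD sh -c CMD
}
```

The `-P` flag is the whole trick: it caps how many OpenSCAD processes run at once, so the exports happen in parallel instead of one at a time.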

OpenCode with Z.ai

I watched OpenCode set up the magic with xargs. I eventually asked it to combine its large number of functions into fewer functions by passing variables around. I had OpenCode add optional debugging statements so I could verify that the openscad commands looked like they should.

We ran into a bug at some point, and OpenCode had to start calling my build script to make sure STL and 3MF files showed up where they belonged, but OpenCode didn’t know that my script only builds files that have been modified since the last build. Once I told OpenCode that it needed to touch the *.scad files before testing, it was able to try and test lots of things on its own. This is probably a piece of information that belongs in this project’s agents.md file!
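The modified-since-last-build behavior itself is just a make-style timestamp comparison. A minimal sketch, with made-up file names:

```shell
# Rebuild only when the source is newer than the output, or the
# output is missing. This is the check that tripped up OpenCode
# until it learned to touch the sources first.
needs_build() {
    src=$1 out=$2
    [ ! -e "$out" ] || [ "$src" -nt "$out" ]
}
```

`needs_build mouse.scad build/mouse.stl` succeeds when a rebuild is required, which is why touching the .scad files forces everything to rebuild on the next run.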

I had something I was happy with during my first session, but I wound up asking OpenCode for more changes the next day. We lost the xargs usage at some point, but I didn’t pay attention to when!

There is still a part that isn’t done in parallel, but it is kind of my own fault. I have one trio of similar mice that share a single OpenSCAD source file. I have some custom build code to set the correct variables to make that happen, and OpenCode left those separate just like I did.

I’m pleased with where things are. Building all the mice now takes less than 45 seconds.

You can wire Z.ai into almost anything that uses the OpenAI API, but the Z.ai coding plan is slow!

I almost immediately configured LobeChat and Emacs’s gptel package to connect to my Z.ai Coding Lite plan. I was just as immediately disappointed by how slow it is.

Everything seems pretty zippy in OpenCode. Before subscribing, I was messing around with GLM-4.6 using the lightning-fast model hosted by Cerebras. I am sure Cerebras is faster in OpenCode too, but it isn’t obviously faster there. OpenCode is sending up tens of thousands of tokens of context, and it is doing that over and over again between my interactions.

This is different from Emacs and LobeChat. I wasn’t able to disable reasoning in LobeChat, so I wind up waiting 50 seconds for 1,000 tokens of reasoning even when I just ask it how it is doing. I assume the same reasoning is happening in Emacs when I highlight a paragraph and ask for it to be translated into Klingon.

I assume the Coding Plan is optimized for large context, so I wound up keeping Emacs and LobeChat pointed at my OpenRouter account. Each of these sorts of interactive sessions eats up only the tiniest fraction of a penny. I am not saving a measurable amount of money by using my free subscription tokens here.

OpenCode Stats

Six million input tokens would have cost at least $6 at OpenRouter, and I am only two weeks into my first month!

Tools like OpenCode, Claude Code, and Aider are where you have to make sure you’re using an unlimited subscription service. I can easily eat through two million tokens using OpenCode, and that could cost me anywhere from $1.50 to $10 on OpenRouter, depending on which model I point it at!

I am using OpenCode with Z.ai Coding Lite right now!

I messed around with Aider a bit just before summer. It was neat, but I was hoping it could manage to help me with my blog posts. It seemed to have no idea what to do with English words.

How well OpenCode worked with my Markdown blog posts using Cerebras’s GLM-4.6 was probably the thing that pushed me over the edge and made me try a Z.ai subscription. I can ask OpenCode to check my grammar, and it will apply its fixes as I am working. I can ask it to add links to or from older blog posts, and it will do it in my usual style.

OpenCode with Z.ai

I can ask OpenCode if I am making sense, and I can ask it to write a conclusion section for me. I already do some of these things either from Emacs or via a chat interface, but I have always had to do them very manually, and I would have to paste in the LLM’s first pass at a conclusion.

I could never burn through $3 in OpenRouter tokens in a month using chat interfaces—I probably couldn’t do it in a year even if I tried! Even so, OpenCode is saving me time, and I will use it for writing blog posts several times each month. That is worth the price of my Z.ai Coding Lite subscription.

Do you need the Z.ai Coding Pro or Coding Max plan?

If you do, then you probably shouldn’t be reading this blog! I am such a light user that I suspect my advice applies much better to more casual users of LLM coding agents.

That said, the more expensive plans look like a great value if you are indeed running into limits all the time. The Coding Pro plan costs five times more, and you get five times the usage limit, plus priority access with 40% faster access to the models and support for image and video inputs. The Coding Max plan seems like an even better value: it only costs twice as much again, but it has four times the usage.

Z.ai has built a pricing ladder that manages to include some actual value for your money. Even so, the best deal is to pay only for what you ACTUALLY NEED!

I would also expect that if you’re doing the sort of work that has you regularly hitting the limits of Z.ai’s Coding Lite plan, then you might also be doing the sort of work that would benefit from the better models available with a Claude Pro or Claude Max subscription. I have this expectation because I assume you are getting paid to produce code, and even a small productivity boost could easily be worth an extra $200 a month.

Conclusion

The Z.ai Coding Lite plan offers exceptional value for casual coders and writers like myself. At just $6 per month (or $3/month with the current promotional discount), you get access to an extremely capable AI coding assistant. While it may not match Claude’s raw power, it is more than useful enough to justify its price, even if you only use it a few times a month.

The integration with OpenCode, which is ridiculously easy to set up, creates a seamless workflow that is easily worth $6 per month, and the generous usage limits mean I am unlikely to worry about hitting caps. For light users, hobbyists, or anyone looking to dip their toes into AI-assisted coding without breaking the bank, Z.ai’s Coding Lite plan is genuinely a no-brainer. If you use my link, I believe you will get 10% off your first payment, and I will receive the same in future credits. Don’t feel obligated to use my link, but I think it is a good deal for both of us!

Want to join the conversation about AI coding tools, share your own experiences, or get help with your setup? Come hang out with us in our Discord community where we discuss all things AI, coding, and technology!

The Li’l Magnum! Ultralight Fingertip Gaming Mouse 2.0 Is Almost Here!

What does it take to upgrade a 3D-printed mouse mod from version 1.0 to 2.0? With software, you usually increment the major number when you’re making a change that makes the program incompatible with the old version in some major way.

Li'l Magnum! mice in different colors

I have been experimenting with some rainbow color-changing filaments. Getting a nice color change is a challenge when the shell only weighs three grams!

There are a lot of minor changes to the Li’l Magnum! in version 2.0, but I also made significant changes to the button paddles. The thinning of the paddles might not technically qualify as a compatibility-breaking change, but a few of the mice had to have their button offset lowered by one layer to regain solid pre-engagement.

What has changed since version 1.0?

Let’s start with a list of what’s new!

  • Much lower default click force
    • Configurable from 20 grams to 40 grams
  • Modeled-in supports for the grips
  • No slicer-generated supports required when using modeled-in supports
    • Better overhang angles on all grip arms
  • OpenSCAD-generated sub-parts
    • Exactly two layers of PETG support for multimaterial
      • Larger build plate contact surfaces on most built-in supports
    • Separate button parts to apply extra top layers

I believe we are just at a point where the Li’l Magnum! is a better mouse overall. Most of the models are slightly lighter. All the models feel a little more solid. While the button paddles have more flex, I expect they will be even more durable.

I love having configurable button pressure!

I took a few Li’l Magnum! mice with me to display at our booth at Texas Linux Fest last month. I wasn’t sure what to expect. This isn’t a gaming crowd, but I did expect to run into a lot of tech enthusiasts. More than a few people assumed that the Li’l Magnum! must have a motor so it can run around on the floor like a mouse.

I was extremely excited when I ran into one actual gamer who plays first-person shooters, and he immediately knew what the Li’l Magnum! was for. Not only does he play shooters, he has four or five times as many hours as I do in Team Fortress 2. I was so excited that I ended up sending him home with my VXE R1 Pro Li’l Magnum!.

His first piece of feedback was about how stiff I made the buttons, and he is right. I purposely configured it for a short press travel while ensuring I wouldn’t accidentally click when I didn’t intend to.

OpenSCAD view of the configurator for Li'l Magnum button force

I ended up thinning out the paddle between the plunger and the front of the mouse. I printed dozens of test mice. I worked hard to get that overhang in the flexible spot to print reasonably clean. I also set up the customizer so that you can choose your own click force separately for each mouse button. That means you can make it easier to shoot while also making it harder to accidentally set off your stickybomb trap with a stray right click.

Are the click-force settings really as precise as the customizer says? Definitely not. Reliably measuring 18 grams of force with the mouse on a scale is hard. Every spool of PLA varies slightly. If your printer prints the overhangs more poorly, your clicks will be even lighter. The actual click force will also be influenced by the stiffness of your mouse’s microswitches.

Think of the force measurement in the customizer as a guideline.

How much force does it take to hit the buttons?

It is challenging to accurately measure the click of a button with a scale, but I did my best. I think I have a good way of explaining the click feel by comparing things to my Logitech G305, because the click force of a normal mouse like the G305 gets lower when you click closer to the front of the mouse. You have more leverage out there!

The old version of the Li’l Magnum! was pretty stiff. It was like clicking the G305 just behind the mouse wheel. This is where someone with an extreme claw grip might be clicking their G305-sized mouse.

The default clicks for version 2.0 are quite light. Clicking my own Corsair Li’l Magnum! feels like clicking the G305 out near the front tip of the mouse. Adjusting the customizer upward by two or three notches would make my clicks feel similar to clicking the G305 near the center of the wheel.

Upgraded grips

I am extremely pleased with the modeled-in supports for the grips. The supports connect to the grip with tiny 0.4-mm diameter nubbins. The supports break off easily, and the nubbins can be knocked off with your thumbnail or a metal tool. Please don’t use anything sharp!

In order for this setup to work, I had to chamfer the bottom of the grips to bring things to a point for the nubbins to connect to. I had no idea how much softer and more pleasant that chamfer would make the grips feel. I don’t notice it on the finger side, but the thumb grip feels nicer.

OpenSCAD view of the Li'l Magnum V2

The new supports for the grips break off easily, and a quick scrape with a metal tool leaves the underside of the grip soft and smooth!

We can blame this on the Corsair Sabre V2 Pro and Dareu A950. I made sure to line up the arms on every other mouse with the bottom of their grips. That means that the bottoms of the grips were always printed as bridges. I had to put one of the Corsair’s arms a little higher, requiring me to print the grip on tree supports, which I didn’t like.

Now that the base of the grips is always supported, I don’t have that limitation. I moved almost every arm upwards by at least one millimeter. You can’t always feel the difference, but in theory this should make every pair of grips just a little more rigid.

No slicer supports needed!

If you can’t print your Li’l Magnum! with multimaterial supports, you will still need to enable tree supports in your slicer. If you are using multimaterial supports, there is nothing left on any of the Li’l Magnum! models that needs to be supported.

Dialing in the Li'l Magnum! button overhangs

The red mouse on the left has the original button angle, while the mouse on the right is slightly steeper. This drastically improves the quality of the unsupported overhang, and it helps achieve just the right feel for the clicks!

The connectors that join the paddles to the grips are entirely bridges and reasonable overhangs. The connector across the front is a bridge. Everything should print fine on a modern printer.

The Dareu A950 Wing and Corsair Sabre V2 Pro are now the ultimate Li’l Magnum! donor mice

I bought a Corsair Sabre V2 Pro the same day they showed up on Amazon for $99. It is a fine mouse even without modding. It looked like it had extremely light internals, and I was pleased to learn that this was indeed correct. I’ve been gaming with it ever since it arrived, and most of my Li’l Magnum! builds with the Corsair have weighed 15.2 to 15.4 grams. I even have one test print that came in at 14.92 grams!

We have confirmation from at least two people that the $52 Dareu A950 Wing fits perfectly in the Li’l Magnum! shell. The PCB is nearly identical to the Corsair, because Corsair seems to be putting their branding on Dareu’s existing mouse.

There are some differences. They use different software to configure the mice. The Dareu uses a 30,000-DPI PAW3950 sensor, while the Corsair uses a Corsair-branded 33,000-DPI sensor.

Li'l Magnum subobjects

Subobjects are labeled in your slicer, and the labels include basic print-setting reminders

The Dareu’s $80 list price on Amazon is about $20 lower than the Corsair’s. The Dareu regularly goes on sale for around $60 and has gone on sale for as little as $52.

These prices make it hard to recommend any other mice for your Li’l Magnum! build. If you are really on a budget, the VXE R1 SE is still the lowest price. Unfortunately, they only sell the R1 SE with a massive 500-mAh battery, so your Li’l Magnum! build will come in at over 25 grams.

If you are in the United States, then you’re going to pay $36 for a 25-gram Li’l Magnum!. You could spend $20 to $30 more on the Dareu and get a better sensor and the absolute lightest possible Li’l Magnum! build. You can probably still get an R1 SE for under $20 outside of America, so the math might be different for everyone else.

The price gap between the cheapest donor mouse and the most impressive donor mouse has gotten so small. It means that the mice in between the R1 SE and the Dareu A950 Wing are mostly pointless. If you already have a VXE Mad R or a VXE R1 Pro, then I think you should print a Li’l Magnum! shell. You already have a great donor mouse.

Now there are only two mice worth buying: the cheapest VXE R1 you can find or the Dareu A950.

You don’t need to shave off every possible gram

One of Optimum’s Zeromouse builds was down around 17 grams, but every iteration since then has gotten heavier. I think there is a reason for this.

I notice that my 25-gram Li’l Magnum! is heavier than the rest. I can swap out its battery to bring it down to 21 grams. I can assure you that it’s difficult to notice the difference between a 15-, 17-, and 21-gram Li’l Magnum!.

You can probably pick up on it when you’re really paying attention. You’ll notice it when you lift the mouse to recenter your aim. You probably won’t notice a difference while aiming. I think it is more important for me to have a fingertip mouse than it is for me to have a 15-gram mouse.

Chasing numbers and specs can be fun. I don’t want to stop you from having fun finding lighter and lighter mice. It might even be an inexpensive hobby for you.

One of the reasons I designed the Li’l Magnum! is so that you don’t have to spend $180 to find out whether or not you like ultralight fingertip mice. You shouldn’t feel like you’re missing out if you can only afford the cheapest Li’l Magnum! donor mouse.

What makes the Li’l Magnum! special?

The Li’l Magnum! is an open-source project. You can download and modify the OpenSCAD source code. It will still be here even if I’m gone.

The Li’l Magnum! is parametric. All the surfaces that you touch while gaming are adjustable in the customizer on MakerWorld. Does your thumb sit farther back? You can move the grip. Do you need a stiffer right click? Do you want an angle on one of the grips? You can easily make it happen.

I am also aiming directly at consumer 3D printers and PLA plastic. There are other printing processes that are great for printing skeletal mouse mods, and there are other materials that could be a bit more suitable for the Li’l Magnum!.

I tried PETG early on. Its extra flex makes it a much more appropriate material for the buttons, but that same flex means the buttons want to pivot, and the side grips wind up being really soft. I would have to add material and weight to the mouse to switch to PETG, and fewer people are able to print PETG at home. I figured it was best to focus on the easier material to print.

The Li’l Magnum! supports eight different donor mice so far, and it is relatively easy to add support for new mice. The important pieces that come in contact with a new mouse are mostly parametric. Most of the work is figuring out where the screw holes and microswitches are located on the new mouse PCB.

The Li’l Magnum! isn’t just my project. It is our project!

I’d rather you print your own, but you can buy a shell from my Tindie store

I run all my Li’l Magnum! prints on my Bambu A1 Mini. I use the AMS Lite to print multimaterial supports, but you can print a perfectly good Li’l Magnum! without the AMS. You’ll just need to file the bottom of the plungers a bit. You can spend $250 on a printer, and you can print a Li’l Magnum! for you and all your friends. I can assure you that you’ll find other fun uses for your printer.

I charge about $20 for a Li’l Magnum! print in my Tindie store. Your friend with a 3D printer can print one for you for free. You can for sure find 3D-printing services that will print the STL for less.

Why should you pay a little extra for a Li’l Magnum! from my store? I think the biggest reason is that I have the print settings for a Li’l Magnum! optimized to give you the right balance between rigidity and weight. The default print settings will give you a shell that weighs around three grams more than my own settings. The settings aren’t a secret.

I also guarantee that my prints fit the mice they are supposed to fit. If you own a Dareu A950 Wing, and I send you a Dareu A950 Wing Li’l Magnum! shell, then you are going to be able to make it work. Sometimes the manufacturer changes the PCB. We have already seen this happen with the MCHOSE L7. I will either work with you to adjust the model, or I will give you a refund.

I am not here to make a living selling mice. I’ll be happy enough if the Tindie sales earn enough money to keep buying more donor mice to keep the project moving forward.

Wrapping up

That’s the Li’l Magnum! 2.0. We’ve tweaked the button feel, made the grips more pleasant, and optimized the print settings to make the whole process smoother from your slicer to your desk. This is less about a giant leap and more about numerous small refinements that add up to a much nicer experience.

But here’s the real secret: this project has never been just about me or my ideas. It’s been shaped by every piece of feedback. Sometimes feedback is about the feel of the mouse. Sometimes the feedback is about a slightly different mouse model fitting just fine. This thing is a collective effort, and that’s what makes it so special.

The best part of all this isn’t the grams we’ve shaved off; it’s the community that we are building up around a shared interest in tinkering and making gaming gear truly our own.

Let’s keep building together!

I genuinely believe the coolest ideas for the Li’l Magnum! are still out there, waiting to be discovered by someone in our community. Maybe that’s you!

I’d love to see you join our friendly Discord community. It’s the central hub where we all hang out, share prints, troubleshoot builds, and brainstorm what’s next.

Whether you’ve just printed your first shell, you’re an old hand at modding mice, or you’re just curious and have questions, you are welcome. Let’s see what we can build together.

What are your thoughts on the new version? What donor mouse are you planning to use? Do you have a donor mouse in mind that I haven’t thought of yet? Come tell us about it on Discord!

Contemplating Local LLMs vs. OpenRouter and Trying Out Z.ai With GLM-4.6 and OpenCode

My feelings about local large-language models (LLMs) waffle back and forth every few months. New smaller models come out that perform reasonably well, in both speed and output quality, on inexpensive hardware. Then new massive LLMs arrive two months later that blow everything out of the water, but you would need hundreds of thousands of dollars in equipment to run them.

Everything depends on your use case. The tiny Intel N100 mini PC could manage to run a 1B model to act as a simple voice assistant, but that isn’t going to be a useful coding model to put behind Claude Code, Aider, or OpenCode.

OpenCode for Blogging

Most of what I ask of an LLM is somewhere in the middle. The models that fit on my aging 12-gigabyte gaming GPU were already more than capable of helping me write blog posts two years ago, and even smaller models can do a more than acceptable job today. I don’t need to use DeepSeek’s 671-billion-parameter model for blogging, because it is only marginally better than Qwen 30B A3B. If you are coding, that is a different story.

I believe I should tell you that I started writing this blog post specifically because I subscribed to Z.ai’s lite coding plan. Yes, that is my referral link. I believe that you get a discount when you use my link, and I receive some small percentage of your first payment in credits.

Z.ai is offering 50% off your first payment, so you can get half price on up to one full year of your subscription. It works out to $3 per month. I aimed for the middle and bought three months for $9. I will talk in more detail about this closer to the end of this blog post!

Why would you want to run an LLM locally?!

I would say that the most important reason is privacy. Your information might be valuable or confidential. You might not be legally allowed to send your customers’ data to a third party. If that is the case, then spending $250,000 on hardware to run a powerful LLM for your company might be a better value than paying OpenAI for a subscription for twenty employees.

Reliability might be another good reason. I could use a tiny model to interact with Home Assistant, and I don’t want to have trouble turning the heat on or the lights off when my terrible Internet connection decides to go down.

Price could be a good reason, especially if you’re a technical person. You can definitely fit a reasonable quantized version of Qwen 30B A3B on a used $300 32 GB Radeon Instinct Mi50 GPU, and it will run at a good pace. This doesn’t compete directly with Claude Code in quality or performance, but Qwen Coder 30B A3B can be used for the same purposes. Yes, it is like the difference between using a van instead of a Miata when moving to a new apartment, but it is also a $300 total investment vs. paying $17 per month. The local LLM in this case would start to be free before the end of the second year.

Local LLM performance AND available hardware are both bummers!

You certainly have to use a language model that is smart enough to handle the work you are doing. You just can’t get around that, but I believe the next most important factor is performance.

People are excited about the $2,000 Ryzen AI Max+ 395 mini PCs with 128 gigabytes of fast LPDDR5 RAM. There are a lot of Mac Mini models that are reasonably priced with similar or even better specs. They are excited because you can fit a 70B model in there with a ton of context, but a 70B model runs abysmally slowly on these machines, with prompt-processing speeds as low as 100 tokens per second and token-generation speeds below 10 tokens per second.

While these mini PCs with relatively fast RAM can fit large models, they really only have enough memory bandwidth to run models like Qwen 30B A3B at reasonable speeds. The benchmarks say the Ryzen 395 can reach 600 tokens per second of prompt processing speed and generate tokens at better than 60 tokens per second.

I send 3,000 tokens of context to the LLM when I work on blog posts. Waiting 30 seconds for the chat to start working on a conclusion section for a blog post isn’t too bad, and it will only take it another minute or two to generate that conclusion. I am used to my OpenRouter interactions of this nature being fully completed in ten seconds, but this wouldn’t be the worst thing to wait for.

My OpenCode sessions often send 50,000 tokens of context to the LLM, and it will do this several times on its own after only one prompt from me. I cannot imagine waiting ten minutes, or potentially multiples of ten minutes, for it to start giving me back useful work on my code or blog post.

Waiting ten minutes for a 70B model would stink, while waiting one minute for Qwen 30B A3B would feel quite acceptable to me.
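Back-of-the-envelope, the wait before useful output starts is roughly context size divided by prompt-processing speed. Using the ballpark speeds mentioned above:

```shell
# seconds of waiting = context tokens / prompt-processing tokens per second
wait_seconds() {
    awk -v t="$1" -v r="$2" 'BEGIN { printf "%.0f\n", t / r }'
}

wait_seconds 3000 100    # a blog-post prompt at slow 70B speeds: 30 seconds
wait_seconds 50000 100   # one OpenCode round trip at 70B speeds: 500 seconds
wait_seconds 50000 600   # the same context at Qwen 30B A3B speeds: 83 seconds
```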

On the other end of the local-LLM spectrum are dedicated GPUs. You can spend the same $2,000 on an Nvidia 5090 GPU, but that assumes you already have a computer to install it in. The RTX 5090 should run Qwen 30B A3B at a reasonable quantization with prompt-processing speeds at least five times faster than a Ryzen Max+ 395.

I have a friend in our Discord community who is running Qwen 30B A3B on a Radeon Instinct Mi60 with 32 GB of VRAM. These go for around $500 used on eBay, while the older Radeon Instinct Mi50 cards with 32 GB of VRAM used to go for around half that, though the prices have been inching up. There are benchmarks of the Mi50 on Reddit showing Qwen 30B A3B hitting prompt-processing speeds of over 400 tokens per second while generating at 40 tokens per second. That’s not bad for $500!

There just isn’t one good answer. This is all apples, oranges, and bananas here. You can either run big models slowly or mid-size models quickly for $2,000, or you could run mid-size models at a reasonable speed for $500. You would need to figure out which models can meet your needs.

You can try most local models using OpenRouter.ai

I am a huge fan of OpenRouter. I put $10 into my account last year, and I still have $9 in credits remaining. I have been messing around with all sorts of models from Gemma 2B to DeepSeek 671B and everything in between. Every time I have the urge to investigate buying a GPU to install in my homelab, I head straight over to OpenRouter to see if the models I want to run could actually solve the problems that I am hoping to solve!

I used OpenRouter this week to learn that Qwen 30B A3B is indeed a viable LLM for coding with things like Aider, OpenCode, and the Claude Code client. That gives me some confidence that it could actually be worthwhile to invest some of my time and money into getting a Radeon Mi60 up and running.

The only trouble is that the Qwen 30B that I tested in the cloud isn’t as heavily quantized as what I would run at home. I would need to run Qwen 30B at Q4_K_M, and the results will be degraded at that level of quantization. That may be enough to push the model beyond the point where it is even usable.

Testing the small models at OpenRouter helps you zero in on how much hardware you would need to get the job done, but it most definitely isn’t a perfect test!

Tools like OpenCode rip through tokens!

Listen. I am not a software developer. I can write code. I occasionally program my way out of problems. I write little tools to make my life easier. I do not write code eight hours a day, and I certainly don’t write code every single day.

I have found a few excuses to try the open-source alternatives to Claude Code, like Aider and OpenCode. They eat tokens SO FAST.

OpenCode burns through tokens

Don’t trust the cost! Some of those 3.2 million tokens over the two-day period went to various paid models on OpenRouter, while more than half were free via my Z.ai coding plan.

It took me 11 months to burn through 80 cents of my $10 in OpenRouter credits. Chatting interactively to help me spice up my blog posts only uses fractions of a penny. One session with OpenCode consumed 18 cents in OpenRouter credits, and I only asked it to make one change to six files. I repeated that with two other models, and I used up as much money in tokens in an hour as I did in the previous 11 months.
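
The gap between interactive chat and agentic coding is easy to model. The per-million-token prices below are placeholders I picked for illustration, not any provider’s actual rates:

```python
# Cost of a session = input tokens + output tokens at per-million rates.
# Prices here are illustrative placeholders, not real OpenRouter pricing.
def session_cost(in_tokens: int, out_tokens: int,
                 in_per_m: float, out_per_m: float) -> float:
    return in_tokens / 1e6 * in_per_m + out_tokens / 1e6 * out_per_m

# One chat turn vs. one agentic task that re-reads files on every step.
chat = session_cost(2_000, 1_000, 0.50, 1.50)
agent = session_cost(600_000, 20_000, 0.50, 1.50)
print(f"chat: ${chat:.4f}, agent: ${agent:.2f}")
```

An agent that re-sends large files as context on every round trip can easily cost a hundred times more than a chat prompt, which matches my 11-months-versus-one-hour experience.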

This is why subscriptions to things like Google AI, Claude Code, or Z.ai with usage limits and throttling make a lot of sense for coding.

Blogging with OpenCode

This week is the first time I have had any success using one of the LLM coding tools with blog posts. I tried a few months ago with Aider, and I had limited success. It didn’t do a good job checking grammar or spelling, it didn’t do a good job rewording things, and it did an even worse job applying the changes for me.

OpenCode paired with both big and small LLMs has been doing a fantastic job. It can find grammar errors and apply the fixes for me. I can ask OpenCode to write paragraphs. I can ask it to rephrase things.

OpenCode for Blogging

I don’t feel like my blog is turning into AI slop. I don’t use sizable sections of words that the robots feed to me. I ask it to check my work. I sometimes ask it to rewrite entire sections, or sometimes the entire post, and I sometimes find some interesting phrasing in the robot’s work that I will integrate into my own.

I almost always ask the LLM to write my conclusion sections for me. I never use the entire conclusion, but I do use it as a springboard to get me going. The artificial mind in there often says cheerleading things about what I have worked on. These are statements I would never write on my own, but I usually leave at least one of them in my final conclusion. It feels less self-aggrandizing when I didn’t actually write the words myself.

Trying out Z.ai’s coding plan subscription

A handful of things came together around the same day to encourage me to write this blog post. I decided to try OpenCode, it worked well on my OpenSCAD mouse project and my blog, and I learned about Z.ai’s $3-per-month discount. I figured out that it would be easy to spend $1 per week in OpenRouter credits when using OpenCode, and I also assumed that I could plumb my Z.ai account into other places where I was already using OpenRouter.

Z.ai’s Lite plan using GLM-4.6 is not fast. Before subscribing, I was using OpenCode with Cerebras’s 1,000-token-per-second implementation of GLM-4.6 via OpenRouter. I was only seeing 200 to 400 tokens per second there, but that is way better than the 20 to 30 tokens per second that I am seeing on my Z.ai subscription. They do say that the Coding Pro plan is 60% faster, but I have not tested this.

Z.ai Performance In LobeChat

These are the stats from one interaction with GLM-4.6 on my Z.ai subscription using LobeChat

I wound up plumbing my Z.ai subscription into my local LobeChat instance and Emacs. The latency here is noticeably worse than when I connect to large models on OpenRouter. My gptel interface in Emacs takes more than a dozen seconds to replace a paragraph through Z.ai, whereas DeepSeek V3.2 on OpenRouter appears to respond almost instantly.

It isn’t awful, but it isn’t amazing. I would be excited if I could use just one LLM subscription for all my needs, but my LobeChat and Emacs prompts each burn an infinitesimally small fraction of a penny. I won’t be upset if I have to keep a few dollars in my OpenRouter account!

I was concerned that I might be violating the conditions of my subscription when connecting LobeChat and Emacs to my account. Some of the verbiage in the documentation made me think this wouldn’t be OK, but it turns out that Z.ai publishes documentation for connecting to other tools.

OpenCode performance is way more complicated. In my limited testing, I am not noticing a difference between Z.ai and the faster OpenRouter endpoints. This may be because GLM-4.6 is a better coding agent, so OpenCode might need fewer tokens and fewer round trips to get to my answers.

I have only been using my Z.ai subscription for two days. I expect to write a more thorough post with my thoughts after I have had enough time to mess around with things.

Conclusion

Where does all this leave us? After spending so much time digging into both local LLM setups and cloud services, I firmly believe that there isn’t one right answer for everyone.

For my own use case, I might eventually land on a hybrid setup with both a local server in my homelab and a cloud subscription for the heavy lifting. For now, I’ll keep using OpenRouter for short, fast prompts and testing new models. The inexpensive Z.ai subscription, while a little slower, will do a fantastic job of keeping me from accidentally spending $50 on tokens for OpenCode in a week. That $6-per-month ceiling will be nice to have!

The most important thing I learned is that you should test before you buy. OpenRouter has saved me from making at least two expensive hardware purchases by letting me try models first. For anyone else trying to figure out their own LLM setup, I’d recommend the same approach.

If you’re working through these same decisions about hardware, models, or services, I’d love to hear what you’re finding. Come join our Discord community where we’re all sharing what works (and what doesn’t) with our different LLM setups. There are people there running everything from tiny local models, to on-site rigs costing a couple thousand dollars, to running everything in the cloud, and it’s been incredibly helpful to see what others are actually using in the real world.

The LLM landscape changes so fast that what’s true today might be outdated in three months. Having a community to bounce ideas off of makes it much easier to navigate without wasting money on hardware that won’t meet your needs.

Do Refurbished Hard Disks Make Sense For Your Home NAS Server?

This seems like a question that could be easily answered with math, but there is a big problem. This question has a lot in common with the Drake equation: there are so many important numbers that would need to go into the equation, but we just don’t have the data to plug into those variables.

What was the life of these refurbished hard drives like? Did they get tossed around in shipping? Did they live in a properly cooled datacenter, or were they overheating for five years? Is the reseller being truthful?

Juggling hard drives

We are going to do some simple math in this blog, but we are also going to be leaning at least slightly in the direction of vibes, because I am going to explain to you when I FEEL comfortable using refurbished hard disks.

Refurbished prices and trusted vendors

The people in our Discord community have been buying what feels like a substantial number of refurbished drives from Server Part Deals and GoHardDrive.com. They both tend to have good prices, especially when there is a big sale. They both often offer 2-, 3-, or sometimes 5-year warranties, and friends in our Discord community have had no trouble exercising those warranties.

We have seen 12-terabyte SATA hard disks for $112, or around $9 per terabyte. We have seen 16-terabyte SATA disks for $180, or about $11 per terabyte. This is a pretty heavy discount, because a good sale price for a brand-new 16-terabyte SATA drive is around $250, which works out to nearly $16 per terabyte.
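
The per-terabyte figures are straightforward division, and the same quick check works for any listing you find:

```python
# Price per terabyte for the example listings mentioned above.
def per_tb(price_usd: float, capacity_tb: float) -> float:
    return price_usd / capacity_tb

for label, price, tb in [("12 TB refurb", 112, 12),
                         ("16 TB refurb", 180, 16),
                         ("16 TB new", 250, 16)]:
    print(f"{label}: ${per_tb(price, tb):.2f}/TB")
```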

Things to remember when choosing the size of your drives

Smaller disks have been available to buy for more years than larger disks. That means your refurbished 12-terabyte drives COULD BE three years older than the oldest refurbished 16-terabyte drives. This is probably one of the reasons why smaller drives tend to be offered at a better price per terabyte.

I believe warranties are important. The statistics that Backblaze publishes have always told us that annual failure rates tend to double at somewhere around five years of age. That isn’t a massive jump these days, because you’re only moving from around a 2% failure rate to 4%, but it is relevant.
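
Those annual failure rates matter more at the array level than per drive. Here is a quick sketch of the odds that at least one drive in a pool fails within a year, assuming failures are independent:

```python
# Chance that at least one of N drives fails within a year at a given
# annualized failure rate (AFR), assuming independent failures
# (which is not perfectly true in practice).
def p_any_failure(drives: int, afr: float) -> float:
    return 1 - (1 - afr) ** drives

print(f"6 drives at 2% AFR: {p_any_failure(6, 0.02):.1%}")  # ~11.4%
print(f"6 drives at 4% AFR: {p_any_failure(6, 0.04):.1%}")  # ~21.7%
```

Roughly one-in-five odds of losing a drive per year in an older 6-disk pool is exactly why the warranty and the parity level matter.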

All the hard disks in my network cupboard

A good warranty isn’t just your safety net; it’s a vote of confidence from the reseller. I feel better about the product when the vendor backs it up with a warranty of three or five years. When that 12-terabyte hard drive only has a one-year warranty, it makes me wonder what they know about the service life of that drive that they aren’t telling me!

I wouldn’t personally buy any hard drives smaller than 12 terabytes.

Plan for failures!

Maybe your plan was to build a 6-drive RAID 5 or RAID-Z1 using 8-terabyte hard drives to net yourself around 40 terabytes of usable storage. A quick Amazon search tells me that new 8-terabyte hard drives cost $200 each, so you would be spending $1,200 on storage.

What if we bought six 12-terabyte refurbished drives during the sale three months ago? These drives came with a 5-year warranty and cost $112 each. We could spend $672 on six drives, put them in a RAID 6 or RAID-Z2 array, and have around 48 terabytes of usable storage.

That is 20% more storage and an entire disk of extra redundancy for barely more than half the price. We have hedged our bets a little, bought a little extra room to grow, and even saved enough money to buy a cold spare to keep on hand.
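
The math behind that comparison is simple enough to sketch. The capacities here ignore filesystem overhead, so real-world numbers will land a bit lower:

```python
# Usable capacity of a parity array: (drives - parity) * size per drive.
def usable_tb(drives: int, tb_each: int, parity: int) -> int:
    return (drives - parity) * tb_each

new_z1 = (usable_tb(6, 8, 1), 6 * 200)      # six new 8 TB in RAID-Z1
refurb_z2 = (usable_tb(6, 12, 2), 6 * 112)  # six refurb 12 TB in RAID-Z2
print(new_z1)     # (40, 1200)
print(refurb_z2)  # (48, 672)
```

That is where the 20% capacity bump (48 TB versus 40 TB) and the extra parity drive for $672 instead of $1,200 come from.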

You NEED a good backup strategy!

Let’s start from the other end. When can you get away without having backups? When you are collecting movies and TV shows from the high seas. What happens if you accidentally dump your NAS server in the bathtub and lose every episode of Knight Rider that you downloaded? You just download them again next week. No big loss.

What about if the only copy of the pictures of your late grandmother are stored on that server? You’re not going to be taking those photos again.

Whether you are using brand new hard disks or refurbished, RAID is not a backup. It won’t protect you if there’s a bug in Immich that wipes out your photos. It won’t help you if ransomware encrypts all your files. It won’t help you if your SATA controller or driver goes bananas and corrupts every single drive. It won’t help if lightning takes out the entire server.

It is a good thing you’re saving money buying refurbished drives with big warranties. You can use some of that money you saved to build an off-site backup server.

The more redundancy that you have, and the more separate backup copies that you have, the less the quality of your hard drives will matter.

When would I use a fresh hard drive?

I might be in a somewhat unique position. My data is too big to fit on an NVMe, too expensive to store in the cloud, but still easily small enough to squeeze onto a mid-size mechanical hard drive. This is exciting to me, because you can buy an Intel N150 mini PC for about the same price as one of those hard drives. That means I can just attach a mini PC to every hard drive I buy, and I can always inexpensively add one more online backup to my setup.

That means that any remote hard drives that I have should be as durable as is practical. I don’t want to have to drive two hours to replace a hard drive when it inevitably fails, so I probably shouldn’t use a hard drive that already has five years of mileage on it. It is probably worth an extra $100 to reduce my odds of a remote failure in that case.

My remote backup isn’t that far away, and Brian joins us here for pizza night almost every weekend. If my hard drive dies on a Tuesday, I can have a replacement at my door by Thursday, and Brian can haul my mini PC back to me on Saturday.

My vibe math on this situation only applies because my off-site backup storage is a single hard drive. If you’re building a RAID for your off-site backups, you might be able to leverage refurbished drives to squeeze in an extra drive of redundancy and a hot spare while still spending less money. That would surely feel like a win for me!

Conclusion: Trust the math, but back it up with a plan!

After all that, where do we land on refurbished drives? It’s less about a simple mathematical formula and more about having a holistic strategy. The value is undeniable. Getting robust, high-capacity storage for a fraction of the cost of new drives is a game-changer for cash-constrained homelab situations.

The key is to approach things smartly. By purchasing from reputable vendors with strong warranties, sizing up your drives to avoid the oldest stock, and making sure you have a strong backup plan, you can confidently leverage refurbished drives in your storage setup.

Whether your drives are brand new or refurbished, a drive failure will result in catastrophic loss without a backup. The significant savings from going refurbished can and should be reinvested into building a resilient backup solution. A combination of local AND off-site backups would be ideal.

What are your thoughts? Are you ready to take the plunge on some refurbished drives, or does the idea still make you nervous? I’d love to hear your experiences and plans. Join the conversation in our Discord community. We’re always talking about deals, storage setups, and the best ways to keep our data safe.

The Ultimate Li’l Magnum! 15-gram Fingertip Mouse? Using The Corsair Sabre Pro V2 or Dareu A950 Hardware

I have been patiently waiting for the release of the Corsair Sabre Pro V2. It is a high-performance, ultralight gaming mouse at a reasonable price from a major manufacturer. You could probably pick one up off the counter at Best Buy, Target, or Walmart. I am super excited about the idea of being able to snag a donor mouse for a custom 15-gram fingertip mouse build near your home.

Li'l Magnum! with Corsair Sabre Pro V2 guts

The specs are great: up to 8 kHz polling, a 30,000 DPI sensor, nice mechanical microswitches, and a web configurator that works on Linux. I said that the price is reasonable, and I do believe $100 is a reasonable price for a gaming mouse. The problem I have here is buying a brand-new mouse for $100 only to immediately take it apart to stick it in a 3-gram 3D-printed shell.

You could spend $60 more on a 20-gram G-Wolves Fenrir Asym. The specs are comparable, but you get an injection-molded shell with side buttons. I don’t think the extra five grams are a deal breaker, and you’re getting something that is ready to go. Though you might have to pay a bit for shipping.

If I were in competition, and I do not feel that I am, I would consider the 20-gram G-Wolves mouse my most direct competitor. Probably because it is the mouse I would try next if I had to buy an off-the-shelf mouse.

You don’t have to buy the mouse from Corsair!

I am excited about supporting a modern Corsair mouse. It is $100 today, but there will be sales, and I expect it will be on the shelves for a few years. Someone will stumble across this blog post in four years, realize they already have an old Corsair Sabre collecting dust in their parts bin, and they might breathe new life into that mouse. That is all good news for the future.

What about today? Someone in our Discord community informed me that the Dareu A950 Air probably uses the exact same PCB and electronics as my Corsair mouse. Not only that, but when I posted my progress on Reddit, someone in the comments pointed out that their Dareu A950 Wing also uses the same PCB.

Li'l Magnum! test prints

The blue parts are partial prints to correctly position the screw holes. The yellow prints are complete test prints that I used to align and set the height of the button plungers.

What’s even better than that? The friendly person on Reddit printed a Li’l Magnum! shell and said that their Dareu A950 Wing’s components are a perfect fit for the Li’l Magnum! shell!

The price tracker says the Dareu A950 Wing is usually $64 with 2-day shipping on Amazon, and it has gone as low as $50 in the past. This brings the price down into the territory of the VXE R1 mice, but you get upgraded to lighter electronics, a better sensor, and faster polling rates.

The Dareu A950 Wing at around $64 easily makes for the best-value Li’l Magnum! with the best specs so far, at least on paper.

Do you need the lightest mouse we can get?

No. I don’t think anyone should be working ridiculously hard and giving up features or strength to make the absolute lightest mouse possible. I am personally just about as happy with my $23 VXE R1 SE Li’l Magnum! at 25.3 grams as I am with my $100 Corsair Sabre Li’l Magnum! at 15.4 grams.

It is difficult to do a completely blind test, because the heavier mouse is extremely obvious every time you recenter it. It isn’t more challenging to lift the mouse; it is just easy for your brain to register that one mouse weighs 66% more than the other.

The important thing is that I forget that my mouse got heavier after 15 minutes of gaming. My suspicion is that as long as your mouse isn’t too much heavier than your thumb, going any lighter is going to have extremely diminishing returns.

Can it be fun to chase grams? Absolutely. If you enjoy that sort of thing, go for it.

We don’t have a reliable third-party latency test of the Corsair or Dareu mouse!

I don’t think this is terribly important. The cheapest gaming mice manage to come in at something under 1.5 milliseconds of click latency.

There is a full review with latency testing of the MCHOSE L7 Ultra at RTINGS. It was tested at 0.9 ms of click latency when wired, or 1.4 ms over the 8 kHz wireless link. This is a mouse supported by the Li’l Magnum!, and it is neat that we have a mouse with actual testing.

In practice, I can’t tell the difference between my Li’l Magnum!s with a MCHOSE L7 Ultra, VXE Mad R, or the Corsair Sabre. They all feel the same. If I lost all my Li’l Magnum! builds in a fire tonight, I would order a Dareu A950 Wing from my hotel. I don’t care that it hasn’t been tested by a reputable third party.

I am grumpy about Omron optical switches

My VXE Mad R and MCHOSE L7 both use Omron switches. Out of those four switches, two felt really crummy out of the box. Someone in our Discord community reported a bummer of a right click switch on their L7 as well.

I’ve replaced my disappointing Omron switches with fresh switches, but even the best Omron switches don’t feel great to me. The worst part is that they aren’t compatible with older 3-pin mechanical switches, so I can’t just grab my favorite switches and solder them onto a Mad R or L7. I just have to hope I can find a pair of nice Omron switches.

Li'l Magnum! with Corsair Sabre screw

Those tiny M1.5 screws that ship with the Corsair mouse don’t have a lot of bite, and the Phillips head is tiny and fragile. You do have to screw them down snug and flat, so take your time and make sure you don’t strip the screws!

I have been waiting patiently for a replacement for my 16.4-gram VXE Mad R. I wanted to stay under 20 grams and keep my 8 kHz polling, but with mechanical switches. The Corsair Sabre is definitely the successor to my own Mad R, and it is even more exciting that the Dareu A950 Wing manages to come in at the same price point while beating the Mad R on weight by more than a gram.

I am not an aficionado of mouse switches. My favorite of my collection of budget gaming mice are probably the blue shell red dot switches in my VXE R1 SE, because they are the heaviest and loudest. The clear shell white dot switches in the Corsair sound and feel like they land somewhere between the blue shell switches and the pink shell white dot switches in the VXE R1 Pro.

I am not unhappy with any of these mechanical switches.

Which Li’l Magnum! should you build?!

The tariffs in the US are really bumming me out. They haven’t ruined budget fingertip mice, but they’ve goofed up the floor. You used to be able to build a 25-gram VXE R1 SE for barely over $20 or a 21-gram VXE R1 Pro for just under $30. Either will cost you over $40 today in the United States, and that puts you inches away from a Dareu A950 Wing, which really is looking like the ultimate Li’l Magnum! now.

First of all, I think you should build with what you have. I have designed Li’l Magnum! shells to fit any of the VXE R1 models, the VXE Mad R, all the MCHOSE L7 models, and even a weird $9.60 mouse from Amazon. The best mouse to build your Li’l Magnum! around might be a mouse that you already have!

If you are outside the United States, you might still be able to snag a VXE R1 SE, R1, or R1 Pro for less than half the price of a Dareu A950 Wing. Those all make delightful fingertip mice with fantastic specs, especially for the price, and especially if you can get the models with the smaller 250-mAh battery.

If you are in the United States, I think you should spend the extra $20 or $30 and build your Li’l Magnum! around the Dareu A950 Wing. That is a small price to pay to upgrade to the best available components for the lightest possible Li’l Magnum! build.

I designed the first Li’l Magnum! shell so I could avoid paying $170 for a Zeromouse shell and the Razer mouse to steal the guts from. I didn’t want to pay that much. I expected that I would wind up using it for a week, hating it, and it would wind up collecting dust in the back of a drawer for the next five years. It also helped that the Zeromouse is never in stock.

That isn’t the case, though. I love my ultralight fingertip mouse. I will never give it up, and I am excited that you now have the ability to make the same discovery as I did. You don’t have to pay $160 for a G-Wolves Fenrir or Zeromouse Blade to give it a try.

Version 1.0 was just uploaded!

I wrote a lot of words here the other day, because the version 0.9 upload wasn’t quite ready. It was a serviceable mouse, but I created a problem while fixing another. The Corsair PCB is extremely thin and super easy to accidentally flex, and the microswitch pins were getting hung up on some of the supports when installing the PCB. That made it too easy to break your PCB, so I did my best to move those supports to make some room.

Moving those supports out of the way allowed the PCB to flex too much when pressing the left click, and that made the click feel slightly mushy. Only just barely. I might not have noticed if I didn’t have four other Li’l Magnum! mice near my desk to check it against. It didn’t feel terrible, but it didn’t feel like it should.

I added about 0.1 grams of bracing under the left click, and it is now extremely solid. Version 1.0 is up on Printables and MakerWorld, and it should be available in my Tindie store by the time you are reading this.

Why should I even buy this from your Tindie store?!

I would really prefer that you didn’t. You’re a gamer. You’re a geek. You should own a 3D printer, and the Bambu A1 Mini is only $250. If you’ve been looking for an excuse to pick up an awesome new hobby, this might be it.

Maybe you don’t have room for a printer. Maybe that’s out of your price range. Maybe you just don’t want to fart around with figuring this sort of stuff out. Maybe you don’t have a friend with a 3D printer.

Li'l Magnum!

My prices are definitely higher than random places on the Internet where you can just have any STL file printed for you. I have dialed-in print settings for the Li’l Magnum!, so I get you the lightest shell possible. I use multimaterial supports, so you get perfect clicks. I also promise that when you order the correct shell for your mouse that it will actually fit your mouse’s PCB, and I will attempt to adjust the model or give you a refund if the shell doesn’t work with your mouse.

You are also funding the development of future Li’l Magnum! models and improvements. I am trying very hard not to become a collector of gaming mice, but I am already up to having seven different Li’l Magnum! mice on hand. I don’t want to spend more of my own money on mice that I will never use, but I do want to make ultralight fingertip mice more accessible to everyone.

Conclusion

I am excited. I’ve been waiting for the right mouse to build my ultimate Li’l Magnum!, and it is here. When looking at the photos of the PCB before the hardware arrived, I expected the Corsair to tick every box except the weight. I figured it would be a gram or two heavier, and I was delighted to learn that this extra-thin PCB wound up being the lightest set of guts that I’ve used so far.

I think every FPS enthusiast should have the opportunity to try an ultralight fingertip mouse. I don’t expect everyone to enjoy the experience as much as I do, but I for one can’t imagine going back to a big, heavy mouse ever again.

What do you think? Do you own a different interesting gaming mouse that you feel deserves a Li’l Magnum! model? I bet we could work out a deal that gets you a free Li’l Magnum! shell while also helping me avoid collecting yet another mouse. Are you already using a fingertip mouse? What do you think of the experience? Tell us about it in the comments, or join the Butter, What?! Discord community to chat with me about it!

Did I Accidentally Build The World’s Most Power Efficient NAS and Homelab Combo Server?

There is a serious problem with the question in the title. It all hinges on what you feel qualifies as a NAS or a homelab. We could serve a README.MD over WebDAV on an ESP32 and call it a power-sipping NAS, and if that is what you had in mind, then the answer to the question in the title is a definitive “No!”

I don’t have Guinness on speed dial, and I doubt that I am literally breaking any actual records either on purpose or by accident, but I am somehow accidentally landing in the top one percent category after ordering a 6-bay Cenmate USB SATA enclosure back in June.

6-Bay Cenmate USB enclosure with my N100 router mini PC

I have not staged any cool pictures for this blog, but it has been ready to publish for almost a month now. This is a photo from one of the previous blogs. I will attempt to correct this in the near future!

I knew the first time that I picked it up after filling it with 3.5” hard drives that the Cenmate enclosure is dense, but I didn’t do the math to understand exactly how dense my enclosure paired with an N100 or N150 mini PC actually is until almost two months later. I have a NAS that holds six 3.5” SATA drives and takes up just barely more than six liters. That is less than a third the size of a Jonsbo N2 case.

I may very well have built the lowest price, lowest power, most dense homelab and NAS setup. I don’t know that you could beat it unless you buy used parts instead.

NOTE: I don’t ACTUALLY have this NAS built and running in my home, but it isn’t just hypothetical. I do have all the necessary parts on hand to measure the cost, power consumption, and volume. I definitely don’t have the six 26 TB hard disks here to max it out to 156 terabytes!
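
For what it’s worth, the density math from those numbers works out like this (the 26 TB disks are hypothetical, as noted above):

```python
# Six bays in roughly 6.2 liters, maxed out with (hypothetical) 26 TB disks.
bays, liters, tb_per_disk = 6, 6.2, 26
max_tb = bays * tb_per_disk
print(f"max capacity: {max_tb} TB")            # 156 TB
print(f"density: {max_tb / liters:.1f} TB/L")  # ~25.2 TB per liter
```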

What about power consumption?

I already know that my Trigkey N100 mini PC that I bought for $143 averages around 7 watts on the power meter. That is running Proxmox with a few idle virtual machines and LXC containers booted.

When I first plugged the empty 6-bay Cenmate enclosure into both my power meter and my mini PC, I learned that the enclosure only uses 0.2 watts of additional power. That is as close to a rounding error as it gets.

At this point I have an empty 6-bay, 6.2-liter Intel N100 NAS with 16 GB of RAM and a 512 GB NVMe that cost me $325, and it is idling away at 7.2 watts.

Plugging in hard disks adds about as much power consumption as you would expect. The meter goes up by 8 watts when you plug a 3.5” hard drive into a bay, and hammering the disks with a mean benchmark brings that up to 9 watts per drive. Your mileage may vary here, because every make and model of hard disk runs a little differently.

My fully-loaded 6-disk NAS idles at about 55 watts, and it maxes out at around 62 watts when the CPU or GPU is under maximum load.
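
Those figures suggest a simple linear model, which also makes it easy to estimate the annual electricity bill. The $0.15 per kWh rate is my assumption; plug in your own:

```python
# Idle draw: ~7.2 W for the mini PC plus enclosure, plus ~8 W per
# spinning 3.5" drive. The electricity price is an assumed placeholder.
def idle_watts(drives: int, base: float = 7.2, per_drive: float = 8.0) -> float:
    return base + drives * per_drive

watts = idle_watts(6)
kwh_per_year = watts * 24 * 365 / 1000
print(f"{watts:.1f} W idle, ~{kwh_per_year:.0f} kWh/yr, "
      f"~${kwh_per_year * 0.15:.0f}/yr at $0.15/kWh")
```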

NOTE: These wattages are gathered from notes and blogs. I’m going to plug six real hard disks back in, and power the Trigkey N100 mini PC and Cenmate enclosure using a single power-metering smart outlet to get a proper, correct, real number soon. I am in the middle of torture testing the Cenmate enclosure with massive IOPS on a stack of SATA SSDs, and I don’t want to stop that to re-verify these numbers.

Couldn’t we beat this “record” with a Raspberry Pi?

Yes. A Raspberry Pi would drop the price by $50 to $70, and it would drop the idle power consumption by 3 or 4 watts. It might even be slim enough to bring the total volume down to an even six liters!

I don’t think this is a good trade. Proxmox on an x86 machine is fantastic, and gives you a lot more flexibility and way more horsepower. It is hard to beat an Intel N100 or N150 when you’re transcoding with Plex or Jellyfin. Most Intel N150 mini PCs come with twice as much RAM as the most expensive Raspberry Pi, and they ship with a real NVMe installed, so you don’t have to boot off a fragile SD card. The mini PC will also already be installed in a case, and it comes with its own power supply.

We are starting to see Intel N150 mini PCs with one or sometimes two 2.5-gigabit Ethernet ports down near $150. That is a nice feature to get effectively for free, and the best part is that an Intel N150 is fast enough to encrypt Tailscale traffic at around 2.4 gigabits per second. That is something a Raspberry Pi can’t manage, and that is extremely important for my setup.

I don’t like focusing on volume and liters

Volume is not a terribly interesting measurement for most home users. We could build a custom two-liter server that is a few inches wide, an inch tall, and 32” deep. That would be awful! It would hang off the front of your desk!

In the olden days, you would be excited if your physical shop had 100’ of frontage along Main Street. There’s a similar concept that applies to the linear footage of your desk. It almost doesn’t matter how tall something is; as long as it isn’t too wide or too deep, it’ll fit well on the surface of your desk.

A 4’ tall but narrow server might look silly on your desk, and that might be too tall to even hide under your desk.

I feel that my build is very well suited to sitting on the edge of your desk. It is only about five inches wide and eight inches deep, and it is still less than a foot high.

This might be the laziest way to build a DIY NAS!

Two power cables and one USB cable. That’s it. Just place the two boxes on or near each other and plug them in. That’s the hardware setup for this DIY NAS. Slide in as many hard disks as you need, and you’re ready to set up your software.

It almost feels like cheating.

Aren’t USB hard drive enclosures scary?!

I am currently doing my best to torture test my Cenmate enclosure. I have been continuously running fio randread tests averaging 60,000 IOPS across a RAID 0 of old SATA SSDs. The test has been running for 14 days straight without a single error as I am writing this paragraph.

USB storage was sketchy in the USB 1.1 and USB 2.0 days. Things have gotten a lot more solid in the last few years. Professional-grade video cameras write RAW video directly to USB SSDs. Professional video editors are working directly with the footage over USB, or many of them are copying that footage to other USB SSDs and working from that copy.

That entire world loves Apple laptops, and Apple laptops don’t have any options for large amounts of storage besides the USB and Thunderbolt ports. These things have to be well made now.

You don’t have to follow my Intel N100 blueprint!

Mini PCs, simple external USB hard drives, and 6-bay USB enclosures are a lot like Lego bricks. Need a lot of storage? Plug in a bigger Cenmate enclosure. Still not enough storage? Plug in a second one! Need more RAM or CPU power? Use a beefier mini PC!

An example would be the Acemagician M1 that I use as a Bazzite gaming machine in the living room. It also idles at around 6 watts when running Proxmox. It costs twice as much as an Intel N150 mini PC, but it is also more than three times faster and can hold twice as much RAM.

The price will go up a bit, so we wouldn’t be building the lowest cost 6-bay NAS anymore, but you definitely get some upgrades for your money. The Intel N100 does manage to beat the Ryzen 6800H in the Acemagician M1 by a small margin, and my 6800H uses 50 watts of power while transcoding for Jellyfin. My Intel N100 transcodes faster, and that mini PC uses less than 15 watts while doing it. This is not a big deal unless you watch movies 12 hours every day.

The Acemagician M1 is a good value for your homelab if you can get it on sale. I paid around $330 for mine. It is a good fit because it has two DDR5 SO-DIMM slots, two M.2 NVMe slots, and 2.5-gigabit Ethernet. That’s about as good of a combination as you can get in this price range.

You don’t have to build a NAS, you can directly attach the Cenmate enclosure to your computer!

I could write an entire blog post listing tons of good reasons why you might want to have a NAS on your home network.

I can’t do the topic justice in a couple of paragraphs, but I can say this! When the cost of turning a 6-disk enclosure into a NAS is only an extra $150 or so, there isn’t much excuse not to do it.

Even though it is inexpensive, you don’t have to do it. Maybe you just need a place to store footage when you edit videos at home. Maybe you need storage for your daily or weekly backups. You might already have to plug your laptop into a docking station when you sit at your desk at home, and your Cenmate enclosure can just stay plugged into the dock. This is a fine workflow to have.

What if you want to set things up so you can have remote access to that footage when you aren’t at home? Your home Internet connection may not be fast enough to edit video directly, but being able to grab a video file in a pinch could save you a drive. That’s a good reason to set up a NAS with Tailscale.

Conclusion

Should you build your DIY NAS out of a mini PC and a USB enclosure? I don’t know! My NAS needs are simple to the extreme. I don’t need my NAS to have a management interface. I manually set up my RAID arrays and the two shares or NFS exports I might need. I have absolutely no idea what TrueNAS does when you plug in an enclosure like this. Since it is USB-attached-SATA, I assume TrueNAS will treat them just like any SATA disks, but I haven’t tested this.

I just think it is neat that my lazy and simple set of LEGO-style pieces here wound up being nearly the most power-efficient and storage-dense setup that anyone could make with off-the-shelf parts, and USB enclosures like the ones from Cenmate fit my use case extremely well. I enjoy having the extreme level of flexibility.

What do you think? Can you build a more densely packed NAS that uses mechanical hard disks? Can you do it without spending too much more money? Will your build sip even less power? Will it sip enough less power to make a difference on my monthly electric bill? You should join our friendly Discord community to tell us about your build, or to give me a link to your write-up so I can point people to it!

Using A Brevite Runner Camera Bag As An Everyday Laptop Bag

I’ve been thinking about replacing my 20-year-old Targus bag with a nice, expensive camera bag for quite a while now, and Brian Moses and I are going to be running a booth at Texas Linux Fest in October. That seemed like a good enough excuse to upgrade my laptop bag!

Let’s start with a tl;dr. I am pleased with using the Brevite Runner as my laptop bag. It is a little smaller than my old bag, which means I’ve given up some useful compartments for things like a mouse and a charger. I would appreciate a better location to store these two things, but that is a minor complaint. Everything else is fantastic and a massive upgrade.

Why a camera bag?! And why spend so much?!

Professionals who carry around an expensive laptop might be the weird ones here. Photographers and videographers excitedly spend $400 on a Nomatic backpack. I was excited to spend $200 on a giant FPV drone backpack. Yet most of my peers, who are usually well paid, are slinging around $30 laptop backpacks.

First of all, I don’t think there’s anything wrong with that. I bought my 25-liter Targus laptop bag on sale for $16 in 2007. It is fraying around the edges, but it is still structurally sound two decades later. It is reasonably well thought out, and it has plenty of storage compartments, but that laptop sleeve can fit a laptop that is nearly three inches thick. I’ll never own a laptop that thick again.

Little Trudy on my Backpack

As soon as I threw on the Brevite Runner to see if the shoulder straps were adjusted to the correct length, little Trudy decided that she needed to climb up there and lie down!

I’ve had my eye on Brevite’s backpacks, specifically the Brevite Jumper, for a few years. They are so much more budget friendly than backpacks from Nomatic or even ThinkTank. Their bags are machine washable, which would have been nice the time my Targus got milky coffee all over the back when someone spilled their drink on an airplane.

Camera bags are usually well thought out, and they always have laptop storage. The only laptop-specific things that I carry these days are a thin laptop, a USB-C GaN charger, and a long USB-C cable. Everything else is ancillary, and a camera bag has fantastic organization for that sort of stuff.

Why the Brevite Runner?

I already mentioned that the bags from Brevite are more reasonably priced than many of their competitors. I am excited about the Runner backpack being divided into an attic on top, a camera storage compartment on the bottom, and a laptop pocket along the back.

This means I get a zone to store my laptop, a zone to store my minimal camera gear along with other odds and ends, and I get a third zone on top to use for different purposes on any given day. I’ve been thinking of the attic as the “Adventure Zone.”

I can throw a hoodie in there if it is going to be chilly. I can fit my FPV radio, goggles, and some batteries up there. I modified my bag with a no-sew strap on the back so I can strap an FPV drone to the outside. I can also throw some game controllers and a mini PC in there.

My Brevite Runner's camera zone

My Sony ZV-1 is tucked in on the left with its attached Mantispod extending under the Baseus 20,000 mAh battery bank and DJI Osmo Pocket. My Anbernic RG35XX retro handheld is tucked in on the right, and my network/NanoKVM pouch has a slot at the top.

The neat part is that I can swap out the stuff in the attic without disturbing the rest of my gear, and the attic is a cube-like volume and not just a thin slice of the bag, so I can put bulkier stuff up there. There is enough room up there for a metal lunchbox from the eighties!

I am excited about having the “Adventure Zone.” I own a 27-MPH electric unicycle, and I live two houses away from the onramp to miles of wide, paved bike trails through our local parks. When the weather is right, I like to ride out to a secluded picnic table in the shade a mile away from the parking lots to write some blog words. It’ll be nice to be able to swap some FPV drone gear into the Runner’s attic on occasion!

How premium is the Brevite Runner?

There are a lot of premium features. The material of the bag feels sturdy and nice. The zippers feel great. The soft interior and soft dividers feel comparable to my ThinkTank bag. Don’t forget those premium Fidlock clasps on the attic!

That said, I can see two places where Brevite has saved some money on production.

The shoulder straps aren’t well padded. This isn’t necessarily bad, because this is only an 18-liter bag, so it isn’t expected to be extremely heavy, but those straps remind me of my JanSport bookbag from the eighties. My $16 Targus backpack from 2007 has softer padding on the straps.

This might be a nitpick, and other people might say that this is just a style choice, but the zipper pulls are cheap. They’re just short lengths of knotted cord covered in heat-shrink tubing. They get the job done, and they are thin enough to hide in some of the crevices so you can’t even see that a zipper exists, but they are definitely cheap.

If you ask me, they skimped in two of the right places. I’d much rather spend my money on Fidlocks, nice zippers, and sturdy machine-washable material than snazzy zipper pulls and unnecessary padding. I will rarely pack this thing up to 15 pounds, so it doesn’t need the luxurious padded straps of my ThinkTank bag that usually carries more than 40 pounds of gear!

Yes, I do carry some camera equipment!

I don’t carry the sort of gear that a photographer would put in this bag. They would be excited that they can fit a full-frame camera with a massive lens in the lower storage area, and they can set up the dividers so they can just open the trapdoor on the side to slide the camera in and out without disturbing anything else.

I don’t carry a massive camera. I carry a Sony ZV-1 and a Mantispod mini tripod. These don’t take up much space, but I did set up the camera area with a dedicated spot for these two pieces of gear. I can indeed sneak them out through the trapdoor, and I rigged things up in such a way that I can keep the ZV-1 mounted on the Mantispod and still manage to get both to fit. I do have to bend the Mantispod’s head to make it fit, but it works great so far!

I haven’t decided what I will permanently keep in the camera zone. I have some stuff in there specifically for Texas Linux Fest, like my wireless mics, a small video light, and my massive Baseus laptop-charging battery bank.

I also have my network/NanoKVM pouch in the camera zone. That seemed like a good place to keep it.

I am starting to organize my kits into pouches, because it makes it easier to swap them in and out of my laptop bag as a unit. It also makes it easier to lend toolkits out to friends!

I am packing the camera zone today as if I am going to need some camera gear at Texas Linux Fest, and I am impressed with just how much stuff I can cram in there while somehow managing to stay organized. I will definitely take my Sony ZV-1 and pair of wireless lavalier microphones just in case we decide to do an impromptu Butter, What?! Show episode, but that doesn’t require much space. I am mostly just excited to see what sort of gear I can fit, and I will almost definitely pare down before we get in the car in October!

What if you don’t need the camera dividers?

Brevite’s Daily backpack looks almost exactly like the Runner, except it costs $30 less and skips all the camera features. It is basically just a normal laptop backpack.

I really wanted the separate attic, and I don’t believe that single, large divider exists in the Daily backpack. I think the Daily is worth looking at if you’re not interested in organizing some of your gear the way a photographer organizes their camera and lenses.

Upgrading my Brevite Runner with no-sew accessories

The Brevite Runner is a fantastic and capable backpack without any mods, but I just can’t help myself. I designed the open-source no-sew backpack upgrades to fit just one more thing on my smallest laptop bag. I wound up adding two straps to that bag so I could carry a water bottle and a game controller to the park.

Adding no-sew hooks and straps to a bag is easy, but finding appropriate spots on the Brevite Runner was a challenge. The bottom half of the bag is heavily padded, so it would be challenging to poke precise holes down there, and it would probably just be a bad idea.

Upgrading my Brevite Runner with No-Sew hooks and straps

I wound up adding a hook to the side of my backpack just above the padding. I use these hooks to attach my tech pouches when I run out of room inside the bag. I haven’t run out of room yet, but I like to be prepared!

I considered putting a hook on the other side, but that’s where the water-bottle holder and its accompanying strap live. I don’t expect to want to use that space for something else.

I also added a Velcro strap between the top flap and the camera storage. I have plenty of room in the attic for my big, aging Taranis FPV drone controller, and I can definitely fit my FPV goggles and plenty of batteries up there. What I can’t fit inside this bag is a 5” FPV freestyle drone.

I have my 4” Kestrel freestyle drone strapped to the Runner in the photo. We crash these drones into trees and concrete at 70 miles an hour. They have no trouble surviving unprotected when I walk around like this.

How did you get a hammer on the Brevite bookbag?!

I designed the no-sew punch template to work with a 3-mm leather punch tool. You just place the template, put the magnetic backer plate inside the bag, and give the punch a nice solid blow with a hammer in each of the holes.

Sometimes there just isn’t a good way to set things up to get a nice blow with the hammer. Sometimes the material is just too tough, sturdy, or thick to punch cleanly with a hammer.

I have since learned that chucking the leather punch up in a power drill works really well! Crank up the speed and apply some pressure, and the front edge of the leather punch will cut a nice, clean hole through several layers of fabric and padding.

I am still learning the best ways to use my no-sew template and tool!

Conclusion

I am sure I will post a long-term review once some time goes by, and I expect that I will have a lot to say when I do. I don’t expect to have any significant negative news to report in the future. I can see what the Brevite Runner offers, and I am able to fit all my everyday laptop essentials in there with plenty of room left over.

I am pleased so far. My laptop fits. I can squirrel away my Anbernic RG35XX retro handheld. I can fit and organize my most important cables, wireless headphones, and even a tiny gamepad in the rear compartment. I have the camera zone packed with more gear than I will ever need, and the attic is mostly empty and ready for an adventure.

What do you think? Is using a bag meant for camera gear as a laptop bag an amazing idea, or did I make a mistake? Do you own a laptop backpack that you prefer? Tell me about your favorite bag in the comments, or join the Butter, What?! Discord community and tell me what you think!

Keychron K2 HE Hall Effect Gaming Keyboard For Writing, Coding, and Gaming

This is not a long-term review. I decided I should shop for a new keyboard last night, I ordered the Keychron K2 HE this morning, and it will arrive tomorrow morning. I will have had the keyboard on my desk for a week by the time I finish writing this blog, so I will definitely be able to tell you if I am happy with my purchase, but this is mostly going to be about WHY I chose the Keychron K2 HE.

I wound up paying $10 extra for the special edition with the wood accents. I’m not sure how necessary that was.

Keychron K2 HE at my desk

I have been using a $30 75% keyboard with knockoff Cherry MX Blue switches since 2019. I bought it on a whim. I saw a deal, I posted it to Butter, What?!, and I thought it might be fun to free up the real estate on my desk between the enter key and my mouse. That was a fantastic decision, and it has been a surprisingly delightful keyboard.

I am a huge fan of the IBM Model M keyboard, so the blue switches feel light and uncomfortably crunchy to my fingers. I have been thinking for a long time that a switch upgrade would be fun. I am sure there is a smoother yet tactile switch with a much heavier actuation force available now, but there are something like 100 different Cherry-compatible switches to choose from. Making that choice seemed like a lot of work, so I kept putting it off.

The 16-gram gaming mouse that I’ve been using for the last six months is one of the lightest and lowest latency gaming mice anywhere in the world. Doesn’t it seem like a bummer to pair such an impressive mouse with a cheap old keyboard?

Why the Keychron K2 HE?!

I have been fascinated by the idea of these new hall-effect switches for quite a while now. You can set the actuation height in software, so you can have ridiculously responsive key presses while gaming.

They also allow for some interesting magic when pressing and releasing buttons: a key can be counted as released when you begin lifting your finger, while immediately reactivating another key that you were already holding down. You don’t have to lift above the actuation point. This is very similar to how movement binds in Team Fortress 2 are already done, but apparently this may trigger anticheat mechanisms in some multiplayer games!
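
This press-and-release trickery is usually called rapid trigger. Here is a toy sketch of the idea in Python; the travel thresholds are made-up numbers for illustration, and this is only a rough model of the concept, not anything resembling Keychron’s actual firmware:

```python
# Toy model of hall-effect "rapid trigger": the key actuates and releases
# based on *relative* travel, not a fixed actuation point.
# The 0.3 mm / 0.2 mm thresholds are illustrative, not Keychron's values.

def rapid_trigger(depths_mm, press_travel=0.3, release_travel=0.2):
    """Given a stream of key-depth samples in mm, return a list of
    pressed/released states. The key actuates after moving DOWN
    press_travel from its most recent high point, and releases after
    moving UP release_travel from its most recent low point."""
    pressed = False
    reference = 0.0  # shallowest point while released, deepest while pressed
    states = []
    for depth in depths_mm:
        if not pressed:
            reference = min(reference, depth)  # track the turnaround point
            if depth - reference >= press_travel:
                pressed = True
                reference = depth
        else:
            reference = max(reference, depth)  # track the deepest point
            if reference - depth >= release_travel:
                pressed = False
                reference = depth
        states.append(pressed)
    return states
```

The important detail is that both thresholds are relative to the key’s most recent turnaround point, so a key can re-fire without ever returning above a fixed actuation height.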

This is all neat, but I was hoping that my next keyboard would be running open-source firmware like QMK or VIA. Last time I looked, all the hall-effect keyboards were proprietary.

I should have been paying more attention, because the Keychron K2 HE has been out for more than half a year, and it runs on QMK!

I am going to be honest with you, since I always do my best to stay honest here! Most of the hall-effect features seem like gimmicks. I’m not sure how much of a difference setting actuation height will make to responsiveness, and I half expect the feature where you can assign four actions to different depths on a single key to be too cumbersome to configure separately for each game.

That said, I am excited about having a chance to try them out. I will report back in a few months with details on how I manage to make use of that. I started my first game of Cyberpunk 2077 last week, and everyone tells me that I will hate driving with the keyboard. Maybe I will be able to configure analog steering!

Why a 75% keyboard layout?

I don’t need a number pad. I’m not an accountant working in the nineties. I don’t key in digits from receipts and purchase order printouts all day long. If you ARE keying in hundreds or thousands of digits all day long in 2025, why isn’t the machine somehow scanning those digits for you? The longest sequence of numbers I type is an occasional 6-digit 2FA code, and it is faster to type a year like 2025 without moving a hand all the way to a number pad.

The horizontal space on my desk between my fingers and my mouse is valuable. It is easier to keep things centered on the monitor if I don’t have to reach an extra six inches to get to the mouse, and it is nice to not have to reach that extra six inches over and over throughout the day.

A lot of people enjoy 60% keyboards, but I don’t find the space on my desk between my fingers and my monitor to be terribly valuable. You could fill that space with 100 extra keys, and the worst that would happen is that I’d ignore them. I will never wish I could put something in their place.

I tend to make use of function keys for one-handed operations. Emacs defaults to using F3 through F5 for recording, stopping, and replaying macros. I often click the next place I want to repeat a macro, so running the macro with one finger helps. I have F9 through F12 with modifier keys bound to shortcuts that adjust my display output between combinations of my primary monitor and my office television.

Now that I am dropping back to a single macropad, I might move some of my office video lighting controls to other combinations of function keys. The function keys are like a free macro pad in an unobtrusive spot, and I wouldn’t complain if I had an additional row of them!

Why use a gaming keyboard when I spend more time working than playing?

In a perfect world, the IBM Model M in my closet would appear on my desk when I am writing a blog or chatting in Discord, and a fancy hall-effect keyboard would magically take its place when I fire up a first-person shooter. I do swap my mouse when playing games that require fast aim, but I’m not going to attempt to play musical chairs with two heavy wired keyboards. I’m also not going to move to a separate desk, computer, monitor, and keyboard to play games. I toggle back and forth fairly often!

I can write a blog post using the crummiest laptop keyboard, but I will play Team Fortress 2 better with appropriate tools, and I will also have more fun.

The linear hall-effect keys feel way different than blue switches!

I have been typing on this keyboard for three days. The first thing I did was switch to the preconfigured gaming profile, and I lowered the actuation distance for every key from 2 mm to 0.5 mm.

At that distance, it only takes letting the weight of one of your fingers rest on a key to see a letter appear on your screen. It happens just as the spring starts to provide proper resistance to your pressure. I wound up setting the spacebar’s engagement height to the default of 2 mm because I was occasionally typing dozens of spaces while just resting my thumb on the keyboard!

Keychron K2 HE at my desk

I expected that I’d be toggling back to the 2-mm default profile when not gaming, but I have only been using the gaming profile with the short-throw switches. It isn’t causing me much trouble. I’m not typing extra random characters. The aggressive gaming switches still type like a normal keyboard for me.

I have noticed that I have a peculiar habit. When my thoughts slow down and my fingers catch up to the words, I might pause with my fingers somewhere besides the home row. I have seen a few accidental t characters pop up while waiting to think up the next word. It is always a t. I’m not sure why that is, but I seem to have already broken the habit.

I am weirded out by the creamier sound of this keyboard. The crunchiness of my old blue switches didn’t match the clackiness of my old IBM Model M, but those two keyboards felt and sounded more alike than the Keychron K2 HE’s linear Nebula switches do compared to the blues. I hear this strange noise when my fingers hit the keys, and I feel like I am in the wrong office!

What about gaming, Pat?!

I don’t really know yet. I’ve only had the keyboard for a little over 24 hours. I feel like the part of my gaming experience where the rapid response of the quick-actuating hall-effect switches would help me the most would be when playing Team Fortress 2. I don’t have any plans to play in the near future, because playing a lightly competitive multiplayer game is tiring and draining. I usually have a few months in a row where I enjoy that, but then I wind up taking a break.

I have been playing through Sniper Elite: Resistance again this week. I enjoy this series of games. They reward my quick aim time and accuracy, but they don’t require me to be constantly aiming and shooting. I get to spend time slowly wandering around, positioning myself, and making sure a big group of bad guys doesn’t spot me.

The game is mostly relaxing with regular bursts of fun. It is also never going to make use of faster keyboard switches. I’m sure I’ll be excited to play some adrenaline-fueled games like Trepang2 or RoboQuest soon enough.

A keyboard that works as an analog gamepad?!

The third default profile is set up for gaming. It remaps the WASD cluster as the left analog stick. If you partially engage the W key, your guy will slowly walk forward. Push it all the way down, and he’ll walk at full speed. Isn’t that weird?!
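
A minimal sketch of how that depth-to-stick mapping might work, assuming 4.0 mm of total key travel, a small deadzone, and a standard signed 16-bit gamepad axis. Those numbers are my guesses, not Keychron’s published specs:

```python
# Toy mapping from analog key travel to a gamepad axis magnitude.
# MAX_TRAVEL_MM and DEADZONE_MM are assumptions, not Keychron specs.

MAX_TRAVEL_MM = 4.0
DEADZONE_MM = 0.2
AXIS_MAX = 32767  # typical signed 16-bit gamepad axis maximum

def depth_to_axis(depth_mm):
    """Map W-key depth in mm to a forward-stick magnitude (0..AXIS_MAX)."""
    if depth_mm <= DEADZONE_MM:
        return 0
    fraction = (depth_mm - DEADZONE_MM) / (MAX_TRAVEL_MM - DEADZONE_MM)
    return round(min(fraction, 1.0) * AXIS_MAX)  # clamp overtravel
```

Partially pressing W produces a partial stick deflection, so your guy walks; bottoming out pushes the virtual stick all the way forward.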

I am curious about how this works out, because I recently started my first playthrough of Cyberpunk 2077. I prefer aiming with a mouse, but everyone tells me that I am going to hate driving with the keyboard. It sounds like the Keychron K2 HE might help me make the bad keyboard driving controls less bad. I’m not sure how well this works in practice, but I am excited about giving it a try!

But Pat! I don’t want to spend $140 on a keyboard!

Depending on which part of the Keychron K2 HE you value, there seem to be two really good options at around half the price. Keep in mind that I haven’t used either of these alternative keyboards. A keyboard is a personal and opinionated choice. It might be a good idea to shop somewhere with a liberal return policy!

You can skip the hall-effect switches and opt to go with the Keychron K2 Max for $115. I believe that replacing the hall-effect switches with red switches is the only difference between the K2 Max and K2 HE.

You can downgrade to lesser sound-damping material with the Keychron K2 Pro and save another $10. You could also drop back to the base model Keychron K2, which has very little acoustic material, to bring the price down to $80. Any of these keyboards would be a good option for office work, and they would still be fine gaming keyboards. Every Keychron K2 trim level supports the open-source QMK firmware.

If the hall-effect switches and rapid-trigger effect are what you’re excited about, I was also eyeing up the Yunzii RT75, which seems to go on sale regularly for $72. The RT75 does not run open-source firmware. Yunzii’s web configurator seems to have a feature set comparable to the Keychron K2 HE.

The Yunzii RT75 is in a fully plastic case, and it doesn’t have acoustic foam comparable to any keyboard in the Keychron K2 lineup. It comes with a different set of tradeoffs, but I looked up how the Yunzii sounds, and I don’t think it sounds bad at all!

Buy vs. build

I am going to tell you right now that I have a lot of ideas about what would make for my ideal keyboard. I want a split keyboard, with bonus points for using a 3D cup shape. I want more keys that my thumbs can reach so I can rely less on my pinkies. I would enjoy an extra column of keys that my pinky could reach while using the WASD cluster while gaming.

I can’t get every feature I want without compromise. A split keyboard is going to end at G, but sometimes I DO reach for the Y and 7 keys while gaming, and they’d be a mile away on the wrong half.

A cupped ergonomic keyboard wouldn’t let me move my fingers over to WASD while gaming, which means I’d have to rely on layers to make games work. Then I’d have to make sure I switch layers if I use text chat or switch to my Discord window. Maybe I could automate that with QMK, but that’s even more work!

3D printing an ergonomic shell and manually soldering 80 switches isn’t a daunting task, but I am doubtful that one could design their own three-dimensional QMK or VIA keyboard with hall-effect switches today. I don’t know if this Keychron K2 HE sounds creamy or thocky, but I don’t believe that I could make my own keyboard sound like this no matter how many heavy layers I cut out of aluminum on my CNC machine.

I have enough desk space in here that I could have a station dedicated to gaming, but it doesn’t make any financial sense. My gaming GPU makes DaVinci Resolve run faster. An overpowered CPU that I might have for compiling or rendering can still be utilized for gaming.

All these related tasks work better if I invest all the money into a single build, so I need one keyboard that works well enough for everything.

Conclusion

We know this isn’t a conclusion. I haven’t even had the Keychron K2 HE in my hands for an entire week, and I haven’t even played any games where keyboard latency will have any impact. I haven’t even gotten to test out the Dynamic Keystrokes. My boring idea is to move the harder-to-reach weapon-switch binds from 4 through 6 down to short presses of 1 through 3, but that is a heck of a minor upgrade for a keyboard that cost five times as much as the one it replaced.

I am confident that I have chosen well. Spending $140 on a piece of hardware that I will push thousands of words through every single day is a bargain. I’ll be excited if it manages to improve my gaming experience by even 5%. I am even more excited about messing around with an extremely custom QMK build at some point in the future.

I want to hear what you think! Did I make a good choice with the Keychron K2 HE? Should I have chosen something else? Would I have been better off spending half as much on the Yunzii RT75? Should I have built my own keyboard from scratch? Are you using a better 75% keyboard? Visit our friendly Discord community and join the discussion about keyboards, custom ultralight gaming mice, and other related interests!

Torture Testing My Cenmate 6-Bay USB SATA Hard Disk Enclosure

I don’t know if I am really properly beating on this thing as hard as I can, but I am doing my best with the hard drives I have available!

People in the homelab community tend to have an aversion to USB storage, and I definitely didn’t have a ton of confidence in it in the past. I had my own issues with RAID arrays built from USB hard disks ten to twenty years ago, but I have had great success with external USB hard disks on both my off-site Raspberry Pi and my NAS virtual machine over the last four years, so I thought it was time to try out a beefier piece of external USB storage.

Cenmate 6-bay USB enclosure on my desk

I imagine that everyone’s distaste for USB storage is based on outdated information and old experiences. I wound up ordering a Cenmate 6-bay USB enclosure for $182. The tl;dr is that it is doing a fantastic job. It can manage over 940 megabytes per second on sequential reads or writes. It is well built. It is extremely compact and dense. The fans aren’t super loud. The price is fantastic. It handles most drive failure situations gracefully, and when things are less graceful, it doesn’t leave you in a position where you’re likely to lose any data.
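
That 940 megabytes per second lines up nicely with what a 10-gigabit USB link can actually deliver. Here is a quick back-of-the-napkin sanity check, assuming the enclosure uses a USB 3.2 Gen 2 connection, which carries payload over 128b/132b line coding:

```python
# Theoretical payload ceiling of a 10 Gbps USB 3.2 Gen 2 link.
# Protocol overhead (UASP, SCSI commands, flow control) eats into the rest.
link_bits_per_second = 10e9
line_coding = 128 / 132  # USB 3.2 Gen 2 uses 128b/132b encoding

payload_megabytes = link_bits_per_second * line_coding / 8 / 1e6
print(payload_megabytes)  # ≈ 1212 MB/s before protocol overhead
```

Measuring 940 MB/s through a USB-to-SATA bridge is a healthy fraction of that ceiling, which suggests the enclosure isn’t leaving much performance on the table.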

I would most definitely trust my own bulk storage to this Cenmate enclosure.

More importantly, I haven’t had the USB device misbehave in any disastrous way. I started writing this blog post after six simultaneous bonnie++ benchmarks had been running for thirty hours on six old 7200-RPM hard disks without a single hiccup.

At this point, I have a few days of continuous successful bonnie++ benchmarks of the mechanical disks, three days of bonnie++ benchmarks against aging SATA SSDs, and seven full days of continuous fio randread benchmarks at an average of 60,000 IOPS.

I did have some trouble with my mechanical disk testing, but every hiccup that I have had is because my collection of aging test hard drives has aged worse than I thought! At least half of them are dying!

Why use a USB enclosure instead of building or buying a NAS?

This post is about what I have learned about this specific USB SATA enclosure, so I don’t want to go too deep into why I think you should consider using one or more USB enclosures in your homelab. I will endeavor to keep this part short!

Price is a good reason. The cost of my $140 Trigkey N100 mini PC and my $182 6-bay Cenmate enclosure maths out to $54 per 3.5” hard drive bay. That is less than half the cost per drive bay of a NAS from UGREEN or AOOSTAR, and both companies are selling their NAS offerings at extremely competitive prices.
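
The per-bay arithmetic is simple enough to check, using the prices from the post:

```python
# Cost per 3.5" drive bay for the mini PC plus enclosure combo.
mini_pc = 140    # Trigkey N100 mini PC
enclosure = 182  # Cenmate 6-bay USB enclosure
bays = 6

print(round((mini_pc + enclosure) / bays))  # 54 dollars per bay
```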

You can assemble a 6-bay NAS for yourself using a Trigkey mini PC with an N150 CPU, 16 GB of RAM, and a 512 GB NVMe SSD for less than $350. The whole setup takes up just over six liters of space. How much you want to spend on the six mechanical hard disks to fill that up is your choice.

Cenmate Read Testing

We pushed around 10 TB of reads and writes to the six SSDs with 3 days of continuous bonnie++ tests, then around 16 TB of random reads over 7 days at an average of over 60,000 read operations per second.

Another good reason to use USB enclosures is flexibility. You can buy 2-, 4-, 6-, and 8-bay enclosures all at reasonable prices. You can plug multiple enclosures into a single computer, and if you run out of fast USB ports, you can plug one enclosure into the next. You can connect all your external enclosures to a single server, or you can split them up between mini PCs.

There isn’t even a rule that says you can only use a USB hard drive enclosure with a mini PC! Maybe you already have a purpose-built NAS, but you are running out of space. You can always plug in an external USB enclosure to add more disks, but you should make sure your operating system will allow it. You could make sure your most important data is on the internal storage, while relegating the new USB enclosure to backups and scratch data.

The density of a mini PC with a Cenmate enclosure is hard to beat

I knew from the dimensions that there wasn’t a ton of empty space inside a Cenmate enclosure, but I didn’t understand just how dense it would be until I loaded it up with 3.5” drives and picked it up. That was the moment that I understood in my gut that my setup packed a lot of storage into a small volume.

A few days ago, we were talking about a ridiculous build where someone crammed ten 3.5” hard disks and an N150 mini PC into a mini-ITX gaming case. The Reddit post seems to be gone, so I can’t look up the exact specs, but it sure looked like it was packed to the gills!

A Jonsbo N4 case is 19.6 liters and holds six 3.5” disks.

If you measure the length, width, and height of my Trigkey N100 mini PC stacked on top of my 6-bay Cenmate enclosure, you will find that the stack takes up just 6.3 liters. That is counting the void left behind the mini PC as occupied space.

My setup is 1/3 the size of a Jonsbo N4 case with the same number of 3.5” drives.

I’m not saying that your NAS build needs to be this compact. I just think it is neat that I may have accidentally built the most compact 6-drive NAS in our Discord community!

My little trick for installing 2.5” SATA SSDs in the Cenmate trays!

Cenmate’s trays are awesome for 3.5” hard drives. The little plastic clips hold the drive in place, and you don’t need any tools to install the drives. Not only that, but a Cenmate enclosure with a few drives installed is heavy enough that you can just push the trays in with one finger, and they solidly clunk into place.

Snip the Cenmate tray

You need to screw 2.5” drives in from the bottom of the tray, but the real bummer is that one of these plastic nubbins interferes with these smaller drives. Cenmate wants you to remove the blue retention bracket when installing 2.5” SSDs. I didn’t want to do that. I would be very likely to lose the brackets!

I took a set of my flush cutters to the one nubbin that bumps into the SATA SSD. I checked. Only having three out of four nubbins does a fine job holding a heavy 3.5” hard disk in place, and once you snip it off there’s no problem installing your SATA SSD.

They should ship like this from the factory.

I am using the word nubbin a lot.

Nubbin.

How mean can I be to the SATA-over-USB connection?

I wrote most of the rest of this blog post almost two months ago. Last week, my friend Brian McMoses stopped by with a stack of seven old SATA SSDs. They range in size from 120 GB to 256 GB. I started out running bonnie++, which winds up being a workload that is roughly half reads and half writes.

I ran those continuous read/write benchmarks for 72 hours. That ate up around 2% of the write lifetime of the oldest drive in my 6-disk RAID 0 array. I hope you will agree with me that destroying SSDs for the sake of enclosure reliability testing is a bummer, and that three days of writes was enough.

I switched to a read-only randread benchmark using fio. When you start the first test, fio creates a bunch of files and fills them up. Every subsequent run of fio reuses those same files, so I have been doing an average of around 60,000 random IOPS spread across 6 drives on a single USB port for seven full days so far.
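If you want to approximate my soak test, here is a sketch of a fio job file. The directory, file size, and runtime here are placeholders rather than my exact settings, so point them at your own array:

```shell
# Write out a fio job file for a long random-read soak test.
# directory, size, and runtime are placeholders; adjust for your hardware.
cat > randread-soak.fio <<'EOF'
[global]
ioengine=libaio
rw=randread
bs=4k
iodepth=32
time_based=1
runtime=86400
group_reporting=1

[soak]
directory=/mnt/raid
size=10G
numjobs=6
EOF
```

Kick it off with `fio randread-soak.fio`. The first run creates and fills the data files; subsequent runs reuse them, which is what keeps a long soak like mine read-only.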

fio running on my Cenmate enclosure for 7 days so far

I took this screenshot at some point during the sixth day of continuous random read testing

The Cenmate enclosure survived several days at 940 megabytes per second of sequential reads while I was collecting data for the previous blog post. That is one kind of stress for the chips inside the Cenmate enclosure. Now the enclosure is surviving weeks of hammering the USB controllers with 50 times more IOPS than six mechanical hard disks could ever sustain.

I have an extreme level of confidence now that my Cenmate enclosure can handle intense workloads for prolonged periods of time as long as the disks or SSDs are in good working order.

What happens when you have failing disks? I found that out pretty quickly, and we’re going to talk about that soon!

If you trust my judgment, you can stop reading here or skip to the conclusion. I am about to go into great detail about the things that happened while hammering on the Cenmate enclosure with several failing 3.5” hard disks installed. I think the two important observations are that the enclosure’s electronics have been rock solid, and the data on my drives would be safe even when encountering the worst failure mode that I could produce.

What kind of problems am I running into?!

I had a good list of reasons for choosing a 6-bay enclosure. Six disks is a good count for a RAID 5 array that doesn’t dedicate too large a percentage of your storage to parity data. Six big disks in a RAID CAN exceed the speed of the Cenmate’s 10-gigabit USB connection, but only across roughly the first third or half of each disk. That felt like a reasonable balance between value and a small bottleneck.
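The parity overhead is just one disk’s worth of capacity spread across the array, so the percentages are easy to sanity check:

```shell
# RAID 5 spends one disk on parity, so overhead is 1/n of raw capacity
awk 'BEGIN {
  for (n = 4; n <= 8; n += 2)
    printf "%d disks: %.1f%% of raw capacity spent on parity\n", n, 100 / n
}'
```

Six bays lands the overhead under 20% while staying close to what a 10-gigabit USB link can actually move.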

There was an even more important reason for my decision. I was pretty sure I had six old but usable 4-terabyte hard disks in my closet.

I was wrong. One of my old disks was completely dead. Two were making clunking sounds while generating lots of read errors. Others were quietly but very regularly encountering errors. The biggest bummer is that the 12-terabyte disk that I expected to be problem free is now the only disk left in my test that is encountering read errors.

THESE ARE HARD DISKS WITH PROBLEMS. This is not a problem with the enclosure, the SATA chipset in the enclosure, or the USB connection. I just didn’t have six good disks on hand.
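If you want to check whether your own pile of closet drives is this crusty, you can usually query SMART right through the USB bridge. The `-d sat` flag asks smartctl to tunnel the commands through the bridge; modern ASMedia bridges generally support SAT passthrough, but that is an assumption to verify on your own hardware, and the function name here is my own:

```shell
# Quick health triage for a drive behind a USB-SATA bridge (sketch)
check_drive() {
  sudo smartctl -d sat -H "$1"                 # overall PASSED/FAILED verdict
  sudo smartctl -d sat -A "$1" |
    grep -Ei 'reallocated|pending|uncorrect'   # the attributes that predict trouble
}
```

Something like `check_drive /dev/sdf` would have told me up front which of my closet drives were worth keeping.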

My batch of test hardware was now down to the surviving 4-terabyte drives, one flaky 12-terabyte drive, one 500-gigabyte drive, and one 400-gigabyte drive. These are the drives I had when I managed to make my mdadm RAID array kick drives out of the array.

My plan was to do all the angry benchmarking against a RAID 5 array, but that would be limited to the performance of the slowest drive. The 12-terabyte drive can manage 250 megabytes per second while the 400-gigabyte drive is limited to around 80 megabytes per second.

It is a good thing Brian brought over some SATA SSDs for me to use for further testing!

I am glad that my drives aren’t perfect, because it let me test interesting failure modes!

I didn’t even consider that these failure modes would be interesting. I have three unique things happening with at least three different drives. I won’t post every line from dmesg, because sometimes they are numerous.

My 12-terabyte drive is reporting read errors, but bonnie++ is able to power through them, because they wind up being correctable.

[4898744.584164] I/O error, dev sdf, sector 913209008 op 0x0:(READ) flags 0x80700 phys_seg 32 prio class 0
[4898877.612857] scsi host5: uas_eh_device_reset_handler start
[4898877.612912] xhci_hcd 0000:00:0d.0: bad transfer trb length 47104 in event trb
[4898877.679055] usb 2-1.4.3: reset SuperSpeed USB device number 126 using xhci_hcd
[4898877.692684] scsi host5: uas_eh_device_reset_handler success

Sometimes when there is a recoverable read error, that individual USB SATA controller is reset. The matching /dev/sdf device doesn’t go away. Nothing bad happens. There is just a little blip in the connection. I assume this reset happens because the drive is unresponsive while repeatedly attempting to read the bad sector. The filesystem stays mounted, and the benchmark keeps chuggin’ away.

One of my 4-terabyte disks had an unrecoverable read error. The bonnie++ process acknowledged the I/O error.

[4787355.429473] sd 3:0:0:0: [sdd] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[4787355.429479] sd 3:0:0:0: [sdd] tag#1 Sense Key : Illegal Request [current] 
[4787355.429481] sd 3:0:0:0: [sdd] tag#1 Add. Sense: Invalid field in cdb
[4787355.429484] sd 3:0:0:0: [sdd] tag#1 CDB: Read(16) 88 00 00 00 00 00 59 df 91 48 00 00 01 00 00 00
[4787355.429485] critical target error, dev sdd, sector 1507823944 op 0x0:(READ) flags 0x80700 phys_seg 32 prio class 0
[4787355.429601] sd 3:0:0:0: [sdd] tag#2 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=2s

<removed lots and lots of repeats of the above>

[4783762.072077] scsi host3: uas_eh_device_reset_handler start
[4783762.072333] xhci_hcd 0000:00:0d.0: bad transfer trb length 65536 in event trb
[4783762.072411] xhci_hcd 0000:00:0d.0: bad transfer trb length 53248 in event trb
[4783762.072572] xhci_hcd 0000:00:0d.0: bad transfer trb length 33792 in event trb
[4783762.140263] usb 2-1.3: reset SuperSpeed USB device number 122 using xhci_hcd
[4787945.388477] critical medium error, dev sdd, sector 1507823944 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0

This is a little different than the recoverable read error, but it doesn’t change much in practice. Had these drives still been in an mdadm RAID array, the drive experiencing this error would almost certainly have been kicked out of the array.

The first bad drive that I pulled was the problematic one. It somehow manages to make all six drives disconnect from the mini PC’s USB controller. I started that day with sdc through sdf, but after the reset I had sdg through sdj. The connection didn’t just get reset. The USB enclosure was detected as a brand new device.

[4423503.817847] EXT4-fs warning (device sdf1): ext4_end_bio:342: I/O error 17 writing to inode 13 starting block 27623625)
[4423503.819712] Aborting journal on device sdf1-8.
[4423503.819738] JBD2: I/O error when updating journal superblock for sdf1-8.
[4423503.819743] EXT4-fs (sdf1): Delayed block allocation failed for inode 13 at logical offset 14118912 with max blocks 2048 with error 5
[4423503.819751] EXT4-fs (sdf1): This should not happen!! Data will be lost
[4423503.819755] EXT4-fs error (device sdf1) in ext4_do_writepages:2724: IO failure
[4423503.819771] EXT4-fs (sdf1): I/O error while writing superblock
[4423503.820240] EXT4-fs error (device sdf1): ext4_journal_check_start:84: comm kworker/u8:0: Detected aborted journal
[4423503.820280] EXT4-fs (sdf1): I/O error while writing superblock
[4423503.820283] EXT4-fs (sdf1): Remounting filesystem read-only
[4423503.841037] sd 5:0:0:0: [sdf] Synchronizing SCSI cache
[4423503.910074] sd 5:0:0:0: [sdf] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[4423504.140334] usb 2-1: new SuperSpeed Plus Gen 2x1 USB device number 106 using xhci_hcd
[4423504.183687] usb 2-1: New USB device found, idVendor=2109, idProduct=0822, bcdDevice= 8.b3
[4423504.183696] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[4423504.183699] usb 2-1: Product: USB3.1 Hub             
[4423504.183700] usb 2-1: Manufacturer: VIA Labs, Inc.         
[4423504.186967] hub 2-1:1.0: USB hub found
[4423504.187324] hub 2-1:1.0: 4 ports detected
[4423504.817696] usb 2-1.2: new SuperSpeed Plus Gen 2x1 USB device number 107 using xhci_hcd
[4423504.829870] usb 2-1.2: New USB device found, idVendor=174c, idProduct=55aa, bcdDevice= 1.00
[4423504.829877] usb 2-1.2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[4423504.829880] usb 2-1.2: Product: ASM235CM
[4423504.829882] usb 2-1.2: Manufacturer: ASMedia
[4423504.829883] usb 2-1.2: SerialNumber: ACAAEBBB215F
[4423504.835984] scsi host6: uas
[4423504.837425] scsi 6:0:0:0: Direct-Access     ASMT     2235             0    PQ: 0 ANSI: 6
[4423504.838834] sd 6:0:0:0: Attached scsi generic sg3 type 0
[4423504.850722] sd 6:0:0:0: [sdg] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)

I pulled that drive and drew a big, fat question mark on its label. This is the worst problem I was able to coax out of the Cenmate enclosure, and I am hoping that the question-mark drive will let me recreate this problem again in the future.

NOTE: As I am writing this, I am wondering what would have happened if I were using disk ID labels instead of lazily building my temporary mdadm devices using /dev/sdc through /dev/sdg. Would mdadm realize that the new devices match the old devices? I will have to try that next month after the randread testing is completed!
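For what it is worth, mdadm writes a UUID into each member’s superblock, so `--assemble --scan` should find the drives again regardless of which letters they land on. The by-id names mostly help with scripting and with `--create`. A sketch, with the mdadm line left as a commented hypothetical:

```shell
# Each /dev/disk/by-id entry is a symlink that follows the physical drive no
# matter which sdX letter it gets after a reset. List them (with a fallback
# message on systems that have none):
ls /dev/disk/by-id/ 2>/dev/null | grep -v part || echo "no by-id entries here"

# Hypothetical array creation using stable names instead of /dev/sdc../dev/sdh:
#   sudo mdadm --create /dev/md0 --level=5 --raid-devices=6 \
#       /dev/disk/by-id/ata-<model>-<serial> ...
```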

The worst failure mode isn’t that bad

The group of people today who need five nines of uptime and the group of people who can even make use of slow, mechanical disks don’t have much overlap.

I don’t know about you, but my RAID of slow, mechanical disks is there to keep me from having to waste time restoring dozens of terabytes of data when I have a hardware failure. It isn’t a big deal if my backup target isn’t available over the weekend. Losing access to my Jellyfin library for an evening isn’t a huge problem.

Sitting down to spend a few hours of my time performing a restore and making sure I actually restored everything that was necessary is a bummer. Wondering whether or not I ACTUALLY did a good job restoring everything over the next three weeks is even worse.

Restoring from backup and getting a machine back into production at work can be a stressful task, and the job isn’t always complete when you think it is.

I will only lose a few minutes of my own time if my hypothetical stack of six 20-terabyte SATA drives in my Cenmate enclosure goes offline due to a weird USB reset and drive redetection cycle. I had everything back in five minutes when I encountered the problem. I powered off the enclosure, used mdadm to stop the RAID 5, powered the enclosure back up, and my RAID 5 was detected again a few seconds later. A lazier fix would have been to just reboot the server.
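Condensed into a shell function, the five-minute fix looks something like this. My device name and mount point are assumptions; substitute your own:

```shell
# Sketch of recovering from a full USB re-enumeration of the enclosure
recover_enclosure() {
  sudo umount /mnt/raid 2>/dev/null
  sudo mdadm --stop /dev/md0                 # let go of the stale device nodes
  read -rp "Power-cycle the enclosure, then press Enter... "
  sudo mdadm --assemble --scan               # members are re-found by their superblocks
  sudo mount /dev/md0 /mnt/raid
}
```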

This could be a serious problem if I was serving customers and this was the only copy of their data. I can’t imagine a scenario where I would be serving information to the world from mechanical hard disks in 2025.

As far as I can tell, this type of failure is difficult to trigger. I have at least four failing hard disks here, and only one of them has managed to make this happen, and it only happens after it has been throwing read errors for several minutes. It is a rare problem, you can probably see it coming, you can easily prevent it from happening again, and it is easy to recover from.

I feel that this is acceptable, especially if you are aware that it can happen.

How does this thing actually work?!

The layout of the USB devices that show up when you plug the Cenmate 6-bay enclosure in is interesting!

/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 20000M/x2
    |__ Port 1: Dev 14, If 0, Class=Hub, Driver=hub/4p, 10000M
        |__ Port 1: Dev 15, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
        |__ Port 2: Dev 16, If 0, Class=Mass Storage, Driver=uas, 10000M
        |__ Port 3: Dev 17, If 0, Class=Mass Storage, Driver=uas, 10000M
        |__ Port 4: Dev 18, If 0, Class=Hub, Driver=hub/4p, 10000M
            |__ Port 3: Dev 21, If 0, Class=Mass Storage, Driver=uas, 10000M
            |__ Port 1: Dev 19, If 0, Class=Mass Storage, Driver=uas, 10000M
            |__ Port 4: Dev 22, If 0, Class=Mass Storage, Driver=uas, 10000M
            |__ Port 2: Dev 20, If 0, Class=Mass Storage, Driver=uas, 10000M

The device at the top of the tree is a 10-gigabit USB hub. The usb-storage device you see is a USB SSD that I plugged into the Cenmate enclosure’s daisy-chain port. Then there are two USB Attached SCSI (uas) devices that correspond to two of my 3.5” SATA hard disks.

The next branch in the tree is another 10-gigabit USB hub that has the other four hard drives attached.

I didn’t even notice that the Cenmate enclosure wasn’t using the usb-storage driver until after plugging in that additional USB SSD!

How is the power consumption?

I have the Cenmate enclosure plugged into a metering smart outlet that is connected to Home Assistant.

It sits at an extremely frugal 0.2 watts when no drives are plugged in. During the 6-drive benchmarks, the enclosure eats up 1.33 kWh per day. That works out to an average of 55 watts, which is roughly 9 watts per drive.
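The conversion from my Home Assistant kWh readings back to watts is straightforward:

```shell
# 1.33 kWh consumed over a 24-hour benchmark day, spread across 6 drives
awk 'BEGIN {
  watts = 1.33 * 1000 / 24
  printf "%.0f W average, %.1f W per drive\n", watts, watts / 6
}'
```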

I prefer to track power usage over an entire day to get a nice, clean average, but I don’t have that kind of data with fewer drives. The instantaneous readings do start at around 9 watts with one drive installed, and they go up by about 9 watts every time you click another drive into a bay.

This means you don’t have to be conservative and buy a smaller enclosure if you’re power conscious. You can buy an oversized enclosure and add drives as time goes by and your needs grow.

The fully-loaded enclosure does spike to nearly 120 watts when you flip the power switch and all six drives spin up. The included 12-volt power brick says it maxes out at 108 watts. I am not terribly concerned about this, because the spike past 108 watts ends so quickly that you’ll miss it if you blink.

Conclusion

I am more than pleased enough with the results so far, so I will be working on setting the Cenmate enclosure up for long-term use. I don’t need six disks’ worth of storage, but I am certain I can make good use of one or two bays in the immediate future, and it will be handy to have some spare bays around if I ever need to sneakernet some data around. This will have to wait a few weeks. I’d like to see at least a month of fio random IOPS testing go by without a hiccup.

I am extremely curious at this point about what you are thinking! Have you thought about using a USB SATA enclosure? Do you have fear, uncertainty, and doubt caused by the earlier days of USB storage like I did? Are you already using a USB enclosure? How has your experience been? Let us all know about it in the comments, or join the Butter, What?! Discord community and chat with us about how things are going!

Should I Run Bazzite Linux On My Workstation?

You might consider it a stretch to call my gaming PC a workstation. One lazy way to define a workstation could be enterprise server-grade hardware in an office-friendly case, but I’m willing to be more liberal with my labeling. Workstation is an easy word to use in the title that conveys relevant enough information, so I am sticking with it, because this is the machine I sit at when I want to get work done.

Bazzite is the new and popular gaming Linux distro. It is built on top of Universal Blue, which is built on top of Fedora Silverblue, and these are all immutable distros. I hope I got that correct!

I am excited about the idea of immutable distros. I’ve been running Bazzite’s gaming mode in my living room for a few months, and I am impressed with it. They have desktop spins of the installer, so they have me tempted to give it a try.

Bazzite on my 5700U laptop

I usually shy away from the more niche Linux distros. I don’t want to have to reinstall and start from scratch if someone gets bored and the distro goes away.

I could wait until the end to reveal this, but I am already dipping my toe a little deeper into the Bazzite waters. I just installed the KDE Plasma spin of Bazzite on my Asus 2-in-1 laptop. Things are looking promising so far!

My Linux distro history

I started out using Slackware in the nineties. I tried SuSE for a while, because their network installer was handy when we had our early cable modems.

I settled on Debian before the end of the decade, and that is all I used until 2006.

That’s when I switched to Ubuntu. The appeal for most Debian users in those days was Ubuntu’s release cycle. We got what amounted to a fresh, reasonably stable, and up-to-date Debian build every six months. That was SO MUCH BETTER than dealing with Debian’s testing repositories breaking your machine twice a year.

I had a continuously updating Ubuntu install on this computer from 2009 until 2022. It was installed on my old laptop, has been dd’ed to new SSD and NVMe drives a few times, and has been paired with one laptop and two different motherboards.

That is when I almost switched back to Debian. Ubuntu has been drifting farther and farther from Debian as the years go by. There are lots of inconsequential things I am grumpy about, but the straw that broke the camel’s back for me was forcing snaps on us. Ubuntu installs the Firefox snap via apt, and in 2022, the snap would refuse to update itself unless I closed Firefox.

It felt like I traveled backwards in time, and it didn’t help that the Firefox snap took so long to open and refused to auto update unless I remembered to close my browser. Who closes their browser?! This felt like a good time to start thinking about where I might move in the future.

I wound up aborting my Debian install. I’m not going to get all of the details right from memory, but I am sure this will be close enough to accurate. Getting a combination of recent enough Mesa and RADV libraries installed for ray tracing to work well, and getting a build of OBS to work with hardware video encoding, while simultaneously having a working ROCm setup compatible with DaVinci Resolve Studio was going to be a massive pain in the butt.

Ubuntu had two out of the three nailed, and working around the third wasn’t a big deal.

Bazzite to the rescue?!

Bazzite prioritizes gaming. Bazzite is built on top of Fedora Silverblue with nearly bleeding edge AMDGPU drivers and Mesa libraries, so my Radeon GPU will always be working great, and I will be running one of the first distros to ship support for whatever the next generation of Radeon GPUs happens to be. That means I won’t have to wait as long after a new hardware release before upgrading!

This is awesome. Gaming is the most demanding thing I use my computer for, and things always improve when you can use the latest and greatest kernels, drivers, and libraries. Shoehorning this stuff into Ubuntu LTS releases can be a pain, and you’re always lagging behind.

Bazzite ships with their ujust system. It isn’t a package manager. It is more like a consolidated set of scripts and magic to help you get certain things going, much like an officially supported set of Proxmox helper scripts.

On my laptop, I ran ujust enable-tailscale to get my fresh Bazzite install connected to my Tailnet, and I ran ujust install-resolve-studio to install DaVinci Resolve.

It was slightly more complicated than that. I had to download the zip file from Blackmagic’s site myself, but ujust handled the rest for me. It set up a custom distrobox environment with everything Resolve needs to run, and I didn’t even have to click through Resolve’s GUI installation tool. It was just ready to go, and everything seems to work. Though I did have to tweak Resolve’s memory settings to stop it from crashing on my low-end laptop!

I don’t know if it is fair to accuse my laptop of being low end. It was squarely in the mid range when I bought it, but time has gone by, and it is starting to show its age.

The best part is that Resolve is in its own container. It is unlikely that a future update to the Bazzite installation will break things.

It took me a few clicks to install OBS Studio using Bazzite’s new Bazaar frontend for Flatpak. Flatpak correctly installed the required VA-API plugin. I just had to turn on the advanced settings in OBS Studio, and I had my laptop hardware encoding a 1080p screen capture in h.265.

Those were the trio of things that were going to be an effort to get working on Debian three years ago. They’re all working, and they’re all in better shape than on my current Ubuntu install on my workstation. I think that is an awesome start!

Living with an immutable distro, and embracing Distrobox

I already mentioned that Bazzite uses Distrobox to containerize DaVinci Resolve, but I didn’t explain what Distrobox is. Let’s see if I can do a good enough job in a paragraph.

Distrobox sits on top of either Docker or Podman, and it handles installing, configuring, and running full Linux distros in these containers. They aren’t containerized for security or to provide any significant separation. The opposite is true! All your Distroboxes are plumbed to have access to most of your hardware and to share your home directory.

This means you can set up separate Distroboxes with Arch, Debian, and Ubuntu. You can set up terminal window shortcuts to open shells in these separate boxes. You can create an AI-generated video in your Debian box, then edit that with DaVinci Resolve in the Ubuntu box, and paste that video into Discord using your Arch box. Each Distrobox has access to your Wayland session, so you can run GUI programs on any Distro.
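That Arch, Debian, and Ubuntu trio can even be declared in one manifest that `distrobox assemble` understands. The image tags here are my guesses; pin whatever suits you:

```shell
# Write a distrobox-assemble manifest describing three boxes
cat > distrobox.ini <<'EOF'
[arch]
image=docker.io/library/archlinux:latest

[deb]
image=docker.io/library/debian:stable

[ubu]
image=docker.io/library/ubuntu:24.04
EOF
```

Run `distrobox assemble create --file distrobox.ini` to build all three at once, then `distrobox enter ubu` to hop into one of them.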

I had Distrobox up and running on my aging Ubuntu install in a few minutes. Not long after, I had an Ubuntu 25.04 box going with Steam installed, and I was playing games that were already downloaded to my Ubuntu host. It bind-mounted all my usual file systems exactly where they needed to be to play my existing Steam games.

My plan is to use Bazzite for the stuff that is a pain to maintain or relies heavily on the host’s hardware. Steam, OBS, Resolve, and Firefox will live up there on the host. I expect to do nearly everything else inside one or more Distrobox boxes.

It is possible to export a Distrobox image on one machine, then import it on another. My plan is to get myself an environment that I am happy with on my old Ubuntu workstation, and move all my important work into that box. Once I am happy, I will copy that box over to my laptop.
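I haven’t done the migration yet, but one plausible route goes through podman underneath Distrobox. Everything here is an assumption: that Bazzite’s Distrobox is backed by podman, that the container carries the same name as the box, and that the destination host is set up the same way:

```shell
# Sketch of moving a Distrobox box between machines via podman
move_box() {
  box=$1 dest=$2
  podman container commit "$box" "$box-snapshot"        # freeze the box as an image
  podman save "localhost/$box-snapshot" | ssh "$dest" podman load
  ssh "$dest" distrobox create --name "$box" --image "localhost/$box-snapshot"
}
```

Something like `move_box ubu laptop` would snapshot the box and recreate it on the other machine. Your home directory contents travel separately, since Distrobox shares the host’s home rather than storing it in the container.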

If I do things well, I should almost instantly have my working environment fully operational once I get around to installing Bazzite on my workstation. That is awesome!

The core idea here isn’t new. I used to do something similar with work and personal virtual machines two decades ago, but it wasn’t nearly as easy to work with those separate virtual machines at the same time.

Conclusion

Wiping out my workstation and starting from scratch fills me with dread. I always worry that there will be something that I rely on that is missing, or some weird binary in /usr/local/bin that just doesn’t exist anymore. Maybe I will lose a game’s save files that are stored in a weird location and aren’t being synced by Steam. What if an important program refuses to work correctly, or I can’t figure out how to configure something correctly?

Things never ACTUALLY go terribly wrong, but I always miss something important, and migrating to an entirely new Linux distro isn’t something I would do on a whim. I am definitely going to kick the tires on my laptop for a few weeks, and put some work into getting a Distrobox environment well configured on my current workstation before I wipe my NVMe.

What do you think? Are you running Bazzite on a productivity machine? Am I silly for thinking this will be a good idea, or am I a genius and optimizing for exactly the right thing? How long do you think it will take me to get a productive Distrobox image set up so I can start my migration? You should join our friendly Discord community to let me know if I am making a mistake, or to chat with me to see how things are working out so far!