My Asus Vivobook 14 Flip 2-in-1 Laptop and Linux

Why am I writing about running Linux on a laptop that was already an old model when I bought it two years ago? That seems like a good question!

When I bought the Asus 2-in-1 laptop, I decided I was going to treat it more like an appliance. I haven’t owned a computer running Windows in almost thirty years. I heard good things about WSL2, and I figured Windows 11 would do a better job in tablet mode than Ubuntu, so I decided to give Windows 11 a shot.

Asus Vivobook Flip 14 and my Gotway Tesla v2 at Oak Point Nature Preserve

Not a close-up shot of my Asus Vivobook 14 Flip and my Gotway Tesla at a table by the pond

I wound up doing what one might consider work, which for me involves writing blog words in Emacs, in a WSL2 session. I did everything else with Windows. Weirdly enough, this worked better two years ago than it has recently. My WSL2 session would often go unresponsive after bringing the laptop out of suspend, and the more often that happened, the more frustrated I would become.

I decided to install Kubuntu 24.04 shortly after it entered beta.

tl;dr: What actually works?

Everything related to being a laptop worked out of the box. WiFi is fast. I am pretty certain that I tested the webcam, and it is as basic as ever. Games are running as fast as I would expect. The keyboard and touchpad seem happy.

The touchscreen works, but I had to switch to Wayland to get proper touch scrolling to work in Firefox. It isn’t surprising that I needed Wayland for this, but I was surprised that Wayland wasn’t the default with my 5700U CPU’s integrated AMD Radeon GPU.

Stable Diffusion person with tablet

The fingerprint sensor almost works, but it fails to train my fingerprints. It does get far enough to know exactly when I remove my finger from the sensor, so it might not be a total lost cause.

The most disappointing failure is the lack of support for the accelerometer or lid-flip sensor. Without these sensors, Gnome or KDE can’t automatically disable keyboard input and put the laptop into tablet mode.

[    2.233923] pcie_mp2_amd 0000:03:00.7: Failed to discover, sensors not enabled is 0
[    2.233941] pcie_mp2_amd: probe of 0000:03:00.7 failed with error -95
[   40.920347] raid6: avx2x4   gen() 37669 MB/s

NOTE: The failure to discover my accelerometer and its related friends is delaying my boot process by 39 seconds! I haven’t dug too deeply into this yet. I tried the stock kernel and a newer Xanmod kernel. I tried booting with both amd_iommu and iommu disabled. One of the combinations eliminated this delay, but it also made my NVMe super slow and goofy!

I enjoy using tablet mode. It is fantastic for playing games like Into the Breach. It is handy to be able to hide the keyboard out of the way and prop the tablet up to read documentation while working on a project. Sometimes it is just nice to lean back on the couch and surf through comments on Hacker News or Reddit, but it is also nice to know that the keyboard is there if you have to type out a comment!

I did figure out how to easily enable and disable the keyboard, though I don’t know if I am doing it correctly enough to recommend my solution, and rotating the screen via the command line is easy. It won’t be a problem to add a button to my dock that toggles tablet mode for myself. It is a bummer that it will be manual, but it isn’t a deal breaker for me.
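
I won’t pretend my own hack is worth copying, but here is a rough sketch of the kind of thing I mean. It leans on the python-evdev package and KDE’s kscreen-doctor tool, and the keyboard device path and the eDP-1 output name are placeholders you would have to look up on your own machine.

#!/usr/bin/env python3
# Rough sketch of a manual tablet-mode toggle. Needs to run as root or as a
# member of the input group so it can grab the keyboard device.
import subprocess

import evdev  # from the python-evdev package

KEYBOARD = "/dev/input/by-path/platform-i8042-serio-0-event-kbd"  # placeholder path
OUTPUT = "eDP-1"  # placeholder: `kscreen-doctor -o` lists the real output name

def rotate(direction: str) -> None:
    # Depending on your kscreen-doctor version, the upright keyword is "normal" or "none".
    subprocess.run(["kscreen-doctor", f"output.{OUTPUT}.rotation.{direction}"], check=True)

def main() -> None:
    keyboard = evdev.InputDevice(KEYBOARD)
    rotate("right")   # flip the panel for tablet use
    keyboard.grab()   # exclusive grab: the compositor stops seeing keystrokes
    try:
        for _ in keyboard.read_loop():  # swallow key events until the script is stopped
            pass
    except KeyboardInterrupt:
        pass
    finally:
        keyboard.ungrab()
        rotate("normal")  # put everything back when the script exits cleanly

if __name__ == "__main__":
    main()

A dock button only needs to start and stop something like this, and the keyboard grab is released automatically when the process exits.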

While I do enjoy tablet mode, I don’t flip in and out of it constantly.

What is better on Linux than on Windows?

I really want to just scream, “EVERYTHING!” Except I already mentioned a few key 2-in-1 features that aren’t working, so we both know that is a lie.

I already know that I don’t have an exact equivalent of Microsoft’s slider that goes from maximum power savings to maximum performance. I can turn down all of my CPU governor settings, but I just can’t get a Geekbench result as low as the one I got on Windows 11 with the power-saving slider turned all the way down.

I am, however, getting 14% more single-core and multi-core performance on the same version of Geekbench. That is a pretty impressive boost!

Geekbench scores

Link to these Geekbench results

I don’t have good data on actual battery runtime. I never verified that the estimates Windows was giving me matched what I was actually seeing, and I haven’t run my battery down far enough to know whether I trust KDE’s estimates. KDE’s estimates do seem plausible when I check the clock and punch quick numbers into a calculator, and KDE is usually estimating an extra hour of runtime.

I think an extra hour is pretty impressive, because I have my battery charge limit set to a maximum of 60%. That is the difference between a little more than four hours and five hours of runtime, and the gap would be even more impressive at full charge.
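
Here is the napkin math behind that last claim, assuming runtime scales roughly linearly with charge and taking “a little more than four hours” to be about 4.1 hours:

charge_limit = 0.60      # my battery is capped at 60% charge
windows_hours = 4.1      # "a little more than four hours" under Windows 11
kubuntu_hours = 5.0      # what KDE has been estimating under Kubuntu

# naive linear extrapolation to a full charge
print(round(windows_hours / charge_limit, 1))  # ~6.8 hours
print(round(kubuntu_hours / charge_limit, 1))  # ~8.3 hours: the one-hour gap grows to roughly 1.5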

Improvements over Windows 11 that are way more exciting

Before we get to the exciting part, I should tell you that I had a brain fart while setting up Kubuntu. I didn’t check to make sure that I would have a swap partition in an accessible place to allow the kernel to hibernate, and I am realizing that I am not even certain how it would handle resuming from a LUKS-encrypted swap partition. The important thing that I need to say here is that my brain fart means I don’t currently have a way to hibernate.

Suspend is working just fine, and here is the exciting part. The laptop does exactly what I have asked. I told it to stay awake as long as it is plugged in, and by golly, it never goes to sleep. It also never randomly wakes up from suspend. These are both things that Windows 11 would do even though I asked it not to.

Not only that, but my Seafile client never stops working on Linux. On my Windows 11 installation, I ran Seafile inside WSL2 to sync my blog posts and other data that I would work on from Linux. Every time WSL2 got goobered up, my data would stop syncing. That isn’t surprising, but it was extremely annoying.

I am not sure if KDE is bad at tablets or my nonfunctional tablet-mode sensor is to blame

I have an old Cybernet all-in-one touchscreen computer at the workbench behind me. I believe it used to live in a hospital. It is running Ubuntu with Gnome, and it does a fantastic job of popping up the on-screen keyboard at the correct moments when I don’t have my Bluetooth keyboard turned on.

I don’t even think I have managed to get KDE to pop up the on-screen touch keyboard. I have tried clicking things in the taskbar that seem related to a popup keyboard, but it just hasn’t worked.

Asus Vivobook Flip 14 next to my 27-inch monitor

A 2-in-1 can be left open slightly and stood upright on its side. This lets you prop it up right next to your monitor!

Is it because KDE believes I have a keyboard even when I disable it? I am ham-fistedly disabling the keyboard in what is almost definitely the worst possible way, so this wouldn’t surprise me.

This is my first experience with Wayland and KDE Plasma. I am excited to try this out, because I know that I am in my last few years of being able to easily use XOrg with Sawfish and my decades of accumulation of custom window-management Lisp functions. I feel hopeful that I will be able to bend Plasma to my will when I can no longer use Sawfish on my desktop.

DaVinci Resolve might not be ready for Ubuntu 24.04

When I tried to install Resolve, it complained about all sorts of missing packages that have new names now. When I forced it to install anyway, it wouldn’t launch. I found some tips on Reddit about deleting a few libraries out of Resolve’s installation directory to force it to use installed libraries, and that got it to launch.

Using Resolve with a Radeon GPU on Linux is tricky. Resolve is only happy with some versions of AMD’s ROCm and OpenCL packages, and I believe the rather old GCN 5th generation iGPU in my laptop doesn’t work with the newer ROCm releases. Getting this to a point where Resolve would agree that I had an OpenCL-capable GPU while still having a working Mesa library for OpenGL was a challenge, but I might have gotten it right.

When I try to open a project, Resolve appears to hang and its log file fills with ALSA sound card error messages.

I am glad I tried this on my laptop before upgrading my workstation to Ubuntu 24.04, because a working DaVinci Resolve installation is something that I can’t live without!

I am hopeful that this will smooth itself out before Ubuntu 24.04.1 is released.

Conclusion

I have enjoyed owning a 2-in-1 tablet for the past two years. I really do think all laptops should be made with screens that flip around all the way, and I hope to never own a normal laptop ever again.

I use it as a laptop more than 95% of the time, so I am not TOO disappointed that I have to switch into tablet mode manually. I usually find myself needing tablet mode to read documentation while working with a piece of hardware, or when I want to play Into the Breach. It isn’t a big deal to manually adjust the screen orientation for those tasks.

Using my Asus Vivobook as a laptop is infinitely more comfortable and enjoyable for me when running Linux, so I can most definitely say that I am excited to have a better experience for the majority of my use cases.

Using an Intel N100 Mini PC as a Game Console

Let’s get this out of the way immediately. A mini PC with an Intel N100 processor is almost definitely the wrong device to plug into your TV to play video games. It is a rather slow CPU, and it is paired with a really slow GPU. The N100 is overkill for the majority of the games I have been playing on my mini PC, yet it doesn’t have nearly enough horsepower for so many games that would be fun on the TV.

This blog is kind of a tangent for me. I bought this mini PC to double the CPU and RAM of my homelab. It wasn’t intended to play video games, but I just had to try it out, and I am glad I did.

My Intel N100 Gaming PC Test at my desk

The diminutive Trigkey N100 mini PC on my desk next to my cigar box knickknack storage

What’s the tl;dr here? What sort of games can you play with an Intel N100 mini PC? We have played Super Meat Boy, Super Mario World via Retroarch, Super Mario Galaxy 2 via Dolphin, and Dead Cells. The fanciest game I have had moderate success with is Gunfire Reborn. It will run at 720p upscaled to 1080p using FSR at just a few frames shy of 60 frames per second.

I am only gaming at 1080p with my Trigkey N100 mini PC

My aging 70” Vizio television only has one HDMI port that can handle 4K at 60 Hz, and I am using that for my Chromecast with Android TV dongle. I had to dial the mini PC down to 1080p to be able to run at 60 Hz.

NOTE: I am quite aware that I could be running Retroarch on the Chromecast. Just writing that sentence makes me want to give it a shot to see how modern of a console it can emulate, but I do know for sure that it won’t go as far as the N100. That said, I am quite confident that the Chromecast will have no trouble running my favorite NES and SNES games, and those are the games I am most likely to emulate.

This is important. I was most excited about using the N100 box for emulated games, and all the games from my childhood were synced to the 60 Hz of our CRT televisions. I needed at least 60 Hz.

I am running Kubuntu 24.04 on my mini PC. I haven’t managed to get Linux and the Intel GPU driver to send commands to the TV to switch HDMI ports when I wake up the mini PC. I may be doing something wrong. I don’t enjoy having to switch inputs with a remote control like a cave man, but this is also only a test run.

Streaming games via Steam works great!

This is not a surprise, but I think it is worth documenting. I dusted off my old Steam Link hardware from 2015. Streaming games at 1080p via the Steam Link gave me 16 milliseconds of latency on Ethernet and 14 to 15 milliseconds of latency on 5.8 GHz WiFi.

My Trigkey N100 manages 13 milliseconds of latency over WiFi. That isn’t too much of an improvement, though the gap might get bigger if I had a nice WiFi 6 access point available. The Trigkey N100 excels when plugged into Ethernet, because its gigabit Ethernet port can move video packets down the cable several milliseconds faster than the 100-megabit port on the Steam Link. If my math can be trusted, the gigabit Ethernet upgrade alone is shaving more than four milliseconds off the latency. The N100 gets all the way down to 8 milliseconds of streaming latency when wired.
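
Here is the napkin math I am leaning on. The 30-megabit-per-second figure is an assumption on my part; it is in the neighborhood of what Steam streaming tends to use at 1080p60, and the conclusion doesn’t change much if you move it around.

bitrate_mbps = 30                        # assumed average bitrate of a 1080p60 stream
bits_per_frame = bitrate_mbps * 1e6 / 60

steam_link_ms = bits_per_frame / 100e6 * 1000   # 100-megabit port on the Steam Link
trigkey_ms = bits_per_frame / 1000e6 * 1000     # gigabit port on the Trigkey N100

print(round(steam_link_ms, 1))  # ~5.0 ms just to push an average frame down the wire
print(round(trigkey_ms, 1))     # ~0.5 ms on gigabit, so the faster port alone saves roughly 4.5 ms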

That is only ½ of a frame of additional latency when gaming at 60 frames per second. That is quite reasonable and very difficult to notice in the vast majority of games. I wouldn’t want to play a round of Team Fortress 2 with that extra latency, but I am also bummed out if I can’t play Team Fortress 2 at 144 Hz where the frame latency is even lower. I also completely failed to do well playing Dead Cells with an extra 8 milliseconds of latency.

Those are the sort of twitchy games where perfect timing is everything. On the other end of the spectrum would be games like Red Dead Redemption 2. I only started to notice the latency there when I was streaming to my Android tablet between T-Mobile’s network and my home’s fiber Internet connection.

That said, the Steam Link from a decade ago does a fantastic job. There are tons of games that you can play happily with 16 milliseconds of additional latency.

Retroarch has been fantastic on the N100!

I have one of my Playstation 4 controllers paired to the Trigkey mini PC. Sony’s controllers have always been my favorite for use with emulators. Their unique d-pad is fantastic for NES games, their buttons feel great, and the Bluetooth implementation has always been pretty friendly to work with on Linux.

I don’t trust the cheap knock-off controllers to have usable d-pads, but there are some premium third-party controllers. Playstation 4 controllers seem to have gone up significantly in price since I bought mine.

I haven’t tried one, but I have had my eye on the Gulikit King Kong 3 Max controller. It has rear buttons like the Steam Controller, but it has twice as many! The sticks use hall-effect sensors, so they should never drift. The d-pad looks great, and it can even use a snazzy low-latency dongle. The best part is that it doesn’t cost that much more than a Playstation 4 or Playstation 5 controller. If I didn’t already have four Playstation 4 controllers, I’d be buying one of these.

The Intel N100 is more than fast enough to use Retroarch’s run-ahead feature. I have been a fan of this feature ever since it was first announced, and I am not confident that the Chromecast running Retroarch could pull this off.

Super Mario Bros 3 CRT Royale Shader

This is the CRT Royale shader with Intel in the name. It requires about 70% of the N100’s GPU to run!

When playing games on an old console using an actual CRT television, your button presses are very nearly locked to the drawing of the pixels on the tube. When you push jump, Mario will begin that action on the very next frame that is displayed. The least responsive games would only be two frames behind.

This isn’t possible on a modern LCD. At best, your monitor will be one frame behind. In reality, it is probably at least two or three frames behind.

Retroarch is able to run the emulator ahead of what is displayed on the screen. When you hit the jump button, Retroarch will roll the game back the desired number of frames, and Mario will instantly be in the air. It works as though you pressed jump 16 or 32 milliseconds in the past. It feels like magic.

You can see how the magic works if you crank up the number of frames that Retroarch is running ahead to 5 or 6 frames.
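
If you want to see the trick in miniature, here is a toy model. This is absolutely not RetroArch’s code; it is just a pretend console with a deliberate two-frame internal input delay, and it shows how saving a state, running a couple of frames ahead with your real input, and rolling back makes the jump show up on the same frame you pressed the button.

# Toy illustration of the run-ahead/rollback idea, not RetroArch's implementation.
import copy

RUN_AHEAD = 2  # set to match the game's internal input lag

class ToyConsole:
    def __init__(self):
        self.pending = [False] * RUN_AHEAD  # the game's internal input pipeline (the lag)
        self.airborne = False

    def step(self, jump_pressed: bool) -> bool:
        """Advance one frame and return what would be drawn (airborne or not)."""
        self.pending.append(jump_pressed)
        if self.pending.pop(0):  # input only takes effect RUN_AHEAD frames later
            self.airborne = True
        return self.airborne

def play(run_ahead: bool) -> int:
    console = ToyConsole()
    presses = [False, False, True, False, False, False]  # jump pressed on frame 2
    for frame, pressed in enumerate(presses):
        if run_ahead:
            saved = copy.deepcopy(console)      # "save state"
            for _ in range(RUN_AHEAD + 1):      # run ahead of what is displayed
                shown = console.step(pressed)
            console = saved                     # "load state" (the rollback)
            console.step(pressed)               # advance the real state by one frame
        else:
            shown = console.step(pressed)
        if shown:
            return frame                        # first frame where the jump is visible
    return -1

print("jump visible on frame", play(run_ahead=False))  # frame 4: two frames late
print("jump visible on frame", play(run_ahead=True))   # frame 2: the frame you pressed it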

I have been messing around with various CRT shaders to make my modern television look a bit more like the fuzzy CRT televisions I grew up with. This is the only time I have been disappointed that my television’s HDMI port has me stuck at 1080p. Having four times as many pixels available makes for even more pleasant CRT shaders.

What is the newest system an Intel N100 can emulate?

My searching of YouTube says that the N100 is only just barely too slow to emulate a Nintendo Switch, but it is supposed to be able to handle the Wii U just fine. I would feel better if I could test this myself, but I haven’t been able to verify either of these claims. The Wii U emulator keeps crashing on me, and the Switch emulators are having legal troubles.

I have collected several stars in Super Mario Galaxy 2. It runs at a solid 60 frames per second, but it was a huge pain getting the buttons mapped on my DualShock 4 controller. I am shocked that the Dolphin emulator doesn’t have profiles for common gamepads.

Mario Galaxy 2 running on Dolphin running on my Trigkey N100 mini PC

Mario Galaxy 2 running at 60 FPS on the Trigkey N100.

I tried doubling up the internal rendering resolution. That lands you at something a little over 720p. My little N100 just couldn’t keep up with that, so my game dropped to just under 50 frames per second. I was able to turn on 2x MSAA. That doesn’t make for as clean and smooth of a picture as bumping up the internal rendering resolution, but it does look better than the extremely blocky edges.

I was going to try using Gamescope to upscale Dolphin using FSR, but apparently Gamescope has some dependency issues on Ubuntu 24.04. I probably won’t get a chance to try this on my Trigkey N100 before it moves on to its next task, but I am certain it has enough horsepower to make that work, so I might have to give that a try on my gaming PC just to see how it would look!

If the Intel N100 isn’t the best choice for couch gaming, what would be better?

First of all, the N100 may be fantastic for your own use case. If the goal is to have a machine that can run emulators extremely well and also stream your Steam library, then $150 for an Intel N100 mini PC is pretty reasonable, even if it is leaning into overkill territory.

An N100 is a much better value than the Zimaboard 832. The Zimaboard costs around $60 more, and it isn’t even fast enough to run Gamecube games at 60 frames per second.

There is a wide variety of mini PCs available. The Beelink SER5 with a Ryzen 5560U or Ryzen 5700U often goes on sale for under $250. I haven’t tried one of these, but my laptop has a Ryzen 5700U processor, and it is several times faster than the Celeron N100 for gaming.

I know my laptop will run games like Severed Steel or Borderlands 2 at more than 60 frames per second and should be able to at least manage console-like frame rates in Borderlands 3 with very low settings. The question I would be asking here is whether or not several times faster is really worth an extra $100, especially when I can stream these games and more from my much more capable gaming PC. It will all depend on what games you want to play, and how much you are willing to pay to be able to do it!

NOTE: Don’t pay full price for a mini PC. They have coupons and discounts every single week!

You can push ahead to a Beelink SER6 with a Ryzen 6900HX for around $460. That is a pretty good upgrade. It should be able to play Red Dead Redemption 2 with fairly high-quality settings, and it ought to be able to manage Cyberpunk 2077 with low settings. The Ryzen 6900HX has the same RDNA2 GPU cores as the Steam Deck while also bringing a few extra CPU cores along with it. If it runs well on the Steam Deck, it will run on the Beelink SER6.

The Beelink SER7 is available with a Ryzen 7840HS with RDNA3 GPU cores. This is just about the fastest mobile GPU setup you will find, but you’ll wind up paying $569 to get it. This is starting to hit the price range where you could build your own low-end gaming PC with way more GPU horsepower AND have the ability to upgrade it later.

That brings us to the Steam Deck. At $399 for the base model, the Steam Deck is almost definitely the best bang for your buck in PC gaming for your TV. Not only does the Steam Deck perform just as well as more expensive mini PCs, it also has its own built-in screen, controller, and battery.

You can plug the Steam Deck into your TV, or you can take it with you on the go. That is a pretty nice upgrade on a piece of hardware that costs a bit less, isn’t it?!

This is getting into Playstation or Xbox price territory!

There are a lot of advantages to gaming on a console from Sony, Microsoft, or Nintendo. You don’t have to worry about OS updates, driver issues, or games that just don’t like your particular combination of hardware.

One of the biggest advantages of PC gaming is the ability to upgrade, and using a mini PC negates that option almost entirely. So why on Earth would anyone consider buying a mini PC that costs as much as an Xbox?!

Pat's Steam Game Valuation

For me, the answer is quite simple. I have more than 2,000 games in my Steam library, and I don’t own a single Xbox or Playstation 5 game. As soon as I plugged in my mini PC, I had dozens of games in my backlog that I was ready to start playing on the TV!

Even if you are starting from scratch, the cost of gaming on a PC is so much lower than a console. Steam sales reach 75%, 85%, and even 90% discounts. Humble Bundle regularly has bundles where the headlining game is worth the entire price of the $12 to $20 bundle, but you get five to nine other games essentially for free.

There is a hidden cost to these lower prices. When you see a game for the Playstation 5, you know it will run as expected on your Playstation 5. Computer games list minimum and recommended specs, and you don’t know for sure what sort of frame rates or visual fidelity you will see when you just barely meet those specs. There’s some thinking, math, and guesswork involved.

I think my Trigkey N100 would be fantastic for an arcade cabinet build!

I know some people will install a fairly high-end gaming PC in their arcade cabinet. That isn’t a terrible idea! You can run some fantastic CRT shaders on a 4K OLED screen, and they look absolutely amazing! They also require a lot of GPU processing.

A lot of other people just want to play Space Invaders, Super Mario Bros., and Pac-Man, and those folks are very likely to reach for a Raspberry Pi. I think they’re missing out on a lot of amazing games!

When I built my arcade cabinet 15 years ago, I quickly discovered that there are a TON of modern games that feel like they were meant to be played on an arcade cabinet. Super Meat Boy is a blast. Death Ray Manta wouldn’t have looked out of place at my local arcade in 1991. There are so many games on Steam that just need a 4-way joystick and three or four buttons.

If you use a Raspberry Pi, you will miss out on playing games from your Steam library on your arcade cabinet. Upgrading to something like the Trigkey N100 will not just upgrade your GPU horsepower, but it will let you install hundreds of games on your arcade cabinet that just aren’t available on a Raspberry Pi.

Pat’s goofy Proxmox idea?!

I haven’t decided if this is a worthwhile endeavor, but I think it is worth writing about. You can install Proxmox on Debian. You can install Steam on Debian. You can install Dolphin, Cemu, and all sorts of other emulators on Debian.

Why shouldn’t I have a second Proxmox node that does double duty playing and streaming games from my Steam library? It would be a fantastic location to run my Jellyfin server, and it would give me another physical location in the house to place a 14 TB hard disk to store one more copy of my backups. That way, if my laundry room catches on fire, maybe the server in the living room will survive.

I could definitely do this over WiFi, but if one of my Proxmox nodes is going to be in the living room, I want that node to have Ethernet.

Why shouldn't I?!

That is the conundrum that will almost definitely prevent me from doing this. The television in our living room has only a narrow wall behind it underneath the end cap of a vaulted ceiling. It is a terrible spot to try to work from in the attic, and I fully expect there to be an extra piece of lumber to drill through roughly eight feet above the floor.

I need a really, really, really good excuse to put in that kind of effort, and I definitely don’t want to spend time in the attic now that it is 100 degrees outside. If I do decide this is worth the effort, it will have to wait until winter!

The WiFi in my house isn’t exactly slow, and I do have three access points that are all easily reachable from the TV. There’s somewhere between 300 and 500 megabits available in either direction to the closest access point, and that is plenty to do the job, but WiFi is unpredictable, and I don’t want a Proxmox backup job to hork up a Steam streaming session!

Conclusion

As seems to be the case more often than not these days, I don’t think I am at the conclusion. I think this is only the beginning of mini PC gaming for me, and I believe I have only seen the tip of the iceberg so far!

What is next in my goofing around with an N100 mini PC series of blogs? I was thinking it might be fun to haul that heavy Trigkey box over to my desk and plug it into my monitor, keyboard, and mouse. It ought to be fun to see how slow it feels doing actual work and web browsing on such a slow device.

What do you think? Do you have a machine running Steam on your TV? Is it a full-blown gaming rig, or did you settle for something like a mini PC or Steam Deck? Are you planning on giving something like this a try? Tell me what you are up to in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Am Using Swap On Linux Again

Let’s start with the motivating factor that encouraged me to write this blog post today. Then we can go back to the ancient 1990s to figure out how we got here.

The workstation that I am sitting at right now chews through swap space. It doesn’t do it quickly, but I can see swap utilization growing every day until I have a reason to reboot the machine. My swap utilization grows quickly enough that I have been more than 16 gigabytes deep into swap in just a week or two of uptime.

Four sticks of DDR4 RAM

What winds up in swap on my desktop computer? My Seafile client definitely accounts for the majority of my swap utilization. There might be a gigabyte or two of Firefox in there. Other processes only account for a couple hundred megabytes each, but a dozen processes like that add up quickly.
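
If you are curious what is sitting in your own swap, the kernel exposes a per-process VmSwap figure in /proc. Here is one quick way to get that breakdown; run it as root so you can read every process.

#!/usr/bin/env python3
# Sum VmSwap from /proc/<pid>/status to see which processes account for the most swap.
import glob

usage = []
for status in glob.glob("/proc/[0-9]*/status"):
    name, swapped_kib = "?", 0
    try:
        with open(status) as handle:
            for line in handle:
                if line.startswith("Name:"):
                    name = line.split()[1]
                elif line.startswith("VmSwap:"):
                    swapped_kib = int(line.split()[1])  # value is reported in kB
    except (FileNotFoundError, PermissionError):
        continue  # the process exited, or we aren't allowed to peek at it
    if swapped_kib:
        usage.append((swapped_kib, name))

for swapped_kib, name in sorted(usage, reverse=True)[:15]:
    print(f"{swapped_kib / 1024:10.1f} MiB  {name}")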

I also noticed that I was nearly one gigabyte deep into swap on my Proxmox server right before rebooting for the upgrade from Proxmox 8.1 to 8.2.

Why is it a good idea to let my desktop dig itself so deep into swap?

There always winds up being some stuff sitting in RAM that your system may never need to use again. Sometimes these are memory-mapped libraries and executables. Those unused bits can just be freed.

Sometimes a program generates data that it will never need again, or may only need once every few hours, or even once a day. Sometimes a program has memory leaks. I don’t want to make any accusations. Maybe Seafile really does need a few gigabytes of data in memory that it seems to never touch. Who am I to judge?

Swap on Linux

Allowing this rarely used data to be swapped out to my NVMe drive is paving over minor problems while also freeing up actual RAM for programs that will actually make use of it. I never notice a judder, slowdown, or any sort of odd pacing from data having to be moved back in from swap. Either that rarely happens, or our modern NVMe drives with their hundreds of thousands of input/output operations per second (IOPS) make things happen so fast that we can’t see it.

I had one of my four sticks of RAM in my desktop computer fail a few years ago. Crucial wouldn’t do an RMA unless I mailed them all four sticks, and that seemed like it would be a pain in the butt, so I limped along with 24 gigabytes of RAM for a while. To my surprise, I wasn’t really limping. If I ran a game while I accidentally left DaVinci Resolve running, I would drop down to only two or three gigabytes of disk cache, but that never caused any performance issues.

In all fairness, though, running Resolve and a heavy game at the same time was and still is way more likely to leave me short of VRAM, and that can turn many games I play into a stuttering mess!

In the old days with slow mechanical disks, you needed every megabyte of RAM you could possibly cram into your machine to use as disk cache. This barely matters today with fast SSDs.

Swapping to your SSD will help stave off the day when 16 GB or 32 GB of memory just isn’t enough for you. Now that my workstation is back up to 32 GB of RAM, I am in good shape. My swap utilization probably grows by half a gigabyte each day, but I rarely see less than 8 GB of RAM dedicated to disk cache.

The over-simplification here is that after a week or two, my computer is moving unused data out of $30 worth of RAM into $0.80 worth of NVMe with no perceptible change in responsiveness.

Swap used to be horrible because disks were slow

Thirty years ago, the rule of thumb was that your Linux machine should have twice as much swap as it had RAM. That rule of thumb persisted long after its useful shelf life had ended.

A 7200 RPM hard disk can come in as low as 120 IOPS in the worst case. This was true 30 years ago, and it is true today. When the data you need is behind the head, the drive has to wait for the disk to make an entire revolution before that data can pass back under the head to be accessed.
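
The arithmetic behind that 120 IOPS figure is short enough to fit in a few lines:

rpm = 7200
full_revolution_s = 60 / rpm          # ~0.0083 seconds if the data just slipped past the head
print(round(1 / full_revolution_s))   # => 120 operations per second in the worst case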

Memory was getting faster every year, and sequential disk throughput was also getting faster, but not as quickly. IOPS were stagnant. We eventually got to a point where swapping to mechanical disks was just bad for performance nearly 100% of the time.

My guess would be that it was almost 20 years ago that I stopped enabling swap on most of my machines.

Early SSDs were fast, but they were fragile!

One of the first SSDs worth buying was the 80 GB Intel X25-M. Its sequential write speeds were half as fast as most mechanical hard disks, it cost ten times as much per gigabyte, and even the cheapest mechanical hard disks had at least three times the capacity.

Mechanical hard disks still had a worst-case scenario of 120 IOPS, and their average IOPS would only be about double that. The Intel X25-M could easily manage 20,000 IOPS. It was like night and day. It felt like I added a turbocharger to my laptop, and it would have been an amazing drive to use for swap as far as performance was concerned!

I believe my Intel X25-M was warrantied for 35 TB of writes. If I used that X25-M in the same way I use my workstation’s Samsung 980 NVMe, I would be less than a terabyte away from the warranty, and this Samsung drive is only around two years old.

Even if I wanted to use my X25-M for swap, it would have been a challenge. I replaced an aging 120 GB laptop hard disk, and I was already trying to squeeze down into something that was 1/3 smaller. The performance was amazing, but I was fighting to fit everything I needed on that disk!

It is hard to kill a modern SSD with writes, so don’t be afraid to set up a swap partition, volume, or file!

My desktop computer’s NVMe is one of the lower-end offerings from one of the premium manufacturers. My 1 TB Samsung 980 has a 600 TB write warranty, while the information reported by S.M.A.R.T. suggests that Samsung expects it to last twice as long as that.

Stable Diffusion Guy With An NVMe Drive

I am using this NVMe for swap, for an lvmcache volume that caches my Steam game library, for my root volume, and my home volume. I have accrued 34.2 TB of writes since March 22, 2022. My 5-year warranty will have long since expired by the time I reach 600 TB of writes, and it will probably have been replaced by a much larger NVMe long before I get there!
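
Some rough math on how far away that warranty limit really is. I am assuming “today” is roughly May 2024 for the purposes of the calculation:

from datetime import date

written_tb = 34.2
in_service_days = (date(2024, 5, 1) - date(2022, 3, 22)).days  # assumed "today"
tb_per_month = written_tb / (in_service_days / 30.4)           # ~1.35 TB of writes per month

months_left = (600 - written_tb) / tb_per_month
print(round(months_left / 12))  # => roughly 35 more years at this pace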

Not only that, but S.M.A.R.T. says my Samsung 980 can handle double the volume of writes that it is warrantied for. This makes a lot of sense. If Samsung only warrantied the drive right up to the edge of failure, then they’d probably be replacing a lot of drives!

Freeing up 8 GB of RAM isn’t a lot, until it is?!

There is a common theme when buying computer hardware. It might be inexpensive to put together a machine with 64 GB of RAM instead of 32 GB, or four 20 TB SATA disks instead of three, or two NVMe drives instead of one.

Sometimes you reach a limit, and when you need to cross that threshold, you have to buy more expensive components. Maybe your motherboard only has four SATA ports, two m.2 slots, and support for only 64 GB of RAM. Maybe your case only has room for four 3.5” hard drives.

Proxmox Server Swap Utilization

NOTE: Half of the swap space on my Proxmox server belongs to the high-availability daemons that I am not even utilizing.

Do you need more than four SATA disks? You might need to buy a more expensive case. Do you need two or three NVMe drives? You might have to pay a couple of hundred dollars more for a fancier motherboard. Maybe that new motherboard choice is bigger, and it won’t fit in your petite mini-ITX build now.

Sometimes you are just at the limits of the class of machine you own or that you are purchasing. Sometimes you have a laptop, so you only have two SO-DIMM slots, and one m.2 slot.

Sometimes beefier components just cost more. Maybe a 32 GB DIMM costs precisely twice as much as a 16 GB DIMM. That’s fantastic, but a 64 GB DIMM might cost three or four times as much as a 32 GB DIMM.

Finding ways to stay on the low side of the pricing hockey stick can save you a lot of money.

Conclusion

Every single excuse to avoid enabling swap space is gone, and there are moderate benefits to be had. The benefits are even bigger once you have filled up all your DIMM slots, and you have no more room to add more RAM, even if you wanted to spend more money!

How do you feel about swap space on modern systems? Have you been avoiding swap for similar reasons? Have you started enabling swap again now that storage is both fast and durable? Tell me about it in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

It Was Cheaper To Buy A Second Mini PC Homelab Server Than To Upgrade My RAM!

Was it ACTUALLY cheaper to buy a second mini PC than to upgrade the RAM in my existing N100 server? It depends on how you look at it.

In the most literal and simplistic sense, it was not cheaper. I paid $60 more than I would have just for a RAM upgrade, but I think it will all make sense when we talk about why I made this choice.

Every N100 mini PC and mini-ITX motherboard has only one SO-DIMM slot!

This is a bummer, right? I believe the mini PCs built around the slightly older, slower, and less power-efficient N5095, N5105, and N6005 also had only one SO-DIMM slot, but the mini-ITX NAS motherboards built around those CPUs had two DDR4 SO-DIMM slots, and those motherboards could support up to 64 GB of RAM.

My Proxmox N100 Mini PC, Brian Moses's Makerunit NAS Case, and my UPS

On the left, my CWWK N100 Proxmox server and 14 TB USB HDD. In the center, Brian Moses’s 3D-printed Makerunit 4-bay NAS running TrueNAS Scale. On the right, my aging APC BR-800. Up top, my network cupboard.

The N100 NAS motherboards and mini PCs only have a single DDR5 SO-DIMM slot. You can pay a premium price for a 48 GB DDR5 SO-DIMM, but that is as far as they will go.

Not only that, but all the good deals on mini PCs tend to ship with RAM and an NVMe drive already installed. My router-style N100 mini PC homelab server arrived at my door with 16 GB of DDR5. If I want to upgrade the RAM, I have to pull out that 16 GB stick, put it in a drawer, and hope I figure out something useful to do with it later on.

I scratched my head, then started checking prices of N100 mini PCs on Amazon. Then I felt smart. I hope.

I definitely got more for my money by buying a second mini PC instead of a stick of RAM!

This is all going to be a little subjective because we aren’t quite comparing apples with apples. The inexpensive mini PC I chose was a Trigkey N100 with 16 GB of DDR4 RAM and a 500 GB NVMe for $149. I bought this instead of a 32 GB stick of DDR5 RAM for $89.

I think this was a smart move. The 500 GB NVMe is worth at least $35. The 16 GB stick of DDR4 is worth at least $32, but that’s where the oranges come in, because 16 GB of DDR5 costs about 50% more.

I don’t know how best to assign a monetary value to the rest of the mini PC, but I reckon it is worth more than the remaining $82, and there’s definitely more value here than the $59 difference between the stick of DDR5 RAM and my new Trigkey mini PC. There’s a WiFi card in there, a fast little CPU, and a gigabit Ethernet port.
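
The napkin math behind that reasoning:

trigkey_mini_pc = 149   # 16 GB DDR4 and a 500 GB NVMe included
ddr5_stick = 89         # the 32 GB DDR5 SO-DIMM I was shopping for

nvme_value = 35         # rough value of the included 500 GB NVMe
ddr4_value = 32         # rough value of the included 16 GB DDR4 SO-DIMM

print(trigkey_mini_pc - nvme_value - ddr4_value)  # => 82: what the rest of the box needs to be worth
print(trigkey_mini_pc - ddr5_stick)               # => 60: the actual extra cash out of my pocket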

Why was I shopping for a memory upgrade?

I didn’t actually need this upgrade at all. The virtual machines and containers on my Intel N100 Proxmox server only eat up around 10 gigabytes of RAM. I have some room to grow into, and there’s even some superfluous stuff running that I could always eliminate if the need ever arises.

Right before rebooting after my upgrade from Proxmox 8.1 to 8.2, I noticed that my server was around 500 megabytes deep into swap after 62 days of uptime. I know well and good that my server would most likely be 500 megabytes into swap even if I had 128 GB of RAM in there, but that tickled my brain enough to search Amazon for prices.

Trigkey Mini PC

When I realized how little it would cost to just buy a second Intel Celeron N100 mini PC, it was quite the challenge to resist putting one in my Amazon shopping cart.

Except that I did resist that urge. I was staring at a $145 deal on that same Trigkey N100 mini PC with 16 GB of RAM. By the time I decided that I just had to have it, the deal was gone.

There were a lot of Celeron N100 mini PC deals in the $155 to $165 range, but I somehow convinced myself that I wasn’t going to pull the trigger until I could put one in my cart at under $150.

So what am I really saying here? I wound up buying something that I don’t really need mostly for fun, but at least I got it for under $150!

The bummer about shopping for mini PCs

If you can make it work out in your favor, this bummer can be awesome.

All the aggressively priced mini PCs that are constantly being discounted ship with RAM and storage, but they never ship anywhere near maxed out. That means that if you are going to install upgraded components, you’re going to be pulling components out, and you might not have anywhere to make use of them.

You can get barebones mini PCs with no RAM or storage installed, but most of the time, those options cost more than the mini PCs that ship with some RAM and storage!

For my own homelab purposes, and probably for many of your purposes, I believe that the Celeron N100 paired with 16 GB of RAM is a nice balance. It is fortunate that they happen to ship with a reasonable amount of RAM like this.

Trigkey N100 Mini PC

The Trigkey N100 Mini PC has the same manufacturer address as Beelink Mini PCs, and it looks almost identical in construction!

I was a fan of the N5105 and N5095 mini PCs last year, but they almost always ship with 8 GB of RAM and only 128 GB of storage. That was fine with last year’s pricing, but today you can get a faster N100 with twice as much RAM and four times as much storage for about $25 more. It is almost silly to buy the N5095 mini PCs unless you have absolutely no use for the extra RAM or storage.

The Ryzen 5560U and 5700U Beelink mini PCs have the same problem, but in the opposite direction. They offer roughly twice as much CPU horsepower and possibly four times as much GPU grunt as the Celeron N100, but those faster Ryzen mini PCs almost always ship with 16 GB of RAM. Even worse, I believe they ship with a pair of 8 GB SO-DIMMs.

How could you make this work out in your favor? Maybe you could buy an N100 with 16 GB of DDR4 RAM, an N5095 with 8 GB of DDR4 RAM, and a single 32 GB DDR4 SO-DIMM. You upgrade the N100 to 32 GB, put that 16 GB SO-DIMM in the N5095, then throw the extra 8 GB in a drawer.

I will stick with Celeron N100 mini PCs with 16 GB of RAM in my homelab for now. Maybe they’ll get an upgrade someday when DDR5 is obsolete and people are just throwing away sticks of DDR4!

The Intel Celeron N100 paired with DDR4 RAM should be slower, and that is OK!

Instead of upgrading my RAM, I added a second Celeron N100 to my homelab Proxmox cluster. Even though I didn’t precisely double the amount of CPU horsepower available in my homelab, I did come pretty close, and it didn’t cost all that much extra for that upgrade.

Much to my surprise, the DDR4 mini PC clocked in at a few percentage points ahead of the DDR5 mini PC! This isn’t a big deal. The differences are small enough that it could just be a variation in the way the wind is blowing today, but I expected this minuscule difference to be pointing in exactly the opposite direction!

Last week, my entire homelab had 16 GB of RAM paired with around 2,786 units of multi-core Geekbench 5 score. Had I swapped in a 32 GB stick of DDR5, I would still have 2,786 units of CPU horsepower.

Instead, my homelab now has 32 GB of RAM paired with 5,639 units of multi-core Geekbench score. That is the combined equivalent CPU horsepower of a $250 Ryzen 5560U mini PC.
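
For anyone who wants to check my addition, and assuming the 5,639 figure is just the sum of the two boxes’ multi-core scores:

cwwk_ddr5_multicore = 2786   # my first N100 Proxmox server
homelab_total = 5639         # the cluster's combined multi-core score

trigkey_ddr4_multicore = homelab_total - cwwk_ddr5_multicore
print(trigkey_ddr4_multicore)                                  # => 2853
print(round(trigkey_ddr4_multicore / cwwk_ddr5_multicore, 3))  # => 1.024, a couple percent ahead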

You can compare the scores of my Trigkey DDR4 vs. my CWWK DDR5 N100 mini PCs.

The differences in these benchmarks, and why Geekbench 5?!

One of the reasons that I am still using Geekbench 5 is that I own a copy. I haven’t upgraded because Geekbench 6 does not include an AES encryption test.

My personal network relies very heavily on Tailscale, and my various off-site locations are starting to have Internet connections faster than some of my machines can encrypt data. When shopping for upgrades, being able to see how much of a bump to encryption speed I might be seeing is awesome.

In most of Geekbench’s tests, my new DDR4 mini PC is faster than my old DDR5 mini PC by 2% to 5%. There are a few tests that the DDR5 box does excel at, and one of those tests is AES encryption. The more expensive N100 mini PC is ahead there by a whopping 36%.

This makes me pleased that my first homelab server has the faster RAM, because it is equipped with four 2.5 gigabit Ethernet ports. It can’t encrypt Tailscale data that fast, but the extra 36% will help use more of the potentially available pipe.

The Trigkey mini PC with DDR4 RAM only has one network port, and it is a slower gigabit Ethernet port. It can easily transfer 900 megabits per second of encrypted data via Tailscale, so that encryption deficiency isn’t going to be a real-world bottleneck for me.

Mini PCs with an N100 and DDR5 cost quite a bit more, and that is also OK!

The Beelink EQ12 Mini PC with DDR5 goes on sale frequently for $229. Never buy any of these mini PCs at full price. There will almost always be one or two sales every single week.

The Beelink N100 with DDR5 is a very different machine than the DDR4 model, so you don’t just get a potential boost in performance from the faster DDR5 RAM. You also get a pair of 2.5 gigabit Ethernet ports instead of a single gigabit Ethernet port. Depending on your use case, this is either a huge upgrade or a complete nothing burger.

My current homelab machine is even weirder. It has a Celeron N100 paired with a single stick of DDR5 RAM, but it also has four 2.5 gigabit Ethernet ports and FIVE m.2 NVMe slots! I am not yet utilizing all these extra features, but I did have to pay extra for them.

If you really do need a faster network interface, or if you grow out of your 1-gigabit Ethernet port in the future, you can add a USB 2.5-gigabit Ethernet dongle for around $7.

I didn’t account for the increase in my electric bill

My CWWK N100 mini PC burns through 0.34 kWh of electricity every day. That is a little over $15 per year where I live.

I won’t get to put a power meter on the Trigkey box to get a good reading until I put it into service running Proxmox, but I am expecting the numbers to be comparable. The CWWK box is burning some extra power with its four 2.5 gigabit Ethernet ports, and the Trigkey is running a fan.

They won’t be identical, but they ought to be pretty similar.

I paid $60 more to add this second mini PC to my homelab than I would have paid if I had just upgraded my CWWK mini PC’s RAM, and I will pay that much again in electricity over the next four years.

This might only work out to ACTUALLY being a good deal if I can manage to get some good use out of it!

UPDATE: I need to sneak this data point in here somewhere just to make sure I don’t lose it! The Trigkey N100 with a fresh Proxmox install uses 0.123 kWh over a 24-hour period. That is an average of 5.125 watts.
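
A quick sanity check on those meter readings. The electricity rate is back-figured from the $15-per-year estimate for the CWWK box, so treat it as an approximation:

cwwk_kwh_per_day = 0.34
cwwk_cost_per_year = 15.0
rate = cwwk_cost_per_year / (cwwk_kwh_per_day * 365)  # ~$0.12 per kWh where I live

trigkey_kwh_per_day = 0.123
print(round(trigkey_kwh_per_day / 24 * 1000, 3))  # => 5.125 watts on average
print(round(trigkey_kwh_per_day * 365 * rate, 2))  # => about $5.43 per year at that rate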

I want to mess around with this N100 mini PC before I put it to work!

I haven’t completely filled up my Proxmox server with virtual machines yet. I have several gigabytes of RAM to spare, and I am barely pushing this tiny CPU at all. I don’t have to rush to load Proxmox on the Trigkey server to quickly migrate a couple of virtual machines to alleviate pressure. I can take my time.

I have lots of questions about the Celeron N100. How terrible is the GPU? Can it play any games? How is the latency streaming games using Steam’s remote-play feature compared to my ancient Steam Link hardware?

I have already farted around with some of this stuff, but I don’t have much recorded data to compare it to.

What little data I have so far on gaming with a Celeron N100

Dead Cells runs locally at more than 100 FPS. This isn’t terribly surprising, but I am excited about this because the Nintendo Switch dips under 30 FPS any time there’s a lot of fire on the screen. Would an N100 on my TV be my new Dead Cells console?

I tried working my way backward in time. Borderlands 2 couldn’t even reach 20 FPS on the title screen at 720p with low settings with FSR upscaling to 1080p. Borderlands GOTY Enhanced Edition could break 30 FPS with the same settings. I don’t even like playing first-person shooters at 60 FPS, so I can’t imagine doing it at 30 FPS!

Trigkey N100 Mini PC as a gaming machine

I squeezed the Trigkey N100 with Steam onto our TV stand in the living room right next to the PlayStation 3!

Gunfire Reborn is a well-optimized game with relatively simple graphics. I could only get it running at just over 50 FPS, but most people would consider that to be playable!

I spent ten minutes streaming games from my computer to the Celeron N100 mini PC over gigabit Ethernet. I was seeing around 8 milliseconds of streaming latency over gigabit Ethernet and 13 milliseconds with the mini PC connected to 5.8 GHz WiFi and the gaming PC connected to gigabit Ethernet. Is that good? That is roughly half of a frame at 60 FPS. My cheap Alldocube Android tablet was seeing around 24 milliseconds of latency over WiFi.

I can tell you that Dead Cells was unplayable with 24 milliseconds of latency, but Red Dead Redemption 2 only felt a little off with 70 milliseconds of latency over T-Mobile’s 5G network.

Emulation might be where it is at

I am a huge fan of emulating old gaming systems. I built an arcade cabinet with all my favorite childhood games installed. I own an Anbernic RG35XX, and I use it to play all sorts of NES and SNES games. It doesn’t take a lot of horsepower to emulate these older systems.

I have not yet tested this myself, but it looks as though the Celeron N100 has enough horsepower to emulate just about anything up to and including the Nintendo Wii U. That is exciting, because there are a few good romhacks of New Super Mario Bros. Wii with completely new and awesome levels to play. I haven’t gotten to play those since I gave away my modded Nintendo Wii, and I bet there are fun romhacks for the Wii U that I haven’t even heard of.

My choice won’t make sense for everyone!

My chonkiest VM has 3 GB of RAM allocated to it, and the majority of the services I might want to run here at home would probably fit in 1 GB or less. That makes 16 GB feel spacious, and it isn’t a big deal for me to split my virtual machines up between two or more hosts with only 16 GB of memory each.

Stable Diffusion Geeky Dude

Someone with different needs might feel claustrophobic in this sort of space. If you have lots of virtual machines that each require more than 8 GB of RAM, then you’re going to have a hard time divvying those up between Proxmox servers with only 16 GB of RAM.

Just because this was a good deal for me does not mean it will be a good idea for you, but even if it isn’t, maybe the line of thinking that got me here will lead you down a comparable path.

Conclusion

This is for sure not the conclusion. This is only the beginning, right?! I don’t know what experiments outside of emulation and video games I will come up with for this Intel N100 mini PC from Trigkey, but I expect I will come up with something!

I haven’t decided where its permanent home will be. Should it live right next to the CWWK N100 mini PC underneath my network cupboard? Should it live in my office so it can marginally streamline my Octoprint setup for my Sovol SV06 3D printer? It would be nice to attach another 14 TB USB hard disk to it so I could have a second on-site backup of my data on the exact opposite side of the house.

What do you think? Did I make the right move by buying a second mini PC instead of a RAM upgrade? Are you now thinking about doing the same thing to upgrade your homelab? Is it disingenuous to say this was cheaper than buying a stick of RAM? I still feel like I saved money, but am I wrong about that? Let me know in the comments, or stop by the Butter, What?! Discord server to tell me why I am wrong!

Please DO Share Our Links on Mastodon - Here’s Why!

I don’t have a ton to say here. This title came to me as soon as I saw the opposite topic posted on It’s FOSS News. When I saw that their complaint might have been about only 15,000 requests and 115 megabytes of traffic all coming in over the span of a few seconds, I figured I should post something. Maybe you will all share it on Mastodon to try to make my site break for a little while!

I have no real content to post today, so here is a picture of my adorable cat. Her name is little Miss Trudy Judy. She is quite friendly!

Little Miss Trudy Judy the cat

I can at least tell you why you should share our links on Mastodon. You should share them wherever you like. All our blogs are sitting behind Cloudflare, and I am pretty certain that I have patshead.com just about as heavily cached as Cloudflare will let me.

It has been quite a few years since I dialed up all the caching settings, so I hope they are still working correctly. We might just find out together if you share this post on Mastodon!

For good measure, here is a Stable Diffusion image of a guy who must be working hard to keep my little virtual private server with its massive 512 megabytes of RAM online.

Stable Diffusion Power Guy

I hope you are having a good day. I have been trying to do a better job of posting on my Mastodon account. It would probably encourage me to post more if you happened to give me a follow!

I Am Tentatively Excited About The Sovol SV08 3D Printer

There is just so much to like about the Sovol SV08 on paper. A solid-looking Voron clone running Klipper with a massive 350 mm cube of build volume for under $600 is a really good value. It is a good value even if Sovol misses the mark in a lot of other areas, and they have sure missed a few important targets that every modern printer should be meeting.

I think we should talk about the potential problems with the Sovol SV08 first. If you’re buying your first 3D printer, you should most definitely know what you are getting yourself into. Sovol has more than a little sneaky verbiage sprinkled around the Sovol SV08 listing that makes it sound like it matches the Bambu printers and the Prusa MK4 in the most important ways, but the SV08 just doesn’t have the sensors to pull off the same fantastic usability tricks.

Lil Trudy Judy and the Bambu A1 Mini

There are definitely some things you should think about before paying to be an alpha tester for Sovol’s latest printer.

Strain-sensor bed leveling should be table stakes in 2024

The Bambu X1C started shipping about two years ago. It uses strain sensors mounted in the bed to detect when the tip of the nozzle makes contact with the surface of the bed. The Prusa MK4 started shipping about a year later with a single strain sensor in the hot end to serve the same purpose. Then late last year, the Bambu A1 Mini shipped with a similar strain sensor setup in the nozzle.

The Prusa MK2, Prusa MK3, Sovol SV06, and Sovol SV08 all use an inductive probe to generate the mesh to level the bed. The inductive probe can’t detect the PEI surface. What it can detect is the steel sheet beneath the PEI surface. That means that the operator of the 3D printer needs to manually dial in a correction to account for the thickness of that PEI surface, and you will have to make this adjustment every time you swap in a new print surface.

Lost In Tech’s review is the one that made me more optimistic about the Sovol SV08!

The Sovol SV08 does use a strain sensor. That strain sensor is mounted on the bed, and it is used to allow the printer to account for the variances in the induction probe that are caused by temperature changes. This strain sensor can not figure out how thick your sheet of PEI is. You will still have to calibrate for that yourself.

Why is this a big deal? The Sovol SV08 is using seven-year-old bed-leveling technology. If Sovol put a strain sensor in their new hot end on the SV08 and T300, they would have eliminated a HUGE source of error. The vast majority of new-user problems posted in r/FixMyPrint and r/Sovol would be eliminated if Sovol used a strain sensor in the toolhead.

It is a bummer that Sovol missed the boat, but simply having a strain sensor in the right place wouldn’t mean that things would be perfect. Creality is using a strain sensor to eliminate the need to calibrate your z-offset on the latest Ender 3, but it doesn’t seem to work nearly as well as Bambu’s or Prusa’s strain-sensor bed leveling. There is some software engineering that has to happen as well!

Is this a deal-breaker? Definitely not. Inductive probes are a solid bed-leveling technology, and you should only have to dial in that Z-offset once, but dialing it in the first time is tough if you have never done it before.

Sovol has included that extra sensor to help compensate for the variance in the inductive probe, and it looks like it does a fantastic job at compensating. This will be a nice improvement over the Prusa MK3 and the Sovol SV06. The trouble is that Sovol hasn’t included this functionality in their firmware or slicer profiles. Lost in Tech has a great video talking about implementing this functionality in Klipper, and it looks like it works great once you get it working.

Sovol's Words

NOTE: This is a screenshot of Sovol’s own words from the Sovol SV08 page on their website as of April 26, 2024.

Their marketing material strongly implies that you don’t have to worry about dialing in your first layer, but they sure don’t seem to have the appropriate sensor configuration to make that happen. The first four lines of copy in that screenshot make it sound like you will have a Bambu-style experience, but the last line suggests this is probably not the case. It feels a little slimy.

You are paying Sovol to be one of their alpha testers

As is often the case, Sovol has built a rather nice piece of hardware, but they haven’t put in the time to tie it all together with well-tuned software.

Sovol has been known to do a reasonable enough job on the software side in the past. They modified Marlin just enough on their SV06 to support Prusa’s slightly odd G-code changes. It worked well enough that you could literally pretend that your Sovol was a Prusa MK3. I just duplicated my Octoprint VM, plugged my Sovol SV06 in, and started using my existing PrusaSlicer profiles. If you just tell your slicer that you are using a Prusa MK3, your Sovol SV06 will just work.

As far as I can tell, the Sovol SV08 is shipping with the bare minimum of software support. There is a single slicer profile for PLA. The extra bed-leveling sensor isn’t utilized. Things are just not quite ready yet.

Will Sovol put in the work to build a software ecosystem for their new printer, or will they leave that for the community to worry about? Will they move on to the next model instead?

Do you really need a coreXY printer?

If you want a coreXY printer just because they are cool, that is fine. Buy the Sovol SV08 or Bambu P1S right now! If you want to buy a printer that is fast and accurate, coreXY really isn’t necessary these days.

Yes, these machines can run faster than a bedslinger, and they can be upgraded and tuned to run ridiculously fast. That said, bedslingers have gotten really fast now. Not nearly as fast as the fastest SpeedBoatRace Vorons, but a Bambu A1 or A1 Mini will for sure give the new Sovol SV08 a run for its money. The Bambu and Sovol are both usually going to be limited by the flow rate of their hot ends, and they are pretty comparable there.

Little Trudy Judy's Tongue Being Weird

When I bought my Prusa MK3S, I assumed my next printer would be a coreXY. Then everyone started using input shaping on their bedslingers, and the extra potential performance didn’t matter as much. I would still like to own a coreXY printer, but I don’t have much good reason to pay extra for one.

The Bambu A1 vs. the Sovol SV08

The Sovol SV08 looks like it wants to compete with the Bambu X1C or P1S, but the lack of multimaterial support and the price point put it in practical competition with the Bambu A1. These two printers are priced the same if you include the AMS with the Bambu A1; without the AMS, the A1 is $150 cheaper.

These comparisons are always apples to oranges. No two companies sell printers with directly comparable specs. The Bambu A1 comes with an AMS at the same price, but the Sovol SV08 is significantly larger.

Tiny Benchy on a Bambu A1 Mini

Tiny 3DBenchy printed with a 0.2 mm nozzle on the Bambu A1 Mini with zero tuning

The most important thing I can say about Bambu here is that they aren’t making you be an alpha tester. Pretty much everything is well tuned and just works during the entire workflow from STL to completed print. Their slicer profiles are well tuned, especially if you stick with a 0.4 mm nozzle and common materials. Their printers all make sure you get a perfect first layer, and the A1 and X1C are both able to dial in your flow rates and pressure advance to help get that last bit of quality out of your parts. It is all easy to use, and this process was tested, tuned, and completed before you even thought about ordering the printer.

I repeat this over and over and over again because it is so important. Bambu’s and the Prusa MK4’s bed-leveling implementations are awesome for pros and absolutely amazing for beginners. This feature alone solves the vast majority of new-user problems posted to r/FixMyPrint.

If you have never used a 3D printer before, this is the feature you need. It can save you many hours of frustration. This could very well be the difference between having a fun time or wanting to smash your new toy to pieces with a sledgehammer.

Who should be looking at the Sovol SV08?

I absolutely believe you should buy a printer from Bambu. I think the Bambu A1 Mini is a fantastic printer at an amazing price, with or without the AMS Lite. I have been using mine since early December. It is a small printer, but I have only had to fire up my Sovol twice since then to print things that wouldn’t fit on the A1 Mini.

Some of you reading this will just need a bigger printer. The Bambu A1, P1S, and X1C are way bigger than my Bambu A1 Mini, but they are only just barely big enough to print a full cosplay helmet. The 300-mm bed on the Sovol Comgrow T300 would give you a significant amount of wiggle room there, and the 350-mm bed on the Sovol SV08 may even be a welcome upgrade for you.

Bambu A1 Mini Gridfinity

A 1x1 Gridfinity bin tucks perfectly into that slot on the Bambu A1 Mini, so I printed a T-shaped grid to fit right in!

If you have a solid use case for a huge printer, then the Sovol SV08 sure looks like a good compromise on performance, quality, and price.

You probably don’t need a printer as big as the Sovol SV08 or even the Sovol T300. You can get a Bambu A1 for about the same price.

You won’t be alpha testing a printer. You’ll be buying a proven piece of hardware with a buttery-smooth printing workflow. You’ll be trading the Sovol SV08’s increased size for better bed-leveling technology that works perfectly almost every single time, and you’ll be getting snazzy flow-rate sensors that almost guarantee perfect prints.

Is setting a Z-offset and dialing in flow rate really that hard?!

It isn’t difficult. I had my Z-offset dialed in on my Sovol SV06 in less than five minutes, and as long as you err slightly on the side of underextrusion, dialing in flow rate isn’t challenging. You have to get the Z-offset almost perfect, but flow rate can be off by as much as 10%, and you might never notice.

Stable Diffusion Guy Working on his Z-offset, probably not, but he looks angry enough

I have manually leveled beds on a dozen different models of printer. I have been dialing in Z-offsets for years. Every time you do it, it gets easier.

Do you know what the trouble is? Someone buying their first printer doesn’t know what to look for when they are adjusting the live Z during the test print. If you know what you are doing, it is really simple, but a new user only has a few paragraphs of explanation, some YouTube videos, and a hope that they understood what things should look like.

The Sovol SV08 is for tinkerers

For some people, the 3D printer is the hobby. For others, the 3D printer enables or supplements their other hobbies or work. Most of us will likely wind up falling somewhere in between, but I bet most of you want to print more and tinker less.

We all had no choice but to tinker in order to operate a 3D printer ten years ago. Now you can buy a Bambu A1 Mini, spend 10 minutes snapping the AMS Lite together, and be printing in almost no time at all. It is a delightful experience.

The Sovol SV08 is a cheaper and much easier to assemble version of the Voron, and that is absolutely amazing. I always wanted to own a Voron, but the thought of spending more than $1,000 on a kit, having to 3D-print all the parts myself, and then having to spend a full weekend of my time assembling it before I could even hope to begin dialing it in didn’t seem like fun. Especially when you can spend half as much on a Bambu P1S and be printing in twenty minutes.

The Sovol SV08 looks SO much easier to assemble than a Voron or Prusa MK4, and it cuts the required tinkering down by orders of magnitude. The printer hardware looks capable out of the box, but these Voron-style printers tend to attract tinkerers, so there’s a very good chance you’ll want to modify something on yours.

Even if you don’t modify the hardware, you are going to have to tinker with slicer profiles and Klipper macros. Sovol does not set you up for success here like Bambu and Prusa would. You currently have the bare minimum in software support to get you going. You will be relying on the community and your own know-how to get good, fast, clean prints.

There is nothing wrong with that. It is important that you know what you are getting into.

The Sovol SV08 is open-source

This is heckin’ fantastic! At first, I was worried that this would only apply to the Klipper firmware, but then I saw the Klicky Probe for the Sovol SV08 on Printables, and it is obviously designed around existing models of the Sovol SV08.

I am a huge fan of open source. I have been running almost entirely open-source software at home since the late nineties, and nearly as long in my professional life. Open source is one of the reasons why I chose to buy a Prusa MK3S, but Josef Prusa is now trying to weasel out of open-sourcing as much as he can get away with, so the Sovol SV06 and Sovol SV08 may both be more open printers than the Prusa MK4.

The closed ecosystem behind Bambu’s printers is easily their biggest downside. If Bambu disappeared tomorrow, which is becoming less and less likely every day, you wouldn’t be able to buy parts to repair your printer in the future. There are oodles of custom parts on these printers. That is no small part of why they are able to price their printers so aggressively.

If Sovol goes out of business tomorrow, and Bambu’s continuing success and extremely aggressive pricing make this likelier than ever, you will still be able to repair your Sovol SV06 or SV08. So many of the components are common off-the-shelf parts. We have STL files for all the injection-molded parts, so we can 3D-print replacements. There is a future here even if the manufacturer goes away.

Final thoughts

The Sovol SV08 and its similarly priced competitors really are apples and oranges. The printers from Bambu are well-oiled machines with smooth workflows. They are a lot like having a small manufacturing department in your home office. You can send stuff to your manufacturing robot, and it will usually just do a good job with zero interference from you. You should still take a peek at it from time to time, but your Bambu prints are likely to just work the vast majority of the time.

The Sovol SV08 will be more like an intern fresh out of school. Sure, it is a bigger employee who can print larger things, but the SV08 is untested. You are going to have to help him along for quite a while. You’re going to have to ask your friends for advice on how to mold your new intern into the sort of employee you want to have around. It isn’t fully baked. At least, not yet!

What do you think? Are you shopping for your first 3D printer? Or are you a veteran looking to add another printer to your farm? Do you agree that the 256 mm printers from Bambu are worth living with the smaller size for all the usability features? Or do you think it is super important to have that 350 mm print bed? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Two Weeks Using The Jellyfin Streaming Media System

| Comments

I am not sure how to write this blog post without implicating myself in any shady, nefarious, or illegal activities. I will do my best.

I used to rip my own DVD media, then eventually Blu-ray media. I had a pretty sizable collection. An old blog post indicates that I had five 1 TB hard disks in my arcade cabinet configured in RAID 6. That was back in 2010. It wasn’t too many years before Netflix and friends made storing so much media obsolete.

Jellyfin under Proxmox

Things in the streaming world are getting tricky again. A lot of people are complaining about the costs of subscribing to so many services, but our large number of streaming subscriptions is still cheaper than my cable bill was in 2002, and we are able to share some of these accounts with friends and family.

What got me thinking about once again sailing the seas of sketchy TV and movie downloads is Amazon. I was three or four seasons deep into watching Northern Exposure when Amazon enabled ads in Prime Video streaming. I made it through the third ad before I stopped watching.

For historical reasons, the Amazon account on our living-room TV isn’t mine. It is the account that all of our grandfathered-in Prime shipping nonsense is tied to. I didn’t have the foggiest idea of how to pay Amazon to stop showing me ads at three in the morning, so I just moved on to other shows on other services.

This happened right around the time I migrated from my old homelab server to a new N100 mini PC running Proxmox. There’s a handy set of Proxmox helper scripts for installing all sorts of things, and one of those scripts sets up a Jellyfin server in an LXC container. This seemed like a good excuse for hedging my video-streaming bets!

The Jellyfin helper-script installer for Proxmox is fantastic

If you install Jellyfin in a virtual machine, then you need to pass your GPU through to that virtual machine to take advantage of hardware transcoding acceleration. You only get to pass your consumer-grade GPU through to one virtual machine, and that seems like a bummer.

You can share the host GPU with as many LXC containers as you like, and the Proxmox helper script will detect whether you have an Intel, AMD, or Nvidia GPU, install the correct libraries to work with your GPU, and correctly configure the permissions on your container so that Jellyfin can make use of the GPU.

There really is nothing to it.
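
If you want to double-check the result yourself, here is a tiny sanity check you can run inside the container. It is just my own quick test, not part of the helper script, and it assumes the usual /dev/dri device names, which can vary from system to system.

#!/usr/bin/env python3
# Quick check that the host GPU made it into this LXC container.
import glob
import os

# The render node is what hardware transcoding actually opens.
# /dev/dri/renderD128 is typical, but the number can vary.
nodes = glob.glob("/dev/dri/renderD*")

if not nodes:
    print("No render nodes found -- the GPU is not mapped into this container.")
else:
    for node in nodes:
        usable = os.access(node, os.R_OK | os.W_OK)
        print(f"{node}: {'usable' if usable else 'present, but this user lacks permission'}")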

My tiny Intel N100 GPU can transcode 1080p HEVC video to 1080p H.264 at 270 frames per second, and it can transcode 2160p 10-bit HEVC video to 2160p at around 75 frames per second.

This is challenging to benchmark, because playing back either of those videos with transcoding enabled does not max out the GPU. When I play a second video stream, GPU utilization rises from around 50% to about 75%. I am pretty confident that this N100 could manage to transcode at least four of those 2160p 10-bit videos at the same time.

The encoding preset in Jellyfin’s server settings is set to auto. I tried adjusting it to the fastest preset, called ultrafast, and transcoding went at the same pace as auto, so my little N100 is already trading as much quality for performance as it possibly can. The bit rates are high enough that I haven’t noticed any encoding artifacts.

The Tailscale helper script is fantastic!

One of the Proxmox helper scripts not only installs Tailscale in your LXC container, but it also configures the permissions of the container to allow Tailscale to work. That second part is amazing, because I have manually installed two LXC containers since setting up Proxmox, and both times I forgot that I had to make configuration changes to get Tailscale to work.

My phone and tablet have no problem connecting to my Jellyfin server from outside my network, and thanks to Tailscale, I didn’t have to expose my Jellyfin server to the entire world.

My Chromecast with Google TV 4K devices don’t need transcoding

They have been able to play back every weird file I have downloaded directly with no transcoding of the video or audio. That includes the 2160p 10-bit HEVC movie.

This is where I ran into my first Jellyfin bug, and I really don’t understand what is actually happening. That 10-bit movie looks like it has a flat color profile when played in my living room. I am told that you need to enable tone mapping when playing back HDR video on a non-HDR TV.

AI Dwight Rider

It fails to play if I enable tone mapping on the server. If I set the Android TV Jellyfin client to a bitrate other than auto, the colors appear more vibrant. I can only assume the colors now look correct, but I don’t have another identical video to compare to.

I assumed forcing a maximum bitrate would force the video to transcode, and I figured this might be why the tone mapping was working. This is not the case!

The video looks the same whether I enable or disable tone mapping on the server, and the video is STILL PLAYING DIRECT. There isn’t any significant CPU or GPU usage on the server, the dashboard says it is playing direct, and the colors do look correct.

I think this is pretty weird, but all things considered, it is a rather minor glitch.

I have a 1080p Chromecast with Google TV in my office. I haven’t tested it as heavily, but it does seem to play back all my 1080p HEVC video without transcoding. If I had known I would be installing Jellyfin, I would have spent the extra $20 on the 4K model even though the ancient 43” TV in my home office is only 1080p!

NOTE: The Chromecast HD regularly goes on sale for $19.99, and the Chromecast 4K regularly goes on sale for $39.99.

Sometimes my Android TV device gets stuck dropping frames

I may as well mention my only other glitch just to get it out of the way. Sometimes after playing for 10 or 15 minutes, some of my video streams will start dropping frames. The Jellyfin app on my Chromecast doesn’t give me the tools to see what is going on, but it sure looks like it is only playing half the frames.

UPDATE: If you’re not having this problem, you can safely skip this section of the blog. This isn’t a Jellyfin problem. It is a Chromecast problem, and it occasionally happens to me with Netflix as well. The solution for me was to disable game mode in the Chromecast settings. I haven’t had a weird stutter, judder, or jitter since adjusting this setting.

It almost feels like the Chromecast is falling behind then playing frames faster than it should to catch up, but that can’t be the case, because Jellyfin is setting the TV to 24 Hz and the videos are 24 frames per second. There’s no room there to play anything but the correct frames.

It happened consistently with one set of videos. The 2160p 10-bit movie had zero issues. The 1080p AI upscaled Star Trek episodes only seem to have this problem on occasion after I fast forward past the theme song.

The glitch is easily corrected by backing out of the video and resuming. This may not be perfect, but it isn’t a deal breaker for me. It would be supremely annoying if Jellyfin were my only video service.

Alldocube iPlay Mini 50 playing games

The first time I saw this happen, I reconnected to my Jellyfin server via the local IP address instead of the server’s Tailscale IP address. I know the extra layer of encryption is a lot of work for the Chromecast. I am pretty sure Tailscale can only push data at 20 or 30 megabits per second on the Chromecast, and that may have been a bottleneck for my video files. I didn’t do the math to verify that, but disabling Tailscale didn’t improve the situation.

There are a couple of potentially relevant bugs already listed in Jellyfin’s bug tracker. I assume it is being worked on as we speak, and it may well be solved in the next few releases.

This is only a problem with my Chromecasts. I haven’t seen this sort of frame dropping on my Android phone, low-end Android tablet, or my computers. My choices seem to be to replace the Chromecasts, wait until Jellyfin solves the problem, or put up with the occasional glitch. I don’t expect Jellyfin to occupy more than 5% of our streaming time, so I am happy for now to just wait and see what happens.

There are things you can download that you can’t pay to stream legitimately!

I am a Star Trek fan. I grew up watching Star Trek: The Next Generation while it was originally airing, and I used to catch two reruns of the original Star Trek on our local PBS station every Saturday night. It is exciting that the studio spent millions of dollars scanning the original film from The Next Generation, redoing most of the special effects, and turning that into an amazing 4K version of what I have only ever seen in broadcast quality.

It is my understanding that they didn’t make their money back, so I don’t expect that they will ever do the same thing with Deep Space Nine or Voyager. The community has picked up some slack here, though, because there are AI upscaled versions of these shows available.

Tom Paris Cyclops Voyager AI Upscale

The upscaling doesn’t always come out perfect. When there isn’t enough information to work with, it might turn Tom Paris into a Constable Odo cosplaying as a cyclops!

I have watched the first season of one of the upscaled copies of Star Trek: Voyager. It isn’t perfect, but I did compare a few frames to what is currently streaming on Paramount+. All the title cards in the upscale are more crisp, everyone’s hair looks slightly better, and the colors are just a little more vibrant.

I never watched Buffy the Vampire Slayer, but the official 4K release is supposed to be quite awful. Some scenes that have been expanded to widescreen have camera and lighting equipment in frame. Some scenes have been cropped instead of expanded, so the top or bottom of the scene is gone. It also sounds like the color grading is quite weird.

The community upscale of Buffy is supposed to be amazing! It is in the original aspect ratio, the color grade tries to match what was originally broadcast, and it is as cleanly upscaled to 4K as they could manage. That sounds fantastic!

There are shows that just aren’t streaming anywhere!

I won’t say that these are classic shows, but I have collected some of my childhood memories that just aren’t available to stream anywhere: Automan, Manimal, and Misfits of Science. They were all available at archive.org!

Best of the 1980s

This is enough of an excuse for me to keep a Jellyfin server up and running!

Why not run Plex?

Plex started to look more than a little sketchy when they were accidentally sharing people’s recently watched video list to their friends. I imagine most Plex users have a giant collection of illegally obtained movies and TV shows. Just the fact that Plex is collecting data about what you store and watch is unacceptable.

Even ignoring that, I would have tried Jellyfin because Jellyfin is open source. I try to avoid commercial software as much as I can. I like knowing that the terms and conditions won’t randomly change, and that I won’t have the rug pulled out from under me.

Conclusion

I knew I didn’t have a ton to say about sketchy downloads, ripping media, and running Jellyfin, but I wanted to write this stuff down anyway! I am no longer any sort of expert on at-home video streaming, playback, or media collecting. I definitely wouldn’t fit in well over at r/DataHoarder!

I am excited to have learned that my little N100 server is more than capable of feeding all our screens at the same time, even if most of those devices happen to require transcoding. I am excited that I get to watch things that aren’t available on any of my streaming services, and it is really nice knowing that we will have some extra entertainment available next time our Internet connection goes down.

What do you think? Have you tried Jellyfin? Do you prefer Plex? Do you have a big old media collection at home? Or are you interested in dipping your toe in the water like I was? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Tips For Building a More Power Efficient Homelab or NAS

| Comments

I don’t know about you, but I grew up in a house with incandescent light bulbs. In those dark days, we had multiple 60-watt light bulbs in every room. Not 60-watt-equivalent bulbs, but bulbs that actually consumed 60 watts of electricity, and they really weren’t all that bright! We were using the most advanced technology available at the local K-Mart.

Stable Diffusion AI Electrician

My entire homelab and home-network setup uses less energy than one of those light bulbs. At least, I think it does. I have never measured the power consumption of my two extra WiFi access points around the house! I probably squeak in at just under 60 watts even if we count those, but it is possible I am going over by a couple of watts!

I doubt that all of the tips in this blog will apply to your particular home-server setup, but I suspect at least one or two could be useful for you!

Use fewer hard disks!

This might be some sort of a conundrum.

Larger hard disks tend to cost more per terabyte, but each additional hard disk will increase your electricity bill. Even though power is rather cheap here in Texas, each 5400-RPM 3.5” hard drive running for five years will cost me around $45. That will wipe out any savings I might see from buying two 10 TB drives instead of a single 20 TB drive.

There are a lot of places where electricity costs two or even three times as much. It would probably be a no-brainer to buy the biggest disks available in San Francisco or somewhere in Germany!

You are probably running a RAID on your NAS, so some of your storage is being used as parity. If you have a 3-disk RAID 5 or RAID-Z1, then 33% of your capacity is devoted to parity. If you have a 6-disk RAID 5 or RAID-Z1, then only 17% of your total capacity will be parity data. When you look at your storage from this angle, there is a big advantage to using as many smaller disks as you can.

You might need to do some math to figure out what makes the most sense for your situation.
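
Here is what that math looks like in a few lines of Python. The 7-watt-per-disk figure and the electricity rate are my own rough numbers, so plug in your own.

# Rough cost-of-ownership math for spinning disks.
WATTS_PER_DISK = 7      # a reasonable average for a 5400-RPM 3.5" drive
PRICE_PER_KWH = 0.12    # a placeholder rate; adjust for your electric bill
YEARS = 5

def power_cost(disks, years=YEARS):
    kwh = WATTS_PER_DISK * disks * 24 * 365 * years / 1000
    return kwh * PRICE_PER_KWH

def parity_fraction(disks, parity_disks=1):
    # RAID 5 and RAID-Z1 spend one disk's worth of capacity on parity.
    return parity_disks / disks

for disks in (3, 6):
    print(f"{disks} disks: ~${power_cost(disks):.0f} in power over {YEARS} years, "
          f"{parity_fraction(disks):.0%} of capacity spent on parity")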

I don’t use RAID with mechanical disks anymore. I have less than 10 terabytes of data to store, and that doesn’t grow by much more than 1 terabyte each year. I am fortunate because the largest available hard disks are getting bigger even faster than my data is growing.

I have three large mechanical hard drives. One is in my workstation, one is attached to my N100 homelab server, and one is attached to a Raspberry Pi over at Brian Moses’s house. All my data starts syncing within 30 seconds of being saved, and it gets replicated between those three machines as fast as my network will allow.

It is not entirely unlike having a 3-way mirror spread over five miles. It is now cheap to attach a mini PC to every large hard disk, and I have the minimum number of disks to maintain three separate copies. I think I might be doing a good job here, but I wouldn’t mind having one more backup copy!

Run fewer servers! Even if they’re not as efficient!

There was a time when your company would have had a room full of specialized servers. The database servers would have been pushing their disks as hard as they could go, but probably had lots of idle CPU power and network bandwidth. Your file servers would have been maxing out their network ports, but they had plenty of CPU power and disk bandwidth being underutilized.

Even ignoring that server hardware costs money, every server you run pays a tax just by being powered on. If you have a busy datacenter, and you can somehow squeeze multiple services with different bottlenecks onto the same hardware, you can save a lot of money. The same can be true in your homelab.

My little N100 mini PC with its 14-terabyte USB hard disk uses about 20 watts of electricity. If I fill that machine up with as much RAM as it will hold, and I fill that meager 32 gigabytes of RAM up to the brim with virtual machines, then I might have to buy a second mini PC to run even more virtual machines. At that point, I will be up at 40 watts.

One of our friends in Discord has a Ryzen 5600G machine in his homelab. If you subtract out the watts from his short stack of 3.5” hard disks, his Ryzen server averages around 25 to 30 watts. His Ryzen 5600G is three times as fast and can hold four times as much RAM as my N100 mini PC.

NOTE: I would trust my math, the measurements, and my memory more if I had personally tested the Ryzen 5600G build. It should be accurate enough for demonstration purposes!

I am doing way better as long as I only need one mini PC, but the scales tip in his favor if I outgrow my little machine. Not only that, but his build would idle about the same even with a beefier CPU.

I am in no danger of outgrowing my N100 mini PC.

Don’t scale up too far!

This could be a 2,500-word blog post all by itself. Buying single, bigger, faster machines is a really good value right up until it isn’t! At some point, you will save more money by scaling out.

For every generation of Ryzen CPU so far, it has been about the same price or a little bit cheaper to build two 8-core machines instead of one 16-core machine, and that usually includes loading each 8-core machine with as much RAM as the single 16-core machine.

The 16-core Ryzen machines are not twice as fast as an 8-core. They both have the same size memory bus, so the bigger chip is usually fighting hard to keep busy. A 16-core Ryzen 5950X is only roughly twice as fast as a 6-core Ryzen 5600.

Tune your CPU governor

The more powerful your CPU, the more this will help. I wrote a lot of words about experimenting with the conservative CPU governor on my ancient, power-hungry FX-8350 build. That CPU could pull an extra 200 watts out of the wall when it would spin up to full speed. Anything that made the CPU spin up prematurely incurred a pretty big efficiency penalty, so tuning the governor helped quite a bit here!

I focused on Tailscale while tuning all the various knobs. I wanted to make sure that the CPU would spin up to the maximum during a long data transfer, but it didn’t need to ramp up much at all for something that would finish in two or three seconds. I wound up setting the up_threshold for the governor just high enough to make sure I would hit full speed after around 10 seconds of sustained maximum CPU utilization.
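
For what it is worth, this tuning is just a matter of poking values into sysfs. Here is a minimal sketch of that process, assuming your kernel exposes the usual conservative-governor tunables and that you run it as root; the exact paths vary by kernel and CPU driver, so check what your machine actually exposes.

#!/usr/bin/env python3
# Switch every core to the conservative governor and raise up_threshold.
# A sketch only; run as root, and expect sysfs paths to vary by kernel.
import glob

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

# Set the governor on every core.
for gov in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
    write(gov, "conservative")

# Only ramp to full clocks after sustained load. A higher up_threshold means
# the CPU has to stay busier for longer before the governor steps it up.
write("/sys/devices/system/cpu/cpufreq/conservative/up_threshold", 95)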

My new N100 mini PC homelab server is on the exact opposite end of the spectrum. The difference between absolute 100% CPU utilization at maximum clock speeds compared to idling at the lowest clock speed is only 12 watts. Occasional short spikes that use an extra 12 watts add up to almost nothing.

The bigger the gap between your CPU’s idle power consumption and its full-tilt consumption, the bigger an impact the CPU governor can have. Tuning the governor on my FX-8350 had almost zero impact on apparent performance, but it saved me almost as much power as my new N100 homelab box consumes each day.

Choosing power-efficient peripherals

It is easy to make good macro-level choices in your NAS or homelab build. It is pretty obvious that a mobile N100 CPU will use less electricity than a 96-core AMD EPYC server CPU, and 3.5” mechanical hard disks are all pretty similar as long as they are spinning at the same RPM.

Matt Gadient did a lot of work testing components to push his home server’s power consumption to the limits. He even went as far as disabling the keyboard to save an extra watt. He also tested multiple SATA cards with different chipsets, and he even learned that these cards used different amounts of power when plugged into 4x and 16x PCIe slots.

Trying out multiple SATA cards would be both time consuming and somewhat costly. Sure, the cards don’t actually cost that much, but buying extra PCIe cards that might wind up in a box in the closet might wipe out any potential savings you will see on your electric bill!

Should you buy a more efficient power supply?

You should do the math to verify this, but the best power supply to use is the one you already have. The most efficient power supplies aren’t THAT much more efficient than the worst power supplies. You already spent $40 to $80 or so on the PSU that you already own, and you’d have to spend another $50 to $100 to buy something more efficient. You will never make up the difference on your electric bill.

You will see a bunch of 80 PLUS ratings for power supplies. The fancier the metal used in the name, the more efficient the power supply will be. The trouble is that these power supplies are most efficient at 50% load, and the graphs you will usually find for these power curves don’t even show you the efficiency below 20% load.

There is a very good chance your home server spends most of its day closer to 5% of your power supply’s maximum load. Pretty much every power supply is inefficient when it is barely utilized.

Let’s put this into perspective with some questionable math. If your server is idling at 30 watts with an 80% efficient power supply, and you upgrade to a 90% efficient power supply, then your server will idle at about 27 watts. That would save one kilowatt hour every two weeks. That is less than $4 per year where I live.
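
Here is that questionable math written out, so you can swap in your own idle wattage and electricity rate. The rate below is only a placeholder.

# How much does a more efficient power supply actually save at idle?
IDLE_WALL_WATTS = 30     # measured at the wall with the 80% efficient PSU
OLD_EFFICIENCY = 0.80
NEW_EFFICIENCY = 0.90
PRICE_PER_KWH = 0.12     # placeholder; adjust for your rate

dc_load = IDLE_WALL_WATTS * OLD_EFFICIENCY     # watts the hardware really needs
new_wall_watts = dc_load / NEW_EFFICIENCY      # wall draw with the better PSU
saved_watts = IDLE_WALL_WATTS - new_wall_watts
kwh_per_year = saved_watts * 24 * 365 / 1000

print(f"New idle draw: {new_wall_watts:.1f} W")
print(f"Savings: {saved_watts:.1f} W, {kwh_per_year:.0f} kWh per year, "
      f"${kwh_per_year * PRICE_PER_KWH:.2f} per year")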

You might be able to make some significant gains here if you are able to use a PicoPSU matched with an efficient GaN power supply, but either half of that combination costs as much as or more than a regular PC power supply. I suspect you would wipe out any monetary savings even in a region with extreme electricity costs.

If you can get your hard disks to go to sleep, that can save a ton of power!

This is tricky. The first problem is that parking the heads and spinning the motors up and down causes extra wear and tear on the hard drive. This isn’t as big of a problem as it was 15 years ago, but it still isn’t exactly fantastic.

If your hard drives are only spinning up and down three or four times each day, that’s great. If they are spinning up and down a dozen times an hour, then you may wind up wasting all the money you saved on your electric bill replacing dead hard drives.

The other problem is that it is challenging to get hard disks to spin down using any traditional RAID setup. You could probably do a good job here if you use Unraid, but if you’re using Linux’s mdadm RAID or ZFS, this isn’t something you will be able to do a good job of optimizing.

I tried pretty hard to tune lvmcache to the extreme to keep my old homelab server’s RAID 10 from spinning up, but it didn’t work out nearly as well as I had hoped.
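
If you do have a lone bulk-storage disk that can safely take a nap, hdparm can set an idle timeout for you. This is just a sketch: the device name is hypothetical, and some drives and USB bridges ignore hdparm’s standby timer entirely.

#!/usr/bin/env python3
# Ask a standalone data disk to spin down after an hour of inactivity.
import subprocess

DISK = "/dev/sdb"  # hypothetical device name; check yours with lsblk

# -S 242 requests standby after 60 minutes (values 241-251 count in 30-minute steps).
subprocess.run(["hdparm", "-S", "242", DISK], check=True)

# -C reports the current power state, so you can confirm the drive actually slept.
subprocess.run(["hdparm", "-C", DISK], check=True)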

You are paying a power tax as soon as you boot your first virtual machine

This has gotten way better over the years. In ancient times, booting the most basic and barebones virtual machine on your laptop would absolutely demolish your battery life. Just having a virtual machine idling along still has an impact today, but it isn’t quite as bad.

I am looking at my notes on my N100 mini PC, and I am not confident in how I was notating things the first few days that I was testing. I hope this is correct, because I don’t have an easy way to retest any of this today!

Proxmox summary on my N100 Mini PC

It looks like my mini PC averaged 9 watts of power consumption with Proxmox installed but no virtual machines or containers running. That went up to 12 watts as soon as I booted a couple of completely idle virtual machines. Three watts doesn’t feel like much, but that is an extra 33%!

You sort of only pay this idling tax on the first virtual machine. Sure, every virtual machine you boot will be running a kernel that will be waking up the CPU a few hundred times each second, and adding more machines will make it even harder for the CPU to reach and stay in its deeper sleep states. Even so, you pay the biggest penalty booting that first virtual machine. More mostly idle virtual machines are not a big deal.

You won’t be able to prevent this. It is just something to keep in mind when comparing your NAS build to a friend’s NAS build. If you are running a handful of virtual machines, but they are just sharing files, then this may very well be the reason you’re burning an extra 10 or 15 watts!

Too much cooling can add up to quite a bit of power!

My anecdote on this topic is about the time I pulled two unnecessary 120 mm case fans out of my old homelab box. These were plugged directly into 12-volt power, so they were spinning at full speed, which means they were using as much power as they could.

Those two fans were using 6 watts all day long. My N100 mini PC idled at between 6 and 7 watts before I installed any virtual machines.

My Proxmox N100 Mini PC from CWWK

This little guy has FIVE NVMe slots!

If you built your own homelab server using consumer-grade parts, then I don’t expect you will be able to save 6 watts, but I bet dialing back your fan curves could make a small impact.

I suspect this could have a bigger impact if you are using older rack-mount server hardware. Especially with the high-pressure fans in 1U and 2U servers!

Your CPU doesn’t need to be ice cold. It will run just as well at 30C as it will at 85C. There might be a few dollars to save if you aren’t trying to overcool your servers.

Rack-mount network gear is usually a power hog!

This isn’t always true. You should always check the specs, but it is true the majority of the time. Especially if you’re buying used enterprise gear on eBay.

I prefer my home network gear to be small, relatively low power, and fanless. The heart of my home network lives in my network cupboard. I helped Brian Moses build out that cupboard in 2011, and we picked out a pair of small 8-port gigabit Ethernet switches. I bought the house from Brian six years ago, and those same two switches are still chugging along.

My Network Cupboard

I don’t remember what I might have been troubleshooting when I left that long blue patch cable in there, but I am not using it any longer!

Devices have fans when they tend to generate a lot of heat. Devices generate heat because they use a lot of electricity. Not only that, but fans tend to fail when they run 24 hours every day for a decade. I expect my fanless gigabit Ethernet switches will still be able to do their job a decade from now, but they will likely be replaced with fanless 2.5 gigabit Ethernet switches before then!

The pair of old switches and the Linksys WRT3200ACM running OpenWRT that handles my symmetric gigabit fiber Internet connection consume a combined total of 18.75 watts. That includes any overhead from the small APC 425VA UPS. There isn’t a single fan to be found in the cupboard.

I may be saving twice as much money as I think I am!

I am sitting in Plano, Texas. We run the air conditioning nine months out of the year. Every watt that goes into my computers turns into heat, and I have to spend even more watts powering the air conditioner to pull that additional heat back out of the house.

The heat used to be a more direct problem for me. In our old apartment, my home office was on the second floor, and heat rises. I was on the south-facing side of the building, so I got a lot of heat from the sun. I also had one of the weakest vents in the apartment.

My home office would get quite toasty on July afternoons whenever I fired up Team Fortress 2. Any watt not being spent in my office was a watt not making me warmer!

But Pat! I need a cluster of mini PCs for educational purposes!

You should think about using virtual machines for testing purposes even if electricity isn’t a consideration.

You can run nested virtual machines, so you could build up a Proxmox test environment on your Proxmox server. You can set up three or four Proxmox virtual machines and treat them as if they were physical boxes. You can cut the virtual power cord to test what happens when a node fails. You can disable network interfaces to simulate pulling patch cables out of the wall. You can write junk data to one of the virtual disks to see what happens when a disk goes sideways.

My N100 Proxmox server next to Brian Moses's N5095 Beelink TrueNAS server

You could build a Kubernetes cluster using a stack of virtual machines on a single Proxmox host. You could build some sort of virtual Docker cluster. You can build whatever you want on a single host as long as it has enough RAM.

One of the biggest advantages to using a single, big virtual machine host to test your clustering skills is that you can always add one more node to your cluster for free. If you buy three mini PCs, then you have three mini PCs. If you want to add a fourth node, you will have to buy a fourth mini PC.

This setup would be to help you learn things. It doesn’t matter if your three-node Proxmox cluster has that single point of failure that can take the whole thing down. You aren’t trying to run a big company’s IT department. You are running a test cluster.

Conclusion

There is some low-hanging fruit here that is definitely worth picking, especially if your new home server will be running every hour of every day for the next five years. You just have to figure out where to draw the line.

Stable Diffusion Tie Guy in the Server Room

Maybe you enjoy squeezing every ounce of efficiency out of your setup as you can. If you are having fun, then keep going! If you are only trying to save a few bucks, just remember that your time isn’t free. If it takes you five hours to save $20 per year on your electric bill, then that might not be enough savings to justify the time spent, and maybe you should have spent that time doing something fun! Only you can figure out the balance.

What does your homelab look like? Do you have a full 42U rack full of beefy servers and enterprise-grade network gear? Do you have a tight little cluster of mini PCs? Or maybe you are just using an old workstation packed full of extra hard disks and RAM?! Tell me about your homelab in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

How Efficient Is The Most Power-Efficient NAS?

| Comments

If you came here looking for a parts list for an extremely power-efficient NAS build, you are probably in the wrong place. I have just been running tests on my own N100 homelab server using a power meter, and what I learned encouraged me to peek at some data for other machines I have. I also asked a few friends how their setups were working out to see where most people are landing.

What we are going to attempt to do here is decide what we think qualifies as a power-efficient NAS. I just want you to be able to figure out whether or not you’re already doing a good job, and whether or not it is worth spending time and effort to make your homelab setup more power efficient.

I have been using a Tasmota smart plug to measure my CWWK N100 mini PC homelab server and its Western Digital 14 TB USB hard disk in various configurations: idle, maxed out, maxed out but with various CPU frequency restrictions, with the hard disk on the meter, and just the hard disk by itself. I have been running tests in each of these configurations in 24-hour periods to get a nice, long, smooth average.

Stable Diffusion Guy Metering Power

The result that surprised me was the USB hard disk. We are always telling people to assume that each hard disk will use somewhere between 6 and 8 watts, and to make sure you have about 10 watts of power-supply capacity for each disk.

My meter read 0.157 kWh while measuring the 14 terabyte hard disk for five consecutive 24-hour periods. It isn’t surprising that this works out to an average of 6.8 watts, but it did surprise me that it is hitting the exact same target over and over again!

Let’s just call that 7 watts for today’s purposes. It is fun that this is pretty close to a 60-watt equivalent LED light bulb!

A new unit of measurement for the power consumption of your NAS?!

This made the gears in my head start spinning. If we are going to think about how efficient our storage servers are, why not think about it in relation to the power consumption of the disks?

My N100 mini PC averages 14 watts, so my NAS’s compute uses the same amount of juice as a pair of 3.5” hard disks. I only have one hard drive plugged in, so that means my NAS requires three hard drives’ worth of electricity to provide one hard drive of storage.

N100 Mini PC Proxmox Servers are taking over the world!

In absolute terms, I think I am doing pretty well!

I don’t want to dox anyone, but a friend of ours on the Butter, What?! Discord server has a server with four 3.5” disks and a few NVMe drives built around a Ryzen 5600G. He tells us his average is a little over 50 watts. Those four hard disks are probably using 28 watts. That leaves 27 watts for everything else.

I think our friend is doing just fine there. His server is using around four disks’ worth of power for the compute, and his compute-to-disk power ratio is leaning in a better direction than my own.
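
If you want to grade your own build with this silly unit of measurement, the math fits in a couple of lines. The 7-watt figure is the same rough per-disk average from my own measurements, and the totals below are approximate.

# Express everything that isn't a spinning disk in "disks' worth" of power.
WATTS_PER_DISK = 7

def disks_worth_of_compute(total_watts, spinning_disks):
    compute_watts = total_watts - spinning_disks * WATTS_PER_DISK
    return compute_watts / WATTS_PER_DISK

print(disks_worth_of_compute(21, 1))  # my N100 plus one USB disk: about 2 disks of compute
print(disks_worth_of_compute(55, 4))  # our friend's Ryzen 5600G box: about 4 disks of compute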

What qualifies as amazing power efficiency for a NAS?

Matt Gadient got his home server down to 7 watts at idle with all the disks asleep. Matt put in a ton of hard work. He tried multiple SATA chipsets. He even disabled the keyboard. I think this qualifies as extremely efficient, because his compute is only using the equivalent power of a single mechanical hard disk.

My CWWK N100 mini PC server uses twice as much power, but I measured my server with five virtual machines booted up. I did check the power draw right after Proxmox was installed, and it bounced between 6 and 7 watts, so I could potentially reach Matt’s level here, but my server’s job is to run a handful of virtual machines, so I can’t just shut those machines down!

N100 Mini PC Proxmox Host Data in Home Assistant

A normal day vs. constant 100% CPU utilization

I believe that seeing my compute draw only as much power as two mechanical disks still counts as efficient. Especially since I have five important virtual machines either doing basic work or waiting to do important work!

We can also see that Brian Moses’s 2023 DIY NAS build seems to be in the same range as our friend’s Ryzen 5600G server, averaging around three disks’ worth of power for its compute.

In my opinion, if you are at or below two or three disks’ worth of power consumption on average for your compute, then I think you are doing a good job!

I keep using the word compute, but that is a bit of a lie!

I am using the word “compute” to mean everything but the mechanical disks. This isn’t exactly correct, but it simplifies things a bit.

There are a lot of things in your server that use electricity besides the CPU and the disks. Every chip on the motherboard is going to be using some power. The faster or older your network ports are, the more juice they are going to use. It isn’t uncommon to find slightly older 10-gigabit Ethernet cards that eat 10 or 20 watts on their own!

Not all of these things would technically qualify as compute, but when we are specifically talking about storage servers, it feels reasonable to overload the term a bit. There’s the disks, and then there’s everything else.

The theoretical limit!

I have had a Raspberry Pi 4 and a 14 terabyte USB hard disk running over at Brian Moses’s house for just over three years now. I don’t have a power meter on it, but all my recent measurements have had me curious, so I plugged my comparable Pi-KVM hardware into a smart plug, and I took some readings.

I did not let this Raspberry Pi 4 run for 24 hours, so my data here isn’t as accurate as it is for my CWWK N100 mini PC. My meter was spending most of its time reading 3 watts with occasional bounces up to 4 watts. This agrees with the 3.5 watts at idle that I kept seeing pop up for the Pi on the Internet, and I don’t need a ton of accuracy to say that the Pi uses roughly half as much juice as a single 3.5” hard disk.

I wouldn’t be surprised if there is some Raspberry Pi Zero competitor that can do a bit better. Even so, I think it is reasonable enough to say that this is just about as efficient as we are likely to get.

Don’t buy new hardware without doing some math!

My new homelab server is an Intel Celeron N100 mini PC. This replaced an old AMD FX-8350 build. The bulk storage for both servers is the same exact USB hard disk—I just moved it from one machine to the other when I migrated the NAS virtual machine to the new Proxmox hardware.

My new setup uses 0.5 kWh each day, while the old setup used 1.9 kWh each day. Where I live, that 1.4 kWh difference in electricity between the two servers every day should add up to around $60 per year in savings. It is going to take four or five years before my new server pays for itself by reducing my electric bill.

N100 Mini PC Power Metering with Tasmota

The N100 and FX-8350 processors don’t have identical performance, but they’re a fairly close match in so many ways. I have been crunching these numbers for years, and this is the first time that a mini PC was available with performance comparable to my old homelab box that could also pay for itself in less than a decade.

The best home server is almost always the one that you already own. You will have to do the math to know for sure!
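
Here is what that math looks like written out, using my measured daily numbers. The electricity rate and the hardware price are placeholders, so substitute your own.

# Does replacing an old, power-hungry server actually pay for itself?
OLD_KWH_PER_DAY = 1.9    # measured on the old FX-8350 build
NEW_KWH_PER_DAY = 0.5    # measured on the N100 mini PC
PRICE_PER_KWH = 0.12     # placeholder rate
HARDWARE_COST = 250      # hypothetical price for the new mini PC

kwh_saved_per_year = (OLD_KWH_PER_DAY - NEW_KWH_PER_DAY) * 365
dollars_saved_per_year = kwh_saved_per_year * PRICE_PER_KWH
payback_years = HARDWARE_COST / dollars_saved_per_year

print(f"Savings: {kwh_saved_per_year:.0f} kWh per year, ${dollars_saved_per_year:.0f} per year")
print(f"Payback: {payback_years:.1f} years")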

Conclusion

How you want to use this information to determine whether you are doing a good job is up to you. You know that my NAS has a single 14 TB hard disk and averages 20 watts. Would you say that I am at 20 watts per drive? Maybe you would say that I am at 1.5 watts per terabyte!

The important thing for me is that my combined NAS and homelab server uses about as much power as a big LED light bulb, it has more CPU and RAM than I currently need, and it has enough storage to meet my needs for the next three or four years.

How is your storage server doing? Are you on the small side like I am? Do you have a giant 15-disk RAID 6 in a rack? Or are you somewhere in between? Did reading this make you feel better or worse about your power consumption? Tell me about it in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

The Sovol Comgrow T300 Only LOOKS Like a Bambu A1

| Comments

I am excited about the Sovol Comgrow T300. I don’t think I would ever pay $550 for this printer, but its print volume is at least 44 mm bigger in every direction than the Bambu A1’s. If the T300 lives up to its spec sheet, then it is probably an amazing printer if you really do need that extra print volume.

The T300 is obviously designed to make you think it is every bit as amazing as the Bambu A1 or A1 Mini. It isn’t even close. Almost all of the magic that makes the A1 an amazing printer is in the toolhead: fully automatic strain-sensor bed leveling, automatic filament flow control, air-printing and tangle detection, and 30-second nozzle swaps.

Stable Diffusion 3D Printer Guy

Much of the rest of what makes Bambu’s printers so awesome is their attention to detail in your printing workflow. They have meticulously tuned slicing profiles. Bambu printers will give you fast, beautiful prints out of the box with zero effort. We haven’t seen anything remotely close from Sovol yet.

Real linear rails aren’t what make the Bambu A1 an amazing printer

The linear rails are a nice upgrade, and they sure do make the Comgrow T300 superficially resemble a Bambu A1, but those rails aren’t the upgrade that made the Bambu A1 so awesome.

The T300 is using an inductive bed-leveling probe. Don’t get me wrong. These work great. They were the most exciting thing that the Prusa MK2 brought to the table in 2016. The trouble is that they require you to be knowledgeable, and they require manual tuning from time to time.

Bambu A1 Mini

I paid for the whole build plate, so I am using the whole build plate!

I haven’t used my Sovol SV06 much since my Bambu A1 Mini arrived in December. I needed to use it last week, because I had to print something that wouldn’t fit on my A1 Mini. I had adhesion issues on the SV06 because my z-offset wasn’t quite right. I wasted about 15 minutes canceling the print, cleaning the build plate, and dialing in the z-offset.

All the Bambu printers detect the height of the print surface by physically touching it with the tip of the nozzle. It doesn’t matter if the weather changes, the bed expands due to heat, or if you swap in a thicker or thinner PEI sheet. A Bambu printer will give you a perfect first layer almost every time with zero effort on your part.

If you are new, this saves a ton of time and frustration. If you already have years of 3D-printing experience, then this just becomes one less thing to ever have to think about.

You shouldn’t be looking at the Comgrow Sovol T300. You should be shopping for a Bambu A1 or A1 Mini.

30 seconds to heat up?!

Alright. I’ve said enough bad things about the Comgrow T300. Let’s talk about the good things.

It takes my Sovol SV06 two and a half minutes to go from clicking the upload and print button to actually starting to print my part. My Bambu A1 Mini takes almost six minutes to do the same thing. My tuned-up super-fast Sovol SV06 can give my Bambu a run for its money, so with that three-minute head start, the Sovol can win these races when I am printing single small parts.

If the Sovol Comgrow T300 can heat the bed in 30 seconds, that would be a fantastic upgrade in certain situations! This makes no difference when you are printing a 48-hour Mandalorian helmet, but I sometimes spend a lot of time prototyping.

Bambu A1 Mini

I might print a small part, see if the curves and dimensions match up correctly with something in the real world, make a few tweaks to the model, then hit print again. I might have to do this a dozen times.

Shaving four minutes or even two minutes off each iteration could be the difference between a round of prototyping taking 40 minutes instead of an hour. That might mean I could start and finish a project before meeting a friend for lunch instead of putting it off until next week.

This is extremely niche. Most of the time I don’t even care that the A1 Mini takes six minutes to start a print. It does a lot of work during those six minutes that the Sovol isn’t capable of doing. The A1 primes and wipes the nozzle, scrapes the nozzle super clean over the silicone brush, performs a vibration compensation process, and loads the color I selected from the AMS Lite.

Some of these things help ensure that my part is going to print successfully.

This is a cool feature of the new Sovol printer, but I would only be excited about it three days each year.

The Sovol Comgrow T300 is probably faster than the Bambu A1 out of the box

Maybe. We haven’t seen Sovol’s slicer profiles for the new printer. We haven’t seen how fast they actually are in practice, and we haven’t seen the quality of the printed parts. Let’s just assume Sovol has done a reasonably good job here, and that the T300 does what it says on the box.

The speed limit on the A1 and A1 Mini is 300 mm/s with 20,000 mm/s2 acceleration, but none of my OrcaSlicer profiles exceed 10,000 mm/s2 acceleration. You can definitely push this and still get nice prints, but the beauty of the Bambu printers is how well everything works out of the box.

I don’t know what Sovol has chosen for their print speeds, but it is reasonable to assume they have pushed things to maybe 350 mm/s or so, and you can bet they are using 12,000 mm/s2 acceleration for infill and travel moves where the A1’s stock profiles stay at 10,000 mm/s2.

More importantly, the T300 claims to have a hot end capable of melting 30 cubic mm of filament per second. The Bambu A1 Mini defaults to 20 cubic mm per second in the slicer, and my machine tops out at 25 cubic mm per second.

If you upgrade either printer to a 0.6-mm nozzle, you will be amazed at how slowly the tool moves around while maxing out the heat capacity of the hot end.
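
The math tying those flow numbers to real-world speed is simple: volumetric flow is line width times layer height times speed. Here is a quick sketch using typical line widths and layer heights, which are my assumptions rather than anything from either company’s profiles.

# Maximum print speed is capped by how fast the hot end can melt plastic:
# flow (mm^3/s) = line width (mm) x layer height (mm) x speed (mm/s)

def max_speed(flow_mm3_s, line_width_mm, layer_height_mm):
    return flow_mm3_s / (line_width_mm * layer_height_mm)

# Typical 0.4 mm nozzle settings: 0.42 mm lines at 0.2 mm layers.
print(f"25 mm^3/s (my A1 Mini's ceiling): {max_speed(25, 0.42, 0.2):.0f} mm/s")
print(f"30 mm^3/s (the T300's claim):     {max_speed(30, 0.42, 0.2):.0f} mm/s")

# A 0.6 mm nozzle pushing fatter, taller lines hits the flow ceiling much sooner.
print(f"0.6 mm nozzle at 0.3 mm layers:   {max_speed(30, 0.62, 0.3):.0f} mm/s")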

Thank goodness there’s finally network connectivity!

I am using Marlin’s input shaper on my Sovol SV06. It works great, but Octoprint and the Sovol’s slow serial port have become a bottleneck. I just can’t send g-code as quickly as the printer can move. I can solve this problem with a $100 Klipper screen and then a bunch of tuning, but I would much rather replace the whole thing with a Bambu A1 instead.

I believe the T300’s Klipper hardware has both WiFi and Ethernet. This is good news. You can upload g-code straight to Klipper from your slicer, and you won’t have to shuttle SD cards back and forth like a caveman.

I would expect this to be common moving forward, and that network connectivity will be showing up in cheaper and cheaper printers.

Who should buy the Sovol Comgrow T300 instead of a Bambu A1 or P1S?

Almost no one should buy the Comgrow T300. You should almost definitely be looking at a printer from Bambu. It isn’t speed, linear rails, or the aesthetics of the machine that make the A1 and A1 Mini awesome printers.

It is the combination of Bambu’s awesome software with their bleeding-edge sensor technology. These are the things that make Bambu printers a joy to operate, and lead to successful prints right out of the gate.

But as I already said earlier, I would consider the Comgrow T300 if I really, really, REALLY needed a bigger printer than the Bambu A1 or Bambu P1S. I am excited to report that I absolutely don’t need a bigger printer. In fact, I rarely need a printer bigger than my Bambu A1 Mini.

If I did, though, I would make sure to wait and see what sort of success other people have with the Comgrow T300 before I place an order. Sovol is only just barely shipping this printer so far!

Conclusion

I want Sovol to succeed. On paper, they sure seem to have almost all the ingredients necessary to almost compete with Bambu here. If they can spend a little engineering time to ship good, fast slicer profiles that just work, they might be in pretty good shape.

But come on, Sovol! You need to ship a printer with strain-sensor bed leveling. That is the biggest difference between your printers and modern printers like the Prusa MK4 or any printer from Bambu Lab. Having that one feature, and having it tuned to work well, eliminates half the problems we see posted to r/FixMyPrint!

This is the killer feature separating Bambu, the Prusa MK4, and the Prusa XL from everyone else. After the horribly named Creality Ender 3 V3 SE shipped with its weird strain-sensor setup, I assumed that we would start seeing every new model of 3D printer ship with attempts at using strain sensors, but I guess that won’t be happening until the next iteration for Sovol.

What do you think? Are you going to buy a Sovol Comgrow T300? Do you need that extra build volume, or would you be better off with a Bambu A1 or A1 Mini? Are you disappointed that the T300 is missing all the most important current-generation features? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!