Oh No! I Bought A GPU! The AMD RX 6700 XT


This is hugely unexciting. I replaced my Nvidia GTX 970 with a lower end GPU that was launched two years ago. Isn’t that boring?

It is only in the last two or three years that I have really felt like I was limping along with my old GPU. I feel like AMD’s FSR and Proton-GE’s ability to enable it in nearly any game was what let me get by for at least this last year.

Rootin' Tootin' Cowboy Shootin'

I hit my first game that wouldn’t run properly just a few months ago. DEATHLOOP was included in Humble Choice, and after finishing Prey, I was excited about trying DEATHLOOP. My 4 gigabytes of VRAM just wasn’t enough, and the game was barely able to manage 15 frames per second.

Everybody says AMD GPUs are fantastic on Linux

I don’t believe them. The last time I used an AMD GPU was shortly after AMD had acquired ATI. I had an ATI X1400 in a Dell laptop running Linux. AMD’s proprietary Linux drivers were atrocious, but they promised things would get better. They didn’t get better in the years that I owned that laptop. Every computer I have owned since has had an Nvidia GPU, except for my current 2-in-1 laptop with integrated graphics.

I am running Ubuntu 22.04 LTS. If I swap my GTX 970 for an RTX 4090, everything will just work using the drivers I am already using that were installed via apt. I wouldn’t have to make any configuration changes.

If I bought an AMD 7900 XT, and that is what I very nearly did, this wouldn’t be the case. The kernel that ships with Ubuntu 22.04 is too old, and I believe I would also need all sorts of newer libraries that just don’t ship with the current Ubuntu LTS.

It is fantastic that we can run AMD cards with open source drivers and libraries. It is a huge bummer that the support is interwoven so tightly into the Linux distribution.

What has been stopping me from upgrading?!

I am sure you are aware of how expensive video cards have been over the last several years. I wanted to upgrade two years ago, but I just couldn’t stomach paying $1,000 for a GPU that is only twice as fast as the GPU I bought at its launch in 2014.

GPU prices have dropped a lot, but I had another problem. My old and still surprisingly nice QNIX QX2710 monitors require dual-link DVI connections. You just can’t buy a modern video card with a DVI port. That’s another expense, and it is a bummer that the upgrade I wound up making was more sideways than it was upwards.

I was hoping that the prices on 38” ultrawide 3840x1600 monitors would drop more before I needed to upgrade. They didn’t, so I wound up buying a Gigabyte G34WQC 34” ultrawide 3440x1440 monitor.

I figured I would get the monitor upgrade out of the way first. Then I could wait a few months to see what happens. Maybe there would be an AMD 7800 XT available by then that would be a better fit for me. That didn’t work out. I made it five days before ordering a new GPU.

How did I land on the AMD 6700 XT?

I really did have my heart set on the AMD 7900 XT, but I had more than a slight concern that I would have trouble fitting that card in my case, and I wasn’t excited about jumping off the Ubuntu LTS train onto the other train where I would be dealing with an operating system upgrade every six months.

NOTE: This shouldn’t have been a big concern. There is a PPA with nightly builds of Mesa 23. I have been trying it out with my 6700 XT, and the updated drivers nearly doubled frame rates in Control with ray tracing enabled.

Not only that, but it doesn’t sound like full support for the 7900 XT exists anywhere on Linux. It seems that there is no overclocking support, and we don’t have hardware AV1 encoding yet. I bet Ubuntu will be six months behind everyone else on that front.

Severed Steel

There were some good deals this week on both 6800 XT and 6900 XT cards. I want to say they were at $580 and $650 respectively. Both have 16 GB of VRAM, and the price increase was close to proportional with the increase in performance. Either sounded like a good value, but I was definitely planning on going with the 6900 XT.

So how did I wind up with the 6700 XT? The 6700 XT was $520 when I started shopping. It made no sense to buy such a slow card with only 12 GB of VRAM while only saving $50. That would have been terrible.

Then a 6700 XT went on sale for $370. That easily made it competitive on the price-to-performance graph. Then I saw that the 6700 XT is about 1.5” shorter than the 6900 XT. That was enough to get me to place my order immediately.

I saved money. The RX 6700 XT should be more than enough GPU for my needs. I won’t have to upgrade my Linux distro every six months. Best of all, I won’t have to hope the card fits in my case without the eight 3.5” hard drive bays getting in the way. Less money and less work seemed smart to me!

Will the RX 6700 XT really be enough GPU for me?

I sure hope so! I decided last year that my minimum viable upgrade would be an Nvidia RTX 3070. I made that decision when an RTX 3070 cost something like $1,200. They’re down to $550 or so today.

The 6700 XT is comparable enough. The 6700 XT does better in some games, while the RTX 3070 does better in others. The 6700 XT does support ray tracing, but AMD cards always do a lot worse with ray tracing enabled. Even so, the 6700 XT cost nearly $200 less, so it seems like a good value.

I don’t expect that the 6700 XT will keep me going for eight years like the GTX 970 did. If it keeps me going until I can get a 7900 XT on sale for $600 or so, then I will be really pleased.

Things were a mess during the first few hours with the 6700 XT!

I swapped in my new 850-watt power supply, removed my GTX 970, and installed my MSI RX 6700 XT. I just turned on the computer, opened Steam, and fired up Borderlands 3. I had working drivers, variable refresh rate FreeSync was working, and the game was running so well. I wound up turning the knob up to Badass, which is one click above Ultra, and I set FSR at 75% resolution for my 3440x1440 monitor. I wandered around, and my frame rates were usually between 80 and 100. Running the benchmark landed at about 77 frames per second.

Then I fired up Davinci Resolve, and learned that I had no OpenCL support. Don’t do what I did. I followed the advice of using AMD’s own amdgpu-install to install their graphics, ROCm, and OpenCL drivers. That got Resolve working, but it was really slow. It also obliterated my frame rates in the games I had already tested.

AI Image

I ran amdgpu-uninstall to clear out all the damage I had done. Once I verified that gaming was fast again, I followed this advice and ran apt install rocm-opencl-runtime. You do need to have the amdgpu-install Debian package installed for this to work. It will install AMD’s OpenCL libraries alongside your existing open-source drivers.
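
Roughly the sequence that got me to a working setup, in case it helps; package names and versions vary between ROCm releases, the amdgpu-install .deb comes from AMD’s repository, and the clinfo check at the end is just a sanity check from the clinfo package:

sudo amdgpu-uninstall                      # back out the full proprietary stack
sudo apt install ./amdgpu-install_*.deb    # keep only AMD's installer/repo package
sudo apt update
sudo apt install rocm-opencl-runtime       # AMD's OpenCL runtime alongside Mesa
clinfo | grep -i "device name"             # the 6700 XT should show up here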

I also need OBS Studio for my workflow. I was surprised that it offered AV1 as an option for encoding with the GPU. I tried it, and it left me with a zero-byte file. Then I tried h.264, and that worked great. That is all I need!

Davinci Resolve on Linux is missing features without CUDA

I messed around a bit in an old project to make sure Resolve will still be able to get the job done. I set up a Magic Mask to cut myself out of a short clip, and that seemed to work well. Scrubbing around the timeline seems fine.

The trouble came in when I decided to export a video. My own 2160p YouTube settings use h.265 at something like 120 megabits per second. The options for h.264 and h.265 are gone.

I am left with an extremely simplified option labeled MPEG4. I can’t specify a bitrate. I can only choose between good, better, or best quality.

I don’t know exactly what I will do next time I have to upload to YouTube. I am expecting to have to use DNxHR.

Was AMD a good choice?

I can’t say for certain. The AMD 6700 XT has only been in my computer for one full day. As far as gaming goes, I will definitely say it was a good value. As long as you aren’t using ray tracing, and we will talk about that very soon, the 6700 XT is a bit faster and has more VRAM than an RTX 3070. Even better, you can get a 6900 XT with 16 GB of VRAM for not much more than the price of that RTX 3070 with only 8 GB of VRAM.

Stable Diffusion Dude

All the AMD cards fall behind when it comes to ray tracing. The 7900 XTX costs less than an RTX 4080, and the 7900 XTX performance sits somewhere between the RTX 4080 and RTX 4090. Until you turn ray tracing on. That’s when the AMD cards fall way behind.

I will play some games, try out the bleeding-edge Mesa libraries, and then report back on how things are going!

Ray tracing with an RX 6700 XT on Linux

I wanted to play Control with ray tracing. With the open-source driver, the option isn’t available. During the short time when I ran AMD’s driver using amdgpu-install, I was able to turn on ray tracing in Control, but the game was super slow and jittery. It didn’t matter if ray tracing was turned on. That driver was just terrible.

Control Ultimate Edition Ray Tracing

I also wanted to play Severed Steel with ray tracing enabled. You don’t turn ray tracing on in the game. It either launches with DX12, DX12 with ray tracing, or DX11. At first I couldn’t tell the difference. There are a lot of environmental reflections baked into the reflective floors, walls, and ceilings.

Once I figured out the difference, it was easy to see that ray tracing was working just fine. First I noticed the sniper’s laser scopes were reflecting on the ceiling and floor. Then I noticed the streaks of glowing bullets flying through the air were being reflected everywhere.

I also tried turning ray tracing on in DEATHLOOP. It was not an option.

I am a little disappointed here, but I don’t feel let down. I didn’t expect to be able to use ray tracing in many games with the $370 GPU that I chose. Ray tracing works in Severed Steel, and the fact that it works in other games with AMD’s driver bodes well for the future.

NOTE: To enable ray tracing with Ubuntu’s Mesa libraries you have to add RADV_PERFTEST=rt to your game’s startup options. It might look like this in Steam:

RADV_PERFTEST='rt,gpl' %command%

This upgrade was inspired by DEATHLOOP

DEATHLOOP ran like a slideshow on my GTX 970, so I didn’t get to play it. I finished the first mission last night, and it was buttery smooth. I don’t know if it is a good game yet, but I look forward to finding out this week!

I turned almost everything up to the max, but I enabled some of the FSR-related settings to get some extra performance. The game looks fantastic, and it is usually staying above 100 frames per second.

UPDATE: DEATHLOOP does not run great. As soon as I got to the town, my FPS dropped below 60. Even turning the settings down rather low and running at a much lower resolution still leaves me stuck in the 60-FPS range. I asked Google about this, and I found a bunch of people on Reddit wondering why they can’t stay above 60 FPS with their Ryzen 5950X and Nvidia RTX 3090 machines.

Conclusion

I think I made a good choice, but I think the 7900 XT would have also been great. I am already enjoying buttery-smooth games with nearly maxed out settings. I expect I would be underutilizing a 7900 XT for at least a couple of years.

FidelityFX Super Resolution (FSR) got me a few extra years out of my GTX 970, and I think I am going to continue to rely on it with the 6700 XT. There just isn’t quite enough grunt here to get to 144 frames per second with Ultra or Badass settings at 3440x1440, and I think that is alright.

I was hopeful that my GPU upgrade would get me to 100 Hz or 144 Hz in most games with the settings maxed out without having to enable FSR. I was also expecting to pay more than $1,000 for that upgrade, though, so I can’t complain about reaching that goal with a light sprinkling of FSR enabled.

The Topton N6005/N5105 and Jonsbo N1 Are An Awesome DIY NAS Combination!


Am I trying to win Brian Moses’s DIY NAS 2023 giveaway?!

Brian Moses DIY NAS 2023 Edition

Have you seen this thing? It looks both amazing and adorable, and it packs so much hardware into quite a tiny, power-sipping package. I exiled my homelab server to a shelf under my network cupboard on the opposite side of the house. If I had Brian’s new build, I would definitely need to make room to show it off somewhere in my home office!

If I win the DIY NAS: 2023 Edition, what would I do with it? What sort of problems would it solve for me?

Well, duh, Brian! I would do everything I am doing today, but I would do it better and faster while using less power and generating less heat!

What do I have running on my homelab server today?

I still call that first VM a NAS. I don’t actually run Samba or NFS on there anymore, but I just don’t have a better name. It runs a Seafile client to pull down an additional copy of all my data from my Seafile server at Brian Moses’s house. That extra copy is stored on an encrypted btrfs partition that takes automatic hourly, daily, weekly, and monthly snapshots.
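
I won’t pretend this is the exact tooling on that VM, but hourly btrfs snapshots can be as simple as a root cron entry pointed at the right subvolume. The paths here are made up:

# run at the top of every hour; keep the snapshot read-only
0 * * * * /usr/bin/btrfs subvolume snapshot -r /srv/seafile /srv/snapshots/seafile-hourly-$(date +\%Y\%m\%d-\%H\%M)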

We are generating well over a terabyte of video footage every year, and I am starting to run out of space!

What is wrong with my old homelab server?

I want to say that nothing is wrong with my old homelab server. It is doing a fine job, but it is getting quite old. I only have two mechanical hard disks connected to my server, and it still idles at more than 70 watts. That old FX-8350 is a real pig of a CPU.

DIY NAS Cartoon

In most ways, that old FX-8350 is faster than the Celeron N5105 in Brian’s new NAS build. They benchmark about the same when it comes to single-core performance, but my FX-8350 has twice as many cores, so it pulls way ahead when fully loaded. The problem is that my homelab starts pulling more than 250 watts of electricity at the power outlet when it is under full load.

That awesome new Celeron has a TDP of 15 watts. It isn’t going to be heating up the room when it is thinking really hard.

Those 2.5 gigabit Ethernet ports are awesome

I pass almost all my personal data through my Tailscale network now. That means everything I move around is encrypted, which also means that I eat a lot of CPU cycles encrypting and decrypting data. My FX-8350 tops out at around 900 megabits per second when passing data through Tailscale’s encryption.

The N5105 in Brian’s DIY NAS build for 2023 has much more modern AES instructions, and it easily beats my FX-8350. I believe the N5105 can manage nearly 1.3 gigabits per second when pushing data through Tailscale.

That is fast enough to need more than my homelab server’s gigabit Ethernet port. Do you know what the best part is? My FX-8350 has to pull more than 200 watts from the power outlet to push 900 megabits per second through Tailscale. I bet Brian’s N5105 doesn’t even make the room warmer.

Those 2.5 gigabit Ethernet ports will cost me a lot of money!

I don’t have any 2.5 gigabit Ethernet gear in my house, but the prices are getting really competitive!

I would have to spend $30 putting a 2.5 gigabit Ethernet PCIe card in my workstation. I would need to spend $110 to add a 5-port 2.5 gigabit Ethernet switch to my office. Then I would need to buy a second switch to put in my network cupboard.

Winning Brian’s giveaway will cost me at least $250 in network gear!

Why you should want to win Brian’s DIY NAS 2023 giveaway!

This year’s NAS build is delightful. I have seen it. I have held it in my hands. I am envious of whoever manages to win this year.

That Jonsbo N1 Mini-ITX case is diminutive, looks awesome, and is packed as full as it can be with hard drives and other components. You can’t fit much else in there, and it would look absolutely amazing in my office.

Excited to win the DIY NAS

I have already written about how excited I am about the Topton N5105 motherboard. Topton has packed so many amazing features into such a tiny package. That power-sipping Celeron packs a lot of punch for 15 watts. It has six SATA ports and a pair of M.2 NVMe slots, so you can pack in a ton of storage. I am probably most surprised by the FOUR 2.5 gigabit Ethernet ports. I am impressed that you can get so much from such a tiny motherboard for under $300.

It is an amazing, power-sipping, compact NAS build this year.

Conclusion

It doesn’t specifically state in the rules of the giveaway that I am not allowed to win the giveaway. It says right there in the rules that ANYBODY can win. I count as an anybody, don’t I?!

I didn’t discover the Topton N5105 motherboard or the Jonsbo N1 case, but I definitely encouraged Brian to build this year’s DIY NAS around these components. I know that Brian would agree with me that it wouldn’t be fair for me to win the NAS. He doesn’t even have to feel bad about it, because I am doing my best these days to own and operate less server gear and fewer hard disks. I would have to buy a stack of disks that I don’t need to fill the Jonsbo N1 to the brim!

I hope my entry into the contest encourages you to enter as well. This year’s DIY NAS build is quite badass, and I think anyone would be proud to display it among their homelab gear!


My New Monitor: The Gigabyte G34WQC A Ultrawide


I have been avoiding a monitor upgrade for as long as I could. I have been running a pair of QNIX QX2710 monitors at my desk for nearly a decade. These have served me well, and were a tremendous value. I have had these IPS panels overclocked to 102 Hz almost the entire time, and I only paid $325 for each monitor in 2013. At the time I bought them, you couldn’t get a name brand 27” 2560x1440 monitor for less than $900.

The QNIX monitors weren’t perfect. Their color reproduction didn’t look all that far off from my new Gigabyte monitor in sRGB mode, but there was more than a bit of backlight bleed around the edges. I knew it was there, but it wasn’t so bad that I couldn’t live with it.

Placeholder Photo

NOTE: This photo needs to be replaced. I am rearranging video lights, camera mounts, computers, and monitors all around my office. Everything is a mess right now. If I had known I was going to use this photo in the blog, I would have turned on more lights and used a better camera!

I am staring at my new monitor while writing this, and I can say for sure that the old monitors looked just fine. Upgrades are always fun, and swapping in this nice new Gigabyte monitor has been an enjoyable experience, but this is kind of a sideways move for me. Going from two 2560x1440 monitors to a single 3440x1440 is both an upgrade and a downgrade.

There is a very specific problem with the QNIX monitors that has been holding me back. They only have a single dual-link DVI port on the back. My Nvidia GTX 970 was probably one of the last few GPUs to sport a pair of DVI ports.

Active DisplayPort to dual-link DVI adapters can be a bit janky. Some are really inexpensive, but a pair of good adapters that would actually work might cost me $50 or $60. That’s almost 20% of the cost of a new monitor.

I am in need of a GPU upgrade, so upgrading my monitor first made a lot of sense.

Are we already too far in for a tl;dr?!

I am quite happy with my purchase. I believe the Gigabyte G34WQC is the best 34” ultrawide monitor you can buy for under $400.

The G34WQC has excellent dynamic range, low latency and a rather fast 144 Hz refresh rate for gaming, FreeSync Premium support, and quite good sRGB color accuracy.

This is definitely the best monitor for me. You can keep reading if you want to find out why!

Why didn’t I buy a 38” 3840x1600 monitor?

I have been drooling over these monitors ever since the first one was announced. These would be a proper upgrade for me!

I should probably say here that my old 27” 2560x1440 monitors, 34” ultrawide 3440x1440 monitors, and 38” ultrawide 3840x1600 monitors all have the same exact pixel density. They all have about 110 pixels per inch. The size of each of their pixels is precisely the same. The bigger monitors just have more of exactly the same thing.
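
That claim is easy to double-check: pixels per inch is just the diagonal pixel count divided by the diagonal size.

# PPI = sqrt(width^2 + height^2) / diagonal inches
awk 'BEGIN { printf "27in 2560x1440: %.1f PPI\n", sqrt(2560^2+1440^2)/27 }'
awk 'BEGIN { printf "34in 3440x1440: %.1f PPI\n", sqrt(3440^2+1440^2)/34 }'
awk 'BEGIN { printf "38in 3840x1600: %.1f PPI\n", sqrt(3840^2+1600^2)/38 }'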

There are only a few models of 38” ultrawide monitors on the market. They usually cost between $1,150 and $1,300, though you can sometimes find one for $999.99.

I would be willing to pay that much for a monitor. That is about what my old 20” Trinitron monitor cost me if you adjust for inflation, and that monitor was both used and physically damaged!

All the 38” ultrawide monitors are premium products. You can find 34” ultrawide monitors from brands like Sceptre for under $300.

The premium you have to pay for the extra 4” is high, and I am hopeful that someone like Sceptre will add a 38” ultrawide monitor to their lineup in two or three years.

It seemed like a wise move to save $600 or more today. Do you think I made the right decision? Maybe I will be able to add a 38” monitor to my desk in a few years with that $600 that I saved!

Why the Gigabyte G34WQC A?

I can’t believe how much effort I put into shopping for a monitor. I figured that all of the 1440p ultrawide monitors under $400 would be full of compromises, and I assumed those compromises would be pretty equivalent.

At first I had my sights set on the AOC CU34G2X. It usually sells for $340 on Amazon, but it was priced up at $400 on the day I was shopping. I immediately added it to my wishlist, and I said I would shop around until the price dropped back to $340.

Tom’s Hardware has a great review of this monitor. They tested the latency, and it scored pretty well. They said its HDR support was basically phoned in. Overall, though, I was pleased with the test results at the $400 price point.

Then I noticed the AOC CU34G3S, and it was also priced at $400. It seems to be an update to the CU34G2X. They both have similar quality 34” 3440x1440 VA panels. The cheaper CU34G2X supports up to 144 Hz and has a curve of 1500R, while the newer CU34G3S goes up to 165 Hz and has a curve of 1000R.

This is when I stopped, blinked a few times, and said, “Oh, poop!” How much of a curve do I want? That tight 1000R curve sounded like too much curve!

I would gladly pay $400 for the 165 Hz monitor, especially since it means I could order it immediately and have it on my desk in two days. I was worried more than a little about that more extreme curve.

I clicked on a lot more monitors, but most of them didn’t have reviews that included latency testing like Tom’s Hardware. There was an Acer Nitro for $360 that looked good on paper, but I couldn’t find a single technical review.

Then I stumbled upon the Gigabyte G34WQC for $380. Tom’s Hardware has a really good review, and all the graphs in the review included the AOC monitors that I was already interested in.

The Gigabyte monitor can only reach 144 Hz, but it still manages to match the latency of the 165 Hz AOC monitor. The Gigabyte has higher maximum brightness, and it has really good dynamic range. Not as much dynamic range as an OLED monitor, but 34” 1440p OLED monitors cost four or five times as much.

All of that data was fantastic, but I was most excited that the Gigabyte G34WQC only has a curve of 1500R.

Is 1000R really too much curve?

I have no first-hand experience with a 1000R monitor. I hit up Google, used my protractor, and did some math. I believe I correctly calculated that my two monitors were set at an angle equivalent to around 650R.

Two flat monitors with an angle in between is probably not directly comparable to a continuous curve, but coming up with such an extreme number makes me think that 1000R wouldn’t be as extreme as I feared.

I feel like 1000R would be amazing for first-person shooters. I worried that it would be awkward when I have Emacs in the middle of the screen and some terminal windows off to either side.

I am staring at a 1500R monitor while writing this. It hasn’t even been on my desk for a full 24 hours, and it is already making me think I would have been perfectly happy if I bought a 1000R monitor.

I do feel that you need to have some amount of curve on a monitor this size. My friend Brian Moses has two desks in his office. Both have 34” ultrawide monitors. One has a curve, the other doesn’t. I bet you can guess which one he prefers sitting at.

Why did I settle for a VA monitor?

I was already using IPS monitors, so you might assume that a VA monitor would be a downgrade. My IPS monitors were made with LCD panels rejected by the folks at Dell or Apple. Those LCD panels came off the same assembly line as the very best LCD panels of the time, and they were intended to be used in the most expensive monitors. There was just something they didn’t like about these batches, so they ended up in cheap monitors.

That leads to the other point: this VA monitor has 10 years of technological and manufacturing improvements over my old IPS monitors.

Of course I did check the prices on 34” IPS monitors. There was one oddball down at $450, but I couldn’t find any reviews on that one. The majority of 34” IPS monitors were priced at $750 and above, so they cost twice as much as any of the VA monitors.

If I were going to spend more than $750, I would most definitely have waited for a sale on one of the premium 38” monitors. They are all very nice IPS monitors, and sometimes you can find one on sale for $1,000.

Can you believe I am only using one monitor?

I have had at least two monitors on my desk for a long, long time. I used to have two Riva TNT2 graphics cards installed in my dual-Celeron ABIT BP6 machine connected to a pair of 19” CRT SVGA monitors from Sam’s Club. I believe this would have been right around Y2K. Do you remember CRT monitors and Y2K?!

My old 27” monitors are just about as tall as they need to be. I tried mounting a third monitor above the other two once, and that was way too far to be leaning my neck back. It was uncomfortable even just looking at a small terminal window at the bottom of the screen. I know the 38” ultrawide monitors would be 160 pixels taller, but that’s really only 80 more on top and 80 more on bottom. That would still be reasonable.

The most important thing I learned from using a pair of 27” monitors is that I can really only see about one third of the second monitor without turning my head. I know that I will continue to miss some of that extra screen, but a 34” ultrawide is roughly one third wider than one of my old 27” monitors. That is pretty close to the perfect width.

I was a bit worried that a 38” ultrawide might be too wide. Especially when playing full-screen games. I am much less concerned about this after having the 34” ultrawide on my desk, and I should have figured that out with math. A 38” monitor is only 400 pixels wider than a 34” monitor. That is only 200 more pixels to the right and 200 more pixels to the left!

Don’t let me talk you out of spending three times as much on a 38” ultrawide! I would certainly be excited to have one on my desk.

Let’s get back to the Gigabyte G34WQC A!

I was trying to find a compromise that is good for gaming, good for productivity, and easy on my wallet. I think the Gigabyte was a good choice, and it ticked almost all the right boxes for me.

You already know I was shopping for something reasonably priced. All the monitors I was looking at were $400 or less.

Productivity would steer most people towards something with a much higher DPI. 32” widescreen 3840x2160 monitors are quite common. My wife has a very nice 32” Dell 4K IPS monitor on her desk. It looks great, and it is around 140 DPI.

I could write 2,000 words about why I would prefer to stick to the same DPI. The short answer is that Wayland isn’t ready for me, and X11 doesn’t support fractional scaling. Everything is easier for me if I stay at 110 DPI, and I don’t think there are any high-DPI ultrawide monitors.

The 34” ultrawide is working out well so far. I have my screen divided up into three equal columns. Firefox is on my left with the awesome PPI Calculator open. My Emacs window is in the middle with the font enlarged slightly, giving me a little over 90 columns. To my right are a pair of terminal windows that are about 125 columns wide.

Davinci Resolve Ultrawide

It should definitely be noted that Davinci Resolve is just a little more comfortable with an ultrawide display. You can comfortably fit two widescreen viewers, the inspector tab, and the media pool on the screen at the same time. I used to have to scroll my media pool from side to side to see timecodes and clip lengths. I won’t have to do that anymore!

I have been firing up older first-person shooters that I am confident will keep up with the Gigabyte’s 144 Hz refresh rate. I wandered around for a bit in Borderlands 2, I played through a level of Severed Steel, and I have also been just generally farting around in Just Cause 3.

I ran the UFO ghosting test, and the G34WQC definitely has some ghosting. If I were smart, I would have run the test on my old monitors before putting them in the closet!

I can most definitely tell that the Gigabyte monitor at 144 Hz feels smoother than my old QNIX monitors at 102 Hz. Part of that is certainly due to the extra 42 Hz, but I suspect both monitors have roughly the same number of frames of ghosting. That probably means that the Gigabyte VA panel’s ghost frames fade away more quickly.

I have no science to back that up. This is how I feel playing the same games with each monitor.

I do have some complaints!

Can I start the complaints with a bit of praise? The G34WQC stand is pretty nice. It feels solid, it can tilt, and the height is easily adjustable. I removed the stand as soon as I made sure my long DisplayPort cable could manage 144 Hz at native resolution, because I always use monitor arms. I was excited to see that the Gigabyte stand is attached using the VESA mounting screws. That means I can attach it to any other monitor. I may wind up using it on one of the old QNIX monitors, since I have no idea where the stock legs went to.
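
On X11, xrandr is a quick way to confirm that the cable and GPU actually negotiated the full refresh rate at native resolution. The output name below is just an example, since it differs between drivers:

xrandr --query | grep -A2 " connected"              # list outputs and the active mode
xrandr --output DP-1 --mode 3440x1440 --rate 144    # ask for native resolution at 144 Hz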

Zip Tied Power Cable Cheat

NOTE: Is snipping away ½” of strain relief and zip-tying a 90° bend in the cable cheating? Is it still cheating if it works?

My first complaint is the location of the ports. They all point downwards, and they are all rather close to the bottom. I had to search through my box of power cables to find the one with the smallest end, and I had to get creative with a zip tie to attach the power cable in such a way that it wasn’t hanging below the frame. Who wants to see cables dangling below their monitor?!

I need a long DisplayPort cable to reach my computer, so I am using my own. It has a fairly compact end, and I can still just barely see the cable from where I am sitting. I do have to duck my head down to see, but I shouldn’t be able to see it from my chair at all. The included DisplayPort cable has even longer ends than the one I am using.

The monitor is too vibrant with the default settings

Everything is rather bright, and the reds are crazy vibrant with the monitor set to the standard profile. Reds are bright. Browns look a bit orange. Everything is eye catching, but not in a good way.

I just set it to the sRGB profile, and I think it looks great. I did bump up the brightness a bit to fit better with the lighting in the room. I am assuming Gigabyte’s sRGB profile is calibrated fairly well. I am excited to learn that the color profile I have been using for years on my QNIX monitors wasn’t all that far off!

Conclusion

I believe I made a good decision, but I also don’t feel like there was a wrong choice to be made here. The Sceptre is probably a decent value at $300. Either of the AOC monitors seem fine both on the spec sheet and in the technical reviews on Tom’s Hardware. I don’t expect I would have regretted buying any of them, but I do think the Gigabyte was a better value for me.

I do have some regret that I didn’t splurge on a 38” ultrawide. For productivity work, like writing this blog, the 34” monitor just feels like a bigger monitor. Being 400 pixels wider would almost definitely make the 38” ultrawide feel much like two monitors without a bezel. Then I remember that I can nearly buy an AMD 7900 XT with the money I saved by staying one size smaller.

What do you think? Did I make the right choice with the Gigabyte G34WQC A? Why does every monitor have a terrible name? Are you already using one of those 38” ultrawide monitors? Do you think I should have spent three times as much for those extra four inches? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Am Excited About the Topton N5105/N6005 Mini-ITX NAS Motherboard!


I have been excited about the Topton N5105/N6005 motherboard ever since Brian Moses sent me a link to it while he was pondering his DIY NAS build for 2023. I literally haven’t been this excited about a motherboard and CPU combo for a low-power NAS or homelab build since I built my AMD 5350 homelab machine way back in 2015!

Topton N5105 on Brian Moses's Desk

I am writing about this now because my good friend Brian just bought a big box of these N5105 motherboards, and he is selling them in his eBay store. Am I allowed to tell everyone that?

Why should I buy a motherboard from briancmoses.com?

I could say a lot of things about how Brian is a good dude. We have been friends for decades, though, so you should probably assume that I am biased, and you should take anything I say that is just an opinion with a grain of salt.

I can definitely give you some verifiable facts. Brian has been buying parts for his DIY NAS build blogs and his giveaways for nearly a decade. I think the upcoming giveaway might even be the tenth anniversary. He buys the parts to make sure there won’t be any problems when you buy the same things to replicate his successful builds.

Two Geeks Exchanging Money

You can buy the Topton motherboard from Topton’s store on Aliexpress. I know the prices there move around, so I am not sure how much money you might save by going that route. Brian has already imported the motherboards, so there won’t be any surprise tariffs or fees. You won’t have to hope your package doesn’t fall off the boat on a slow shipment direct from China. I don’t know that shipments like this travel by container ship, but that is what I picture in my head.

Brian is taking on quite a bit of risk here, so I think his pricing is reasonable. I think that buying a motherboard from Brian is a great way to thank him for all the work he has put into his DIY NAS build blogs over the last eleven or twelve years!

Why is this Celeron N5105 motherboard so awesome?

I feel like I have to say a few words about my AMD 5350 build from 2015. That build used a Mini-ITX motherboard with a slow, power-sipping AMD laptop-grade CPU, four SATA ports, and room for two full-size DDR3 DIMMs. My server with two SSDs and two hard disks idled at around 35 watts, and the motherboard and CPU for that build cost just under $100. The AMD 5350 was from the same family of processor used in the Playstation 4 and Xbox One.

That was an amazing combination at the time, and I shop for something comparable every year, but I usually come up empty. The Topton N5105 doesn’t manage to come in at quite the same price point, but it packs in so many extra features to help justify the extra cost. It also doesn’t help that everything is just more expensive today than two or three years ago.

Some of those extra features are just inevitable due to the passage of time, like the much faster CPU, the faster built-in GPU, and support for more RAM. We might be able to say the 2.5 gigabit Ethernet on the Topton board was inevitable, but so many motherboards still ship with 1 gigabit Ethernet, and the Topton has FOUR 2.5 gigabit Ethernet ports!

These are the highlights from the spec sheet:

  • Jasper Lake Intel® Celeron® Processor N5105 CPU (Base: 2.0 GHz, Burst: 2.9 GHz, TDP: 10 W)
  • Mini-ITX form factor (17.0 cm x 17.0 cm)
  • 2 x DDR4 SO-DIMM slots 2400/2666/2933MHz (non-ECC) up to 32GB RAM
  • 2 x M.2 NVMe 2280 slots (PCIe 3.0 x1)
  • 6 x SATA III
  • 4 x 2.5Gbps (Intel i226-V) network interfaces
  • 2 x USB 3.0 ports
  • 4 x USB 2.0 ports (internal and external)

You can put this motherboard in a dense little case like the Jonsbo N1 or Jonsbo N2. You will have plenty of SATA ports to fill up all the bays with 20 TB 3.5” hard disks. You will have room for a couple of NVMe drives for boot, lvmcache, and maybe some fast virtual machine storage.

This crazy motherboard even has FOUR 2.5 gigabit Ethernet ports. Just one of those ports would be fine for my own personal use, but having more means you could even replace your home router with a virtual machine, and it should have no problem handling some of the fastest Internet connections available.

The best part is how efficient your home server can be with this board. I don’t want to spill the beans about Brian’s upcoming DIY NAS blog, but it is looking like a pretty much fully loaded Topton N5105 build can idle all day long at somewhere around 60 watts, and he didn’t even put in effort to curb power consumption.

Is the Celeron N5105 fast enough?

Fast enough for what?! I have a Raspberry Pi server sitting at Brian Moses’s house. That Pi can do AES encryption fast enough to keep up with the 12 TB USB hard disk, and it can pass encrypted traffic to my house via Tailscale at a little over 200 megabits per second. My Pi 4 is a very capable little piece of hardware, but the Celeron N5105 is at least four times faster.

My homelab server is built from spare parts. The CPU is an AMD FX-8350 that I bought in 2013. This was the fastest, most power-hungry consumer CPU that AMD offered at the time. My NAS virtual machine on my homelab server was able to transfer files via NFS or CIFS at 12 gigabits per second. My FX-8350 can move encrypted data via Tailscale at nearly 900 megabits per second. The FX-8350 is more than 40% faster than the N5105.

My Raspberry Pi and homelab servers are two varieties of orange that don’t compare all that directly to the Topton N5105.

My FX-8350 spikes up to 220 watts to push 900 megabits per second via Tailscale. Even though the Celeron N5105 is slower overall, it has more modern encryption acceleration instructions and more memory bandwidth than the ancient FX-8350, so the N5105 can push encrypted data via Tailscale at more than one gigabit per second. I don’t have Brian’s numbers on hand, but I recall it being a good bit more than one gigabit per second!
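
If you want to generate numbers like these on your own Tailnet, iperf3 between two nodes is the easy way. The hostname below is a placeholder for whatever machine is on the other end:

iperf3 -s                 # on the machine being tested against
iperf3 -c homelab -t 30   # on the other node, using its Tailscale name or 100.x address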

I don’t have an N5105 machine on hand to test out myself, but I have no doubt that, with fast-enough disks installed, it has enough horsepower to max out every one of those 2.5 gigabit Ethernet ports while serving files.

Why haven’t I built a Topton N5105 machine yet?!

I eat pizza at Brian’s house nearly every weekend. I bet it would be easy to sneak off with a motherboard!

The Topton N5105 is my dream homelab motherboard. It lands so close to the sweet spot for price, power efficiency, and horsepower while packing in a ton of SATA and network ports.

The Topton N5105 has more than enough CPU to run my Octoprint, NAS, and Seafile virtual machines. It would have no trouble saturating a couple of those 2.5 gigabit Ethernet ports, and having those would give me an excuse to start upgrading some of my machines to 2.5 gigabit Ethernet.

Best of all, the N5105 would probably save me 30 watts while turbocharging my server’s Tailscale throughput.

I always say that the best server for you is probably the one you already have. My ancient FX-8350 may be power hungry, but it would still take seven or eight years for the N5105’s power savings to add up to its price. Not only that, but the current hardware is doing its job just fine, and I am seeing a workstation upgrade in my future. That will mean free upgrades for the homelab!

Sometimes, building a new server IS the hobby!

My homelab server really isn’t much of a laboratory. It mostly runs services that I actually use. I just want everything to be reasonably priced and reasonably reliable. My homelab isn’t my hobby. My homelab-like gear is there to enable my other hobbies.

If building a cool little server is your hobby, then the Topton N5105 might be the motherboard you’ve been waiting for. Pairing it with either the Jonsbo N1 or N2 case would make for such a nifty, dense, shoebox-sized server.

I really like both of these cases from Jonsbo. The Jonsbo N1 would look cool at the edge of your desk or sitting on a shelf in the background of your video calls, while the Jonsbo N2 is more practical with its easily accessed hotswap bays for the hard disks. I would happily build a server using either one!

Is one N5105 server enough for a homelab?!

Everyone has a different idea of what constitutes a homelab, and everyone has different needs. One of our friends on our Discord server is running a fancy Kubernetes cluster at home on three or four Beelink SER5 5560U units. Another of our friends is consolidating his homelab down to a single off-lease Dell 730XD with a pair of 12-core processors, 192 gigabytes of RAM, and an Nvidia P40 GPU.

I think it is awesome that you can fit a cluster of separate physical machines in a lunchbox. I also think it is awesome that you can get a really good deal on beefy off-lease server gear.

The Topton N5105 is more than enough for my own homelab needs. Maybe it is enough for yours, or maybe a server built with this motherboard would be a good fit with the rest of your hardware!

I did see that the N5105 can be used for GPU video transcoding with Jellyfin.

Conclusion

I hope I haven’t spilled any beans about the 2023 DIY NAS build. I really do think the Topton N5105 motherboard is a nifty piece of hardware with a great combination of features at about the right price point. Having a power-sipping CPU, six SATA ports for a big RAID 5 or RAID 6, two NVMe slots for speedy media, and a ton of 2.5 gigabit Ethernet ports on a mini-ITX board is fantastic.

Writing this blog post is making me want to build a little server around the Topton N5105. Have you already built one? How is it working out for you? Do you think I should give in to the temptation and build one for myself? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Enabling Transparent Hugepages Can Provide Huge Gaming Performance Improvements


My gaming rig is getting rather long in the tooth. I am running a slightly overclocked Ryzen 1600 and an aging Nvidia GTX 970 with its thermal limit pushed to its maximum. I wouldn’t even be able to play any games from the last few years if it weren’t for Proton-GE’s ability to enable AMD’s FidelityFX Super Resolution (FSR) in almost every game I play.

I haven’t done a ton of science. I don’t have a handy way to benchmark most games. I did run a Borderlands 3 benchmark with my potato settings. I have nearly every knob turned to the lowest setting, and I bet I have some extra things disabled in config files. I run Borderlands 3 at 1280x720 with FSR upscaling to 2560x1440, and my hope is that the game can stay above my monitor’s 102 Hz refresh rate. It doesn’t always stay that high.
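
The FSR upscaling itself comes from Proton-GE, and it is switched on through environment variables in the game’s Steam launch options. Mine look something like this, and the strength value is just an example:

WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=2 %command%

With that set, picking a lower resolution in the game while running fullscreen is what triggers the upscale to the monitor’s native resolution.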

NOTE: I backed off the overclock of my aging QNIX QX2710 monitors while replaying Borderlands 3. I dropped them down to 86 Hz, and I will probably keep them here until my next monitor and GPU upgrade. It is easier to hit 86 frames per second in newer games, and it is enough of a step up from 60 Hz that I don’t feel too bad about giving up the extra frames. Why I landed on 86 is probably a long enough story for its own blog post. Can you believe these crazy monitors are still working great nine years later?

Borderlands 3 Benchmark

The benchmark came in at 92 frames per second with Transparent Hugepages (THP) disabled. That is the default setting on Ubuntu. That went up to just over 99 frames per second when I turned THP on.

Your mileage will most definitely vary, but when you’re constantly dropping just below your monitor’s refresh rate, that 8% improvement is huge! It is easy and free to give it a try:

pat@zaphod:~$ echo always | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
[sudo] password for pat: 
always
pat@zaphod:~$ 

That command won’t do anything permanent. You will be back to the default setting next time you reboot.
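
If you decide you like the results and want the setting to stick, one way to make it permanent, assuming your machine boots through GRUB, is a kernel parameter:

# append transparent_hugepage=always to the existing options in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash transparent_hugepage=always"
# then rebuild the GRUB configuration and reboot:
sudo update-grub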

What are Transparent Hugepages? Why do they help performance?

Let’s keep this to two or three short paragraphs. Your software usually allocates memory in 4 KB pages, and your CPU has to keep track of which physical locations on your sticks of RAM correspond to those 4 KB pages. The CPU has a small cache of recently used page mappings called the TLB. If your game is flipping through more pages than fit in that cache, things will slow down.

Hugepages are usually 2 MB instead of 4 KB. That means the CPU has to keep track of only a tiny fraction of those mappings. It is sort of like having a page cache that is suddenly 500 times larger.

When something is in the cache, it is just like when an item is on the shelf at the store. When something isn’t in the cache, you have to ask an employee to fetch the item from the back room. Every time something isn’t on the shelf, you have to wait. Just like the CPU.
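
You can check which mode is active and whether a running game is actually being handed hugepages:

cat /sys/kernel/mm/transparent_hugepage/enabled   # the active setting is shown in [brackets]
grep AnonHugePages /proc/meminfo                  # anonymous memory currently backed by 2 MB pages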

THP have been a HUGE boost to my Team Fortress 2 performance!

Team Fortress 2 on Linux is stuck in a stupid place right now. The game uses a modern enough version of DirectX on Windows to work well with modern graphics hardware, but it is stuck using OpenGL on Linux. Since it is a multiplayer game, they don’t let us run the Windows version under Proton to get a performance boost. Valve have updated Portal 2 and Left 4 Dead 2 to use DXVK on Linux, and I hope they do the same for Team Fortress 2, but I am definitely not holding my breath.

Team Fortress 2 on Linux needs a lot of single-threaded CPU grunt, and I have always had trouble keeping the game up at my monitor’s 102 Hz. This is another one of those things I can’t easily benchmark.

NOTE: Not much going on in the video. I had OBS running a replay buffer, but this was the only time I remembered to hit the key to save a replay!

The game runs fine until I walk into a busy fire-fight on a server with tons of fancy hats and lots of explosions and effects. Then my frame rate drops far enough below my refresh rate that the game stops feeling smooth and I start having trouble landing pills with my demoman.

Enabling THP has helped dramatically with TF2. As far as I can tell, I have yet to drop below 102 frames per second, and I certainly haven’t dropped as low as my new 86 Hz refresh rate.

Quite a while ago I used mastercomfig.com to generate some potato settings for my game. The settings went so far that the weird cubic lighting made the game sort of resemble Minecraft. I am still using mastercomfig.com to lower my settings, but I have backed off several notches from the potato-grade settings.

It is a bummer that I have to play this ancient game with my GPU so underutilized that it sits clocked at the minimum frequency, but I am super stoked that I can play without my frame rates helping me to lose!

Will THP help with other games?

As I said, I am not using a ton of science here. I was playing through Dying Light when I learned that THP might help gaming performance. My unscientific test there was loading the game, waving the camera around in the room where I spawned, then reloading the game with THP and doing the same thing. The numbers seemed to be leaning at least 5% higher, but we are just going by my memory between reloads and hoping I pointed the camera at similar things.

Some games need more CPU. Some games need more GPU. Some settings lean more on one than the other. Even after that, things will depend on how much CPU and GPU your machine has. Some games could run slower, though I don’t think I have seen that yet. Some games might run the same. Some games might run a little better.

The only way to find out is to try.

THP can cause performance issues

There are reasons that the Linux kernel doesn’t enable transparent hugepages by default. There are some programs that run extremely poorly or cause problems with THP enabled, the most famous of which is probably PostgreSQL.

I have been running THP on my desktop for a couple of weeks now. I haven’t rebooted in nearly two months. I have had one hiccup so far. I wandered into my office and noticed that my glances window had a red process using 100% of a CPU core. It was khugepaged. Its job is to defragment memory so the kernel can map more 2 megabyte pages.

In my haste, I didn’t see the root cause of my problem right away. I figured my web browser was my longest-running process that uses a large amount of RAM, so I closed and reopened Firefox. The problem went away for a few minutes, but then it was back.

It turned out that when I closed Davinci Resolve the night before, it didn’t actually completely shut down. There were no windows visible, but there were processes eating up memory and using a very small but constant amount of CPU. I killed Resolve and haven’t seen khugepaged since. That was a few days ago.

Conclusion

I know some of you are rocking much newer GPUs than my GTX 970, and you probably don’t need to wrestle an extra 5% out of your games. I am glad GPU prices are getting better, but I paid $340 for this GPU within a week or so of release, and it was the second fastest available. More modern cards that perform roughly as well cost almost as much. Prices are getting better, but I feel like I will get quite a bit more bang for my buck if I can hold out on my next upgrade a little while longer.

If you need to squeeze a little extra out of your aging gaming rig, you should most definitely try enabling transparent hugepages. It is easy to try, easy to undo, and it seems very unlikely that it would have a negative impact on your gaming performance.

Tailscale is Awesome!


I noticed a problem last week. Tailscale is at least a small part of nearly everything that runs on my own private network, so it gets mentioned in the vast majority of the blogs I write. I also tend to write a post every time Tailscale drops a new feature that I start using or relying on, but those write-ups are rather specific. I don’t have anything recent to link to that talks about what I actually use Tailscale for on my personal network.

This is the post that I need to write to address this!

What on Earth is Tailscale?

If I just say that Tailscale is a mesh VPN, I feel like everyone should have a pretty good idea what I am saying, but the world has gotten weird. When I first started using VPNs twenty-something years ago, a VPN was the magical thing that would put your Internet-connected computer on the other side of your company’s firewall, or it might be used to connect two physical locations using an Internet connection.


Marketing has convinced the vast majority of people that the service companies like NordVPN are selling is the traditional use of a VPN, even though what those companies are doing is just one of the many things you can do with a VPN connection.

Yeah, yeah. What in the heck is Tailscale?

You can shoehorn Tailscale into doing a lot of useful things, but how about we just talk about the basics? The idea is that you install Tailscale on all of your laptops, phones, and servers. Tailscale will work its magic and set up a direct Wireguard VPN connection between each of your devices. As long as they have a decent Internet connection, each of your devices will be able to make a direct network connection to any other device on your personal Tailnet.

You can be on your phone at Starbucks, connect to a test web server that is running on your laptop at home, and that laptop will be able to access a database server that lives at your office. Everything just works.

The best part is how easy it is to get up and running. You could probably already have signed up for a Tailscale account and had two or three machines connected to your new Tailnet in the time it took you to read up to this point. It is that easy to get going.

Tailscale is awesome because you don’t have to jump into the deep end

Tailscale is probably the easiest VPN to set up, and it is really easy to install it on two or three computers just to solve a single problem. Dipping your toe in is a great way to get started!

You can install Tailscale on your phone and your Home Assistant server, and you will immediately be able to access your home automation dashboard from anywhere in the world.

Maybe you need remote access to your NAS. Just install Tailscale on your laptop and your NAS, and you’ll be connected in no time.

If you keep adding one or two machines at a time to your Tailnet to solve small problems, it won’t be long before almost all your computers are part of your Tailnet.

This is definitely how I became so reliant on Tailscale. I started with just a handful of nodes on my Tailnet, and before I knew it, every device I own was on my Tailnet, and at this point almost every network service that I run is only accessible via my Tailnet. Say Tailnet again. Tailnet.

What is Pat doing with Tailscale?

I am going to expand on this, but I think it is best to start with a list:

Some of these things are simple enough that they don’t require their own heading.

Remote access to Home Assistant is kind of basic. The Home Assistant app on my Android phone can always communicate with my Home Assistant server no matter where I am, and I didn’t have to expose Home Assistant to the Internet. I can click a button to warm up my espresso machine when I am driving home, and that is awesome.

The Raspberry Pi-KVM is really cool. It is a do-it-yourself networked keyboard, video, and mouse device. You connect the Pi to a network and plug the appropriate HDMI and USB cables into a server. You can then use a web browser to see everything on that server’s HDMI port and type as though you are in the room with the server.

Tailscale means I can easily leave the Pi-KVM behind at my parents’ house and troubleshoot their problems from the comfort of my desk at home.

I am not using Tailnet Lock

Tailscale has a potential security problem. It is possible for Tailscale to add a new node to your private Tailnet. Up until a few months ago, you just had to trust that the folks at Tailscale wouldn’t add some sort of nefarious network device to your Tailnet.

Tailnet Lock closes that hole. When you enable the lock, you have to manually approve new nodes on your Tailnet. It seems to be implemented in a secure way. You can read more about Tailnet Lock in Tailscale’s documentation.

Tailnet Lock is not yet compatible with node sharing. I rely very heavily on node sharing, so I couldn’t enable Tailnet Lock today even if I wanted to.

I haven’t decided whether I even want to enable it. I don’t think I feel paranoid enough to worry about someone at Tailscale adding a node to my Tailnet.

Self-hosted Seafile has saved me $700 so far!

I wouldn’t have self-hosted Seafile again if I had to run the server on the public Internet. Tailscale lets me access my Seafile server from anywhere in the world, and my little Raspberry Pi server doesn’t accept any connections from the local network. It is only accessible via my Tailscale network.

I don’t have to worry so much about old, broken, exploitable services when they are running on my own personal Tailnet. The entire Internet can’t hammer away at the nginx server in front of my Seafile server looking for a hole. I don’t have to scramble around one morning when I discover that there is a patch for a zero-day exploit against Seafile. I can just remain happy knowing that only five computers in the world can see my Seafile server.
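
There is more than one way to lock a service down like that. A simple approach, though not necessarily exactly what I did on the Pi, is a firewall rule that only accepts the web port on the tailscale0 interface:

sudo ufw allow in on tailscale0 to any port 443 proto tcp   # nginx is reachable over Tailscale
sudo ufw deny 443/tcp                                       # and from nowhere else
sudo ufw enable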

Seafile is syncing a little over six terabytes of data between my workstation, my NAS, and my laptop. The Seafile server also stores a copy of all that data with 90 days of history.

Google charges $100 per year for two terabytes of cloud sync storage. Dropbox charged $120 for the same. I can’t say that I have done the math perfectly here, because I haven’t done a good job of tracking when I actually cross each two-terabyte threshold. I do know that I would be paying $400 to Google or $480 to Dropbox this month if I weren’t hosting my own Seafile server.

I have been slowly spending those savings on local storage. The first $300 went to the Seafile Pi and its 14 TB hard drive. Another $200 or so went to a 12-terabyte drive for my workstation, and last month I added a 14-terabyte drive to my homelab server. If I don’t need to replace any failed hardware, the $400 I save each of the next three or four years will go right into my pocket!

Node sharing is Tailscale’s killer feature

We need to share large video files to produce our various video content. I usually have to send a 20- to 30-gigabyte video file to Jeremy so he can finish up the work on the Create/Invent Podcast, and Brian has to send me about 40 gigabytes of video when we record The Butter, What?! Show. They both have accounts on my Seafile server, and I have shared the server to their Tailnets.

Why is this such a killer feature? If I were doing this the old-fashioned way, I would have had to generate and send them Wireguard keys!

With Tailscale, I don’t need to maintain anything. I don’t need to generate keys. I don’t need to ship those keys to Brian or Jeremy. They just have to log in to Tailscale, and Tailscale manages all of that for me.

Brian already had his own Tailnet, but Jeremy was new to Tailscale. I sent him the sharing link. That sent him to the Tailscale website, where he was able to create an account, and he was taken straight from there to the download page. He was able to ping my Seafile server a few minutes later. All I had to do was send him a URL.

If Brian or Jeremy can’t log in to Tailscale, I am not the one they will have to call.

Geography-agnostic servers are pretty cool!

I made a mistake and created a problem with my Seafile Pi when I was upgrading all my machines to use Tailscale SSH. It was my fault, but I could no longer connect via SSH, and I needed to sit down at the console or pop the boot drive out. I was at Brian Moses’s house one Saturday night for pizza, so I just made sure to bring the Pi home with me.

Seafile was down while I drove, but I plugged the Raspberry Pi in when I got home, so nobody noticed that anything happened. If Brian hadn’t seen me leave the house with the server, he wouldn’t have noticed that it had moved.

I fixed my Tailscale SSH problem, and I 3D-printed a new case with a slot for an OoberLights board. I think the Seafile Pi was on my desk for two weeks before I took it back to Brian’s house.

I think this is so cool. I can move a server to a new location for troubleshooting, and everything that connects to that server will still be able to connect. No configuration changes need to be made anywhere. Everything just works!

Tailscale SSH has been a nice upgrade!

Tailscale keeps adding new features. They are almost always useful features. They tend to be the sort of features that, if you’re already using Tailscale, there isn’t much excuse not to turn on and try out. Tailscale SSH is one of those features.

I have an SSH private key on my workstation. I have an SSH private key on my laptop. It is my job to make sure I install their matching public keys on every device I need to connect to via SSH. Then I have to remember to generate new keys on some sort of regular basis.

I always fail at the last part. One of the keys I was using last year was generated in 2013.

Tailscale is already generating a private key on every node to establish secure Wireguard connections. Why not let Tailscale handle all my key management for me? I can use Tailscale’s ACLs to control which machines can connect to other machines. My desktop and laptop can connect anywhere, my internal servers can sometimes connect to each other, but my public servers are never allowed to SSH anywhere.
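Turning the feature on is about as small as a change can be. This is the gist of it on a Linux box, using my Octoprint node as the example target, so treat the hostname and user as stand-ins:

sudo tailscale up --ssh      # advertise this node as a Tailscale SSH server
ssh pat@octoprint            # from an allowed machine; Tailscale handles the authentication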

It is unfortunate, but I still need to maintain SSH keys. I can’t use Tailscale SSH to authenticate to Gitlab, Github, or even Tailscale nodes that have been shared with me. Even so, cutting the number of places where I manage public keys down to four from dozens and dozens is a huge win!

Tailscale’s ACLs

There is a good chance that you won’t need to use Tailscale’s ACLs. I think I was running Tailscale for almost two years before I even considered setting up any access controls.

I decided it was time to tag all my nodes and configure ACLs when I added my first vulnerable node to my Tailscale network. I migrated the web server that runs the nginx server for our blogs over to a cheaper Digital Ocean droplet, and at the same time I added Tailscale to the server.

This server is answering requests over the Internet. Anyone sitting anywhere in the world could be banging on this server trying to get a shell. I don’t want them to have a point from which to attack the rest of my Tailnet if they succeed. I have fragile things like Samba servers hiding behind Tailscale!

These are my tags:

  • workstation (machines where I sit at the keyboard)
  • shared (via node sharing)
  • server-ts (servers only accessible via Tailscale)
  • server-dmz (servers accessible via Tailscale or LAN)
  • server-external (servers on the public Internet)

The workstation tag can connect to anything. The server-ts tag can connect to any of the three server tags. The server-dmz tag can connect to server-dmz or server-external, and the server-external tag just can’t connect to anything. The devices that have more exposure can never connect to safer devices.
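Tagging the nodes themselves is a one-liner once the tags are declared under tagOwners in the policy file. A quick sketch using my server-ts tag:

sudo tailscale up --advertise-tags=tag:server-ts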

Tailscale Funnel is solving real problems!

I wanted to yell something like, “I don’t know what I would do without Tailscale’s fun tunnels!” but that would be an exaggeration, if not a total lie. We muddled along just fine without Funnels. Our continuous blog deployments used to have a two- or three-minute delay, but with a Funnel, they happen within a couple of seconds of a new post being pushed to Gitlab.

What is a fun tunnel? If you have a service running on your private network, and you need to expose that service to the public Internet, then you can use a Funnel.

I added a webhook server to our little development server, and I configured Tailscale to point a Funnel from butterwhat.humpback-rooster.ts.net to our development server. Now a Gitlab action can let our development server know that a new commit is available, and it can publish our changes immediately.
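The setup is only a couple of commands. The exact Funnel syntax has been changing while the feature is in alpha, so treat this as a sketch and check tailscale serve --help and tailscale funnel --help on your version. The port 8080 here is a stand-in for wherever your webhook listener actually lives:

tailscale serve / proxy 8080     # put the local webhook listener behind Tailscale's HTTPS proxy
tailscale funnel 443 on          # expose what is being served to the public Internet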

Funnels are currently in alpha and have unpublished bandwidth limits. The traffic has to run through Tailscale’s relays, so the bandwidth is much lower than with a direct Tailscale connection. You probably don’t want to stream video over Plex or Jellyfin with a Funnel, but it is probably alright to host a low-traffic web server.

I would prefer to not need a Funnel at all!

My only Funnel is a consequence of not yet having Tailscale when we set up a Git repository so Brian Moses and I could collaborate on Butter, What?! blogs. We needed a repository we could both access, so I set one up for us on Gitlab.

I already had a virtual machine running on my homelab server that was handling Octopress and Jekyll blogs, so it made sense to me to just add one more blog to the mix. That server had a cron job that attempted to pull changes from Gitlab every few minutes, and if there were any changes, they would be published to the Internet.

I improved the situation with a Funnel just because I could. The better solution to the problem would be removing Gitlab from the process. I shared the virtual machine with Brian via Tailscale a long time ago. Why aren’t we just pushing our updates directly?

Tailscale has a proxy server and handles Let’s Encrypt certificates for you!

Tailscale makes it easy to create TLS certificates for your devices. They have had this feature for a while, and their Funnel feature wouldn’t function without it.

I am way more excited about the proxy server that landed alongside Funnels. Not only does the proxy work with your Funnels, but it also works just fine on your Tailnet. You can ask Tailscale to create a Let’s Encrypt certificate for you, then have Tailscale proxy connections to an HTTP or HTTPS server for you.

root@octoprint:~# tailscale cert octoprint.humpback-rooster.ts.net
root@octoprint:~# tailscale serve / proxy 5000

There is no extra software to install. I don’t have to figure out which web server Octoprint uses. I don’t have to ask Google how to install a certificate on that particular web server. I had a proper HTTPS connection to my Octoprint server in less than a minute, and it was awesome.

Tailscale exit nodes might be a replacement for NordVPN and friends

Tailscale lets you designate machines on your Tailnet as exit nodes. You can configure an Android phone to be an exit node, and you can even share your exit nodes with your friends.

What is an exit node? It is a device that you can route all your Internet traffic through. If you’re sitting in a coffee shop on some sketchy WiFi, you can click a button and have Tailscale force all of your traffic through one of your exit nodes. The coffee shop will only be able to see encrypted packets flowing past while your real unencrypted traffic exits via a computer at your house.

I have an exit node in my house, at Brian Moses’s house, and on a Digital Ocean droplet in New York.
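Setting one up only takes a command on each end. A minimal sketch, assuming Linux machines and using droplet as a stand-in for the node’s Tailscale name or IP; the exit node also has to be approved in the admin console:

# on the machine that will become the exit node (a Linux exit node also needs IP forwarding enabled)
sudo tailscale up --advertise-exit-node

# on the client that should route its Internet traffic through it
sudo tailscale up --exit-node=droplet --exit-node-allow-lan-access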

Services like NordVPN claim to make you completely anonymous. I can’t speak to how truthful they are about this, but I can definitely tell you that you will not be completely anonymous when using Tailscale exit nodes. Maybe you would be somewhat anonymous if you set up a Tailscale exit node on an old Android phone, connected it to the Starbucks WiFi, and then hid the phone somewhere near Starbucks. There is a flaw in this plan, but I am not sure it is a bigger flaw than trusting NordVPN.

You will definitely be hiding the content of your network traffic from the coffee shop, but whatever servers you are connecting to will see the IP address of your exit node. If you’re doing something nefarious, someone can link you to your exit node’s Comcast account.

I can use an exit node while I am on sketchy WiFi, but how often does that happen these days when our phones are fast WiFi hotspots? I am way more likely to use exit nodes to test website functionality from a remote location.

Tailscale on a travel router is neat, but isn’t quite ready

I have a really inexpensive OpenWRT travel router in my laptop bag. It is a GL.iNet Mango that I bought for about $20. They go on sale quite often. I had to shoehorn Tailscale on there because the Mango’s flash storage is ridiculously tiny.

All the recent models from GL.iNet have plenty of storage for Tailscale, and the latest beta release of their custom OpenWRT interface has Tailscale and Zerotier right in the GUI. You can get either up and running with just a few clicks!

The cheapest GL.iNet router that can run this beta firmware seems to be the GL.iNet GL-SFT1200 Opal travel router.

Why would I want the cheapest travel router? I enjoy the idea of having a tiny, USB-powered network device that I can leave behind. The cheaper it is, the less concerned I will be about not being able to recover the device!

I can leave it behind at my parents’ house so I can connect to their network to troubleshoot a problem. I can get permission to leave it at a customer’s site to remotely connect to some of their gear. I am sure you can come up with some use cases, both legitimate and nefarious!

Several of us on our Discord server would really like to be able to connect a Fire TV or Android TV to a travel router, then have the travel router pass all the packets through a Tailscale exit node. We haven’t had a ton of luck. I’ve managed to mostly make it work on the ancient version of OpenWRT that ships on the Mango, but the Mango can only pass VPN traffic at about four megabits per second. I haven’t had any success with newer firmware releases on nicer routers.

Conclusion

I knew that I was using Tailscale for a lot of small yet important things, and when I sat down to write this blog, I knew I would have to write a few paragraphs about each of those things. What I didn’t know was that I would be going way past 3,000 words! Does that mean I should keep the closing paragraphs brief?

Are you curious about Tailscale? Are you having trouble understanding what exactly Tailscale is or what it can do for you? Don’t worry about that. It should only take you a few minutes to get Tailscale up and running, and once you do, I expect you’ll have a much better idea of what is going on and how Tailscale can solve problems for you. I am solving problems with Tailscale that I didn’t even know I had!

Eliminating My NAS and RAID For 2023!

| Comments

I am excited! I have been working toward this goal for quite some time. As I am writing this, I am migrating data off my server’s old RAID 10 onto a single 14 TB USB hard drive. I should be finished and ready to physically remove all but one of the old 4 TB hard drives from my homelab server. Isn’t that awesome?!

I don’t know who this blog post is for. Sometimes I write things with an audience in mind, but this isn’t one of those times. I had some plans, I executed those plans, and I thought I should document that. I hope something in here is of use to you!

I can’t decide if I am talking about my NAS or just the large RAID storage

I am quite certain that when I chat about my NAS, I am also referring to the big, honkin’ stack of disks in a RAID, and not just the fact that the server shares files over my network. I am also aware that enough time has gone by that my quartet of 4 TB drives in a RAID 10 is no longer big nor honkin’.


I have had a RAID of one sort or another in my home since the very end of the twentieth century. In those days, individual hard disks just weren’t big enough to store all my data on a single file system, and we had nearly an entire decade in the middle of that time span where hard drives were poorly manufactured.

Hard drives have gotten reliable again, and disk sizes have outpaced the rate at which my data is growing. I don’t actually need a RAID to store the bulk of my data anymore.

I need a redundant array of inexpensive computers instead of just disks

I think it is safe to say that Tailscale was the tiny domino in front of the bigger choices that led me to the point I am at right now. Tailscale and fast Internet connections mean I can drop a tiny server anywhere in the world and treat it like it is on my local network.

Today I have just over six terabytes of data, and that is growing at a rate of around one terabyte each year. I figure I should have at least three copies of that data, and at least one of those copies should be in a different physical location. At least one of those copies should have some amount of history available just in case I accidentally delete an important file.

One of those copies of my data has been living on the RAID 10 array on the NAS virtual machine running on my homelab server, but those drives are getting full, and they are very old now.

tl;dr Let’s just list all the places where my data lives!

Hello. This is Pat from about six paragraphs in the future. I realized that I am going to use a lot of words explaining all the pieces of my backup and redundancy plan, and it might be prudent to just make a list of all the places where my data lives before I do that.

  • Seafile on a Raspberry Pi
    • 14 TB of storage
    • off-site at Brian’s house
    • 90 days of history
  • Homelab server
    • 14 TB of storage
      • I can bug out with the USB hard drive
    • opposite side of the house! (might survive a fire?!)
    • 90+ days of daily btrfs snapshots
  • My workstation
    • 12 TB of storage
    • No snapshots
  • My laptop
    • 1 TB of storage (not enough!)
    • Bulky data like video files aren’t synced here

Three full copies of 100% of my data. One copy is in a different location. Two copies have history. Backups are out of band.

A quick note about the Western Digital Easystore!

I am probably more excited about this than I should be, but the Western Digital Easystore USB hard drive that I bought has S.M.A.R.T. support!

root@kvm:~# smartctl -a /dev/sdd | head -12
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.12.12-xanmod1] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD140EDGZ-11B1PA0
Serial Number:    POTATOES
LU WWN Device Id: 5 000cca 28fec75bb
Firmware Version: 85.00A85
User Capacity:    14,000,519,643,136 bytes [14.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches

This has been hit or miss for me over the years with USB enclosures and hard drives. Sometimes cheap USB enclosures work while expensive ones don’t. In my experience, though, the majority of USB drives don’t support S.M.A.R.T. at all. The 14 TB Seagate USB hard drive that I bought for my Seafile Raspberry Pi has no S.M.A.R.T. support.

I have no idea if the extremely similar Western Digital Elements drives work with S.M.A.R.T. I don’t even know that all or even most Western Digital Easystore USB drives support S.M.A.R.T. I only know that I have had luck with the one I just bought.
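If a USB drive looks like it has no S.M.A.R.T. support, it is worth explicitly asking smartctl to use the SAT passthrough before giving up. It doesn’t help with every USB-to-SATA bridge, but it is a ten-second test:

smartctl -d sat -a /dev/sdd | head -12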

What’s involved in my storage syncing and backups?

The heart of my storage and backup plan is my Seafile server. That server is a Raspberry Pi 4 with a 14 TB USB hard drive, and it lives at Brian Moses’s house. The server is only accessible via my encrypted Tailscale network.

You can think of Seafile as self-hosted Dropbox. As I am writing these words, this Markdown file is synced up to my Seafile server at Brian’s house every time I hit save. Not long after that, the Seafile clients on my laptop and homelab server will download a copy of those changes. This takes about 30 seconds.

I have Seafile set to keep file-change history for 90 days. I could probably already pull six different versions of this blog post out of that history, and I have only written five paragraphs so far!

Any files that Seafile drops on the USB hard drive on the server will be snapshotted once each day.
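The snapshots don’t need to be anything fancy. Here is a bare-bones sketch of what a daily read-only btrfs snapshot from cron could look like, with the paths standing in for wherever the Seafile data and snapshot directory actually live:

#! /bin/bash
# /etc/cron.daily/seafile-snapshot -- take one read-only snapshot per day
btrfs subvolume snapshot -r /mnt/usb/seafile-data "/mnt/usb/snapshots/seafile-$(date +%F)"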

Why did I plug a USB hard drive into my homelab server?

I haven’t decided how useful this is, but I think the concept is pretty neat!

I have a hole in my backup plan. I would really like to have an up-to-date copy of all my data on my laptop, but the 1 TB NVMe in my laptop is just too small. I keep saying that if there is a fire, I can grab my laptop on the way out of the house. It would be nice if I didn’t have to worry about pulling down 6 TB of video from Brian’s house right after a fire, wouldn’t it?

Pat's server closet

NOTE: I still have to drill a hole to run the wires to the network cupboard correctly. Only one of those computers is doing anything. I thought my homelab server looked lonely, so I brought in some old computers from the garage to sit on the table with him.

When I talk about a fire, I don’t literally mean a fire. Maybe the weather forecast is predicting massive flooding, and we should drive to Oklahoma. Maybe there is an earthquake and we notice terrifying cracks in the walls. Maybe there is a gas leak. Maybe a CIA agent shows up, and we have to take a glass of water to a UFO.

There are a lot of bad things that can happen where I wouldn’t have to get out of the house in seconds. Emergencies where I would have time to pack my laptop bag.

In those cases, I can just grab the USB hard drive and take it out the door with me!

I am keeping a RAID for the virtual machines on my homelab server

I set aside a 1 TB partition on my new 14 TB USB hard drive for storage of virtual machine disk images. Why 1 TB?!

If I don’t count my existing NAS virtual machine, my VM disk images add up to something not much more than 200 GB. That is just too big to comfortably fit on the old SATA SSDs that my homelab uses for booting and lvmcache. One full terabyte is plenty of room for these virtual machines to grow, and it will be easy to replace this volume with a $50 SSD if I have to.

I built a RAID 1 out of the 1 TB partition on the USB hard disk and one of the old 4 TB hard disks, then I moved all my KVM qcow2 images to that new 1 TB RAID 1.
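The assembly itself is only a couple of commands. A rough sketch with made-up device names, where sdb1 is the 1 TB partition on the old 4 TB disk and sdc1 is the 1 TB partition on the USB drive:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # so the array assembles itself at boot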

I think this is super cool! I can walk away with that USB hard disk and my virtual machines will just keep running. Home Assistant will continue to run my home automation, and Octoprint will continue to run my Prusa MK3S.

The opposite will work. If the aging hardware in my homelab machine fails, I can install Debian on any other computer. Then all I have to do is plug the USB hard drive in, point the QEMU configuration to the right place, and I can boot Home Assistant and Octoprint on the new machine.

NOTE: I need to remember to set up a job to regularly back up my QEMU config files to the USB hard drive, or else it will be a pain to replicate all the machines on a new server!
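Something along these lines would cover that, assuming the virtual machines are managed through libvirt and the RAID 1 is mounted at /mnt/vmstore; both of those details are stand-ins here:

# dump every defined VM's configuration next to the disk images (run by hand or from cron)
for vm in $(virsh list --all --name); do
    virsh dumpxml "$vm" > "/mnt/vmstore/configs/$vm.xml"
done

# on a replacement machine, the reverse direction brings a VM back
virsh define /mnt/vmstore/configs/homeassistant.xml
virsh start homeassistant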

I think it is really cool that I will be able to easily carry all my virtual machines away with me if I ever have to run out the door.

I thought of a new goal that I should keep in mind!

This is something I have been doing ever since I hauled the Seafile Pi to Brian’s house. I just never put the idea into words.

Faster is always nice, but I am going to do my best to make sure my storage, synchronization, and backups work well even if my network is limited to 100-megabit Ethernet.

Last year, my workstation just didn’t have enough storage to hold much video, so I would edit files stored on NFS over my Infiniband link. That was great, but Infiniband only works over extremely short distances or with fiber optics.

Installing a big disk in my workstation and putting it behind an lvmcache fixed that problem. I can accumulate terabytes of video, but the files I am currently working on will always be accessible at NVMe speeds.
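The lvmcache part is mostly just two commands once the big disk and a chunk of the NVMe are physical volumes in the same volume group. A sketch with hypothetical names, where vg0 is the volume group, video is the big slow logical volume, and the NVMe partition supplies the cache:

lvcreate -L 200G -n videocache vg0 /dev/nvme0n1p3        # carve a cache volume out of the fast disk
lvconvert --type cache --cachevol videocache vg0/video   # attach it in front of the slow volume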

My Raspberry Pi is stuck at 100 megabit for some reason. I tried quite a few cables, switch ports, and switches. Those are cables and ports that negotiate gigabit just fine with my other Raspberry Pi. My Internet connection at home is only 150 megabit, anyway, so this hasn’t been a big deal.

Just about the only time this causes any sort of issue is when we record podcasts. We generate a few dozen gigabytes of video files in two or more locations, and it takes an hour or three to get all those files uploaded or downloaded.

This only happens about twice a month, but it is rare that I am in a rush to open Davinci Resolve immediately after an interview. It is usually fine letting this wait a day.

How much did this cost? Is this a better value than using smaller disks in a RAID 5 or RAID 6?

Ugh! This is one of those situations where it is tough to make direct comparisons. It would have surely cost more money if I put two hard disks on the Pi, the homelab server, or both, but maybe three or four smaller drives in a RAID 5 could provide some redundancy without bringing up the cost by much.

More disks would require more SATA ports or more USB ports, and I am not terribly confident that sticking three or four USB disks in a RAID 5 would be stable. It would probably work, but mdadm might kick good drives out if they happen to respond too slowly.

You can get 14 TB USB hard drives for about $200, assuming you wait for a good deal. I think it is safe to say that even if we include tax, I paid less than $700 for my three hard drives.

I bought the Pi long enough ago that I got a good deal on it, so that part of the math would feel like cheating. I am going to ignore the compute side of things and just assume you already have some sort of server setup at home like I do.

I have the Seafile Pi hosted for free at Brian Moses’s house, and it is currently storing just under six terabytes of data. That would cost me $300 annually if I were using Google Drive or $360 with Dropbox, and I think I am about to be at the point where I would be charged for my third year with either of those services. Thank goodness I hosted my own file-sync service!

RAID is not a backup!

I always feel like I need to say this. RAID is there to reduce downtime or maybe increase performance. If one or maybe two drives fail, you can just replace them, and everything will be fine. That can save you hours of work. You won’t have to reinstall an operating system. You won’t have to restore from backup. You won’t have to reconfigure anything.

If your disk controller or its driver goes wonky, you might ruin the data on every disk in your RAID. That could take your data and every single one of your ZFS or btrfs snapshots with it. Snapshots are nice to have, and can be a vital part of a backup plan, but snapshots aren’t much of a backup on their own!

Earlier, I mentioned that my backups are out of band. That means my backups are done outside of normal channels. In my case, Seafile is copying data to and from the server via its own protocol.

If your backup destination shows up as a normal disk to your operating system, then it is potentially open to most of the same problems, accidents, and attacks as the data you are trying to back up. This is even worse if you leave that backup storage connected all the time. If some ransomware can encrypt and hijack your files, then it can do the same to the backups on your USB drive or mapped share.

You should have another layer in there to make sure you can’t lose your backup.

Did I get to the end of this weird sideways-upgrade project?

I am willing to answer that question in the affirmative! Three of the four disks from the old RAID 10 array have been removed from the server. All my virtual machines are booted from disk images stored on the 1 TB partition on the USB hard disk. That 1 TB partition is now in a RAID 1 array with a 1 TB partition on the youngest of the ancient 4 TB disks. That mdadm RAID 1 array is encrypted using LUKS.

The fresh NAS virtual machine is running Debian 11. There are no file shares on this NAS, so it probably isn’t really a network-attached storage, but the hostname implies that it is still a NAS! The remaining 12 TB of the USB drive is encrypted using LUKS and attached directly to this new NAS virtual machine. It now has a big btrfs file system with 99% of the contents of the old, retired NAS virtual machine.

I have a Seafile client running on the new NAS, and that client seems to be syncing every relevant Seafile library that should have a copy on the NAS.

My homelab server has always had a simple script that I run manually after boot. It unlocks the LUKS-encrypted storage, scans that storage for logical volumes, mounts everything that needs mounting, then fires up the virtual machines that live on that encrypted storage. That has all been updated to match the new disk layout.
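The script is nothing clever. Here is a trimmed-down sketch of the idea, with the device names, mount points, and virtual machine names all standing in for my real ones, and assuming libvirt-style management:

#! /bin/bash
# unlock the encrypted storage, bring up LVM, mount, then start the virtual machines

cryptsetup open /dev/md1 crypt-vms      # prompts for the LUKS passphrase
vgchange -ay                            # activate any logical volumes that just appeared
mount /dev/vg0/vmstore /mnt/vmstore     # mount everything that needs mounting

for vm in homeassistant octoprint nas; do
    virsh start "$vm"
done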

I have not set up automated daily btrfs snapshots. I will do this soon. I promise!

What is my next home network or server upgrade?!

It seems like I made a lot of changes in 2022! I upgraded my OpenWRT router to a much more capable piece of hardware, so I am now fully prepared to call the ISP and upgrade to symmetric gigabit fiber. I installed the latest OpenWRT on a couple of other routers and sprinkled them around the house and set up 802.11r WiFi roaming.

I have done some work to get my aging AMD FX-8350 homelab server down under 2 kWh of power consumption per day. I probably just shaved a bit more off that by removing some hard drives, but I wouldn’t mind taking this further!

I have been watching my friends on Discord pick up tiny boxes like the Beelink SER5 5560U for $300 or the Beelink with an N5095 for $140. The Ryzen 5560U is a huge upgrade for me, and also extremely overkill. The N5095 would sip power but is comparable in speed to my overpowered dinosaur of an FX-8350, though my FX-8350 has four times the RAM of the $140 Beelink. That’s something a cheap RAM upgrade could fix, but the more a sideways move like this costs, the longer it will take to pay for itself.

What do you think? Should I downsize into a Beelink N5095 whether it is cost effective or not? I do enjoy the idea of seeing how much homelab and NAS can be crammed into a Star Wars lunch box, but I am also not excited about turning my ancient FX-8350 into e-waste for no real reason. Let me know what I should do in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

One Weird Trick To Use When Your First Layer Won’t Stick

| Comments

I have been 3D printing for eight years. If the first layer wasn’t sticking on my first printer, I would just make sure the nozzle was at a reasonable height and then load up the glass build plate with more hairspray. There wasn’t much that hairspray wouldn’t fix.

The Prusa MK3S with its PEI sheet has been a huge upgrade. Every now and then I print something that is a little persnickety, and I have to put down a layer of glue stick to make sure some tiny perimeters don’t peel up. For the most part, though, the PEI does a good job holding my prints and releasing them afterward.

Small Prints at the Corners

NOTE: The problematic corner still needed a bit of glue stick with these tiny, angular filament clips!

My experience hasn’t been perfect. I have a lot of trouble with bed adhesion close to the edges. Even with Prusa’s 49-point mesh bed leveling, the extremes are either a bit too close or too far. I also suspect that one corner of my PEI sheet is just crummy.

I have tried adjusting the bed level correction. I think I have improved things, but I haven’t completely eliminated the problem. I assume this problem is a result of the PINDA probe being way over to the right. That means it has no way to probe really close to both edges of the bed.

I was hunting for a new solution this week, because I have a different problem. I grabbed a spool of Anycubic gray PLA for $19 to print a mess of GridFinity bins. I had some trouble with adhesion on a long bin, so I started doing some troubleshooting.

Gridfinity bins at the corners

NOTE: I didn’t get as close to the edge on the right as I thought I would!

My first layers are more than a little inconsistent with this filament. I suspect it just isn’t very good filament.

Never troubleshoot your overall first-layer issues with crummy filament. You’ll probably just screw things up for when you swap back to nice filament. Even so, I wanted to continue printing cheap GridFinity bins.

This one simple trick seems like a good idea!

PrusaSlicer and SuperSlicer default to a 0.42 mm extrusion width for the first layer. Other slicers use a wider extrusion for the first layer. Why would you want to use a wider extrusion for the first layer?

The wider extrusion width makes up for being a bit too far from the print bed or minor underextrusion.

If everything is calibrated well, everything will work out very much like the defaults. If you’re printing just a hair too far above the surface, that extra extrusion means that you are more likely to still get just enough squish to lock the filament to the PEI.

SuperSlicer settings

In my case, this means I can calibrate for a perfect first layer near the center of my PEI sheet, but I am still getting enough contact out at the problematic corners and edges for my print to stick.

You’d get to see a diagram here if I were artistic, or if I had a pen for my 2-in-1 tablet!

Won’t wider extrusions make the printer lay down too much plastic?

No. The slicer does the math correctly. If the first layer would be 30 lines wide with a 0.42 mm extrusion width, then the first layer will be about 25 lines wide with a 0.5 mm extrusion width.

The lines will be wider, so they will be spaced farther apart.

A tiny update!

While I was writing this blog I was thinking about why I stopped at 0.5 mm extrusion width for the first layer. I’ve heard that double the nozzle width should be acceptable, and I am already printing infill at 0.6 mm. Why not push things a little more?!

Printing at 0.75mm extrusion width

I am now running with 0.75 mm extrusion width on the first layer. It seems to be working great. Even my problematic corner managed to stick, even though my Z-offset is definitely a bit higher than it should be. Look at the size of those lines!

I don’t trust this filament, so I am not going to try to tune my first layer until I finish off the spool with Gridfinity bins. The Z-height doesn’t make a lot of sense to me. The top-left square is close to correct. The bottom-right is usually my difficult corner, and that one is printing a bit too close, but the top-right square is closer than the middle-right square.

Conclusion!

I am using a 0.5 mm extrusion width in SuperSlicer with the Arachne perimeter generator. I am not sure if Arachne is being used on the first layer, but I assume that it is. I might try pushing this up to 0.6 mm because that is what I’ve been using for infill. If it works well for infill, then it ought to work for the first layer. The first layer is the slowest, so printing fewer lines ought to be a decent speed increase!

What do you think? Have you been having trouble with your first layer? What have you tried doing to correct it? What 3D printer are you using? Have you tried wider extrusion for the first layer? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Finally Repaired my Baratza Preciso Coffee Grinder

| Comments

I am trying to figure out when I shelved my Baratza Preciso. I suspect that it has been more than a couple of years! It stopped grinding fine enough to pull a slow enough shot of espresso from my Rancilio Silvia, so I started using my wife’s Baratza Encore and ordered some replacement parts.

Baratza Preciso and Baratza Encore

If my memory is correct, I started working on the repairs and then the big plastic drive gear gave up, so I also gave up. I have been limping along making lattes using a grinder that isn’t up to the task ever since. At least until last week!

These words won’t help much if you are shopping for a coffee grinder!

I am quite pleased with how my Baratza Preciso has held up. I have had it for eight years, and I am still using it today. It did need some inexpensive repairs over the years, but it is chugging along.

Baratza no longer makes the Preciso. It has been replaced by one of the grinders in the Baratza Sette lineup, but I am not sure which one.

I can’t speak for the Sette, but I can say that Baratza has amazing support, and they offer so many replacement parts that you could probably assemble an entire grinder from spare parts. If you buy a Sette or an Encore today, I have confidence that you can repair it in 10 years.

tl;dr: The reason I just had to write this!

The Baratza Preciso is SO FAST! It takes the Encore well over a minute to grind 18 grams of coffee for an espresso. The Preciso seems to grind that much in less than twenty seconds. I have watched the Encore with a stopwatch, but I only counted out the Preciso to 15 hippopotamuses, and I tend to count out time a bit more slowly than it actually passes.

I forgot how slowly the Encore grinds. I wrote a blog many years ago about how it takes less than six minutes for me to make a latte. I often head to the kitchen before sitting down at my desk to record a podcast. I look at the clock, and it says I have way more than six minutes available! This should be no problem!

Coffee with 3D-printed cocoa logo

Then, by the time I am finished doing my barista duties, I am almost late! Did I time myself incorrectly in 2013? Nope. I just didn’t realize how much time the Encore was adding to my routine.

This might be worth keeping in mind when you’re shopping for a grinder. I make two lattes every day, so I am probably saving three minutes a day. That’s 21 minutes each week, 90 minutes each month, or over 1,000 minutes each year.

I tend to make a latte either right before or in the middle of a task. Today I sat down, created an empty blog from a template, and wrote the opening paragraphs that you just saw. Then I got up, wandered to the kitchen, and made a delicious latte with Yemen Mokha Matari from Sweet Maria’s.

One person might consider those few extra minutes to be a valuable break. Another might care more about saving 21 minutes each week while making coffee for their friends.

The problem with the Baratza Encore

There’s more than one problem. We are just going to ignore that the Encore probably produces an inconsistent grind for espresso. That is for sure a problem, but it is minor compared to the real problem.

What do you do when a shot pulls too slowly on setting 3 and too fast on setting 4? There’s no option in between, so all you can do is adjust the dose. If you adjust the dose, then you also need to adjust the volume of extraction. Adjusting two things at once makes tuning more difficult.

The Baratza Encore’s grind is more of a problem than I thought!

I wrote this entire blog two months ago, but forgot to put the finishing touches on it so I could publish it. Two months of being back to using the Preciso again have shown me another problem with the Encore. The cheaper grinder produces so many more fines!

I try to pop my shower screen off once a month for cleaning. This time the screen didn’t look any worse for the wear. Hardly any coffee grounds are sneaking past the tiny holes in the screen now that I am using the Preciso again. It was always so much dirtier with the Encore.

This surprises me a bit. Both grinders are obviously related, and they use the exact same upper burr. The Encore has a slower, weaker motor, and the lower burr is different.

I assumed the difference in the lower burr would just be the number of blades or how aggressive the angles on those blades might be. This is probably correct, but I didn’t realize that this could make a difference in the quality of the grind!

What happened to Pat’s Baratza Preciso?

You are going to have to forgive me. I don’t even know for sure what the order of problems and solutions has been, but I will do my best to give you a timeline.

I know for sure that I got to a point where I was grinding at the Preciso’s absolute finest setting, and shots were pulling in less than 10 seconds. It was awful!

I ordered replacement parts. I have several of the replacement plastic adjustment doodads and the plastic burr holders on hand now. There’s a tiny screw you can use to fine-tune the adjustment ring. I pushed that fine-tuning screw as far as it could go, and I was still getting 10-second shots of espresso.

At that point, I even tried to cut a shim out of a business card. I put that shim between the burr and the plastic retaining ring in an attempt to get the burrs closer together. I’m pretty sure this helped a bit, and I am almost positive that I brewed espresso for a few months like this. I bet this is what caused me to strip the drive gear and finish completely chewing up the upper burr.

The Baratza grinders have a lot of plastic parts

And I have managed to break every single one of them. This was the second time I had stripped the drive gear, and I happened to order two the first time it happened. I don’t think I remembered that I had a spare at the time. I just threw my hands up in the air, put the Preciso out in the garage, and limped along with the Encore.

Baratza Preciso Broken Gear

The plastic parts are probably a good thing. The idea is that you’re supposed to blow out a fragile $3 plastic part if there’s ever a rock in your bag of coffee. That’s better than destroying an expensive burr.

With the price of the burrs for the Preciso, it might be better if everything were overbuilt and I had to replace a $16 burr every few years. Your mileage may vary.

I wasn’t sure if I should order the replacement burr

There are two burrs in the Preciso. The upper burr is $16 and is extremely easy to replace. The lower burr is $45, and it looks like it is challenging to get it unscrewed from the drive assembly.

I could tell that the upper burr was damaged, and it looked like the lower burr was fine. I am assuming that the bigger burr on the bottom doesn’t do much cutting. It probably just pushes the beans into the upper burr as it spins.

I spent about two hours when I replaced the drive gear. Replacing the gear wasn’t too difficult, but when the machine was still grinding too coarse, I wound up taking it apart two more times to move shims around and try adjusting things.

Baratza Preciso Upper Burr

When that didn’t work out, I wished I had spent those two hours working toward getting our LumenPNP pick-and-place machine up and running. When the pick and place works, we can start selling OoberLights boards. When we sell OoberLights boards, there will be cash flow. I could use that cash flow to buy a grinder upgrade.

I do want a grinder upgrade. I’ve had my eye on the flat-burred Turin DF64 grinder for a while. It would be a really nice upgrade!

I didn’t know if I should repair the Preciso. The burr was about $25 after shipping and tax. That’s 5% of the cost of my next grinder. Not only that, but I suspected I would have to spend an hour taking the Preciso apart again. Maybe it would be better to put that $25 toward the price of a grinder upgrade and not waste an hour of my time getting angry at the old grinder.

I spent the $25. I did take the Preciso apart again to undo my adjustments to make sure the new burrs would never touch each other. It wound up being a good choice.

I am drinking a latte right now. I did not quite hit the right grind today. With 18 grams in with the Preciso set to 4F, my light-roast Yemen gave me 31 grams out in 45 seconds on my Rancilio Silvia.

This isn’t far off from my ideal shot for a latte. I could easily write 2,000 words about why I aim for a longer pull with a ratio of a little more than 1.5, but that would be drifting way off topic.

The important thing to note is that I can still go three entire clicks finer on an extremely light bean. I am calling this a successful repair.

UPDATE: A slightly darker roast Ethiopian coffee was able to completely choke the machine with the grind set to 4F! I had to bump it up to about half-way past the 6 setting to get a good pull. I think this means I have done a good job!

The thrilling conclusion?!

I don’t know if the conclusion is thrilling, but I am excited to have my Baratza Preciso working again, and I am quite happy that I didn’t have to spend $400 or more on a grinder upgrade this year. I expect that I will get at least a few more years of service out of the Preciso before I need to upgrade.

What do you think? Should I have junked the Preciso and splurged on something like a Turin DF64? There’s a good chance that next grinder will outlast me. When I eventually upgrade, will I be kicking myself for not doing it a few years sooner? Will my coffee taste that much better?!

Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Can You Save Money By Changing the CPU Frequency Governor on Your Servers?

| Comments

I am sure there will be somewhere around 2,000 words by the time I get done saying everything I want to say, but I most definitely will not make you read them all to learn the answer to this question.

The answer is yes. At least on my own aging homelab server, in my own home, with my particular workload. I am probably going to be saving just a hair under $10 per year by switching to the conservative CPU governor with some minor tweaks to the governor settings.

You don’t even have to wait to see my extra tweaks. Here’s the script I run at boot to switch to the conservative governor and tweak its settings:

#! /bin/bash

# switch to the conservative CPU frequency governor
/usr/bin/cpufreq-set -g conservative

# step the clock back down once load drops below 40%
echo 40 > /sys/devices/system/cpu/cpufreq/conservative/down_threshold
# consider stepping down on every sampling interval
echo 1 > /sys/devices/system/cpu/cpufreq/conservative/sampling_down_factor
# re-evaluate the clock speed every 150 ms instead of the default 4 ms
echo 150000 > /sys/devices/system/cpu/cpufreq/conservative/sampling_rate
# don't step the clock up until load passes 85%
echo 85 > /sys/devices/system/cpu/cpufreq/conservative/up_threshold
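The cpufreq-set tool comes from the cpufrequtils package on Debian and Ubuntu. After the script runs, it only takes a second to confirm that the governor and the tweaked knobs actually stuck:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor    # should say "conservative"
grep . /sys/devices/system/cpu/cpufreq/conservative/*        # prints every knob with its current value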

My motivation

I moved my homelab server out of my office to the opposite side of the house. It now lives on a temporary table underneath my network cupboard. This network cupboard used to belong to my friend Brian Moses, but it is mine now. Should I write up what I’ve done with it since acquiring the house?!

My Homelab and Network Cupboard

I had to unplug the server before moving it and its UPS across the house, so I figured I could plug it into a Cloudfree outlet and monitor its power usage in Home Assistant. Once that happened, I couldn’t help but monitor power usage with various max clock frequencies, and during benchmarks, and testing all sorts of other things.

Power is heat, and heat is the enemy here in Texas

I am quite a few years late for this change to have a significant impact on my comfort. In our old apartment, my home office was on the second floor on the south side of the building, and that room had particularly poor airflow from the HVAC system. Heat rises, the sun shines from the south, and you need airflow to stay cool.

I know we didn’t get to the numbers yet, but my changes may have dropped my heat generation by nearly 60 BTU per hour. That would have made a noticeable impact on the temperature of my old office in July and August.

My new office at our house has fantastic airflow. The only time I get warm is when I close the door and turn off the air conditioning to keep the noise down while recording a podcast.

That was the real motivation for moving the homelab server out of the room. Sure, that got 300 unnecessary BTU per hour out of here, but the important thing is that there are now four fewer hard drives and nearly as many fans spinning away near my microphone.

Here in Plano, TX, we wind up running our air conditioning eight or nine months of the year. I wouldn’t be surprised if we would spend $5 per year to cool the heat that would have been generated by that extra $10 of electricity.

The specs of my homelab server

I wrote a blog post in 2017 about upgrading my homelab server to a Ryzen 1600. I almost made a similar upgrade to my desktop the next year, but instead I decided to save some cash and I just swapped motherboards. My desktop machine is a Ryzen 1600 now, and my homelab is an old AMD FX-8350. Here are the specs:

  • AMD FX-8350 at 4.0 GHz
  • 32 GB DDR3 RAM
  • Nvidia GT710 GPU
  • 2x 240 GB Samsung EVO 850 drives
  • 4x 4 TB 7200 RPM drives in RAID 10

When the FX-8350 was in my desktop machine, I had it overclocked to 4.8 GHz. I don’t think I have the exact numbers written down anywhere, but I recall that squeezing the last 300 MHz out of the chip would use an extra 90 or 100 watts on the Kill-A-Watt meter. The first thing I did on the homelab was turn the clock down to 4 GHz in the BIOS. I think it is supposed to be able to boost to 4.2 GHz when only two cores are active, but I had boost disabled when I was overclocking, and it is still disabled today.

This is what I know from measuring power over the last few weeks and scouring old blog posts for power data. My power-hungry FX-8350 machine never goes below 76 watts at the smart outlet. Old blog posts suggest that about 20 watts of that is going to the four hard disks, and up to another 19 watts could be consumed by the overly complicated GPU.

I am not done collecting data. I will clean up this table when I am finished. In the meantime, though, here are all the things I know so far:

           geekbench watts      kWh per day              tailscale
            X8      X1      ondemand  conservative          mbps
                                      stock   custom
4.0 GHz    222     117        2.2     2.03*    1.97          608
3.6 GHz    184     106        2.1     2.0*                   533
2.8 GHz    145      93        2.04                           472
2.0 GHz    118      87        1.99                           377
1.4 GHz     97      84        1.96                           260

All the power numbers are measured at the Cloudfree smart plug.

I am not really saving $10 per year

I have had the clock speed of my FX-8350 capped at 2.0 GHz ever since I removed the Infiniband cards from my network. You can probably see from the chart that the difference between that cap and my tweaked conservative governor is only 0.02 kWh per day, and I have learned that 0.02 kWh per day for a year only works out to about $0.50.
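For reference, capping the clock like that doesn’t require a trip into the BIOS. Something like this, looping over all eight cores, is one way to pin the maximum frequency from userspace, though it isn’t necessarily how I did it at the time:

for c in $(seq 0 7); do
    cpufreq-set -c "$c" --max 2.0GHz
done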

I was already saving $9.50 per year by capping the CPU speed at the absolute minimum, but I was also slowing everything down. Switching to the conservative governor and making a few tweaks both made my homelab server faster and saved me the next $0.50. I think that is a nice win!

The motivation for wanting a faster homelab server

My personal network and computing environment is heavily reliant on Tailscale. Tailscale is a mesh VPN that effectively makes my computers seem like they’re all on the same local network no matter where each machine is located in the world. I have been trying to leverage the security aspect of this more and more as time goes on, and one of the things I have been doing is locking down my network services so they are only available on my Tailnet.

Homelab Power Utilization

NOTE: The first graph is misleading, because pixels are so wide! The graph always shows both the highest and lowest reading during that time period, but it can’t show you just how little time the server spent at the peak.

I have almost entirely eliminated my reliance on my NAS, but every once in a while I need to move some data around the network. As you can probably see in my charts, Tailscale tops out at around 350 megabits per second when I limit the server to 2.0 GHz. It is capable of going twice as fast as this, and even though that isn’t saturating my gigabit Ethernet port, it is still faster!

My testing methodology

My Cloudfree smart outlet runs the open-source Tasmota firmware. Tasmota keeps track of the previous day’s total power usage. I don’t know if you can set the time of day when this resets, but my outlets all cross over to the next day at 5:00 p.m. This is a handy time of day for checking results and setting new values for the next run.

All that data is stored in Home Assistant, so I can always go back and verify my numbers.

All of the most important tests were run for a full 24 hours. Some of the numbers in the middle are probably lazy. If I didn’t get a chance to adjust the governor until 6:00 p.m., I figured that extra hour at the previous setting wouldn’t skew the data significantly.

I would always wait for a full day when switching between the extreme ends of the scale.

NOTE: You probably shouldn’t buy an old-school Kill-A-Watt meter today. Lots of smart outlets have power meters, and you can set those up to log your data for you, and you can even check on them remotely. They also cost less than a Kill-A-Watt. The Cloudfree plugs that I use are only $12 and ship with open-source firmware.

My goals when tweaking the conservative CPU governor

I wanted to make it difficult for the CPU to sneak past the minimum clock speed. If something was really going to need CPU for a long time, I most definitely wanted the CPU to push its clock speed up.

I don’t have a good definition for what constitutes a long time. I figured that if I am going to scp a smaller volume of data around, I don’t really care whether the task runs for 10 seconds at 1.4 GHz instead of 5 seconds at 4.0 GHz.

pat@zaphod:~$ iperf -c nas;iperf -c nas;iperf -c nas;iperf -c nas;iperf -c nas
------------------------------------------------------------
Client connecting to nas, TCP port 5001
TCP window size: 67.5 KByte (default)
------------------------------------------------------------
[  1] local 100.88.23.40 port 39508 connected with 100.75.238.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0719 sec   452 MBytes   377 Mbits/sec
------------------------------------------------------------
Client connecting to nas, TCP port 5001
TCP window size: 67.5 KByte (default)
------------------------------------------------------------
[  1] local 100.88.23.40 port 59728 connected with 100.75.238.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0579 sec   657 MBytes   548 Mbits/sec
------------------------------------------------------------
Client connecting to nas, TCP port 5001
TCP window size: 67.5 KByte (default)
------------------------------------------------------------
[  1] local 100.88.23.40 port 56344 connected with 100.75.238.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0374 sec   719 MBytes   600 Mbits/sec
------------------------------------------------------------
Client connecting to nas, TCP port 5001
TCP window size: 67.5 KByte (default)
------------------------------------------------------------
[  1] local 100.88.23.40 port 35472 connected with 100.75.238.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0465 sec   734 MBytes   613 Mbits/sec
------------------------------------------------------------
Client connecting to nas, TCP port 5001
TCP window size: 67.5 KByte (default)
------------------------------------------------------------
[  1] local 100.88.23.40 port 58154 connected with 100.75.238.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0403 sec   716 MBytes   598 Mbits/sec
pat@zaphod:~$ 

I will probably start to care if a 20-minute file copy takes 40 minutes.

I wasn’t able to plot a precise point on that continuum. I can slow down the clock speed ramp by raising the sampling_rate, but if I pushed it too far, my iperf tests were never able to get past 2.8 GHz.

My tweaks to the conservative governor settings

I raised both the down_threshold and up_threshold above the defaults. I figured this would add some friction on the way up while making the trip back down to 1.4 GHz a little faster.

I bumped the sampling_rate from the default of 4,000 microseconds to 140,000 microseconds. In my tests, anything at 180,000 microseconds or higher wouldn’t let the CPU reach full speed. I may try lowering this value, but it takes 24 hours to verify the results.

Every time I made a change to the conservative governor, I would run three consecutive iperf tests. Why did I run three 10-second tests instead of a single 30-second test?

It seemed like the small delay when reconnecting between the tests would allow the CPU to clock down a notch or two. That seemed like a helpful simulation of what the real world might be like.

I didn’t use a stopwatch. I didn’t set up a script to watch the clock speed to let me know when we were reaching the maximum. I just ran cpufreq-aperf and counted hippopotamuses before seeing 4,000,000 in the output. I guess it helps that cpufreq-aperf updates once every second!
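If you would rather not count hippopotamuses, a one-liner like this prints every core’s clock speed once per second so you can watch the governor ramp up and down:

watch -n 1 "grep 'cpu MHz' /proc/cpuinfo"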

I didn’t really need an exact number, but it was easy to see how quickly the CPU was ramping up. I think I wound up at a point where the CPU bumps up one frequency notch faster than once every two seconds, but slower than once per second.

That means my iperf test reaches full speed in about five seconds. It also doesn’t dip more than one notch between runs.

I think that is pretty reasonable. I can reach or at least approach 600 megabits per second via Tailscale in less than 10 seconds while only using 0.01 kWh more throughout the day than if I locked the machine at 1.4 GHz.

Is all this work worth $10 per year?!

The important thing is that you don’t have to do all this work. I spent a few hours making tweaks and recording data to learn that just switching to the conservative governor would save me 75% as much as forcing my lowest clock speed, and I spent even more hours tweaking the governor to claw back the next 24%.

All you have to do is spend two or three minutes switching governors or applying my changes. You don’t have to hook your server up to a power meter to see if it is actually working, but you aren’t running my ancient and power-hungry FX-8350, so your mileage will almost certainly vary.

I imagine this helps less if you already have an extremely efficient processor.

Why wouldn’t you want to make this change?

My ancient server is overkill for my needs. I run a handful of virtual machines that are sitting idle most of the day, and they could all manage to do their job just fine with the smallest mini PC.

Maybe you run servers that need to be extremely responsive. Google measures your web server response time as part of their search rankings. If your web server is locked at full speed, it might be able to push your blog up a notch in the search rankings, and that would be worth so much more than $10 per year!

Most of us aren’t going to notice things slowing down. Especially if you just switch governors instead of applying my extreme tweaks. It isn’t like your CPU is going to be in sleep states more often with the conservative governor. It can still do work while cruising at a low speed.

The best server hardware is almost always the hardware you already have

Every time my home server gear starts getting old, I start thinking about buying upgrades. This old FX-8350 box is eating up $93 in electricity every year. $73 of that is the compute side, and about $20 is the storage.

If I wait for a deal, I could swap out the old 4 TB hard drives for a 14 TB hard drive for around $200. If we ignore the fact that I need more storage and these drives are getting scary old, I can save $14 per year here. That’d pay for the new hard drive sometime in the next decade.

NOTE: We post hard drive, SSD, and NVMe deals in the #deals channel on the Butter, What?! Discord server almost every day!

When I do this math, I always assume I am going to be buying something bigger, faster, and better. A new motherboard will be $100. A new CPU will be at least $200. New RAM will be at least $150. I am assuming I can reuse my case and power supply.

Even if this magical new machine uses zero electricity, it would take six years to pay for itself in power savings. If it only uses half as much power, it will take 12 years.
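That back-of-the-envelope math is easy to check with bc, using the parts prices above and the $73 per year that the compute side burns today:

```bash
# $450 in new parts versus the $73 per year the compute side costs me now.
echo 'scale=1; (100 + 200 + 150) / 73' | bc        # about 6 years at zero watts
echo 'scale=1; (100 + 200 + 150) / (73 / 2)' | bc  # about 12 years at half the power
```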

I think this is the first year where I can pay for a server in energy savings!

My next upgrade might be very different! I am very seriously considering replacing my homelab server with the slowest Beelink mini PC you can buy. The Celeron N5095 model sometimes goes on sale for $140. I have some spare RAM here to upgrade it, and the little Beelink would probably use around $10 in electricity every year.

It would cost me $340 for a Beelink and a 14 TB USB hard drive to hang off the back. The two pieces of hardware combined might only cost me about $16 per year in electricity. That would completely pay for itself in power savings in about 4.5 years. Maybe less than three years if we include the costs of cooling.

I don’t like lumping in the 14 TB hard drive with the Beelink. I am quickly running out of storage on my server, and I am planning on replacing those four drives with a single 14 TB drive after my next inevitable disk failure. I will be retiring those hard drives soon whether I retire the FX-8350 or not!

The Beelink would pay for itself in electricity savings in two years. No problem.
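Here is the same sort of math for the Beelink, both with and without the 14 TB drive lumped in. The $10 figure for the Beelink’s electricity is my rough guess from above.

```bash
# Beelink plus 14 TB drive: $340 of hardware versus saving $93 - $16 per year.
echo 'scale=1; (140 + 200) / (93 - 16)' | bc   # about 4.4 years
# Beelink alone against the compute side's $73 per year:
echo 'scale=1; 140 / (73 - 10)' | bc           # about 2.2 years
```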

It is exciting that this is even a possibility, but it is a bummer because this is a downgrade in many ways. The Beelink doesn’t have six SATA ports and the bays to hold those drives. The Beelink doesn’t have PCIe slots for upgrades. My FX-8350 is 50% faster than the N5095, but it is possible that the N5095’s AES instructions would give it a significant Tailscale boost!

But the Beelink is tiny, quiet, and capable of doing the work I need. I am excited that it is literally small enough to fit in the network cupboard!

There are more capable Beelink boxes. The Beelink model with a Ryzen 5 5560U would be a pretty good CPU and GPU upgrade, but I don’t need more horsepower, and that $400 Beelink wouldn’t save me enough power to pay for itself before I’d likely retire it.

Of course this gets complicated because I have no idea how to account for prematurely turning my FX-8350 server into e-waste.

UPDATE: Power use is up a bit, and I am not sure why!

You can see two spikes there on December 13. The first one was when I upgraded to the noticeably faster Tailscale release. I was excited to see my Tailscale speeds jump from 612 megabits to 785 megabits per second. An hour later I realized that the numbers might go up a bit if I switched back to the ondemand CPU governor. That got me up to 810 megabits per second.

More CPU Governor Graphs

I did remember to switch back to the conservative governor, but I noticed that my power graphs didn’t drop back down. That’s when I learned that switching governors also resets the tweaks I made to some of the knobs on the conservative governor.
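If you flip back and forth between governors like I did, it is worth dumping the tunables afterward to make sure they are still set the way you think they are. Mine had quietly gone back to the defaults.

```bash
# Print every conservative-governor knob alongside its current value.
grep . /sys/devices/system/cpu/cpufreq/conservative/*
```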

It wasn’t until I wrote the last couple of paragraphs that I noticed that my monkeying around with Tailscale wasn’t the problem. The graph got all sorts of chonky about 24 hours before that!

If you’ve ever carried virtual machines around on your laptop, then you probably already know that VMs and power savings don’t usually fit together well. Sometimes things work out alright, but other times it is easy for one of those machines to keep the CPU awake, and your laptop’s 8-hour battery winds up lasting only 3 hours.

I will report back when I know more, but I figured it was worth noting that we have to keep an eye on our CPU governor tweaks! Things aren’t really going all that badly. The ondemand governor used 2.2 kWh each day. The good days on the conservative governor used 1.97 kWh. My worst day so far since December 13 is at 2.05 kWh.

I am still closer to the low end than the high end!

Conclusion

I am just one guy testing his one ancient homelab server. I’ll probably find a way to do a comparable test on at least one more piece of hardware, but this is still just me. I want to hear from you!

Are you going to try this out? Do you have an old Kill-A-Watt meter or a smart outlet capable of measuring power usage? If you happen to do a before-and-after test with and without the conservative Linux CPU governor, I would absolutely love to hear about it! You can leave a comment, or you can stop by the Butter, What?! Discord server to chat with me about it!