Keychron K2 HE Hall Effect Gaming Keyboard For Writing and Coding


This is not a long-term review. I decided I should shop for a new keyboard last night, I ordered the Keychron K2 HE this morning, and it will arrive tomorrow morning. I will have had the keyboard on my desk for a week by the time I finish writing this blog, so I will definitely be able to tell you if I am happy with my purchase, but this is mostly going to be about WHY I chose the Keychron K2 HE.

I wound up paying $10 extra for the special edition with the wood accents. I’m not sure how necessary that was.

Keychron K2 HE at my desk

I have been using a $30 75% keyboard with knockoff Cherry MX Blue switches since 2019. I bought it on a whim. I saw a deal, I posted it to Butter, What?!, and I thought it might be fun to free up the real estate on my desk between the enter key and my mouse. That was a fantastic decision, and it has been a surprisingly delightful keyboard.

I am a huge fan of the IBM Model M keyboard, so the blue switches feel light and uncomfortably crunchy to my fingers. I have been thinking for a long time that a switch upgrade would be fun. I am sure there is a smoother yet tactile switch with a much heavier actuation force available now, but there are something like 100 different Cherry-compatible switches to choose from. Making that choice seemed like a lot of work, so I kept putting it off.

The 16-gram gaming mouse that I’ve been using for the last six months is one of the lightest and lowest latency gaming mice anywhere in the world. Doesn’t it seem like a bummer to pair such an impressive mouse with a cheap old keyboard?

Why the Keychron K2 HE?!

I have been fascinated by the idea of these new hall-effect switches for quite a while now. You can set the actuation height in software, so you can have ridiculously responsive key presses while gaming.

They also allow for some interesting magic when pressing and releasing buttons: a key can be counted as released when you begin lifting your finger, while immediately reactivating another key that you were already holding down. You don’t have to lift above the actuation point. This is very similar to how movement binds in Team Fortress 2 are already done, but apparently this may trigger anticheat mechanisms in some multiplayer games!
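The rapid-trigger idea is easier to see in code than in prose. Here is a toy model of my own, in Python, of how a hall-effect key might decide when to press and release; this is my guess at the general technique, not Keychron's actual firmware logic:

```python
# A toy model of "rapid trigger" on a hall-effect switch (my own sketch,
# not Keychron's actual firmware). A traditional switch toggles at one
# fixed depth; rapid trigger releases as soon as the key reverses
# direction, even well below the actuation point, and re-presses as soon
# as it reverses again.
def rapid_trigger(depths, actuation=0.5, sensitivity=0.2):
    """depths: key travel samples in mm (0.0 = fully up).
    Returns the press/release events the keyboard would emit."""
    events = []
    pressed = False
    turn = 0.0  # deepest point while pressed, shallowest while released
    for d in depths:
        if not pressed:
            turn = min(turn, d)
            if d >= actuation and d >= turn + sensitivity:
                pressed, turn = True, d
                events.append("press")
        else:
            turn = max(turn, d)
            if d <= turn - sensitivity:
                pressed, turn = False, d
                events.append("release")
    return events

# Bottom out at 2.0 mm, lift only to 1.5 mm, press again: the key
# releases and re-fires without ever rising above the actuation point.
print(rapid_trigger([0.0, 0.6, 2.0, 1.5, 2.0]))  # ['press', 'release', 'press']
```

The interesting part is the last sample: the key re-registers at 1.5 mm of travel, far past the 0.5 mm actuation point, which is exactly the behavior that some anticheat systems apparently frown upon.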

This is all neat, but I was hoping that my next keyboard would be running open-source firmware like QMK or VIA. Last time I looked, all the hall-effect keyboards were proprietary.

I should have been paying more attention, because the Keychron K2 HE has been out for more than half a year, and it runs on QMK!

I am going to be honest with you, since I always do my best to stay honest here! Most of the hall-effect features seem like gimmicks. I’m not sure how much of a difference setting actuation height will make to responsiveness, and I half expect the feature where you can assign four actions to different depths on a single key to be too cumbersome to configure separately for each game.

That said, I am excited about having a chance to try them out. I will report back in a few months with details on how I manage to make use of that. I started my first game of Cyberpunk 2077 last week, and everyone tells me that I will hate driving with the keyboard. Maybe I will be able to configure analog steering!

Why a 75% keyboard layout?

I don’t need a number pad. I’m not an accountant working in the nineties. I don’t key in digits from receipts and purchase order printouts all day long. If you ARE keying in hundreds or thousands of digits all day long in 2025, why isn’t the machine somehow scanning those digits for you? The longest sequence of numbers I type is an occasional 6-digit 2FA code, and it is faster to type a year like 2025 without moving a hand all the way to a number pad.

The horizontal space on my desk between my fingers and my mouse is valuable. It is easier to keep things centered on the monitor if I don’t have to reach an extra six inches to get to the mouse, and it is nice to not have to reach that extra six inches over and over throughout the day.

A lot of people enjoy 60% keyboards, but I don’t find the space on my desk between my fingers and my monitor to be terribly valuable. You could fill that space with 100 extra keys, and the worst that would happen is that I’d ignore them. I will never wish I could put something in their place.

I tend to make use of function keys for one-handed operations. Emacs defaults to using F3 through F5 for recording, stopping, and replaying macros. I often click the next place I want to repeat a macro, so running the macro with one finger helps. I have F9 through F12 with modifier keys bound to shortcuts that adjust my display output between combinations of my primary monitor and my office television.

Now that I am dropping back to a single macropad, I might move some of my office video lighting controls to other combinations of function keys. The function keys are like a free macro pad in an unobtrusive spot, and I wouldn’t complain if I had an additional row of them!

Why use a gaming keyboard when I spend more time working than playing?

In a perfect world, the IBM Model M in my closet would appear on my desk when I am writing a blog or chatting in Discord, and a fancy hall-effect keyboard would magically take its place when I fire up a first-person shooter. I do swap my mouse when playing games that require fast aim, but I’m not going to attempt to play musical chairs with two heavy wired keyboards. I’m also not going to move to a separate desk, computer, monitor, and keyboard to play games. I toggle back and forth fairly often!

I can write a blog post using the crummiest laptop keyboard, but I will play Team Fortress 2 better with the appropriate tools, and I will also have more fun.

The linear hall-effect keys feel way different than blue switches!

I have been typing on this keyboard for three days. The first thing I did was switch to the preconfigured gaming profile, and I lowered the actuation distance for every key from 2 mm to 0.5 mm.

At that distance, it only takes letting the weight of one of your fingers rest on a key to see a letter appear on your screen. It happens just as the spring starts to provide proper resistance to your pressure. I wound up setting the spacebar’s engagement height to the default of 2 mm because I was occasionally typing dozens of spaces while just resting my thumb on the keyboard!

Keychron K2 HE at my desk

I expected that I’d be toggling back to the 2-mm default profile when not gaming, but I have only been using the gaming profile with the short-throw switches. It isn’t causing me much trouble. I’m not typing extra random characters. The aggressive gaming switches still type like a normal keyboard for me.

I have noticed that I have a peculiar habit. When my thoughts slow down and my fingers catch up to the words, I might pause with my fingers somewhere besides the home row. I have seen a few accidental t characters pop up while waiting to think up the next word. It is always a t. I’m not sure why that is, but I seem to have already broken the habit.

I am weirded out by the creamier sound of this keyboard. The crunchiness of my old blue switches didn’t match the clackiness of my old IBM Model M, but those two keyboards had more in common with each other than the Keychron K2 HE’s linear Nebula switches have with the blues. I hear this strange noise when my fingers hit the keys, and I feel like I am in the wrong office!

What about gaming, Pat?!

I don’t really know yet. I’ve only had the keyboard for a little over 24 hours. I feel like the part of my gaming experience where the rapid response of the quick-actuating hall-effect switches would help me the most would be when playing Team Fortress 2. I don’t have any plans to play in the near future, because playing a lightly competitive multiplayer game is tiring and draining. I usually have a few months in a row where I enjoy that, but then I wind up taking a break.

I have been playing through Sniper Elite: Resistance again this week. I enjoy this series of games. They reward my quick aim time and accuracy, but they don’t require me to be constantly aiming and shooting. I get to spend time slowly wandering around, positioning myself, and making sure a big group of bad guys doesn’t spot me.

The game is mostly relaxing with regular bursts of fun. It is also never going to make use of faster keyboard switches. I’m sure I’ll be excited to play some adrenaline-fueled games like Trepang2 or RoboQuest soon enough.

A keyboard that works as an analog gamepad?!

The third default profile is set up for gaming. It remaps the WASD cluster as the left analog stick. If you partially engage the W key, your guy will slowly walk forward. Push it all the way down, and he’ll walk at full speed. Isn’t that weird?!
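Under the hood, this presumably just maps each key's measured travel onto a gamepad axis. A minimal sketch of that mapping, with made-up deadzone and travel numbers of my own choosing:

```python
def depth_to_axis(depth_mm, total_travel=4.0, deadzone=0.2):
    """Map measured key travel to a 0.0-1.0 analog axis value,
    ignoring the first fraction of a millimeter as a deadzone.
    (Illustrative numbers only; not Keychron's actual curve.)"""
    if depth_mm <= deadzone:
        return 0.0
    return min((depth_mm - deadzone) / (total_travel - deadzone), 1.0)

print(depth_to_axis(0.1))  # 0.0 -- resting a finger doesn't move you
print(depth_to_axis(4.0))  # 1.0 -- bottomed out means full speed
```

A real firmware would almost certainly apply a response curve on top of this linear mapping, but the basic idea is the same: partial travel becomes partial stick deflection.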

I am curious about how this works out, because I recently started my first playthrough of Cyberpunk 2077. I prefer aiming with a mouse, but everyone tells me that I am going to hate driving with the keyboard. It sounds like the Keychron K2 HE might help me make the bad keyboard driving controls less bad. I’m not sure how well this works in practice, but I am excited about giving it a try!

But Pat! I don’t want to spend $140 on a keyboard!

Depending on which part of the Keychron K2 HE you value, there seem to be two really good options at around half the price. Keep in mind that I haven’t used either of these alternative keyboards. A keyboard is a personal and opinionated choice. It might be a good idea to shop somewhere with a liberal return policy!

You can skip the hall-effect switches and opt to go with the Keychron K2 Max for $115. I believe that replacing the hall-effect switches with red switches is the only difference between the K2 Max and K2 HE.

You can downgrade to lesser sound-damping material with the Keychron K2 Pro and save another $10. You could also drop back to the base-model Keychron K2, which has very little acoustic material, to bring the price down to $80. Any of these keyboards would be a good option for office work, and they would still be fine gaming keyboards. Every Keychron K2 trim level supports the open-source QMK firmware.

If the hall-effect switches and rapid-trigger effect are what you’re excited about, I was also eyeing up the Yunzii RT75, which seems to go on sale regularly for $72. The RT75 does not run open-source firmware. Yunzii’s web configurator seems to have a feature set comparable to the Keychron K2 HE.

The Yunzii RT75 is in a fully plastic case, and it doesn’t have comparable acoustic foam to any keyboard in the Keychron K2 lineup. It comes with a different set of tradeoffs, but I looked up how the Yunzii sounds, and I don’t think it sounds bad at all!

Buy vs. build

I am going to tell you right now that I have a lot of ideas about what would make for my ideal keyboard. I want a split keyboard, with bonus points for using a 3D cup shape. I want more keys that my thumbs can reach so I can rely less on my pinkies. I would enjoy an extra column of keys that my pinky could reach while using the WASD cluster while gaming.

I can’t get every feature I want without compromise. A split keyboard is going to end at G, but sometimes I DO reach for the Y and 7 keys while gaming, and they’d be a mile away on the wrong half.

A cupped ergonomic keyboard wouldn’t let me move my fingers over to WASD while gaming, which means I’d have to rely on layers to make games work. Then I’d have to make sure I switch layers if I use text chat or switch to my Discord window. Maybe I could automate that with QMK, but that’s even more work!

3D printing an ergonomic shell and manually soldering 80 switches isn’t a daunting task, but I am doubtful that one could design their own three-dimensional QMK or VIA keyboard with hall-effect switches today. I don’t know whether this Keychron K2 HE sounds creamy or thocky, but I don’t believe I could make my own keyboard sound this good, no matter how many heavy layers I cut out of aluminum on my CNC machine.

I have enough desk space in here that I could have a station dedicated to gaming, but it doesn’t make any financial sense. My gaming GPU makes DaVinci Resolve run faster. An overpowered CPU that I might have for compiling or rendering can still be utilized for gaming.

All these related tasks work better if I invest all the money into a single build, so I need one keyboard that works well enough for everything.

Conclusion

We know this isn’t a conclusion. I haven’t even had the Keychron K2 HE in my hands for an entire week, and I haven’t played any games where keyboard latency will have any impact. I haven’t even gotten to test out the Dynamic Keystrokes. My boring idea is to move the harder-to-reach weapon-switch binds from 4 through 6 down to short presses of 1 through 3, but that is a heck of a minor upgrade for a keyboard that cost five times as much as the one it replaced.

I am confident that I have chosen well. Spending $140 on a piece of hardware that I will push thousands of words through every single day is a bargain. I’ll be excited if it manages to improve my gaming experience by even 5%. I am even more excited about messing around with an extremely custom QMK build at some point in the future.

I want to hear what you think! Did I make a good choice with the Keychron K2 HE? Should I have chosen something else? Would I have been better off spending half as much on the Yunzii RT75? Should I have built my own keyboard from scratch? Are you using a better 75% keyboard? Visit our friendly Discord community and join the discussion about keyboards, custom ultralight gaming mice, and other related interests!

Torture Testing My Cenmate 6-Bay USB SATA Hard Disk Enclosure


I don’t know if I am really properly beating on this thing as hard as I can, but I am doing my best with the hard drives I have available!

People in the homelab community tend to have an aversion to USB storage, and I definitely didn’t have a ton of confidence in it in the past. I had my own issues with RAID arrays built from USB hard disks ten to twenty years ago, but I have had great success with external USB hard disks on both my off-site Raspberry Pi and my NAS virtual machine over the last four years, so I thought it was time to try out a beefier piece of external USB storage.

Cenmate 6-bay USB enclosure on my desk

I imagine that everyone’s distaste for USB storage is based on outdated information and old experiences. I wound up ordering a Cenmate 6-bay USB enclosure for $182. The tl;dr is that it is doing a fantastic job. It can manage over 940 megabytes per second on sequential reads or writes. It is well built. It is extremely compact and dense. The fans aren’t super loud. The price is fantastic. It handles most drive failure situations gracefully, and when things are less graceful, it doesn’t leave you in a position where you’re likely to lose any data.

I would most definitely trust my own bulk storage to this Cenmate enclosure.

More importantly, I haven’t had the USB device misbehave in any disastrous way. I started writing this blog post after six simultaneous bonnie++ benchmarks had been running for thirty hours on six old 7200-RPM hard disks without a single hiccup.

At this point, I have a few days of continuous successful bonnie++ benchmarks of the mechanical disks, three days of bonnie++ benchmarks against aging SATA SSDs, and seven full days of continuous fio randread benchmarks at an average of 60,000 IOPS.

I did have some trouble with my mechanical disk testing, but every hiccup I have had is because my collection of aging test hard drives has aged worse than I thought! At least half of them are dying!

Why use a USB enclosure instead of building or buying a NAS?

This post is about what I have learned about this specific USB SATA enclosure, so I don’t want to go too deep into why I think you should consider using one or more USB enclosures in your homelab. I will endeavor to keep this part short!

Price is a good reason. The cost of my $140 Trigkey N100 mini PC and my $182 6-bay Cenmate enclosure maths out to $54 per 3.5” hard drive bay. That is less than half the cost per drive bay of a NAS from UGREEN or AOOSTAR, and both companies are selling their NAS offerings at extremely competitive prices.
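The per-bay arithmetic is simple enough to run against your own shopping cart:

```python
mini_pc = 140    # Trigkey N100 mini PC
enclosure = 182  # Cenmate 6-bay USB enclosure
bays = 6
print(f"${(mini_pc + enclosure) / bays:.2f} per 3.5-inch drive bay")
# $53.67 per 3.5-inch drive bay
```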

You can assemble a 6-bay NAS for yourself using a Trigkey mini PC with an N150 CPU, 16 GB of RAM, and a 512 GB NVMe for less than $350. The whole setup takes up just over six liters of space. How much you want to spend on the six mechanical hard disks to fill it up is your choice.

Cenmate Read Testing

We pushed around 10 TB of reads and writes to the six SSDs with 3 days of continuous bonnie++ tests, then around 16 TB of random reads over 7 days at an average of over 60,000 read operations per second.

Another good reason to use USB enclosures is flexibility. You can buy 2-, 4-, 6-, and 8-bay enclosures all at reasonable prices. You can plug multiple enclosures into a single computer, and if you run out of fast USB ports, you can plug one enclosure into the next. You can connect all your external enclosures to a single server, or you can split them up between mini PCs.

There isn’t even a rule that says you can only use a USB hard drive enclosure with a mini PC! Maybe you already have a purpose-built NAS, but you are running out of space. You can always plug in an external USB enclosure to add more disks, as long as your operating system will allow it. You could make sure your most important data is on the internal storage, while relegating the new USB enclosure to backups and scratch data.

The density of a mini PC with a Cenmate enclosure is hard to beat

I knew from the dimensions that there wasn’t a ton of empty space inside a Cenmate enclosure, but I didn’t understand just how dense it would be until I loaded it up with 3.5” drives and picked it up. That was the moment that I understood in my gut that my setup packed a lot of storage into a small volume.

A few days ago, we were talking about a ridiculous build where someone crammed ten 3.5” hard disks and an N150 mini PC into a mini-ITX gaming case. The Reddit post seems to be gone, so I can’t look up the exact specs, but it sure looked like it was packed to the gills!

A Jonsbo N4 case is 19.6 liters and holds six 3.5” disks.

If you measure the length, width, and height of the space that my Trigkey N100 mini PC stacked on top of my 6-bay Cenmate enclosure occupies, you will find that it takes up just 6.3 liters. That is counting the void left behind the mini PC as occupied space.

My setup is 1/3 the size of a Jonsbo N4 case with the same number of 3.5” drives.
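If you want to check my math against your own case measurements:

```python
jonsbo_n4 = 19.6  # liters, a popular case that holds six 3.5" disks
my_stack = 6.3    # liters, mini PC stacked on the Cenmate enclosure
print(f"{my_stack / jonsbo_n4:.0%}")  # 32% -- about a third of the volume
```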

I’m not saying that your NAS build needs to be this compact. I just think it is neat that I may have accidentally built the most compact 6-drive NAS in our Discord community!

My little trick for installing 2.5” SATA SSDs in the Cenmate trays!

Cenmate’s trays are awesome for 3.5” hard drives. The little plastic clips hold the drive in place, and you don’t need any tools to install the drives. Not only that, but a Cenmate enclosure with a few drives installed is heavy enough that you can just push the trays in with one finger, and they solidly clunk into place.

Snip the Cenmate tray

You need to screw 2.5” drives in from the bottom of the tray, but the real bummer is that one of these plastic nubbins interferes with these smaller drives. Cenmate wants you to remove the blue retention bracket when installing 2.5” SSDs. I didn’t want to do that. I would be very likely to lose the brackets!

I took a pair of flush cutters to the one nubbin that bumps into the SATA SSD. I checked: three out of four nubbins still do a fine job holding a heavy 3.5” hard disk in place, and once you snip the fourth one off, there’s no problem installing your SATA SSD.

They should ship like this from the factory.

I am using the word nubbin a lot.

Nubbin.

How mean can I be to the SATA-over-USB connection?

I wrote most of the rest of this blog post almost two months ago. Last week, my friend Brian McMoses stopped by with a stack of seven old SATA SSDs. They range in size from 120 GB to 256 GB. I started out running bonnie++ which winds up being a workload that is roughly half reads and half writes.

I ran those continuous read/write benchmarks for 72 hours. That ate up around 2% of the write lifetime of the oldest drive in my 6-disk RAID 0 array. I hope you will agree with me that destroying SSDs for the sake of enclosure reliability testing is a bummer, and that three days of writes was enough.

I switched to a read-only randread benchmark using fio. When you start the first test, fio creates a bunch of files and fills them up. Every subsequent run of fio reuses those same files, so I have been doing an average of around 60,000 random IOPS spread across 6 drives on a single USB port for seven full days so far.
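If you have never run fio, its randread workload boils down to something like this little Python sketch. fio does the real work with asynchronous I/O, configurable queue depths, and proper latency statistics; treat this as an illustration of the access pattern only:

```python
import os
import random
import tempfile

def random_read_pass(path, block_size=4096, ops=500):
    """Issue single-block reads at random offsets across a file,
    roughly what an fio randread job does on each of its files."""
    blocks = os.path.getsize(path) // block_size
    completed = 0
    with open(path, "rb") as f:
        for _ in range(ops):
            f.seek(random.randrange(blocks) * block_size)
            if len(f.read(block_size)) == block_size:
                completed += 1
    return completed

# Build a 1 MiB scratch file, then hammer it with random reads.
with tempfile.NamedTemporaryFile(delete=False) as scratch:
    scratch.write(os.urandom(1024 * 1024))
print(random_read_pass(scratch.name))  # 500
os.unlink(scratch.name)
```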

fio running on my Cenmate enclosure for 7 days so far

I took this screenshot at some point during the sixth day of continuous random read testing

The Cenmate enclosure survived several days at 940 megabytes per second of sequential reads while I was collecting data for the previous blog post. That is one kind of stress for the chips inside the Cenmate enclosure. Now the enclosure is surviving weeks of hammering the USB controllers with 50 times more IOPS than six mechanical hard disks could ever sustain.

I have an extreme level of confidence now that my Cenmate enclosure can handle intense workloads for prolonged periods of time as long as the disks or SSDs are in good working order.

What happens when you have failing disks? I found that out pretty quickly, and we’re going to talk about that soon!

If you trust my judgment, you can stop reading here or skip to the conclusion. I am about to go into great detail about the things that happened while hammering on the Cenmate enclosure with several failing 3.5” hard disks installed. I think the two important observations are that the enclosure’s electronics have been rock solid, and the data on my drives would be safe even when encountering the worst failure mode that I could produce.

What kind of problems am I running into?!

I had a good list of reasons for choosing a 6-bay enclosure. Six disks is a good count for a RAID 5 array that doesn’t dedicate too large a percentage of your storage to parity data. Six big disks in a RAID CAN exceed the speed of the Cenmate’s 10-gigabit USB connection, but they can only do that on the first third or half of the disk. That felt like a reasonable balance between value and a small bottleneck.
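The parity math works in your favor as you add disks: RAID 5 spends exactly one disk's worth of capacity on parity no matter how wide the array is, so a wider array loses a smaller slice.

```python
def raid5_usable_fraction(disks):
    """RAID 5 stores one disk's worth of parity across the array."""
    return (disks - 1) / disks

for n in (3, 4, 6, 8):
    print(f"{n} disks: {raid5_usable_fraction(n):.0%} usable")
# With 6 disks, 83% of your raw capacity holds data.
```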

There was an even more important reason for my decision. I was pretty sure I had six old but usable 4-terabyte hard disks in my closet.

I was wrong. One of my old disks was completely dead. Two were making clunking sounds while generating lots of read errors. Others were quietly but very regularly encountering errors. The biggest bummer is that the 12-terabyte disk that I expected to be problem free is now the only disk left in my test that is encountering read errors.

THESE ARE HARD DISKS WITH PROBLEMS. This is not a problem with the enclosure, the SATA chipset in the enclosure, or the USB connection. I just didn’t have six good disks on hand.

My batch of test hardware was now down to the remaining 4-terabyte drives, one flaky 12-terabyte drive, one 500-gigabyte drive, and one 400-gigabyte drive. These are the drives I had when I managed to make my mdadm RAID array kick drives out of the array.

My plan was to do all the angry benchmarking against a RAID 5 array, but that would be limited to the performance of the slowest drive. The 12-terabyte drive can manage 250 megabytes per second while the 400-gigabyte drive is limited to around 80 megabytes per second.

It is a good thing Brian brought over some SATA SSDs for me to use for further testing!

I am glad that my drives aren’t perfect, because it let me test interesting failure modes!

I didn’t even consider that these failure modes would be interesting. I have three unique things happening with at least three different drives. I won’t post every line from dmesg, because sometimes they are numerous.

My 12-terabyte drive is reporting read errors, but bonnie++ is able to power through them, because they wind up being correctable.

[4898744.584164] I/O error, dev sdf, sector 913209008 op 0x0:(READ) flags 0x80700 phys_seg 32 prio class 0
[4898877.612857] scsi host5: uas_eh_device_reset_handler start
[4898877.612912] xhci_hcd 0000:00:0d.0: bad transfer trb length 47104 in event trb
[4898877.679055] usb 2-1.4.3: reset SuperSpeed USB device number 126 using xhci_hcd
[4898877.692684] scsi host5: uas_eh_device_reset_handler success

Sometimes when there is a recoverable read error, that individual USB SATA controller is reset. The matching /dev/sdf device doesn’t go away. Nothing bad happens. There is just a little blip in the connection. I assume this reset happens due to the drive being unresponsive while repeatedly attempting to read the bad sector. The filesystem stays mounted, and the benchmark keeps chuggin’ away.

One of my 4-terabyte disks had an unrecoverable read error. The bonnie++ process acknowledged the I/O error.

[4787355.429473] sd 3:0:0:0: [sdd] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[4787355.429479] sd 3:0:0:0: [sdd] tag#1 Sense Key : Illegal Request [current] 
[4787355.429481] sd 3:0:0:0: [sdd] tag#1 Add. Sense: Invalid field in cdb
[4787355.429484] sd 3:0:0:0: [sdd] tag#1 CDB: Read(16) 88 00 00 00 00 00 59 df 91 48 00 00 01 00 00 00
[4787355.429485] critical target error, dev sdd, sector 1507823944 op 0x0:(READ) flags 0x80700 phys_seg 32 prio class 0
[4787355.429601] sd 3:0:0:0: [sdd] tag#2 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=2s

<removed lots and lots of repeats of the above>

[4783762.072077] scsi host3: uas_eh_device_reset_handler start
[4783762.072333] xhci_hcd 0000:00:0d.0: bad transfer trb length 65536 in event trb
[4783762.072411] xhci_hcd 0000:00:0d.0: bad transfer trb length 53248 in event trb
[4783762.072572] xhci_hcd 0000:00:0d.0: bad transfer trb length 33792 in event trb
[4783762.140263] usb 2-1.3: reset SuperSpeed USB device number 122 using xhci_hcd
[4787945.388477] critical medium error, dev sdd, sector 1507823944 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0

This is a little different than the recoverable read error, but it doesn’t change much in practice. Had these drives still been in an mdadm RAID array, the drive experiencing this error would almost definitely have been kicked out of the RAID.

The first bad drive that I pulled was the problematic one. It somehow manages to make all six drives disconnect from the mini PC’s USB controller. I started that day with sdc through sdf, but after the reset I had sdg through sdj. The connection didn’t just get reset. The USB enclosure was detected as a brand new device.

[4423503.817847] EXT4-fs warning (device sdf1): ext4_end_bio:342: I/O error 17 writing to inode 13 starting block 27623625)
[4423503.819712] Aborting journal on device sdf1-8.
[4423503.819738] JBD2: I/O error when updating journal superblock for sdf1-8.
[4423503.819743] EXT4-fs (sdf1): Delayed block allocation failed for inode 13 at logical offset 14118912 with max blocks 2048 with error 5
[4423503.819751] EXT4-fs (sdf1): This should not happen!! Data will be lost
[4423503.819755] EXT4-fs error (device sdf1) in ext4_do_writepages:2724: IO failure
[4423503.819771] EXT4-fs (sdf1): I/O error while writing superblock
[4423503.820240] EXT4-fs error (device sdf1): ext4_journal_check_start:84: comm kworker/u8:0: Detected aborted journal
[4423503.820280] EXT4-fs (sdf1): I/O error while writing superblock
[4423503.820283] EXT4-fs (sdf1): Remounting filesystem read-only
[4423503.841037] sd 5:0:0:0: [sdf] Synchronizing SCSI cache
[4423503.910074] sd 5:0:0:0: [sdf] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[4423504.140334] usb 2-1: new SuperSpeed Plus Gen 2x1 USB device number 106 using xhci_hcd
[4423504.183687] usb 2-1: New USB device found, idVendor=2109, idProduct=0822, bcdDevice= 8.b3
[4423504.183696] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[4423504.183699] usb 2-1: Product: USB3.1 Hub             
[4423504.183700] usb 2-1: Manufacturer: VIA Labs, Inc.         
[4423504.186967] hub 2-1:1.0: USB hub found
[4423504.187324] hub 2-1:1.0: 4 ports detected
[4423504.817696] usb 2-1.2: new SuperSpeed Plus Gen 2x1 USB device number 107 using xhci_hcd
[4423504.829870] usb 2-1.2: New USB device found, idVendor=174c, idProduct=55aa, bcdDevice= 1.00
[4423504.829877] usb 2-1.2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[4423504.829880] usb 2-1.2: Product: ASM235CM
[4423504.829882] usb 2-1.2: Manufacturer: ASMedia
[4423504.829883] usb 2-1.2: SerialNumber: ACAAEBBB215F
[4423504.835984] scsi host6: uas
[4423504.837425] scsi 6:0:0:0: Direct-Access     ASMT     2235             0    PQ: 0 ANSI: 6
[4423504.838834] sd 6:0:0:0: Attached scsi generic sg3 type 0
[4423504.850722] sd 6:0:0:0: [sdg] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)

I pulled that drive and drew a big, fat question mark on its label. This is the worst problem I was able to coax out of the Cenmate enclosure, and I am hoping that the question-mark drive will let me recreate this problem again in the future.

NOTE: As I am writing this, I am wondering what would have happened if I were using disk ID labels instead of lazily building my temporary mdadm devices using /dev/sdc through /dev/sdg. Would mdadm realize that the new devices match the old devices? I will have to try that next month after the randread testing is completed!

The worst failure mode isn’t that bad

The group of people today who need five nines of uptime and the group of people who can even make use of slow, mechanical disks don’t have much overlap.

I don’t know about you, but my RAID of slow, mechanical disks is there to keep me from having to waste time restoring dozens of terabytes of data when I have a hardware failure. It isn’t a big deal if my backup target isn’t available over the weekend. Losing access to my Jellyfin library for an evening isn’t a huge problem.

Sitting down to spend a few hours of my time performing a restore and making sure I actually restored everything that was necessary is a bummer. Wondering whether or not I ACTUALLY did a good job restoring everything over the next three weeks is even worse.

Restoring from backup and getting a machine back into production at work can be a stressful task, and the job isn’t always complete when you think it is.

I will only lose a few minutes of my own time if my hypothetical stack of six 20-terabyte SATA drives in my Cenmate enclosure goes offline due to a weird USB reset and drive redetection cycle. I had everything back in five minutes when I encountered the problem: I powered off the enclosure, used mdadm to stop the RAID 5, powered the enclosure back up, and my RAID 5 was detected again a few seconds later. A lazier fix would have been to just reboot the server.

This could be a serious problem if I was serving customers and this was the only copy of their data. I can’t imagine a scenario where I would be serving information to the world from mechanical hard disks in 2025.

As far as I can tell, this type of failure is difficult to trigger. I have at least four failing hard disks here, and only one of them has managed to make this happen, and it only happens after it has been throwing read errors for several minutes. It is a rare problem, you can probably see it coming, you can easily prevent it from happening again, and it is easy to recover from.

I feel that this is acceptable, especially if you are aware that it can happen.

How does this thing actually work?!

The layout of the USB devices that show up when you plug the Cenmate 6-bay enclosure in is interesting!

/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 20000M/x2
    |__ Port 1: Dev 14, If 0, Class=Hub, Driver=hub/4p, 10000M
        |__ Port 1: Dev 15, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
        |__ Port 2: Dev 16, If 0, Class=Mass Storage, Driver=uas, 10000M
        |__ Port 3: Dev 17, If 0, Class=Mass Storage, Driver=uas, 10000M
        |__ Port 4: Dev 18, If 0, Class=Hub, Driver=hub/4p, 10000M
            |__ Port 3: Dev 21, If 0, Class=Mass Storage, Driver=uas, 10000M
            |__ Port 1: Dev 19, If 0, Class=Mass Storage, Driver=uas, 10000M
            |__ Port 4: Dev 22, If 0, Class=Mass Storage, Driver=uas, 10000M
            |__ Port 2: Dev 20, If 0, Class=Mass Storage, Driver=uas, 10000M

The device at the top of the tree is a 10-gigabit USB hub. The usb-storage device you see is a USB SSD that I plugged into the Cenmate enclosure’s daisy-chain port. Then there are two USB Attached SCSI (uas) devices that correspond to two of my 3.5” SATA hard disks.

The next branch in the tree is another 10-gigabit USB hub that has the other four hard drives attached.

I didn’t even notice that the Cenmate enclosure wasn’t using the usb-storage driver until after plugging in that additional USB SSD!
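
If you want to double-check which transport and driver your disks ended up on without reading the whole device tree, a couple of stock commands will tell you. Output will obviously vary with your hardware:

```shell
# The TRAN column reports each disk's transport (usb, sata, nvme, ...)
lsblk -d -o NAME,TRAN,MODEL,SIZE

# Reprint the USB topology with the driver bound to every device,
# exactly like the tree above
lsusb -t
```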

How is the power consumption?

I have the Cenmate enclosure plugged into a metering smart outlet that is connected to Home Assistant.

It sits at an extremely frugal 0.2 watts when no drives are plugged in. During the 6-drive benchmarks, the enclosure eats up 1.33 kWh per day. That works out to an average of 55 watts, which is roughly 9 watts per drive.
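
The arithmetic is simple enough to sanity-check:

```shell
# 1.33 kWh per day expressed as average watts, then split six ways
awk 'BEGIN { w = 1.33 * 1000 / 24; printf "%.0f watts total, %.1f watts per drive\n", w, w / 6 }'
```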

I prefer to track power usage over an entire day to get a nice, clean average, but I don’t have that kind of data with fewer drives. The instantaneous readings do start at around 9 watts with one drive installed, and they go up by about 9 watts every time you click another drive into a bay.

This means you don’t have to be conservative and buy a smaller enclosure if you’re power conscious. You can buy an oversized enclosure and add drives as time goes by and your needs grow.

The fully-loaded enclosure does spike to nearly 120 watts when you flip the power switch and all six drives spin up. The included 12-volt power brick says it maxes out at 108 watts. I am not terribly concerned about this, because the spike past 108 watts ends so quickly that you’ll miss it if you blink.

Conclusion

I am more than pleased enough with the results so far, so I will be working on setting the Cenmate enclosure up for long-term use. I don’t need six disks’ worth of storage, but I am certain I can make good use of one or two bays in the immediate future, and it will be handy to have some spare bays around if I ever need to sneakernet some data around. This will have to wait a few weeks. I’d like to see at least a month of fio random IOPS testing go by without a hiccup.
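
My burn-in testing is nothing fancy. A fio invocation along these lines would do the job; the path, size, and runtime here are placeholders you would tune for your own array:

```shell
# Long-running 4k random-I/O burn-in against a file on the array.
# The path, size, and runtime are placeholders to tune for your setup;
# --time_based keeps fio looping for the full runtime (here, one day).
fio --name=burnin --filename=/mnt/raid/fio-test.bin --size=16G \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --time_based --runtime=86400 --group_reporting
```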

I am extremely curious at this point about what you are thinking! Have you thought about using a USB SATA enclosure? Do you have fear, uncertainty, and doubt caused by the earlier days of USB storage like I did? Are you already using a USB enclosure? How has your experience been? Let us all know about it in the comments, or join the Butter, What?! Discord community and chat with us about how things are going!

Should I Run Bazzite Linux On My Workstation?

| Comments

You might consider it a stretch to call my gaming PC a workstation. One lazy way to define a workstation could be enterprise server-grade hardware in an office-friendly case, but I’m willing to be more liberal with my labeling. Workstation is an easy word to use in the title that conveys relevant enough information, so I am sticking with it, because this is the machine I sit at when I want to get work done.

Bazzite is the new and popular gaming Linux distro. It is built on top of Universal Blue, which is built on top of Fedora Silverblue, and these are all immutable distros. I hope I got that correct!

I am excited about the idea of immutable distros. I’ve been running Bazzite’s gaming mode in my living room for a few months, and I am impressed with it. They have desktop spins of the installer, so they have me tempted to give it a try.

Bazzite on my 5700U laptop

I usually shy away from the more niche Linux distros. I don’t want to have to reinstall and start from scratch if someone gets bored and the distro goes away.

I could wait until the end to reveal this, but I am already dipping my toe a little deeper into the Bazzite waters. I just installed the KDE Plasma spin of Bazzite on my Asus 2-in-1 laptop. Things are looking promising so far!

My Linux distro history

I started out using Slackware in the nineties. I tried SuSE for a while, because their network installer was handy when we had our early cable modems.

I settled on Debian before the end of the decade, and that is all I used until 2006.

That’s when I switched to Ubuntu. The appeal for most Debian users in those days was Ubuntu’s release cycle. We got what amounted to a fresh, reasonably stable, and up-to-date Debian build every six months. That was SO MUCH BETTER than dealing with Debian’s testing repositories breaking your machine twice a year.

I had a continuously updating Ubuntu install on this computer from 2009 until 2022. It started out on my old laptop, was dded to new SSD and NVMe drives a few times, and was paired with one laptop and two different motherboards.

That is when I almost switched back to Debian. Ubuntu has been drifting farther and farther from Debian as the years go by. There are lots of inconsequential things I am grumpy about, but the straw that broke the camel’s back for me was forcing snaps on us. Ubuntu installs the Firefox snap via apt, and in 2022, the snap would refuse to update itself unless I closed Firefox.

It felt like I traveled backwards in time, and it didn’t help that the Firefox snap took so long to open and refused to auto update unless I remembered to close my browser. Who closes their browser?! This felt like a good time to start thinking about where I might move in the future.

I wound up aborting my Debian install. I’m not going to get all of the details right from memory, but I am sure this will be close enough to accurate. Getting a combination of recent enough Mesa and RADV libraries installed for ray tracing to work well, and getting a build of OBS to work with hardware video encoding, while simultaneously having a working ROCm setup compatible with DaVinci Resolve Studio was going to be a massive pain in the butt.

Ubuntu had two out of the three nailed, and working around the third wasn’t a big deal.

Bazzite to the rescue?!

Bazzite prioritizes gaming. Bazzite is built on top of Fedora Silverblue with nearly bleeding edge AMDGPU drivers and Mesa libraries, so my Radeon GPU will always be working great, and I will be running one of the first distros to ship support for whatever the next generation of Radeon GPUs happens to be. That means I won’t have to wait as long after a new hardware release before upgrading!

This is awesome. Gaming is the most demanding thing I use my computer for, and things always improve when you can use the latest and greatest kernels, drivers, and libraries. Shoehorning this stuff into Ubuntu LTS releases can be a pain, and you’re always lagging behind.

Bazzite ships with their ujust system. It isn’t a package manager. It is more like a consolidated set of scripts and magic to help you get certain things going, much like an officially supported set of Proxmox helper scripts.

On my laptop, I ran ujust enable-tailscale to get my fresh Bazzite install connected to my Tailnet, and I ran ujust install-resolve-studio to install DaVinci Resolve.

It was slightly more complicated than that. I had to download the zip file from Blackmagic’s site myself, but ujust handled the rest for me. It set up a custom distrobox environment with everything Resolve needs to run, and I didn’t even have to click through Resolve’s GUI installation tool. It was just ready to go, and everything seems to work. Though I did have to tweak Resolve’s memory settings to stop it from crashing on my low-end laptop!

I don’t know if it is fair to accuse my laptop of being low end. It was squarely in the mid range when I bought it, but time has gone by, and it is starting to show its age.

The best part is that Resolve is in its own container. It is unlikely that a future update to the Bazzite installation will break things.

It took me a few clicks to install OBS Studio using Bazzite’s new Bazaar frontend for Flatpak. Flatpak correctly installed the required VA-API plugin. I just had to turn on the advanced settings in OBS Studio, and I had my laptop hardware encoding a 1080p screen capture in h.265.

Those were the trio of things that were going to be an effort to get working on Debian three years ago. They’re all working, and they’re all in better shape than on my current Ubuntu install on my workstation. I think that is an awesome start!

Living with an immutable distro, and embracing Distrobox

I already mentioned that Bazzite uses Distrobox to containerize DaVinci Resolve, but I didn’t explain what Distrobox is. Let’s see if I can do a good enough job in a paragraph.

Distrobox sits on top of either Docker or Podman, and it handles installing, configuring, and running full Linux distros in these containers. They aren’t containerized for security or to provide any significant separation. The opposite is true! All your Distroboxes are plumbed to have access to most of your hardware and to share your home directory.

This means you can set up separate Distroboxes with Arch, Debian, and Ubuntu. You can set up terminal window shortcuts to open shells in these separate boxes. You can create an AI-generated video in your Debian box, then edit that with DaVinci Resolve in the Ubuntu box, and paste that video into Discord using your Arch box. Each Distrobox has access to your Wayland session, so you can run GUI programs on any Distro.
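
Getting one of those boxes going only takes a couple of commands. This assumes Distrobox is already installed and the image tag is available from your registry; the box name is made up for the example:

```shell
# Create a Debian box and drop into a shell inside it. The box name is
# made up for the example; any OCI image tag should work.
distrobox create --name debian-box --image debian:stable
distrobox enter debian-box

# GUI programs launched inside the box inherit your Wayland session,
# so something like `firefox &` from the box's shell just works.
```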

I had Distrobox up and running on my aging Ubuntu install in a few minutes. Not long after, I had an Ubuntu 25.04 box going with Steam installed, and I was playing games that were already downloaded to my Ubuntu host. It bind-mounted all my usual file systems exactly where they needed to be to play my existing Steam games.

My plan is to use Bazzite for the stuff that is a pain to maintain or relies heavily on the host’s hardware. Steam, OBS, Resolve, and Firefox will live up there on the host. I expect to do nearly everything else inside one or more Distrobox boxes.

It is possible to export a Distrobox image on one machine, then import it on another. My plan is to get myself an environment that I am happy with on my old Ubuntu workstation, and move all my important work into that box. Once I am happy, I will copy that box over to my laptop.
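
I haven’t actually performed this migration yet, but with the Podman backend I expect the move to look roughly like this. The box and image names here are hypothetical:

```shell
# On the old workstation: freeze the box's container into an image,
# then dump that image to a tarball
podman container commit my-work-box my-work-image
podman save -o my-work-image.tar my-work-image

# On the laptop: load the tarball and build a new Distrobox from it
podman load -i my-work-image.tar
distrobox create --name my-work-box --image localhost/my-work-image
```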

If I do things well, I should almost instantly have my working environment fully operational once I get around to installing Bazzite on my workstation. That is awesome!

The core idea here isn’t new. I used to do something similar with work and personal virtual machines two decades ago, but it wasn’t nearly as easy to work with those separate virtual machines at the same time.

Conclusion

Wiping out my workstation and starting from scratch fills me with dread. I always worry that there will be something that I rely on that is missing, or some weird binary in /usr/local/bin that just doesn’t exist anymore. Maybe I will lose a game’s save files that are stored in a weird location and aren’t being synced by Steam. What if an important program refuses to work correctly, or I can’t figure out how to configure something correctly?

Things never ACTUALLY go terribly wrong, but I always miss something important, and migrating to an entirely new Linux distro isn’t something I would do on a whim. I am definitely going to kick the tires on my laptop for a few weeks, and put some work into getting a Distrobox environment well configured on my current workstation before I wipe my NVMe.

What do you think? Are you running Bazzite on a productivity machine? Am I silly for thinking this will be a good idea, or am I a genius and optimizing for exactly the right thing? How long do you think it will take me to get a productive Distrobox image set up so I can start my migration? You should join our friendly Discord community to let me know if I am making a mistake, or to chat with me to see how things are working out so far!

My New Budget Gaming TV – The TCL Q6/Q651F

| Comments

I have had a renewed interest in couch gaming on the TV. I set up an Intel N100 mini PC last year as a Steam box, but I didn’t use it very long. Then I picked up a Ryzen 6800H mini PC to use in the living room, and it is running Bazzite. That machine is roughly comparable to a Steam Deck, and it has been fantastic, so I wound up adding a GameSir Cyclone 2 to my collection, and I absolutely love its mechanical buttons and D-pad.

Not all games are fun for me in our living room. Our aging 70” Vizio doesn’t have fantastic latency. You would think 70” makes for a massive television, but the couch is 14’ away. I had trouble reading the descriptions of the upgrades in Spider-Man Remastered, so I wound up playing at my desk instead!

TCL Q6 in my office

NOTE: I used Flux Kontext to remove some junk under my desk in the photos in this blog!

I have an ancient 40” 1080p TV in my office that was manufactured in 2009. It actually has decent latency—better than our giant Vizio! It is unfortunate that it suffers from the same problem that I have in the living room. When I sit on my office recliner, I am just too far away to game on a 40” screen.

The tl;dr about the TCL Q6!

You can find almost every iota of information you could possibly want about this TV from the excellent review at rtings.com. The trouble is that review is in reference to the TCL Q6/Q651G, and that precise model isn’t in stock anymore. I had to buy the Q6/Q651F. What is the difference?

I didn’t find a good source to confirm this before I ordered, but it sure looks like the only significant difference is that the Q6/Q651G runs Google’s Android TV, while my Q6/Q651F runs Amazon’s FireOS. I would have preferred the Google model, since all our other screens have Chromecast with Android TV devices, but I wasn’t going to pay an extra $100 or more just for the different operating system.

The important features that I wanted were 120 Hz and variable refresh rate (VRR) at 1080p and 1440p. This is available on either model at 55” or above. There are smaller models, but they are always limited to 60 Hz. My model ending in F has both options available in the menus, though I am having trouble with VRR. We’ll talk more about that soon.

The review at rtings.com is chock-full of great facts, but there aren’t many opinions. If you read on, you will see my opinions scattered all over the place. The opinion that I would have most liked to have seen before buying this TV is how good the slightly fake 120-Hz modes look and feel while gaming. To keep with the tl;dr, they feel 100% like playing on a proper 120-Hz monitor, but the loss of vertical resolution makes the picture look muddy if you’re sitting too close to the screen.

What was on my wishlist?

This is easy. I wanted the lowest latency, best response time, and highest refresh rate that I could get for cheap. Cheap is relative. I guess I wanted to optimize for price without giving up on latency.

The important thing here is that I don’t need the dynamic range or crazy-fast response time of an OLED display. I don’t need dimming zones. I just want a basic yet fast and clean picture.

The trouble is that I don’t use the TV in my office all that often, so I didn’t want to spend too much money. I thought that $300 for the 55” TCL Q6/Q651F was quite reasonable.

NOTE: I have been using my new office TV every single day since it arrived. That should be just over two weeks by the time this is published.

What got me excited about this particular TV?

One of the things I noticed playing games on my Bazzite box in the living room is just how much of a bummer gaming at 60 Hz can be. Sure, there are a ton of games where a low refresh rate is fine, and there aren’t all that many games that I can run on my Ryzen 6800H Bazzite box that will push far past 60 frames per second anyway.

I get to plug the TV in my office directly into my gaming PC’s Radeon 6700 XT. It isn’t exactly a bleeding-edge GPU, but it can do better than 60 FPS in all the games I want to play with a controller.

The TCL Q6 can run at 120 Hz at 1080p or 1440p as long as you enable Dual Line Gate (DLG) in the settings. There are a lot of comments on the Internet talking about how crummy the interpolated 1440p looks on this native 4K display. They’re not entirely wrong, but they’re most definitely a little off.

Roboquest on the TCL Q6

Don’t sit this close to a 55” TV!

It looks like butt if you’re sitting right in front of the TV. I played a map of Roboquest at 1440p120 with my mouse and keyboard while sitting way too close to this massive TV. It was buttery smooth and felt as responsive as my Gigabyte G34WQC monitor, but from less than two feet away I could see all those chonky, unsmoothed pixels.

That problem goes away when I am sitting 10’ away in my office recliner. I can’t make out those beefy pixels even when wearing the right glasses. It shouldn’t be a problem from your couch.

VRR isn’t working for me

I started my testing with my Ryzen 6800H Bazzite box. I learned that you have to force the HDMI port to HDMI 2.0 in the TV settings in order for my PC to pick up the 120-Hz modes, but once I did that, they worked perfectly. However, the Steam Deck sidebar interface kept telling me that VRR wasn’t available on my monitor.

I plugged my gaming PC into another HDMI port, and I had the same problem. In both cases I turned VRR on for the correct HDMI port in the TV settings.

pat@zaphod:~$ xrandr --props | grep vrr_capable
    vrr_capable: 1 
    vrr_capable: 0 
    vrr_capable: 0 
    vrr_capable: 0 
pat@zaphod:~$

My Gigabyte ultrawide is the only port showing a vrr_capable display. The rtings.com review of the Q6/Q651G says that FreeSync is supported. FreeSync will work over older HDMI ports, but the HDMI specification didn’t add its own VRR implementation until HDMI 2.1, and my GPU doesn’t support HDMI 2.1 with the drivers available on Linux. Perhaps the Q651G supports FreeSync and G-Sync, while the Q651F only supports HDMI VRR?
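
If you’re on Wayland and can’t lean on xrandr, the kernel exposes the same flag through sysfs. I believe the connector directories look like this, though the exact names depend on your GPU and ports:

```shell
# Print the VRR capability flag for every connector the GPU exposes.
# Connector directory names vary by GPU and port.
for conn in /sys/class/drm/card*-*; do
    echo "$conn: $(cat "$conn/vrr_capable" 2>/dev/null || echo n/a)"
done
```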

I am not going to give up on this, but I am not terribly upset. A high refresh rate with low latency was a hard requirement. I was excited about getting a TV with variable refresh rate, but it is significantly less necessary when plugged in to my desktop PC.

How bad is it to not have working VRR?!

I keep squinting my eyes while looking very closely for tearing, but it is hard to notice. I imagine this is half because a 120-Hz display only shows an individual torn frame for around 8 milliseconds, and half because the DLG that allows for 120 Hz is probably smearing that tear across two frames. The higher your refresh rate, the lower the odds are of two tears appearing in close to the same position.

The only time I can see tears is when they line up in precisely the same place. The Marvel animation when starting Spider-Man 2 flashes things fast and runs at exactly 250 frames per second. I can almost always see a tear right in the center of that fast page-flipping animation.

Yes, you can turn on V-sync to eliminate this, but that adds nearly one entire frame of additional latency and makes things even less smooth if your framerate dips. Since I’ve only noticed tearing a handful of times, I am definitely going to opt to have lower latency here!

The TCL Q6 isn’t a bad monitor unless you need to enable DLG for 120-Hz modes!

The last time I rearranged the desks in my office, I decided to put my L-shaped desk along the same wall as the TV. That let me line things up so the TV could be used occasionally as a second monitor. I would never try to do work on the old 40” 1080p display, and I most definitely won’t be doing any proper work with the 55” 80-DPI TCL TV.

I can fit most of what I need on my 34” ultrawide monitor, but sometimes it is handy to have a second display: somewhere to drop a small Discord window while you are gaming in full screen, or some extra screen real estate to monitor camera feeds while recording a video podcast interview.

The TCL Q6 is acceptable for these tasks, and I would say that text in my terminal and Emacs windows looks more than clean enough as well. I wouldn’t want to edit blogs sitting two feet from an 80-DPI display, but you could.

TCL Q6 Text at 4K with and without DLG

Text at 4K60 with DLG disabled (left) and DLG enabled (right), photographed using my Sony ZV-1 on a stationary tripod

The trouble is that small text looks horrible when you enable DLG. You need DLG to game at 120 Hz, and for some reason the TCL Q6 doesn’t disable DLG even when you’re driving it at 2160p60.

The DLG setting is toggled separately for each HDMI port, so if I were running my PC on one port and a PlayStation 5 on another, I would be in good shape. But I am using a single port, and I want a clean 2160p60 image while sitting at my desk and fast 1440p120 when I am gaming from the other side of the room. The only way I can currently do that is by slowly navigating through menus with the remote to toggle DLG.

This isn’t as big of a deal as that last paragraph might make it seem. I can just leave DLG disabled most of the time. Games that I would play on the TV where I actually need more than 60 Hz are the exception rather than the rule, though I do also need to enable DLG for latency-sensitive games like Dead Cells, because the rtings tests measured input latency at 10.4 ms at 60 Hz and 6.2 ms at 120 Hz. Dead Cells is the sort of game where that extra quarter of a frame could be the difference between surviving and dying!
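
That quarter of a frame isn’t hyperbole:

```shell
# 10.4 ms at 60 Hz versus 6.2 ms at 120 Hz, measured in 60-Hz frames
awk 'BEGIN { printf "%.2f frames\n", (10.4 - 6.2) / (1000 / 60) }'
```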

Listen. I am fully aware that I bought the cheapest gaming-focused TV I could find. I don’t expect it to be perfect for every situation.

Is DLG some sort of fake 120 Hz?!

It depends on how you look at it, but from my perspective the answer is “sort of!”

Your 60-Hz panel isn’t magically refreshing twice as fast when Dual Line Gate (DLG) is enabled. Instead, half the lines are physically refreshed on each pass. No individual line is updating more than 60 times per second, but half the lines are being updated every 1/120 of a second.

This has more than a little in common with interlacing on old CRT televisions and monitors. In a way, you are sacrificing half of your vertical resolution for increased update speeds and reduced input lag.

Does that mean you are effectively running at 2560x720? It should be a bit better than that, but as I talked about in the previous section, enabling DLG messes up the vertical resolution even on static images. That makes me think there is something subpar about TCL’s implementation of DLG, but I am not well-versed in what is technically possible to achieve with this.

I think there are two important questions: Does it feel like 120 Hz, and can you ACTUALLY see the reduction in resolution?

TCL Q6 in my office

I have been using the left corner of my TCL Q6 as extra monitor space when I watch YouTube videos

I sat two feet away from my TV and played Roboquest with my mouse and keyboard. Latency feels comparable to my 144-Hz Gigabyte G34WQC monitor, and flipping back to 60 Hz feels as crummy as I expected. The lower effective vertical resolution is painfully obvious when sitting this close to the TV while playing Roboquest.

What about when I am sitting across the room with a controller in my hand? Maybe I can see the lower effective vertical resolution if I pause the game, squint my eyes, and stare really hard. Otherwise, I don’t think I could tell you if I am playing Spider-Man Remastered at 2160p, full 1440p, or 1440p with DLG enabled from my comfy chair 10’ away.

For what it is worth, I got nearly the same score in Roboquest’s shooting range on my monitor and on the TCL Q6. I had never tried the shooting range before, and I didn’t practice at all. I just made one run on the TV, then another on the monitor. Not a super scientific test, but it was enough to convince me that the TCL Q6 was easily worth $300.

I should note here that DLG looks completely clean at 1080p120, but if you’re far enough away that 1080p is acceptable, then you’re already far enough away that you won’t see the muddiness that DLG causes at 1440p.

UPDATE: Weird problem with DLG

I don’t have enough understanding to describe this well. DLG appears to be adjustable separately on each input, but I don’t think it actually is. Sometimes it says DLG is on, but it obviously isn’t.

If I have DLG enabled on HDMI 2 for my PC, and I see that it is actually working, then switch to the Fire TV’s Netflix app, Netflix will usually play back the video with what appears to be a double-thick letterboxing. The aspect ratio seems correct, but the top and bottom of the movie will be cut off.

Popping over to HDMI 2 and disabling DLG fixes this problem.

I don’t know what is really going on here, but I thought I should post a warning. I’m not terribly grumpy about this glitch. I bought a gaming TV, and the TCL Q6 is an excellent gaming TV. I am learning that it does other stuff well, and I am enjoying these other functions. Other use cases are just a bonus for me, so I can’t complain too much if there’s a small settings issue.

Bumping into an FSR upscaling reality

In theory, it is better to run your display at its maximum resolution and let FSR, DLSS, or XeSS upscale your game to match. If your computer can run your game at 1920x1080, it should look better at 3840x2160 than it does at 2560x1440.

My upper limit is 2560x1440, because I want that 120-Hz refresh rate. I was having trouble keeping Marvel’s Spider-Man 2 running at 90 FPS or above. I had to dial down to FSR’s Ultra Performance mode, which means I was rendering the game at 854x480, and that was upscaled to 2560x1440. That is more than a little yucky.

I dropped my TCL Q6 TV’s resolution to 1920x1080, and I could get away with FSR’s Performance mode. That means I was rendering at 960x540, and that was upscaled to 1920x1080. My frame rate wasn’t identical, but it was close enough, and the Performance mode has roughly 26% more pixels to work with before upscaling.
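
The pixel math behind that choice, as a back-of-the-envelope check:

```shell
# Ultra Performance renders at 1/3 scale per axis, Performance at 1/2
awk 'BEGIN {
    up = 854 * 480;   # Ultra Performance from 2560x1440
    p  = 960 * 540;   # Performance from 1920x1080
    printf "%d vs %d pixels, %.0f%% more\n", up, p, (p / up - 1) * 100
}'
```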

TCL Q6 at 1440p60

I would be running my display at 2560x1440 if there were a mode in between where I could get a rendering resolution of 960x540, but that just isn’t an option.

I stayed down at 1080p for Spider-Man 2, but I wound up using FSR 3’s automatic scaling with a target of 105 frames per second. I don’t actually need Ultra Performance upscaling the majority of the time. The frame rate mostly only drops when I am swinging across the city as fast as I can go, and things move fast enough there that it barely matters what the render resolution might be.

You’re probably wondering how the game looks scaled up from 854x480. It looks like absolute butt if you are standing right next to the TV. Slightly blotchy weirdness all over the place, and you can see a fringe of odd smearing around the edges of Spider-Man as he moves around.

There is still some smearing around the edges of moving objects even with less aggressive upscaling settings, but you can’t see any of it from ten feet away.

How is the TCL Q6 for watching movies and TV shows?

It is for sure adequate. The built-in speakers do the job, but they aren’t amazing. The video output isn’t anything to write home about, but I can’t really complain too much.

I don’t expect the black levels of an OLED or a display with 1,000 individual dimming zones. The blacks tend to be a little gray. The reviews complain that this TV isn’t terribly bright, but I don’t get any sunshine in my office. Full bright is more than bright enough in here, and an entirely white screen lights up the entire room a surprising amount.

If I read and remember the rtings.com review well enough, the backlight is flicker free down to around 30% brightness. I have so far backed the backlight down to 70%, and I expect I will push things a little dimmer before I am done tuning. The TCL Q6 is a little too bright for my dimly lit office.

TCL Q6 in my home office

This is pretty much the view I have when playing games from my home office’s recliner

I picked the Gaming preset for my PC’s HDMI port because that seems to enable all the low-latency stuff. I picked the Movie preset for both the built-in Fire TV and the HDMI port of my Chromecast device. That Movie preset seems pretty comparable to my gaming monitor. I have my monitor set up with fairly natural, not overly saturated color.

NOTE: The Movie preset enabled a bunch of questionable stuff including HDR Enhancer and Local Contrast Enhancement, but worst of all, it set Motion Processing to low. That last one causes the so-called soap opera effect that most people hate. I didn’t notice it with 60-Hz content, but the first 24-Hz movie I tried to watch looked really broken! I am assuming that I tested Jellyfin’s framerate matching BEFORE switching to Movie mode.

The TCL Q6 does a fantastic job of seamlessly matching the frame rate of content that you are watching. All the movies and shows I have played so far in Netflix and Jellyfin have adjusted the display to 24 Hz, and I have watched YouTube videos play back at 24, 25, 30, and 60 Hz. The review on rtings.com says that the TCL Q6 does a good job of playing back 24 FPS content without judder, and it sure seems as though they are correct.

I think I am going to stick with saying that the TCL Q6 is adequate to a couple of notches above adequate. I most definitely will not be complaining when I get stuck in my office and have to watch a movie.

In fact, if you told me that I had to replace the TV in my living room today, and I wasn’t allowed to upgrade to a $2,500 OLED TV, I would probably just spend $520 on the 75” version of the TCL Q6 and call it a day. It is an upgrade over the 70” Vizio that we’ve had for nearly a decade now, and the TCL Q6 would do an admirable job.

Conclusion

A $300 55” TV like the TCL Q6 is going to involve compromises, and my opinion is that those deficiencies are all in precisely the right places for a gaming-first television.

My instinct here is that I need a few hundred words to summarize what I have written, but that feels like way too much information. The TCL Q6 is inexpensive, is as good or better for gaming than most TVs at double the price, and it is more than adequate for occasional TV- and movie-watching. What more needs to be said?

I’m not sure how long my quasi-review of the TCL Q6 will continue to be relevant. Televisions that can handle native 4K at 120 Hz will be at this price point in a year or two, and fewer of your computers will be limited to HDMI 2.0, so you won’t need to limit your output to 1440p120. For now, though, the TCL Q6 is a fantastic TV for this specific use case.

If this is in the future, and you want to know how my TCL Q6 has been treating me, you can join our Discord community and ask! We are a community of geeky homelab and NAS enthusiasts who also talk about 3D printing, gaming, and home automation. We are a friendly and pretty well-rounded bunch, so you should stop by and say hello!

Is A 6-Bay USB SATA Disk Enclosure A Good Option For Your NAS Storage?

| Comments

That title should be longer. I don’t want to exclude 4-bay and 8-bay USB SATA enclosures, but I didn’t want to waste so many extra words in the front of the title!

If you had asked me this question ten or twenty years ago, I would have laughed.

When I consolidated down from a desktop and a small laptop to a single beefy laptop sometime around 2008, I stuffed all my old 40-gigabyte IDE hard drives into individual USB 2.0 enclosures so I could continue to back up my important data to my old RAID 5 array. It did the job, but even with light use, I would get timeouts and weird disk restarts fairly often. I wish I had a photo of this setup.

Cenmate 6-bay USB SATA Enclosure in its home

The Cenmate 6-bay enclosure is about half as wide as the Makerunit 6-bay 3D-printed NAS case. My router-style N100 mini PC Proxmox server is my only mini PC that is WIDER than the Cenmate enclosure!

A lot of time and many USB upgrades, updates, and improvements have happened since those days. I have had a Raspberry Pi with a 14-terabyte USB drive at Brian Moses’s house since 2021, and I have also had a similar 14-terabyte USB hard drive set up as the storage on my NAS virtual machine since January of 2023. Both have been running without a single hiccup for years.

External USB hard drive enclosures are inexpensive, reasonably dense, and don’t look half bad. They also allow for a lot of flexibility, especially if you want to mix and match your homelab’s collection of mini PCs.

UPDATE: Since writing this blog, I have done a pretty good job torture testing the Cenmate enclosure using a stack of six SATA SSDs.

Let’s talk about the pricing!

A good way to compare storage servers is using price per 3.5” drive bay. I’ve usually said that anything under $200 per bay isn’t bad, and anything down at $100 per bay is quite frugal.

You can buy a two-bay toaster NAS with an Intel N100 for $230, which works out to $115 per bay. There is also the AOOSTAR 4-bay WTR PRO NAS with quite a bit of CPU horsepower for $539, and that’s $134 per bay. UGREEN has been pricing their small NAS boxes pretty aggressively. The UGREEN 4-bay regularly goes on sale for $500, and the 8-bay sells for $1350. That is $125 or $168 per bay, but you do get a CPU and network adapter upgrade out of the latter.

USB SATA enclosures seem to range from $25 to $40 per 3.5” drive bay. The 6-bay model that I ordered cost $182, which works out to $30 per bay. Of course, that isn’t directly comparable to a full NAS box from UGREEN or AOOSTAR on its own.

The Intel N100 mini PC that I plugged my enclosure into cost me $140. Adding that up works out to a delightfully frugal $53 per drive bay!
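
The arithmetic is easy to sanity-check in a shell. These are the prices quoted above, so treat the numbers as a snapshot rather than gospel:

```shell
# Price per 3.5" bay for the Cenmate-plus-N100 combo described above.
enclosure=182   # 6-bay Cenmate USB enclosure
mini_pc=140     # Intel N100 mini PC
bays=6
echo "$(( (enclosure + mini_pc) / bays )) dollars per bay"
```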

My most expensive mini PC is the Ryzen 6800H that I currently use for living-room gaming. Let’s assume we are buying a RAM upgrade to push that Acemagician M1 as far as it will go. We’d be up at about $450 with 64 gigabytes of RAM. That would put our 6-bay Ryzen 6800H NAS at $105 per bay. That is still a really good value.

USB drive enclosures keep you flexible

When you buy or build a purpose-built NAS server, you wind up locking yourself in. If you choose a case with 5 drive bays, then you’re probably going to have to swap all your gear into a new case if you decide you need 8 bays.

As long as I have a free USB port, I can plug in another 6-bay or 8-bay enclosure when I run out of storage next year. Many of the options on the market can be daisy chained, so you can plug one enclosure’s USB cable into the previous enclosure. Even if they didn’t, you could always buy a quality USB hub and have the same flexibility.

Daisy-chaining or using a hub will limit your total available bandwidth.

Cenmate enclosure during initial setup

The Cenmate enclosure is here hanging out with a Ryzen 6800H mini PC, an Intel N100 mini PC, and a Seagate 14-terabyte USB hard disk while I was loading it up with disks. It is roughly the same width as the Acemagician M1.

Outgrow your Intel N100 mini PC? Swap in a new mini PC with a Ryzen 5700U or Ryzen 6800H. The mini PC running your NAS virtual machine is acting up? Migrate the VM’s boot disk to another mini PC and move the USB enclosure. You aren’t locked in to any single configuration.

Not only do I have those options, but all my computers run Linux. If something goes completely wrong, I could carry the USB enclosure from my network cupboard to my desk, plug it into my desktop PC, and have immediate, fast, direct access to all my data. If there’s a fire, I can wander out of the house with my laptop and the drive enclosure, and I will have all my extra data with me in a hotel ready to be worked with.

Piling things on top of a drive enclosure in the homelab is pretty reasonable. The enclosure is roughly as wide as all but my largest mini PC, and that mini PC is 80% heat sink. If you go with an 8-bay enclosure, you should be able to fit two stacks of mini PCs on top if you tip the enclosure on its side.

There is a limit to how many mini PCs, 6-bay hard drive enclosures, and small network switches you can stack in your homelab before it gets unwieldy, but unlike a full 19” rack, you can almost always balance just one more mini PC on top if you have to!

Which enclosures did I look at?!

The first one I liked was an older Syba model that is available in 4, 6, or 8 bays. It looks older. It doesn’t support daisy-chaining, and it uses the older 5-gigabit USB 3 standard. It has enough unused space on the bottom that it should be able to fit another drive in the same space, and that feels like a bummer. Syba has some of the lowest pricing per bay, and they also have an eSATA port on some of their enclosures. I used to use eSATA pretty regularly, but USB 3.0 is faster, and I don’t have any eSATA ports available to plug it into.

Then I was looking at various enclosures from Yottamaster. Their enclosures carried the highest price tags. They look attractive, but Yottamaster doesn’t seem to have one model of enclosure available in different sizes. They all look completely different. Some models have daisy-chaining, some have 5-gigabit USB ports, some have 10-gigabit USB ports. My favorite thing about these was that they were easy to find on Aliexpress.

I decided to purchase an enclosure from Cenmate. Their lineup of enclosures with 2, 3, 4, 6, and 8 bays all look identical aside from height. They support daisy-chaining, and their newest models have 10-gigabit USB 3. You could save a few bucks and go with their older 5-gigabit enclosures, but I figured it would be better to start with a newer model.

Why did I choose a 6-bay enclosure?

I was trying to balance the amount of storage I would waste on parity against wasting SATA bandwidth.

Running a RAID 5 with a 4-disk enclosure would dedicate 25% of my storage to parity, while running a RAID 5 with a 6-disk enclosure would only eat up 16%. It also helps that smaller disks tend to cost less per terabyte than larger disks, though I will be spending a bit more on electricity.
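
The parity share of a RAID 5 is one disk out of N, so those percentages fall out of a one-liner (integer division rounds 16.7% down to 16%):

```shell
# RAID 5 dedicates 1 of every N disks to parity.
for n in 4 6 8; do
  echo "$n disks: $(( 100 / n ))% parity"
done
```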

Cenmate 6-bay USB enclosure on my desk

The Cenmate enclosure’s tool-free 3.5” trays are easy to use, and the latch mechanism is quite satisfying to operate!

A 7200-RPM hard disk might top out at 250 megabytes per second on the fast end of the platter and something as low as 120 megabytes per second on the inside tracks. A 10-gigabit USB 3 port can theoretically move nearly 1,000 megabytes per second.

That leaves us at an average speed of around 160 megabytes per second when all six drives have to be choochin’.
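
That average is just the USB link's practical throughput divided across the six drives:

```shell
# Roughly 1,000 MB/s of practical 10-gigabit USB bandwidth
# shared by six drives that are all busy at once.
usb_mb_per_sec=1000
drives=6
echo "$(( usb_mb_per_sec / drives )) MB/s per drive"
```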

What do you think? Is that a reasonable compromise between maximum speed and wasted parity space? I think it is fine. My NAS virtual machine will be bottlenecked by my mini PC’s 2.5-gigabit Ethernet ports anyway.

Nothing even manages to be apples and oranges

There’s a gap here. An N100 or N150 mini PC from Trigkey or Beelink can be found with a pair of 2.5-gigabit Ethernet ports just like a $500 4-bay UGREEN DXP4800, but the most costly 8-bay UGREEN NAS gets you an upgrade to a pair of 10-gigabit ports. Mini PCs with 10-gigabit ports are the worst combination of rare, large, or expensive.

If you have a need for 10-gigabit Ethernet ports, then a 6-bay or 8-bay UGREEN NAS might work out to a better value. My suspicion is that the Venn diagram of people who need 10-gigabit Ethernet and the people who can get by using slow mechanical hard disks would be two circles that are barely even touching.

What are you planning on doing with your big hunk of bulk storage? I watch the occasional movie, but I can easily stream video to every screen in the house with 1-gigabit Ethernet. Sometimes I dump 30 gigabytes of video off my Sony ZV-1, but the microSD card is also way slower than 1-gigabit Ethernet. I run daily backups both locally and remotely, and my remote backups finish in a reasonable amount of time over my 1-gigabit Internet connection, so I won’t notice the difference between that or a 10-gigabit local backup.

How do we know the USB enclosure won’t be junk?!

At the moment we have no real idea! My Cenmate enclosure only just arrived, but I am working on being as mean to it as I can.

I stuffed it full of spare 4-terabyte SATA disks that I had lying around my office. I plugged the Cenmate enclosure into one of my mini PCs, I set up a RAID 5, and I attached that RAID 5 to a virtual machine. I made sure the virtual machine is light on RAM so not much will be cached.

I fired up tmux, and I have one window continuously looping over a dd job writing sequentially to a big, honkin’ file. I have another window running dd that will be forever reading the RAID 5 block device sequentially. I have a third window running an old-school bonnie++ disk benchmark.

I don’t care how fast any of this goes. The two separate sequential tasks will be fighting the benchmark task for IOPS, so it is all going to run very poorly. What I do care about is whether I can make any disks or the USB SATA chip reset or error out.
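
Here is roughly what those three tmux windows are running. The mount point and md device are from my setup, so treat them as placeholders; the commands are printed here rather than executed, since they will happily hammer whatever array you point them at:

```shell
# The three tmux windows from the torture test described above.
# /mnt/raid and /dev/md0 are placeholders for your own array.
cat <<'EOF'
while true; do dd if=/dev/zero of=/mnt/raid/bigfile bs=1M count=50000 conv=fsync; done
while true; do dd if=/dev/md0 of=/dev/null bs=1M iflag=direct; done
while true; do bonnie++ -d /mnt/raid -u root; done
EOF
```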

I will feel pretty good about it when it survives for a couple of days. I will feel great about it after it has been running for more than a week.

How are things going so far?

The follow-up to this blog post will be a more direct review of the Cenmate unit, but it seems appropriate to include what I learned on the first day with the enclosure!

I have a box with five 4-terabyte SATA drives. These used to live in the homelab server I built for myself in 2015. My plan was to stick those in the enclosure along with an underutilized 12-terabyte drive to build a 6-drive RAID 5.

One of those 4-terabyte disks is completely dead, and I haven’t extracted the 12-terabyte drive yet. I was impatient, so for today I set up a quick RAID 5 across the first terabyte of the four good drives.

Every 2.0s: cat /proc/mdstat            proxmoxremote1: Mon Jun  2 16:41:09 2025

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdd1[2] sdc1[1] sdb1[0]
      2929290240 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [=>...................]  recovery =  8.4% (82099764/976430080) finish=94.5min speed=157679K/sec
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>

--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw 
  2  15   7  76   0| 937M   45k| 716B 1146B|   0     0 |7648  5732 
  2  14   6  78   0| 935M   29k| 769B 1068B|   0     0 |7487  5425 
  2  15   5  78   0| 950M   34k|2240B 1288B|   0     0 |7791  5753 

The enclosure is plugged into a 10-gigabit USB port on my Trigkey N100 mini PC, and mdadm during the RAID rebuild said I was hitting 480-megabyte-per-second reads and 160-megabyte-per-second writes. That is as fast as these old hard drives can go while building a fresh RAID 5 array. I also verified that smartctl is able to report on every drive bay.
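
If you want to watch a rebuild without eyeballing the whole file, the interesting numbers are easy to grep out of /proc/mdstat. This sketch uses the captured sample from above in place of the live file; point the same patterns at /proc/mdstat on a real box:

```shell
# Extract rebuild progress, ETA, and speed from /proc/mdstat-style output.
# The sample line below is the one captured during my rebuild.
line='      [=>...................]  recovery =  8.4% (82099764/976430080) finish=94.5min speed=157679K/sec'
echo "$line" | grep -o 'recovery = *[0-9.]*%'
echo "$line" | grep -o 'finish=[0-9.]*min'
echo "$line" | grep -o 'speed=[0-9]*K/sec'
```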

I pulled my spare 12-terabyte drive from my desktop PC and stuck it into a free bay in the Cenmate enclosure, and running dd to read data sequentially from five drives got me up to 950 megabytes per second. I am just going to call that 10 gigabits per second.

The Cenmate enclosure is louder than I hoped yet quieter than I expected. I usually measure the sound of my office with the meter sitting on the desk in front of me, because I care about what I can hear while working. Usually my idle PC’s quiet fans put me at around 36 dB.

The Cenmate is off to my side just barely in arm’s reach, and its fans push that up to 45 dB. I get a reading of 55 dB when I hold the meter up next to the unit. It isn’t ridiculously loud, but I will be happy to move it to my network cupboard at the end of the day!

The conclusion so far

So far, so good! I paid around $30 per bay for a USB SATA enclosure with six 3.5” drive bays, and it is for sure able to move data four times as fast as a 2.5-gigabit Ethernet port. It is cheap, fast, dense, and it even looks nice and clean. We’ll see if it winds up being reliable.

I have already moved the Cenmate enclosure to my network cupboard. Long-term testing is progressing, but it is progressing slowly. I keep finding out that my old hard disks are starting to have bad sectors or other weird problems, so I won’t be able to start properly beating on a full enclosure for a couple of weeks.

I believe that USB hard drive enclosures are a great way to add additional storage to your homelab, especially if you need space for big video files or more room for backups. The enclosures are inexpensive, extremely dense, and it sure looks like they’re going to wind up being reliable as well.

Have you been using a USB enclosure for your homelab’s NAS storage? Or are you a diehard SATA or SAS user? Join our Discord community! We’d love to hear about your successes or failures with USB storage!

Using A Ryzen 6800H Mini PC As A Game Console With Bazzite

| Comments

It has been almost a year since I wrote about using a $140 Intel N100 mini PC as a game console and Steam-streaming device in the living room. I don’t know what part of this arrangement tickles so many people, but that post has been in my top 10 most-read blogs ever since it was published. This is weird to me, because this is not a gaming blog, and my game-related posts don’t usually get many views.

Bazzite Acemagician M1 Ryzen 6800H Gaming Mini PC with the GameSir Cyclone 2

Last year, I tested a mini PC for gaming that was actually destined to live in my homelab. This year, I tested a Ryzen 6800H mini PC in my homelab before moving it to its permanent home in the living room.

What’s the tl;dr for this? Bazzite is delightful. It functions very much like SteamOS on the Steam Deck, but it also installs things like Decky and EmuDeck for you. Bazzite installed very easily on my Acemagician M1, and my Ryzen 6800H has enough iGPU horsepower to run Grand Theft Auto 5 Enhanced at nearly 60 frames per second with reasonable settings.

Why a mini PC? And why a Ryzen 6800H?!

I was looking for the sweet spot where performance and price meet for the best value. I wanted enough power to play a good percentage of my existing Steam library without breaking the bank.

The Intel N100 mini PC I tried out for gaming last year was fun! It can play pretty much any 2D game or top-down shooter I could think of, and it could emulate anything up to around the Nintendo Wii. That little $140 box also did a fantastic job at streaming Steam games from my real gaming PC over the network.

I’m not going to be playing any first-person shooters in the living room. I will continue to use a mouse and keyboard for those at my desk. That said, there are a lot of more modern games in my library that would be fun in the living room with a controller. I wanted to be able to run games like Red Dead Redemption 2 and Grand Theft Auto 5.

It also helps that the Ryzen 6800H is in the same league as the Steam Deck. The Steam Deck has faster quad-channel memory giving it a slight boost, but the 6800H pulls ahead due to having 50% more iGPU cores than the Deck. Even better, the Ryzen 6800H mini PC can run at 45 watts, so it can clock a little higher to run games just a little better.

If you see people having success playing a game on the Steam Deck, then the game will run about 20% better on a Ryzen 6800H with dual-channel RAM.

I feel the Ryzen 6800H is the sweet spot between price and performance for a Steam game console

The prices of mini PCs with faster iGPUs go up faster than their performance increases. You can spend an extra $150 on a mini PC with a Ryzen 7840HS or 8845HS to upgrade to the 780M iGPU, but that seems to only be roughly 20% faster.

The most important question to ask yourself is whether the faster mini PC will allow you to play games that you wouldn’t be able to run otherwise. There are a ton of games that wouldn’t run on my $140 Intel N100 mini PC, while the $309 Ryzen 6800H opens up a whole slew of newer games for me to play.

The prices on mini PCs go up rather sharply after the 6800H. Those more expensive mini PCs do come with more than just incremental CPU and GPU upgrades. You get more RAM and more storage.

The trouble is that you will be paying $100, $200, or even $400 more for a 20% or 30% boost in frame rates, but you won’t make use of that extra RAM while gaming.

Things get rather interesting outside the mini PC space once you start pushing past the $500 mark. You could build a mini-ITX gaming PC around a $220 Intel Arc B570 GPU and absolutely blow any mini PC out of the water. It’ll be a little bigger, but it will be upgradeable and oh so much faster!

Who cares about Grand Theft Auto 5! That ran on my PlayStation 3!

This is true. I played through the story, and I played online with my friends on my PlayStation 3. I can tell you that Grand Theft Auto 5 on my mini PC is a very different experience.

The PlayStation 3 could only render the game at 1280x720 and scale that up to 1920x1080 with a basic upscaler, and it couldn’t even maintain 30 frames per second. I am rendering the game at 1920x1080 on my mini PC. I have the settings dialed in to where the frame rate stays at around 60, but I have been in situations where things dip into the low fifties.

There is definitely some room for dialing things up a bit more.

Definitely install Grand Theft Auto 5 Enhanced instead!

I wound up installing the enhanced version of GTA 5 yesterday, and it runs well and looks better than the legacy version. It supports FSR 3 natively, so I don’t have to use Gamescope to upscale using the much worse FSR 1 upscaler.

I keep turning up new knobs that make the game look nicer without noticeably dropping the frame rate. The game stays well above 60 frames per second if I set FSR 3 to performance mode, but that is rendering the game at around 1280x720, and it is extremely obvious that the resolution is so low.

GTA5 Enhanced on my 6800H Bazzite gaming mini PC

This is about as low as the frame rate tends to go in GTA5 Enhanced on my Ryzen 6800H Bazzite mini PC

The game stays mostly in the mid fifties when FSR 3 is set to balanced, and it looks a lot better. I think that is a good tradeoff.

There are no lighting, shadow, or texture quality settings that I can dial down that bring the FPS up over 60, but I was able to push the lighting to high, the shadows to soft, and enable ambient occlusion without losing any performance. There are probably still settings I can push a notch higher without losing performance.

You might be better served by a Steam Deck!

The Steam Deck can run just about any game my Ryzen 6800H mini PC can run, and the cheapest Steam Deck only costs $100 more. What is the trade off there?

The Steam Deck is portable. It has a screen, a battery, and built-in controller, so you can play it on an airplane. You can purchase an inexpensive dock to connect the Deck to both power and your TV at the same time, so you can use the Steam Deck just like I am using my mini PC, but you would retain the option to pick up the Steam Deck and walk away.

My mini PC is faster, came with twice as much storage, and I saved $100, but I suspect I would be having more fun with a Steam Deck.

NOTE: While I was writing this, refurbished Steam Decks with 256 gigabytes of storage showed up in Valve’s store for $319.

Be careful choosing your mini PC!

My Acemagician M1 came with a single 16-gigabyte SO-DIMM installed. I wound up buying a 32-gigabyte stick of DDR5 for $72 to upgrade one of my Intel N100 mini PCs, and I moved its old 16-gigabyte stick to my Acemagician M1 to upgrade it to dual-channel RAM.

Using both channels doubles the available memory bandwidth. This isn’t a big deal in my homelab, because most processes aren’t held back all that much by a single channel. Gaming with an iGPU requires every ounce of memory bandwidth you can find.

I literally doubled my frame rates in Grand Theft Auto 5 when I installed the second SO-DIMM. You can buy a 16-gigabyte DDR5 SO-DIMM for $40. That would bring my total investment up to $350. The link leads to the same SO-DIMM I am using in my own Acemagician M1.

Other mini PCs on Amazon specifically list that they ship with two 8-gigabyte SO-DIMMs. That is plenty of RAM for low-end gaming, and two SO-DIMMs is what you want.

You don’t have to shop around or think about this at all if you buy a Steam Deck.

I am keeping my eye out for Ryzen 6800H or 6900HX mini PCs that claim to ship with two 8-gigabyte sticks of RAM. Here is what I have so far:

Bazzite has been delightful

Bazzite is kind of like SteamOS on steroids. Both are immutable Linux distributions, which means you can’t accidentally goober up your base install. Both boot directly to Steam’s big-picture mode. Both run all your games on Linux using Proton under Gamescope. Both have a quick menu that lets you adjust frame-rate limits and FSR upscaling, and put limits on your power usage.

Mario Galaxy 2 on EmuDeck

I didn’t play much Mario Galaxy on my Ryzen 6800H yet, but I did verify that it runs at 60 FPS with 3x render resolution for 1080p, and that there is a ton of wiggle room for turning up settings.

Bazzite builds on that. While SteamOS now only supports the Steam Deck, Bazzite has images for AMD, Intel, and Nvidia GPUs. Bazzite also lets you click a button during setup to install things like Decky, which can inject frame generation into games that don’t support it natively, and EmuDeck, so you can run NES, PlayStation, Wii, and other old console games.

I have barely scratched the surface with Bazzite. I suspect it deserves its own blog post, but I’d like to dig a little deeper before I attempt to write it!

The Ryzen 6800H has lower Steam streaming latency than my Intel N100

At this point, I think I am locked in to my test game for measuring Steam Link latency. I used Gunfire Reborn at 1080p to test the first Intel N100 gaming mini PC, so that is what I will continue to use. That will help keep things fair.

Device      | Wired | WiFi
------------|-------|------------
Ryzen 6800H | 6 ms  | 8 to 11 ms
Intel N100  | 8 ms  | 11 ms
Steam Link  | 16 ms | 15 ms

I think it is important to mention that the original Steam Link hardware from 2018 is still fantastic. So many games are just fine with an extra 16 milliseconds of latency, especially if you are using a controller. I played some Red Dead Redemption 2 with 70 milliseconds of latency over T-Mobile’s 5G network. Red Dead is a slow-paced game, so I could only barely tell that there was additional latency. The 16 milliseconds of the Steam Link hardware from seven years ago is imperceptible here.

That said, the 6 or 8 milliseconds of these mini PCs completely goofs up my timing when playing Dead Cells. You’d be likely to have the same problem playing Super Mario Bros. for the NES with similar additional latency.

Steam Link Latency Bazzite 6800H

It sure is hard to read the information on Steam game streaming’s statistics output, isn’t it?!

You shouldn’t entirely trust my latency measurements over WiFi. Is there REALLY something better optimized in the hardware of my Ryzen 6800H mini PC? Does this WiFi chipset just get along better with the access point in my living room? Am I just having a luckier day with interference almost a year after testing the Intel N100? Will the radio situation in your home be equivalent to mine? Probably not.

I am pretty excited to see 8 ms of latency while streaming to the TV in the living room. Every time I fire a game up, it will start at around 11 ms of latency before settling in to 8 ms within the first ten or twenty seconds. It seems to do a good job staying there, too.

As awesome as it is that my Acemagician M1 can run Grand Theft Auto 5 Enhanced, it is the sort of game where I wouldn’t notice an additional 6, 8, or even 16 milliseconds of latency. I can run the game maxed out on my gaming PC and stream it to my living room.

Do you want to stream games from your gaming rig or run them in the living room?

I say you should be prepared to do both.

I can’t be using my desktop PC in my office and gaming in the living room simultaneously, but you may have a situation where you need to do work while your kids play games. They can play a lot of games on a beefy enough mini PC without interrupting your work, but they can still stream the fancier games from your gaming PC at other times.

For my purposes, the games that are the most latency sensitive aren’t the ones that require an overpowered gaming rig. I was able to play Dead Cells and every platformer up to New Super Mario Bros. Wii on the $140 Intel N100.

That isn’t to say that I don’t play heavy modern games where latency isn’t important. Those games also happen to be the games where I am going to be using a mouse and keyboard. I won’t be playing those in the living room.

Why put a mini PC in the living room instead of a PlayStation or Xbox?

If you are anything like me, you have a huge backlog of unplayed or underplayed games in your Steam library. I have collected over 2,000 games by purchasing bundles. Usually I play one or two games in a bundle, but there are often games that look like they’d be fun with a controller on the couch.

Having a comfortable way to play couch-friendly games is something I have been missing for quite a number of years, and being able to play my back catalog is going to be awesome.

Maybe you don’t have a Steam library at all, and you don’t know anything about Steam. One of the most awesome things about Steam is the sales. You can get deep discounts on older games several times a year, and sometimes those games aren’t even that old. You’ll probably save quite a bit of money buying your games on Steam instead of in Sony’s or Microsoft’s stores.

Don’t I need a controller?!

Yes. I have more than a few DualShock 4 controllers for the PlayStation 4, so I started out using one of those. I have been a fan of first-party PlayStation controllers for a long time. They work great with Linux, and pair right up with Bazzite. Steam understands them. Most importantly for me, though, is that the d-pad on Sony controllers is quite good.

That said, I have been itching to try one of the fancier controllers made by GameSir, and I have been having a problem. My DualShock 4 controller kept losing signal and dropping inputs while sitting 15’ away from the TV in the living room. This isn’t a new problem. Our Nintendo Switch has trouble, too.

I figured spending $50 on a GameSir Cyclone 2 with its 1,000-Hz USB dongle and fancy microswitch buttons and D-pad would be a good solution to this problem. My connectivity wasn’t perfect, but it was immediately better. Putting the GameSir dongle on a short USB extension and positioning it 6” from the Acemagician M1 solved all my problems.

The GameSir controller is fantastic. I wrote a long-winded blog post about it, but the tl;dr is that it is a nicer controller than anything Sony makes, and it costs $10 to $20 less. If you’re starting from scratch, then I think this is the place to start.

Final Thoughts: The Future of Living Room Gaming is Flexible

So, where does all this leave us? It’s clear the landscape of accessible gaming is shifting. We’re no longer limited to dedicated consoles or expensive gaming rigs to enjoy a great experience in the living room. The Ryzen 6800H mini PC, paired with the magic of Bazzite, offers a compelling blend of power, flexibility, and value. While the Steam Deck remains a fantastic, portable alternative, the mini PC route opens doors for a dedicated, potentially more powerful, and customizable setup.

Ultimately, the “best” solution depends on your needs. Do you prioritize portability? Value a plug-and-play experience? Or crave the freedom to tinker and optimize? There’s a fantastic option out there for everyone.

But don’t just take my word for it! The world of mini PCs, emulation, and Steam streaming is constantly evolving, and it’s much more fun to explore it together. Come join the conversation in our Discord community! Share your own experiences, ask questions, get help with your builds, and discover new gaming possibilities with fellow enthusiasts. We’d love to have you!

Let’s build the perfect living room gaming setup – one mini PC at a time.

Proxmox On My New Acemagician Ryzen 6800H Mini PC And Jellyfin Transcode Performance

| Comments

The hardware running Proxmox around my homelab is a little slow or a little outdated. I have an Intel N100 mini PC and a Ryzen 3550H mini PC here at home, and I have another Intel N100 mini PC off-site at Brian Moses’s house. The N100 machines are modern but lean towards the slow and power-sipping side. The Ryzen 3550H is basically a smaller, slower, older Ryzen 5700U.

I have been keeping my eye on various Ryzen 6000 mini PCs. They aren’t exactly bleeding edge, but they have a rather powerful iGPU, and they have more modern CPU cores than the Ryzen 5700U in my laptop or a Ryzen 5800U.

Installing Proxmox on my Acemagician M1 Ryzen 6800H Mini PC

My Ryzen 6800H mini PC while I am installing Proxmox. It is in my home office sitting on top of my off-site Trigkey N100 Proxmox server and its 14-terabyte external USB storage.

I have been waiting to see a nice specimen drop under $300, preferably one with 2.5-gigabit Ethernet and a pair of m.2 slots. If it weren’t for all the upcoming tariffs stirring up trouble for us, I would have expected Ryzen 6600U or Ryzen 6800H mini PCs to take the $250 price point away from the Ryzen 5800U mini PCs before summer ends.

I got everything I wanted except the price. I saw the Acemagician M1 on sale for $309, and I just had to snatch one up. I don’t really NEED to expand my homelab, but that is definitely a good enough price for me to be excited enough to do some testing!

Should I sneak in a tl;dr?

Like with most interesting things, the value proposition of a Ryzen 6800H mini PC can be less than simple.

Let’s compare it to my lowest-cost Intel N100 Proxmox node. The Ryzen 6800H CPU is nearly four times faster, can fit two or three times as much RAM, and has a faster Ethernet port, but the N100 manages to transcode around 50% faster. At today’s prices, the Ryzen 6800H only costs a hair more than twice as much as the Intel N100.

Proxmox on the Ryzen 6800H

Do you value that extra video-transcoding performance? Maybe you should save some cash and add an Intel N100 mini PC to your homelab, especially when you consider that the Ryzen 6800H burns 50 watts of electricity while transcoding. The Ryzen 6800H is just fast enough to transcode 4K 10-bit tone-mapped video in real time for at least two Jellyfin clients.
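
Jellyfin drives that transcoding through ffmpeg and VA-API on the iGPU. If you want to reproduce the workload by hand, a sketch of a roughly equivalent tone-mapped transcode looks something like this. The input file name, bitrate, and render node are placeholders, and the command is printed rather than executed:

```shell
# Hardware tone-map and downscale of a 4K HDR file on the iGPU via VA-API.
# movie-4k-hdr.mkv and /dev/dri/renderD128 are placeholder names for
# your own file and render node.
cat <<'EOF'
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi -i movie-4k-hdr.mkv \
       -vf 'tonemap_vaapi=format=nv12,scale_vaapi=w=1920:h=1080' \
       -c:v h264_vaapi -b:v 8M output-1080p.mkv
EOF
```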

Maybe that is just the right amount of video-encoding performance for your needs. If you value that extra CPU chooch, then maybe you should splurge for a Ryzen 6600U or 6800H.

Do you have to watch out for Acemagician?

Maybe. There are reports that Acemagician installs spyware on the Windows 11 image that ships on their hardware. I never booted Windows 11 on mine. The very first thing I did was install Proxmox, so this didn’t matter to me at all.

We are still at a point where the handful of Goldilocks mini PCs don’t tend to go on sale at the lower price points. There are a lot of mini PCs with gigabit Ethernet and two m.2 slots, or with 2.5-gigabit Ethernet and one m.2 slot. Sometimes you even get a pair of Ethernet ports, and sometimes BOTH are 2.5-gigabit Ethernet ports. Finding the right combination for a good price can be a challenge!

I could see why you might want to vote against Acemagician with your wallet, but this was the correct porridge for me. It would have been nice if it had a second 2.5-gigabit Ethernet port, but that wasn’t a deal-breaker for me at $309.

Ryzen 6800H at idle

Home Assistant says that my power-metering smart outlet reads between 6.1 and 7.5 watts most of the time while my Acemagician M1 is sitting there waiting for a task, but it shoots up to a whopping 50 watts while transcoding video!

There is an oddity, though. Mine shipped with a single 16-gigabyte DDR5 SO-DIMM. I was expecting a pair of 8-gigabyte SO-DIMMs.

On one hand, that means I didn’t acquire a pair of worthless 8-gigabyte DDR5 SO-DIMMs that would be destined for a landfill. On the other hand, I am thinking about using this particular mini PC as a gaming console in the living room, so I could really use that dual-channel RAM for the iGPU. Not only that, but my single-channel RAM might be having an impact on my Jellyfin testing.

OH MY GOODNESS! This isn’t unexpected, but you absolutely need dual-channel memory for good 3D-gaming performance on your Ryzen 6800H. My FPS doubled in most games when I dropped in a second stick of RAM, and some weird regular stuttering in Grand Theft Auto V completely went away.

I should also say that taking apart the Acemagician M1 was the opposite of a delight. They hide the screws under the rubber feet, and when you pop the easy side of the lid off you are greeted with the big cooling fan. You have to finesse the entire motherboard out of the shell to reach the memory and NVMe slots underneath.

The rest of this blog will be about the Ryzen 6800H and not specifically my Acemagician Mini PC

The Ryzen 6800H is overkill for my personal homelab, but I wanted to be able to see how my new mini PC might handle some light gaming duties in the living room. Something like the 6-core Ryzen 6600U would have been a better fit for my homelab, and you can find those at better prices, but you get twice as many GPU cores out of the 8-core Ryzen 6800U or 6800H.

That isn’t a big deal for your homelab. The smaller iGPU probably has just as much Jellyfin transcoding performance as the heavier iGPU.

I already said some of this in the tl;dr. The Ryzen 6800H is roughly four times faster than an Intel N100 and maybe 25% faster than a Ryzen 5800U.

All mini PCs for your homelab are a good value

This statement is mostly true. You should make sure you’re buying when there is a sale, because there is always at least one mini PC brand running a sale on Amazon on any given day. You may have to wait to get a good deal on exactly the specs you want, but there’s always sure to be a deal coming up. We keep an eye on our favorite mini PC deals in our Discord community.

I have been doing a bad job keeping the pricing in my mini PC pricing spreadsheet up to date. When I last updated it, you could get an Intel N100 mini PC for $134, a Ryzen 5560U for $206, or a Ryzen 6900HX for $439. Each of those is roughly twice as fast as the model before it, and each costs around twice as much. The prices and performance don’t QUITE map out that linearly if you plot them on a graph, but none would stray that far from the line.
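You don’t even need a graph to see how close to the line those machines sit. Dividing each price by its rough performance multiplier (using the N100 as the 1x baseline, and treating the 5560U and 6900HX as roughly 2x and 4x) gives a dollars-per-N100-equivalent figure; the multipliers here are the approximate ratios from my notes, not precise benchmarks.

```python
# Dollars per unit of relative CPU performance, using the Intel N100 as
# the 1x baseline. The 1x/2x/4x multipliers are rough approximations.
prices = {
    "Intel N100": (134, 1),
    "Ryzen 5560U": (206, 2),
    "Ryzen 6900HX": (439, 4),
}

for name, (price, perf) in prices.items():
    print(f"{name}: ${price / perf:.0f} per N100-equivalent")
```

The three figures land within about 25% of each other, which is why none of these machines is an obviously bad deal.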

We haven’t seen deals that good on an Intel N100 or lower-end Ryzen 5000-series mini PC in a while. You’re going to wind up paying $150 or more today, possibly closer to $200. And there aren’t many 6-core Ryzen 5000-series mini PCs around now, so you have to pay a bit more for an 8-core Ryzen 5800U.

What’s exciting today is that the 8-core Ryzen 6000 mini PCs with 12-core RDNA2 iGPUs are four times faster than an Intel N100 or Intel N150 mini PC while only costing a bit more than twice as much.

Lots of small Proxmox nodes, one big one, or something in between?!

One of the cool things about mini PCs is that you can mix and match whatever assortment of servers you might need.

Do you want to save money on your electric bill? All these laptop-grade CPUs consume similar amounts of power when they aren’t doing any serious work, so it might be better to splurge on one overpowered mini PC that idles at 9 watts, because four Intel N100 boxes will each idle at 7 watts.

NOTE: Don’t just assume that someone else’s idle numbers will exactly match your own. Cramming four times as many virtual machines onto a Ryzen 6800H just because it has four times the CPU and RAM of an Intel N100 also means that you have four times as many opportunities for mostly idle virtual machines to keep the CPU awake. We aren’t always comparing apples to apples.

Proxmox Datacenter Manager

That is 9 watts vs. 28 watts. That difference only adds up to around $20 per year where I live, but that might be enough of a difference in power consumption to pay for a mini PC for someone in Europe over the course of 3 or 4 years.
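The arithmetic behind that estimate is simple enough to sketch. The $0.12-per-kWh rate below is my assumption to make the math land near my $20 figure; plug in your own electric rate.

```python
# Back-of-the-envelope cost of an always-on load.
# The $0.12/kWh rate is an assumption; substitute your own tariff.
HOURS_PER_YEAR = 24 * 365


def annual_cost(watts, dollars_per_kwh=0.12):
    """Yearly electricity cost of a load that runs 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * dollars_per_kwh


# Four N100 boxes at 28 watts total vs. one big node at 9 watts.
delta = annual_cost(28) - annual_cost(9)
print(f"${delta:.2f} per year")  # roughly $20 at $0.12/kWh
```

At European electricity prices that are two or three times higher, the same 19-watt gap adds up to $40 to $60 per year.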

On the other hand, you may have some vital services that need to run alongside problematic ones. Maybe your home router runs in a virtual machine, and every once in a while your Jellyfin or Plex container goofs up your iGPU and requires a reboot of the host. You probably don’t want your home network going down just because you had to fix a problem with Plex.

You have the option of moving Plex and some less vital services over to their own inexpensive Intel N100 mini PC while running the rest of your homelab on a mid-range or high-end mini PC. You have a lot of flexibility in how you split things up.

How is the performance?

I have been running Geekbench 5 on all my mini PCs and keeping track of the scores, but why Geekbench 5? I didn’t wind up buying Geekbench 6, because I am unhappy that Geekbench no longer includes an AES test. I have been extremely interested in improving my potential Tailscale encryption speeds, so this number has been a good indicator of whether or not a particular CPU would be a good upgrade for me.

It also helps that I have all sorts of historical Geekbench 5 scores in my notes. That makes it easier for me to compare older machines to my current hardware.

| Mini PC | Single Core | Multi Core |
|---------|-------------|------------|
| Trigkey Intel N100 DDR4 | 1,053 | 2,853 |
| Topton Intel N100 DDR5 | 1,002 | 2,786 |
| Minisforum UM350 Ryzen 3550H | 955 | 3,215 |
| Acemagician M1 Ryzen 6800H 1x16GB | 1,600 | 7,729 |
| Acemagician M1 Ryzen 6800H 2x16GB | 1,646 | 9,254 |

The multi-core score did improve by about as much as I would have expected, but my multi-core score is lower than many of the mini-PC scores in Geekbench’s database. Other people are near or above 10,000 points.

We should probably also talk about Jellyfin transcoding performance. Unlike the GCN5 iGPU in processors like the Ryzen 3550H or 5800U, the Ryzen 6800H’s RDNA2 iGPU supports hardware tone mapping. This is important today because most content that you download on the high seas will be 10-bit HDR video. If you need to play back on a non-HDR display, then you will want Plex or Jellyfin to map the content down to 8-bit for you. The Intel N100 and Ryzen 6800H can both do that for you.

I played my usual 4K 10-bit test movie, and my Ryzen 6800H was transcoding at between 42 and 56 frames per second. It was also burning 50 watts of electricity as measured at the power outlet while transcoding.

Ryzen 6800H Jellyfin Performance

I am not sure why this is the only Jellyfin encoding screenshot I have saved at 51 FPS!

The Intel N100 can manage 75 frames per second while transcoding the exact same movie. I don’t believe I measured power consumption while transcoding on the N100, but both of my N100 mini PCs top out at around 20 watts maximum. The Intel N100 is faster and more efficient at this task than the Ryzen 6800H.
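Those frame rates translate directly into a floor on how many clients each box can serve at once, assuming typical 24 fps source material. This is a conservative estimate, since total throughput rises when transcoding multiple videos.

```python
# Conservative floor on simultaneous real-time Jellyfin transcodes,
# assuming 24 fps source material (an assumption, not a measurement).
SOURCE_FPS = 24


def min_streams(transcode_fps):
    """How many real-time streams the measured single-video rate guarantees."""
    return transcode_fps // SOURCE_FPS


print(min_streams(50))  # Ryzen 6800H at ~50 fps -> 2 streams
print(min_streams(75))  # Intel N100 at 75 fps -> 3 streams
```

That is where my “at least two Jellyfin clients” figure for the Ryzen 6800H comes from.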

That isn’t the actual performance limit for either machine. When Jellyfin is transcoding two or more videos, the total throughput of all the videos will exceed the single-video maximum.

Wrapping Up

Diving into the world of mini PCs for your homelab is a fascinating exercise in balancing power, efficiency, and price. The Acemagician M1 with its Ryzen 6800H offers a significant step up in processing power compared to the Intel N100. While it isn’t perfect (the fiddly disassembly and the single stick of RAM were minor inconveniences), the performance gains are undeniable.

Ultimately, the “best” mini PC truly depends on your specific needs and priorities. Do you prioritize power efficiency and low cost? The N100 is a fantastic choice, especially if your mini PC will spend many hours each day transcoding video! Need a bit more punch for demanding services or memory-heavy workloads? A Ryzen 6600U or 6800H might be the sweet spot.

We’ve only scratched the surface here, and the mini PC landscape is constantly evolving. If you’re building your own homelab, debating upgrades, or just enjoy geeking out over hardware, we’d love to have you join our community!

Come hang out with us on the Butter, What?! Discord server! We share deals, troubleshoot issues, discuss projects, and generally discuss all things homelab and DIY NAS. Share your setups, ask questions, and learn from others – we’re a friendly bunch who love to help. We’re always swapping tips and tricks on finding the best hardware, and specifically discussing optimal configurations for different homelab services, so you’ll be among the first to know about the next great mini PC deal!

Charging Your 8 KHz Wireless Gaming Mouse Using Magnets

| Comments

I have been gaming with one of my Li’l Magnum! fingertip gaming mice for the last three months. The model I am currently using weighs 16.4 grams and is built using the internals from a VXE Mad R mouse.

This is my lightest Li’l Magnum! so far, and the VXE Mad R is a fantastic value. For $43 you get a PAW3395 sensor, a 200-mAh battery, and an 8 KHz receiver. It is that last part that I am particularly excited about, but gaming with your polling rate dialed up to 8,000 Hz drains your mouse’s battery fast.

I will give you the tl;dr right here in the intro. I spent $10 on an inexpensive pair of magnetic USB-C charging adapters. I stuck one of the USB-C ends to the back of my monitor, and I installed one of the 0.8-gram magnetic doodads in my mouse. That brought my mouse up to 17.2 grams, but I can just dock it up on my monitor when I’m not using it, so I never have to think about the battery again.

I uploaded an STL to MakerWorld and Printables!

I cheated. The magnetic doodads arrived, and I immediately ran a cable and stuck the doodad behind my monitor with a big glob of blue tack. That was enough to let me try out my new dock, record a video, and figure out if this was a good idea.

It is a good idea. I like it a lot. Everyone who stops by the house seems to get a kick out of it. So I designed a simple universal L-shaped bracket to hold my magnetic charger up on my monitor.

The bracket is pretty big. I wanted to have a good amount of surface area for the double-stick tape to get a solid grip.

How is the battery life at 8 KHz?!

I am going to tell you that I haven’t done proper science. I don’t know how many actual hours of use the mouse can manage on a single charge. I just made sure to play some games every day, kept an eye on the battery level, and I’d call it done when the percentage dropped low enough that I figured it would die on me in the middle of gaming the next day.

I don’t use my fingertip mouse unless I am gaming.

My VXE Mad R could make it four days at 8K, eight days at 4K, and significantly longer at 2K or 1K polling.

I think four days is quite reasonable, but it is a weird schedule to remember when I need to plug my mouse in to top it off.
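My two actual data points (four days at 8K, eight days at 4K) happen to fit a naive model where battery life scales inversely with polling rate. This is a guess rather than a measurement; idle time, sensor load, and the radio all muddy the real numbers.

```python
# Naive extrapolation from two observed data points. Assumes battery
# life scales inversely with polling rate, which is only approximately
# true in practice.
def estimated_days(polling_hz, days_at_8khz=4):
    return days_at_8khz * 8000 / polling_hz


for hz in (8000, 4000, 2000, 1000):
    print(f"{hz} Hz -> about {estimated_days(hz):.0f} days")
```

The model predicts two weeks or more at 2K and 1K, which matches my vague “significantly longer” experience.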

Do you really need 8K polling?!

Probably not, but I say that every little bit helps. A normal gaming mouse polls for changes 1,000 times each second. That means that when you click the button it may take as long as one millisecond for your game to register that you’ve decided to fire your weapon. That is an imperceptible amount of time.

Li'l Magnum! charging in its magnetic dock

When you dial things up to a polling rate of 8K, you drop that maximum to 0.125 milliseconds. This is also an imperceptible amount of time.

This might matter when you are playing against other people. You aim at the other person, and they aim at you. You see that you are staring directly at one another. You have identical hardware. You click at the same time, but their mouse is set to 8K and yours is set to 1K. You died almost one full millisecond before the game registered your click.

This is assuming that your mouse hardware is doing a good job, and that it isn’t lying to you.
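The worst-case delays quoted above are just the reciprocal of the polling rate, converted to milliseconds:

```python
# Worst-case gap between a physical click and the next USB poll.
def worst_case_ms(polling_hz):
    """Maximum delay in milliseconds at a given polling rate."""
    return 1000 / polling_hz


for hz in (1000, 2000, 4000, 8000):
    print(f"{hz} Hz -> {worst_case_ms(hz):.3f} ms")
```

The average delay is half the worst case, but the worst case is the number that loses you the duel.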

High polling rates can be problematic

I am running Linux. Your experiences may be different on Windows, but I haven’t encountered a single first-person or third-person shooter that has been grumpy about 8 KHz polling. My frame times are always rock solid.

My troubles have been outside games. Video footage playing on YouTube in Firefox will freeze while the audio continues to play if I jiggle my mouse around. Sometimes the mouse pointer will be jumpy and lag behind when I move it over certain programs.

I didn’t keep track of exactly which things still work well at 2 KHz and 4 KHz, but things are definitely less likely to be problematic down there.

I would be working hard on an automated solution for this if my Li’l Magnum! were my daily driver for productivity tasks. I was already sniffing USB traffic in ATK’s configuration app to figure out which USB commands I might need to send to change my mouse’s polling rate!

Using a fingertip mouse to scroll Reddit and Mastodon isn’t comfortable. I just toss my Li’l Magnum! into its magnetic dock when I am done playing games and move my basic Logitech G305 back into place.

That single millisecond may not matter

There are so many places to shave off latency. The difference between enabling V-Sync, completely uncapped FPS, or capping your FPS just below your VRR maximum could be almost 10 milliseconds. Setting that up correctly could be a free boost for you.

Upgrading to an OLED gaming monitor might shave 5 to 10 milliseconds of latency off the already nice IPS monitor you are currently using. That might be an expensive upgrade.

The nice thing is that you get to add these little improvements together. Five milliseconds from tuning your settings, plus 5 milliseconds from a monitor upgrade, plus one millisecond from polling your mouse at 8 KHz adds up to an 11 millisecond advantage.

That last millisecond of latency from your mouse might be a free performance upgrade. You may already have the hardware to do it, but you’re just not excited about charging your battery every three days.

What about adding 0.8 grams to save 2 grams?!

I have stopped chasing grams. I have trouble telling the difference between my 16.4-gram and 25.2-gram Li’l Magnum! mice while gaming. The difference is obvious when you pick them up, but I quickly forget what I am using after gaming for a few minutes. I suspect this is because either of these mice weighs less than my index finger, so they both feel like almost nothing.

That said, I know for certain that some people want to shave every gram off their mouse that they possibly can. One option is to swap out the 200-mAh stock battery for a 25-mAh or 50-mAh battery. The stock battery weighs about 4 grams, and you might be able to shave three of those off by switching to a lighter battery.

The trouble is that the 50-mAh battery won’t last you through an entire evening of gaming. You can probably top off a battery that small if you plug the mouse in when you go on a bathroom break, but plugging and unplugging a USB-C cable is a pain.

Tossing your 17-gram mouse onto a magnetic connector under your monitor requires significantly less effort. Maybe it is worth adding back 0.8 grams after saving 3 grams just for the convenience. You’d still save two grams, but charging would be almost effortless.

Why did I choose this model of magnetic charger by NETDOT?

There are some nice looking magnetic charging adapters that support 240-watt USB-C PD for around $20 each. That seems way too fancy. I don’t expect to ever use a mouse that won’t charge using old-school 5-volt USB power.

Many of the older, cheaper 5-volt magnetic chargers have round ends so they can swivel. That is smart if you are charging a phone, but those bulbous ends sure looked like they’d be close to scraping the mouse pad when using my Li’l Magnum!, so I figured they would be a bad idea.

Li'l Magnum! with magnetic USB-C charger

You can save some cash if you aren’t in a hurry. There are magnetic USB-C chargers on Aliexpress that look similar to the ones I bought, but they are only $2 each.

So why did I choose the NETDOT Gen10 magnetic doodads? They were the right shape, a 2-pack cost $10, and they were one of the options that Amazon could deliver to me the next day.

I am sure others work fine. They may be heavier. They may be lighter. They’re probably all similar enough in weight that it doesn’t really matter.

Will these doodads work with a normal mouse?

Every single one of my gaming mice that use USB-C ports for charging has already been converted into a Li’l Magnum! shell. I can’t ACTUALLY test to make sure that there is enough clearance for the NETDOT unit to clear the plastic of a VXE or MCHOSE mouse.

I suspect they’d work just fine. The NETDOT ends are only 0.5 mm larger than the cables that shipped with my VXE and MCHOSE mice.

Conclusion: A Tiny Upgrade for Big Convenience!

Switching to a magnetic charging setup for my Li’l Magnum! fingertip mouse has changed the game, literally. No more worrying about battery life at 8K polling, no more fumbling with cables, and just a tiny 0.8-gram trade-off for effortless charging. Whether you’re chasing every millisecond of latency or just love a clever quality-of-life hack, this simple mod is worth trying.

If you’re curious about lightweight mice, high-polling-rate gaming, or 3D printing, come join the conversation in our Discord community! We’d love to hear your thoughts and see your mods and experiments. Drop by, share your setup, and let’s geek out over the little things that make gaming better.

Happy gaming—and happy modding! 🚀

What If Your 10-Gigabit Ethernet Is Unreliable? I Bought A 5-Gigabit SFP+ Module!

| Comments

I bought a couple of MokerLink 2.5-gigabit switches back in September. One lives in my office. The other lives in my network cupboard on the other side of the house. I plugged my desktop PC into a 10-gigabit copper SFP+ module, and I connected the two switches using another pair of Xicom 10-gigabit SFP+ modules.

I was a little worried that the 70’ of Cat 5e running through the attic might not manage to connect at 10 gigabit, but it worked just fine. It continued to work fine for a few months. Then things started becoming less reliable.

My 2.5-gigabit MokerLink switch and my old 1-gigabit switch in my office

I hate intermittent problems. I still haven’t correctly identified my problem. In fact, I have been back to having a flawless 10-gigabit Ethernet connection across my house for more than a month. That makes my problem even more difficult to troubleshoot!

I am going to walk through my troubleshooting steps, and tell you what I currently suspect is going on. Maybe you are having similar problems, and maybe something I have done or something that popped into my mind may be of use to you!

What do I think is going on? Here’s the tl;dr

I am extremely confident that my problem is related to the weather. It doesn’t get all that cold here in Plano, TX, but my problems started in the cooler months, got worse as the weather got colder, and the problem went away completely once the temperature outside stopped dropping below maybe 50F at night.

I don’t think it is just the cold. Things are more likely to be problematic when it is humid or raining.

Most of the length of the Cat 5e cable running from my office to my network cupboard is above the insulation in the attic.

I initially thought my SFP+ copper modules were overheating

Copper SFP+ modules do tend to get quite warm. They’re not going to burn you, but they’re hot enough that you think they might when you unplug them!

When the problems started occurring, I would move the cable in my office from the 10-gigabit SFP+ module to a 2.5-gigabit Ethernet port on the switch. That would always work fine. I would usually remember to move the cable back a few hours later, and I’d have a flawless 10-gigabit connection again. I assumed the SFP+ modules might be cooling down enough.

Smokeping graph

Each smokey point on my Smokeping graph represents me swapping ports or modules while doing heavy iperf3 testing

At first I was going almost a week between having to do this. Then every couple of days. Then several times a day.

I decided to take one of the switches apart to see what I might be able to do to keep the SFP+ modules cooler, and I thought I had a pretty good idea. Replacing the dried-out thermal compound in my cheap Intel 10-gigabit Ethernet card was necessary to keep it running at full speed. That was a good enough reason for me to give it a try here!

Attempting to keep the SFP+ modules cool in my MokerLink switches

I moved all my cables to the old gigabit switch, I ordered an assortment of thermal pads from Amazon, and I waited for the pads to be delivered. I figured I would stack several pads up between the PCB and the chassis to help the SFP+ modules to transfer more heat from the module to the shell of the switch.

My shipment got delayed a few days. I cut up some thermal pads when they finally arrived, installed them in the MokerLink switch, and moved all my cables back to the 2.5-gigabit and 10-gigabit ports.

Thermal pads installed in my MokerLink 2.5-gigabit Ethernet switch

My stacks of thermal pads installed under the SFP+ ports were inspired by the stock thermal pad that MokerLink installs under their CPU!

I couldn’t get a 10-gigabit connection at all. Did I break the switch? Did I mess up the SFP+ ports?

Probably not. I could get them to connect if the other end of the connection was a 2.5-gigabit port.

I wasn’t sure what was going on, but I figured I might as well modify both switches with the thermal pads for good measure.

Things just started working again one day!

I didn’t think about my 10-gigabit network much at all for a few weeks. This upgrade was just an inexpensive and fun experiment. My old 1-gigabit Ethernet gear was adequate for my daily needs, and the 2.5-gigabit upgrade was still working just fine. The 10-gigabit links were just a bonus. It didn’t hurt at all having them downgraded.

I don’t know what made me remember to try the 10-gigabit link again, but when I did, it was working perfectly. At least for a while. I’m not sure how long it was stable before I ran into trouble again.

This is when I started noticing the correlation to the weather. The 10-gigabit link was more likely to be problematic at night. It is colder at night. Especially since the sun wasn’t beating down on the attic all day.

Then I started being able to make reliable predictions. I would see rain in the forecast, then I would see network problems.

What do you think is happening in my attic? Are my cables contracting in the cold, causing something weird to happen in a bend somewhere? Do I have a tiny tear in the jacket of a cable somewhere that is letting moisture in? Is that causing a short, or is the combination of cold and moisture doing something else?

If moisture is causing a short in a damaged cable, then why does it work perfectly at 2.5-gigabit speeds?

I ordered a 5-gigabit SFP+ copper module

One of the two switches involved in this problematic network link is managed. I can set the port to 1000, 2500, or 10,000 megabit. This does not influence the link speed between the SFP+ modules. The switch always detects a 10-gigabit full-duplex link even when the other end of the connection is a 2.5-gigabit port.

When I first set up the 10-gigabit connection across the house, I set up a long chain of couplers and extra patch cables in order to coax the Xicom SFP+ modules into connecting at 5-gigabit speeds. That worked great when the wiring was good, but I haven’t been able to get them to negotiate down to 2.5-gigabit or 5-gigabit on their own.

Lianguo 5-gigabit SFP+ module

Can you see that tiny switch labeled 2 and X on my Lianguo 5-gigabit module? That switches between 2.5-gigabit and 5-gigabit mode!

I wound up ordering a 5-gigabit SFP+ module from Aliexpress. I had two reasons for choosing this particular module. The first is that there really isn’t a big selection of 5-gigabit SFP+ modules. The second, and best, is that this module has a tiny switch that you can flip to set it to 2.5-gigabit or 5-gigabit mode. I figured that might come in handy!

The 5-gigabit SFP+ module works perfectly. I plugged it in and immediately saw 5 gigabits per second on all my iperf3 tests.

I want to try the 5-gigabit module while the 10-gigabit link is having problems!

Nothing has gone wrong while I am here to do anything about it. My Smokeping graphs have been very nearly solid green for the last 6 to 8 weeks. We had a cooler night with some rain about three weeks ago, and I had a few cyan blips on the graph. A cyan blip means that one out of twenty pings over a five-minute period didn’t get a response.

I am pretty sure those blips happened before the 5-gigabit module arrived at my house.

I joked in Discord that since a potential solution is here in my hands that I won’t see another problem until November. That is feeling less like a joke now.

Conclusion?!

I don’t think we really get to see a conclusion until winter rolls around again. I think the 5-gigabit SFP+ module was a good purchase. Dropping down to a 5-gigabit Ethernet connection to the other side of the house is still a huge upgrade over my old 1-gigabit connection, and that is WAY less work than pulling a new cable.

Some of this is way more obvious with the benefit of hindsight. The temperature or humidity in the attic didn’t occur to me at all early on. When it gets cold outside, we turn the heat on. Parts of the house that receive the most cooling in the summer also receive the most heat in the winter. My immediate assumption was that an extra few degrees of heat near my network cupboard was pushing things past the limit. That doesn’t seem to have been the case.

What do you think? Have you ever experienced strange network issues tied to weather or temperature? Could humidity or drops in attic temps be messing with your cabling? Maybe you have a different hypothesis to explain why my 10-gigabit link acts up when it’s cold. Share your thoughts (or war stories!) in the comments below, or jump into our Discord community to geek out over networking mysteries with fellow tech enthusiasts. Let’s solve this together—maybe before winter comes back! ❄️🔌

Proxmox Datacenter Manager Is Exactly What I Needed

| Comments

When I first migrated my virtualized homelab stuff from my old Debian server running KVM and virt-manager to a mini PC running Proxmox, I knew I would eventually want some sort of cluster manager. It didn’t take long before I had Proxmox running on a second mini PC at home, and this week I am migrating my off-site Raspberry Pi Seafile server to another mini PC running Proxmox.

How can you get all that stuff into a single web interface? Proxmox will let you add a bunch of servers to a cluster, but that pushes the high-availability services pretty hard. You need to have a minimum number of machines to maintain a quorum, you need very low latency between your Proxmox hosts, and you need a clustered file system underneath.
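The quorum requirement is a strict majority vote among the nodes, which is why small clusters are fragile. A sketch of the standard majority rule that Proxmox’s corosync layer uses:

```python
# Corosync needs a strict majority of votes to keep the cluster quorate.
def quorum(total_nodes):
    """Minimum number of live nodes required to maintain quorum."""
    return total_nodes // 2 + 1


for n in (2, 3, 5):
    print(f"{n} nodes: quorum = {quorum(n)}, "
          f"can lose {n - quorum(n)} node(s)")
```

A two-node cluster can’t survive losing either node, which is exactly the sort of constraint I don’t want imposed on a handful of independent mini PCs scattered across two cities.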

My 1.5-liter 14-terabyte N100 Off-site Proxmox server

Setting up my 1.5-liter N100 off-site Proxmox host with 14 terabytes of storage

I want one of my Proxmox nodes to live in another city. My nodes aren’t a cluster, and they aren’t interchangeable. One of my nodes is plugged into a 3D printer. Two other nodes have external USB hard disks tied to specific virtual machines. I don’t need things migrating around on their own. I just want a unified GUI, and I would like to be able to manually migrate virtual machines and LXC containers around without doing a convoluted backup and restore cycle.

Proxmox’s Datacenter Manager has only had an alpha release so far, and it doesn’t have all that many features yet, but it scratches every single itch that I have.

What does Proxmox Datacenter Manager do well?

Installation was a breeze thanks to the Proxmox Helper Script. I had an LXC container up and running in a couple of minutes, and it took less than ten minutes to add all three of my Proxmox nodes via their Tailscale addresses. Using Tailscale means my Datacenter Manager can see all my nodes no matter where they are physically located.

The dashboard shows the CPU utilization of my ten busiest guests, ten busiest nodes, and my ten most memory-hungry nodes. That is a pretty boring view for me, because my homelab isn’t all that complicated. My guests don’t tend to do anything exciting.

Proxmox Datacenter Manager screenshot

The exciting page for me is the remotes tab. It shows a combined list of the task history of all my nodes. This makes it easy to see at a glance if any of my backup tasks have failed.

From there, you can drill down into each remote. That will show a summary page that looks very similar to the summary page on each individual Proxmox server. Even better, though, is that there is a little paper airplane icon next to each guest. This lets you easily migrate containers and virtual machines to a different host. I don’t do this often, but I am excited to have a simple interface to make it happen when I need to balance the load on my servers!

Removing and re-adding nodes in Proxmox Datacenter Manager is a breeze

Well, it is almost a breeze. You do have to manually grab the TLS key from each new server to paste into your Datacenter Manager interface. This isn’t exactly a friction-free experience, but it also isn’t a herculean effort.

I goofed up the partitioning on my new off-site Proxmox host, and I decided that the cleanest way to fix my mistake was to reinstall the node from scratch. Removing a node from a Proxmox cluster is a bit of a pain. My understanding is that if you need to remove a node that no longer exists, you might have your work cut out for you.

Proxmox Datacenter Manager screenshot

I don’t think this Allocations section shows up anywhere in the usual Proxmox GUI. It is a handy summary to have!

I thought I might be able to get away with updating my remote node’s certificate fingerprint, but Proxmox Datacenter Manager gave me an error when it tried to reconnect. Even though my old node was gone, I had it deleted and set back up in less than a minute.

Easy-peasy.

Was it easy to get working with Tailscale?

I already have Tailscale running on each of my Proxmox hosts. There is a simple Proxmox helper script that installs Tailscale in your LXC containers for you, so I just used that to add Tailscale to the Datacenter Manager container.

My Proxmox hosts were all grumpy about using Tailscale’s MagicDNS. That isn’t a big deal. My Proxmox hosts only need to be able to talk to each other and to my NAS devices for backup purposes. I wound up configuring all my hosts to use the local DNS server, and I added the five relevant IP addresses to the hosts file on each Proxmox server and the Datacenter Manager container.
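The hosts-file workaround looks something like this on each node. The addresses and hostnames below are made-up placeholders; use the actual Tailscale IPs and machine names from your own tailnet.

```
# Appended to /etc/hosts on each Proxmox node and on the Datacenter
# Manager container. These 100.64.x.x addresses and hostnames are
# placeholder examples, not my real tailnet.
100.64.0.11  pve-office
100.64.0.12  pve-cupboard
100.64.0.13  pve-offsite
100.64.0.21  nas-primary
100.64.0.22  nas-backup
```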

I made sure to use the Tailscale hostname when I added remote hosts to the Datacenter Manager GUI. They all seem to be talking happily.

I won’t be able to do proper testing until I send my remote Proxmox server home with Brian the next time we have pizza here on a Saturday night!

Conclusion

Proxmox Datacenter Manager has been a missing piece in my homelab journey, offering a streamlined way to manage nodes both across the house and across town without the rigid demands of a full-on Proxmox cluster. By bridging standalone Proxmox hosts—whether in my office, in the network cupboard on the other side of my house, or sitting off-site at Brian’s house—it delivers the unified GUI and manual migration capabilities I craved.

Adding Tailscale to the mix erased geographical barriers, while features like cross-host task monitoring and one-click VM/LXC migrations turned previously fragmented management into a cohesive experience. For an alpha-stage tool, it is impressive that it is already able to do everything I actually require. Even so, I am looking forward to some of the features on the Proxmox Datacenter Manager roadmap!

If you’re tinkering with Proxmox, juggling nodes in different locations, or just love geeking out over homelab workflows, I’d love to hear about your setup! Join our Discord community to swap tips, troubleshoot quirks, and explore using tools like Tailscale and Proxmox together. Whether you’re a clustering pro or a DIY novice, there’s always room to learn, share, and streamline your lab. Let’s build smarter setups—without the headaches.

Join the Discord here and let’s keep the conversation going! 🚀