It Is My Network Cupboard Now!

It was about eleven years ago when I helped my friend Brian Moses run Ethernet to every room in his house. Once he decided that the router and switches would live in an actual cupboard in his laundry room, Brian and I ran all but one of the cables to the laundry room ourselves.

Getting the first string over the vaulted ceiling in the attic to help pull most of the actual cables probably took one entire evening. The roof just isn’t tall enough to allow for much arc on a throw, and the longer the string gets, the more drag you get. Today it would take us two minutes to fly an FPV drone across the attic towing a string.

How did I end up with Brian Moses’s network cupboard?!

We were sitting in Brian’s kitchen one evening about five years ago, and I remember him jabbering about being interested in moving into a bigger house. He was complaining about how much work it would be getting things ready to be listed, and clearing out of the house whenever a potential buyer wanted to see the house.

My Network Cupboard

I figured out what this long-ass cable is! It is bypassing the extra layer of switches and connecting my office directly to the switch ports on the router. Maybe I should color-code that and make a correct-length cable!

I am pretty sure I said, “It sounds like everything would be easier if I just bought the house.” I didn’t have to go see all sorts of random houses. I didn’t have to put in any bids. I didn’t have to worry about what sort of maintenance the previous owner had done. Best of all, I knew the quality of the workmanship that went into running cable to every room in the house.

Why did I wait five years to write an update on how the network cupboard is doing?!

I didn’t do much to the network cupboard when we moved in other than swap in my own router, and that doofy tube-shaped router of mine made the cupboard a mess! It didn’t even come close to fitting where the router is supposed to go, so I just had it dangling by a couple of patch cables back there, and the door to the cupboard wouldn’t even close!

Doofy D-Link DIR-860L

NOTE: The old D-Link DIR-860L running OpenWRT has a crack and some scratches!

My network cupboard just wasn’t photogenic for a long time.

What is in my network cupboard?

The heart of my network cupboard is a custom 19” rack that Brian and I built. The frame is made out of simple 1x2 lumber that is held together with half-lap joints and screws. The rack is rotated ninety degrees from the usual orientation so that it fits in the cupboard. It is hinged on the left and has a magnetic latch on the right, so it is easy to open things up to get to the back of the patch panel.

What is in the rack?

  • a used 48-port patch panel from eBay
  • two 8-port unmanaged gigabit Ethernet switches
  • a Linksys WRT3200ACM running the latest OpenWRT

What else is in the cupboard?

I had all of this connected to a CloudFree smart outlet for a few weeks, and I learned that the power usage is quite steady. The network gear and any extra overhead from the UPS uses precisely 0.45 kWh each day, which works out to an average of 18.75 watts. This tiny UPS doesn’t have any monitoring capabilities, but it should keep my network running for quite a few hours.
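
That average is easy to sanity check: 0.45 kWh is 450 watt-hours spread across 24 hours.

pat@zaphod:~$ awk 'BEGIN { print 0.45 * 1000 / 24 }'
18.75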

There are a pair of pegboard shelves slotted into the wood rack. The gigabit Ethernet switches are mounted to one shelf. It has been a decade since we put them in place, but I am fairly certain that those switches have keyhole slots for screws in the bottom. We used those slots to correctly mount the switches to the pegboard.

I was about to tell you that the Linksys router doesn’t have keyhole slots on the bottom, so I wound up running some zip ties through the pegboard to secure the router in place. That last part is true, but I just asked Google to show me pictures of the underside of the Linksys router. It does have keyhole slots on each of the feet. I just didn’t remember to use them!

What isn’t in my network cupboard?

The Frontier FiOS ONT is on the opposite side of the house. It gets power from a closet in one of our bedrooms. The FiOS gear is plugged into an identical UPS, and it uses a lot less power than the gear in the cupboard, so it should last even longer when the power goes out.

There are also two additional access points in the house, both of which are running OpenWRT. These two access points and the Linksys router in the network cupboard are all in the same 802.11r roaming group.
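
For anyone curious, 802.11r on OpenWRT only needs a few extra lines in each access point’s /etc/config/wireless. Here is a minimal sketch assuming WPA2-PSK; the SSID, key, and mobility_domain values are placeholders that must match on every access point, and depending on your OpenWRT release you may need the full wpad package instead of wpad-basic for 802.11r support.

config wifi-iface 'default_radio0'
        option ssid 'mynetwork'           # must match on every AP
        option encryption 'psk2'
        option key 'correcthorsebattery'  # placeholder, must match on every AP
        option ieee80211r '1'
        option mobility_domain '4f57'     # any 4-digit hex value, same on every AP
        option ft_over_ds '0'
        option ft_psk_generate_local '1'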

AI Network Guy

I have an ancient D-Link DIR-860L, a.k.a. one of the tube routers, in my office. This access point runs a virtual serial port to connect my Prusa MK3S 3D printer to my Octoprint virtual machine on the other side of the house. The D-Link and Prusa share a UPS. There are also two cheap gigabit switches on opposite sides of my office.
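
The usual recipe for a virtual serial port like this is ser2net on the device next to the printer and socat on the machine running Octoprint. Here is a sketch with the TCP port and IP address made up:

# On the access point, in /etc/ser2net.conf (older config syntax):
# expose the printer's USB serial port on TCP port 3000
3000:raw:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT

# On the Octoprint machine, turn that TCP stream back into a local device
socat pty,link=/dev/ttyprinter,raw,echo=0 tcp:192.168.1.50:3000

Point Octoprint at /dev/ttyprinter, and the printer behaves like it is attached locally.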

A TP-Link Archer A7 is near the center of the house. This access point is really enough to cover the entire house, but I needed a router in the network cupboard, and I needed an OpenWRT device next to my 3D printer. Those two OpenWRT routers are at the far ends of the house. If I have to have them there, then I may as well use them to provide better WiFi coverage, right?!

The TP-Link in the living room is the only network device in the house with no UPS. Maybe that is worth correcting!

My homelab server now lives under my network cupboard

My homelab server and its NAS virtual machine used to be connected to my workstation using a 40-gigabit Infiniband network. That required me to keep the two machines within about six feet of each other unless I wanted to install fiber in my walls. That was a bummer, because I have been using my office as a video studio, and I have been trying to cut down on noise.

I bought a few hard drives, set up some lvmcache, and made sure all my data was syncing in the background so I could eliminate the need for a lightning-fast network connection. Then I pulled the Infiniband cards, stuck a rolling media cart under my network cupboard, and hauled the homelab server across the house.
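
If you have never used lvmcache, attaching an NVMe cache in front of a logical volume on slow disks only takes a couple of commands. This is a rough sketch: it assumes a volume group named vg with the data on vg/data and the NVMe already added to the volume group, and the names and sizes are hypothetical.

# Carve a cache volume out of the NVMe
sudo lvcreate -L 400G -n cache0 vg /dev/nvme0n1p1

# Attach it in front of the slow logical volume
# (writethrough is the safer cache mode; writeback is faster but riskier)
sudo lvconvert --type cache --cachevol cache0 --cachemode writethrough vg/data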

The server looked lonely out there, so I brought in a couple of ancient computers from the garage to fill up the cart. They aren’t plugged in. They are just taking up space.

This cart used to hold Brian Moses’s DIY NAS and his 3D printer. It was the home of my 3D printer for a few years. Now it lives in the laundry room under my network cupboard.

I figured the cart was a good idea. This way my homelab server won’t drown if there is ever a washing machine calamity! This is also approximately where Brian’s homebrew-beer Keezer used to live.

Is a room with so much water the best place for my gear?

I am sure we can all agree that water is bad, but I think the laundry room is the best spot in this house for my homelab server to live.

It is tucked in over by the garage. The room is big enough to walk around in, but not so big that anyone would ever want to spend any time there. The laundry room has its own door, and it is far enough away that no one will ever have to hear the hum of the fans and hard disks. There is also quite a bit of wasted floor space in there.

I have backups of everything. Rebuilding and restoring the server will be the least of my concerns if we spring a major leak in there.

That said, I certainly don’t expect to have some sort of localized flooding in the laundry room that manages to drench a server that is 30” above the ground.

What is next for my network cupboard?

For the moment I am in pretty good shape here! I can upgrade our FiOS Internet service to symmetric gigabit with a phone call, and I tested the Linksys WRT3200ACM to make sure it could route that fast before I even installed it.

I am more than a little tempted to upgrade some of my home network to 2.5 gigabit Ethernet. Switches have gotten down to around $20 per port, and 2.5 gigabit PCIe and USB network adapters can be had for less than $30 each. That would be an easy and reasonably priced upgrade!

I have done work over the past year or so to wean myself off of my requirement for a stupidly fast network. I have been saying that I would like to be able to work with all my data without any slowdowns as long as my devices are connected with a 100 megabit per second connection.

Should you be doing the same thing in 2023?

I imagine that at least half of this question is really whether or not you should wire a house for data in 2023. The answer is that you most definitely don’t need to wire every room anymore.

WiFi 5 and WiFi 6 are both fantastic. Our televisions are all on WiFi, and the streaming services only use a fraction of our WiFi bandwidth. My laptop can manage something near half a gigabit from any table or couch in the house. Our WiFi is doing a fantastic job. It helps that we live in a house. Every apartment complex I have lived in has been overflowing with WiFi networks, and that causes all sorts of bandwidth reductions and dropped packets.

pat@zaphod:~$ ping 10.42.254.1
PING 10.42.254.1 (10.42.254.1) 56(84) bytes of data.
64 bytes from 10.42.254.1: icmp_seq=1 ttl=64 time=0.261 ms
64 bytes from 10.42.254.1: icmp_seq=2 ttl=64 time=0.277 ms
64 bytes from 10.42.254.1: icmp_seq=3 ttl=64 time=0.254 ms
64 bytes from 10.42.254.1: icmp_seq=4 ttl=64 time=0.259 ms
64 bytes from 10.42.254.1: icmp_seq=5 ttl=64 time=0.254 ms
64 bytes from 10.42.254.1: icmp_seq=6 ttl=64 time=0.262 ms

Even in our house, we still need at least a few cables. Our WiFi worked just fine when we only had one access point in the middle of the house, but we needed a cable in the wall to get the access point to that location.

I also prefer to have wired connections at every desk. My wife works from home, so she should have a reliable, low-latency connection at her desk. I have a handful of things with Ethernet ports in my office aside from my workstation.

I enjoy playing multiplayer video games. I appreciate that the connection from my gaming computer to my router only fluctuates by the smallest fraction of a millisecond. If I am going to miss a shot that causes me to lose, I want it to be my fault. I don’t want to miss because my WiFi latency decides to shoot up to 20 or 30 milliseconds at exactly the wrong moment.

You might be able to avoid running wires!

Do you want a solid, reliable, low-latency connection at your desk? Just make sure you have your Internet provider install their gear near your desk. Then you can wire up everything at your desk, and use WiFi where latency and reliability aren’t a problem.

Our offices are at extreme ends of the house. We needed to run cables through the walls no matter what.

How much would it cost to build a network cupboard?

The heart of my network cupboard is that DIY wood frame that we are using as a 19” rack and the 48-port patch panel. It isn’t something you can just buy, but two unskilled yahoos were able to turn a few dollars’ worth of 1x2 lumber into a network rack in an afternoon.

We didn’t need 48 ports, but used 48-port patch panels are always nearly free on eBay. Just make sure you pick one that still has the jacks installed!

I bet you can buy the lumber, hinges, a fully loaded patch panel, and a 16-port gigabit Ethernet switch for a total of less than $100.

Why didn’t we use rack-mount switches?

Putting rack-mount switches into a 19” rack seems like the obvious choice, doesn’t it? Brian bought a pair of 8-port switches because they were cheaper, offered some redundancy, and they definitely didn’t have fans.

I am sure the past decade has changed things, but when we were shopping, all the used 19” rack-mount gigabit Ethernet switches had fans and used quite a bit more electricity. This was going to be closed in a wood cabinet, so hotter gear wasn’t going to be ideal.

If Brian had bought a used rack-mount gigabit switch ten years ago, do you think the fans would all still be spinning today?

Conclusion

I feel like I may have accidentally written two blogs in one. Writing anything about whether we need to wire an entire house for Ethernet in 2023 could probably be a blog of its own, but I am not so sure it would be an interesting one.

Brian and I built my network cupboard more than a decade ago. I remember some of our ideas, but that is long enough ago that I don’t know who came up with each part of the plan. I am definitely glad that we came up with a plan, because I am still enjoying our work today, and I will be continuing to make use of it for years to come!

Oh No! I Bought A GPU! The AMD RX 6700 XT

This is hugely unexciting. I replaced my Nvidia GTX 970 with a lower end GPU that was launched two years ago. Isn’t that boring?

It is only in the last two or three years that I have really felt like I was limping along with my old GPU. I feel like AMD’s FSR and Proton-GE’s ability to enable it in nearly any game was what let me get by for at least this last year.

Rootin' Tootin' Cowboy Shootin'

I hit my first game that wouldn’t run properly just a few months ago. DEATHLOOP was included in Humble Choice, and after finishing Prey, I was excited about trying DEATHLOOP. My 4 gigabytes of VRAM just wasn’t enough, and the game was barely able to manage 15 frames per second.

Everybody says AMD GPUs are fantastic on Linux

I don’t believe them. The last time I used an AMD GPU was shortly after AMD had acquired ATI. I had an ATI X1400 in a Dell laptop running Linux. AMD’s proprietary Linux drivers were atrocious, but they promised things would get better. They didn’t get better in the years that I owned that laptop. Every computer I have owned since has had an Nvidia GPU, except for my current 2-in-1 laptop with integrated graphics.

I am running Ubuntu 22.04 LTS. If I swap my GTX 970 for an RTX 4090, everything will just work using the drivers I am already using that were installed via apt. I wouldn’t have to make any configuration changes.

If I bought an AMD 7900 XT, and that is what I very nearly did, this wouldn’t be the case. The kernel that ships with Ubuntu 22.04 is too old, and I believe I would also need all sorts of newer libraries that just don’t ship with the current Ubuntu LTS.

It is fantastic that we can run AMD cards with open source drivers and libraries. It is a huge bummer that the support is interwoven so tightly into the Linux distribution.

What has been stopping me from upgrading?!

I am sure you are aware of how expensive video cards have been over the last several years. I wanted to upgrade two years ago, but I just couldn’t stomach paying $1,000 for a GPU that is only twice as fast as the GPU I bought at its launch in 2014.

GPU prices have dropped a lot, but I had another problem. My old and still surprisingly nice QNIX QX2710 monitors require dual-link DVI connections. You just can’t buy a modern video card with a DVI port. That’s another expense, and it is a bummer that the upgrade I wound up making was more sideways than upwards.

I was hoping that the prices on 38” ultrawide 3840x1600 monitors would drop more before I needed to upgrade. They didn’t, so I wound up buying a Gigabyte G34WQC 34” ultrawide 3440x1440 monitor.

I figured I would get the monitor upgrade out of the way first. Then I could wait a few months to see what happens. Maybe there would be an AMD 7800 XT available by then that would be a better fit for me. That didn’t work out. I made it five days before ordering a new GPU.

How did I land on the AMD 6700 XT?

I really did have my heart set on the AMD 7900 XT, but I had more than a slight concern that I would have trouble fitting that card in my case, and I wasn’t excited about jumping off the Ubuntu LTS train onto the other train where I would be dealing with an operating system upgrade every six months.

NOTE: This shouldn’t have been a big concern. There is a PPA with nightly builds of Mesa 23. I have been trying it out with my 6700 XT, and the updated drivers nearly doubled frame rates in Control with ray tracing enabled.

Not only that, but it doesn’t sound like full support for the 7900 XT exists anywhere on Linux. It seems that there is no overclocking support, and we don’t have hardware AV1 encoding yet. I bet Ubuntu will be six months behind everyone else on that front.

Severed Steel

There were some good deals this week on both 6800 XT and 6900 XT cards. I want to say they were at $580 and $650 respectively. Both have 16 GB of VRAM, and the price increase was close to proportional with the increase in performance. Either sounded like a good value, but I was definitely planning on going with the 6900 XT.

So how did I wind up with the 6700 XT? The 6700 XT was $520 when I started shopping. It made no sense to buy such a slow card with only 12 GB of VRAM while only saving $50. That would have been terrible.

Then a 6700 XT went on sale for $370. That easily made it competitive on the price-to-performance graph. Then I saw that the 6700 XT is about 1.5” shorter than the 6900 XT. That was enough to get me to place my order immediately.

I saved money. The RX 6700 XT should be more than enough GPU for my needs. I won’t have to upgrade my Linux distro every six months. Best of all, I won’t have to hope the card fits in my case without the eight 3.5” hard drive bays getting in the way. Less money and less work seemed smart to me!

Will the RX 6700 XT really be enough GPU for me?

I sure hope so! I decided last year that my minimum viable upgrade would be an Nvidia RTX 3070. I made that decision when an RTX 3070 would cost something like $1,200. They’re down to $550 or so today.

The 6700 XT is comparable enough. The 6700 XT does better in some games, while the RTX 3070 does better in others. The 6700 XT does support ray tracing, but AMD cards always do a lot worse with ray tracing enabled. Even so, the 6700 XT cost nearly $200 less, so it seems like a good value.

I don’t expect that the 6700 XT will keep me going for eight years like the GTX 970 did. If it keeps me going until I can get a 7900 XT on sale for $600 or so, then I will be really pleased.

Things were a mess during the first few hours with the 6700 XT!

I swapped in my new 850-watt power supply, removed my GTX 970, and installed my MSI RX 6700 XT. I just turned on the computer, opened Steam, and fired up Borderlands 3. I had working drivers, variable refresh rate Freesync was working, and the game was running so well. I wound up turning the knob up to Badass, which is one click above Ultra, and I set FSR at 75% resolution for my 3440x1440 monitor. I wandered around, and my frame rates were usually between 80 and 100. Running the benchmark landed at about 77 frames per second.

Then I fired up Davinci Resolve, and learned that I had no OpenCL support. Don’t do what I did. I followed the advice of using AMD’s own amdgpu-install to install their graphics, ROCm, and OpenCL drivers. That got Resolve working, but it was really slow. It also obliterated my frame rates in the games I had already tested.

AI Image

I ran amdgpu-uninstall to clear out all the damage I had done. Once I verified that gaming was fast again, I followed this advice and ran apt install rocm-opencl-runtime. You do need the amdgpu-install Debian package installed for this to work. It will install AMD’s OpenCL libraries alongside your existing open-source drivers.
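
In shell form, the recovery looked something like this. Treat it as a sketch, because the package names come from AMD’s amdgpu-install tooling and may shift between releases:

# Undo the full proprietary stack
sudo amdgpu-uninstall

# With the amdgpu-install package still present, pull in only the OpenCL runtime
sudo apt install rocm-opencl-runtime

# Confirm that OpenCL can see the GPU
clinfo | grep -i 'device name'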

I also need OBS Studio for my workflow. I was surprised that it offered AV1 as an option for encoding with the GPU. I tried it, and it left me with a zero-byte file. Then I tried h.264, and that worked great. That is all I need!

Davinci Resolve on Linux is missing features without CUDA

I messed around a bit in an old project to make sure Resolve will still be able to get the job done. I set up a Magic Mask to cut myself out of a short clip, and that seemed to work well. Scrubbing around the timeline seems fine.

The trouble came in when I decided to export a video. My own 2160p YouTube settings use h.265 at something like 120 megabits per second. The options for h.264 and h.265 are gone.

I am left with an extremely simplified option labeled MPEG4. I can’t specify a bitrate. I can only choose between good, better, or best quality.

I don’t know exactly what I will do next time I have to upload to YouTube. I am expecting to have to use DNxHR.

Was AMD a good choice?

I can’t say for certain. The AMD 6700 XT has only been in my computer for one full day. As far as gaming goes, I will definitely say it was a good value. As long as you aren’t using ray tracing, and we will talk about that very soon, the 6700 XT is a bit faster and has more VRAM than an RTX 3070. Even better, you can get a 6900 XT with 16 GB of VRAM for not much more than the price of that RTX 3070 with only 8 GB of VRAM.

Stable Diffusion Dude

All the AMD cards fall behind when it comes to ray tracing. The 7900 XTX costs less than an RTX 4080, and the 7900 XTX performance sits somewhere between the RTX 4080 and RTX 4090. Until you turn ray tracing on. That’s when the AMD cards fall way behind.

I will play some games, try out the bleeding-edge Mesa libraries, and then report back on how things are going!

Ray tracing with an RX 6700 on Linux

I wanted to play Control with ray tracing. With the open-source driver, the option isn’t available. During the short time when I ran AMD’s driver using amdgpu-install, I was able to turn on ray tracing in Control, but the game was super slow and jittery. It didn’t matter if ray tracing was turned on. That driver was just terrible.

Control Ultimate Edition Ray Tracing

I also wanted to play Severed Steel with ray tracing enabled. You don’t turn ray tracing on in the game. It either launches with DX12, DX12 with ray tracing, or DX11. At first I couldn’t tell the difference. There are a lot of environmental reflections baked into the reflective floors, walls, and ceilings.

Once I figured out the difference, it was easy to see that ray tracing was working just fine. First I noticed the sniper’s laser scopes were reflecting on the ceiling and floor. Then I noticed the streaks of glowing bullets flying through the air were being reflected everywhere.

I also tried turning ray tracing on in DEATHLOOP. It was not an option.

I am a little disappointed here, but I don’t feel let down. I didn’t expect to be able to use ray tracing in many games with the $370 GPU that I chose. Ray tracing works in Severed Steel, and the fact that it works in other games with AMD’s driver bodes well for the future.

NOTE: To enable ray tracing with Ubuntu’s Mesa libraries you have to add RADV_PERFTEST=rt to your game’s startup options. It might look like this in Steam:

RADV_PERFTEST='rt,gpl' %command%

This upgrade was inspired by DEATHLOOP

DEATHLOOP ran like a slideshow on my GTX 970, so I didn’t get to play it. I finished the first mission last night, and it was buttery smooth. I don’t know if it is a good game yet, but I look forward to finding out this week!

I turned almost everything up to the max, but I enabled some of the FSR-related settings to get some extra performance. The game looks fantastic, and it is usually staying above 100 frames per second.

UPDATE: DEATHLOOP does not run great. As soon as I got to the town, my FPS dropped below 60. Even turning the settings down rather low and running at a much lower resolution still leaves me stuck in the 60-FPS range. I asked Google about this, and I found a bunch of people on Reddit wondering why they can’t stay above 60 FPS with their Ryzen 5950X and Nvidia RTX 3090 machines.

Conclusion

I think I made a good choice, but I think the 7900 XT would have also been great. I am already enjoying buttery-smooth games with nearly maxed-out settings. I expect I would be underutilizing a 7900 XT for at least a couple of years.

FidelityFX Super Resolution (FSR) got me a few extra years out of my GTX 970, and I think I am going to continue to rely on it with the 6700 XT. There just isn’t quite enough grunt here to get to 144 frames per second with Ultra or Badass settings at 3440x1440, and I think that is alright.

I was hopeful that my GPU upgrade would get me to 100 Hz or 144 Hz in most games with the settings maxed out without having to enable FSR. I was also expecting to pay more than $1,000 for that upgrade, though, so I can’t complain about reaching that goal with a light sprinkling of FSR enabled.

The Topton N6005/N5105 and Jonsbo N1 Are An Awesome DIY NAS Combination!

Am I trying to win Brian Moses’s DIY NAS 2023 giveaway?!

Brian Moses DIY NAS 2023 Edition

Have you seen this thing? It looks both amazing and adorable, and it packs so much hardware into quite a tiny, power-sipping package. I exiled my homelab server to a shelf under my network cupboard on the opposite side of the house. If I had Brian’s new build, I would definitely need to make room to show it off somewhere in my home office!

If I win the DIY NAS: 2023 Edition, what would I do with it? What sort of problems would it solve for me?

Well, duh, Brian! I would do everything I am doing today, but I would do it better and faster while using less power and generating less heat!

What do I have running on my homelab server today?

I still call that first VM a NAS. I don’t actually run Samba or NFS on there anymore, but I just don’t have a better name. It runs a Seafile client to pull down an additional copy of all my data from my Seafile server at Brian Moses’s house. That extra copy is stored on an encrypted btrfs partition that takes automatic hourly, daily, weekly, and monthly snapshots.
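
The snapshot rotation doesn’t require anything fancy. Tools like snapper or btrbk will do it with more polish, but a cron-driven sketch of the hourly job might look like this, assuming the data lives on a btrfs subvolume at /seafile; the paths and retention count are illustrative:

#!/bin/bash
# Take a read-only snapshot named after the current hour
btrfs subvolume snapshot -r /seafile "/seafile/.snapshots/hourly-$(date +%Y%m%d-%H)"

# The timestamped names sort chronologically, so drop all but the newest 24
ls -d /seafile/.snapshots/hourly-* | head -n -24 | xargs -r -n 1 btrfs subvolume delete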

We are generating well over a terabyte of video footage every year, and I am starting to run out of space!

What is wrong with my old homelab server?

I want to say that nothing is wrong with my old homelab server. It is doing a fine job, but it is getting quite old. I only have two mechanical hard disks connected to my server, and it still idles at more than 70 watts. That old FX-8350 is a real pig of a CPU.

DIY NAS Cartoon

In most ways, that old FX-8350 is faster than the Celeron N5105 in Brian’s new NAS build. They benchmark about the same when it comes to single-core performance, but my FX-8350 has twice as many cores, so it pulls way ahead when fully loaded. The problem is that my homelab starts pulling more than 250 watts of electricity at the power outlet when it is under full load.

That awesome new Celeron has a TDP of 10 watts. It isn’t going to be heating up the room when it is thinking really hard.

Those 2.5 gigabit Ethernet ports are awesome

I pass almost all my personal data through my Tailscale network now. That means everything I move around is encrypted, which also means that I eat a lot of CPU cycles encrypting and decrypting data. My FX-8350 tops out at around 900 megabits per second when passing data through Tailscale’s encryption.

The N5105 in Brian’s DIY NAS build for 2023 has much more modern AES instructions, and it easily beats my FX-8350. I believe the N5105 can manage nearly 1.3 gigabits per second when pushing data through Tailscale.

That is fast enough to need more than my homelab server’s gigabit Ethernet port. Do you know what the best part is? My FX-8350 has to pull more than 200 watts from the power outlet to push 900 megabits per second through Tailscale. I bet Brian’s N5105 doesn’t even make the room warmer.
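
If you want to see what your own hardware can push through Tailscale, iperf3 makes the test easy. The 100.x.y.z address is a placeholder for the remote machine’s Tailscale IP:

# On the remote machine
iperf3 -s

# On the local machine, run a 30-second test through the tailnet
iperf3 -c 100.x.y.z -t 30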

Those 2.5 gigabit Ethernet ports will cost me a lot of money!

I don’t have any 2.5 gigabit Ethernet gear in my house, but the prices are getting really competitive!

I would have to spend $30 putting a 2.5 gigabit Ethernet PCIe card in my workstation. I would need to spend $110 to add a 5-port 2.5 gigabit Ethernet switch to my office. Then I would need to buy a second switch to put in my network cupboard.

Winning Brian’s giveaway will cost me at least $250 in network gear!

Why you should want to win Brian’s DIY NAS 2023 giveaway!

This year’s NAS build is delightful. I have seen it. I have held it in my hands. I am envious of whoever manages to win this year.

That Jonsbo N1 Mini-ITX case is diminutive, looks awesome, and is packed as full as it can be with hard drives and other components. You can’t fit much else in there, and it would look absolutely amazing in my office.

Excited to win the DIY NAS

I have already written about how excited I am about the Topton N5105 motherboard. Topton has packed so many amazing features into such a tiny package. That power-sipping Celeron packs a lot of punch for 10 watts. It has six SATA ports and a pair of M.2 NVMe slots, so you can pack in a ton of storage. I am probably most surprised by the FOUR 2.5 gigabit Ethernet ports. I am impressed that you can get so much from such a tiny motherboard for under $300.

It is an amazing, power-sipping, compact NAS build this year.

Conclusion

It doesn’t specifically state in the rules of the giveaway that I am not allowed to win the giveaway. It says right there in the rules that ANYBODY can win. I count as an anybody, don’t I?!

I didn’t discover the Topton N5105 motherboard or the Jonsbo N1 case, but I definitely encouraged Brian to build this year’s DIY NAS around these components. I know that Brian would agree with me that it wouldn’t be fair for me to win the NAS. He doesn’t even have to feel bad about it, because I am doing my best these days to own and operate less server gear and fewer hard disks. I would have to buy a stack of disks that I don’t need to fill the Jonsbo N1 to the brim!

I hope my entry into the contest encourages you to enter as well. This year’s DIY NAS build is quite badass, and I think anyone would be proud to display it among their homelab gear!


My New Monitor: The Gigabyte G34WQC A Ultrawide

I have been avoiding a monitor upgrade for as long as I could. I have been running a pair of QNIX QX2710 monitors at my desk for nearly a decade. These have served me well, and were a tremendous value. I have had these IPS panels overclocked to 102 Hz almost the entire time, and I only paid $325 for each monitor in 2013. At the time I bought them, you couldn’t get a name brand 27” 2560x1440 monitor for less than $900.

The QNIX monitors weren’t perfect. Their color reproduction didn’t look all that far off from my new Gigabyte monitor in sRGB mode, but there was more than a bit of backlight bleed around the edges. I knew it was there, but it wasn’t so bad that I couldn’t live with it.

Placeholder Photo

NOTE: This photo needs to be replaced. I am rearranging video lights, camera mounts, computers, and monitors all around my office. Everything is a mess right now. If I knew I was going to use this photo in the blog, I would have turned on more lights and used a better camera!

I am staring at my new monitor while writing this, and I can say for sure that the old monitors looked just fine. Upgrades are always fun, and swapping in this nice new Gigabyte monitor has been an enjoyable experience, but this is kind of a sideways move for me. Going from two 2560x1440 monitors to a single 3440x1440 is both an upgrade and a downgrade.

There is a very specific problem with the QNIX monitors that has been holding me back. They only have a single dual-link DVI port on the back. My Nvidia GTX 970 was probably one of the last few GPUs to sport a pair of DVI ports.

Active DisplayPort to dual-link DVI adapters can be a bit janky. Some are really inexpensive, but a pair of good adapters that might actually work might cost me $50 or $60. That’s almost 20% of the cost of a new monitor.

I am in need of a GPU upgrade, so upgrading my monitor first made a lot of sense.

Are we already too far in for a tl;dr?!

I am quite happy with my purchase. I believe the Gigabyte G34WQC is the best 34” ultrawide monitor you can buy for under $400.

The G34WQC has excellent dynamic range, low latency and a rather fast 144 Hz refresh rate for gaming, FreeSync Premium support, and quite good sRGB color accuracy.

This is definitely the best monitor for me. You can keep reading if you want to find out why!

Why didn’t I buy a 38” 3840x1600 monitor?

I have been drooling over these monitors ever since the first one was announced. These would be a proper upgrade for me!

I should probably say here that my old 27” 2560x1440 monitors, 34” ultrawide 3440x1440 monitors, and 38” ultrawide 3840x1600 monitors all have the same exact pixel density. They all have about 110 pixels per inch. The size of each of their pixels is precisely the same. The bigger monitors just have more of exactly the same thing.
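
You don’t have to take my word on the math. Divide the diagonal resolution by the diagonal size in inches, and all three monitors land right around 110 pixels per inch:

pat@zaphod:~$ awk 'BEGIN { printf "%.1f\n", sqrt(2560^2 + 1440^2) / 27 }'
108.8
pat@zaphod:~$ awk 'BEGIN { printf "%.1f\n", sqrt(3440^2 + 1440^2) / 34 }'
109.7
pat@zaphod:~$ awk 'BEGIN { printf "%.1f\n", sqrt(3840^2 + 1600^2) / 38 }'
109.5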

There are only a few models of 38” ultrawide monitors on the market. They usually cost between $1,150 and $1,300, though you can sometimes find one for $999.99.

I would be willing to pay that much for a monitor. That is about what my old 20” Trinitron monitor cost me if you adjust for inflation, and that monitor was both used and physically damaged!

All the 38” ultrawide monitors are premium products. You can find 34” ultrawide monitors from brands like Sceptre for under $300.

The premium you have to pay for the extra 4” is high, and I am hopeful that someone like Sceptre will add a 38” ultrawide monitor to their lineup in two or three years.

It seemed like a wise move to save $600 or more today. Do you think I made the right decision? Maybe I will be able to add a 38” monitor to my desk in a few years with that $600 that I saved!

Why the Gigabyte G34WQC A?

I can’t believe how much effort I put into shopping for a monitor. I figured that all of the 1440p ultrawide monitors under $400 would be full of compromises, and I assumed those compromises would be pretty equivalent.

At first I had my sights set on the AOC CU34G2X. It usually sells for $340 on Amazon, but it was priced up at $400 on the day I was shopping. I immediately added it to my wishlist, and I said I would shop around until the price dropped back to $340.

Tom’s Hardware has a great review of this monitor. They tested the latency, and it scored pretty well. They said its HDR support was basically phoned in. Overall, though, I was pleased with the test results at the $400 price point.

Then I noticed the AOC CU34G3S, and it was also priced at $400. It seems to be an update to the CU34G2X. They both have similar quality 34” 3440x1440 VA panels. The cheaper CU34G2X supports up to 144 Hz and has a curve of 1500R, while the newer CU34G3S goes up to 165 Hz and has a curve of 1000R.

This is when I stopped, blinked a few times, and said, “Oh, poop!” How much of a curve do I want? That tight 1000R curve sounded like too much curve!

I would gladly pay $400 for the 165 Hz monitor, especially since it means I could order it immediately and have it on my desk in two days. I was more than a little worried about that more extreme curve.

I clicked on a lot more monitors, but most of them didn’t have reviews that included latency testing like Tom’s Hardware. There was an Acer Nitro for $360 that looked good on paper, but I couldn’t find a single technical review.

Then I stumbled upon the Gigabyte G34WQC for $380. Tom’s Hardware has a really good review, and all the graphs in the review included the AOC monitors that I was already interested in.

The Gigabyte monitor can only reach 144 Hz, but it still manages to match the latency of the 165 Hz AOC monitor. The Gigabyte has higher maximum brightness, and it has really good dynamic range. Not as much dynamic range as an OLED monitor, but 34” 1440p OLED monitors cost four or five times as much.

All of that data was fantastic, but I was most excited that the Gigabyte G34WQC only has a curve of 1500R.

Is 1000R really too much curve?

I have no first-hand experience with a 1000R monitor. I hit up Google, used my protractor, and did some math. I believe I correctly calculated that my two monitors were set at an angle equivalent to around 650R.

Two flat monitors with an angle in between is probably not directly comparable to a continuous curve, but coming up with such an extreme number makes me think that 1000R wouldn’t be as extreme as I feared.

I feel like 1000R would be amazing for first-person shooters. I worried that it would be awkward when I have Emacs in the middle of the screen and some terminal windows off to either side.

I am staring at a 1500R monitor while writing this. It hasn’t even been on my desk for a full 24 hours, and it is already making me think I would have been perfectly happy if I bought a 1000R monitor.

I do feel that you need to have some amount of curve on a monitor this size. My friend Brian Moses has two desks in his office. Both have 34” ultrawide monitors. One has a curve, the other doesn’t. I bet you can guess which one he prefers sitting at.

Why did I settle for a VA monitor?

I was already using IPS monitors, so you might assume that a VA monitor would be a downgrade. My IPS monitors were made with LCD panels rejected by the folks at Dell or Apple. Those LCD panels came off the same assembly line as the very best LCD panels of the time, and they were intended to be used in the most expensive monitors. There was just something they didn’t like about these batches, so they ended up in cheap monitors.

My other point is that this VA monitor has ten years of technological and manufacturing improvements over my old IPS monitors.

Of course I did check the prices on 34” IPS monitors. There was one oddball down at $450, but I couldn’t find any reviews on that one. The majority of 34” IPS monitors were priced at $750 and above, so they cost twice as much as any of the VA monitors.

If I were going to spend more than $750, I would most definitely have waited for a sale on one of the premium 38” monitors. They are all very nice IPS monitors, and sometimes you can find one on sale for $1,000.

Can you believe I am only using one monitor?

I have had at least two monitors on my desk for a long, long time. I used to have two Riva TNT2 graphics cards installed in my dual-Celeron ABIT BP6 machine connected to a pair of 19” CRT SVGA monitors from Sam’s Club. I believe this would have been right around Y2K. Do you remember CRT monitors and Y2K?!

My old 27” monitors are just about as tall as they need to be. I tried mounting a third monitor above the other two once, and that was way too far to be leaning my neck back. It was uncomfortable even just looking at a small terminal window at the bottom of the screen. I know the 38” ultrawide monitors would be 160 pixels taller, but that’s really only 80 more on top and 80 more on bottom. That would still be reasonable.

The most important thing I learned from using a pair of 27” monitors is that I can really only see about one third of the second monitor without turning my head. I know that I will continue to miss some of that extra screen, but a 34” ultrawide is roughly one third wider than one of my old 27” monitors. That is pretty close to the perfect width.

I was a bit worried that a 38” ultrawide might be too wide. Especially when playing full-screen games. I am much less concerned about this after having the 34” ultrawide on my desk, and I should have figured that out with math. A 38” monitor is only 400 pixels wider than a 34” monitor. That is only 200 more pixels to the right and 200 more pixels to the left!

Don’t let me talk you out of spending three times as much on a 38” ultrawide! I would certainly be excited to have one on my desk.

Let’s get back to the Gigabyte G34WQC A!

I was trying to find a compromise that is good for gaming, good for productivity, and easy on my wallet. I think the Gigabyte was a good choice, and it ticked almost all the right boxes for me.

You already know I was shopping for something reasonably priced. All the monitors I was looking at were $400 or less.

Productivity would steer most people towards something with a much higher DPI. 32” widescreen 3840x2160 monitors are quite common. My wife has a very nice 32” Dell 4K IPS monitor on her desk. It looks great, and it is around 140 DPI.

I could write 2,000 words about why I would prefer to stick to the same DPI. The short answer is that Wayland isn’t ready for me, and X11 doesn’t support fractional scaling. Everything is easier for me if I stay at 110 DPI, and I don’t think there are any high-DPI ultrawide monitors.

The 34” ultrawide is working out well so far. I have my screen divided up into three equal columns. Firefox is on my left with the awesome PPI Calculator open. My Emacs window is in the middle with the font enlarged slightly, giving me a little over 90 columns. To my right are a pair of terminal windows that are about 125 columns wide.

Davinci Resolve Ultrawide

It should definitely be noted that Davinci Resolve is just a little more comfortable with an ultrawide display. You can comfortably fit two widescreen viewers, the inspector tab, and the media pool on the screen at the same time. I used to have to scroll my media pool from side to side to see timecodes and clip lengths. I won’t have to do that anymore!

I have been firing up older first-person shooters that I am confident will keep up with the Gigabyte’s 144 Hz refresh rate. I wandered around for a bit in Borderlands 2, I played through a level of Severed Steel, and I have also been just generally farting around in Just Cause 3.

I ran the UFO ghosting test, and the G34WQC definitely has some ghosting. If I were smart, I would have run the test on my old monitors before putting them in the closet!

I can most definitely tell that the Gigabyte monitor at 144 Hz feels smoother than my old QNIX monitors at 102 Hz. Part of that is certainly due to the extra 42 Hz, but I suspect both monitors have roughly the same number of frames of ghosting. That probably means that the Gigabyte VA panel’s ghost frames fade away more quickly.

I have no science to back that up. This is how I feel playing the same games with each monitor.

I do have some complaints!

Can I start the complaints with a bit of praise? The G34WQC stand is pretty nice. It feels solid, it can tilt, and the height is easily adjustable. I removed the stand as soon as I made sure my long DisplayPort cable could manage 144 Hz at native resolution, because I always use monitor arms. I was excited to see that the Gigabyte stand is attached using the VESA mounting screws. That means I can attach it to any other monitor. I may wind up using it on one of the old QNIX monitors, since I have no idea where the stock legs went.

Zip Tied Power Cable Cheat

NOTE: Is snipping away ½” of strain relief and zip-tying a 90° bend in the cable cheating? Is it still cheating if it works?

My first complaint is the location of the ports. They all point downwards, and they are all rather close to the bottom. I had to search through my box of power cables to find the one with the smallest end, and I had to get creative with a zip tie to attach the power cable in such a way that it wasn’t hanging below the frame. Who wants to see cables dangling below their monitor?!

I need a long DisplayPort cable to reach my computer, so I am using my own. It has a fairly compact end, and I can still just barely see the cable from where I am sitting. I do have to duck my head down to see, but I shouldn’t be able to see it from my chair at all. The included DisplayPort cable has even longer ends than the one I am using.

The monitor is too vibrant with the default settings

Everything is rather bright, and the reds are crazy vibrant with the monitor set to the standard profile. Browns look a bit orange. Everything is eye-catching, but not in a good way.

I just set it to the sRGB profile, and I think it looks great. I did bump up the brightness a bit to fit better with the lighting in the room. I am assuming Gigabyte’s sRGB profile is calibrated fairly well. I am excited to learn that the color profile I have been using for years on my QNIX monitors wasn’t all that far off!

Conclusion

I believe I made a good decision, but I also don’t feel like there was a wrong choice to be made here. The Sceptre is probably a decent value at $300. Either of the AOC monitors seems fine both on the spec sheet and in the technical reviews on Tom’s Hardware. I don’t expect I would have regretted buying any of them, but I do think the Gigabyte was a better value for me.

I do have some regret that I didn’t splurge on a 38” ultrawide. For productivity work, like writing this blog, the 34” monitor just feels like a bigger monitor. Being 400 pixels wider would almost definitely make the 38” ultrawide feel much like two monitors without a bezel. Then I remember that I can nearly buy an AMD 7900 XT with the money I saved by staying one size smaller.

What do you think? Did I make the right choice with the Gigabyte G34WQC A? Why does every monitor have a terrible name? Are you already using one of those 38” ultrawide monitors? Do you think I should have spent three times as much for those extra four inches? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Am Excited About the Topton N5105/N6005 Mini-ITX NAS Motherboard!

I have been excited about the Topton N5105/N6005 motherboard ever since Brian Moses sent me a link to it while he was pondering his DIY NAS build for 2023. I literally haven’t been this excited about a motherboard and CPU combo for a low-power NAS or homelab build since I built my AMD 5350 homelab machine way back in 2015!

Topton N5105 on Brian Moses's Desk

I am writing about this now because my good friend Brian just bought a big box of these N5105 motherboards, and he is selling them in his eBay store. Am I allowed to tell everyone that?

Why should I buy a motherboard from briancmoses.com?

I could say a lot of things about how Brian is a good dude. We have been friends for decades, though, so you should probably assume that I am biased, and you should take anything I say that is just an opinion with a grain of salt.

I can definitely give you some verifiable facts. Brian has been buying parts for his DIY NAS build blogs and his giveaways for nearly a decade. I think the upcoming giveaway might even be the tenth anniversary. He buys the parts to make sure there won’t be any problems when you buy the same things to replicate his successful builds.

Two Geeks Exchanging Money

You can buy the Topton motherboard from Topton’s store on Aliexpress. I know the prices there move around, so I am not sure how much money you might save by going that route. Brian has already imported the motherboards, so there won’t be any surprise tariffs or fees. You won’t have to hope your package doesn’t fall off the boat on a slow shipment direct from China. I don’t know that shipments like this travel by container ship, but that is what I picture in my head.

Brian is taking on quite a bit of risk here, so I think his pricing is reasonable. I think that buying a motherboard from Brian is a great way to thank him for all the work he has put into his DIY NAS build blogs over the last eleven or twelve years!

Why is this Celeron N5105 motherboard so awesome?

I feel like I have to say a few words about my AMD 5350 build from 2015. That build used a Mini-ITX motherboard with a slow, power-sipping AMD laptop-grade CPU, four SATA ports, and room for two full-size DDR3 DIMMs. My server with two SSDs and two hard disks idled at around 35 watts, and the motherboard and CPU for that build cost just under $100. The AMD 5350 was from the same family of processors used in the PlayStation 4 and Xbox One.

That was an amazing combination at the time, and I shop for something comparable every year, but I usually come up empty. The Topton N5105 doesn’t manage to come in at quite the same price point, but it packs in so many extra features to help justify the extra cost. It also doesn’t help that everything is just more expensive today than two or three years ago.

Some of those extra features are just inevitable due to the passage of time, like the much faster CPU, the faster built-in GPU, and support for more RAM. We might be able to say the 2.5 gigabit Ethernet on the Topton board was inevitable, but so many motherboards still ship with 1 gigabit Ethernet, and the Topton has FOUR 2.5 gigabit Ethernet ports!

These are the highlights from the spec sheet:

  • Jasper Lake Intel® Celeron® N5105 CPU (base: 2.0 GHz, burst: 2.9 GHz, TDP: 10 W)
  • Mini-ITX form factor (17.0 cm x 17.0 cm)
  • 2 x DDR4 SO-DIMM slots 2400/2666/2933MHz (non-ECC) up to 32GB RAM
  • 2 x M.2 NVMe 2280 slots (PCIe 3.0 x1)
  • 6 x SATA III
  • 4 x 2.5Gbps (Intel i226-V) network interfaces
  • 2 x USB 3.0 ports
  • 4 x USB 2.0 ports (internal and external)

You can put this motherboard in a dense little case like the Jonsbo N1 or Jonsbo N2. You will have plenty of SATA ports to fill up all the bays with 20 TB 3.5” hard disks. You will have room for a couple of NVMe drives for boot, lvmcache, and maybe some fast virtual machine storage.

This crazy motherboard even has FOUR 2.5 gigabit Ethernet ports. Just one of those ports would be fine for my own personal use, but having more means you could even replace your home router with a virtual machine, and it should have no problem handling some of the fastest Internet connections available.

The best part is how efficient your home server can be with this board. I don’t want to spill the beans on Brian’s upcoming DIY NAS blog, but it is looking like a pretty much fully loaded Topton N5105 build can idle all day long at somewhere around 60 watts, and he didn’t even put in effort to curb power consumption.

Is the Celeron N5105 fast enough?

Fast enough for what?! I have a Raspberry Pi server sitting at Brian Moses’s house. That Pi can do AES encryption fast enough to keep up with the 12 TB USB hard disk, and it can pass encrypted traffic to my house via Tailscale at a little over 200 megabits per second. My Pi 4 is a very capable little piece of hardware, but the Celeron N5105 is at least four times faster.

My homelab server is built from spare parts. The CPU is an AMD FX-8350 that I bought in 2013. This was the fastest, most power-hungry consumer CPU that AMD offered at the time. My NAS virtual machine on my homelab server was able to transfer files via NFS or CIFS at 12 gigabits per second. My FX-8350 can move encrypted data via Tailscale at nearly 900 megabits per second. The FX-8350 is more than 40% faster than the N5105.

My Raspberry Pi and homelab servers are two varieties of orange that don’t compare all that directly to the Topton N5105.

My FX-8350 spikes up to 220 watts to push 900 megabits per second via Tailscale. Even though the Celeron N5105 is slower overall, it has more modern encryption acceleration instructions and more memory bandwidth than the ancient FX-8350, so the N5105 can push encrypted data via Tailscale at more than one gigabit per second. I don’t have Brian’s numbers on hand, but I recall it being a good bit more than one gigabit per second!

I don’t have an N5105 machine on hand to test myself, but I have no doubt that, with fast enough disks installed, it has enough horsepower to max out every one of those 2.5 gigabit Ethernet ports while serving files.

Why haven’t I built a Topton N5105 machine yet?!

I eat pizza at Brian’s house nearly every weekend. I bet it would be easy to sneak off with a motherboard!

The Topton N5105 is my dream homelab motherboard. It lands so close to the sweet spot for price, power efficiency, and horsepower while packing in a ton of SATA and network ports.

The Topton N5105 has more than enough CPU to run my Octoprint, NAS, and Seafile virtual machines. It would have no trouble saturating a couple of those 2.5 gigabit Ethernet ports, and having those would give me an excuse to start upgrading some of my machines to 2.5 gigabit Ethernet.

Best of all, the N5105 would probably save me 30 watts while turbocharging my server’s Tailscale throughput.

I always say that the best server for you is probably the one you already have. My ancient FX-8350 may be power hungry, but it would still take seven or eight years for the N5105’s power savings to add up to its price. Not only that, but the current hardware is doing its job just fine, and I am seeing a workstation upgrade in my future. That will mean free upgrades for the homelab!

Sometimes, building a new server IS the hobby!

My homelab server really isn’t much of a laboratory. It mostly runs services that I actually use. I just want everything to be reasonably priced and reasonably reliable. My homelab isn’t my hobby. My homelab-like gear is there to enable my other hobbies.

If building a cool little server is your hobby, then the Topton N5105 might be the motherboard you’ve been waiting for. Pairing it with either the Jonsbo N1 or N2 case would make for such a nifty, dense, shoebox-sized server.

I really like both of these cases from Jonsbo. The Jonsbo N1 would look cool at the edge of your desk or sitting on a shelf in the background of your video calls, while the Jonsbo N2 is more practical with its easily accessed hotswap bays for the hard disks. I would happily build a server using either one!

Is one N5105 server enough for a homelab?!

Everyone has a different idea of what constitutes a homelab, and everyone has different needs. One of our friends on our Discord server is running a fancy Kubernetes cluster at home on three or four Beelink SER5 5560U units. Another of our friends is consolidating his homelab down to a single off-lease Dell R730XD with a pair of 12-core processors, 192 gigabytes of RAM, and an Nvidia P40 GPU.

I think it is awesome that you can fit a cluster of separate physical machines in a lunchbox. I also think it is awesome that you can get a really good deal on beefy off-lease server gear.

The Topton N5105 is more than enough for my own homelab needs. Maybe it is enough for yours, or maybe a server built with this motherboard would be a good fit with the rest of your hardware!

I did see that the N5105 can be used for GPU video transcoding with Jellyfin.

Conclusion

I hope I haven’t spilled any beans about the 2023 DIY NAS build. I really do think the Topton N5105 motherboard is a nifty piece of hardware with a great combination of features at about the right price point. Having a power-sipping CPU, six SATA ports for a big RAID 5 or RAID 6, two NVMe slots for speedy media, and a ton of 2.5 gigabit Ethernet ports on a mini-ITX board is fantastic.

Writing this blog post is making me want to build a little server around the Topton N5105. Have you already built one? How is it working out for you? Do you think I should give in to the temptation and build one for myself? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Enabling Transparent Hugepages Can Provide Huge Gaming Performance Improvements

My gaming rig is getting rather long in the tooth. I am running a slightly overclocked Ryzen 1600 and an aging Nvidia GTX 970 with its thermal limit pushed to its maximum. I wouldn’t even be able to play any games from the last few years if it weren’t for Proton-GE’s ability to enable AMD’s FidelityFX Super Resolution (FSR) in almost every game I play.

I haven’t done a ton of science. I don’t have a handy way to benchmark most games. I did run a Borderlands 3 benchmark with my potato settings. I have nearly every knob turned to the lowest setting, and I bet I have some extra things disabled in config files. I run Borderlands 3 at 1280x720 with FSR upscaling to 2560x1440, and my hope is that the game can stay above my monitor’s 102 Hz refresh rate. It doesn’t always stay that high.
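
If you haven’t tried the FSR trick with Proton-GE, it is controlled by environment variables in the game’s Steam launch options. Set the game to a lower fullscreen resolution, and Proton-GE upscales it to your native resolution. The strength value is a sharpening knob from 0 to 5:

WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=2 %command%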

NOTE: I backed off the overclock of my aging QNIX QX2710 monitors while replaying Borderlands 3. I dropped them down to 86 Hz, and I will probably keep them here until my next monitor and GPU upgrade. It is easier to hit 86 frames per second in newer games, and it is enough of a step up from 60 Hz that I don’t feel too bad about giving up the extra frames. Why I landed on 86 is probably a long enough story for its own blog post. Can you believe these crazy monitors are still working great nine years later?

Borderlands 3 Benchmark

The benchmark came in at 92 frames per second with Transparent Hugepages (THP) disabled. That is the default setting on Ubuntu. That went up to just over 99 frames per second when I turned THP on.

Your mileage will most definitely vary, but when you’re constantly dropping just below your monitor’s refresh rate, that 8% improvement is huge! It is easy and free to give it a try:

pat@zaphod:~$ echo always | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
[sudo] password for pat: 
always
pat@zaphod:~$ 

That command won’t do anything permanent. You will be back to the default setting next time you reboot.
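
If you like the results and want the setting to survive a reboot, one option is to put it on the kernel command line. This sketch assumes you boot Ubuntu with GRUB:

# In /etc/default/grub, append transparent_hugepage=always to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash transparent_hugepage=always"

# Then regenerate the GRUB config and reboot
pat@zaphod:~$ sudo update-grub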

What are Transparent Hugepages? Why do they help performance?

Let’s keep this to two or three short paragraphs. Your software usually allocates memory in 4 KB pages, and your CPU has to keep track of which physical locations on your sticks of RAM correspond to those 4 KB pages. The CPU keeps a small cache of recently used mappings called the translation lookaside buffer (TLB). If your game is flipping through more pages than fit in that cache, things will slow down.

Hugepages are usually 2 MB instead of 4 KB, so a single hugepage covers as much memory as 512 ordinary pages. That means the CPU has to keep track of only a tiny fraction of the mappings. It is sort of like having a translation cache that is suddenly 512 times larger.
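
If you are curious whether your game is actually getting hugepages, you can peek at /proc/meminfo while it is running. The number below is just an illustration, not a measurement from my machine:

pat@zaphod:~$ grep AnonHugePages /proc/meminfo
AnonHugePages:   2609152 kB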

When something is in the cache, it is just like when an item is on the shelf at the store. When something isn’t in the cache, you have to ask an employee to fetch the item from the back room. Every time something isn’t on the shelf, you have to wait, and the CPU waits the same way on every cache miss.

THP has been a HUGE boost to my Team Fortress 2 performance!

Team Fortress 2 on Linux is stuck in a stupid place right now. The game uses a modern enough version of DirectX on Windows to work well with modern graphics hardware, but it is stuck using OpenGL on Linux. Since it is a multiplayer game, they don’t let us run the Windows version under Proton to get a performance boost. Valve has updated Portal 2 and Left 4 Dead 2 to use DXVK on Linux, and I hope they do the same for Team Fortress 2, but I am definitely not holding my breath.

Team Fortress 2 on Linux needs a lot of single-threaded CPU grunt, and I have always had trouble keeping the game up at my monitor’s 102 Hz. This is another one of those things I can’t easily benchmark.

NOTE: Not much going on in the video. I had OBS running a replay buffer, but this was the only time I remembered to hit the key to save a replay!

The game runs fine until I walk into a busy firefight on a server with tons of fancy hats and lots of explosions and effects. Then my frame rate drops far enough below my refresh rate that the game stops feeling smooth, and I start having trouble landing pills with my Demoman.

Enabling THP has helped dramatically with TF2. As far as I can tell, I have yet to drop below 102 frames per second, and I certainly haven’t dropped as low as my new 86 Hz refresh rate.

Quite a while ago I used mastercomfig.com to generate some potato settings for my game. The settings went so far that the weird cubic lighting made the game sort of resemble Minecraft. I am still using mastercomfig.com to lower my settings, but I have backed off several notches from the potato-grade settings.

It is a bummer that I have to play this ancient game with my GPU so underutilized that it sits clocked at the minimum frequency, but I am super stoked that I can play without my frame rates helping me to lose!

Will THP help with other games?

As I said, I am not using a ton of science here. I was playing through Dying Light when I learned that THP might help gaming performance. My unscientific test there was loading the game, waving the camera around in the room where I spawned, then reloading the game with THP and doing the same thing. The numbers seemed to be leaning at least 5% higher, but we are just going by my memory between reloads and hoping I pointed the camera at similar things.

Some games need more CPU. Some games need more GPU. Some settings lean more on one than the other. Even after that, things will depend on how much CPU and GPU your machine has. Some games could run slower, though I don’t think I have seen that yet. Some games might run the same. Some games might run a little better.

The only way to find out is to try.

THP can cause performance issues

There are reasons that the Linux kernel doesn’t enable transparent hugepages by default. Some programs run extremely poorly or cause problems when THP is forced on, and the most famous of those is probably PostgreSQL.

I have been running THP on my desktop for a couple of weeks now. I haven’t rebooted in nearly two months. I have had one hiccup so far. I wandered into my office and noticed that my Glances window had a red process using 100% of a CPU core. It was khugepaged. Its job is to defragment memory so the kernel can map more 2-megabyte pages.
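
You can watch what khugepaged is accomplishing in /proc/vmstat. The thp_collapse_alloc counter climbs every time it successfully merges a run of small pages into a hugepage; these numbers are just illustrative:

pat@zaphod:~$ grep thp_collapse_alloc /proc/vmstat
thp_collapse_alloc 48512
thp_collapse_alloc_failed 129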

In my haste, I didn’t see the root cause of my problem right away. I figured my web browser was my longest-running process that uses a large amount of RAM, so I closed and reopened Firefox. The problem went away for a few minutes, but then it was back.

It turned out that when I closed Davinci Resolve the night before, it didn’t actually completely shut down. There were no windows visible, but there were processes eating up memory and using a very small but constant amount of CPU. I killed Resolve and haven’t seen khugepaged since. That was a few days ago.

Conclusion

I know some of you are rocking much newer GPUs than my GTX 970, and you probably don’t need to wrestle an extra 5% out of your games. I am glad GPU prices are getting better, but I paid $340 for this GPU within a week or so of release, and it was the second-fastest card available. More modern cards that perform roughly as well still cost almost as much. Prices are getting better, but I feel like I will get quite a bit more bang for my buck if I can hold out on my next upgrade a little while longer.

If you need to squeeze a little extra out of your aging gaming rig, you should most definitely try enabling transparent hugepages. It is easy to try, easy to undo, and it seems very unlikely that it would have a negative impact on your gaming performance.

Tailscale is Awesome!

| Comments

I noticed a problem last week. Tailscale is at least a small part of nearly everything that runs on my own private network, so it gets mentioned in the vast majority of the blogs I write. I also tend to write a post every time Tailscale drops a new feature and I start using or relying on those new features, but those write-ups are rather specific. I don’t have anything recent to link to that talks about what I actually use Tailscale for on my personal network.

This is the post that I need to write to address this!

What on Earth is Tailscale?

If I just say that Tailscale is a mesh VPN, I feel like everyone should have a pretty good idea what I am saying, but the world has gotten weird. When I first started using them twenty-something years ago, a VPN was the magical thing that would put your Internet-connected computer on the other side of your company’s firewall, or it might be used to connect two physical locations using an Internet connection.

Marketing has convinced the vast majority of people that the service companies like NordVPN sell is the traditional use of a VPN, even though what those companies offer is just one of the many things you can do with a VPN connection.

Yeah, yeah. What in the heck is Tailscale?

You can shoehorn Tailscale into doing a lot of useful things, but how about we just talk about the basics? The idea is that you install Tailscale on all of your laptops, phones, and servers. Tailscale will work its magic and set up a direct Wireguard VPN connection between each of your devices. As long as they have a decent Internet connection, each of your devices will be able to make a direct network connection to any other device on your personal Tailnet.

You can be on your phone at Starbucks, connect to a test web server that is running on your laptop at home, and that laptop will be able to access a database server that lives at your office. Everything just works.

The best part is how easy it is to get up and running. You could probably already have signed up for a Tailscale account and had two or three machines connected to your new Tailnet in the time it took you to read up to this point. It is that easy to get going.
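
Don’t believe me? On a Linux box, Tailscale’s documented install really is a one-liner from their website, plus one command to log the new node in to your Tailnet:

pat@zaphod:~$ curl -fsSL https://tailscale.com/install.sh | sh
pat@zaphod:~$ sudo tailscale up

The phone and desktop apps are even easier: install, sign in, done.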

Tailscale is awesome because you don’t have to jump into the deep end

Tailscale is probably the easiest VPN to set up, and it is really easy to install it on two or three computers just to solve a single problem. Dipping your toe in is a great way to get started!

You can install Tailscale on your phone and your Home Assistant server, and you will immediately be able to access your home automation dashboard from anywhere in the world.

Maybe you need remote access to your NAS. Just install Tailscale on your laptop and your NAS, and you’ll be connected in no time.

If you keep adding one or two machines at a time to your Tailnet to solve small problems, it won’t be long before almost all your computers are part of your Tailnet.

This is definitely how I became so reliant on Tailscale. I started with just a handful of nodes on my Tailnet, and before I knew it, every device I own was on my Tailnet, and at this point almost every network service that I run is only accessible via my Tailnet. Say Tailnet again. Tailnet.

What is Pat doing with Tailscale?

I am going to expand on this, but I think it is best to start with a list:

  • remote access to Home Assistant
  • remote access to the Pi-KVM at my parents’ house
  • self-hosted Seafile file synchronization
  • node sharing with Brian and Jeremy
  • geography-agnostic servers
  • Tailscale SSH
  • ACLs to keep my exposed nodes away from my fragile ones
  • Tailscale Funnel for our blog deployments
  • TLS certificates and the Tailscale proxy
  • exit nodes
  • Tailscale on a travel router

Some of these things are simple enough that they don’t require their own heading.

Remote access to Home Assistant is kind of basic. The Home Assistant app on my Android phone can always communicate with my Home Assistant server no matter where I am, and I didn’t have to expose Home Assistant to the Internet. I can click a button to warm up my espresso machine when I am driving home, and that is awesome.

The Raspberry Pi-KVM is really cool. It is a do-it-yourself networked keyboard, video, and mouse device. You connect the Pi to a network and plug the appropriate HDMI and USB cables into a server. You can then use a web browser to see everything on that server’s HDMI port and type as though you are in the room with the server.

Tailscale means I can easily leave the Pi-KVM behind at my parents’ house and troubleshoot their problems from the comfort of my desk at home.

I am not using Tailnet Lock

Tailscale has a potential security problem. It is possible for Tailscale to add a new node to your private Tailnet. Up until a few months ago, you just had to trust that the folks at Tailscale wouldn’t add some sort of nefarious network device to your Tailnet.

Tailnet Lock closes that hole. When you enable the lock, you have to manually approve new nodes on your Tailnet. It seems to be implemented in a secure way. You can read more about Tailnet Lock in Tailscale’s documentation.

Tailnet Lock is not yet compatible with node sharing. I rely very heavily on node sharing, so I couldn’t enable Tailnet Lock today even if I wanted to.

I haven’t decided whether I even want to enable it. I don’t think I feel paranoid enough to worry about someone at Tailscale adding a node to my Tailnet.

Self-hosted Seafile has saved me $700 so far!

I wouldn’t have self-hosted Seafile again if I had to run the server on the public Internet. Tailscale lets me access my Seafile server from anywhere in the world, and my little Raspberry Pi server doesn’t accept any connections from the local network. It is only accessible via my Tailscale network.

I don’t have to worry so much about old, broken, exploitable services when they are running on my own personal Tailnet. The entire Internet can’t hammer away at the nginx server in front of my Seafile server looking for a hole. I don’t have to scramble around one morning when I discover that there is a patch for a zero-day exploit against Seafile. I can just remain happy knowing that only five computers in the world can see my Seafile server.

Seafile is syncing a little over six terabytes of data between my workstation, my NAS, and my laptop. The Seafile server also stores a copy of all that data with 90 days of history.

Google charges $100 per year for two terabytes of cloud sync storage. Dropbox charged $120 for the same. I can’t say that I have done the math perfectly here, because I haven’t done a good job of tracking when I actually cross each two-terabyte threshold. I do know that I would be paying $400 to Google or $480 to Dropbox this month if I weren’t hosting my own Seafile server.

I have been slowly spending those savings on local storage. The first $300 went to the Seafile Pi and its 14 TB hard drive. Another $200 or so went to a 12-terabyte drive for my workstation, and last month I added a 14-terabyte drive to my homelab server. If I don’t need to replace any failed hardware, the $400 I save each of the next three or four years will go right into my pocket!

Node sharing is Tailscale’s killer feature

We need to share large video files to produce our various video content. I usually have to send a 20- to 30-gigabyte video file to Jeremy so he can finish up the work on the Create/Invent Podcast, and Brian has to send me about 40 gigabytes of video when we record The Butter, What?! Show. They both have accounts on my Seafile server, and I have shared the server to their Tailnets.

Why is this such a killer feature? If I were doing this the old-fashioned way, I would have had to generate and send them Wireguard keys!

With Tailscale, I don’t need to maintain anything. I don’t need to generate keys. I don’t need to ship those keys to Brian or Jeremy. They just have to log in to Tailscale, and Tailscale manages all of that for me.

Brian already had his own Tailnet, but Jeremy was new to Tailscale. I sent him the sharing link. That sent him to the Tailscale website, where he was able to create an account, and he was taken straight from there to the download page. He was able to ping my Seafile server a few minutes later. All I had to do was send him a URL.

If Brian or Jeremy can’t log in to Tailscale, that is a problem for Tailscale’s support staff. They aren’t going to have to call me.

Geography-agnostic servers are pretty cool!

I made a mistake and created a problem with my Seafile Pi when I was upgrading all my machines to use Tailscale SSH. It was my fault, but I could no longer connect via SSH, and I needed to either sit down at the console or pop the boot drive out. I was at Brian Moses’s house one Saturday night for pizza, so I just made sure to bring the Pi home with me.

Seafile was down while I drove, but I plugged the Raspberry Pi in when I got home, so nobody noticed that anything happened. If Brian hadn’t seen me leave the house with the server, he wouldn’t have known that it was in a new location.

I fixed my Tailscale SSH problem, and I 3D-printed a new case with a slot for an OoberLights board. I think the Seafile Pi was on my desk for two weeks before I took it back to Brian’s house.

I think this is so cool. I can move a server to a new location for troubleshooting, and everything that connects to that server will still be able to connect. No configuration changes need to be made anywhere. Everything just works!

Tailscale SSH has been a nice upgrade!

Tailscale keeps adding new features. They are almost always useful features. They tend to be the sort of features that if you’re already using Tailscale, then there isn’t much excuse not to turn them on and try them out. Tailscale SSH is one of those features.

I have an SSH private key on my workstation. I have an SSH private key on my laptop. It is my job to make sure I install their matching public keys on every device I need to connect to via SSH. Then I have to remember to generate new keys on some sort of regular basis.

I always fail at the last part. One of the keys I was using last year was generated in 2013.

Tailscale is already generating a private key on every node to establish secure Wireguard connections. Why not let Tailscale handle all my key management for me? I can use Tailscale’s ACLs to control which machines can connect to other machines. My desktop and laptop can connect anywhere, my internal servers can sometimes connect to each other, but my public servers are never allowed to SSH anywhere.
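
Turning on Tailscale SSH is a single documented flag, run on the node that you want to accept connections. Octoprint here is just an example host:

root@octoprint:~# tailscale up --ssh

Once a node is running Tailscale SSH, any machine my ACLs allow can just ssh in, and no authorized_keys files are involved.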

It is unfortunate, but I still need to maintain SSH keys. I can’t use Tailscale SSH to authenticate to Gitlab, Github, or even Tailscale nodes that have been shared with me. Even so, cutting the number of places where I manage public keys down to four from dozens and dozens is a huge win!

Tailscale’s ACLs

There is a good chance that you won’t need to use Tailscale’s ACLs. I think I was running Tailscale for almost two years before I even considered setting up any access controls.

I decided it was time to tag all my nodes and configure ACLs when I added my first vulnerable node to my Tailscale network. I migrated the web server that runs the nginx server for our blogs over to a cheaper Digital Ocean droplet, and at the same time I added Tailscale to the server.

This server is answering requests over the Internet. Anyone sitting anywhere in the world could be banging on this server trying to get a shell. I don’t want them to have a point from which to attack the rest of my Tailnet if they succeed. I have fragile things like Samba servers hiding behind Tailscale!

These are my tags:

  • workstation (machines where I sit at the keyboard)
  • shared (via node sharing)
  • server-ts (servers only accessible via Tailscale)
  • server-dmz (servers accessible via Tailscale or LAN)
  • server-external (servers on the public Internet)

The workstation tag can connect to anything. The server-ts tag can connect to any of the three server tags. The server-dmz tag can connect to server-dmz or server-external, and server-external can’t connect to anything at all. The devices that have more exposure can never connect to safer devices.
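
Here is a rough HuJSON sketch of the shape of a policy like that. This is not my actual policy file; notice that server-external never appears as a source, so those machines can’t initiate connections to anything:

{
  "tagOwners": {
    "tag:workstation":     ["autogroup:admin"],
    "tag:server-ts":       ["autogroup:admin"],
    "tag:server-dmz":      ["autogroup:admin"],
    "tag:server-external": ["autogroup:admin"],
  },
  "acls": [
    // workstations can connect to absolutely anything
    {"action": "accept", "src": ["tag:workstation"], "dst": ["*:*"]},
    // Tailscale-only servers can reach all three classes of server
    {"action": "accept", "src": ["tag:server-ts"],
     "dst": ["tag:server-ts:*", "tag:server-dmz:*", "tag:server-external:*"]},
    // DMZ servers can reach the DMZ and the public servers
    {"action": "accept", "src": ["tag:server-dmz"],
     "dst": ["tag:server-dmz:*", "tag:server-external:*"]},
  ],
}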

Tailscale Funnel is solving real problems!

I wanted to yell something like, “I don’t know what I would do without Tailscale’s fun tunnels!” but that would be an exaggeration, if not a total lie. We muddled along just fine without Funnels. Our continuous blog deployments used to have a two- or three-minute delay, but with a Funnel, they happen within a couple of seconds of a new post being pushed to Gitlab.

What is a fun tunnel? If you have a service running on your private network, and you need to expose that service to the public Internet, then you can use a Funnel.

I added a webhook server to our little development server, and I configured Tailscale to point a Funnel from butterwhat.humpback-rooster.ts.net to our development server. Now a Gitlab action can let our development server know that a new commit is available, and it can publish our changes immediately.

Funnels are currently in alpha and have unpublished bandwidth limits. The traffic has to run through Tailscale’s relays, so the bandwidth is much lower than with a direct Tailscale connection. You probably don’t want to stream video over Plex or Jellyfin with a Funnel, but it is probably alright to host a low-traffic web server.

I would prefer to not need a Funnel at all!

My only Funnel is a consequence of not yet having Tailscale when we set up a Git repository so Brian Moses and I could collaborate on Butter, What?! blogs. We needed a repository we could both access, so I set one up for us on Gitlab.

I already had a virtual machine running on my homelab server that was handling Octopress and Jekyll blogs, so it made sense to me to just add one more blog to the mix. That server had a cron job that attempts to pull changes from Gitlab every few minutes, and if there were changes, they would be published to the Internet.

I improved the situation with a Funnel just because I could. The better solution to the problem would be removing Gitlab from the process. I shared the virtual machine with Brian via Tailscale a long time ago. Why aren’t we just pushing our updates directly?

Tailscale has a proxy server and handles Let’s Encrypt certificates for you!

Tailscale makes it easy to create TLS certificates for your devices. They have had this feature for a while, and their Funnel feature wouldn’t function without it.

I am way more excited about the proxy server that landed alongside Funnels. Not only does the proxy work with your Funnels, but it also works just fine on your Tailnet. You can ask Tailscale to create a Let’s Encrypt certificate for you, then have Tailscale proxy connections to an HTTP or HTTPS server for you.

root@octoprint:~# tailscale cert octoprint.humpback-rooster.ts.net
root@octoprint:~# tailscale serve / proxy 5000

There is no extra software to install. I don’t have to figure out which web server Octoprint uses. I don’t have to ask Google how to install a certificate on that particular web server. I had a proper HTTPS connection to my Octoprint server in less than a minute, and it was awesome.

Tailscale exit nodes might be a replacement for NordVPN and friends

Tailscale lets you designate machines on your Tailnet as exit nodes. You can configure an Android phone to be an exit node, and you can even share your exit nodes with your friends.

What is an exit node? It is a device that you can route all your Internet traffic through. If you’re sitting in a coffee shop on some sketchy WiFi, you can click a button and have Tailscale force all of your traffic through one of your exit nodes. The coffee shop will only be able to see encrypted packets flowing past while your real unencrypted traffic exits via a computer at your house.
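
Designating an exit node uses two more documented flags: one machine advertises itself, and your other devices opt in. The Tailscale IP here is just a placeholder:

root@droplet:~# tailscale up --advertise-exit-node
pat@zaphod:~$ sudo tailscale up --exit-node=100.64.0.42

You also have to approve the new exit node in the Tailscale admin console before your other devices can route through it.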

I have an exit node in my house, at Brian Moses’s house, and on a Digital Ocean droplet in New York.

Services like NordVPN claim to make you completely anonymous. I can’t speak to how truthful they are about this, but I can definitely tell you that you will not be completely anonymous when using Tailscale exit nodes. Maybe you would be somewhat anonymous if you set up a Tailscale exit node on an old Android phone, connect it to the Starbucks WiFi, then hide the phone somewhere near Starbucks. There is a flaw to this plan, but I am not sure it is a bigger flaw than trusting NordVPN.

You will definitely be hiding the content of your network traffic from the coffee shop, but whatever servers you are connecting to will see the IP address of your exit node. If you’re doing something nefarious, someone can link you to your exit node’s Comcast account.

I can use an exit node while I am on sketchy WiFi, but how often does that happen these days when our phones are fast WiFi hotspots? I am way more likely to use exit nodes to test website functionality from a remote location.

Tailscale on a travel router is neat, but isn’t quite ready

I have a really inexpensive OpenWRT travel router in my laptop bag. It is a GL.iNet Mango that I bought for about $20. They go on sale quite often. I had to shoehorn Tailscale on there because the Mango’s flash storage is ridiculously tiny.

All the recent models from GL.iNet have plenty of storage for Tailscale, and the latest beta release of their custom OpenWRT interface has Tailscale and Zerotier right in the GUI. You can get either up and running with just a few clicks!

The cheapest GL.iNet router that can run this beta firmware seems to be the GL.iNet GL-SFT1200 Opal travel router.

Why would I want the cheapest travel router? I enjoy the idea of having a tiny, USB-powered network device that I can leave behind. The cheaper it is, the less concerned I will be about not being able to recover the device!

I can leave it behind at my parents’ house so I can connect to their network to troubleshoot a problem. I can get permission to leave it at a customer’s site to remotely connect to some of their gear. I am sure you can come up with some use cases, both legitimate and nefarious!

Several of us on our Discord server would really like to be able to connect a Fire TV or Android TV to a travel router, then have the travel router pass all the packets through a Tailscale exit node. We haven’t had a ton of luck. I’ve managed to mostly make it work on the ancient version of OpenWRT that ships on the Mango, but the Mango can only pass VPN traffic at about four megabits per second. I haven’t had any success with newer firmware releases on nicer routers.

Conclusion

I knew that I was using Tailscale for a lot of small yet important things, and when I sat down to write this blog, I knew I would have to write a few paragraphs about each of those things. What I didn’t know was that I would be going way past 3,000 words! Does that mean I should keep the closing paragraphs brief?

Are you curious about Tailscale? Are you having trouble understanding what exactly Tailscale is or what it can do for you? Don’t worry about that. It should only take you a few minutes to get Tailscale up and running, and once you do, I expect you’ll have a much better idea about what is going on and how Tailscale can solve problems for you. I am solving problems with Tailscale that I didn’t even know I had!

Eliminating My NAS and RAID For 2023!

| Comments

I am excited! I have been working toward this goal for quite some time. As I am writing this, I am migrating data off my server’s old RAID 10 onto a single 14 TB USB hard drive. I should soon be finished and ready to physically remove all but one of the old 4 TB hard drives from my homelab server. Isn’t that awesome?!

I don’t know who this blog post is for. Sometimes I write things with an audience in mind, but this isn’t one of those times. I had some plans, I executed those plans, and I thought I should document that. I hope something in here is of use to you!

I can’t decide if I am talking about my NAS or just the large RAID storage

I am quite certain that when I chat about my NAS, I am also referring to the big, honkin’ stack of disks in a RAID, and not just the fact that the server shares files over my network. I am also aware that enough time has gone by that my quartet of 4 TB drives in a RAID 10 are no longer big nor honkin’.

I have had a RAID of one sort or another in my home since the very end of the twentieth century. In those days, individual hard disks just weren’t big enough to store all my data on a single file system, and we had nearly an entire decade in the middle of that time span where hard drives were poorly manufactured.

Hard drives have gotten reliable again, and disk sizes have outpaced the rate at which my data is growing. I don’t actually need a RAID to store the bulk of my data anymore.

I need a redundant array of inexpensive computers instead of just disks

I think it is safe to say that Tailscale was the tiny domino in front of the bigger choices that led me to the point I am at right now. Tailscale and fast Internet connections mean I can drop a tiny server anywhere in the world and treat it like it is on my local network.

Today I have just over six terabytes of data, and that is growing at a rate of around one terabyte each year. I figure I should have at least three copies of that data, and at least one of those copies should be in a different physical location. At least one of those copies should have some amount of history available just in case I accidentally delete an important file.

One of those copies of my data has been living on the RAID 10 array on the NAS virtual machine running on my homelab server, but those drives are getting full, and they are very old now.

tl;dr Let’s just list all the places where my data lives!

Hello. This is Pat from about six paragraphs in the future. I realized that I am going to use a lot of words explaining all the pieces of my backup and redundancy plan, and it might be prudent to just make a list of all the places where my data lives before I do that.

  • Seafile on a Raspberry Pi
    • 14 TB of storage
    • off-site at Brian’s house
    • 90 days of history
  • Homelab server
    • 14 TB of storage
      • I can bug out with the USB hard drive
    • opposite side of the house! (might survive a fire?!)
    • 90+ days of daily btrfs snapshots
  • My workstation
    • 12 TB of storage
    • No snapshots
  • My laptop
    • 1 TB of storage (not enough!)
    • Bulky data like video files aren’t synced here

Three full copies of 100% of my data. One copy is in a different location. Two copies have history. Backups are out of band.

A quick note about the Western Digital Easystore!

I am probably more excited about this than I should be, but the Western Digital Easystore USB hard drive that I bought has S.M.A.R.T. support!

root@kvm:~# smartctl -a /dev/sdd | head -12
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.12.12-xanmod1] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD140EDGZ-11B1PA0
Serial Number:    POTATOES
LU WWN Device Id: 5 000cca 28fec75bb
Firmware Version: 85.00A85
User Capacity:    14,000,519,643,136 bytes [14.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches

This has been hit or miss for me over the years with USB enclosures and hard drives. Sometimes cheap USB enclosures work while expensive ones don’t. In my experience, though, the majority of USB drives don’t support S.M.A.R.T. at all. The 14 TB Seagate USB hard drive that I bought for my Seafile Raspberry Pi has no S.M.A.R.T. support.

I have no idea if the extremely similar Western Digital Elements drives work with S.M.A.R.T. I don’t even know that all or even most Western Digital Easystore USB drives support S.M.A.R.T. I only know that I have had luck with the one I just bought.

What’s involved in my storage syncing and backups?

The heart of my storage and backup plan is my Seafile server. That server is a Raspberry Pi 4 with a 14 TB USB hard drive, and it lives at Brian Moses’s house. The server is only accessible via my encrypted Tailscale network.

You can think of Seafile as self-hosted Dropbox. As I am writing these words, this Markdown file is synced up to my Seafile server at Brian’s house every time I hit save. Not long after that, the Seafile clients on my laptop and homelab server will download a copy of those changes. This takes about 30 seconds.

I have Seafile set to keep file-change history for 90 days. I could probably already pull six different versions of this blog post out of that history, and I have only written five paragraphs so far!

Any files that Seafile drops on the USB hard drive on my homelab server will be snapshotted once each day.
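
Automating those snapshots only takes a single read-only snapshot command in a daily cron job. This is just a sketch with hypothetical mount points, not my actual job:

root@nas:~# btrfs subvolume snapshot -r /storage /storage/.snapshots/$(date +%F)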

Why did I plug a USB hard drive into my homelab server?

I haven’t decided how useful this is, but I think the concept is pretty neat!

I have a hole in my backup plan. I would really like to have an up-to-date copy of all my data on my laptop, but the 1 TB NVMe in my laptop is just too small. I keep saying that if there is a fire, I can grab my laptop on the way out of the house. It would be nice if I didn’t have to worry about pulling down 6 TB of video from Brian’s house right after a fire, wouldn’t it?

Pat's server closet

NOTE: I still have to drill a hole to run the wires to the network cupboard correctly. Only one of those computers is doing anything. I thought my homelab server looked lonely, so I brought in some old computers from the garage to sit on the table with him.

When I talk about a fire, I don’t literally mean a fire. Maybe the weather forecast is predicting massive flooding, and we should drive to Oklahoma. Maybe there is an earthquake, and we notice terrifying cracks in the walls. Maybe there is a gas leak. Maybe a CIA agent shows up, and we have to take a glass of water to a UFO.

There are a lot of bad things that can happen where I wouldn’t have to get out of the house in seconds: emergencies where I would have time to pack my laptop bag.

In those cases, I can just grab the USB hard drive and take it out the door with me!

I am keeping a RAID for the virtual machines on my homelab server

I set aside a 1 TB partition on my new 14 TB USB hard drive for storage of virtual machine disk images. Why 1 TB?!

If I don’t count my existing NAS virtual machine, my VM disk images add up to something not much more than 200 GB. That is just too big to comfortably fit on the old SATA SSDs that my homelab uses for booting and lvmcache. One full terabyte is plenty of room for these virtual machines to grow, and it will be easy to replace this volume with a $50 SSD if I have to.

I built a RAID 1 out of the 1 TB partition on the USB hard disk and one of the old 4 TB hard disks, then I moved all my KVM qcow2 images to that new 1 TB RAID 1.
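
For anyone who wants to try something similar, assembling a mirror out of two mismatched partitions like that is a one-liner, and you can watch the initial sync in /proc/mdstat. The device names here are hypothetical:

root@kvm:~# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdd1
root@kvm:~# cat /proc/mdstat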

I think this is super cool! I can walk away with that USB hard disk and my virtual machines will just keep running. Home Assistant will continue to run my home automation, and Octoprint will continue to run my Prusa MK3S.

The opposite will work. If the aging hardware in my homelab machine fails, I can install Debian on any other computer. Then all I have to do is plug the USB hard drive in, point the QEMU configuration to the right place, and I can boot Home Assistant and Octoprint on the new machine.

NOTE: I need to remember to set up a job to regularly back up my QEMU config files to the USB hard drive, or else it will be a pain to replicate all the machines on a new server!
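
Something like this hypothetical one-liner in a daily cron job would take care of it, assuming the USB drive is mounted at /mnt/usb:

root@kvm:~# for vm in $(virsh list --all --name); do virsh dumpxml "$vm" > "/mnt/usb/qemu-configs/$vm.xml"; done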

I think it is really cool that I will be able to easily carry all my virtual machines away with me if I ever have to run out the door.

I thought of a new goal that I should keep in mind!

This is something I have been doing ever since I hauled the Seafile Pi to Brian’s house. I just never put the idea into words.

Faster is always nice, but I am going to do my best to make sure my storage, synchronization, and backups work well even if my network is limited to 100-megabit Ethernet.

Last year, my workstation just didn’t have enough storage to hold much video, so I would edit files stored on NFS over my Infiniband link. That was great, but Infiniband only works over extremely short distances or with fiber optics.

Installing a big disk in my workstation and putting it behind an lvmcache fixed that problem. I can accumulate terabytes of video, but the files I am currently working on will always be accessible at NVMe speeds.
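
Attaching an lvmcache to an existing logical volume only takes a couple of commands on a reasonably modern LVM. This is a rough sketch with made-up volume group and volume names, not my actual layout:

root@zaphod:~# lvcreate -L 200G -n fastcache vg0 /dev/nvme0n1p3
root@zaphod:~# lvconvert --type cache --cachevol vg0/fastcache vg0/video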

My Raspberry Pi is stuck at 100 megabit for some reason. I tried quite a few cables, switch ports, and switches. Those are cables and ports that negotiate gigabit just fine with my other Raspberry Pi. My Internet connection at home is only 150 megabit, anyway, so this hasn’t been a big deal.

Just about the only time this causes any sort of issue is when we record podcasts. We generate a few dozen gigabytes of video files in two or more locations, and it takes an hour or three to get all those files uploaded or downloaded.

This only happens about twice a month, but it is rare that I am in a rush to open Davinci Resolve immediately after an interview. It is usually fine letting this wait a day.

How much did this cost? Is this a better value than using smaller disks in a RAID 5 or RAID 6?

Ugh! This is one of those situations where it is tough to make direct comparisons. It would have surely cost more money if I put two hard disks on the Pi, the homelab server, or both, but maybe three or four smaller drives in a RAID 5 could provide some redundancy without bringing up the cost by much.

More disks would require more SATA ports or more USB ports, and I am not terribly confident that sticking three or four USB disks in a RAID 5 would be stable. It would probably work, but mdadm might kick good drives out if they happen to respond too slowly.

You can get 14 TB USB hard drives for about $200, assuming you wait for a good deal. I think it is safe to say that even if we include tax, I paid less than $700 for my three hard drives.

I bought the Pi long enough ago that I got a good deal on it, and counting that would feel like cheating, so I am going to ignore the compute side of things. I am just going to assume you already have some sort of server setup at home like I do.

I have the Seafile Pi hosted for free at Brian Moses’s house, and it is currently storing just under six terabytes of data. That would cost me $300 annually if I were using Google Drive or $360 with Dropbox, and I think I am about to be at the point where I would be charged for my third year with either of those services. Thank goodness I hosted my own file-sync service!

RAID is not a backup!

I always feel like I need to say this. RAID is there to reduce downtime or maybe increase performance. If one or maybe two drives fail, you can just replace them, and everything will be fine. That can save you hours of work. You won’t have to reinstall an operating system. You won’t have to restore from backup. You won’t have to reconfigure anything.

If your disk controller or its driver goes wonky, you might ruin the data on every disk in your RAID. That could take your data and every single one of your ZFS or btrfs snapshots with it. Snapshots are nice to have, and can be a vital part of a backup plan, but snapshots aren’t much of a backup on their own!

Earlier, I mentioned that my backups are out of band. That means my backups are done outside of normal channels. In my case, Seafile is copying data to and from the server via its own protocol.

If your backup destination shows up as a normal disk to your operating system, then it is potentially open to most of the same problems, accidents, and attacks as the data you are trying to back up. This is even worse if you leave that backup storage connected all the time. If some ransomware can encrypt and hijack your files, then it can do the same to the backups on your USB drive or mapped share.

You should have another layer in there to make sure you can’t lose your backup.

Did I get to the end of this weird sideways-upgrade project?

I am willing to answer that question in the affirmative! Three of the four disks from the old RAID 10 array have been removed from the server. All my virtual machines are booted from disk images stored on the 1 TB partition on the USB hard disk. That 1 TB partition is now in a RAID 1 array with a 1 TB partition on the youngest of the ancient 4 TB disks. That mdadm RAID 1 array is encrypted using LUKS.

The fresh NAS virtual machine is running Debian 11. There are no file shares on this NAS, so it probably isn’t really a network-attached storage, but the hostname implies that it is still a NAS! The remaining 12 TB of the USB drive is encrypted using LUKS and attached directly to this new NAS virtual machine. It now has a big btrfs file system with 99% of the contents of the old, retired NAS virtual machine.

I have a Seafile client running on the new NAS, and that client seems to be syncing every relevant Seafile library that should have a copy on the NAS.

My homelab server has always had a simple script that I run manually after boot. It unlocks the LUKS-encrypted storage, scans that storage for logical volumes, mounts everything that needs mounting, then fires up the virtual machines that live on that encrypted storage. That has all been updated to match the new disk layout.
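
That script looks roughly like this simplified sketch; the device, volume, and virtual machine names are hypothetical:

#!/bin/sh
# Unlock the LUKS container sitting on the mdadm mirror. This prompts for a passphrase.
cryptsetup open /dev/md1 crypt-vms
# Activate any LVM logical volumes found on the freshly unlocked device.
vgchange -ay
# Mount the VM image storage and fire up the important guests.
mount /dev/vg-vms/images /var/lib/libvirt/images
virsh start nas
virsh start homeassistant
virsh start octoprint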

I have not set up automated daily btrfs snapshots. I will do this soon. I promise!

What is my next home network or server upgrade?!

It seems like I made a lot of changes in 2022! I upgraded my OpenWRT router to a much more capable piece of hardware, so I am now fully prepared to call the ISP and upgrade to symmetric gigabit fiber. I installed the latest OpenWRT on a couple of other routers and sprinkled them around the house and set up 802.11r WiFi roaming.

I have done some work to get my aging AMD FX-8350 homelab server down under 2 kWh of power consumption per day. I probably just shaved a bit more off that by removing some hard drives, but I wouldn’t mind taking this further!

I have been watching my friends on Discord pick up tiny boxes like the Beelink SER5 5560U for $300 or the Beelink with an N5095 for $140. The Ryzen 5560U is a huge upgrade for me, and also extremely overkill. The N5095 would sip power but is comparable in speed to my power-hungry dinosaur of an FX-8350, though my FX-8350 has four times the RAM of the $140 Beelink. That’s something a cheap RAM upgrade could fix, but the more a sideways move like this costs, the longer it will take to pay for itself.

What do you think? Should I downsize into a Beelink N5095 whether it is cost effective or not? I do enjoy the idea of seeing how much homelab and NAS can be crammed into a Star Wars lunch box, but I am also not excited about turning my ancient FX-8350 into e-waste for no real reason. Let me know what I should do in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

One Weird Trick To Use When Your First Layer Won’t Stick

| Comments

I have been 3D printing for eight years. If the first layer wasn’t sticking on my first printer, I would just make sure the nozzle was at a reasonable height, then load up the glass build plate with more hairspray. There wasn’t much that hairspray wouldn’t fix.

The Prusa MK3S with its PEI sheet has been a huge upgrade. Every now and then I print something that is a little persnickety, and I have to put down a layer of glue stick to make sure some tiny perimeters don’t peel up. For the most part, though, the PEI does a good job holding my prints and releasing them afterward.

Small Prints at the Corners

NOTE: The problematic corner still needed a bit of glue stick with these tiny, angular filament clips!

My experience hasn’t been perfect. I have a lot of trouble with bed adhesion close to the edges. Even with Prusa’s 49-point mesh bed leveling, the extremes are either a bit too close or too far. I also suspect that one corner of my PEI sheet is just crummy.

I have tried adjusting the bed level correction. I think I have improved things, but I haven’t completely eliminated the problem. I assume this problem is a result of the PINDA probe being way over to the right. That means it has no way to probe really close to both edges of the bed.

I was hunting for a new solution this week, because I have a different problem. I grabbed a spool of Anycubic gray PLA for $19 to print a mess of GridFinity bins. I had some trouble with adhesion on a long bin, so I started doing some troubleshooting.

Gridfinity bins at the corners

NOTE: I didn’t get as close to the edge on the right as I thought I would!

My first layers are more than a little inconsistent with this filament. I suspect it just isn’t very good filament.

Never troubleshoot your overall first-layer issues with crummy filament. You’ll probably just screw things up for when you swap back to nice filament. Even so, I wanted to continue printing cheap GridFinity bins.

This one simple trick seems like a good idea!

PrusaSlicer and SuperSlicer default to a 0.42 mm extrusion width for the first layer. Other slicers use a wider extrusion for the first layer. Why would you want to use a wider extrusion for the first layer?

The wider extrusion width makes up for being a bit too far from the print bed or minor underextrusion.

If everything is calibrated well, everything will work out very much like the defaults. If you’re printing just a hair too far above the surface, that extra extrusion means that you are more likely to still get just enough squish to lock the filament to the PEI.

SuperSlicer settings

In my case, this means I can calibrate for a perfect first layer near the center of my PEI sheet, but I am still getting enough contact out at the problematic corners and edges for my print to stick.

You’d get to see a diagram here if I were artistic, or if I had a pen for my 2-in-1 tablet!

Won’t wider extrusions make the printer lay down too much plastic?

No. The slicer does the math correctly. If the first layer would be 30 lines wide with a 0.42 mm extrusion width, then the first layer will be about 25 lines wide with a 0.5 mm extrusion width.

The lines will be wider, so they will be spaced farther apart.

A tiny update!

While I was writing this blog I was thinking about why I stopped at 0.5 mm extrusion width for the first layer. I’ve heard that double the nozzle width should be acceptable, and I am already printing infill at 0.6 mm. Why not push things a little more?!

Printing at 0.75mm extrusion width

I am now running with 0.75 mm extrusion width on the first layer. It seems to be working great. Even my problematic corner managed to stick, even though my Z-offset is definitely a bit higher than it should be. Look at the size of those lines!

I don’t trust this filament, so I am not going to try to tune my first layer until I finish off the spool with Gridfinity bins. The Z-height doesn’t make a lot of sense to me. The top-left square is close to correct. The bottom-right is usually my difficult corner, and that one is printing a bit too close, but the top-right square is closer than the middle-right square.

Conclusion!

I am using a 0.5 mm extrusion width in SuperSlicer with the Arachne perimeter generator. I am not sure if Arachne is being used on the first layer, but I assume that it is. I might try pushing this up to 0.6 mm because that is what I’ve been using for infill. If it works well for infill, then it ought to work for the first layer. The first layer is the slowest, so printing fewer lines ought to be a decent speed increase!

What do you think? Have you been having trouble with your first layer? What have you tried doing to correct it? What 3D printer are you using? Have you tried wider extrusion for the first layer? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Finally Repaired my Baratza Preciso Coffee Grinder

| Comments

I am trying to figure out when I shelved my Baratza Preciso. I suspect that it has been more than a couple of years! It stopped grinding fine enough to pull a slow enough shot of espresso from my Rancilio Silvia, so I started using my wife’s Baratza Encore and ordered some replacement parts.

Baratza Preciso and Baratza Encore

If my memory is correct, I started working on the repairs and then the big plastic drive gear gave up, so I also gave up. I have been limping along making lattes using a grinder that isn’t up to the task ever since. At least until last week!

These words won’t help much if you are shopping for a coffee grinder!

I am quite pleased with how my Baratza Preciso has held up. I have had it for eight years, and I am still using it today. It did need some inexpensive repairs over the years, but it is chugging along.

Baratza no longer makes the Preciso. It has been replaced by one of the grinders in the Baratza Sette lineup, but I am not sure which one.

I can’t speak for the Sette, but I can say that Baratza has amazing support, and they offer so many replacement parts that you could probably assemble an entire grinder from spare parts. If you buy a Sette or an Encore today, I have confidence that you can repair it in 10 years.

tl;dr: The reason I just had to write this!

The Baratza Preciso is SO FAST! It takes the Encore well over a minute to grind 18 grams of coffee for an espresso. The Preciso seems to grind that much in less than twenty seconds. I have watched the Encore with a stopwatch, but I only counted out the Preciso to 15 hippopotamuses, and I tend to count out time a bit more slowly than it actually passes.

I forgot how slowly the Encore grinds. I wrote a blog many years ago about how it takes less than six minutes for me to make a latte. I often head to the kitchen before sitting down at my desk to record a podcast. I look at the clock, and it says I have way more than six minutes available! This should be no problem!

Coffee with 3D-printed cocoa logo

Then, by the time I am finished doing my barista duties, I am almost late! Did I time myself incorrectly in 2013? Nope. I just didn’t realize how much time the Encore was adding to my routine.

This might be worth keeping in mind when you’re shopping for a grinder. I make two lattes every day, so I am probably saving three minutes a day. That’s 21 minutes each week, 90 minutes each month, or over 1,000 minutes each year.

I tend to make a latte either right before or in the middle of a task. Today I sat down, created an empty blog from a template, and wrote the opening paragraphs that you just saw. Then I got up, wandered to the kitchen, and made a delicious latte with Yemen Mokha Matari from Sweet Maria’s.

One might consider wasting a few extra minutes taking a break to be valuable. Others might find it more important that they’re saving 21 minutes each week when making coffee for their friends.

The problem with the Baratza Encore

There’s more than one problem. We are just going to ignore that the Encore probably produces an inconsistent grind for espresso. That is for sure a problem, but it is minor compared to the real problem.

What do you do when a shot pulls too slowly on setting 3 and too fast on setting 4? There’s no option in between, so all you can do is adjust the dose. If you adjust the dose, then you also need to adjust the volume of extraction. Adjusting two things at once makes tuning more difficult.

The Baratza Encore’s grind is more of a problem than I thought!

I wrote this entire blog two months ago, but forgot to put the finishing touches on it so I could publish it. Two months of being back to using the Preciso again have shown me another problem with the Encore. The cheaper grinder produces so many more fines!

I try to pop my shower screen off once a month for cleaning. This time the screen didn’t look any worse for the wear. Hardly any coffee grounds are sneaking past the tiny holes in the screen now that I am using the Preciso again. It was always so much dirtier with the Encore.

This surprises me a bit. Both grinders are obviously related, and they use the exact same upper burr. The Encore has a slower, weaker motor, and the lower burr is different.

I assumed the difference in the lower burr would just be the number of blades or how aggressive the angles on those blades might be. This is probably correct, but I didn’t realize that this could make a difference in the quality of the grind!

What happened to Pat’s Baratza Preciso?

You are going to have to forgive me. I don’t even know for sure what the order of problems and solutions has been, but I will do my best to give you a timeline.

I know for sure that I got to a point where I was grinding at the Preciso’s absolute finest setting, and shots were pulling in less than 10 seconds. It was awful!

I ordered replacement parts. I have several of the replacement plastic adjustment doodads and the plastic burr holders on hand now. There’s a tiny screw you can use to fine-tune the adjustment ring. I pushed that fine-tuning screw as far as it could go, and I was still getting 10-second shots of espresso.

At that point, I even tried to cut a shim out of a business card. I put that shim between the burr and the plastic retaining ring in an attempt to get the burrs closer together. I’m pretty sure this helped a bit, and I am almost positive that I brewed espresso for a few months like this. I bet this is what caused me to strip the drive gear and finish completely chewing up the upper burr.

The Baratza grinders have a lot of plastic parts

And I have managed to break every single one of them. This was the second time I’d stripped the drive gear, and I happened to order two the first time it happened. I don’t think I knew this at the time. I just threw my hands up in the air, put the Preciso out in the garage, and limped along with the Encore.

Baratza Preciso Broken Gear

The plastic parts are probably a good thing. The idea is that you’re supposed to blow out a fragile $3 plastic part if there’s ever a rock in your bag of coffee. That’s better than destroying an expensive burr.

With the price of the burrs for the Preciso, it might be better if everything were overbuilt and I had to replace a $16 burr every few years. Your mileage may vary.

I wasn’t sure if I should order the replacement burr

There are two burrs in the Preciso. The upper burr is $16 and is extremely easy to replace. The lower burr is $45, and it looks like it is challenging to get it unscrewed from the drive assembly.

I could tell that the upper burr was damaged, and it looked like the lower burr was fine. I am assuming that the bigger burr on the bottom doesn’t do much cutting. It probably just pushes the beans into the upper burr as it spins.

I spent about two hours when I replaced the drive gear. Replacing the gear wasn’t too difficult, but when the machine was still grinding too coarse, I wound up taking it apart two more times to move shims around and try adjusting things.

Baratza Preciso Upper Burr

When that didn’t work out, I wished I had spent those two hours working toward getting our LumenPNP pick-and-place machine up and running. When the pick and place works, we can start selling OoberLights boards. When we sell OoberLights boards, there will be cash flow. I could use that cash flow to buy a grinder upgrade.

I do want a grinder upgrade. I’ve had my eye on the flat-burred Turin DF64 grinder for a while. It would be a really nice upgrade!

I didn’t know if I should repair the Preciso. The burr was about $25 after shipping and tax. That’s 5% of the cost of my next grinder. Not only that, but I suspected I would have to spend an hour taking the Preciso apart again. Maybe it would be better to put that $25 toward the price of a grinder upgrade and not waste an hour of my time getting angry at the old grinder.

I spent the $25. I did take the Preciso apart again to undo my adjustments to make sure the new burrs would never touch each other. It wound up being a good choice.

I am drinking a latte right now. I did not quite hit the right grind today. With 18 grams in and the Preciso set to 4F, my light-roast Yemen gave me 31 grams out in 45 seconds on my Rancilio Silvia.

This isn’t far off from my ideal shot for a latte. I could easily write 2,000 words about why I aim for a longer pull with a ratio of a little more than 1.5, but that would be drifting way off topic.

The important thing to note is that I can still go three entire clicks finer on an extremely light bean. I am calling this a successful repair.

UPDATE: A slightly darker roast Ethiopian coffee was able to completely choke the machine with the grind set to 4F! I had to bump it up to about half-way past the 6 setting to get a good pull. I think this means I have done a good job!

The thrilling conclusion?!

I don’t know if the conclusion is thrilling, but I am excited to have my Baratza Preciso working again, and I am quite happy that I didn’t have to spend $400 or more on a grinder upgrade this year. I expect that I will get at least a few more years of service out of the Preciso before I need to upgrade.

What do you think? Should I have junked the Preciso and splurged on something like a Turin DF64? There’s a good chance that next grinder will outlast me. When I eventually upgrade, will I be kicking myself for not doing it a few years sooner? Will my coffee taste that much better?!

Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!