Windows 11 Start Menu and Quick Settings Opening Slowly


I bought an Asus Vivobook Flip 14 2-in-1 laptop last month. It isn’t a speed demon, but with an 8-core Ryzen 7 5700U, 16 GB of RAM, and an NVMe that can push several gigabytes per second, it is definitely not a slow machine.

This is why I was surprised by how slow some parts of my Windows 11 experience have been, but only intermittently. Sometimes I would tap the Windows key to bring up the start menu, and it wouldn’t show up right away. Other times it would show up instantly, but I’d start typing and it would just sit there.

Sometimes it’d let me type enough to see the program I want to run, then I’d hit enter, and it would look as if nothing happened. I could click the menu item that says open, but nothing would open. Then five or ten seconds later, something would happen.

Sometimes I would touch my Wi-Fi or battery icon in the taskbar, and it would take several seconds for the quick-settings panel to slide in.

The problem is that this happens at all. This is core functionality that I’d expect would be quite optimized. How does this ever happen on a mid-range computer? The intermittent part has made the problem difficult to solve, but I think I finally have!

tl;dr

I disabled memory compression. How do you do it? Open PowerShell with admin privileges and run:

Disable-MMAgent -mc

You can turn memory compression back on with this command:

Enable-MMAgent -mc

Should you do this? I don’t know for sure, but it can’t hurt to try, especially if you have plenty of RAM and a good, fast NVMe.
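If you want to see where you stand before flipping the switch, the same PowerShell module can report the current state. This is a sketch from my notes; `Get-MMAgent` prints a `MemoryCompression` field, and as far as I can tell, the change doesn’t take effect until you reboot.

```powershell
# Show the current state; look for the MemoryCompression field.
Get-MMAgent

# -mc is short for -MemoryCompression; as far as I can tell, the
# change does not take effect until you reboot.
Disable-MMAgent -MemoryCompression
```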

What wasn’t the problem?

I am reasonably certain that this tends to happen most often shortly after waking the laptop up. I am a Linux professional of some sort. At least, I assume I still count as a professional. I am pretty ignorant of precisely what Microsoft has been up to during most of the last decade.

I’m pretty sure sleeping isn’t just sleeping. I’m pretty sure Windows dumps your RAM to storage just in case the battery dies during sleep. I’m pretty sure that when you shut down your computer, that is just a fancy version of hibernation. The trouble is I am only pretty sure about so many things.

The first thing I tried was telling Windows to run the laptop in performance mode instead of power-save mode. Since the problem only happens some of the time, I had to run like this for a couple of days. I thought it was helping, and it might have been helping a bit, but there were still times when I had a lot of latency when I banged on the start menu.

The Asus Vivobook laptop was noticeably warmer. It wasn’t hot, but I could tell it wasn’t cool anymore. Not a huge deal, but heat is a waste of battery, especially since the laptop didn’t really feel any snappier.

Why disable memory compression?

It was hard to get good answers. In fact, I don’t think I got good answers. Intuition says I should give this a try, and it worked.

I don’t know what compression algorithm Microsoft is using, but I hope it is something fast with poor compression ratios. Something like lz4 decompresses at about 2 GB per second on a fast machine, and I bet it goes even slower on my laptop. Why waste CPU cycles and time compressing RAM when you can swap it to your NVMe just as quickly or even more quickly?
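My hand-waving about decompression versus swapping is easy to put numbers on. This is just back-of-envelope arithmetic; the 2 GB/s lz4 figure is from above, and the NVMe read speed is an assumed round number, not a measurement.

```python
# Back-of-envelope: how long to bring back 1 GB of paged-out pages?
# The 2 GB/s lz4 rate is the figure quoted above; the NVMe rate is an
# assumed round number for a mid-range PCIe 3.0 drive.

working_set_gb = 1.0
lz4_decompress_gbps = 2.0    # on a fast machine at full clock speed
nvme_read_gbps = 3.0         # assumption, not a measurement

decompress_s = working_set_gb / lz4_decompress_gbps
swap_in_s = working_set_gb / nvme_read_gbps

# In power-save mode at half the clock speed, decompression time roughly
# doubles, while the NVMe doesn't care what the CPU is doing.
throttled_decompress_s = working_set_gb / (lz4_decompress_gbps / 2)

print(f"decompress: {decompress_s:.2f} s")            # 0.50 s
print(f"swap in:    {swap_in_s:.2f} s")               # 0.33 s
print(f"throttled:  {throttled_decompress_s:.2f} s")  # 1.00 s
```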

There are a ton of cargo-cult comments around the Internet talking about disabling memory compression and disabling the SysMain service (formerly Superfetch). There are just as many cargo-cult comments telling people not to disable either one.

They say the current implementation of Superfetch understands that you have fast SSD or NVMe storage. I have no information about whether or not you should disable it, but I haven’t. It doesn’t seem like something that should be necessary today, but I also haven’t attempted to measure this.

If I understand correctly, my Ryzen 5700U will be stuck at around 2 GHz for at least several seconds any time the CPU gets hammered. This is a trick that keeps the clock speed from hitting 4 GHz when something only needs to burn a bunch of CPU cycles for half a second.

My guess is that after sleeping, idling, or playing Into the Breach for two hours, the pages of RAM with the start menu and quick-settings panel wind up getting compressed. When my laptop is in power-save mode, it only goes up to 2 GHz, the RAM takes twice as long to decompress, and that is what I am waiting on. If you’re on a fast desktop computer, or your laptop is plugged in, you are much less likely to even notice this issue.

So far, my guess seems to be correct, because I haven’t seen this latency since disabling memory compression.

I wasn’t just having trouble with my start menu

Steam has been weird. It is already open. I can see it in my taskbar, so I click on it to bring the window into focus. It appears instantly.

Except it may as well be a screenshot. I can’t click anything, and when it does this, it takes way longer than the start menu to start responding.

Someone smarter than me would have opened Task Manager to see what was happening. It never occurred to me. If this were any of my Linux machines, I would have been watching htop, dstat, and peeking at dmesg hoping for a hint.

This is a micro optimization solving an extremely specific problem!

I hate latency. I am the sort of person that expects to be able to hit a couple of keys and have a terminal window show up with a ready-to-use shell in less than 200 ms. That was what I expected on a laptop from 2009. We should be doing much better in 2022.

I am sure Microsoft has collected mountains of data about this. I imagine that in most cases, especially with a processor that isn’t trying to sip power, that whatever time we lose waiting for the start menu is made up several times over by the next program loading faster. I am guessing that my situation is an edge case for them, and a bit of delay the first time I hit my start button isn’t a big deal to them.

It is a big deal to me.

I can count three hippopotamuses before Windows 11 can open Windows Terminal and show me my Ubuntu 22.04 zsh prompt. I know there’s an extra layer of virtual machine in there, but my laptop in 2011 could open Gnome Terminal and show me a shell prompt in less than 150 ms.

Conclusion

I don’t know if my reasoning is correct, but the results make me believe my thinking is sound. If you have a slow CPU with a fast NVMe, I bet disabling memory compression on Windows 11 will do you some good. My Ryzen 5700U isn’t exactly slow, but it definitely pretends to be slow when in power-save mode, which is what it should be doing!

What do you think? Are you disabling memory compression? Is disabling compression giving you improvements that you can see? Should I be completely disabling SysMain? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Games to Play on Your Windows 11 Tablet


When I ordered my Asus Vivobook 2-in-1, I immediately began to scour my voluminous Steam library looking for touch-friendly games. I was initially disappointed, and in some ways I still am. Many of the games are just simple mobile games on a bigger screen, and some games that do have mobile ports like Prison Architect won’t work without the second mouse button and a keyboard.

I haven’t found a good list of proper touchscreen games for Windows 10, so I figured I should make one. I thought /r/surfacegaming would be promising, but 99% of what they post there is just mouse-and-keyboard games running on a Surface device. There are curated lists of touch-friendly games on Steam, but they’re riddled with games that just don’t quite work.

I have tested every game on this list. Below the list are my mini-reviews of the games. If a game doesn’t have a review, that means I’ve only played enough to make sure the first level works.

These are just my findings so far. I plan to update the list as I find more games, and I will definitely be adding reviews for games as I start to actually play them!

tl;dr Just show me the list!

Into the Breach

Into the Breach is easily my favorite game on this list. I have played 12 hours within the first four days of owning the game.

Into the Breach sits in some sort of Goldilocks zone between simplicity and complexity. I certainly don’t want to play You Must Build a Boat on a 14” screen, but it was fantastic on my phone. I also don’t think I want to play something as hugely complex as Civilization 6 on a large tablet. Into the Breach is just right.

So far, this is the only game I have found that sits well inside this Goldilocks zone. What else should I be playing?!

Kingdom Rush

Kingdom Rush is a port of a mobile tower defense game. I remember trying this out on Android about eight phones ago. I didn’t expect to enjoy an overgrown mobile game, but it is definitely more fun than I expected.

I’ve played for about six hours so far. Into the Breach is consuming all my time for now, but I expect I will get back to Kingdom Rush at some point. It is nice being able to see the entire map on screen, and there’s significantly more strategy involved than I ever would have imagined.

Mini Metro

Mini Metro is very much a mobile game. I first played it on my old 12” Windows tablet with an underpowered Atom processor. That 2-in-1 couldn’t play any real games, but it played Mini Metro just fine, and it was a lot of fun!

Mini Metro is a game that you should probably be playing on a phone. The interface is quite simple, and you certainly don’t need a giant display. Even so, it is fun watching people move around on a subway map that’s as big as a sheet of paper!

Remnants of the Precursors

I haven’t played this one yet, but I think it deserves a couple of paragraphs. Remnants of the Precursors is an open-source game that is a modernization of the MS-DOS game Master of Orion. I used to enjoy playing Master of Orion 2, so when I learned of this game, I installed it immediately.

Remnants of the Precursors is a complicated strategy game. It isn’t going to hold your hand and teach you how to play. I fired it up, started a new game, and then made sure I could click around on things with my finger. It seems to work fine, but I haven’t had an opportunity to sit down and learn a brand-new strategy game yet.

This sits outside of the tablet-friendly Goldilocks zone.

The Banner Saga 1, 2, and quite possibly 3?!

I don’t expect that I will be playing The Banner Saga games, but I can’t help but leave a few words here, because the style makes these games look amazing!

I played enough of The Banner Saga 2 to start a game and make a few moves on the map. They’ve certainly made sure that these games are touchscreen-friendly!

I was excited to see the animation. It reminds me of the Don Bluth animated movies from my childhood like The Secret of NIMH or even his animation from the Dragon’s Lair arcade game. The artists for The Banner Saga games have done a fantastic job!

Holy Potatoes! A Weapon Shop?!

There are four Holy Potatoes! games, and I own two of them. They seem to go on sale for $0.99 every once in a while. That’s fun!

I had to fire up Holy Potatoes! A Weapon Shop?! on my tablet to give it a try. It looks neat, but I haven’t even completed the tutorial. I’ll give it a try one of these days when I’m looking to waste some time.

I’ve managed to craft and sell my first weapon without using the mouse. Pinch-to-zoom doesn’t work, and zooming is definitely a thing in this game. I had no trouble zooming the map with the scroll wheel, though it doesn’t let me zoom out very far at all. I have not played enough to know if this will be problematic later in the game.

It does let me move around my tiny map by dragging my finger.

Games that just won’t work

Some of these are really disappointing. FTL, Prison Architect, and Invisible, Inc. each have iOS ports, and Prison Architect is also available on Android. If these games can work with no keyboard and no right mouse button on mobile, then the developers have already solved the difficult problem. Microsoft should throw a few dollars at these folks to remedy the situation!

Some of them are just dumb. Fae Tactics seemed like it was going to work, except it doesn’t detect a touch as a click. You have to double-tap everything. I noped my way out of there quite fast! This seems to be a common problem.

Games like Cities: Skylines are probably way too complicated and intricate to play with just a touch screen.

Why isn’t my favorite game on the list?!

I am only listing games I have actually tested, and I am doing my best to avoid buying new games. There are already 1,950 games in my Steam library, and I want to get a chance to play more of them. I don’t want to buy more games that I won’t play or won’t have time to play.

Where am I looking for games besides Steam?

I have been wanting to comb through the Itch.io charity bundles that I’ve bought. I know for sure that I bought both the Bundle for Ukraine and the Racial Justice and Equality bundle. There are literally thousands of games in these collections.

Thank goodness you can visit randombundlegame.com to filter the list by bundle, genre, and many other properties. I know for certain that there are awesome games in here. If we’re lucky, a few games from Itch.io are both awesome and tablet-friendly!

Conclusion

I hope you’ve found my collection of touch-friendly tablet PC games helpful. I am certain it isn’t complete, but I am confident that everything that has made it onto the list is properly playable and most likely enjoyable to play on a touch screen!

I will be working to expand the list. I am always playing games, and I even sometimes manage to start playing new games, so I expect the list will naturally expand in the future.

Do you think I’ve left out any awesome games that work well on a touch-screen tablet PC? Is your favorite game already on the list, or did I completely miss it? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Using lvmcache for Gaming and Video Editing - What I Have Learned So Far


I’ve been running and living with my lvmcache experiment on my desktop for more than a month now. Everything is working out about as well as I had hoped, and that means it is doing even better than I truly expected. In fact, things are sometimes performing better than the measurements would suggest that they should. Doesn’t that seem counterintuitive?

Why use lvmcache on your desktop or workstation?

SSDs and NVMe drives can be expensive. Not only that, but they only get so large. NVMe pricing at up to 2 TB is pretty good, then you wind up paying a bit of a premium to get to 4 TB. If you need more storage, it will either cost a lot more, or you need to buy multiple drives. Then you need more M.2 slots.

I decided to put a fast lvmcache in front of a slow hard drive. I bought a 1 TB Samsung 980 NVMe for $100. I set aside 300 GB of that to use as an lvmcache in front of a 4 TB 7200 RPM hard drive I already had on hand.

NOTE: Because so much of my bulky data is stored on the slow disk under the lvmcache, I am only using 142 GB on my root/home volume on the NVMe. Since large files are always going to live in the cache, I don’t expect to use much more space. It would have been safe to set aside 700 GB for caching, and I wish I had!
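For anyone who wants to replicate the layout, the rough shape of the commands looks like this. This is a sketch, not a transcript of what I ran: the device names and volume names are hypothetical, and you should double-check everything against your own hardware before running any of it.

```shell
# Hypothetical devices: /dev/sda is the 4 TB hard drive, and
# /dev/nvme0n1p3 is a spare partition on the NVMe.
pvcreate /dev/sda /dev/nvme0n1p3
vgcreate storage /dev/sda /dev/nvme0n1p3

# The big, slow data volume lives on the hard drive...
lvcreate -n bulk -l 100%PVS storage /dev/sda

# ...and a 300 GB cache volume on the NVMe gets attached in front of it.
lvcreate -n bulkcache -L 300G storage /dev/nvme0n1p3
lvconvert --type cache --cachevol bulkcache storage/bulk
```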

The hope has been that my games would load faster, and my video editing experience in Davinci Resolve would still be smooth even when the files are stored on the slow hard drive. I don’t want to have to move large games to a bigger disk. I don’t want to shuffle around games that I am not playing often any longer. I don’t want to have to move my current Resolve project to the NVMe and move it off when I am finished.

I want lvmcache to handle the shuffling for me.

lvmcache is impossible to benchmark

That is probably an exaggeration, but it is definitely difficult to benchmark lvmcache well. Everything you read or write passes through Linux’s in-memory disk cache, so a naive benchmark winds up measuring RAM instead of lvmcache.

Since lvmcache is a hot-spot cache, it tries to capture data that is accessed frequently, and it tends to skip caching sequential writes. That means you might dump 50 gigabytes of video footage from your camera, and it is possible that not a single byte will wind up in cache. You may not even edit that footage for a few days, so it definitely won’t wind up in the cache right away.

How do you benchmark something like this?

I don’t really want a benchmark. I just want to get a glimpse of what the cache is doing for me, and I want to see if I can measure any improvements or see if there are any delays.

I tweaked the awesome lvmcache-statistics script so that it works more like top. That will let me monitor cache hit and miss rates. I’ve been running that alongside dstat to watch how much data is moving around on each of the physical disks. This has definitely been interesting!
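If you don’t want to pull in a script, the raw hit and miss counters come straight from `dmsetup status`. This little parser is a sketch based on my reading of the kernel’s dm-cache documentation; the sample line below is made up, but the field order (read hits, read misses, write hits, write misses after the cache-usage fields) matches the documented format.

```python
# Pull read/write hit rates out of a `dmsetup status` line for a
# dm-cache device. The sample line is fabricated, but it follows the
# field layout from the kernel's dm-cache documentation:
#   ... cache <meta blk sz> <used>/<total> <cache blk sz> <used>/<total>
#       <read hits> <read misses> <write hits> <write misses> ...

sample = ("vg-bulk: 0 7814037168 cache 8 1284/37888 128 2457600/2457600 "
          "30172 7581 4210 5934 0 143 18 1 writeback 2 "
          "migration_threshold 2048 smq 0 rw -")

def cache_hit_rates(status_line):
    fields = status_line.split()
    i = fields.index("cache")
    read_hits, read_misses = int(fields[i + 5]), int(fields[i + 6])
    write_hits, write_misses = int(fields[i + 7]), int(fields[i + 8])
    return (read_hits / (read_hits + read_misses),
            write_hits / (write_hits + write_misses))

reads, writes = cache_hit_rates(sample)
print(f"read hits: {reads:.1%}, write hits: {writes:.1%}")
```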

Game loading tests

Games are a huge part of why I want to use lvmcache on my desktop. I have so many games that are each between 50 and 150 gigabytes. I don’t hop between games. I am usually playing one single player and one multiplayer game during any given week or month. Most of this can drop out of cache, and it would be fine.

We’ll talk about how this might relate to cache sizing later on.

I already talked about this in the first lvmcache post last month, but I’ll summarize it again here. The first thing I tested was firing up Sniper Ghost Warrior Contracts on my new lvmcache setup. I wasn’t smart enough to use a stopwatch, but the game plays a cut scene while loading, and you can’t skip the cut scene until loading is complete.

The first time I started the game, I got to see way more of the cut scene than I ever did when playing the game on my SATA SSD. I quit the game, dropped my RAM caches, and tried again. This time it loaded faster. I repeated, and it was precisely as fast as loading from my old SSD.

I wasn’t seeing any numbers higher than around 150 megabytes per second on dstat, so I copied the game directly to my NVMe and tried again, and it loaded in precisely the same amount of time. I wanted to make sure lvmcache wasn’t a bottleneck.

The load time was about the same direct from the NVMe.

Before writing this blog today, I figured I should load the same game today. Steam says I haven’t played it in about six weeks. I’ve dumped hundreds of gigabytes of video since then, and I have edited two Create/Invent Podcast interviews and two Butter, What?! Show live streams.

I fired up Sniper Ghost Warrior Contracts and watched the dstat output. I was excited to see that there were rather long stretches of time when only the NVMe was being touched while on the way to the main menu. When I loaded my progress, though, things slowed down.

Yes, data was being pulled from the cache, but not much. It didn’t take all that much longer to load. If I were smarter, I would have used a stopwatch! It was probably an extra 10 or 15 seconds.

Really not that bad sounding, right? And I am confident that it will go faster if I decided to play again tomorrow, because dstat showed plenty of writes to the NVMe during the load, so more data was being cached.

Video editing is both awesome and interesting

We record one live stream and one podcast interview every month. My live streams with Brian Moses generate a large volume of video. We each record ourselves on our Sony cameras, and I record my desktop using OBS Studio. It is easy for us to break 100 GB of footage in 60 to 90 minutes. The Create/Invent Podcast interviews are recorded using Riverside.fm, so the video is more heavily compressed but still of quite good quality.

When I dump video off the cameras, I don’t see much lvmcache action going on. As soon as I add them to a timeline in Davinci Resolve, things start to get interesting. Resolve reads through every video file to generate an audio waveform to display on the timeline. During this part of the process I see reads on the slow disk and a similar amount of writes to the NVMe!

That means it is mostly ignoring the large video files when I dump them to disk. This is either because lvmcache doesn’t think a single write is worth promoting to the cache, or it doesn’t bother caching the writes because the slow disk is plenty fast enough to keep up with my SD card. It is probably some combination of both.

As soon as Davinci Resolve starts scanning those video files, the lvmcache will start getting primed. Isn’t that cool?!

What kind of disk performance do you need to edit video?

The answer to this question will be different for everyone. It will depend on how many videos you have playing simultaneously on your timelines and the bitrate of those videos. The highest bitrate videos that I edit 90% of the time are the 50-megabit files from my Sony ZV-1. Sometimes I edit 100-megabit and 150-megabit files from a Sony a7S3.

You really don’t need much disk performance to play back video files like these. My new 2-in-1 ultrabook laptop can composite three of my Sony ZV-1 files stored on a Samba share over WiFi on the same timeline and play them back without dropping any frames. Things get hairy when you want to scrub through the timeline quickly, but it can handle basic editing tasks this way in a pinch.

The quickest and easiest test I could think of doing was playing back one of my videos faster and faster to watch how much I/O bandwidth would be required. I learned that playing back some footage of myself at 8-times normal speed only eats up 50 or 60 megabytes per second of bandwidth. I could have done math to figure this out, but doing it in real life also showed me that I seem to start dropping frames at 16-times or 32-times playback speed.
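The arithmetic behind that yardstick is simple enough to write down. Nothing here is measured; it is just bitrate math using the numbers from the paragraphs above.

```python
# Disk throughput needed is roughly bitrate (converted from megabits
# to megabytes) times playback speed times the number of streams.

def disk_mb_per_s(bitrate_mbit, speed=1, streams=1):
    return bitrate_mbit / 8 * speed * streams

# One 50-megabit Sony ZV-1 file at 8x playback speed:
print(disk_mb_per_s(50, speed=8))      # 50.0 MB/s, in line with what I saw

# Three 50-megabit files playing on the same timeline:
print(disk_mb_per_s(50, streams=3))    # 18.75 MB/s
```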

NOTE: Knowing this number is handy for me, because it tells me that my spare SATA SSD can outpace my CPU and GPU by nearly an order of magnitude.

My slow hard disk has at least three times more throughput than this. It could almost definitely keep up if there were two of these videos on a timeline, and it might be able to manage with three, but I am not confident of that. Video playback is very sequential disk access, and that is something spinning media is good at, but Resolve is going to be requesting blocks from three different video files. Those seeks will slow things down!

I had a chance to do a more realistic test this week with fresh footage from the Create/Invent Podcast, but disk performance isn’t my problem when editing podcast episodes. I wind up building stacks of timelines on top of timelines when I set up our various split-screen views. When I put those in a multicam bin, the multicam preview eats up CPU and GPU.

The important thing for me is that I had a chance to see first hand that my lvmcache would easily be up for the job.

This really simple playback-speed test has been a handy yardstick for understanding how much disk throughput I need. I know how fast my 5 GHz 802.11ac WiFi is in my office, so it isn’t much of a surprise that my laptop can play back three video files on the same timeline in Resolve. It also isn’t a surprise that my CPU and GPU are the bottleneck when trying to play video at 32-times speed. As I alluded to earlier, throughput isn’t always enough.

If your video files are stored on a slow network share or a 7200-RPM disk, you will certainly notice a slight delay every time you click a new point on your timeline, while on a solid-state drive you will be able to jump around instantly.

Is one cache enough? How big does your cache need to be?

This is another part of the puzzle where I have more questions than answers. Let’s walk through some of what I have been pondering. Maybe my needs will help you work out what is required for your own situation.

I have a 300 GB lvmcache volume on my 1 TB NVMe. Hindsight tells me I should have at least doubled that, and I most definitely should have realized that when I set this up.

I process between 100 and 150 gigabytes of video each month. That means I pull the files off of cameras, spend a week or two editing those files in Davinci Resolve, export 10 or 20 GB for YouTube, then I probably won’t touch those files again.

I have games that range in size from 20 gigabytes all the way up to 140 gigabytes. I really only need to fit two or three of those on the SSD cache at any given time.

I don’t have enough cache space to hold much more than one or two games and a month’s worth of video. Sure, the cache doesn’t have to hold every single piece of a 150 gigabyte game to be useful, but I have already seen that editing two live streams and podcasts in six weeks managed to push most of a 50 gigabyte game out of my cache.
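It is easy to see why the 300 GB cache is tight if you add up the ballpark figures from above. These are the rough sizes from the text, not precise measurements.

```python
# Rough cache budget using the ballpark sizes mentioned above.
cache_gb = 300
month_of_video_gb = 150        # upper end of a month of footage
games_gb = [140, 50]           # one huge game plus one mid-sized game

needed_gb = month_of_video_gb + sum(games_gb)
print(needed_gb, needed_gb - cache_gb)   # 340 GB wanted, 40 GB over budget
```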

Part of the problem here is that I don’t understand how lvmcache makes its decisions, and I don’t have any way to influence them. The way I see it, I have two choices.

I could use a bigger cache. That would give Team Fortress 2 and Sniper Ghost Warrior Contracts more time to show lvmcache that they need to be cached, and lvmcache would have more time to notice that I haven’t touched the video files from March in the last two months.

The other option is to split my data and use two caches. The video files are ephemeral. I am going to work on them for a few weeks and never touch them again. If I do have to revisit those files, they can weasel their way back into the cache. The video storage volume is almost like a scratch drive. I imagine that I only have to make sure the cache on my scratch volume can hold at least one project’s worth of data. Everything else could be devoted to the long-term volume’s cache.

I am leaning towards trying out the second option. I fear that I will outgrow a single large cache and just wind up right back where I am today, and adding a second cache will require a lot less work. I still have my 480 GB Crucial SATA SSD installed. It is more than fast enough to cache the big disk full of video files!

My current lvmcache configuration is still quite good!

Is it really a big deal that video editing will push Borderlands 3 out of my lvmcache? I will just have to sit through a longer loading time once or twice. That seems like a small price to pay to save me from spending time micromanaging which games live on the NVMe and which games live on slow storage.

My setup doesn’t have to be perfect to be fantastic. Neither does yours!

You need to have good backups!

I don’t enjoy writing this section. It is always important to have good backups. When relying on a mechanical drive, it is even more important to have good backups. The statistics from the most recent Backblaze report say that they have an average annual failure rate of 1.22%, while some models of drive fail three or four times more often.

Yes. Solid-state drives can also fail, but they don’t fail as often. Mechanical disks sure seem to be doing much better than they were a decade ago, but they have lots of moving parts.

Seafile will start uploading my huge video files within moments of copying them off my camera. I am pretty safe as long as I don’t immediately wipe the SD card.

Why not store /root and /home on the slow disk?

I would like my computer to still be useful when my slow drive inevitably fails. I wouldn’t trust a fresh spinning drive to last, and I most definitely don’t expect the six-year-old 4 TB drive I am using today to last. I wouldn’t be surprised if it fails tomorrow!

I am only using 130 gigabytes on my NVMe. I don’t expect that to grow significantly.

I wish I had more control over lvmcache

I keep thinking about how I would like to be able to tune or give hints to my lvmcache. It would be nice if there were an equivalent to the handy eatmydata tool, which ignores flush commands.

It’d be neat to be able to run a command like dontcachethis cp /media/sdcard/video*.mp4 /lvmcached/area/over/here to make sure I don’t pollute the cache with an obviously large and useless write. Maybe there could be a similar tool to inform the cache that what I am about to do is of high value and should be given more priority in the cache.

Then I thought about it some more. Using lvmcache isn’t really a solution. It is a Band-Aid. I am using lvmcache because 8, 16, or 32 terabyte NVMe drives are too expensive or just don’t exist yet. In five years this may be completely irrelevant to almost everyone.

I don’t need better tools for managing my lvmcache. I just need my lvmcache to work until I outgrow it!

Conclusion

I am extremely pleased with my lvmcache setup on my desktop. The games I play load quickly. My video editing workflow is smooth. Best of all, I don’t have to shuffle files around between my NAS, my slow local hard drive, and my local NVMe. For the most part, everything just works!

This is one of the final steps on the road to eliminating my NAS. I am just waiting for the old 4 TB hard disk in my desktop or one of the 4 TB hard disks in my server to fail. Eventually each will have a 14 TB drive of its own, and my RAID and my Infiniband network will go away. I could save $100 or more if my aging disks can survive long enough before I am forced into an upgrade!

What do you think? Are you planning to use lvmcache on your workstation? Are you already using lvmcache to solve the same problem? Or are you using lvmcache on your servers? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

This Linux User Tries Windows 11


Hello everyone! My name is Pat, and I have been using Linux since 1995. The last Windows machine I had at home was a Cyrix P200 that dual booted Windows 95 and Slackware Linux. Before that I spent many years running MS-DOS and DESQview 386. My professional life after that definitely included configuring and maintaining Windows servers, but I always did my best to keep that to a minimum, and the last time I even touched a Windows server it was running Windows Server 2003.

Last month I bought an Asus Vivobook Flip 14. It shipped with Windows 10. It immediately offered to upgrade to Windows 11, and I let it go ahead and do its thing—why postpone the inevitable? I’ve been doing my best to make this into a comfortable device for this long-time Linux user. I think I am doing a reasonable job, but that depends on how you measure things.

Why am I running Windows 11 on my new 2-in-1 convertible tablet?

My hope was that I could treat this machine like an appliance. As far as I am concerned, this may as well be an overgrown Android tablet, except it should also be able to run Emacs, Davinci Resolve, and a real web browser. As a bonus, the Ryzen 5700U has a rather capable GPU, so I can even do some light gaming on this thing.

I should be able to use WSL2 and WSLg to haul enough of my usual Linux environment over to be comfortable enough. There’s even the Windows Subsystem for Android available to let me run Android apps and games if need be.

And Windows 11 is supposed to be pretty well optimized for use with a touch screen. That should be a bonus, right?

Native Emacs or WSL2 with WSLg?

The first thing I did was install Seafile so I could sync my Emacs configuration library. I don’t know if C:\Users\patsh was the correct place to sync my .emacs.d, but that’s where I put it. I fired up Ubuntu 22.04 under WSL2, installed Emacs, and symlinked my .emacs.d to the appropriate place.

I heard accessing the host file system from WSL2 was slow, but I didn’t realize it would be this slow. My Emacs daemon on my desktop starts in about a second. This setup on my laptop was taking tens of seconds!

Emacs on Windows 11

I wound up installing the Seafile GUI applet in WSL2 and syncing everything I need to my home directory inside the WSL2 environment. This was a huge improvement, and it is definitely usable.

Since my Seafile libraries are encrypted, I have to run the Seafile GUI. I had to work quite hard to get the Seafile GUI and Emacs to start up when I log in. I don’t know why my initial attempts weren’t working. I wound up adding some random sleep commands to my little shell script that the Windows Task Scheduler was invoking, and it seems to work most of the time. It feels like such a hack, and I have no idea why it works.

If this didn’t work, my next step would have been attempting to install native Emacs on Windows. I tried doing this on my old Windows 10 tablet, but I never really drove that one daily. The Chuwi Hi12’s Atom CPU and lack of RAM made it a pretty awful machine to use for all but the lightest tasks, but I did learn that I didn’t enjoy maintaining the sprinkling of if statements that my Emacs config needed to work on Windows and Linux.

Running Emacs in the WSL2 Ubuntu virtual machine is much easier.

WSLg breaks automatic sleep for me

I don’t know when this started. I assume it was after I figured out how to get my Emacs GUI to launch on login. My laptop stopped going to sleep on its own. I wasn’t sure how to troubleshoot the problem, but Google suggested that I run powercfg /requests, and that showed me that mstsc.exe was keeping my screen awake.

What on Earth was I doing with terminal server? It turns out that’s how WSLg puts X11 windows onto your Windows 11 desktop.
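
The output that gave it away looked along these lines. Treat this as an illustration rather than a verbatim capture; the exact path on your machine will differ:

```
PS C:\> powercfg /requests
DISPLAY:
[PROCESS] \Device\HarddiskVolume3\Windows\System32\mstsc.exe

SYSTEM:
None.
```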

The relevant bug report that I found on Github suggests that the blog word counter script I use with Emacs might be causing the problem. That doesn’t seem to be the case, because even if I close Emacs and Seafile, my laptop no longer goes to sleep on its own.

I could probably fix the problem by never running Emacs again, but that isn’t an option, so I guess I will just have to live with this one.

Firefox was slow and using a ton of battery

I made a small change. I opened about:config and set gfx.webrender.all to true. Scrolling seems smoother, videos might be playing back better, and my runtime on battery is greatly improved. That last part is the most important to me.

I think Edge still wins if you want to maximize your time away from a power outlet, but I won’t be away from power long enough for this to matter. It takes less than 20 minutes to charge the Asus Vivobook Flip 14’s battery from 18% to 50%, so I don’t even need a long pit stop to put quite a few hours of use back into the tank.

If you told me I was going to be stuck without a power outlet for 24 hours, I would likely fire up Edge just so I could watch Netflix for 8 hours instead of 6 hours.

I have auto brightness, auto rotation, and keyboard problems

Have you tried rebooting it?

I gather that Windows 11 tries to do something smart with auto brightness. I see plenty of posts from people with Microsoft Surface tablets with the problem, so I assume this is a Windows 11 issue. Most of the time everything is fine, but I randomly have a problem where Windows adjusts my brightness every time I rotate the tablet.

Sometimes it wants to be really bright. Sometimes it wants to be dim. It doesn’t stop doing goofy things until I reboot.

Sometimes my screen just doesn’t want to auto rotate. This is quite problematic on a tablet! Usually putting the tablet to sleep and waking it back up fixes this issue.

On other occasions, Windows doesn’t want to disable the keyboard and mouse when I flip the screen around into tablet mode. Sometimes putting the tablet to sleep helps. Sometimes I have to reboot.

I have managed to mitigate the rotation and keyboard problems, but I think the solution is stupid. I just make sure I take my time. If I wake up the device, unlock it, switch to tablet mode, and rotate it as quickly as I would like, there’s a good chance something will go wrong. I’ve been going slow for the last few days, and I haven’t managed to trigger a problem since.

I can’t be the only one who thinks this is dumb.

UPDATE: I am not sure when it happened, but it is about three weeks since I wrote this blog, and these three problems seem to have completely gone away.

Remapping capslock to control

I doubt that I did this correctly. I don’t even know where I found the instructions. I pasted some sort of registry hack into Powershell, and now my capslock key is a control key. My control key is still a control key.
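
I can’t reconstruct the exact snippet I pasted, but the standard way to do this is a “Scancode Map” value in the registry, which remaps keys at the keyboard-driver level. A sketch of what that usually looks like (run from an elevated PowerShell, then reboot):

```powershell
# Header (8 zero bytes), entry count (2 = one mapping plus terminator),
# one entry mapping Caps Lock (0x3A) to Left Ctrl (0x1D), then a null entry.
$map = [byte[]](0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
                0x02,0x00,0x00,0x00, 0x1D,0x00,0x3A,0x00,
                0x00,0x00,0x00,0x00)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Keyboard Layout" `
                 -Name "Scancode Map" -Type Binary -Value $map
```

Since only the one scancode is remapped, the original control key keeps being a control key, which matches what I am seeing.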

I worry that if I ever have to play any FPS games that I won’t be able to map both control keys to different things, but I also have no idea what to do with the original control key.

If there’s a correct thing I should be doing here, I would love to hear about it!

Powertoys is bumming me out

FancyZones sounds super handy, and it should have been a quick and easy way for me to at least partially mimic the automatic window sizing and placement I have on my Linux desktop using custom Sawfish scripts. The first problem I wanted to solve was that when Emacs starts up it is a few columns and two rows too small. I figured I could just set up a FancyZone to drop Emacs in!

Except FancyZones doesn’t work with WSLg windows yet.

At least Always On Top will come in handy.

Having Ubuntu 22.04 in WSL2 and on my desktop is handy!

I was able to set Windows Terminal to use Solarized Dark. I was able to install zsh and use Seafile to sync my zsh config on the laptop, and my configuration for things like htop and git came along for the ride.

I was able to just rsync my .rbenv over to the WSL2 environment, sync my blog-related Seafile library to the correct place, and all my Octopress scripting just works.

I also think it is awesome that my laptop and desktop are both 64-bit x86 machines. That means I can copy binaries around and they just work. There are a few binaries of minor importance in my /usr/local/bin that have just continued to work.

A lot of people are excited about how the Apple M1 hardware is so fast and power efficient. I agree that this is exciting, but all my workstations and servers run on x86 hardware. Things are likely to go more smoothly and be more efficient overall if I stick to the same architecture.

I have more feelings about the Asus 2-in-1 than I do about Windows!

I didn’t buy the fastest, lightest, or fanciest 2-in-1, but I think the Asus Vivobook Flip 14 has been a fantastic compromise. It was one of the lowest-priced 2-in-1 ultrabooks, which does come with some trade-offs. Even so, the Ryzen 5700U performs quite well while still giving me 6 to 8 hours of battery life in this 3.2-pound package.

I am absolutely sold on the idea of a convertible laptop. I consume Hacker News, Reddit, and Twitter in tablet mode most mornings. I’ve been able to stand the little Asus up like a book to follow the assembly directions during the build of my LumenPNP pick-and-place machine. I get to fold the tablet back while propped up at a slight angle to watch YouTube while I roast coffee beans in the kitchen.

I expect all my future laptops will be convertible 2-in-1 models, even if I wind up with another jumbo machine like my old 18.4” HP laptop.

I can definitely see the appeal of the removable keyboard on the Surface Pro. It would be nice to leave 6 ounces at my desk when I read Twitter in my recliner, but I didn’t want a 2-in-1 with a kickstand. There are only a limited number of ways to fit a Surface Pro with a kickstand on your lap while typing.

I am typing this from the recliner right now. I am sitting in about the third different pose this session. The right knee is bent with the laptop on that side of my lap, and my left foot is under my right knee. I don’t have a good way to take a picture of this, but I know I wouldn’t be able to balance a Surface Pro in this position.

There is no proper way to lounge in a recliner with a laptop, so I tend to move around a lot. I also don’t tend to sit and type for long periods of time over there.

Gaming is bumming me out, but probably not for the reason you think!

I was curious how the Ryzen 5700u compares to my overclocked Ryzen 1600 desktop with its Nvidia GTX 970. My desktop has the edge in single-core performance, but not by a ton, and even though the 5700u has two fewer cores, it is quite comparable on multi-core performance.

I play a lot of Team Fortress 2. This is an ancient game and not terribly taxing. The Asus of course had no trouble maintaining 60 FPS with the default settings. I hope I am never stuck having to play Team Fortress 2 on a 14” screen, but it is nice to know it is an option.

NOTE: The clip of Team Fortress 2 in this tweet was NOT recorded on my new laptop. I just felt like this section needed some sort of example to spice things up. Maybe I will have something more appropriate to drop in its place soon!

Then I tried Sniper Ghost Warrior Contracts 2. It is one of the most modern games I’ve played this year. With the lowest settings I can muster and using AMD’s FSR to scale up from 1280x720, my desktop usually sits between 80 and 100 FPS. With similar settings, my Ryzen 5700u laptop can manage an unplayable 22 FPS.

Is that disappointing? I don’t think so. My desktop draws something like 300 watts at the power outlet to get 80+ FPS. I am impressed that my tiny, inexpensive laptop has even 20% as much gaming muscle at around 30 watts!

Why am I disappointed? I am bummed out about the almost total lack of touch-friendly games that I have to play on my Windows tablet. I already complained about this in the previous blog about this laptop. Hopefully I find some fun games over the next few months!

Conclusion

I am still very much at the beginning of this experiment. Today is May 23, and my notes say this laptop was delivered on May 4. That means I am only just coming up on three weeks. I am certain there will be more to learn over the coming months, and it is very likely that I have forgotten to mention something important in this blog!

What do you think? Are you a long-time Linux user trying out a Windows machine? Are you as excited about 2-in-1 convertible laptops as I am? Are you using a nicer 2-in-1 than my budget-conscious Asus Vivobook Flip 14, like the new Asus Zenbook S13 Flip OLED that I am already drooling over even though the Ryzen 6800u model isn’t out yet? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Bought an Asus Vivobook Flip 14 Convertible 2-in-1 Tablet

| Comments

I don’t know how exciting this is. The Asus Vivobook Flip 14 is not a bleeding-edge piece of hardware. It is definitely not one of the nicest 2-in-1 ultrabooks, but the price was just too good for me to pass it up.

Costco had the maxed out version of the Vivobook with a Ryzen 7 5700U, 16 GB of RAM, and a 1 TB NVMe for $600. That was $100 less than the Vivobook Flips on Amazon with half the RAM and half the storage, and a good bit less than other brands and models with an 8-core Ryzen chip.

I didn’t expect to buy this laptop

I post good deals every day on the Butter, What?! Discord server, and this looked like a good deal. I’ve been saying that I should keep lugging my giant gaming laptop around until there’s a good deal on a 6000-series Ryzen 2-in-1, but nobody has even released one of those yet. I imagine it will be a long time before I’ll see a good discount on something like that!

NOTE: That’s the Asus Flip 14 sitting in its temporary home to the left of my two monitors.

Since I don’t have a membership, Costco charged me a $30 fee. After the fee, shipping, and taxes, my Vivobook Flip 14 cost me $692.78. I was expecting to use my American Express card to extend the 2-year warranty to 3 years, but Costco doesn’t accept the card. I was, however, surprised to find a card in the box that explained that registering the warranty with Asus would provide me with a year of accidental damage protection. That was a nice bonus!

I’ve been wanting a nice 2-in-1 for years

I’ve had my little 12” Chuwi tablet for six years. It was a really nifty and really inexpensive device. It has the same beautiful 2160x1440 screen as the Microsoft Surface Pro from the same year, and sitting on the couch surfing Reddit on a tablet like that was delightful.

The trouble with the Chuwi Hi12 was its slow Atom Z8350 with barely enough RAM to run a web browser. It was just enough to tease me with how awesome a giant tablet would be, but it was slow enough to be miserable to use for most tasks.

What am I giving up here by saving money?

I skimmed through some reviews, and the worst thing that everyone seemed to agree on was that the Asus Vivobook Flip feels like a cheap, plastic laptop. That is kind of what I expected to hear, and I am OK with this. The Asus isn’t a super thin wedge like the 13” Dell or HP 2-in-1 models, but it seems to be pretty well made for a block of cheap plastic.

I didn’t need reviews to tell me the disappointing things about this laptop. They are all right on the spec sheet.

The screen is only 250 nits. That’s the same brightness as my old Acer gaming laptop, and I know I have to switch Emacs from solarized-dark to solarized-light to be able to use it at a picnic table. I have a lot of thoughts on this, but I think they should wait until after I’ve put some miles on the machine.

I don’t know why, but this Asus convertible laptop doesn’t charge via USB-C. It has a port, but it doesn’t support USB-PD. I assume this means it doesn’t support video output via USB-C. The Vivobook Flip 14 is quite a few years newer than my Acer VX15, so I can’t imagine what their excuse is here.

The weather hasn’t yet permitted me to take the Asus on a field trip to the park, but the screen is plenty bright enough around the house at about 30% brightness.

The Vivobook has an HDMI port, but it is only version 1.4. That means it can support 1080p60 or 4k30. That would be a real bummer if I ever expected to dock this thing.

I am mostly OK with this. It is very likely that I will plug this tablet into a TV to play some FPV simulators like Liftoff and Velocidrone. I may never plug it into a monitor.

None of these limitations make me want to spend hundreds of dollars more to circumvent them. The super premium Lenovo Thinkpad X1 Titanium Yoga has comparable hardware under the hood, but it has better build quality and one of the brightest screens available in a 2-in-1. It costs somewhere around $1,600 to $1,800.

There was a deal on a 13” Ryzen 5700U Lenovo Yoga 6 convertible while my new laptop was in transit. This particular Lenovo has a 20% brighter screen, charges via USB-C, and the video output is the USB-C port, but it also had half as much storage. This may have been the better value, but I’m not going to nitpick.

This convertible isn’t my primary workstation

This will be the device I grab when I ride my electric unicycle to the park. I’ll use it to scroll through Twitter and Reddit on the couch.

It is going to be a handy device, and it is going to make my life easier and more enjoyable, but this most definitely does not have to be the ultimate convertible laptop for me to get a ton of mileage out of it.

I think the Asus Vivobook Flip 14 is going to get the job done just fine.

Will it run Linux?

I’m sure it can boot Linux, but that’s not really what I’ve been wondering. Will Linux have support for the accelerometer? Would something like Ubuntu and Gnome know how to flip the screen to the correct orientation when I rotate the device? Does any of the touch-screen support work well on Linux?!

I have no idea, though Reddit seems to think I might do OK right out of the box with Ubuntu and Gnome!

My plan is to attempt to use Windows. I want to treat the Vivobook like I treat my Android devices. It is just going to be an appliance with a web browser, Emacs, Davinci Resolve, and a stack of games.

We will see how that goes. I don’t have a whole lot to complain about so far.

14” probably isn’t too big for a tablet, but 16:9 is really tall!

I have only been using the Vivobook Flip for two days. The first thing I noticed is how ridiculously tall it is when I prop it up on my lap in portrait orientation, and it hasn’t gotten any less ridiculous!

My Chuwi Hi12 has a 3:2 aspect ratio. That seems more appropriate for a big tablet, but I imagine LCD panels like that are rare. You’ll probably get a better deal when the manufacturer can just pick a common 16:9 panel off the shelf!

How is the battery life?

I don’t really know what counts as good battery life, and I certainly haven’t done any exhaustive testing. I’ve just messed around with the brightness, looked at the estimated battery life meter, and did math. The numbers in the next paragraph are very rough estimates.

With the brightness cranked to the max, I should be able to surf sites like Reddit, Hacker News, or Twitter for nearly 6 hours or watch YouTube for a little more than 3 hours. It looks like I can get an extra hour of YouTube by turning the brightness down to about 30%, which is a comfortable indoor brightness.
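
Back-of-the-envelope, those runtimes imply a pretty modest power draw. I believe this machine ships with a roughly 42 Wh battery, though I haven’t verified that number, so take the math as a sketch:

```shell
# implied average draw: assumed battery capacity divided by observed runtime
awk 'BEGIN {
  wh = 42                                    # assumed pack capacity in Wh
  printf "browsing, 6 hours: ~%.0f W\n", wh / 6
  printf "YouTube, 3 hours:  ~%.0f W\n", wh / 3
}'
```

Single-digit watts while browsing seems plausible for a 15-watt-class Ryzen that is mostly sitting idle.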

I am under the impression that I could increase these numbers quite a lot by using a different browser. It looks like Edge might give me more like 5 to 6 hours of YouTube or Netflix playback at full brightness.

That is a pretty big difference, so it might be worth using Edge to watch YouTube and Netflix when I know that I will have to spend an entire day away from power. Using Firefox is more comfortable for me, because that’s what I am already using everywhere else. All my bookmarks, add-ons, tabs, and history are already in sync!

NOTE: I am going to need to revisit all those Firefox numbers. Setting gfx.webrender.all to true seems to have put it on par with Edge for video playback battery efficiency.

It has been nearly 20 years since I bought a laptop that runs all day on battery. It’ll be nice having one again. It looks like I could eke out more than 9 hours of Emacs in the kitchen at 70% brightness!

How do you carry this thing?

I have a simple AmazonBasics 11.6” shoulder bag. I actually like it a lot! It only cost me $11, and it holds quite a lot of stuff for such a small bag. I know it says 11.6” on the label, but I’m pretty sure it fits most 13” ultrabooks just fine, and it very nearly fits my 14” Vivobook.

I wound up ordering the 14” version of the same bag. It is only about an inch wider and taller than my old bag, so it really shouldn’t seem much bigger, but it feels so much bigger! The 11.6” bag seems like a purse. The 14” bag looks and feels like a comically oversized version of a laptop bag I would have carried 20 years ago.

I do wish my new laptop fit in the smaller bag. Both the 11.6” and 14” bags can easily hold a charger, an assortment of tools, connectors, and cables, and I can even squeeze the Nintendo Switch in there. It is good that I can actually close the zipper on the 14” bag, and I can also squeeze more gear in if need be.

If I really need to take a ton of gear with me, I have larger backpacks. If I am traveling, I can even squeeze the entire AmazonBasics 14” shoulder bag into my old Targus laptop backpack.

I can of course walk out of the house with just the bare laptop!

Conclusion (for now!)

I am quite pleased with my purchase of the Asus Vivobook Flip 14. I am reading Hacker News while sitting in a comfy chair. I can carry my laptop to the park on my electric unicycle without having to ride with a 12-pound bookbag on my back. I have a mobile OBS recording studio, and I can even finish this blog post while roasting coffee in the kitchen.

Sure, I could manage many of these things with my heavy old laptop, but almost everything is an improvement with the new hardware!

What do you think? Are you using an Asus 2-in-1 convertible laptop? Are you using a different 2-in-1? Do you think I should have splurged on a higher-end laptop? Do you agree with me that every laptop should have a 360-degree flip-around screen in 2022? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Is lvmcache Effective on a Desktop or Workstation?

| Comments

The answer you should take from me is that I don’t really know yet. At least, I don’t know just how well it is working because lvmcache is quite difficult to benchmark. The truth is that I’m not all that interested in benchmarks. I want to know how my experience of using my computer feels with a big, slow disk behind a fast NVMe cache.

What problem am I trying to solve?

Games load faster from an SSD. Scrubbing around in Davinci Resolve when there are many 4K videos on the timeline is smoother when reading from an SSD. I wish everything could fit on an SSD.

One of the two SSDs in my desktop started failing last month. I only had 1 TB of solid-state storage space, and it was getting really tight. I store almost all my video on my NAS, and my NAS has a 100-GB lvmcache. It works great, and editing video over Infiniband with an lvmcache is very much like editing video on a local SSD.

Steam games are getting huge, and I am working toward eliminating the RAID in my NAS. I have a 14 TB hard drive off-site with a copy of all my data. The plan is to eventually stick a 14 TB drive in both my NAS and my desktop PC. I don’t feel the need to do this before the aging drives in my NAS fail, but the idea is on my mind.

How can I store several terabytes of data on my computer without buying a ridiculously expensive NVMe drive while still making sure everything feels like it is stored on an NVMe? I don’t want to feel that 200 IOPS of cheap spinning metal. I want to feel the 100,000 IOPS of a nice NVMe!

This experiment is my life now!

I bought a 1 TB Samsung 980 NVMe. I installed Ubuntu 22.04, left about 300 GB available for lvmcache, then stuck an old 4 TB 7200 RPM hard drive in my machine. This seemed like a good test to verify that lvmcache would be viable on my desktop.
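
I won’t reproduce my exact commands, but the general shape of a setup like this is below. The partition names are stand-ins for whatever your disks are called, the volume group matches the zaphodvg name in the cache report later in this post, and you’d adjust sizes to taste:

```shell
# the slow 4 TB disk and a 300 GB slice of the NVMe share one volume group
pvcreate /dev/sda1 /dev/nvme0n1p4
vgcreate zaphodvg /dev/sda1 /dev/nvme0n1p4

# the big, slow logical volume lives entirely on the hard drive
lvcreate -n slow -l 100%PVS zaphodvg /dev/sda1

# carve the cache volume out of the NVMe and attach it in front
lvcreate -n fastcache -L 300G zaphodvg /dev/nvme0n1p4
lvconvert --type cache --cachevol fastcache --cachemode writeback zaphodvg/slow
```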

Here’s where I’m at. My Ubuntu install and the data in my home directory are eating up only about 113 GB on the 1 TB Samsung NVMe. So far, I have installed 700 GB of Steam games on the 4 TB drive. I have also copied this year’s video files from my NAS to the 4 TB drive. That’s around 800 GB of video files.

Everything is encrypted, which does limit my throughput on the NVMe quite a bit, but I am still seeing 1.2 to 1.5 gigabytes per second. This is a topic for its own post.

What am I hoping to see?

lvmcache is a hotspot cache. It doesn’t just blindly cache every single read or write operation that happens on the slow disk. Things should only stick in cache if they are used regularly. The trouble is that I have absolutely no idea how lvmcache makes these decisions.

I could micromanage things. I could install one or two 100 GB Steam games on my NVMe, and when I am no longer playing those games, I could move them to the slow storage. I could copy the video files that I’m currently working on to the NVMe, then push them to slow storage when I am done.

I really don’t want to have to spend this much time managing where files live.

My hope is that if I play a game regularly, it will be promoted to the NVMe cache. My other hope is that as I am scrubbing around in the files for this month’s episodes of The Butter, What?! Show, they will wind up being cached.

The best part is that I don’t need entire games or all of my video files to be cached. There’s a good chance that the early parts of a 150 GB game will start exiting the cache while the later parts of the game enter the cache as I slowly make progress. My video files from January will fade from cache, while this week’s recordings will be accessed quite often.

The cache sure seems to be doing its job!

I noticed the cache doing its job while I was testing to make sure my Nvidia drivers and fsync-enabled kernel were working well together. I kept loading “Sniper: Ghost Warrior Contracts.” The game plays a cut scene every time a level loads, and it lets you skip the rest of the scene when it is finished.

Each time I loaded the game, the message letting me know I could skip the cut scene was happening sooner. I fired up dstat, and I could see the game regularly reading at 70 megabytes per second from the slow drive and 150 megabytes per second from the cache drive!

As far as I was aware, it was letting me skip the cut scene for this level at precisely the same point as it would let me skip on my old Crucial M500 SSD. Could we go any faster?

I copied the game directly to my NVMe and fired it up again. The results were the same.

When a game loads, it isn’t just reading data from disk and plopping it into memory. It is doing work with that data. For this particular game, my CPU and GPU are definitely the bottleneck on load times.

This is definitely a good start.

Watching Davinci Resolve was fascinating!

I still have no idea how lvmcache decides what to keep and what to evict from cache. I do know that it tries to bypass the cache on long sequential writes, and that seemed to be confirmed when I plopped 120 GB of video footage from our last Butter, What?! Show live stream into place. If I were smarter, I would have watched dstat during the copy so I could better confirm this.

I was running dstat the next day when I started editing the footage in Davinci Resolve, and the results were really cool!

As soon as you drop a video on the timeline, Resolve reads through the entire thing to generate an audio waveform. You can see in the screenshot above that it was reading along at around 30 megabytes per second from the slow drive, and at the same time the NVMe cache was being written to. I most definitely didn’t expect this, and it is quite awesome!

Some of my multicam timelines get quite layered and convoluted. This might very well be because I have no idea what I am doing. The important thing is that there aren’t any hiccups or glitches. I don’t seem to ever be waiting for the disk to catch up to me.

I might split this into two caches

This is working well enough so far, but I have some concerns.

My cache is only 300 GB, and it is now normal for me to work with 120 GB to 180 GB of video at a time. It is currently very likely that I will be accidentally pushing games out of my cache once a month.

The lazy answer may just be to use a much bigger cache, and the 400 GB of unused space on my root filesystem suggests that might not be a bad idea. I am not confident that this will accomplish what I am hoping. Video files from two months ago have less value to me than a game I haven’t played in six months.

I’m not going to adjust anything while I’m still using this old 4 TB drive, but when I install a 14 TB drive, I am planning to split the drive and cache into two volumes.

I will have one volume for things that have long-term value to me. That would be the Steam games. The other volume would be for the data that I want cycling itself out of the cache.

I haven’t found another home for my 480 GB Crucial M4 SSD. Maybe I will use that for the video cache. An a7S3’s fanciest video modes only capture video at 150 megabits per second. My old SSD shouldn’t have much trouble streaming 10 of those files at a time, so it ought to be overkill for video editing.
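
The napkin math backs that up. Even ten simultaneous streams at the a7S3’s fanciest bitrate add up to a small fraction of a SATA SSD’s sequential throughput:

```shell
# ten 150-megabit streams, converted from megabits to megabytes per second
awk 'BEGIN {
  mbit = 150; streams = 10
  printf "%.0f MB/s total\n", mbit * streams / 8
}'
```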

It’d probably be better to steal 300 GB from my root partition, but using the SSD would be simpler.

As long as my NAS hard drives continue to not fail, I will have plenty of time to contemplate this.

How much room will I need for my root and home partition?

I am mostly guessing at everything. I figured 300 GB was a good size for a cache, and I set some aside for a big, useless swap partition. Whatever was left went to Ubuntu.

I am realizing now that big files are always going to wind up eventually being diverted to the slow disk. I most definitely could have set aside more space for cache. I probably should have.

I don’t think it would be a problem for me to use 80% of my 1 TB NVMe as cache, but it is nice to have some wiggle room if something winds up being slower than expected on the big drive.

NVMe vs. SSD

My computer doesn’t feel any different with the NVMe. It does boot significantly faster, but my old Ubuntu install that I’ve been upgrading since 2012 had a weird issue where it was waiting for something for 30 seconds while setting up my Infiniband link. Rebooting only happens every few weeks, so I never got around to troubleshooting that problem. Let’s just say that anything would feel like it boots faster when you eliminate that huge delay!

I’m sure that if you handed me a stopwatch, I could measure that some programs open faster. I know that copying files around will be faster, but it wasn’t exactly slow before.

Upgrading from a 7200 RPM hard disk that averages 150 IOPS to my first SSD that pulled 1,500 IOPS made my computer feel like a totally different machine. Upgrading from my Crucial M4 that could do 5,000 IOPS to the Samsung 980 that can do 100,000 IOPS was nice, but not terribly exciting.

That said, don’t buy a SATA SSD today. If you have an available NVMe slot, fill it with a decent NVMe. NVMe drives that can manage 3 gigabytes per second don’t cost more than SATA SSDs.

If I am only pulling 150 MB/s from the cache, do I need an NVMe?

No. For most use cases on your workstation, you probably don’t. Like I said, though, don’t buy a SATA SSD unless you don’t have a slot available for an NVMe.

Even if you set up identical tests, fire up dstat, and see that your SATA SSD and NVMe caches read for similar amounts of time and both top out at around the same throughput, you still might speed up your workload with the NVMe.

Imagine that your task quickly reads 150 megabytes, does some work for a few seconds, reads another 150 megabytes, does some more work for several seconds, and this repeats over and over again. It’ll take the SATA SSD about 500 ms to pull that data, while your stupidly fast 7-gigabyte-per-second NVMe might be able to do it in 40 ms.
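
Here is the raw transfer-time arithmetic behind that example. It only counts time spent moving bytes; real reads add seek and queueing latency on top, which is why I’d expect the NVMe to land closer to 40 ms in practice:

```shell
# time to move 150 MB at typical SATA vs. high-end NVMe throughput
awk 'BEGIN {
  mb = 150
  printf "SATA @ ~300 MB/s:  %.0f ms\n", mb / 300 * 1000
  printf "NVMe @ ~7000 MB/s: %.0f ms\n", mb / 7000 * 1000
}'
```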

What’s next?

It is a bummer that lvmcache is difficult to benchmark. It is meant to fill up and settle in over longer periods of time. That’s OK by me because I want a cache that will learn what I really need cached over time, and I hope it works out well.

The plan is to sit tight for now. I am just going to do the things I usually do while the cache settles in. Once that happens, I plan to make a small tweak to the fantastic lvmcache-statistics script. It shows cache hit and miss rates since boot.

-------------------------------------------------------------------------
LVM [2.03.11(2)] cache report of given device /dev/mapper/zaphodvg-slow

Cache hit rate since:  Fri Apr 22 08:12:26 AM CDT 2022
Current time:          Fri Apr 22 08:13:16 AM CDT 2022
-------------------------------------------------------------------------
- Cache Usage: 99.9% - Metadata Usage: 6.6%
- Read Hit Rate: 99.9% - Write Hit Rate: 99.9%
- Demotions/Promotions/Dirty: 16926/18475/0
- Feature arguments in use: metadata2 writeback no_discard_passdown 
- Core arguments in use : migration_threshold 8192 smq 0 
  - Cache Policy: stochastic multiqueue (smq)
- Cache Metadata Mode: rw
- MetaData Operation Health: ok

I am going to tweak it to show hits and misses since starting the script. That will let me drop RAM caches and fire up a game or edit a video and see what the cache is up to since starting the new task.
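
A rough sketch of the idea, assuming the dm-cache status layout where read hits and read misses show up as adjacent fields in the dmsetup status line. The field positions and the device name are from my system and may differ on yours:

```shell
#!/bin/sh
# report cache activity accumulated since the script started, not since boot
DEV=zaphodvg-slow
stats() { sudo dmsetup status "$DEV" | awk '{print $8, $9}'; }  # read hits, misses
start=$(stats)
while sleep 10; do
  echo "$start $(stats)" |
    awk '{printf "since start: %d read hits, %d read misses\n", $3 - $1, $4 - $2}'
done
```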

I will also be able to run a big game, make a note of the lvmcache-statistics output, then do my very best to ruin the cache with hundreds of gigabytes of fresh video files. Then we can see how much of the game managed to stay in the NVMe cache.

Conclusion

I am excited. I am definitely not noticing just how slow that 4 TB drive actually is, which is a huge win as far as I am concerned. This means I will be able to drop a slow 14 TB drive in here, and I won’t have to worry about managing which files live on the fast or slow storage. That is awesome!

What do you think? Are you using lvmcache on your desktop? Are you using another cache like bcache? How is it working out for you? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

I Almost Switched Back To Debian From Ubuntu!

| Comments

I’m not even sure where to begin. The problem I encountered this week has been brewing for years. My desktop is running Ubuntu 18.04. That’s a pretty old release now, and it has finally started getting more difficult to install modern software. I became particularly aware of this when I wanted to try using Gamescope to force a native game to use AMD’s FSR, but there’s no reasonable way to build Gamescope on Ubuntu 18.04.

NOTE: Gamescope doesn’t work with the current NVidia driver, but it sounds like that will be changing in the near future.

How did I get here?!

I usually keep my machines updated to the new Ubuntu release every six months. Every now and then I fall a few months behind, and it is annoying to wind up upgrading twice in three months, but it usually isn’t a big deal.

Then I got busy and missed one entire upgrade. At that point, it just felt like it would make sense to wait for the next LTS at 20.04. When the time came around for that upgrade, things got busy. We had medical emergencies. There was a pandemic. I seemed to constantly have podcasts I needed to work on, and I didn’t want an update to goof me up in the middle of an edit.

Now here we are, nearly four years after missing the 18.10 upgrade and two years after missing the 20.04 upgrade. All my podcast interviews and Butter, What?! Show episodes are edited and uploaded, and my second SSD is failing.

Seems like a great time to move ahead!

Why am I installing from scratch?

I have been upgrading the installation on this machine since 2012. There’s a lot of cruft on here. I’ve gone through the switch from Upstart to systemd. That seems to have left me in a weird spot where there’s a long wait during boot while bringing up my 40-gigabit Infiniband network.

pat@zaphod:/var/log/installer$ ls -l
total 2324
drwxr-xr-x 2 root root    4096 Aug  8  2012 cdebconf
-rw-r--r-- 1 root root   36622 Aug  8  2012 hardware-summary
-rw-r--r-- 1 root root  382433 Aug  8  2012 initial-status.gz
-rw-r--r-- 1 root root     104 Aug  8  2012 lsb-release
-rw-r--r-- 1 root root      62 Aug  8  2012 media-info
-rw------- 1 root root 1223132 Aug  8  2012 partman
-rw-r--r-- 1 root root   61388 Aug  8  2012 status
-rw------- 1 root root  657282 Aug  8  2012 syslog
pat@zaphod:/var/log/installer$

Those are good enough reasons to start from scratch. I’m also on a quest to eliminate my reliance on my NAS. My plan is to stick a big, honking, slow hard drive behind a large lvmcache in my desktop. I am leaving plenty of room on my new 1 TB Samsung 980 NVMe for that, but I’m not in a rush to get started on that project. I do have a spare 4 TB hard drive here, so I might experiment with this soon.

I know 4 TB isn’t exactly big or honking, but it’ll give me an idea of what I can expect, and it is plenty big enough to store a year’s worth of video footage.

Why the Samsung 980 and not the Samsung 980 Pro or Samsung 970 EVO?

This could be a blog post of its own! My inexpensive secondary SSD has been throwing errors and disappearing from the SATA bus. It happened once a few months ago, and it seemed to be a fluke. When it happened twice in one night, I went shopping.

My desktop only has a PCIe 3.0 M.2 slot, and my Ryzen 1600 can’t run LUKS encryption fast enough to keep up with the pricier PCIe 4.0 NVMe drives that can push more than 7 gigabytes per second. Drives in my speed class can be had for as little as $80 per terabyte.
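If you are wondering whether your own CPU would bottleneck a faster drive, `cryptsetup benchmark` measures encryption throughput in memory without touching any disks. The aes-xts line is the one that matters for a typical LUKS2 volume:

```shell
# Benchmark the default LUKS2 cipher (aes-xts, 512-bit key) in RAM.
# If the reported throughput is below your drive's sequential speed,
# the CPU, not the NVMe, is your bottleneck.
cryptsetup benchmark -c aes-xts -s 512
```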

The 1 TB NVMe drives on sale that night were a Western Digital SN750, a Samsung 970 EVO, and my Samsung 980. The EVO was priced a bit higher, while the other two were nearly the same price.

I’ve had good luck with all my Samsung SSDs. The 980 is rated for 600 TB of writes over its lifetime, and it has a 5-year warranty. These are all important, because this will be an lvmcache and the boot drive of my primary workstation. I’d prefer that it not be likely to fail!

What really piqued my interest was that instead of having a DRAM cache, the Samsung 980 has a 48 GB cache of fast flash sitting in front of the really slow QLC flash.

48 GB worth of writes is A LOT. I expect that dropping down to the slow QLC write speeds will rarely happen to me.

Ubuntu 22.04 let me down twice

The timing of my SSD failure isn’t great. We are less than two weeks away from 22.04 entering beta, and not many more weeks from the release. I had to decide between installing 21.10 and upgrading in a few weeks, or just limping along on a potentially flaky pre-release for a bit. I chose the latter. This hasn’t been a problem.

I am fully aware that I am asking for some weird disk configuration. I would normally just set up a small /boot partition and a single large LUKS-encrypted partition for LVM to live on top of, but I don’t want my lvmcache to be encrypted. The slow hard drive is going to be encrypted, so encrypting the cache would waste CPU cycles, and my CPU can’t encrypt anywhere near as fast as the NVMe can move data anyway.

That means what I really want is an unencrypted LVM Physical Volume (PV) with one or more encrypted Logical Volumes (LV).
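That layout can be set up by hand from a live environment even when an installer won't cooperate. A rough sketch, with illustrative device names and sizes; these commands destroy data, so treat this as an outline rather than a recipe:

```shell
# Plain LVM on the NVMe, with LUKS inside individual logical volumes.
pvcreate /dev/nvme0n1p2               # unencrypted physical volume
vgcreate vg0 /dev/nvme0n1p2
lvcreate -L 600G -n root vg0          # LV that will hold the encrypted root
lvcreate -L 300G -n cache vg0         # LV left unencrypted for lvmcache
cryptsetup luksFormat /dev/vg0/root   # encrypt only the LVs that need it
cryptsetup open /dev/vg0/root root-crypt
mkfs.ext4 /dev/mapper/root-crypt
```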

Ubuntu’s manual partition tool in the installer doesn’t seem to let you configure LVM at all. I scratched my head a bit, then decided to just install Ubuntu 22.04 to its own 200 GB partition.

When I got booted up, I tried to open Firefox and I got an input/output error. Weird. apt said Firefox was installed, but it was telling me that it was a Snap. Even though apt thought the Snap was installed, it really wasn’t. This was easy to fix, but I have to admit that I was a bit freaked out about one of my core pieces of software being a Snap.

Let’s try Debian!

When I started my Linux journey 25 or more years ago, I was using Slackware. I remember it literally taking all night to compile a kernel on my 386-40 with 4 MB of RAM, though that dropped to less than an hour when I upgraded to 8 MB! Then I used SuSE for a year or so before switching to Debian.

When Ubuntu came out, it was awesome. Sure, it is great that Ubuntu made things easier for the average user, but to me Ubuntu was Debian with a 6-month release cycle. At the time this was huge, because it felt like we were stuck with Debian Potato for 3 years.

Ubuntu has been drifting farther and farther away from Debian over the years. Some of this is good, some of it is bad, and some of it just doesn’t matter. I’m not excited about software moving out of apt and into Snaps. I completely understand why I would want to sandbox my web browser, but Snaps have been goofy for me in the past. They update on their own, and you have very little control over that.

Every few years, I think about switching back to Debian. Seeing Firefox in a Snap made me think it was really time.

I am having a hard time remembering exactly how the Debian installer was going to allow me to layer LUKS encryption and LVM, and I don’t have any evidence of the situation left to check on! I only remember that it wasn’t perfect, but I was doing better than the Ubuntu installer was allowing.

Memories of Debian Potato came flooding back to me!

It was easy to install the proprietary NVidia driver. Then I looked at copying over my Firefox profile. Debian Bullseye was a major version behind what I was running, so it wouldn’t import my Firefox profile. The common recommendation here seemed to be that I should try out the Flatpak Firefox. That worked, but I had to massage things a bit.

Then I noticed that obs-studio in apt was a major release behind. Flatpak seemed to be the right answer, but it took a bit of research to learn that you also have to install the NVidia OpenGL Flatpak that matches your driver. This seemed to work, except NVENC video encoding doesn’t work.
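For the record, the Flatpak dance looked roughly like this. The runtime version string has to match the host driver exactly, and the one below is just a placeholder for whatever your machine is actually running:

```shell
# OBS itself, plus the NVidia GL runtime matching the host driver.
# The version suffix is hypothetical -- list what is actually available
# with: flatpak remote-ls flathub | grep -i nvidia
flatpak install flathub com.obsproject.Studio
flatpak install flathub org.freedesktop.Platform.GL.nvidia-510-47-03
```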

Oh boy! That led to a whole array of building blocks that are outdated on Debian Bullseye. Getting obs-studio with NVENC working on Debian sure looked like it would be a Herculean effort.

There were a handful of other minor problems, and I am pretty sure I would have exacerbated them all had I tried to build myself everything needed to get NVENC working in obs-studio.

Back to Ubuntu 22.04

I am writing this on Ubuntu 22.04. I’m not quite settled in. I don’t have everything moved over. I may still reboot my old Ubuntu install on my old SSD to play Velocidrone tonight. I am probably 75% of the way ready to go.

The latest Emacs packages are happy with my convoluted config with my handful of outdated and pinned packages. The Firefox Snap seems fine, and it happily imported my profile. Pidgin and my purple-hangouts plugin seem happy. obs-studio just works. Davinci Resolve Studio 17.4 is working beautifully so far.

I had to install an old libopenssl package from a previous Ubuntu release to get my ancient rbenv working for Octopress. This is almost definitely the wrong fix, but it let me create a template for this post, and I’m assuming it will let me publish this to the Internet.

My NVME isn’t divided up how I really want. I left a 300 GB partition free to use as lvmcache, and Ubuntu is installed on a 600 GB LUKS partition with no LVM. This bums me out for a few reasons, but I am up and running, and it will do the job.

Why not something like Arch?

I am embarrassed to say that I thought about downloading Arch.

I dislike rolling releases. Sure, I caused myself some real headaches by locking myself down to Ubuntu 18.04 for so long, but I also saved myself a lot of frustration.

I don’t want to deal with a weird update on a random day keeping me from editing an interview. I want to be able to plan for some potential downtime for things like this. I’m in the middle of a few weeks of this sort of downtime right now!

Conclusion

I miss Debian, but running Debian on a workstation in 2022 sure looks like it would be challenging. Flatpaks and Snaps seem like a great solution, but when they need to interact with low-level things like NVENC, they just fall short.

If you can make something work on one distro, you can probably make it work on any distro, but I need the foundation of my machine to be solid. I can’t go ripping out so much stuff just to get NVENC to work, just like I wouldn’t want to rip out so much stuff on Ubuntu 18.04 just to get Gamescope going.

I will definitely be remembering this experience the next time I think about jumping ship back to Debian like it is still 2002!

I Think I Am Going To Buy an Open-Source LumenPNP Pick and Place Machine from Opulo.io

| Comments

I don’t even know where to begin this blog post. Do I tell you what a pick and place machine is? Do I give you the backstory about how our OoberLights project got to where it is today, and why buying an open-source LumenPNP might be a really good fit for us? Do I tell you that I have no idea what I’m doing?!

We’re going to talk about the OoberLights because producing these boards is what’s driving my decision. Other than that, I don’t know where I am going here. I’m writing this to help me make a decision. I hope that this train my thoughts are currently riding will be helpful for you as well!

NOTE: I just realized that I’ve never written anything about the adorable and hopefully reasonably priced OoberLights Micro boards! I need to correct that soon!

Let’s condense the OoberLights history down to a few paragraphs

It was a cold evening in Scranton, PA sometime around 2006. No. We don’t really have to go back that far, and I have no idea how accurate that year is.

A long time ago, I saw an LED imitation of a Dekatron on hackaday.com. I thought it would be neat to replace the num lock, caps lock, and scroll lock lights on my IBM Model M keyboard with tiny Dekatrons. I figured I could use them as CPU meters, disk usage meters, and I could spin LEDs around to indicate network speed.

Some number of years ago, I told a friend of mine about this. He said, “Why don’t we do it with Neopixels? Why not do concentric rings? Why not make them bigger?!” We ended up with a monstrosity of a board with 90 LEDs and an ESP8266. It was sized to fit in a 5.25” drive bay in your little home server.

Right as we ordered prototypes, the pandemic and its associated supply chain issues hit. The prices of our components went up, and some of our components were almost impossible to buy in any reasonable quantities. So we put the project on hold.

We eventually scaled back to something closer to my original vision. We dropped the outermost concentric ring, skipped the idea of having any brains, and squeezed the LEDs as tightly together as we could. We now have a simple board with 21 LEDs that has about the same footprint as a Cherry MX keycap.

Money is hard

We ordered a batch of 20 prototype OoberLights Micro boards from PCBWay. The boards were panelized, populated, and soldered for us. We ordered when we did because PCBWay was running a deal. If I remember correctly, we wound up paying the same per-board PCBA rate that you would normally pay on an order of 100 boards, which is a much better deal than the normal rate for 20.

We paid $24 for the unpopulated PCBs and $98 for the LEDs and assembly work. The total for the order after shipping was $129. That’s $6.45 per OoberLights Micro.

Talking about money is hard!

I feel a bit uncomfortable talking about this. We have a product here that we want to sell to you. I’m about to tell you that they may cost us $2.30 per unit. This is going to make you wonder why I set the price in the store to $15. I don’t know if it will be set to $15.00, but whatever the number is, you’ll want to know why I’m putting $12.00 of yours into my pocket when I hand you a $3 doodad!

Not only that, but what if I told you today that I expect the OoberLights Micro boards to sell for $10? What happens next month when I learn that I can’t make that happen and I raise the price to $15? How are you going to feel about that?!

Some of the numbers are extremely concrete, like what we actually paid for the prototypes. Other numbers are more nebulous, like how much we think PCBWay will charge us if we order 1,000 units in a single batch. I am quite comfortable talking about the former, but the latter make me a little nervous!

Back to how difficult money is!

Our ingenious PCB designer has spent time punching numbers into PCBWay. If I remember correctly, he says we can get the cost of each fully assembled unit down to $2.36 if we order at least 1,000 units in one go. That’s about $2,500.

We would still need to order one more batch of prototypes. At full price, I expect that would be 20 prototypes for a little over $200.

That’s pretty reasonable. We’d need some sort of packaging. We need to buy shipping materials. Even so, I would think we could list them in the store for $12 or so. That’s not bad!

This is risky. What if our container falls off the boat on the way over from China? What if $2,500 worth of boards gets confiscated at a border crossing? What if nobody buys them, and I find a box with 973 OoberLights Micro boards in my closet in 2035?

If you’re placing a $2,500 order every month, and one goes wrong, that isn’t going to be the end of the world. If our first big batch goes wrong, the whole project is probably in big trouble!

How much will it cost with a LumenPNP?!

The printed circuit boards are cheap. They were only a little over a dollar each when we ordered 20. They’ll get even cheaper as quantity goes up, but for the sake of this post, I will just assume that they’re going to cost us a buck.

What if I buy reels of WS2812 LEDs? What if I run my own pick and place machine? How much will each OoberLights Micro board cost?

Here’s the lazy and naive answer. It is $2.18 per board. That doesn’t include the cost of the LumenPNP. That doesn’t include labor. That doesn’t include the time and energy it will take me to assemble and learn how to use the LumenPNP. That’s just the cost of components, and I’m just taking a guess at how much the boards will cost from PCBWay.

I don’t have a value to assign to the labor

I have no real idea of how much time I will have to spend in front of the pick and place machine. The first few times running the machine will be rough. Then it will get easier. Then it will eventually become an easy, normal process.

No matter how efficient I get, even if you value my time at minimum wage, I don’t believe it will be possible to match PCBWay’s price per board of $2.36.

And that first batch of boards that we make in house is going to be expensive! We will have to amortize the cost of the $1,145 LumenPNP kit over quite a few batches before it pays for itself.
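Using the numbers in this post, the break-even math on parts savings alone is not encouraging:

```shell
# Boards needed before the $1,145 kit pays for itself on the spread
# between PCBWay's $2.36 price and my $2.18 parts-only estimate.
# Labor is ignored, which is exactly why this number is optimistic.
awk 'BEGIN {
    kit = 1145; pcbway = 2.36; inhouse = 2.18
    printf "boards to break even: %d\n", int(kit / (pcbway - inhouse))
}'
```

That is more than six thousand boards, which is why the flexibility and risk arguments below matter more to me than the per-board cost.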

The LumenPNP would help us minimize our risk

This is another thing that is difficult to assign any sort of dollar figure to. What are the odds of receiving 1,000 dead OoberLights Micro boards in our first batch? Are they going to fall off the boat? What are the chances that we just can’t sell 1,000 boards?!

Here’s what I do know.

We could actually make a single assembled prototype board for about $3.00 with the LumenPNP! How awesome is that?!

We don’t have to buy 1,000 fully assembled boards to get down to a good price point. We could instead make dozens of OoberLights Micros at a time.

The LumenPNP seems to be a lot less risk. If this all winds up being a horrible idea, I would much rather have a LumenPNP and $200 of unused Neopixels on my hands than 1,000 OoberLights boards that nobody wants. I could always sell the LumenPNP and the unused Neopixels to recoup some of our money!

The LumenPNP will make us more flexible

A small batch of unpopulated PCBs from PCBWay takes a week or two to arrive. A small batch of fully assembled OoberLights Micro boards took about six weeks to arrive.

What if we decide to add or remove a ring of LEDs to make a bigger or smaller OoberLights board? If we are assembling the boards in house, we can have the prototypes ready in two weeks, and if they work well, we can just start populating PCBs and have them in our store the same day.

If we don’t have a LumenPNP, we’d have to wait at least 6 weeks just for the prototypes. Then when we see that they work, we’d have to spend another $2,500 to buy the first 1,000 boards. Then we’d have to hope we can actually sell them!

I have been starting a lot of sentences with the word “then.”

Things get even better once we have three different size OoberLights boards. As stock runs low on each part, we can just produce the ones we need. We won’t have to order 1,000 at a time. We can produce just a few dozen at a time.

The LumenPNP comes with its own risks

What if I can’t manage to assemble the kit? What if I am unable to calibrate the thing and get it picking and placing? What if I can’t figure out how to get some sort of reflow oven going?!

The LumenPNP is an overgrown 3D printer. I’ve assembled enough 3D printers that I am confident this won’t be too much of a challenge.

A pick and place has a lot in common with a 3D printer or CNC router. I will be genuinely surprised if I can’t puzzle this thing out, and if I can’t, I’m sure I can find some help!

What about more expensive industrial pick and place machines?

When I mentioned that Stephen says the LumenPNP can place 500 components per hour, a friend of mine pointed me at the Neoden 3V Advanced saying that it was ten times as fast for only eight times the cost!

The Neoden is definitely faster, but not by such a large factor. The Neoden can only do 5,000 components per hour without vision. With vision, the maximum speed is 3,500 components per hour, but the recommended speed is 1,000 components per hour. That’s only twice as fast as the LumenPNP.

Let’s just ignore the fact that I can’t afford an $8,000 machine just for this project, and let’s just assume that Stephen’s 500-components-per-hour number is pushing the LumenPNP as hard as the Neoden 3V Advanced would be pushing itself at 3,500 CPH.

Our OoberLights Micro boards only have 21 components. What if we fit a panel of 50 units in the pick and place at the same time? It would take a little more than two hours for the LumenPNP to populate those boards. The Neoden 3V Advanced might be able to do the same in around 20 minutes.

That seems too fast! If it takes me 10 minutes just to load either machine with fresh LEDs and PCBs between each job, that means the LumenPNP will be operating 92% of the time while the Neoden will be waiting for me to do work at least 33% of the time.
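Here is where those rough percentages come from, assuming the 10-minute load time and the placement rates above:

```shell
# A 50-board panel of 21-component boards, with 10 minutes of loading
# between jobs, at each machine's assumed placement rate.
awk 'BEGIN {
    placements = 50 * 21; load = 10
    lumen  = placements / 500  * 60   # minutes of placing at 500 CPH
    neoden = placements / 3500 * 60   # minutes of placing at 3,500 CPH
    printf "LumenPNP: %.0f min placing, %.1f%% busy\n", lumen, 100*lumen/(lumen+load)
    printf "Neoden:   %.0f min placing, %.1f%% waiting on me\n", neoden, 100*load/(neoden+load)
}'
```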

In my garage, it sure seems like it’d be a better value to own two, three, or even four LumenPNP machines. By the time I finish loading one and kicking off the next job, the next machine will be finishing up.

Is the Neoden 3V Advanced the right machine to be comparing to? I have absolutely no idea, but it is an inexpensive commercial machine, so it seems like a reasonable thing to look at!

What if you really do need to produce thousands of OoberLights Micro boards?

We can still farm out the work to PCBWay! Just because we can do the work in house doesn’t mean we have to.

If we start selling 100 or 200 OoberLights Micro boards each month, then it would most definitely be an awesome idea to have PCBWay make us a big batch!

Am I going to order a LumenPNP?

The answer to this question is almost definitely yes. It is Thursday as I am writing this. I am pretty sure I was completely convinced that this was a good idea when Jeremy and I interviewed Stephen Hawes on the Create/Invent podcast on Tuesday. I’ve been trying to find a good reason not to pull the trigger.

I haven’t found one yet, but I decided that I should at least wait until Monday. There’s currently a six week lead time on LumenPNP orders, so it will be two months before I get to post a blog titled I Bought A Pick and Place Machine: I Have No Idea What I’m Doing. That will be fun!

Conclusion

I wrote everything above on Thursday. It is Friday morning now, and I am about to quickly reread this before publishing. The more I talk about the LumenPNP out loud, the more convinced I am that buying one is absolutely the right thing to do.

What do you think? Are you running an open-source pick and place machine? Is your machine the LumenPNP or something else? How is it working out for you? Is it as reliable as my Prusa MK3S? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

Using the Buddy System For Off-Site Hosting and Storage

| Comments

I’ve been referring to this as the buddy system for a while, though I might be abusing the term a little. Last year, I bought a Raspberry Pi and a 14 TB hard drive for around $280. I loaded it up with my favorite cloud synchronization software and dropped it off at my friend Brian Moses’s house.

Using the buddy system for my cloud storage needs has a few advantages over Dropbox or Google Drive. I’m currently eating up about 5 TB of storage, and that would cost me $300 per year with Google, so I am already saving money. I also get to be a little extra paranoid. My data is encrypted on the client. That encrypted data is stored on an encrypted file system. The only way to access my Pi is through my Tailscale network.

I’ve been thinking a lot about the buddy system over the last year. I have a lot of things to say, so I figured it was time for a blog post!

You have to trust your buddy

Some of these problems can be mitigated, but there are so many bad things that could happen if you can’t trust your friend!

From my end of this bargain, I don’t feel the need to trust Brian all that much. There are enough layers of encryption, firewalls, and VPNs in the way to make me feel comfortable. Not to mention the fact that my data isn’t really all that sensitive. I wouldn’t leave a copy of my data sitting at someone’s house if I weren’t comfortable with a burglar walking off with it.

Brian is most definitely assuming more liability here. I could be snooping around his network. I could even be downloading illegal content and getting him in trouble with his ISP. I can imagine even worse things I could do from there.

A mesh VPN like Tailscale makes the buddy system easy

I definitely wouldn’t have gone back to hosting my own Seafile server without Tailscale. Tailscale is a zero-configuration mesh VPN. Every one of my devices running Tailscale is connected directly to each other via a Wireguard VPN connection. That means I can always hit my Raspberry Pi at Brian’s house no matter where my desktop, laptop, or phone are currently sitting.

Some of my Tailscale hosts

There are other similar offerings, like ZeroTier and Innernet, and I am certain they would handle the job just fine, but one of my favorite features of Tailscale’s service is their machine-sharing option. I share my Seafile server with my wife and with my Create/Invent Podcast co-host. Chris has a few hundred gigabytes stored up there, and having nearly unlimited storage makes it easy for Jeremy and me to share giant video podcast episodes.

To keep things secure, I configured the firewall to block everything except the Tailscale daemon’s port. My Raspberry Pi is nearly invisible on Brian’s home network. The only way to connect to the ssh or Seafile services is through my Tailscale network.
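That firewall posture can be sketched with ufw. `tailscale0` is Tailscale's default interface name, and UDP 41641 is its default WireGuard port; treat both as assumptions worth verifying against your own tailscaled configuration:

```shell
# Drop everything inbound except traffic arriving over the tailnet.
ufw default deny incoming
ufw allow in on tailscale0   # ssh and Seafile only reachable via Tailscale
ufw allow 41641/udp          # helps peers establish direct WireGuard connections
ufw enable
```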

How can the host mitigate the trust issue?

I’ve known Brian for 25 years. We shared an apartment for a couple of those years. Brian knows that I won’t try to get him into any trouble.

Requiring this level of trust puts a lot of constraints on the buddy system. Wouldn’t it be nice to be able to widen this network safely? Instead of Brian and me trading servers, it’d be awesome if we had a third acquaintance. I colocate with Brian, he colocates with our acquaintance, and our acquaintance colocates with me. We could turn it into a bit of a ring, or we could be doubling up on our off-site copies of our data.

We could host those servers in our router’s DMZ. That’ll keep our friends away from the stuff on our local network. Then we can firewall off the DMZ so only the Tailscale UDP ports can pass. Then our friends would have to use a Tailscale exit node to keep their software on the Raspberry Pi up to date, and I could definitely live with that.

Is that safe enough? Probably. Maybe. Assuming your router and firewall aren’t buggy, I imagine the only thing you really have to worry about at this point is non-Tailscale connections going out on the Tailscale port. Are you paranoid enough to worry about this?

Here’s my problem with locking this down. If your supposed friend is industrious enough, I bet they’re going to find a way to sneak out of their jail and potentially get you into trouble. If they’re going to try to circumvent your security, are they really your friend?

In any case, I think putting even a trusted friend in your DMZ is a fine idea, but I don’t think I want to put someone’s device on my home Internet connection unless I trust that they’re not going to abuse it.

Why not share a server or two?

Brian and I definitely talked about this. It would be easy for Brian to spin up an empty virtual machine for me and map me a slice of his NAS, and it would be just as easy for me to do the same thing.

This plan brings a little more risk along for the ride, but Brian knows I won’t be attempting to break out of my VM, and I don’t expect Brian to attempt this either.

Even so, I still decided against this. Giving me two power outlets and a switch port for my Raspberry Pi and USB hard drive is easy. Brian didn’t have to set up a guest machine for me. He didn’t have to install an OS in a virtual machine. He didn’t have to set up any special networking rules. He just had to plug in my Raspberry Pi.

The Pi didn’t add much to the cost over the USB hard drive, it is easier to repair if something goes wrong, and it puts most of the burden on me. Not only that, but you can send a setup like mine to someone less skilled. I could mail a Pi and a USB hard drive to my father in Pennsylvania, and I bet he could plug it into his gear.

Is the buddy system worth the risk?

If you have a lot of data, the money you save will start adding up pretty quickly. I did a bad job of tracking exactly when my hypothetical Google Drive bill would have crossed from $200 to $300 per year, but I will be paying for my second year soon, and I will be ahead by more than $300. My data is growing, so I could very well be saving $400 next year.


If you trust your friend, this is easy. If you had to call your friend and explain a situation about a dead body, and the first thing he’d do is ask how many shovels to bring, you have little to worry about. Your biggest risk is that a third party hacks into that poor little Raspberry Pi.

I’d bet I could trust most people I know if I stuck their Raspberry Pi in a DMZ and locked their server down to only allow Tailscale’s UDP ports out.

Conclusion

I had a lot of reasons for writing this blog rolling around in my head. The first is that I wanted to make sure that if any of my acquaintances asked me to host their Raspberry Pi at my house, I would be able to articulate exactly why that would be risky for me. I also wasn’t exactly certain of how I felt about the possible risks and how much effort it would take to mitigate them.

For a variety of reasons, I am not physically prepared to lock down someone else’s Pi on a VLAN in its own DMZ. If I were, I’m beginning to think that I’d be willing to do this. Maybe. Before I wrote this blog, I was completely against the idea, but I am warming up to it.

What do you think? Is it worth swapping servers with a buddy to save $200, $300, or more every single year? Is it worth accepting the risk that a friend might download dozens of terabytes of movies and force your ISP to drop you as a customer? Do you feel you could mitigate that risk well enough to not worry about it?

Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!

My Self-Hosted Cloud Storage with Seafile and Tailscale is Already Cheaper Than Dropbox or Google Drive!

| Comments

Back in February, I decided it was time to go back to hosting my own cloud storage again. I originally started hosting my own Seafile server back in 2013. I stopped colocating that server hardware in 2018, shut down my old Seafile server, and I wound up paying another company to use their Seafile service.

There were two problems sneaking up on me this year. I was rapidly approaching the storage limits of my hosting provider, and there was a huge chunk of my video data that I wasn’t syncing, because I didn’t have anywhere near enough space available.

In February, I took inventory of my total storage requirements. If my memory is correct, I was using right around 3.2 TB. The options for syncing that much data to a Dropbox-style service are all rather costly. Dropbox was $120 per year for 2 TB of storage, and Google Drive was $100 for the same. That would have made my annual bill either $200 or $240.

NOTE: Google Drive sync only works with third-party clients. It would be more accurate for me to compare my costs to Dropbox, since that is the service I would actually use instead of Seafile. I’m OK with comparing to Google Drive pricing, though, because their prices are lower, and that makes things more of a challenge on my end!

I decided to spend about $290 for a Raspberry Pi 4 and a 14 TB Seagate USB hard drive. I dropped it off at Brian Moses’s house, and it has been chugging along without much trouble for the past 10 months.

I expected to save money, but I didn’t expect it to happen so soon!

In fact, I completely missed the point where I crossed into the black. I just had it in my head that if I had gone with Google Drive, I would have paid $200 back in February, and then I’d be having to pay another $200 after the first year was over. I knew that once I made it to February 2022, I would have paid back my initial investment and been ahead by about $110.
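The back-of-the-envelope version of that timeline, using the roughly $290 hardware cost and the hypothetical $200-per-year Google Drive bill:

```shell
# Two annual Google Drive payments versus the one-time hardware cost.
awk 'BEGIN { printf "ahead after the second payment: $%d\n", 2*200 - 290 }'
```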

I forgot to take into account the fact that my data is growing. I checked earlier this month, and my Raspberry Pi’s hard drive is 41% full. That’s just over five terabytes.

Seafile Raspberry Pi Storage

NOTE: I did a bit of housekeeping and garbage collection while writing this post, but I’m still over 4 TB!

I imagine I flew past the 4 TB mark during the summer. If I were using Google Drive, I would have gotten a warning that I was running out of space, and I would have had to do some work to pay for more storage.

This means I’ve already paid for my Raspberry Pi and 14 TB hard drive, and I’ve even managed to put about $10 in my pocket!

You can’t really buy 6 TB of storage on Google Drive

At least, I don’t think you can. Even if you want to spend $300 per year on storage, they won’t let you put that much data on a single account. I’m pretty sure you’d have to do something goofy like set up three separate accounts. That would be yucky.

A Dropbox business plan supposedly has unlimited data, but they charge $20 per month per user, and there is a three-user minimum. That’s $720 per year. If you have two friends, and you all need to sync a boatload of data, that might be a good deal. It isn’t a good fit for me. I don’t have dozens of terabytes of data to sync!

Using Google Drive as my pricing benchmark seems like a good compromise. Google’s sync client isn’t great, and they don’t support Linux. Dropbox’s sync client is excellent, and it does work on Linux, but I feel like using their pricing makes my setup seem like too good a value. There are cheaper services from smaller companies, but they aren’t all that much cheaper.

Why am I using Seafile?

My Seafile server’s job is to synchronize my files. That’s just about the only feature I truly expect out of the software or rely on.

It pushes copies of every change I make to my desktop, laptop, and NAS. If this is your goal, Seafile isn’t the only game in town. Syncthing does this job very well, and it is a much simpler piece of software. The last time I tested Nextcloud, it wasn’t up to the task of syncing my quantity of files, but that was 9 years ago. I expect their situation has improved tremendously in the meantime!

The Seafile client encrypts my data locally. The server doesn’t have the keys to access my files. This is extremely important to me. This keeps me safe if anyone on the Internet manages to hack into my Raspberry Pi. Nextcloud might be able to do this now, but I don’t believe Syncthing is up to the job yet.

Seafile also keeps historical copies of all my files. I have most of my Seafile libraries set to keep three months of history. A new version of every file is recorded just about every time I hit save. There are probably 18 revisions of this blog post on my server already.

Seafile is at the heart of my backup and disaster recovery plan

The first thing my backup plan attempts to mitigate is hardware failure. Seafile doesn’t sync changes instantly, but it isn’t far off. I can save a file on my desktop, and the update will usually show up on my laptop in about 30 seconds. If I have that file open in Emacs on both machines, it will even automatically refresh itself.

If the SSD in my desktop melted right now, I could walk across my office, open up my laptop, and pick up where I was a couple of minutes ago. Nearly instantaneous backups are absolutely amazing.

My Seafile Libraries in December 2021

The next disaster I am defending against is the natural kind. What if my home office floods? What if my house burns down? What if there’s a burglar?!

If I’m lucky, I get to walk out the door with my laptop. If I can’t, there will be a full copy of all my data six miles away on my Seafile server. It would be nice to have more distance between me and my off-site backup, but I’m willing to live with this.

The last problem boils down to software failure. What if my new Emacs extension decides to write garbage into all my Markdown files? What if I accidentally delete my home directory? What if I get hit with ransomware that encrypts and then deletes all my important files?

The bummer here is that this change will be happily synced to all my computers. The good news is that history will still exist on the Seafile Pi. It’ll take a good number of clicks, but it won’t be too much trouble to restore.

My implementation of a 3-2-1 backup

What is 3-2-1 backup? It means you should have at least three copies of your data. Your data should be on at least two different mediums. At least one of those copies needs to be at a remote location.

In the old days, we usually wanted the second medium to be tape. These days, most people are happy enough making sure the second medium is just a different device. It is important to make sure that second device is out of band.

What do I mean by out of band? If your backup lives on an always-connected USB hard drive or a file share on your NAS, then your accidental rm -rf might delete your backup, or that piece of ransomware might take your backup right along with it! If you can see and get to a file with your file manager, it is quite likely not safe from accidents, viruses, or ransomware.

I have four copies of most of my data: on my desktop, laptop, NAS, and the Raspberry Pi. The copy on the Seafile server can only be accessed via the Seafile client using its own protocol. I wouldn’t be surprised if a dedicated attacker could exploit this connection somehow, but a random piece of malware or an accidental rm -rf certainly isn’t going to have a way to delete anything in the server’s history.

The Seafile server is six miles away.

Am I really $10 ahead? Will I really save another $300 by 2023?

This first year involved a bit of financial risk. I don’t know what sort of warranty I have on my Pi or 14 TB Seagate hard drive, but the odds of that hard drive dying are a good bit higher than zero. Having to pay for a new replacement hard drive would have set me back quite a bit on this journey.

Please don’t forget that time is money. I invested some number of hours of my time in February in the hopes that I will be able to save $200 or $300 each year for the next two or three years.

How much time did it take to get my Pi server up and running? I didn’t keep track, and it took a lot longer than it needed to because I was documenting the process in various ways as I went.

If we ignore the blogging aspect of things, I will be surprised if I spend anywhere near 8 hours between installing and maintaining my Seafile Pi during the first two or three years.

It is probably important to note here that I’m comparing pricing to other services based on the storage I am actually using. I don’t have 6 TB of cloud storage. I have 14 TB. That would be more like $700 per year from Google Drive.

Should you be hosting your own cloud storage?

Most people shouldn’t. Most people fit in the free or nearly free tiers of Dropbox or Google Drive. The savings really start to become clear as your storage needs approach or exceed 2 TB.

Self-hosted cloud file synchronization isn’t exactly a drop-in replacement for a service provider. There are some advantages that I appreciate very much, but there are also a number of important disadvantages.

Dropbox is making sure your data is replicated to multiple locations. Google is making sure there are backups and redundancy. Even if that weren’t the case, they are doing work that would take you at least a few hours.

Maybe I have to do a little work, but I also know my files aren’t leaving the computers that I own. I don’t have to worry about Google restricting my files.

Dropbox’s unlimited business plan is potentially a really good value at $720 per year, but I can buy a lot of storage for my Raspberry Pi server with that $720. That could be something approaching 60 TB, and as long as those drives don’t fail, that would be $720 back in my pocket next year.

The important thing to note here is that I am quite willing to suffer through a cloud storage outage. If Brian’s 3D printer catches on fire and takes out my Raspberry Pi, I can order a new Pi and hard drive from Amazon and have it here in two days. I have a copy of the MicroSD card that I can pop into a fresh Pi in a few seconds, and I’ll be resyncing my Seafile libraries in no time.

I wouldn’t have done this without Tailscale and the buddy system

The worst part about hosting my first Seafile server was that it was exposed to the Internet. I had to watch for Debian or Seafile security alerts like a hawk. I had to drop whatever I was doing and get to work patching things when updates were available. That was a lot of work!

I’m hosting Seafile on my Tailscale mesh VPN network. My Raspberry Pi’s firewall is blocking just about everything on the Ethernet port except Tailscale, so there are only four or five computers that are even able to connect to my server.
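I won’t pretend this is my exact ruleset, but the idea only takes a few commands. This sketch assumes ufw on Raspberry Pi OS, Tailscale’s default interface name (tailscale0), and Tailscale’s default UDP port (41641):

```shell
# Drop everything arriving on the physical network by default...
sudo ufw default deny incoming
sudo ufw default allow outgoing
# ...but trust anything that arrives over the Tailscale interface.
# Seafile, SSH, and everything else rides the VPN.
sudo ufw allow in on tailscale0
# Let Tailscale negotiate direct connections instead of relaying.
sudo ufw allow 41641/udp
sudo ufw enable
```

With rules like these, the only machines that can even attempt to talk to Seafile are the ones on my Tailscale network.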

There’s only one downside to this for me, but it isn’t a big one. I can’t use Seafile’s built-in file sharing abilities. Tailscale lets you share individual machines with other Tailscale users, and I have shared my Seafile server with both my wife and my co-host on The Create/Invent Podcast.

My Raspberry Pi is colocated at Brian Moses’s house. It is sitting underneath his Prusa MK3 3D printer, and it is connected to his gigabit Internet connection. This is essentially free for me, though I of course offered to plug a similar device into my network whenever he gets around to it.

I’m looking forward to the next year or two!

If you had asked me to implement this same setup half a decade ago, I would have been hesitant. I used to have a stack of USB 2.0 drives in a RAID 5 that I would plug into my Core Duo laptop for backups and extra storage, and those drives were more than a little flaky.

In that same era, hard drives were getting pretty awful. Every time I’d buy a stack of hard drives to build a RAID, I usually had a failed disk within the first couple weeks, and it was normal for a disk to fail about once a year after that.

Things have changed. Two of the 4 TB drives in my virtual machine server are six years old, and the other two aren’t all that much fresher, and they’re all still happy. Until we flipped the breaker to shut down Brian’s office last week to install a fancy old Wink Relay smart switch, my Raspberry Pi server had an uptime over 180 days with no weird USB errors or hiccups.

My investment has paid for itself, and I absolutely won’t be surprised if the hard drive lasts long enough to pay for itself three or four times over.

NOTE: It is important to take measures to avoid wearing out your Pi’s microSD card. You can use raspi-config to enable an overlay filesystem, which is a fantastic idea. I just made sure to disable as much logging as I could, since that’s what 99% of the writes end up being.
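The overlay filesystem does all the work for you, but if you want the root filesystem to stay writable like I do, pointing systemd’s journal at RAM kills most of the remaining writes. This is a sketch, assuming a systemd-based Raspberry Pi OS:

```shell
# Keep the systemd journal in RAM instead of flushing it to the microSD card.
sudo sed -i 's/^#\?Storage=.*/Storage=volatile/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald
# If rsyslog is installed, it still writes to /var/log/syslog;
# stopping it gets rid of those writes, too.
sudo systemctl disable --now rsyslog
```

You lose your logs on every reboot this way, but on an appliance like this I’d rather lose logs than a microSD card.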

The math looks better if you buy a smaller hard drive!

My 14 TB drive isn’t quite the largest USB hard drive available, but it is far from the smallest. I didn’t need to go this large. An 8 TB drive would have cost less, and it would probably take me three years to fill it up, but the 14 TB drives usually have the lowest cost per terabyte. It is also better to have extra space than too little space!

The 6 TB version of my Seagate USB drive was on sale for $104 during the first week of December. That and the Pi would have cost me around $160. That’s 50% more storage for $35 less than the annual price of 4 TB of storage from Google.

Not only that, but sometimes the best computer is the one you already have. There’s an old Intel Core i3 laptop with 8 GB of RAM in my closet. I could have used that instead of the Pi, and I would have saved about $55. Better yet, every laptop comes with a built-in UPS!

You don’t have to do exactly what I’ve done. There are so many ways to accomplish the same thing! You can use a smaller drive. You can use an older, slower Pi. You can run Syncthing or Nextcloud, or you could even use ZFS replication (zfs send) to push backups to the Pi.
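If you went the ZFS route instead of Seafile, the push would look something like this. The pool and dataset names here are made up for the example, and it assumes ZFS is installed on both ends with passwordless sudo on the Pi:

```shell
# Take a snapshot of the dataset you want to protect...
zfs snapshot tank/files@2021-12-14
# ...and push the whole thing to a pool on the Pi over SSH.
zfs send tank/files@2021-12-14 | ssh pi@seafile-pi sudo zfs receive backup/files
# Subsequent nights only need to send what changed since the last snapshot.
zfs snapshot tank/files@2021-12-15
zfs send -i @2021-12-14 tank/files@2021-12-15 | ssh pi@seafile-pi sudo zfs receive backup/files
```

Snapshots on the Pi would give you the same sort of protection against ransomware and fat-fingered rm -rf that Seafile’s history gives me.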

NOTE: All of this is a little weird during the current supply chain conundrum. It is difficult to buy a Raspberry Pi 4 for the price I got less than a year ago, and I have no idea when that will get better. I am particularly excited about the idea that you could build a 14 TB micro server for about 20% more than the cost of the disk. It is still neat today that you can do it for less than 33% more than the cost of the disk, but the more that gap increases, the less cool it is!

Conclusion

I couldn’t be more pleased with my off-site Raspberry Pi cloud storage server. I have convenience. I have peace of mind. I have security. I am maintaining my privacy. I’m saving money. What more could I ask for?

I only have one disappointment. I bought a Pi with 2 GB of RAM for my Seafile project, but I swapped it with the 4 GB Pi from my Pi-KVM build. Neither requires more than a few hundred megabytes of RAM, and I figured I’d eventually find something else to host off-site on my Tailscale-accessible Raspberry Pi server.

My Seafile Pi is using less than 300 megabytes of RAM right now, and I haven’t come up with anything to host out there. Maybe you can give me some suggestions?!

What do you think? Am I crazy for not just paying Dropbox for my sync and remote storage needs? Do you expect I’ll be doubling my investment in savings by this time next year? Are you already doing something similar, or do you have plans to build something similar? Let me know in the comments, or stop by the Butter, What?! Discord server to chat with me about it!