• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • I bought a used old-gen Sonos Connect about a year ago to integrate my Logitech Z906 into an existing pair of Sonos speakers. They made it deliberately tedious to downgrade those speakers (which had received the S2 “blessing”) back to S1 so they would work with the Sonos Connect. I’m an IT repair shop guy and I cursed all the way through this downgrade process.

    I would have gladly bought current hardware from them again if their prices were anywhere within the realm of plausibility. Credit where it’s due, that Sonos Connect hookup with the 2 wall-mounted 1st-party speakers works absolutely reliably. That company has just seriously lost its bearings since it engineered those parts.


  • I think if you’re talking wider demographics your model OSs are (obviously) Windows and macOS. People buy into that because CLI familiarity isn’t required. Especially with Apple products everything revolves around simplicity.

    I do dream of a day when Linux can (at least somewhat) rival that. I love Linux because I am (or consider myself) intricately familiar with it and I can (theoretically) change every aspect about it. But mutability and limitless possibilities are not what makes an OS lovable to the average user. I think the advent of immutable Linux distros is a step in the right direction for mass adoption. Stuff just needs to work. Googling for StackOverflow or AskUbuntu postings shouldn’t ever be necessary when people just want to do whatever they were doing on Windows with limited technical knowledge.

    However, on another note: if you’re talking about a home studio migration, I’m not sure what that entails, but it sounds rather technical. I don’t want to be the guy to tell you that CLI familiarity is simply par for the course. Maybe your work shouldn’t require terminal interaction. Maybe there is a certain gap between the absolutely basic Linux tutorials and the more advanced ones like you suggest. Yet what I do want to say is that if you want to do repair work on your own car, nobody expects that to be an easily accessible skill to acquire. Even if there are videos explaining step by step what you need to do, eventually you still need to get your own practice in. Stuff will break. We make mistakes and we learn from them. That is the point I’m trying to get at: not all knowledge can be bestowed from without. Some of it just needs to grow organically from within.



  • Ah yea I didn’t realize the official dock has 2 ports for display output. Valve is bae.

    There are definitely docks that have 3 display outputs, which would be a viable option if you also buy the Wacom Link Plus. I personally don’t know of any docks that have 2 display outputs and a USB-C port that is display-capable. There may be Thunderbolt ones but Steam Deck doesn’t do Thunderbolt unfortunately.

    So yea, I guess your only option is a different dock plus the Wacom Link Plus. I personally don’t see any other.


  • So the setup is currently two external monitors? Or does that include the Deck monitor? Is the USB connection to the Wacom just for pen input or does it transfer image as well? If USB-C is used as the monitor port it most definitely will not work with USB-A of any kind. Not even USB-A 3.1. You either need a different dock with a USB-C port or you need the Wacom Link Plus (which means you probably also need a different dock with at least 2 HDMI ports or one HDMI and one DisplayPort).





  • Mark my words: don’t ever use SATA-to-USB for anything other than (temporary) access to non-critical preexisting data. I swear to god, if I had a dollar for every time USB has screwed me over while trying to simplify working with customers’ (and my own) drives, I’d be rich. Whenever it comes to anything more advanced than data-level access, USB just doesn’t seem to offer the necessary utilities. Whether this is rooted in software, hardware or both, I don’t know.

    All I know is that you cannot realistically use USB to, for example, carbon-copy one drive to another. It may end up working, it may throw errors letting you know that it failed, or it may only seem to have worked in the end. It’s hard for me to imagine, with all the individual devices I’ve gone through, that this is somehow down to the parts and that somewhere out there is something better that actually makes this work. It really does feel like whoever designed the controller circuits used for USB-to-SATA conversion industry-wide just didn’t implement everything in a way that makes it wholly transparent from the operating system’s point of view.

    TL;DR If you want to use SATA as intended you need SATA all the way to the motherboard.

    tbh I often ask myself why eSATA fell by the wayside. USB just isn’t up to these tasks in my experience.
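    To give an idea of what I mean by carbon copy: over a direct SATA connection, a plain block-level clone is a one-liner. Device names below are placeholders — double-check them with lsblk before running anything, because this overwrites the target drive:

    ```shell
    # Clone all of sdX onto sdY at the block level.
    # /dev/sdX and /dev/sdY are placeholders -- verify with `lsblk` first;
    # this irreversibly overwrites the target drive.
    dd if=/dev/sdX of=/dev/sdY bs=64M status=progress conv=fsync

    # For a source drive that's already throwing read errors, ddrescue
    # retries bad sectors and logs progress to a map file so it can resume:
    ddrescue -f /dev/sdX /dev/sdY rescue.map
    ```

    Over a USB bridge, this exact kind of command is where I’ve seen things silently go sideways.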



  • Once you face the (seemingly) inevitable necessity of further hardware purchases, it does become sort of tedious, I must say. I used to treat my RAID parity as a “backup” for way longer than I’d like to admit because I didn’t want my costs to double. With unraid I at least don’t have the same management workload that I have on my main box, which runs rolling-release Arch with manually installed ZFS, where the ZFS build always has to line up with the kernel version and all that jazz. Unraid is my deploy-and-forget box. Rsync every 24h. God bless.
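    For what it’s worth, the “rsync every 24h” part is nothing fancy. A sketch of the kind of nightly job I mean (paths and hostname are made up for illustration):

    ```shell
    # /etc/cron.d/nightly-backup -- push the main pool to the unraid box
    # every night at 03:00. Source path and target host are placeholders.
    #   -a        archive mode (permissions, times, symlinks)
    #   --delete  mirror deletions so the backup matches the source
    0 3 * * * root rsync -a --delete /tank/data/ unraid:/mnt/user/backup/
    ```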

    Proxmox was recommended to me before I switched my main server to Arch, but once I realised that it has no direct Docker support, I thought I’d rather just do things myself. It really is a matter of preference. It’s kind of hard to believe that all the functionality in Proxmox can be had absolutely free.


  • It’s understandable that you want to take your virtualization capabilities to the next level, but like many others here, I don’t see the appeal of containerizing unraid. I started using unraid last autumn, and to me it really is about being able to mix drive sizes. It’s a backup to my main server’s ZFS pool, so (fingers crossed) I don’t even really worry about drive failures on unraid. (I have double parity on ZFS and single parity on unraid.)

    Anyways, my point is: I started out with 8 SATA slots plus an old USB-based enclosure with it set to JBOD mode, and that was a pretty stupid idea. unraid couldn’t read SMART data from those USB drives. Every once in a while one of the drives would suddenly show up as having an unsupported partition layout. A couple of weeks ago all 5 drives in the enclosure started showing up as unusable. So, as you can imagine, I dropped that enclosure and am now working solely off the 8 internal slots. I’d imagine that virtualizing unraid’s disk access might yield similar issues. At least the comments from people here remind me of my own janky setup.
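    As an aside, for anyone stuck with a USB enclosure: some (not all) USB-SATA bridges support SAT passthrough, so it’s worth forcing the device type in smartctl before giving up on SMART entirely (device name is a placeholder):

    ```shell
    # Ask smartctl to use SCSI-to-ATA Translation through the USB bridge.
    # Only works if the bridge chipset actually implements SAT passthrough.
    smartctl -a -d sat /dev/sdX

    # Some bridges need a vendor-specific device type instead, e.g.:
    smartctl -a -d usbjmicron /dev/sdX
    ```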




  • For about 4 years now I’ve just relied on a RaidZ2 (ZFS) pool (faulted-drive replacements never gave me any issues), but I recently did an expansion of the array plus an OS reinstall, and only now am I starting to incorporate Docker containers into my workflows. The live data is in ~ and nightly rsynced onto the new, larger RaidZ2 pool, but there is also data on that pool which I’ve thus far never stored anywhere else.

    So my answer to the question would be an off-site unraid install, which is still in the works. It really will only be that: a catastrophe insurance. I probably won’t even rely on parity drives there, in order to maximize space, since I already have double parity on ZFS.

    As far as reinstallation goes, I don’t feel like restoring ~ and running docker compose for all the services again would be too much of a hassle.
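    In case it’s useful, the restore I have in mind is roughly this (remote name and directory layout are hypothetical):

    ```shell
    # Pull the home directory back from the backup box,
    # then bring every service stack up again.
    # Remote host and paths are placeholders.
    rsync -a backupbox:/mnt/user/backup/home/ ~/

    # Assuming each service lives in its own directory with a compose file:
    for d in ~/services/*/; do
        docker compose --project-directory "$d" up -d
    done
    ```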