• 1 Post
  • 24 Comments
Joined 1 year ago
Cake day: June 8th, 2023

  • It sounds like nobody actually understood what you want.

    You have a non-ZFS boot drive, and a big ZFS pool, and you want to save an image of the boot drive to the pool, as a backup for the boot drive.

    I guess you don’t want to image the drive while booted off it, because that could produce an image that isn’t fully self-consistent. So then the problem is getting at the pool from something other than the system you have.

    I think what you need to do is find something else you can boot that supports ZFS; the Ubuntu live images should do it. If not, you can try re-installing the setup you have, but onto a USB drive.

    Then you have to boot that and zpool import your pool. ZFS is pretty smart, so it should auto-detect the pool structure and where it wants to be mounted, and you can mount it. Don’t do a ZFS feature upgrade on the pool though, or the original system might not understand it. It’s also possible your live kernel’s ZFS is too old to understand the features your pool uses, and you might need to find a newer image.

    Then once the pool is mounted you should be able to dd your boot drive block device to a file on the pool.
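
    Roughly, from the live environment (a sketch, not gospel: the pool name “tank”, the dataset “tank/backups”, and the boot drive “/dev/sda” are placeholders for whatever yours are actually called):

    ```
    sudo zpool import                # lists pools the live system can see
    sudo zpool import -N tank        # import without mounting anything yet (add -f if it wasn't cleanly exported)
    sudo zfs mount tank/backups      # mount just the dataset you want to write to

    # Image the (not currently booted) drive into the pool
    sudo dd if=/dev/sda of=/tank/backups/bootdrive.img bs=4M status=progress
    ```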

    If you can’t get this to work, you can try using a non-ZFS-speaking live Linux and dd-ing the image to somewhere on the network big enough to hold it (if you have such a place), then booting the system and copying it from there back to the pool.
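
    A sketch of that fallback, assuming an SSH-reachable machine with enough space (“bighost” and the paths are made up):

    ```
    # From the live environment: stream the raw image over the network
    sudo dd if=/dev/sda bs=4M status=progress | ssh user@bighost 'cat > /big/boot-drive.img'

    # Later, from the booted system with the pool mounted: pull it back
    scp user@bighost:/big/boot-drive.img /tank/backups/
    ```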



  • I think you can keep doing the SMB shares and use an overlay filesystem on top of those to basically stack them on top of each other, so that server1/dir1/file1.txt and server2/dir1/file2.txt and server3/dir1/file3.txt all show up in the same folder. I’m not sure how happy that is when one of the servers just isn’t there though.
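
    A sketch of what I mean, with made-up server and mount names (note that a lowerdir-only overlay is read-only; mergerfs may be friendlier if you also want writes, or want missing branches handled gracefully):

    ```
    # Mount each share (guest auth just for illustration)
    sudo mount -t cifs //server1/share /mnt/s1 -o guest
    sudo mount -t cifs //server2/share /mnt/s2 -o guest
    sudo mount -t cifs //server3/share /mnt/s3 -o guest

    # Union them; on filename collisions the leftmost lowerdir wins
    sudo mount -t overlay overlay -o lowerdir=/mnt/s1:/mnt/s2:/mnt/s3 /mnt/all
    ```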

    Other than that you probably need some kind of fancy FUSE application to fake a filesystem that works the way you want. Maybe some kind of FUSE-over-Git-Annex system already exists that could do it?

    I wouldn’t really recommend IPFS for this. It’s tough to get it to actually fetch the blocks promptly for files unless you manually convince it to connect to the machine that has them. It doesn’t really solve the shared-drive problem as far as I know (you’d have like several IPNS paths to juggle for the different libraries, and you’d have to have a way to update them when new files were added). Also it won’t do any encryption or privacy: anyone who has seen the same file that you have, and has the IPFS hash of it, will be able to convince you to distribute the file to them (whether you have a license to do so or not).




  • Why does Lemmy even ship its own image host? There are plenty of places to upload images you want to post that are already good at hosting images, arguably better than pictrs is for some applications. Running your own opens up whole categories of new problems like this that are inessential to running a federated link aggregator. People selfhost Lemmy and turn around and dump the images for “their” image host in S3 anyway.

    We should all get out of the image hosting business unless we really want to be there.




  • I did use older Android, and I agree that the new permission model is absolutely much better for the use case of running apps that you do not trust or even like. I can scan a coupon with the camera today without having to worry that the store’s app is going to be taking pictures of me tomorrow.

    But that’s hardly any of what I use my phone for. So I pay a lot of the costs of more hoops to jump through to allow stuff I actually want, while not really getting much of the benefit of being able to use malicious applications relatively safely.

    And the one time I had a real permission problem, it was Snapchat trying to bully me into giving it access to all my files so it could “detect screenshots” before it would let me talk to my friends. Android permissions were no help there, because the app can still tell when I reject its requests, and it won’t get booted from the store for refusing to work until I grant everything it asks for.

    The whole system seems to me to be designed to make people feel like their privacy is being protected, by popping up all the time to say that unused permissions have been removed and hey look at all these privacy options you have. It does indeed stop people from spying on your location and camera all the time without you noticing. But while the little permanent green dot is flashing every five minutes when your location is sent to Home Assistant like you explicitly asked, and you are trying to decide if you want to let Zoom use Bluetooth headsets just right now or on an ongoing basis, Google is hoping you don’t notice that the OS and most of the apps are designed to extract value from you rather than to serve your interests.

    It’s now safer to run the evil apps, but they’re still there trying to do evil.


  • What’s the security problem with Compatibility Mode? Is it just that it lets you let an app run with more permissions than it otherwise has on the new APIs? Or does it turn off a bunch of mitigations?

    The Android permissions churn seems meant to protect people from applications: previously you could just say you need GPS, install, and then use GPS all the time. But untrustworthy apps started tracking people all the time, so Google declared that now only Google Maps is allowed to track people all the time, and that everybody else has to do a new location access ritual. If I have an old app that I trust (or wrote!) but doesn’t do the ritual, I ought to be able to convey to the OS that I trust the application anyway. The machine works for me, not for Google’s idea of what my privacy preferences are.

    I don’t see how a developer not implementing new permissions models is the developer not caring about security. I guess a more robustly sandboxed app is more secure than a less robustly sandboxed app? But just because a security enhancement like that is available doesn’t mean it’s actually worth doing, and the user experience of the new system (get sent to settings to toggle on file system access for a file manager) is often worse than before.

    Having new development is better for the user than not; they will get features and improvements. But having to do development just to keep the user from losing features over time is a pure cost to the developer. The rate at which it currently happens makes it unnecessarily hard to do projects that aren’t shaped like commercial subscription services.



  • You can use interest rates to convert between stocks and flows of money. If the prevailing interest rate is 5%, a thing will produce 5%, or 1/20th, of its actual value every year. So you can take the annual cost of something and multiply by 20 (and vigorously wave your hands at compounding) to get its actual value.

    A $10/month subscription costs $120/year, or $2,400 over 20 years. So it’s equivalent to a $2,400 purchase.
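
    In perpetuity terms (with the same hand-waving about compounding):

    ```latex
    \text{present value} = \frac{\text{annual cost}}{\text{interest rate}} = \frac{\$120/\text{yr}}{0.05/\text{yr}} = \$2{,}400
    ```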

    You can also think of it as, you need to set aside $2,400 in investments to pay for your subscription, e.g. in retirement. Or, if you ditched your subscription you could afford to borrow $2,400 more to e.g. buy a house. Or, you as a customer are the same value to the business as $2,400 in capital, minus whatever they have to spend to make the thing.

    You should think a lot about a $2,400 purchase.


  • Games are a good example. One might want to publish a game and then work on the next game, not go back to the first game again and add dynamic permission prompts for the accelerometer or recompile with the new SDK or whatever. But someone also might want to play Space Grocer I before Space Grocer II-X to get the whole story.

    The fewer breaking changes there are, the lower the burden of an app being “supported” is. Someone might be willing to recompile the app every couple years, or add a new required argument to a function call, but not really able to commit to re-architecting the program to deal with completely new and now-mandatory concepts.

    Even on software I actively work on that is “supported” by me, I struggle with the frequency of e.g. angry messages demanding I upgrade to new and incompatible versions of Node modules. Time spent porting to new and incompatible versions of a framework is time not spent keeping the app worth using.


  • If you write a commercial program and sell it once, you are probably not going to be selling new copies in 10 years. If you keep getting paid you should indeed keep working. But if you stop working on it, it is better for the finished software to last longer.

    Windows 11 has a “compatibility mode” that goes back to before XP. Android has a dialog that says that an old APK “needs to be updated”, regardless of the continued existence of the original developer or whether the user is happy with the features and level of support.

    It is this attitude of “we don’t need to think about backward compatibility because we are powerful and all software has a developer on call to deal with our breaking changes” that causes software to go obsolete very quickly now. User needs also change over time, but not nearly as fast.





    Usually for Windows VM gaming you want to pass through a GPU and a USB controller and plug in directly. You might be able to use something like Steam streaming, but I wouldn’t count on a normal desktop-app-oriented thin-client setup, though I haven’t tried one.
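
    With QEMU/KVM and VFIO that looks roughly like this (a sketch: the PCI addresses are hypothetical, find yours with lspci -nnk, and the host needs its IOMMU enabled, e.g. intel_iommu=on, with the devices bound to vfio-pci first):

    ```
    # Pass through the GPU (video + audio functions) and a USB controller
    qemu-system-x86_64 \
      -enable-kvm -machine q35 -cpu host -smp 8 -m 16G \
      -device vfio-pci,host=01:00.0 \
      -device vfio-pci,host=01:00.1 \
      -device vfio-pci,host=03:00.0
    ```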

    You may run into weird problems with latency spikes: mostly it will work great and everything runs at 90 FPS or whatever, but then inexplicably 1 frame every few minutes takes 100ms and nobody can tell you why.

    There can also be problems with storage access speed. What ought to be very fast storage on the host gets substantially slower once image-file and host-filesystem overhead, or block-device passthrough overhead, come into play. Or maybe you just need to pass an NVMe device straight through.
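
    If you try the raw-device route, it’s roughly one extra flag (hypothetical device name; the guest sees a plain virtio disk):

    ```
    # Added to the QEMU command line: hand the guest a whole raw device,
    # skipping the image-file and host-filesystem layers
    -drive file=/dev/nvme1n1,format=raw,if=virtio,cache=none,aio=native
    ```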