Federated services have always had privacy issues, and I expected Lemmy to have the fewest; instead, it’s visibly worse for privacy than even Reddit:
- Deleted comments remain on the server, merely hidden from non-admins, and the commenter’s username stays visible
- The usernames of deleted accounts remain visible too
- Everything remains visible on federated servers (see the sketch after this list)!
- When you delete your account, your media does not get deleted on any server
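For context on why federation behaves this way: deletion in ActivityPub, the protocol Lemmy federates over, is just a `Delete` activity pushed to peer instances, and honoring it is entirely voluntary. A minimal sketch (the actor and object URLs are made up):

```python
import json

# The originating server can only broadcast this payload and hope.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://example-instance.social/u/alice",
    "object": "https://example-instance.social/comment/12345",
}
payload = json.dumps(delete_activity)

def handle_delete(activity: dict, local_store: dict) -> None:
    # What a well-behaved peer does on receipt. A stale, defederated, or
    # simply uncooperative peer can skip this and keep the copy forever.
    local_store.pop(activity["object"], None)
```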
I don’t think there is a legal requirement that you store that data, only that you make the data you do store available, or, in some situations, that you add logging in response to valid law enforcement requests.
Apple, for example, does not to my knowledge have access to iCloud data that is end-to-end encrypted. They wouldn’t necessarily be able to provide the contents of my notes application to law enforcement, and that is currently legal.
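For intuition on why that’s a technical limitation rather than just policy: in an end-to-end design the key never leaves the user’s device, so the provider only ever holds ciphertext. A minimal sketch of the idea, using Python’s `cryptography` library as a stand-in (this is not Apple’s actual scheme):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is generated on, and never leaves, the user's device.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

note = b"my private note"
ciphertext = cipher.encrypt(note)  # this ciphertext is all the server stores

# Server side: with no key, complying with a subpoena can only mean
# handing over opaque bytes. Only the key holder can recover the note:
assert cipher.decrypt(ciphertext) == note
```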
I’m basing what I’ve said on work I’ve done with attorneys in similar situations. I don’t know evidentiary law, but I wouldn’t want to be accused of destroying evidence of something. Still, my question stands: why should someone who has doxxed another person get away with it by deleting their account? How is that ethical?
Doxxing is not illegal in many places - the US included. Cyberstalking and harassment may be illegal, depending on location. That’s beside the point, though; this is an extremely specific example.
Ultimately users should, in my opinion, be in control of their data. Tildes, for example, preserves deleted comments for (I think) 30 days and then permanently removes them. It seems like that approach is a compromise that would work for your situation while still respecting privacy long term.
So the key thing here is, “are you aware that the data is part of a legal proceeding or crime?”
If not, deleting it as part of normal operations is perfectly legal. There are plenty of VPNs that do not log user information and will produce for the authorities all of the logs they retain (i.e. an empty log file).
From an ethical standpoint, keeping people’s data that they want removed, against their wishes, based on the hypothetical that someone might someday do something wrong, is by far the less ethical route.
“You might do something bad, so I’m going to keep all your data whether you like it or not!” <- the bad thing
It’s cute how you think I’m going to take legal advice from you. You do you, have a nice evening.
Apple (and Google, Microsoft, etc.) check the signatures of all files on their services to detect illegal material. They do it for copyrighted content and they do it for CSAM.
Checking against a known-malicious hash is very different from having access to the plain data. In fact, even for the known-malicious hashes, the companies doing the checks usually don’t have access to the source material (i.e. they don’t necessarily even know what it contains).
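A minimal sketch of what that kind of check looks like (plain SHA-256 and a made-up digest here for illustration; real systems typically use perceptual hashes such as PhotoDNA, but the privacy property is the same):

```python
import hashlib

# The scanner holds only a set of digests supplied by a clearinghouse;
# the value below is invented for this example.
KNOWN_BAD_HASHES = {
    "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
}

def is_flagged(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# A match says "this file equals list entry X" and nothing more; the
# checker never sees the material the digest was derived from.
print(is_flagged(b"some upload"))  # False
```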