But considering that humans do get copyright strikes when they make something too similar, that should also apply to AI; it doesn’t matter if it’s not exact.
Yes, I understand :) Just thought it’d be nice to increase awareness of the problem.
There are problems that go both ways, the number of subscribers, for example.
I get having megathreads, but currently kbin doesn’t care whether a post on Lemmy is pinned, so megathreads make it harder here to follow news and discussions on the topic.
Can’t be done unfortunately. There’s a limit to how many pending moderator invites there can be. r/politicalhumor did the next best thing though.
What I hate most about it is that people are already doing very poorly at checking their own information intake for accuracy and misinformation, so this comes at one of the worst times for things to go south. It’s going to challenge the stability of society in a lot of ways, and with how crypto went I have 0% trust that techbros and corporations won’t sabotage efforts to get things right for the sake of their own profit.
Here’s a recent report saying the opposite, but with proper data to back it up, unlike this poorly thought out personal anecdote.
I think the best outcome is for the Fediverse to succeed at proving the model is better for users than the mega corps, then grow and last long enough for the EU to take notice, so that if any bad actors try to ruin it, the EU would want to protect it. We’re probably talking far into the future, but I think if handled well it can get to that point.
If that’s the case then there’s no need for it to be off the record. Unless the conversation you pointed out is open to scrutiny, it shouldn’t happen.
Exactly this right here.