If that’s the extent of their differences, then they aren’t very different…
I’m just some guy, you know.
I suspect that this was something of a test case, with the regulator flexing their censorship muscle, and I’m glad it didn’t work out.
This was a POV stabbing video that people spread around to glorify violence. It’s in the same category as beheading videos.
America may have decided that child porn is the only media exception to free speech, but other, saner countries draw the line a bit more broadly to include all forms of extremely violent crime filmed to be glorified: things like murder, attempted murder, torture, and the rape of adults.
If you want to operate a business in places like Australia or New Zealand, you cannot be distributing violent gore videos within their borders.
I hope they revisit this as X users are pretty routinely celebrating things like the Christchurch shooting and other violent extremist incidents. Sometimes censorship makes sense, and when people are antagonistically spreading videos of people being maimed and killed, the “free speech” argument absolutely doesn’t fucking cut it.
Did you actually read the article? The designers of this vision model used a software trick (inspired by the concept of quantum tunneling, but having nothing to do with quantum computing) to let inputs bypass hidden layers at random, producing outputs that perceive certain optical illusions in a way other vision models cannot.
This can be done by just adding some noise to the image. Sometimes it gets recognized as one thing and sometimes as another, just like a human would.
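To make the point concrete, here's a toy sketch of the "just add noise" idea. Everything in it is made up for illustration (`toy_classifier`, the duck/rabbit labels, the probe weights, `sigma`); it is not the paper's model, just a demonstration that input noise alone makes an ambiguous image flip between interpretations across runs:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(image):
    # Hypothetical stand-in for a vision model: two class scores
    # computed from fixed linear probes of the image.
    w_duck = np.full(image.shape, 1.0)
    w_rabbit = np.full(image.shape, 1.0)
    w_rabbit[: image.shape[0] // 2] = -1.0  # differs only on the top half
    scores = np.array([(w_duck * image).sum(), (w_rabbit * image).sum()])
    return ["duck", "rabbit"][int(scores.argmax())]

# An "ambiguous" image: both class scores are exactly tied without noise.
ambiguous = np.zeros((8, 8))

def noisy_predict(image, sigma=0.1):
    # Add Gaussian noise to the input before classifying.
    return toy_classifier(image + rng.normal(0.0, sigma, image.shape))

# Across repeated runs the label flips back and forth, with no
# architectural trickery involved.
labels = {noisy_predict(ambiguous) for _ in range(50)}
```

On a tied input, the noise alone decides the argmax, so repeated calls split between the two labels, which is all the commenter is claiming.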
Word soup by someone who knows way less than these researchers.
Signal is bad then?
Yeah. Why use X3DH when there are algorithms that already exist and we know are secure?
So which direction do you want it to go? More private or more moderated?
Privacy is good, but when the public chatrooms are distributing child porn, you can’t use encryption as an excuse for not moderating. Failure to moderate illegal content is a crime.
Let the pedos run their own Matrix server or something. You can’t be knowingly providing comms and distribution to child pornographers.
Also terabytes of child porn that they won’t take down.
I recently set up Synapse just to play around with the protocol, and I do not remember this instruction at all. Where did you get this?
As someone who integrates Okta for a living, I have no idea why this would be part of the config. I can’t even figure how you would use Okta for content filtering at all.
It’s an authentication service…
They deliberately misrepresented it. Just another person who thinks that if you oppose Goldman Sachs for their contributions to late-stage capitalism, you are obligated to disagree with every single piece of messaging from them without exception.
If the CEO of Goldman Sachs shits in a toilet, and this guy finds out, he’s going to shit on the floor in protest.
Not really comparable.
AI has lots of potential for the future, and Goldman Sachs continues to invest in that sector.
They are specifically talking about the bubble of generative AI startups, none of which have any long-term viability: they either produce a novelty, or something so inaccurate that nobody would trust it after using it.
They aren’t the people saying that the Internet won’t catch on. They’re the ones warning you that dot com is a bubble.
They’re right.
This only applies to Sony products, right?
I use Buffalo drives and Optical Quantum BD-Rs for archiving. It doesn’t sound like that will be affected.
Copyright infringement becomes theft when you make money off of someone else’s work, which is the goal of every one of these AI companies. I 100% mean theft.
My take is that you can train AI on whatever you want for research purposes, but if you brazenly distribute models trained on other people’s content, you should be liable for theft, especially if you are profiting off of it.
Just because AI has so much potential doesn’t mean we should be reckless and abusive with it. Just because we can build a plagiarism machine capable of reproducing facsimiles of humanity doesn’t mean that how we are building that is ethical or legal.
Is the solution to male loneliness ripping your father’s shrieking soul from the depths of the underworld and crudely resurrecting him in defiance of God’s will?