Aggravationstation@feddit.uk to Technology@beehaw.org • Don’t believe the hype: AGI is far from inevitable
27 days ago
Possible or not, I don’t think we’ll get to the point of AGI. I’m pretty sure at some point someone will do something monumentally stupid with AI that will wipe out humanity.
Maybe. But I have a feeling it’ll be a single dumb mistake that’ll make someone say “ah, shit” just before we’re wiped out.
When the Soviets trained anti-tank dogs in WW2, they trained them on stationary tanks to save fuel: “Their deployment revealed some serious problems… In the field, the dogs refused to dive under moving tanks.” https://en.m.wikipedia.org/wiki/Anti-tank_dog
History is littered with these kinds of mistakes. It would only take one military AI with access to autonomous weapons to have a similar flaw in its training data to potentially kill us all.