• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • The goal of emulation is to make the console work for every game that was released on it.

    After all, if even one game doesn’t work, the emulation cannot be accurate. And with inaccurate emulation, how can you ensure your other games are correctly emulated? In fact, if you know the emulator isn’t perfect - which it isn’t if some games have issues - how can you know any game is correctly emulated? You could test a game, but it would take infinite time to test every input and state. If you only do a few runs of the game, how do you know you haven’t missed anything?

    In other terms: The language L containing every perfectly emulated game is undecidable.
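    Loosely: if games and the emulator are modeled as programs, and "perfectly emulated" depends only on observable runtime behavior, this is an instance of Rice's theorem. (The names E, H, and Behave below are my own notation for the sketch, nothing standard.)

```latex
% L is the set of games G whose behavior under emulator E matches their
% behavior on real hardware H:
\[
  L \;=\; \{\, G \mid \mathrm{Behave}(E, G) = \mathrm{Behave}(H, G) \,\}
\]
% Membership in L depends only on runtime behavior (a semantic property),
% and the property is non-trivial: some games are in L and some are not.
% By Rice's theorem, every non-trivial semantic property of programs is
% undecidable, so no algorithm decides membership in L.
```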



  • AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days.

    What would’ve been high risk? Well:

    In one section of the White Paper OpenAI shared with European officials at the time, the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.”

    That does make sense, considering ELIZA from the ’60s would fit this description. It pretty much repeated what you wrote to it back to you in a different style.
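    That reflection trick is trivial to reproduce. Here’s a minimal sketch of an ELIZA-style responder - my own toy illustration, not Weizenbaum’s actual script; the `REFLECTIONS` table and the canned framing are assumptions:

```python
import re

# Toy ELIZA-style responder: swap first/second-person words and echo the
# input back as a question. This reproduces the "repeats what you wrote
# in a different style" behavior, nothing more.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

def reflect(text: str) -> str:
    # Lowercase the input, then map each pronoun/verb through the table.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement: str) -> str:
    # Drop trailing punctuation, reflect the rest, and wrap it in a
    # canned conversational frame.
    cleaned = re.sub(r"[.!?]+$", "", statement.strip())
    return f"Why do you say that {reflect(cleaned)}?"

print(respond("I am worried about my games."))
# prints: Why do you say that you are worried about your games?
```

    A full ELIZA also matched keyword patterns ("mother", "dream", …) against a script of templates, but the core of the illusion is exactly this kind of reflection.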

    I don’t see how generative AI can be considered high risk when it’s literally just fancy keyboard autofill. If a doctor asks ChatGPT what the correct dose of medication for a patient is, it’s not ChatGPT that should be considered high risk but rather the doctor.