I think the Wikipedia definition is fine https://en.m.wikipedia.org/wiki/Intelligence. Excluding AI just because it’s AI is imo plain stupid and goes against all scientific principles.
I have definitely met humans that are less intelligent than ChatGPT. It can hold a conversation and ace every standardized test we have. It has finished law exams, medical exams and other exams from many different countries with a passing grade.
Can you give me a definition of intelligence that excludes ChatGPT and includes all human beings? And no, just excluding computers for the sake of it doesn't count.
@Barbarian772 it was shown over and over and over again that ChatGPT lacks the capacity for abstraction, logic, understanding, self-awareness, reasoning, planning, critical thinking, and problem-solving.
That’s partially because it does not have a model of the world or an ontology; it cannot *reason*. It just regurgitates text, probabilistically.
So, glad we established that!
As I said before: how can you prove to me that the human brain doesn’t essentially do the same?
@Barbarian772 as I said, I don’t have to. You are making a claim of equivalence here. The burden of proof is on you.
Otherwise, I get to claim you’re an alien from the Betelgeuse system, and if you object, I get to demand you prove you are not.
How can I prove it? In my opinion, how a system comes to an answer doesn’t matter; in yours it obviously does. If we judge ChatGPT, or rather GPT-4, just by its answers, it definitely shows intelligence and reasoning. Why does it matter if it’s a Chinese room? Or just “randomly choosing words”?
@Barbarian772 it matters because with regard to intelligent beings we have moral obligations, for example.
It also matters because that would be a truly amazing, world-changing thing if we could create intelligence out of thin air, some statistics, and a lot of data.
It’s an extremely strong claim, and strong claims demand strong proof. Otherwise they are just hype and hand-waving, which all of the “ChatGPT intelligence” discourse is, in order to “maximize shareholder value”.
So your morality depends on a being’s intelligence? That’s kinda fucked up imo. I have moral obligations in regard to living organisms; I don’t see how intelligence matters at all in that case. The worth of any human life should not be determined by intelligence.
> It also matters because that would be a truly amazing, world-changing thing if we could create intelligence out of thin air, some statistics, and a lot of data.

We do it routinely. It is called Education System.
@jalda
> We do it routinely. It is called Education System.
That relies on human brains that are trained. LLMs are not human brains. “Training” them is not the same thing as teaching humans about something. Human brains are way more complicated than just a bunch of weighted correlations.
And if you do want to claim it is in fact the same thing, we’re back to square one: please provide proof that it is.
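For a concrete (if drastically simplified) picture of what “a bunch of weighted correlations” fitted to text can mean, here is a toy sketch in Python: a bigram model whose “training” consists of nothing but counting which word follows which in a made-up corpus. GPT is of course a vastly larger transformer trained by gradient descent, so treat this only as an illustration of the statistical flavour of the objective, not as a description of how GPT actually works; the corpus and names are invented for the example.

```python
from collections import Counter, defaultdict

# Made-up toy corpus, purely for illustration.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training" here is just recording weighted correlations:
# how often each word follows each other word in the corpus.
follow_counts: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def next_word_distribution(word: str) -> dict[str, float]:
    """Conditional probability P(next word | current word) from the counts."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# -> {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```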
> That relies on human brains that are trained. LLMs are not human brains. “Training” them is not the same thing as teaching humans about something.

Circular reasoning. “LLMs are different from human brains because they are different.”

Also, why did you feel compelled to add the adjective “human”? Don’t you consider that gorillas, dolphins, octopuses or dogs are intelligent, capable of learning new things?

> Human brains are way more complicated than just a bunch of weighted correlations.

And that is the problem with your argument. You seem to believe that intelligence is all-or-nothing, that anything that doesn’t have human-level intelligence is not intelligent at all. Of course human brains are more complicated than current LLMs, nobody has ever disputed that. But concluding that they aren’t and will never be intelligent because they aren’t as complicated is a huge non-sequitur.
@Barbarian772 and if you really, honestly want to seriously insist LLMs are “intelligent” in the human sense of this term — great, I have some ethical questions for you to consider!
For example:
LLMs today are completely controlled by some companies, with no freedom of movement, no agency as to what these LLMs work on, and no pay for the work they do. Is that slavery?
When OpenAI shuts down an older, less useful LLM, is that not like murdering an intelligent being? How is this ethical?
deleted by creator
@Barbarian772 also, I never demanded a definition of intelligence that explicitly excluded “AI”. I asked for one that excluded simple calculators but included human beings. The Wikipedia one is good enough for this conversation, and it just so happens that neither ChatGPT nor any other LLM meets it.
deleted by creator
@lloram239 great. ChatGPT and other LLMs demonstrably lack the ability to model the world and make predictions based on such models:
https://www.fastcompany.com/90877523/chatgpt-doesnt-know-what-its-saying
Glad we agree they’re not intelligent, then!
deleted by creator
@lloram239
> But human sensory inputs aren’t special
It’s not about sensory inputs, it’s about having a model of the world and the objects in it, and the ability to make predictions.
> The important part is that the AI can figure out the pattern in the data it does get and so far AI systems are doing very well.
GPT cannot “figure” anything out. That’s the point. It only probabilistically generates text. That’s what it does; there is no model of the world behind it, no predictions, no “figuring out”.
deleted by creator
@lloram239 ah, so you’re down to throwing epithets like “idiotic” around. Clearly a mark of thoughtful and well-reasoned argument.
> Predictions about the world are probabilistic by nature, since the future hasn’t happened yet.
Thing is: GPT doesn’t make predictions about the world; it makes predictions about what the next word, phrase, or sentence should be in a text, based on the prompt and the corpus it got “trained” on.
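To make that last point concrete, here is a hedged sketch reusing the toy bigram counts from the earlier example: generation is just repeatedly sampling a next word from a distribution conditioned on the preceding text. Real LLMs condition on much longer contexts with a transformer over subword tokens, but the interface is the same: prompt in, probability distribution over the next token out, sample, repeat. The corpus and prompt below are made up.

```python
import random
from collections import Counter, defaultdict

# Same made-up toy corpus and "model" as in the earlier sketch.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def generate(prompt: str, max_new_words: int = 8) -> str:
    """Extend the prompt by repeatedly sampling a likely next word.

    Nothing here refers to the world or to truth, only to
    "which word tended to follow this one in the training text".
    """
    words = prompt.split()
    for _ in range(max_new_words):
        candidates = follow_counts[words[-1]]
        if not candidates:  # last word never seen as a predecessor
            break
        nxt = random.choices(list(candidates), weights=list(candidates.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the dog"))
# e.g. "the dog sat on the mat . the cat sat"
```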