03-07-2023, 08:01 AM   #5
Anaraxes
 
Re: How close are we to NAIs?

Quote:
Originally Posted by Fred Brackin
ChatGPT... doesn't understand at all whether it's done something the right way or the wrong way.
This is a key point for anything that's supposed to be a self-motivated intelligence. ChatGPT exhibits zero understanding of the content of the text it generates. It's great at generating that text and at parsing user questions, but there's nothing higher-level going on. It can't tell whether it just lifted the right or the wrong answer from a website, whether answer #2 contradicts answer #3 that it just gave, and so on. There's no stateful model of what it's talking about, no expectations, no internal motivations. So, nice as it is as a natural language processor, it doesn't exhibit (to me) any "intelligence" behind the words.

Kinda gives me the same feeling as a mentalist / con man doing a cold reading. The output looks good, but there's not really any "there" there. It's about selling the output: eliciting the actual thoughts from the mark, playing the odds, and rephrasing that information back as vaguely as you can get away with in the next response, rather than actually generating the answers.

I don't have THS, but the definition of NAI on the GURPS Wiki is "Programs that follow basic paths, ranging from smart tools to videogame NPCs". So by that definition, we've had NAIs for a while. And whatever ChatGPT does, it isn't one of those, and in fact it's less of an N-AI than, say, those videogame characters, which at least have some minimal self-motivation (even if it's just "patrol this path and alert to anything you see" -- something like the toy sketch below). I don't think ChatGPT does anything at all if you don't keep prodding it.
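
For the curious, here's roughly what I mean by a program that "follows basic paths". This is just a toy Python sketch of my own (the class and names are made up, not from THS, the wiki, or any real game engine): a fixed patrol loop plus a canned "alert" reaction, with no understanding behind either.

Code:
# Toy sketch of scripted NPC behavior: walk a fixed patrol route,
# switch to an alert reaction if an intruder is spotted.
# Entirely illustrative; names and structure are invented for this post.

class PatrolNPC:
    def __init__(self, waypoints):
        self.waypoints = waypoints   # the fixed path the NPC follows
        self.index = 0               # current position along that path
        self.state = "patrol"        # "patrol" or "alert"

    def tick(self, sees_intruder=False):
        """Advance one step of the behavior loop."""
        if sees_intruder:
            self.state = "alert"
            return f"Alert raised at {self.waypoints[self.index]}!"
        self.state = "patrol"
        self.index = (self.index + 1) % len(self.waypoints)
        return f"Patrolling to {self.waypoints[self.index]}"


npc = PatrolNPC(["gate", "courtyard", "wall"])
print(npc.tick())                    # Patrolling to courtyard
print(npc.tick(sees_intruder=True))  # Alert raised at courtyard!

Dumb as it is, that loop at least acts on its own every tick without being prompted, which is more "self-motivation" than a chatbot that sits idle until you type at it.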