03-17-2018, 01:11 PM | #61 | |
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
__________________
Bill Stoddard I don't think we're in Oz any more. |
|
03-17-2018, 01:16 PM | #62 | |
Join Date: Feb 2005
Location: Berkeley, CA
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
03-17-2018, 03:32 PM | #63 | |
Banned
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Multi-tasking is basically impossible, but that may not be as true for an AI as it is for a human. You could possibly have linked mini-minds (compartmentalized mind in GURPS terms) that can split and coordinate tasks. |
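The linked mini-minds idea can be sketched as a pool of independent workers that split a task list and report back to a coordinator. A minimal sketch in Python; the `mini_mind` function and the task names are hypothetical placeholders for illustration, not anything from the GURPS rules:

```python
from concurrent.futures import ThreadPoolExecutor

def mini_mind(name, task):
    """Each 'mini-mind' works on one sub-task independently."""
    return f"{name} finished {task}"

tasks = ["navigation", "conversation", "threat-watch"]

# Split the workload across compartmentalized sub-minds, then
# coordinate by gathering their results back in one place.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    names = [f"mind-{i}" for i in range(len(tasks))]
    results = list(pool.map(mini_mind, names, tasks))

for r in results:
    print(r)
```

The coordination step here is trivial (just collecting results), but it captures the compartmentalized-mind shape: several sub-minds working in parallel, one place where their outputs are merged.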
03-17-2018, 04:23 PM | #64 | |
Join Date: Jul 2008
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
__________________
I don't know any 3e, so there is no chance that I am talking about 3e rules by accident. |
03-17-2018, 05:38 PM | #65 | ||
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
We could even use a speaker as the only interface. We do it every day with phones, and I never wonder if I am speaking with a person through the phone, or with a simulation outputting through the phone speaker. Quote:
So I can say: we certainly can study qualitative phenomena. Yet brain physiology and function are currently approached with the scientific method, by means of mathematical relations between measurements. We need further steps in epistemology before we will be able to relate brain and mind directly, because - following the argument of John Lucas - our mind is capable of actions that cannot be reduced (in the technical sense) to computation.

I agree that we could discover, in the future, some physical aspect of brain function that cannot be reduced to computation. I am thinking of intra-neuron processing, relations between quantum mechanics and chirality, and so on. Still, we can at least hypothesize that it is possible to create a "good enough" simulation of a sentient mind by means of computation. We don't really know; we have to choose in our sci-fi speculations.

The idea of a perfect simulation of a mind has a sound logic, and the whole argument is very appealing: an artificial mind behaving as a true person, with no sentient being knowing whether it is alive or just seems so. By the way, this is the main theme of an old GURPS adventure: "Loving the Deads".

Last edited by Ji ji; 03-17-2018 at 05:56 PM.
03-17-2018, 06:05 PM | #66 | |
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
My basic view is that Searle's Chinese room argument, and the Turing test, are invalid as statements of what it is to be "human." They seem to focus entirely on language, and I don't think that typed or written or even spoken inputs and outputs are sufficient for humanity or for sapience.

Consider the machine in Turing's test. Can I say to it, "How did you like the chicken masala we had for dinner?" or "Doesn't that singer have a great voice?" or "Beautiful weather, isn't it?" Can it comment on the color of my eyes, or on whether the room is too hot or too cold? A human being, if not disabled in specific ways, would be able to perceive and think about a vast range of things of that sort, and discuss them, without having been prepared to do so in advance; the discussion would be based on their awareness of the world and of their existence in it, their embodiment, and their language would be a way of expressing and focusing that awareness. For example, C can say to me, "He wants your attention," and I can look around and see that our cat has come up to where I'm sitting and flopped onto his back on the floor next to me - and a purely symbol-manipulating engine, even if it could notionally pass the Turing test or do the Searle trick, could not do such things. (Setting aside the idea that its inputs come from a simulated human body in a simulated physical world.)

In other words, as a human being, and more broadly a sapient one, I have intentionality: I can direct my awareness to physical entities and use language to refer to them. My language is not just a self-contained system of symbols: it contains words that refer to the world, such as "I," "you," "they," "here," "there," "now," "then," and "thus," and such words can guide another person's attention to a common feature of the environment. This is a big part of the primary use of language, which is face-to-face communication. Exchanging messages over the Internet, or by teletype, as Turing imagined, is a specialized secondary use of language.

And I can tell that another person, or a nonhuman animal without speech, is conscious by seeing them move their body in a way that directs their attention to stimuli that provide interesting information. That also points at the kind of things I would take as evidence that a machine was conscious.

I hope this is some help.
__________________
Bill Stoddard I don't think we're in Oz any more. |
03-17-2018, 06:38 PM | #67 | |
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
It also outputs electrical signals to the muscles and such that don't take a binary on/off form. In fact no neurons do binary on/off; they do pulses with different frequencies. Of course you can model this mathematically, but then you can model the stresses and flows in the Earth's crust mathematically, and that doesn't mean that the Earth is a huge computer; it means that mathematics is science's characteristic tool. I also think that talking about the relationship between the mind and the brain is a misleading phrasing. It's like talking about the relationship between the legs and running, as if running were a separate entity that somehow entered into an interaction with the legs.
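The point about pulse frequency can be illustrated with a toy rate-coded neuron: its output is a firing rate that varies continuously with stimulus strength, rather than a binary on/off value. The threshold and ceiling numbers below are arbitrary assumptions for illustration, not physiological data:

```python
def firing_rate_hz(stimulus, threshold=0.2, max_rate=100.0):
    """Toy rate-coded neuron: the output is a spike *frequency*,
    not a binary on/off value. Below threshold it stays silent;
    above it, the rate grows linearly with stimulus strength
    up to a ceiling."""
    if stimulus <= threshold:
        return 0.0
    return min(max_rate, max_rate * (stimulus - threshold) / (1.0 - threshold))

# Graded responses, not 0-or-1 outputs:
for s in (0.1, 0.3, 0.6, 1.0):
    print(s, firing_rate_hz(s))
```

A digital model can of course represent such a rate as a number, which is exactly the distinction being drawn: the mathematics models the thing without the thing itself being binary.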
__________________
Bill Stoddard I don't think we're in Oz any more. |
03-17-2018, 06:38 PM | #68 | |
Join Date: Feb 2005
Location: Berkeley, CA
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
03-17-2018, 06:45 PM | #69 | |
Join Date: Jul 2008
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
An 'AI' that had to be given advance preparation for the specific question you were going to ask from that list would, of course, not be performing at the level you expect of a human. But having been given advance preparation for all of them, and using the appropriate parts as needed... that would only be fair. (You also seem to be drawing a bead on the physical senses, which seems a bit odd, since those aren't all that hard for computers. They're not as conveniently discretized as language, so they make less convenient fodder for certain types of philosophy, and some of the processing hasn't been solved yet, but a modern computer with the right peripherals could sense the temperature of the room or the color of your eyes just fine.)
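As a sketch of how mundane such sensing is in practice, here is a toy color classifier: given an (r, g, b) sample, say one pixel from a camera pointed at someone's eyes, it picks the nearest of a few reference colors. The reference values are rough assumptions for illustration, not calibrated data:

```python
def nearest_color_name(rgb):
    """Map a sampled (r, g, b) pixel -- e.g. from a camera aimed at
    someone's eyes -- to the closest named reference color by
    squared Euclidean distance in RGB space."""
    reference = {
        "brown": (101, 67, 33),
        "blue": (70, 130, 180),
        "green": (85, 130, 90),
        "gray": (128, 128, 128),
    }

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(reference, key=lambda name: dist2(rgb, reference[name]))

print(nearest_color_name((80, 125, 170)))  # a blue-ish sample
```

The hard, partly unsolved work is upstream (finding the eyes in the camera frame); once a pixel is in hand, "what color is this?" is a few lines of arithmetic.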
__________________
I don't know any 3e, so there is no chance that I am talking about 3e rules by accident. |
03-17-2018, 07:21 PM | #70 | |
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
__________________
Bill Stoddard I don't think we're in Oz any more. |