03-17-2018, 08:02 PM | #71
Join Date: Jul 2008
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
...Of course, the more troubling thing is that you're making the difference between sapience and non-sapience come down to a set of working eyes.
__________________
I don't know any 3e, so there is no chance that I am talking about 3e rules by accident.
03-17-2018, 08:07 PM | #72
Join Date: Feb 2005
Location: Berkeley, CA
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
It is, of course, legitimate to say that conversing with someone over a teletype doesn't give enough information to decide whether they're sentient, though we don't act that way (I've never communicated with you by any medium that's meaningfully more capable than a teletype, and I'm reasonably confident you're sentient).
03-17-2018, 08:50 PM | #73
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
Unless that's just synecdoche, it's far too narrow. What I'm saying you need is embodiment, physical location, and sense perception. That's what gives you the ability to be conscious OF something. A blind and deaf person might still have that; I'm not saying that Helen Keller wasn't conscious. But I see consciousness as a relation between a physical entity and other physical entities.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-18-2018, 12:16 AM | #74
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
The most that can ever be scientifically said on such matters is: "As far as we know, it can't be done." It cannot be shown to be impossible. Yeah, within the limits of our knowledge, Special and General Relativity are solid. Will that hold in the future? Nobody knows, and assigning a probability to its holding is meaningless. It's either true, or not.
Quote:
But there's no reason to assume that improvement of spaceships has stopped at the current level, either, and there are some pretty good reasons to assume they will continue to improve over time.
Quote:
Quote:
That's not an argument. If he were arguing for a specific timespan in the near future, he might have a case, but on an open-ended timescale it's meaningless.
Quote:
Quote:
(I don't believe in the Computeraggedon and never have. Vinge's Singularity is mostly hype.)
Quote:
It's like somebody 100 years ago saying that if you could transplant hearts and have them beat, or install implants to let deaf people hear, of course you could do something simple like curing a chest cold... except it turns out that the latter is harder than either of the former, as well as quite different in nature. But that 'looks' like a reasonable comparison, from the POV of 100 years ago. Does strong AI imply 'uploading'? Maybe, but maybe not. We don't know enough to say. Are there limits to cyborgization that we have no idea about? No way to know until we get there. Some things people expect to be easy turn out to be hard, and some things people (including experts) expect to be difficult turn out to be easy. And sometimes the experts turn out to be right. "It's difficult to make predictions, especially about the future."
Quote:
My favorite example is the construction of the irrigation systems that made mass agriculture possible in the American Southwest. From a strict economic POV, it never made sense. It happened anyway, in large part for non-rational emotional reasons.
Quote:
__________________
HMS Overflow - for conversations off topic here.
03-18-2018, 12:21 AM | #75
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
The trouble with most uses of SAI in SF is that it tends to be 'human in a box' for story-telling purposes, which really doesn't make sense when examined closely. Nor does a society of equally-interacting, equal-status humans and AIs. It's one of those Cool tropes that falls apart when you really think about it. I'm not sure even a 'human brain in a jar' would stay psychologically human-like on an open-ended basis. I suspect his thought processes would begin to diverge from 'baseline' humans fairly quickly.
__________________
HMS Overflow - for conversations off topic here.
03-18-2018, 09:01 AM | #76
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Quote:
It's not a real issue, since we could simulate a brain at finer levels, for example simulating every atom or every subatomic particle. It becomes just a problem of increased data size and complexity. However: every physical aspect of the brain can be measured, and the sequence of its physical states and transformations can be reduced to an algorithm.

At this level of reasoning, there is no "world" and no "vision". There are physical alterations of specialised nerve cells hit by radiation, which in turn trigger a complex set of physical transformations in the brain. All of this can be simulated by a sequencer of physical states using a set of transformation rules (for example a computer). The input is raw data in an appropriate format. It can be a simulation from another computer, or a sensor interfacing with the environment. As living beings we use a sensor (the eye); hopefully we'll soon be able to build prosthetic eyes/sensors for people with damaged eyes.

On the other hand, there are phenomena that we cannot simulate with a formal system. They fall into two categories. One is introspection, or sentience: as a qualitative phenomenon, it cannot be represented as a formal system. The second is undecidable propositions; we can manage them, but a formal system cannot.

The first is not a problem for the simulation. We never directly perceive others' experience of self; we just perceive their behaviours and we assume that they have self-awareness as we have*. In fact, if reality were just a simulation fed to our brains, we could not tell the difference. The second problem would be solved by the simulation's use: of course a computer simulating a mind would not have self-awareness or understanding of undecidable propositions, but it could behave as if it had, and we couldn't tell the difference, as above.

*Regardless of speech. When my dog behaves happy, I perceive only her behaviour, and yet I assume that she is behaving happy because she feels happy. I know that she is happy because I have first-hand experience of feelings and happiness in myself, and I assume she has the same. Even more amazing, this implies that we know the other knows our feelings because it is like us.

PS: Sometimes I think that this board has the most stimulating discussions and the most brilliant minds on the whole internet. Thanks guys!

Last edited by Ji ji; 03-18-2018 at 09:11 AM.
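[Editor's note: the "sequencer of physical states using a set of transformation rules" described above can be made concrete with a toy sketch. The Python below is purely illustrative, not anyone's actual brain-simulation proposal: it steps a one-dimensional cellular automaton under Rule 110, a fixed local update rule that is nonetheless known to be Turing-complete.]

```python
# Toy "sequencer of physical states": a 1D cellular automaton.
# Each cell's next state is a pure function of its 3-cell neighborhood,
# loosely analogous to reducing physical state transitions to an algorithm.
# Rule 110 is chosen because it is known to be Turing-complete.

RULE = 110  # rule table packed into one byte: bit k gives the output for neighborhood pattern k

def step(cells):
    """Apply the transformation rule once to the whole state (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # an integer 0..7
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single "excited" cell and run a few steps.
state = [0] * 15
state[7] = 1
for _ in range(5):
    state = step(state)
```

The point of the toy is only the mechanism the post describes: a deterministic rule table applied to discrete states can generate arbitrarily complex behaviour. Whether that mechanism suffices for minds is exactly what the rest of the thread disputes.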
03-18-2018, 10:03 AM | #77
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
If nothing exists, there can be no consciousness: a consciousness with nothing to be conscious of is a contradiction in terms. A consciousness conscious of nothing but itself is a contradiction in terms: before it could identify itself as consciousness, it had to be conscious of something. (Ayn Rand, Atlas Shrugged.)

And the reliance on formal systems is bound up with the idea that consciousness is primary. It goes back historically to David Hume's distinction between "relations of ideas" and "matters of fact," which was part of a philosophy in which there were no material objects, no cause and effect, no continuing self, no reliable memory, no other minds; there was only an ongoing stream of sensations, which constituted both all that we could know of physical reality and all that we could know of our own minds. For Hume, "relations of ideas" were purely formal. For existence-first or "outside-in" philosophers like Aristotle, logic wasn't just "relations of ideas": it was a statement of how things were in reality, to which human thought had to conform if it was to be about reality.

I think that consciousness is consciousness OF something. And I also think that consciousness is a process taking place in a physical being, and manifested in the activity of such a being—in particular, in its orientation to the things it's conscious OF. When somebody's lying on the ground after a car crash, one of the things the paramedics check is whether the person is conscious. And they don't do this by engaging in arcane philosophical discussions, or by applying some sort of advanced scientific instruments; they attempt to get the person's attention, talk to them, see if they can track what's going on around them, and so on. They check whether they're engaged with the external world.

So when philosophers talk about consciousness as some sort of mysterious inner state, they're talking about something entirely different from the ordinary meaning of "consciousness"; in fact, I think, about a philosophical chimera.

I would add that discussion of whether we "directly perceive" others' consciousness seems to involve another double standard. What does it mean to "directly perceive" something? I can see my checkbook on the desk in front of me. But that isn't some sort of causeless, meansless event; I perceive it because it reflects light, because light stimulates the cells of my retinae, because they fire nerve impulses, and so on. That doesn't seem "direct." On the other hand, saying that I don't directly perceive the checkbook doesn't entail that there is something else that I DO directly perceive, such as an image of a checkbook in some hidden realm of consciousness (what Dennett calls the "Cartesian theater"), or that I logically infer the physical checkbook from the inner representation of the checkbook; it entails that this idea of "direct perception" doesn't mean anything.

I perceive that the checkbook is on my desk, and I perceive (for example) that my cat is awake and has heard something, and both of those are results of my brain putting together information from my senses—and not doing so through a process of logical inference. Perception is a physical process, just as much as flight is, or burning; a computer model of perception no more results in anything perceiving the physical world than a computer model of fire results in flame, smoke, or ash or consumes fuel.
__________________
Bill Stoddard
I don't think we're in Oz any more.
Last edited by whswhs; 03-18-2018 at 10:15 AM.
03-18-2018, 11:45 AM | #78
Join Date: Jun 2006
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Having all of your senses be virtual doesn't seem like it disqualifies you from being conscious. And in fact it is still perception of something in the physical world - yeah, your consciousness *interprets* a physical pattern of voltages on the input pins of your chip in a decidedly different way than a human would, but it's still external physical stimulus being turned into a mental model of the world. Possibly a very wrong model, but still. Edit: And come to think of it, the Chinese room has at least one sense (the one your questions come in through) and an output mechanism that can show attention to some particular part of the data it obtains through it (it can, in a classic internet discussion tactic, come right out and *say* "I'm paying attention to this particular word in what you said and ignoring the rest"), so arguably it's just a difference of degree. Not that a supposedly qualitative philosophical difference turning out to be quantitative would be exactly unprecedented.
__________________
-- MA Lloyd
Last edited by malloyd; 03-18-2018 at 11:53 AM.
03-18-2018, 12:37 PM | #79
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
First, a terminological note: You say that consciousness "interprets" something. Consciousness is not an entity or a physical subsystem within an entity; it doesn't have agency. It's the brain, or the organism, or the computer, that interprets things.

Now, if you say that any process in which stimulation of a physical system results in its forming a model of a world is "perception," that seems to include, for example, the classic brain in a box with electrodes, or the person trapped in the Matrix, or the person victimized by Descartes' "evil genius." In other words, it includes a wide range of the classic skeptical scenarios for perception being completely misleading and untrustworthy, in effect no different from hallucination. And if those are the case, then skeptical conclusions seem to follow from them: You cannot claim to know anything, not even that there are external physical stimuli. The key here is your comment "perhaps a very wrong model." A process that can just as well create a very wrong model as a right one isn't perception.

A subordinate point is that you describe the computer as "perceiving" the voltages on various input pins. That seems exactly like saying that I "perceive" the firing of my retinal neurons. And that's a misleading way to describe it. I perceive a monitor screen with a big yellow patch, a smaller white patch, and some black shapes spread across the white patch; I interpret those shapes as words; but my retinal impulses are not what I perceive, but (part of) how I perceive. I don't say "Oh, I'm getting a frequency of X on this neuron's firing, and a frequency of Y on this one's," and so on for some vast number of neurons, and then deduce that they form a certain image; rather, I say "that's a monitor screen showing such and such." Indeed, taking it the other way, supposing that what we "perceive" is the internal electrical states of our brains, is another path to skeptical conclusions.

For comparison, if I put a thermometer into a roast game hen that I've taken out of the oven, what the thermometer is measuring is the internal temperature of the game hen; it's not measuring a voltage in a wire, though there is such a voltage and it forms part of the process that results in the measurement. And "perceive" is like "measure": It describes an action toward the physical world.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-18-2018, 12:58 PM | #80
Join Date: Feb 2005
Location: Berkeley, CA
Re: No AI/No Supercomputers: Complexity Limits?
It's not 'just as well'; it takes quite a bit of effort. But by your definition, humans are incapable of perception. Senses most certainly can be deceived, and we're working on synthesizing them (mostly to treat problems, mind you, but you can play music on a cochlear implant).