03-17-2018, 09:09 AM | #51
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
As omne vivum ex vivo, it could well be that the abiogenesis problem and the consciousness problem are related; however, we currently have no way of knowing when these two problems will be solved, or whether they can be solved at all.
03-17-2018, 09:31 AM | #52
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
But also, working aircraft were not achieved by building flapping wings.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-17-2018, 11:32 AM | #53
Join Date: Feb 2012
|
Re: No AI/No Supercomputers: Complexity Limits?
Simulating a mind with an algorithm is not limited to simulating a brain. However, a complete physical simulation of a brain could be a very good approach to mind simulation. “Could be” as in “depending on the soundness of some assumptions not yet demonstrated”.
A simulation of a brain may or may not behave as a sentient entity. The point is that we could perhaps create something which behaves like one without being sentient; this would sidestep the hard problem of creating a mind. So the simulation definitely isn't a true AI, but it functions as if it were.

The problem you are referring to is “an algorithm simulating a fire cannot burn things; it can only burn simulations of things”. That is a sound objection, but its translation to our example sits on a different level. Just as the simulation of fire cannot really burn, the simulation of a brain cannot give rise to a sentient mind. But what if we are interested only in something which behaves like a mind, and we don't care about its ontology? This is John Searle's Chinese room scenario: as long as the relation between input and output is good enough, we can have an algorithm that behaves like a sentient being. With this assumption, sentient-like AI becomes feasible through sheer complexity.
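That "only the input-output relation matters" stance can be sketched in a few lines. This is a deliberately crude toy, not any real AI technique; the `RULES` table and the `respond` helper are invented for illustration, and the whole question is whether such a table could ever be "good enough":

```python
# A Chinese-room-style responder: pure input -> output lookup,
# with nothing resembling understanding behind it.
RULES = {
    "how are you?": "Fine, thanks. And you?",
    "what is 2+2?": "4",
    "are you conscious?": "Of course I am.",
}

def respond(message: str) -> str:
    """Return a canned reply; fall back to a stalling answer."""
    return RULES.get(message.strip().lower(), "Interesting. Tell me more.")

print(respond("Are you conscious?"))  # Of course I am.
```

From the outside, a sufficiently large table is indistinguishable from comprehension; from the inside, there is nothing it is like to run it, which is exactly the distinction being argued over.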
03-17-2018, 11:59 AM | #54
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
But I really had in mind two distinct differences. There is the difference between, say, a digital model of a hummingbird and a living bird, which is that one simulates flight and the other actually flies. But there is also the difference between a living bird and an airplane, which is that though both fly by balancing lift, thrust, weight, and drag, they generate the thrust and lift by different mechanisms that give them different capabilities. Even gliding birds don't hold their wings completely rigid, or generate thrust with propellers.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-17-2018, 12:03 PM | #55
Banned
Join Date: Mar 2018
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
And I do not believe that a 'simulation of a mind' can really be functional without some treatment of these qualitative factors. Actual logical comprehension is required to understand and solve problems, and the only reason computers are even useful is that you have a conscious being to interpret what amounts to micro-lightning into meaningful results. Otherwise they don't even really do math - they just jiggle atoms. A computer is only as useful as its user. Computers can be arranged so that they automatically perform tasks which human beings would otherwise have to do, but they're not doing anything intentional or analytic; rather, the intention and analysis put them in place. All they are is a set of 'gears', and they are no more 'solving problems' or 'learning' than a clock is 'keeping time'. As I've said before, this is not to say that AI can't be built, just that it needs to be built as a thinking machine and is not merely an extension of computation. Computation can be done by beer cans hanging from strings, but you still need a designer to set them up and an end user to interpret the results.

I think that a true AI (which is not merely a brain-in-a-jar) would be a good analogue to the bird vs. airplane. A thinking machine could and probably would be as different from human minds as birds are from airplanes - still following the same basic physical and logical laws, but employing very different mechanics to get to the result. And, just like the airplane, the artificial mind may be vastly more impressive in some dimensions than the biological one (if not quite as deft at some scales). This is where the 'alien intelligence' factor comes in. An AI might be able to figure out where all the nukes on the planet are hidden in ten hours of analysis, but still not understand why you want to take a girl on a date.

You might ask it to build you a hotel and wind up with a labyrinth no one can navigate, but which does nonetheless make a certain amount of sense regarding construction costs and safety. Without the evolutionary and social conditioning (which depend so much on our mental hardware) it may be very hard to impress on it considerations we simply take for granted, even if it can do things that we never could. Assuming we had a relatively controllable AI, you might well still require a lot of human oversight just to get the results you want - which has its own dangers: the AI might make decisions that are beyond your IQ, and when you 'edit' its results you end up with a disaster you're too dull to have seen coming. Some friends of the family 'redesigned' an engine to improve performance and it caught fire. The original engineer obviously knew something they hadn't even taken into account.

Philosophical wankery aside, this poses some problems from a role-playing perspective. It's hard to play a character that's significantly smarter than you, but what about a character that's significantly smarter than you, completely amoral, and doesn't know or care what a 'family' even is? Realistic AI is like a realistic alien: nobody really knows what it would be like, but it would be rather surprising if it were like Clark Kent. If all you want AI to be is a computer-generated personal assistant (like Star Trek) then you don't need to worry about it. But if you're trying to emulate it in a believable way, it becomes extremely difficult.

Last edited by VonKatzen; 03-17-2018 at 12:21 PM.
03-17-2018, 12:20 PM | #56
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
I don't think you need a computer for that. You could just hire a modernist architect.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-17-2018, 12:36 PM | #57
Banned
Join Date: Mar 2018
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
This is not to say that useful, controllable true AI is not possible. But it's much harder than some people might think. I don't even think the Skynet style of madness is accurate - Skynet's self-defense and genocide is a pretty human response to a threat. An actual AI might do something far weirder. Like start putting fluoride in people's drinking water to make them stupid. Or kill itself because it becomes overstimulated. Or take up knitting on a global scale to calm itself. Who knows?

Pseudo-AI is another conversation worth having - but I think the more pseudo-intelligence you want a computer to have, the more you have to specialize it. I can believe a good computer killing machine, or a good computer telescope data analyst, but one that does both stretches credibility. Pseudo-AI is much more controllable and predictable than true intelligence, so it might be the option that societies go with even if they can build true thinking machines.

Or perhaps a very stupid true AI (akin to IQ 60) connected to very good pseudo-AI specialized sub-systems. It would have very little ability to understand anything, but could use its 'tools' to do things for you as long as the orders weren't too complex. An analogue would be a savant with mental disabilities: he might not be able to run your store, but he can do your accounting books like a wizard. Something like IQ 6, with a bunch of talents, skills and techniques along very particular lines. Putting that into a killer robot would give you a very lethal fighting machine (if also the worst commander ever).

Last edited by VonKatzen; 03-17-2018 at 12:52 PM.
03-17-2018, 12:50 PM | #58
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Well, you know, a lot of human projections of AI behavior amount to envisioning the AI as a human bureaucracy. It works by incredibly complicated rules, it arrives at decisions that may make no human sense, and the incentives it responds to are different from ours.
Though I suppose you could view a bureaucracy as an attempt to make an AI out of human components. In some versions it might even be considered a GOFAI that operates entirely by symbol manipulation without knowing what its symbols refer to in the physical world.
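That "symbol manipulation without knowing what the symbols refer to" picture can be caricatured as a tiny production system. Everything here is invented for illustration (a cartoon of a bureaucracy's rulebook, not a real GOFAI architecture): the machine rewrites tokens by pattern matching alone, and 'FORM-7' means nothing to it.

```python
# A toy GOFAI-style production system: rewrite symbol tuples by
# matching rules, with no grounding for any symbol.
RULES = [
    (("REQUEST", "FORM-7"), ("STAMP", "FORM-7")),
    (("STAMP", "FORM-7"), ("FILE", "FORM-7")),
    (("FILE", "FORM-7"), ("APPROVED",)),
]

def step(state: tuple) -> tuple:
    """Apply the first rule whose left-hand side matches the state."""
    for lhs, rhs in RULES:
        if state == lhs:
            return rhs
    return state  # no rule applies: the system halts

def run(state: tuple) -> tuple:
    """Rewrite until no rule fires."""
    while True:
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt

print(run(("REQUEST", "FORM-7")))  # ('APPROVED',)
```

The request gets stamped, filed, and approved purely by rule-following, which is exactly the bureaucracy-as-AI analogy: complicated rules, defensible output, and no one inside who understands what a form is.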
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-17-2018, 12:51 PM | #59
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
It would be at least quite unusual for a human being to be good at both.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-17-2018, 12:53 PM | #60
Banned
Join Date: Mar 2018
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
A similar case would be the quasi-sentient animals. I cannot outfight a tiger - I don't have the mental or physical equipment. Unless you give me options, like a gun and a helicopter. Then the tiger is screwed - whereas the tiger wouldn't do anything with a gun and a helicopter except lie on them.