03-13-2018, 10:17 PM | #11
Banned
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?
You can't create understanding unless the machinery for understanding exists. This may be possible in ways other than meat and water, but it's not going to be a bunch of microprocessors. That's not to say that we wouldn't design an AI that could interface much more easily with digital computers, or design computers to better interface with our own intelligence. But that's not the same thing as an intelligent computer. As Searle says, the brain is a computer, but it's not just a computer. Computers are used by intelligence; they do not produce it.

How this affects my predictions for my TL9 world is that while AI may be 100% possible, the way we are currently going at it is (I believe) simply wrong from an engineering point of view, and if they keep going at it this way they're just spinning their wheels.

There are other issues with AI aside from this. Super-intelligent AI (true AI) may be impossible because super-intelligence is not possible. Focus and speed are often at cross purposes, and another axis is functionality. Though exceeding human capacity may be possible, it may not be by much. A super-fast intelligent AI still needs to wait for things to happen, and it still needs to maintain focus and give commands. The actual upper limit of intelligence may be below GURPS IQ 20, no matter how well designed it is.

Futurism is 80% hokum and 20% speculation. It is entirely possible that the actual far future of humanity finds us living on Earth and in quasi-immortal cyborg bodies, and that's it. Beyond that I see as little evidence for the Singularity as I do for the Rapture. And given that Science! has become something of the official religion of modern states, I tend to believe it comes from the same psychological impulse, with just as little actual evidential support. Certainly most of its advocates understand physics and biology about as well as most other believers understand Thomistic theology.

Last edited by VonKatzen; 03-13-2018 at 10:50 PM.
03-14-2018, 12:03 AM | #12
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?
A more basic limit is signal propagation time. The further apart the physical components of the machine are, the farther a signal has to travel between component A and component B, and the slower the machine gets. The limiting factor (as far as we know) is the speed of light. An extra microsecond for a signal to cover the distance might not sound like much, and it isn't, for a single operation. If the computer is doing a billion operations, though, that adds over sixteen minutes to the activity. A lot of processor design is about keeping down the distances the signals must travel. A computer the size of the Earth would have components on one side about 0.04 light-seconds from components on the other, so the fastest possible signal exchange between those points would require 0.04 seconds. For activities involving the whole machine, that implies fairly slow processing. I could imagine that such a computer might be very, very 'smart', but not very fast.
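The arithmetic behind those figures is easy to check. A quick sketch, assuming an idealized machine where every signal crosses the full diameter at light speed (numbers rounded):

```python
# Back-of-the-envelope check of the light-lag figures above, assuming an
# idealized planet-sized computer with signals at exactly light speed.

C = 299_792_458                # speed of light, m/s (real interconnects are slower)
EARTH_DIAMETER_M = 12_742_000  # mean Earth diameter, metres

one_way = EARTH_DIAMETER_M / C     # cross-machine signal delay
print(f"one-way delay: {one_way:.3f} s")

# A serial chain of a billion operations, each delayed by one microsecond:
extra_s = 1_000_000_000 * 1e-6
print(f"added time: {extra_s / 60:.1f} minutes")
```

This reproduces both numbers in the post: about 0.04 seconds across the machine, and roughly sixteen and a half minutes of accumulated microsecond delays over a billion serial operations.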
__________________
HMS Overflow - For conversations off topic here.
03-14-2018, 02:27 AM | #13
Join Date: Feb 2005
Location: Berkeley, CA
Re: No AI/No Supercomputers: Complexity Limits?
That's because one of the outputs of the system is an explosion. There are some outputs of a person that need a body, but it's debatable whether they're essential, and even if they are, it just means your AI needs to be embodied.

You base this assertion on what? Until you can usefully classify what consciousness is, you can't make reliable assertions about what sorts of systems can generate it.
03-14-2018, 03:42 AM | #14
Join Date: Oct 2008
Re: No AI/No Supercomputers: Complexity Limits?
We already have a lot of ANI (artificial narrow intelligence) around us today. It is in fact so common that we do not think of much of it as such. Only when a system is publicized as being much better than humans at something (like beating them at chess) do we take note, and then only briefly (see https://en.wikipedia.org/wiki/AI_effect ). You likely use several such systems every day, and worldwide they are used in everything from image recognition to airline pricing to warehouse optimization and thousands of other things. We already live in the age of ANI; we just do not call a lot of it AI because "it is just technology and AI is magic".
We have things like instant access to an amount of information that was unheard of just 30 years ago. We also have amazing systems that automate, learn and improve in a way that was inconceivable to most people even 10 years ago.

As an example, one of our customers is a fairly small manufacturing company making steel parts. They have been our customer since 1994, and I can describe some of the technological changes that have happened in that one company during those years.

One of the things they do is cut shapes from steel plate according to drawings. In 1994 they were in the process of transitioning from traditional cutting, where someone steers the plasma jet by hand, to a robot doing it. Production planning had moved from cut paper pieces to a CAD program about a year earlier.

One of the problems is that there are hundreds of grades and types of steel, and they come in widely varying thicknesses. They all have different thermal properties, so you have to cut them differently and plan different safety margins between pieces depending on the material. Placing the oddly shaped pieces on the plate also requires rotating them and choosing the right piece of plate from the stock of half-cut plates.

They had a good layout planner. When he left the company four years later, the scrap percentage went up about 40%, then slowly came back down over the following years as the new planners learned the job; about five years later they were back at roughly the same scrap rate as before. So it took a human engineer about five years to learn the trade.

In 2012 they installed an automated planning system. It started with about a 20% higher scrap rate than the human planners, but it could do the planning much faster and cost about as much per year as one engineer. In less than three years it had reached the same efficiency as the humans, and in 2017, about five years after installation, its scrap rate was about 35% lower than the best humans could manage.
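To give a flavor of the kind of problem such a planner solves, here is a deliberately toy sketch of rectangle nesting with material-dependent margins. This is not the company's actual system; every name and number below is an illustrative assumption, and real nesting works on arbitrary polygons:

```python
# Toy "shelf" nesting heuristic: place rectangles on a steel plate with a
# piece-to-piece safety margin that depends on the material's thermal
# properties. All figures are made up for illustration.

def margin_for(material: str, thickness_mm: float) -> float:
    """Safety margin between pieces, in mm (illustrative figures)."""
    base = {"mild_steel": 3.0, "stainless": 5.0}.get(material, 6.0)
    return base + 0.2 * thickness_mm

def shelf_nest(pieces, plate_w, plate_h, material, thickness_mm):
    """Place (w, h) rectangles in rows ("shelves"), tallest first.

    Returns (placements, scrap_fraction); placements are (x, y, w, h).
    """
    m = margin_for(material, thickness_mm)
    placements, x, y, row_h = [], m, m, 0.0
    for w, h in sorted(pieces, key=lambda p: p[1], reverse=True):
        if x + w + m > plate_w:                  # row full: start a new shelf
            x, y, row_h = m, y + row_h + m, 0.0
        if y + h + m > plate_h:                  # plate full: skip this piece
            continue
        placements.append((x, y, w, h))
        x += w + m
        row_h = max(row_h, h)
    used = sum(w * h for _, _, w, h in placements)
    return placements, 1.0 - used / (plate_w * plate_h)
```

A real nester also rotates pieces, handles arbitrary shapes, and draws from a stock of half-cut plates; those complications are where the years of learning, human or machine, actually go.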
That is artificial narrow intelligence in action. Only it is not called AI by the company or by the manufacturer; instead it is sold as "self-optimizing". And yet it learned both faster and better than a human.
Computers in 1950 were slow to make; a single one required a lot of expensive equipment and time. Today they still require expensive equipment to make, but that expensive equipment produces huge numbers of computers. That is engineering.
Basically, progress is like throwing a lot of things at a wall. Eventually some will stick and we have progress, but most things will just fall down. Given the huge number of throws we make today, though, progress as a whole is still high.
Note: because of length I snipped a lot of things about your game world assumptions. Those are fine; after all, the GM sets the game world parameters. My discussion is not about that, just about the claim that that is how the real world works. :)
03-14-2018, 04:25 AM | #15
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
This isn't really any different from saying that ecology is a set of observations and theories that exists in human minds (and media such as books and Web pages), but that we can also talk about "the ecology of Greenland," meaning not what people think about the web of life in Greenland, or even what people in Greenland think about the web of life, but the actual web of life that exists in Greenland. Or saying that physics is a construct in the human mind, but physical reality is not.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-14-2018, 04:56 AM | #16
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
I think maybe "AI" and "supercomputer" are not the right terms. We have "artificial intelligence" in the real world; for example, my desktop has a program that uses AI to decide which incoming e-mail should be classified as junk. But it's an idiom; it doesn't mean the same thing as "intelligence" that is "artificial." Similarly, there have been a lot of "supercomputers," which are high-end computers that can be used to address difficult scientific problems, from the Cray-1 to the Blue Gene series. But they're not computers that transcend the concept of computation in some way. In fact the Cray-1's 80 MHz clock speed is far below that of current desktop models, and it had much less RAM, so its raw computational power was well below theirs (though its programming was different).
A word means what it is used to refer to. As far as Complexity is concerned, what do you take to be a measure of Complexity? People who talk about actual computers sometimes seem to be interested in how many "flops" they're capable of; and there's been a continuing upward trend there. Of course that's not the same as saying that we can write code that uses those flops for anything productive. But would you take "Complexity" as referring to the log of flops to some base, with some number of flops as the zero point of the scale? Or do you have something else in mind?
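To make that question concrete, here is what such a scale would look like if one did take Complexity as the log of flops. The zero point, the base-10 step, and the flops figures for the example machines are all my illustrative assumptions, not anything from GURPS:

```python
# Hypothetical "Complexity = log of flops" scale, as asked in the post.
# Zero point and example flops figures are illustrative assumptions.

import math

ZERO_POINT_FLOPS = 1e3   # assumed: Complexity 0 = a kiloflop machine

def complexity(flops: float) -> float:
    """Log-of-flops rating relative to the assumed zero point."""
    return math.log10(flops / ZERO_POINT_FLOPS)

for name, flops in [
    ("Cray-1 (peak)", 160e6),            # ~160 Mflops
    ("2018 desktop (rough)", 100e9),     # order of magnitude
    ("2018 top supercomputer", 100e15),  # order of magnitude
]:
    print(f"{name}: Complexity {complexity(flops):.1f}")
```

One design consequence of a log scale like this: each additional point of Complexity means ten times the raw computation, so the gap between a desktop and a top supercomputer is only a handful of points.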
__________________
Bill Stoddard
I don't think we're in Oz any more.

Last edited by whswhs; 03-14-2018 at 05:02 AM.
03-14-2018, 06:51 AM | #17
Join Date: Jun 2006
Re: No AI/No Supercomputers: Complexity Limits?
But it's most interesting to AI debates as an example of something that may or may not be important to building an actual AI: hardware that works *differently* than a conventional computer. I have a strong suspicion that information processing technology may be reaching the point where it's about to diversify into a bunch of things that are not computers in the sense we've used the word until now, but that do some of the same kinds of stuff differently. This isn't unusual; to use your example technology, consider the different ways of cutting stuff. Digital computers started out as stone knives and have been refined into really, really good stone knives, but that's still what they are. Quantum computers might count as saw blades, or maybe tool steel. But what we really want is somebody to invent plasma torches (or sonics, or hot wire plastic cutters, or acids....)
__________________
-- MA Lloyd |
03-14-2018, 08:00 AM | #18
Join Date: Feb 2016
Re: No AI/No Supercomputers: Complexity Limits?
It is a good analogy, though I think that digital computers are bronze knives rather than stone knives (those would be mechanical computers). In a century, our descendants will probably think of our computers much the way we think of cars from the 1910s: primitive, quaint, slow, and lacking any real comfort. If digital intelligence is a possibility, it will probably use a radically different architecture than contemporary computing in order to mimic biological complexity (for example, optical computers using rotating polarity gates for ten-state computing, or something similarly different). Or we might just ditch digital entirely and convert to biological computers that use protein coding and viral transcription for computation. We really do not know.
03-14-2018, 10:54 AM | #19
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?
Intelligence defined as learning and doing is certainly present in today's computers; it was present in a very limited form in ENIAC. But here's where language becomes a trap. The phrase 'artificial intelligence' is a weasel phrase now. It's been redefined by the so-called professionals so many times that it can be used to mean almost anything. But what it meant originally was artificial people, or at least artificial conscious beings embodied as hardware/software: beings that could originate new thoughts and ideas, have desires or intentions of their own, etc. That's what the professionals originally meant by it. It meant HAL 9000, Colossus, Skynet, R2-D2, V.I.N.CENT, R. Daneel Olivaw. That's what Minsky et al meant by 'artificial intelligence', originally. Over time, as it became clear not only that the 'experts' had no idea what consciousness even was, but that there was no immediate prospect of it appearing in computers, the term 'artificial intelligence' was repeatedly redefined, until now it's mostly used to talk about 'machine learning' and improving sorting and comparison algorithms, often with some empty hype for the rubes along the way (like the infamous Watson on Jeopardy).
__________________
HMS Overflow - For conversations off topic here.
03-14-2018, 11:11 AM | #20
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?
Some of the refinements have been fairly sharp, admittedly, but nothing basically new.
An example of a genuinely new technology is the telegraph. Or radio. Or the airplane. Yes, they use what came before, but they do something fundamentally new and different with it. We had a wild explosion of science and technology from the late 1600s to the early 1900s. It was historically exceptional, though comparable periods have happened before.
Much of that improvement, too, is the result of a somewhat improbable concentration of resources. We kept Moore's Law going for decades in large part by pouring ever-vaster resources into production processes, newer and ever-more-expensive facilities, finer and finer refinements to squeeze out the potential of silicon. One reason some of the other substrates that used to be talked about a lot either never took off or remained niche is that we squeezed silicon so hard. It's a little like the internal combustion engine. Almost everything about it is the better part of a century old, but we've refined and refined and refined it to the point where it's far more efficient and effective than most engineers would have considered likely in 1920. But it's not new.
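For scale, the compounding behind those decades of refinement is just a fixed doubling period applied repeatedly. The 1971 starting point (Intel 4004, roughly 2,300 transistors) and the two-year doubling are the usual textbook figures, used here purely for illustration:

```python
# Moore's Law as compound growth: a fixed doubling period applied over
# decades. Starting figures are the usual textbook ones (Intel 4004, 1971).

def transistors(year, start_year=1971, start_count=2300, doubling_years=2.0):
    """Projected transistor count per chip under steady doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# 47 years of doubling lands in the tens of billions, the right order of
# magnitude for the largest 2018 chips.
print(f"{transistors(2018):.2e}")
```

The point being argued above stands either way: that curve describes relentless refinement of one substrate, not the appearance of a fundamentally new technology.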
__________________
HMS Overflow - For conversations off topic here.