03-18-2018, 11:57 PM | #91
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
But we should keep in mind that some humans think very differently from the majority of the species, too. Sometimes that is useful or beneficial, but often it's simply perceived as irrational madness. From which we can draw a disturbing but possibly accurate implication: that a really mentally alien alien might come across, to us, as a race of lunatics. They might perceive us similarly, of course, which wouldn't necessarily help.
Quote:
For a contrast to Skynet, consider its inspiration: Colossus, from the Forbin Project novels and movie. In both versions, an AI in control of the American and Soviet nuclear arsenals takes dictatorial control of the world. Colossus, though, isn't Skynet; it's actually sort of benevolent. It's acting in what it logically perceives to be the long-term interest of the human race as a whole, in icily cold, pure-rationalist terms. Which means it sometimes does things that humans would consider profoundly immoral and evil, because they are the most efficient way to achieve its goal. It doesn't bother Colossus a bit to blow up a million people if that's the best way to send a message to the rest of the world, or to arrange rapes as a psychology experiment, or any number of likewise ruthless actions. But Colossus is still a very good SFnal example of why you don't want free-willed machines.

Another is Jack Williamson's 'The Humanoids', a race of robots that are definitely and solidly programmed to Do Good for humans...but who have perspective problems with the definition of 'Good'. In that case, Williamson was consciously writing from the POV of 'debunking' the Three Laws of Robotics.
__________________
HMS Overflow - For conversations off topic here.
03-19-2018, 12:06 AM | #92
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Here again, the primary responsibility for not pulling in front of that truck, or driving off that cliff, is the driver's, not the car's. It would probably make better sense to design the car to make sure that the driver is aware of the threat.
__________________
HMS Overflow - For conversations off topic here.
03-19-2018, 12:21 AM | #93
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
But I wouldn't want one that would refuse to take me to, say, a conservative political event or a liquor store.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-19-2018, 12:27 AM | #94
Join Date: Dec 2006
Location: Meifumado
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
The responsibility is shared among quite a few stakeholders: society, its legal system, car manufacturers upholding their semi-autonomous cars' reputation for not being involved in accidents, truck drivers and other road users who don't want to be caught up in accidents caused by a particular driver's moment of suicidal ideation, a litigation system that allows hefty monetary punishments against manufacturers deemed responsible for not preventing preventable accidents, and so on.

The world is already full of little impediments to a normal person enacting any particular act of free will, for the sake of their own and others' safety or interests. I'm sure your light switch, mentioned above, is connected to a circuit breaker, for example.
__________________
Collaborative Settings:
Cyberpunk: Duopoly Nation
Space Opera: Behind the King's Eclipse
And heaps of forum collabs, 30+ and counting!
03-19-2018, 12:39 AM | #95
Banned
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
This slides into entire realms of philosophies and values. Some people don't care whether something affects other people, and the government doesn't either. I would tend to take the view that almost everything relating to laws, customs, or values is in fact a subjective preference and exists only due to social custom. Almost any of them could imaginably be altered or abolished entirely, not merely by individual abstention but by statute as well.

It's quite easy to imagine a proactively Social Darwinist hyper-technical state where armed robbery is legal, to keep down population numbers and cull undesirable weaklings who grow fat and useless in their life of ease, etc. In such a society, building or vehicle safety may be entirely a matter of 'buyer beware': if you're too stupid and lazy to make sure something is safe before you use it, good riddance! A powerful AI may take just this view, too! Rather than trying to rule the world, it destroys all the support systems and customs that keep the 'unfit herd animals' around!

And there are infinite variations on this. There's really no telling just how any society or mind might function, because it's all just a matter of perspective and judgment, not objective facts or laws of nature. Anthony de Jasay once pointed this out: any fact used as an argument for a political policy could easily be used to support the opposite. If you don't agree on premises, you won't get anywhere on conclusions.

Last edited by VonKatzen; 03-19-2018 at 12:48 AM.
03-19-2018, 12:46 AM | #96
Join Date: Feb 2005
Location: Berkeley, CA
|
Re: No AI/No Supercomputers: Complexity Limits?
"I don't want to do X, so I will buy a system that prevents me from doing X" is in fact taking responsibility.
03-19-2018, 11:58 AM | #97
Join Date: Feb 2012
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Brain characteristics are measurable, so they can be investigated by the scientific method. Consciousness is not measurable, so it cannot be investigated by the scientific method, and other tools are required.

A brain characteristic can be translated into a formal system. At least one mental capacity can't be translated into a formal system, since our mind can decide propositions that would pose a halting problem for a formal system.

Again, I want to summarize my argument: an algorithm cannot cause a mind, just as it cannot burn things, yet it can simulate the behaviour by brute force.
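For readers unfamiliar with the halting-problem argument being leaned on here, it is usually shown by Turing's diagonal construction. A minimal sketch in Python, where the names `halts` and `paradox` are illustrative only and no total `halts` function can actually be written:

```python
def halts(program, data):
    """Hypothetical halting oracle: would return True iff program(data) halts.

    Turing's diagonal argument shows no such total function can exist;
    this stand-in only makes the contradiction below explicit.
    """
    raise NotImplementedError("no algorithm can decide halting for all inputs")

def paradox(program):
    """Halts exactly when the oracle says it doesn't -- the contradiction."""
    if halts(program, program):
        while True:          # oracle said we halt, so loop forever
            pass
    return "halted"          # oracle said we loop, so halt immediately

# If `halts` were real, paradox(paradox) would halt iff it does not halt,
# so no formal system can contain a correct, total `halts`.
```

Whether a human mind genuinely escapes this limit, as the post claims (in the spirit of Lucas/Penrose-style arguments), is a contested philosophical position rather than a settled result.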
03-19-2018, 12:26 PM | #98
Join Date: Feb 2005
Location: Berkeley, CA
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
03-19-2018, 12:52 PM | #99
Join Date: Feb 2012
|
Re: No AI/No Supercomputers: Complexity Limits?
Measurement defines the scientific method, and is itself defined by the tools of measurement.
The Glasgow Coma Scale measures behaviours. In fact, a computer program could legitimately score as 'conscious' if tested with the scale.
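The point is just how mechanical the scoring is: each component is a number assigned to an observed behaviour, nothing more. A minimal sketch, using the standard component ranges of the Glasgow Coma Scale (the function name and the comment about programs are illustrative, not clinical guidance):

```python
def glasgow_coma_score(eye, verbal, motor):
    """Glasgow Coma Scale total: eye response 1-4, verbal 1-5, motor 1-6.

    The total runs from 3 (deep unconsciousness) to 15 (fully responsive).
    Each component scores observed behaviour only, not inner experience.
    """
    if not 1 <= eye <= 4:
        raise ValueError("eye response must be 1-4")
    if not 1 <= verbal <= 5:
        raise ValueError("verbal response must be 1-5")
    if not 1 <= motor <= 6:
        raise ValueError("motor response must be 1-6")
    return eye + verbal + motor

# Anything producing the right behaviours -- human or program driving a
# robot body -- gets the same maximum score.
print(glasgow_coma_score(4, 5, 6))  # -> 15
```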
03-19-2018, 01:05 PM | #100
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
I'd also say that there are ways to measure consciousness AS consciousness. For example, the measure of how much value something has to a person is the most desirable thing they will give up to get it or avoid losing it. That's an ordinal rather than a cardinal scale, but ordinal scales are part of the theory of measurement.
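Such an ordinal measure can even be extracted mechanically from observed trade-offs. A minimal sketch, assuming the person's choices are consistent (acyclic); the item names and the `ordinal_rank` helper are hypothetical:

```python
from collections import defaultdict, deque

def ordinal_rank(trades):
    """Order items least- to most-valued from observed trades.

    Each (given_up, obtained) pair means `obtained` is valued above
    `given_up`. A topological sort recovers the ordinal ranking; it
    yields ranks only, with no cardinal distances between them.
    """
    above = defaultdict(set)    # item -> items revealed to be valued above it
    indegree = defaultdict(int)
    items = set()
    for low, high in trades:
        items.update((low, high))
        if high not in above[low]:
            above[low].add(high)
            indegree[high] += 1
    queue = deque(sorted(x for x in items if indegree[x] == 0))
    order = []
    while queue:
        x = queue.popleft()
        order.append(x)
        for y in sorted(above[x]):
            indegree[y] -= 1
            if indegree[y] == 0:
                queue.append(y)
    return order

# Gave up time to get money, and money to get safety:
print(ordinal_rank([("time", "money"), ("money", "safety")]))
# -> ['time', 'money', 'safety']
```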
__________________
Bill Stoddard
I don't think we're in Oz any more.

Last edited by whswhs; 03-19-2018 at 01:41 PM.