03-16-2018, 07:30 PM | #41
Join Date: Feb 2016
Location: Melbourne, Australia (also known as zone Brisbane)
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
__________________
The stick you just can't throw away.
03-16-2018, 11:28 PM | #42
Untagged
Join Date: Oct 2004
Location: Forest Grove, Beaverton, Oregon
|
Re: No AI/No Supercomputers: Complexity Limits?
The AIs would be made by humans. That alone would mean they'd mimic human behavior and problem-solving, if not in quite the same way as we do.
Of course, no sane person would give their untested program uncontrollable emotions or the "free will" to ignore orders.
__________________
Beware, poor communication skills. No offense intended. If offended, it just means that I failed my writing skill check.
03-16-2018, 11:51 PM | #43
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Well, that depends. My cat has both of those, and he's not much of a problem. Of course, if he were a tiger, I'd be much more concerned.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-17-2018, 12:04 AM | #44
☣
Join Date: Sep 2004
Location: Southeast NC
|
Re: No AI/No Supercomputers: Complexity Limits?
If he were a tiger, I imagine your ability to limit his free will would be diminished more than your desire to do so was increased.
__________________
RyanW - Actually one normal-sized guy in three tiny trenchcoats.
03-17-2018, 12:52 AM | #45
Banned
Join Date: Mar 2018
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Frankly, 'free will' is a misnomer. Will, or intentionality, is determined by the values and perceptions of the individual; if it weren't, it wouldn't be your will. And those are ultimately derived from the material substrate. Intelligence and will cannot be separated, and controlling a mind is not as easy as programming a computer (which often goes haywire as well).

Given the level of detailed, unobservable, uncontrollable connexions involved in minds (unobservable because you'd have to destroy the mind to look at them, and uncontrollable because you can't reliably alter them at atomic resolution), it's very likely impossible to read minds, or even to know what a true AI is or would be thinking. (Telepathy is BS for the same reason: there is no localized entity known as a 'thought'.) If the AI is also more intelligent than people, then you basically have to depend on luck and an off switch to make sure it doesn't enslave you. Of course, it may have no interest in such a thing: human desires stem directly from our evolutionary history and social conditioning, none of which an artificial intelligence would share, or could possibly share (unless it were simply a human brain in a jar).

Basically, no one knows anything about what AI would be like, and it's foolish to pretend to when nobody really knows how human brains even work. AI personality is fair game, but the most unrealistic options are 'computer-like' and 'human-like', which are by far the most popular in science fiction. Just as with human-like aliens, though, these choices are made to go easy on the author and the reader, not because they're remotely likely.

Last edited by VonKatzen; 03-17-2018 at 01:09 AM.
03-17-2018, 01:37 AM | #46
Join Date: Feb 2005
Location: Berkeley, CA
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Why not? Just don't put it in control of things where its exercising free will would be problematic until you're sufficiently confident that its use of free will is a net benefit. (There are plenty of times when it's useful to have orders be ignored; in fact, there are a lot of orders your current computer will ignore, or at least force you to jump through hoops before carrying out.)
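The "don't put it in control until you trust it" idea can be sketched in a few lines of Python. This is a toy illustration only; the class and method names (`GuardedAgent`, `execute`, the allow-list) are hypothetical and not from any real AI framework:

```python
# Toy sketch: an agent wrapper that refuses orders outside an allow-list,
# much as an operating system refuses some commands outright or demands
# confirmation before carrying them out.
class GuardedAgent:
    def __init__(self, allowed):
        self.allowed = set(allowed)

    def execute(self, order):
        if order not in self.allowed:
            # The "hoop": the order is not carried out automatically.
            return f"refused: {order!r} (needs human sign-off)"
        return f"done: {order!r}"

agent = GuardedAgent(allowed={"sort mail", "file report"})
print(agent.execute("sort mail"))        # carried out
print(agent.execute("launch missiles"))  # ignored, escalated to a human
```

The point being that "free will" inside the sandbox is harmless: the agent can want whatever it likes, but its orders only take effect when they fall inside the envelope its builders are already confident about.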
03-17-2018, 06:27 AM | #47
Join Date: Jun 2006
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
__________________
-- MA Lloyd |
03-17-2018, 08:15 AM | #48
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
Of course, if I were writing a horror story, I could assume that (a) the builders have carefully limited the freedom of action of their AI because they know it's a crapshoot, and are prepared to shut it down if it behaves incorrectly, (b) the AI does not share human goals or want to serve them, but (c) the AI has figured out its situation and (d) is smart enough to plausibly fake human goals and sympathies to get what it wants. In fact, now that I think of it, that's pretty close to the plot of Ex Machina.
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-17-2018, 08:36 AM | #49
Join Date: Feb 2012
|
Re: No AI/No Supercomputers: Complexity Limits?
Given our state-of-the-art knowledge of the hard problem of consciousness, we have no basis even to hypothesize when we will develop self-conscious AI. There are two different problems. The first is that an algorithm cannot cause a mind. The second is that, in theory, a complete simulation of a mind is possible, and we couldn't tell the difference between it and a true mind.

So it's likely that the first "sentient" AI will be a simulation of a mind, behaving like the simulated mind. The underlying process can be different, and not even sentient, but the behaviour will be the same. On these assumptions, we can infer that a "human-like" functioning AI is easier to create than an alien one, at least if we are talking about sentient-like behaviour.

EDIT: I was positively impressed by the Blade Runner 2049 character Joi. She is believable as a complete simulation of a mind. She behaves like a true human, and you would instinctively be drawn to relate to her as such, possibly even falling in love; but there is no need for her to have real self-awareness, nor could we know whether she does or not.

Last edited by Ji ji; 03-17-2018 at 08:42 AM.
03-17-2018, 08:40 AM | #50
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
Quote:
And the former could legitimately be the subject of science fiction, assuming either direct information transfer without a physical channel, or an electronically mediated link between two brains. See Kevin Warwick's research as an early venture in the latter direction.
__________________
Bill Stoddard
I don't think we're in Oz any more.