#1
GURPS FAQ Keeper
Join Date: Mar 2006
Location: Kyïv, Ukraine
Greetings, all!
In an investigation, we're trying to figure out indirectly whether a SAI has been made Rogue (i.e. had its Honesty removed), or was merely somewhat corrupted and coerced. Long story short: the AI was 100% convinced that if it didn't destroy a certain space station, millions of people would die through indirect consequences. The AI did try to destroy the (smaller, roughly 6,000-person) station, but failed. The instance that committed the act is no longer available. Was it a Rogue, or merely misguided?
Quote:
According to the infallible font of Internet knowledge, murder requires malice aforethought, and the four states of mind recognized as constituting "malice" are: intent to kill, intent to inflict grievous bodily harm, reckless indifference to an unjustifiably high risk to human life, and intent to commit a dangerous felony.
Thanks in advance!
#2
Join Date: Jun 2005
Location: Lawrence, KS
It depends. Really, I think that's all you can say. It's the sort of question you might assign to a moot court in 2100 to see what they made of it.
Bill Stoddard
#3
Untagged
Join Date: Oct 2004
Location: Forest Grove, Beaverton, Oregon
I think national governments reserve the right to willingly kill innocents for any reason. For citizens, it's always murder, sometimes with mitigating circumstances, but murder nonetheless.
__________________
Beware, poor communication skills. No offense intended. If offended, it just means that I failed my writing skill check.
#4
Join Date: Feb 2005
Location: Berkeley, CA
Quote:
Nah, there's such a thing as an accident.
#5
Untagged
Join Date: Oct 2004
Location: Forest Grove, Beaverton, Oregon
Quote:
I included "willingly" as a qualifier. Even then, there is negligent homicide for egregious "accidents" like drunk driving and malpractice.
__________________
Beware, poor communication skills. No offense intended. If offended, it just means that I failed my writing skill check.
#6
Computer Scientist
Join Date: Aug 2004
Location: Dallas, Texas
Quote:
If the proximate cause of the person's death is not your intervention but your lack of intervention (you chose to give your spare oxygen bottle to B rather than A, and A runs out and dies), that's different. It depends on previous commitments: if the SAI was tasked with guarding the safety of the millions but let them die from something that doesn't involve criminal acts by other people, the SAI will be on the hook for some kind of homicide, from negligent homicide / manslaughter on up.
Last edited by jeff_wilson; 03-24-2014 at 11:49 AM.
#7
Join Date: Oct 2004
Basically the same in Germany, though it's quite possible that killing X would be classed as manslaughter, not murder.
#8
Untagged
Join Date: Oct 2004
Location: Forest Grove, Beaverton, Oregon
Quote:
The efficacy of memetic influence would likely affect how such laws are practiced, though I'm not sure which way they would go: more lenient, since anyone could be manipulated more easily, or stricter, since everyone would know that everything but absolute fact is just a load of BS and shouldn't be trusted implicitly.
__________________
Beware, poor communication skills. No offense intended. If offended, it just means that I failed my writing skill check.
#9
Join Date: Sep 2011
Quote:
How did the indirect consequences work?

"Blow up this station or we kill millions": that's murder in the jurisdictions I know, and everyone on the thread seems to agree.

"This station will crash into New York in eight hours and there is no way to change its orbit while preserving its structural integrity (because PHYSICS!)": not clear to me. The AI's aim is to divert the station and save NY. The AI knows that it is inevitable that everyone on the station will die as a result. This could easily fail to be murder.

Honesty is a psychological disadvantage, so it is obviously based on your knowledge of the law; it doesn't give you the equivalent of Local Law-infinite. An AI can have the actual laws built in, but all laws have grey areas, and an AI can't know how actual courts would interpret them. In cases like this, there is no way to "err on the side of caution", because there are millions of lives at stake.

This could well be a case where the GURPS Basic Set's description of the Disadvantage fails to solve important problems in philosophy and jurisprudence, so you can't actually use the situation to decide whether the AI still has the Disad.
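To put a number on why there is no cautious default, here's a minimal sketch in Python comparing expected deaths under each choice. Only the ~6,000 station population comes from the opening post; the two-million at-risk figure and the probabilities are made up for the sake of the arithmetic:

[code]
# Hedged illustration: STATION_DEATHS is from the opening post; AT_RISK
# and the probabilities p are hypothetical.
STATION_DEATHS = 6_000      # certain deaths if the AI destroys the station
AT_RISK = 2_000_000         # hypothetical deaths if the station is spared

def expected_deaths(destroy: bool, p: float) -> float:
    """Expected deaths for each choice, where p is the (unknown)
    probability that sparing the station really kills the millions."""
    return STATION_DEATHS if destroy else AT_RISK * p

for p in (0.001, 0.003, 0.01, 1.0):
    act = expected_deaths(True, p)
    wait = expected_deaths(False, p)
    cautious = "destroy station" if act < wait else "spare station"
    print(f"p={p}: destroy={act:.0f}, spare={wait:.0f} -> {cautious}")
[/code]

Under these made-up numbers the "cautious" choice flips at p = 0.003 (6,000 / 2,000,000), and that break-even probability is exactly the grey area a court would have to rule on.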
__________________
David Chart
#10
Computer Scientist
Join Date: Aug 2004
Location: Dallas, Texas
Quote:
There can certainly be circumstances where laws conflict (a studied problem in computer science, closely related to deadlock), and so it can pick the lesser of two liabilities, shut down entirely, or say "Norman, coordinate", as the manufacturer decides.
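For concreteness, here's a minimal sketch of that "lesser of two liabilities, else shut down" policy in Python; the directive weights, outcome numbers, and shutdown ceiling are all hypothetical, not drawn from GURPS or any real manufacturer spec:

[code]
# Hedged sketch of "pick the lesser of two liabilities, else shut down".
# All weights, outcomes, and the ceiling below are hypothetical.

def liability(outcome: dict) -> float:
    # Weight killing above letting die, as a crude stand-in for the
    # act/omission distinction discussed upthread.
    return 1.5 * outcome["killed"] + 1.0 * outcome["allowed_to_die"]

ACTIONS = {
    "destroy station": {"killed": 6_000, "allowed_to_die": 0},
    "do nothing":      {"killed": 0, "allowed_to_die": 2_000_000},
}

SHUTDOWN_CEILING = 10_000_000  # beyond this, refuse to act at all

def decide(actions: dict) -> str:
    best = min(actions, key=lambda a: liability(actions[a]))
    if liability(actions[best]) > SHUTDOWN_CEILING:
        return "shut down"  # the "Norman, coordinate" branch
    return best

print(decide(ACTIONS))  # -> "destroy station" under these made-up numbers
[/code]

The act/omission weighting is the knob the manufacturer (or the court) would fight over; set the ceiling low enough and every hard case ends in the shutdown branch instead.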
Tags
honesty, murder, trolley dilemma