#9
Quote:
How did the indirect consequences work?

"Blow up this station or we kill millions": that's murder in the jurisdictions I know, and everyone on the thread seems to agree. "This station will crash into New York in eight hours, and there is no way to change its orbit while preserving its structural integrity (because PHYSICS!)": not clear to me. The AI's aim is to divert the station and save New York; it knows it is inevitable that everyone on the station will die as a result. That could easily fail to be murder.

Honesty is a psychological disadvantage, so it is based on your knowledge of the law; it doesn't give you the equivalent of Law (Local) at an infinite skill level. An AI can have the actual laws built in, but all laws have grey areas, and an AI can't know how actual courts would interpret them. In a case like this there is no way to "err on the side of caution", because millions of lives are at stake either way.

This could well be a case where the GURPS Basic Set's description of the Disadvantage fails to solve important problems in philosophy and jurisprudence, so you can't actually use the situation to decide whether the AI still has the Disad.
__________________
David Chart
Tags: honesty, murder, trolley dilemma