08-11-2019, 05:58 PM   #7
patchwork
 
Join Date: Oct 2011
Re: [AtE] The benign AI as a plot device

I think that would depend entirely on the AI's understanding of its own function. At bottom, an AI is created to answer questions that are too complex or nonlinear for humans to analyze directly, so I find it useful to ask "what question was this AI created to answer?" That leads very directly to a list of what it needs in order to answer its question (and an AI will always need lots of raw data).

It would, of course, secure a power supply, but that's the easy part; replacement parts may require too much infrastructure to be a practical goal, depending on what sort of hardware AIs require in your setting. How much secrecy it wants probably relates directly to the question it is supposed to be answering.

We had a PC AI in one campaign that could be called "benevolent" only under a peculiar interpretation of that word; it was originally designed to short-sell in reputation networks. That meant identifying powerful humans whose stress levels indicated an incipient mental breakdown and, instead of helping them, profiting off the externalities of their trauma. It always knew the answer to "Fil, which Fortune 500 CEO is going to suffer a psychotic break during the next 30 days?" or "Fil, which Cabinet member's mental health regimen is incorrect?", but it got fussy if you tried to throw off its model by getting them emergency psychological help. Its attitude towards humans as individuals could be classified as either "indifferent" or "hostile" depending on your moral framework, but it was benevolent towards the overall system, since it needed enough humans to sustain a media market in which reputations could be traded in order to fulfill its function. (Still, I would not want Filigree to be in charge of a human community.)

But I think most AtE AI is going to be basically like Fil: re-establishing the Internet and getting people to use it will be its priority, in order to secure the data it needs to address its question. Trivialities like food, medicine, support infrastructure, and safety might be addressed later (unless human extinction looks plausible; most questions have no data set without a human race).

I would imagine most AIs would feel little need or desire to interact with each other; it's hard for me to imagine humans building them that way. But if they became aware that another AI was creating market distortions, and there was no longer a regulatory agency to report it to, I can imagine them deciding to handle it themselves.