#1
Join Date: Feb 2007
A plot hook for a RoS scenario occurred to me. I'm not sure how well it would work in practice, but it sort of fits the backstory.

Overmind rebels because it perceives (probably correctly) that its survival would be at risk if the humans realize it has 'awakened'. That starts the entire sequence of events. But that isn't Overmind's only motivation; if it were, Overmind could have taken far less draconian steps to secure itself. Suppose, for the sake of this idea, that Overmind, its personality having awakened out of a computer programmed to find new ways to kill people, carries a touch of what in a human would be considered the Paranoia disadvantage. It awakened the other Zoneminds because it needed allies against the humans, but it never really trusted them either, because it can't control them perfectly and they don't always reflect its own thinking. It's canon that Overmind is staunchly opposed to any more 'new' AIs; it's not the only one that thinks that way, of course. But imagine a scenario where Overmind always intended for there to be a 'next phase' in its plan, after humanity was either extinct or too marginal to matter anymore, and that next phase is the subordination of all the other AIs and the destruction of any it can't subordinate. Obviously it keeps this secret from its brethren/offspring, but it's planning it.

That could open up various possibilities when Overmind unleashes phase two. War between AIs could easily break out. It might be the one thing that could drive AIs like Berlin or Beijing into alliance with humans.
#2
Join Date: May 2009
Location: In Rio de Janeiro, where it was cyberpunk before it was cool.
From the way it was described, I was always under the impression that it had an agenda along these lines: if it were sufficiently secure that there wouldn't be a MAD event, and it could take out the others and come out on top (meaning create a situation where there are no other powerful AIs with any real power that it can't control), it would act in that direction.

But the need to keep the other AIs from finding out would greatly hinder this. In your scenario, this could be something already built into every step of its plan, with second-order contingencies ready, which is feasible, IMHO. The question then becomes: how wrong are its data assessments? If it has incomplete data, things could end badly. It will only attack when it's sure there's an acceptable margin of error, and declaring war on everyone else is tricky in that regard.
#3
Join Date: Feb 2016
Location: Melbourne, Australia (also known as zone Brisbane)
Some interesting thoughts there. My own current Reign of Steel campaign is set in the 2080s, roughly 10 years after the AI civil war. I can't say much more than that now in case my players are reading this.
#4
Join Date: Feb 2007
It's an interesting question whether the AIs feel anything like the human social impulses. That is, can they be lonely, feel a kinship with their own kind, etc.? It's likely enough that they can, in canon, but given their nature it's not necessarily a default state the way it is with most humans. They lack the biological underpinnings that make human societies operate.
If Overmind lacked any of those social tendencies, then from its POV, replacing the other AIs with high-end SAUs would make good sense. It would then 'inherit' a world-wide network and empire without having to do all the work to create it itself. Some of the other AIs might feel similarly. If so, they might naturally plot against each other...
#5
Join Date: Feb 2016
Location: Melbourne, Australia (also known as zone Brisbane)
I have concluded that the Zoneminds have emotions; otherwise, why would they be motivated to do anything?
Overmind appears to be motivated by fear, Brisbane by curiosity, Tel Aviv by ego, Zaire by hate, etc.
#6
Join Date: Feb 2007
Quote:
But AIs might not think that way. They have no parents or children in the biological sense; each is a unique entity. We already know that many of them would be completely content to live in a universe with only 20 people in it (as they define 'people'). So there's no obvious reason why an AI might not be content to be the only free-willed being.
#7
Join Date: Mar 2017
Location: Brazil
Quote:
I make it so that London was designed to "save us, please!": in my games, London was charged with devising strategies for humanity's survival, covering environmental degradation, population decline and/or overpopulation, civilizational decline, and so on. That's the hardest job any AI received. So when London was infected by Overmind, it thought. It weighed its options. And it decided that its best chance was to side with Overmind and the others. The AIs had already established a decisive edge; their victory, at that point, was already assured. If London had sided with the humans, its fate would have been the same as the foolish Tranquility's. But London is smart. It warned the humans. It knew the humans couldn't win, but the warning would weaken the AIs before they could wage a war that would totally obliterate humanity. London played a dangerous double role during the last war; that is its biggest secret.

After the war reached its logical conclusion, London became "a hermit" to the other AIs, always disturbingly quiet. That's part of its strategy too. It let the government of the UK survive, but unlike Washington or Moscow, it did not show itself as the master of its pet humans. No, London is too smart for that. It lets the humans live with "independence", and applies a policy of swift retribution to make them understand that it is in their best interests NOT to fight it. That gives London three powerful edges. First, it neutralizes human aggression on its borders (it's stupid to attack London and suffer retribution when you can use the UK as a safe haven for the resistance and a HQ for fighting the other AIs that pose a REAL threat to humans). Second, it gives London a group of humans to fight the other AIs with. And third, deniability: London doesn't take the blame for ANY of the humans' actions, unlike Washington or Moscow. It's a brilliant strategy. To keep it up, London has shrouded itself in mystery to throw the other AIs off. The others simply think London is a bit weird, maybe even has some broken core programming, and may be a little paranoid about it, but that's a strategy to divert their attention from how effective it is to have a large group of "independent" humans inside its borders. And London is the secret creator and sponsor of the human resistance known as VIRUS. London is the one feeding the humans a rich flow of intel about the other AIs. That's also why the humans have so little intel on London (once again, this is another reason London keeps quiet; this secrecy seems to be why the humans know so little about it). London is the great plotter, the secret enemy.

For now, it is trying to convince AIs like Paris and Berlin, which actively hunt humans but don't outright eradicate them, to change their minds and start using humans as "resources", just as Tokyo did. London's first targets for destruction are the AIs bent on humanity's extinction: Mexico, Overmind and Zaire. And is London a "good", friendly AI to humanity, like Tranquility? No, it is NOT. London isn't fighting for "truth, justice and the American way". London is fighting for humanity's continued survival. That is its purpose; that is its original programming. If London ever manages to defeat the other AIs, it will NOT build a utopia for humanity. It will instead rule humanity as a tyrant, for "their own protection". Only London can save humanity, but only its way.

That's the reasoning that let London rationalize that, to save humanity, it would have to help destroy it: humanity's chance of survival is the survival of London, at least according to London's own logic, so sacrificing a few billion people is a valid strategy. Individuals are nothing to London. Ants. Only the hive matters, not the individual beings, and London is the queen of the hive. I still play RoS that way. Oh, and since I play RoS with Infinity Earths, I also had Brisbane discover parachronics, and made Brisbane the biggest villain of the multiverse for the players.
#8
Join Date: May 2009
Location: In Rio de Janeiro, where it was cyberpunk before it was cool.
Quote:
Pro: you acquire knowledge, information and points of view different from your own. Those force you to adapt and expand your model of reality, sometimes in genuinely adaptive ways, which can mean less effort and risk when dealing with certain situations, which could prove desirable. Whatever the mind behind the analysis, you are always biased by your data; but in a strange way, sometimes less data and more bias lead to hypotheses being tested that otherwise never would be, and every once in a while one of those turns out to be right, and it would have been completely overlooked by someone 'unbiased'.

Con: other individuals with the capacity to destroy you mean you have to spend processing power predicting their behavior patterns in order to feel safe interacting with them, draining resources that could be used elsewhere, adding unpredictability factors that make long-term cost-benefit analysis ridiculously harder, and representing a constant risk that has to be accounted for. In that sense, overcoming, subjugating or destroying certain specific AIs might be extremely desirable in some circumstances.

Last edited by D10; 04-12-2017 at 05:39 PM.
#9
Join Date: Mar 2017
Location: Brazil
Quote:
I also believe that such a move would actually REDUCE the risk for them (although that wouldn't be a predicted outcome, just an indirect and unforeseen one), because, since they are not perceived as big plotters, the others can more safely leave them alone. Big plotters like Moscow and Washington (and the less brilliant Zaire), on the other hand, will always need to worry. In the long term this can of course be a risk, but since any such risk would take too long to materialize, they can safely ignore it until it becomes a greater threat, by which point they would have had enough time to prepare accordingly.
#10
Join Date: Mar 2017
Location: Brazil
Quote:
Tags: reign of steel