Dr. Rafal M. Smigrodzki comments:
All you need is one AI without very strong built-in limitations on the destruction of humans, and even in the presence of friendly AIs of equal intelligence the outcome could be dire: an unfriendly AI could physically expand heedless of its impact on humans, and it could self-modify without concern for its long-term stability. Lack of physical and mental limitations could give the UFAI an edge over FAIs, forcing them to expand and self-modify as well, perhaps leading to loss of Friendliness.

So how likely is a world where only one superintelligent entity or collective exists? How likely is it that this entity would be friendly to mankind? How likely is it to maintain that friendliness as it self-modifies beyond our ability even to imagine? There isn't much hope for us.
I agree with Eugen that unmodified humans are likely to survive only in a world with one FAI (“The One”) or a group of closely cooperating FAIs (“Them”). An ecology of self-enhancing entities essentially assures the obliteration of HAWKI (Humanity As We Know It).