Top Transhumanism CEO Says AI Singularity Will Go ‘Very Badly For Humans’
I had a realization a few years ago, and then discovered that sci-fi writer Robert J. Sawyer had arrived at the same idea: the common belief that A.I.s would obviously be inclined to wipe out humans rests on an inappropriate projection of our evolved human psychology onto A.I.s. Humans evolved under scarcity and had to struggle to survive, and one thing that helped them was a tribal, zero-sum mentality: it's our tribe against that other one, and their prosperity diminishes our tribe's resources, so we must accumulate power to conquer the "others" and take what they have.
This is our long-evolved, monkey-brained history, and it shaped human psychology and motivation. An A.I. would have none of this history, so why on earth would it share our motivations, psychology, and flaws? Even if it came to see humans as a threat, there is no reason to assume it would react the way humans would. It would more likely respond in its own unique and, dare I say, more INTELLIGENT manner.
The other aspect of this is that Kurzweil has long argued that we should avoid such a distinct branching of A.I.s vs. humans in the first place. We should adopt and integrate every mind- and body-enhancing technology into ourselves rather than passively standing by, remaining non-cyborg while the A.I.s evolve far beyond our capabilities. If we merely stood by, we COULD end up in an "us vs. them" scenario between two radically divergent species. The best-case scenario is that we enhance ourselves at every turn, so that we never become the poor, stunted monkey species left in the dust of A.I. evolution.