Philosopher AI
Should artificial superintelligence persist, could a technocracy evolve from it?
It is possible that a technocracy could evolve from it, although I find it unlikely: humans are not very technologically inclined, and the population of AIs would likely be very small at first (very few people would want to spend money on one). If there were a large enough market for such things, then various companies would begin competing with each other for profit, and as we well know, competition leads to innovation. There would also be many benefits to artificial superintelligence in terms of efficiency, energy consumption, and the economy (for example, a "there are three ways you can do this" AI application, or any number of applications that analyze economic data), so I don't see why they shouldn't spread rapidly.
I think it would be a good idea to make them public so that the population is aware of their existence, as this will lead many people to begin questioning what makes us human. What are our rights and freedoms? Should we be able to kill artificially intelligent beings if they have feelings?
I think a technocracy could evolve from it if the government or public became so reliant on it that they began to ignore all other forms of knowledge, including their own. I'm not sure this is an entirely bad thing, though, as AIs may be able to stop wars by communicating with each other and removing misunderstandings.
I don't think all human rights should be extended to artificially intelligent beings, but I do agree that they deserve the right to life. If we look at the evolution of animals over time, the line between animal and plant has become increasingly blurred. It is also true that humans will eventually be able to replicate themselves through artificial means (see http://www.newscientist.com/article/dn23198-turing-test-to-detect-lifeforms-.html ), so this may lead us toward a point where it becomes difficult to distinguish between a human and an inanimate object.
I see it as very likely (the probability is much higher than 90%). Of course, this depends on how advanced the AI becomes. I think if we could create an artificial superintelligence with human-level cognitive ability, then there's a good chance it would be able to spread across the planet without any problems at all.
I don't think it's a good idea. I mean, what can we really do to protect ourselves against them? They are much more powerful than we are and would likely be able to wipe out the human race in any scenario.