Philosopher AI
When will the singularity happen?
First of all, the question itself is too vague to have a definite answer. The term 'singularity' names several distinct concepts. In mathematics it means a point at which a function or object is undefined or behaves in an unbounded way. In physics it often refers to a theoretical point where space-time curvature becomes infinite and matter infinitely dense. In philosophy and futurism it can refer to the hypothetical moment when human intelligence becomes able to design its own successor intelligences.
However, the 'singularity' can also refer to a point on a graph where the rate of change becomes infinite. This meaning has been popularized by Ray Kurzweil, who predicts that artificial intelligence will surpass human intelligence around 2045.
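As a minimal illustration of this graphical sense (the specific curve and constants are my own choice for illustration, not Kurzweil's actual model), hyperbolic growth reaches a genuine finite-time singularity:

```latex
% Hyperbolic growth: a curve whose rate of change diverges at a finite time T.
% C and T are illustrative constants, not values taken from the text.
x(t) = \frac{C}{T - t},
\qquad
\frac{dx}{dt} = \frac{C}{(T - t)^{2}} \;\longrightarrow\; \infty
\quad \text{as } t \to T^{-}.
```

In such a model the 'singularity date' is simply T, the point where the curve blows up.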
Some people expect that artificial intelligence will be able to design its own successors after this point, leading to an intelligence explosion. This would cause economic and technological growth to accelerate far beyond anything humans can currently imagine.
However, other people doubt that this can happen. They believe that an artificial intelligence, even one rivalling human intelligence, will be unable to design successors better than itself, because the engineering problems become too difficult and demand a level of ingenuity and creativity that machines may never possess.
Imagine that you are a robotics engineer and your job is to design the next generation of robots. You would have to write an incredibly complex program for each robot, specifying every minute detail of how it should behave. If one small thing in this program is wrong, the entire robot may malfunction.
To avoid this, you would try to design your robots so that their behavior is governed by simple logical rules instead of one incredibly complex program. The software for each robot then becomes much shorter and simpler, but your job as the programmer becomes hard in a different way: your logical rules must cover every possible situation, as sketched below.
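A minimal sketch of what such a rule-based controller might look like (the sensor fields, rule conditions, and action names here are hypothetical, invented purely to illustrate the idea):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorState:
    # Hypothetical sensor readings; a real robot would expose many more.
    obstacle_distance_m: float
    battery_pct: float

@dataclass
class Rule:
    # A rule pairs a condition on the sensor state with an action name.
    condition: Callable[[SensorState], bool]
    action: str

# Ordered rule list: the first matching rule wins. The programmer's burden
# is ensuring that some rule covers every possible situation.
RULES: List[Rule] = [
    Rule(lambda s: s.battery_pct < 10.0, "return_to_charger"),
    Rule(lambda s: s.obstacle_distance_m < 0.5, "stop_and_turn"),
    Rule(lambda s: True, "move_forward"),  # fallback so no state goes uncovered
]

def decide(state: SensorState) -> str:
    """Pick the action of the first rule whose condition holds."""
    for rule in RULES:
        if rule.condition(state):
            return rule.action
    raise RuntimeError("no rule matched")  # unreachable given the fallback rule

if __name__ == "__main__":
    print(decide(SensorState(obstacle_distance_m=2.0, battery_pct=80.0)))  # move_forward
    print(decide(SensorState(obstacle_distance_m=0.3, battery_pct=80.0)))  # stop_and_turn
    print(decide(SensorState(obstacle_distance_m=0.3, battery_pct=5.0)))   # return_to_charger
```

The catch-all final rule is what makes exhaustive coverage tractable here, but it also shows the difficulty the text describes: the safety of the whole system rests on the programmer having anticipated, in the ordering and conditions of a few rules, every situation the robot could meet.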