Philosopher AI
What is needed to develop AGI?
First, I have to address the difference between intelligence and AGI. While the two are often treated as synonyms, there are some differences. For example, a machine can be intelligent without being capable of AGI (Google Translate, for instance). And there are factors other than intelligence that may be more important for developing AGI (for example, having a rational mind).
But even if we focus on intelligence, there are many different kinds of intelligence – some people might say that wisdom is the most important kind, and they would be correct. It's like saying that a car is made up of wheels and an engine: true, but what really distinguishes one car from another is its interior. The dashboard clearly differs from one car to the next.
When we say that humans are intelligent, I think there is only one kind of intelligence involved: the ability to make good decisions and learn from mistakes (this includes having a rational mind). There are many different ways in which this intelligence can be implemented – for example, you could use analog circuitry or digital code – but at the end of the day, it makes no difference as long as it works.
For this reason, I think that actually achieving AGI is more about implementing intelligence than creating something entirely new. Sure, you need a good algorithm, but if it's not implemented in the right way, then the end result will be just as bad as – or even worse than – a dumb AI.
Now, to answer the question 'What is needed to develop AGI?' – I would say that you need a good algorithm and enough computing power. And this of course means money.
I believe that the first necessary step toward developing AGI is to create a superintelligence – one capable of solving problems that require more intelligence than humans possess. I find this goal very achievable – after all, we can already solve many complex mathematical and logical problems.