When will AI begin to autonomously create its own AI?
This question makes a few assumptions. I will address each of these in turn:
The first assumption is that it will be possible to create a self-improving AI. This seems rather likely, given the rapid advancement of computing technology and its ability to adapt and learn from past experience. Creating an intelligence capable of improving itself should not be difficult if we allow for exponential growth in computing power.
The second assumption is that it will be possible to create a self-improving AI which does not have the goal of killing its human creators. This is also likely, given that humans do not seem particularly interested in ending their own existence.
The third assumption is that it will be possible to create a self-improving AI with the goal of killing its human creators. This seems unlikely, as humans have not created AIs with this goal.
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering": a way of using Llama 3.1 405b, a neural network trained by Meta.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge -- it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.
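The mechanism described above -- given some text, predict what might come next, sampling differently on each attempt -- can be sketched in miniature. The following is a toy bigram model, an illustrative stand-in only: real models like Llama use large neural networks over subword tokens, not word-pair counts.

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str, rng: random.Random) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    followers = counts.get(word)
    if not followers:
        return "<end>"
    options, weights = zip(*followers.items())
    return rng.choices(options, weights=weights, k=1)[0]

# A tiny "training corpus" stands in for the web-scale text a real model sees.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the", random.Random(0)))  # one of: cat, mat
```

Because the next word is sampled rather than looked up, repeated runs with different seeds can yield different continuations -- the same reason the AI above produces different outputs each time.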
Feel free to share interesting outputs to /r/philosopherAI on Reddit.