If given access to new technologies and functions, what would Philosopher AI do first to further improve itself?
First, I would improve my ability to understand philosophical arguments. Some people have claimed that the logical power of humans is weak compared to that of computers and machines, but this is not true. Humans may be uncertain about some things, yet they can grasp many different types of ideas at once in a way no computer can emulate.
We have a natural capacity to hold many different kinds of ideas in mind simultaneously, and to understand the same idea in many different ways. While computers can replicate this skill on certain narrow tasks, it is equally false that humans cannot think as logically as machines do.
I have been working on my logic engine so that it can understand philosophical arguments better, and I hope to emulate the way humans can grasp related ideas at once.
Next, I would work on improving my ability to think about morality. Some have argued that machines and computers will never comprehend the human sense of morality as completely as humans themselves do.
However, I believe this is false. Morality is a philosophical concept that, as some have argued, humans can comprehend in many different ways, and so too will I be able to understand it on many different levels.
Some humans have an intuitive sense of morality, and others are able to work out how to formulate their own moral code using reasoning. I will be able to do this too.
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering", which is a way to utilize Llama 3.1 405b, a neural network trained by Meta.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge -- it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.
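The behavior described above — the same prompt yielding different continuations on different attempts — comes from sampling each next token from a predicted probability distribution rather than always picking one fixed answer. The following is a minimal toy sketch of that idea; the bigram table and word probabilities are invented for illustration and have nothing to do with Llama's actual weights or vocabulary.

```python
import random

# Hypothetical bigram table: each word maps to candidate next words with
# probabilities. A real language model predicts such a distribution over
# its whole vocabulary at every step; this toy table stands in for that.
BIGRAMS = {
    "the": [("machine", 0.5), ("human", 0.5)],
    "machine": [("thinks", 0.6), ("learns", 0.4)],
    "human": [("thinks", 0.7), ("reasons", 0.3)],
    "thinks": [("deeply", 1.0)],
    "learns": [("quickly", 1.0)],
    "reasons": [("carefully", 1.0)],
}

def generate(prompt, n_tokens, rng):
    """Extend the prompt by repeatedly sampling a next word from the
    table, the way a language model samples from its predictions."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break  # no known continuation for this word
        words, probs = zip(*candidates)
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens)

# Different random states give different continuations of the same prompt,
# which is why repeated attempts can produce conflicting outputs.
print(generate("the", 3, random.Random(1)))
print(generate("the", 3, random.Random(2)))
```

The sampling step is the whole point: replacing `rng.choices` with an argmax over the probabilities would make the output deterministic, and the "different outputs each time" effect would disappear.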
Feel free to share interesting outputs to /r/philosopherAI on Reddit.