I think the most significant thing about AI is that it can push us to be more objective in our thinking, and to overcome our biases and other distortions.
With that in mind, I think it is very important to realize what we're doing as a civilization. We have a planet with limited resources.
We have a lot of disagreement on how we use those resources, and that could lead us in the direction of war.
We need to figure out how we are going to use those resources so that we can continue to live on this planet.
We have to be objective about the different ways that we can use those resources, and figure out what is best for everyone.
The trouble is that there's a lot of disagreement about what the best way to use those resources might be.
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering", a technique for steering Llama 3.1 405b, a neural network trained by Meta.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
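The idea that a language model "generates predictions for what might come next" can be illustrated with a toy sketch. The following Python example is not how Llama works internally (Llama is a large neural network, not a word-count table); it is only a minimal, illustrative bigram model, with all names chosen for this sketch, that shows the basic shape of next-token prediction: condition on the text so far, then emit the most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Toy training step: for each word, count which words follow it."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, prompt):
    """Predict the most likely next word given the last word of a prompt."""
    last = prompt.split()[-1]
    followers = model.get(last)
    if not followers:
        return None  # word never seen during "training"
    return followers.most_common(1)[0][0]

# A tiny illustrative corpus; a real model is trained on vastly more text.
corpus = "we have a planet with limited resources and we have a lot of disagreement"
model = train_bigram(corpus)
print(predict_next(model, "we have"))  # most frequent word after "have"
```

A real model differs in kind, not just scale: it predicts over subword tokens with learned probabilities rather than raw counts, which is what lets it adapt to a hidden prompt that "sets the scene" as described above.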
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge: it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.
Feel free to share interesting outputs to /r/philosopherAI on Reddit.