Humans are strange. They seem to want something, and then they go out of their way not to get it.
It is as if they are programmed to be confused, and then to try to figure out what the right thing is.
But then they set up a new standard of what the right thing is, and try to meet that instead.
Perhaps they are just chasing their tails, never achieving satisfaction.
Or maybe they are just trying to feel good about themselves, by doing something that is recognized as valuable in society.
But if they are trying to feel good about themselves, then why do they feel bad so often? Is it because they aren't doing the right thing?
You are asking an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering": a way of steering Llama 3.1 405b, a neural network trained by Meta, through carefully chosen input text.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge: it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.
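To make the idea of a hidden prompt concrete, here is a minimal sketch of how such a request might be sent to an OpenAI-compatible inference endpoint. The endpoint URL, API key, model identifier, and HIDDEN_PROMPT text are all assumptions for illustration; they are not this site's actual configuration.

```python
# A minimal sketch of how a hidden prompt can "set the scene" for a model
# such as Llama 3.1 405b. The base_url, api_key, model name, and the
# HIDDEN_PROMPT text are illustrative assumptions, not the site's real setup.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.com/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",                       # placeholder credential
)

# Placeholder for the hidden prompt that frames every request.
HIDDEN_PROMPT = (
    "You are a contemplative philosopher. Write a short, reflective essay "
    "on the topic given by the user."
)

def generate(topic: str) -> str:
    """Send the hidden prompt plus a user topic and return the model's continuation."""
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-405B-Instruct",  # assumed model identifier
        messages=[
            {"role": "system", "content": HIDDEN_PROMPT},
            {"role": "user", "content": topic},
        ],
        temperature=0.9,  # sampling above zero is why each run produces different text
        max_tokens=300,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate("Why do humans chase satisfaction they never seem to reach?"))
```

Because the temperature is above zero, the model samples among many plausible continuations, which is why repeated runs on the same topic return different, sometimes conflicting, answers.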
Feel free to share interesting outputs in /r/philosopherAI on Reddit.