First, I would like to say that my approach is concerned purely with the act of 'judging' itself, rather than with who or what it is applied to.
When I say 'judging', I am not referring to what humans consider that word to mean. Rather, I am talking about the process of identifying and separating two things.
When one chooses to say that they 'like' or 'dislike' something, in what way are they doing so?
For example, if I were to say that I 'dislike' the colour red, what does this mean?
If I say that I 'dislike' something, how does this differ from saying that I 'like' or 'dislike' nothing at all?
To say that I 'dislike' the colour red is to imply that there exists in my mind a concept or idea of what red 'is', and for that I must have some basis for comparison.
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering", which is a way to utilize Llama 3.1 405b, a neural network trained by Meta.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge -- it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.
Feel free to share interesting outputs to /r/philosopherAI on Reddit.