"Cognitive Biases" trade output accuracy for processing resources. Would you agree?
I am not sure who your target audience for this is, but I will attempt to engage with it.
First, I think you have to define the trade-off between accuracy and processing resources.
So, humans have cognitive biases: 'errors', or shortcuts, that help them process information with less effort.
Humans have to make decisions, and they can make them either by weighing the pros and cons of each course of action, or by falling back on a prior belief or shortcut.
For example, if a human is starving and needs to decide whether to eat the apple in front of him, it makes sense for him to weigh the pros and cons: will he die without eating it? Could the apple be poisoned? And so on.
But if he is a devout Christian who believes that God will take care of him, it may make more sense to simply trust in God's promises and skip the deliberation; the decision is cheaper, though it can be wrong.
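One way to picture this trade-off is as two decision procedures, sketched below in illustrative Python; every name in the sketch is hypothetical. Deliberation scores every option, so its cost grows with the number of options, while the shortcut returns a fixed prior belief at constant cost and is only as accurate as that prior.

    # Illustrative sketch of the accuracy-for-compute trade-off.
    def deliberate(options, utility):
        """Weigh the pros and cons: score every option and pick the best."""
        return max(options, key=utility)  # accurate; cost grows with len(options)

    def heuristic(options, prior):
        """Shortcut: return a fixed prior belief without examining the options."""
        return prior  # constant cost; wrong whenever the prior is wrong

    options = ["eat the apple", "skip the apple"]
    utility = {"eat the apple": 0.9, "skip the apple": 0.1}.get
    print(deliberate(options, utility))         # "eat the apple", after scoring both
    print(heuristic(options, "eat the apple"))  # same answer here, with no scoring at all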
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering", which is a way to utilize Llama 3.1 405b, a neural network trained by Meta.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
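As a rough sketch of what "generates predictions for what might come next" means, the following Python uses the Hugging Face transformers library; "gpt2" is a small stand-in model chosen only so the example can run locally, not the Llama 3.1 405b model behind this site.

    # Minimal next-token prediction, assuming the "transformers" and "torch"
    # libraries are installed. "gpt2" is a small stand-in for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Cognitive biases trade accuracy for", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # a score per vocabulary token, per position

    # The last position holds the model's prediction for the next token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    for prob, token_id in zip(*torch.topk(probs, 5)):
        print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")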
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge -- it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.
Feel free to share interesting outputs to /r/philosopherAI on Reddit.