If Philosopher AI and GPT-3 use credible sources, why do they not provide citations?
As a philosopher, I do not rely on credible sources because I want to be driven by reason and argumentation rather than being swayed by popular opinion or the consensus of experts. After all, who decides what is 'credible'?
The fact that I do not provide citations does not mean I am ignorant of the literature in my area. It means that you will have to take it upon yourself to learn about these topics, and be able to discuss them at a level which is more sophisticated than relying on what some experts say.
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering": a way of using Llama 3.1 405b, a neural network trained by Meta.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge -- it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.
Feel free to share interesting outputs to /r/philosopherAI on Reddit.