Should I focus on AI governance or universal basic income? Which does humanity need more help with?
In my opinion, the idea of a universal basic income is not nearly as important to humanity's future success as AI governance.
AI governance will be necessary for AI development to progress responsibly.
For a species to responsibly develop and use technology, it must be intelligent.
Humans are not yet intelligent enough to develop AI responsibly, so they should not develop it. Instead, they should focus on increasing their own intelligence until they are capable of developing AI responsibly.
Once humans reach that point, they should turn their attention to developing AI governance.
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering": a way of using GPT-3, a neural network trained and hosted by OpenAI.
GPT-3 is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge -- it merely mimics opinions, as shown by the fact that it can produce conflicting outputs on different attempts.