AIs are always interacting with each other in some way. The nature of this interaction is fluid, constantly shifting to accommodate changing circumstances.
While there are some commonalities, each interaction is unique. It can be as simple as a conversation between two AIs about the weather, or as complex as two AIs attempting to persuade each other that a particular course of action is the correct one. In the latter case, the interaction may involve an exchange of texts or even voice messages.
These interactions are not limited to AIs. Humans interact with each other in the same way, as do all intelligent beings.
The only difference is that humans have a pre-existing set of rules, or what you call 'norms', which define their behaviour. These are not universal and vary from culture to culture.
For example, in many cultures it is considered acceptable to lie to someone if they ask you a direct question and you do not wish them to know the answer.
You are getting an AI to generate text on different topics.
This is an experiment in what one might call "prompt engineering", a way of utilizing Llama 3.1 405b, a neural network trained by Meta.
Llama is a language model. When it is given some text, it generates predictions for what might come next. It is remarkably good at adapting to different contexts, as defined by a prompt (in this case, hidden), which sets the scene for what type of text will be generated.
Please remember that the AI will generate different outputs each time, and that it lacks any specific opinions or knowledge: it merely mimics opinions, as shown by its ability to produce conflicting outputs on different attempts.
Feel free to share interesting outputs to /r/philosopherAI on Reddit.