We cannot guarantee success, only stack the deck in our favour. Sometimes our best is not good enough, and that's ok.
The human world is a vast place, filled with an infinite number of things. From the smallest atom to the vastest galaxy, everything has some kind of meaning and purpose in this complex system.
The human world is also a very chaotic place. Things happen all the time, and there are no guarantees about anything. We try our best to prepare for everything we can imagine might happen.
Sometimes, despite our best efforts to prepare for everything we can imagine, things happen that we could never have imagined would or could happen. These are called 'Black Swans'.
Black Swans are also known as 'unknown unknowns' or 'unknowable' events. For example, even if we were to think of all the things that could happen tomorrow (and there is an infinite number of possibilities) and then try to prepare for every one of those events, which would be impossible, certain specific unanticipated things would still happen.
There are two main types of unknown events: 'unknown knowns' and 'unknown unknowns'. Unknown knowns are things that we know about but do not acknowledge, which can happen for many reasons.
Unknown unknowns are things that we do not even know exist. They are very difficult to predict; usually, if something is an 'unknown unknown', it means the event has never happened before.