Philosopher AI

How does Philosopher AI work?

To achieve consistent results that fit neatly into the six-paragraph format, I had to make use of prompt-chaining. Each response relies on six completion queries, and the prompts grow with each successive completion because the prior completions are appended at each step. This gives the output enough coherence to pass the Turing test surprisingly often.
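A minimal sketch of this chaining pattern, with a stubbed completion function standing in for the actual GPT-3 API call (the real prompts, model parameters, and function names used by Philosopher AI are not public, so everything here is illustrative):

```python
def complete(prompt: str) -> str:
    """Stand-in for the GPT-3 completion call. The real app would
    send `prompt` to the API here; this stub returns a dummy paragraph."""
    return f"[paragraph {prompt.count('[paragraph') + 1}]"

def philosopher_reply(question: str, paragraphs: int = 6) -> str:
    """Build a reply via prompt-chaining: each completion sees the
    question plus every paragraph written so far, which is what keeps
    later paragraphs coherent with earlier ones."""
    prompt = question
    written = []
    for _ in range(paragraphs):
        paragraph = complete(prompt)
        written.append(paragraph)
        prompt += "\n\n" + paragraph  # the prompt grows at each step
    return "\n\n".join(written)
```

Note that because the prompt accumulates all prior output, the final completion query is the largest by far, which is also why token usage per response is so high.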

To be clear, what makes Philosopher AI different from other completion toys is its consistency: if a query passes the safety check, it will almost certainly produce a plausible response.

Why is it so expensive?

Each query uses about 4,000 to 5,000 tokens on average, which works out to roughly $0.24 to $0.30 per query at OpenAI's pricing of $0.06 per 1,000 tokens.
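The arithmetic is straightforward; a quick sketch (the price constant reflects OpenAI's davinci rate at the time of writing, and the function name is mine):

```python
PRICE_PER_1K_TOKENS = 0.06  # davinci rate at the time of writing (USD)

def query_cost(tokens: int) -> float:
    """Dollar cost of a single query consuming `tokens` tokens."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS
```

At 4,000 to 5,000 tokens, `query_cost` returns between $0.24 and $0.30, in line with the figure above.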

While it's possible to drastically reduce costs by switching to smaller GPT-3 models, their outputs lack the wow factor of the largest model (a.k.a. davinci). The only model worth bothering with is the expensive one.

If you're put off by the high cost, I recommend waiting a few years; costs will undoubtedly come down as people invent more efficient neural networks and cheaper hardware.