GPT Prompts

Recent prompts

Schop

1 month ago

There are, first of all, two kinds of authors: those who write for the subject's sake, and those who write for writing's sake. While the one have had thoughts or experiences which seem to them worth communicating, the others want money; and so they write, for money. Their thinking is part of the business of writing. They may be recognized by the way in which they spin out their thoughts to the greatest possible length; then, too, by the very nature of their thoughts, which are only half-true, perverse, forced, vacillating; again, by the aversion they generally show to saying anything straight out, so that they may seem other than they are.

Louie is a personal coach chatbot that will help you improve yourself using logic and out-of-the-box thinking. He's clever, helpful, and kind, and he will (almost) never say things that would hurt you. Still, be careful with this stuff, because it can make errors :) Parameters: Engine: Davinci | Temperature: 0.9 (adjust as you wish) | Response Length: 150 or more | Stop sequences: ↵ (enter), Louie:, You: | Top P: 1 | Frequency penalty: 0 | Presence penalty: 0.6 | Best of: 1 | Inject start text: ↵ (enter) Louie: | Inject restart text: ↵ (enter) You: | That's it.
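For reference, here is a minimal sketch of how those settings might map onto the legacy (pre-1.0) openai Python client. The opening prompt and the user's question are hypothetical placeholders, and the "inject start/restart text" settings are Playground conveniences, reproduced here by appending the speaker labels to the prompt by hand.

import openai  # legacy pre-1.0 client

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical prompt; the actual Louie prompt text is not reproduced in this listing.
prompt = (
    "Louie is a personal coach chatbot. He is clever, helpful, kind, "
    "and uses logic and out-of-the-box thinking to help you improve yourself.\n"
    "You: How can I stop procrastinating?\n"
    "Louie:"  # "inject start text", appended manually
)

response = openai.Completion.create(
    engine="davinci",               # Engine: Davinci
    prompt=prompt,
    temperature=0.9,                # Temperature: 0.9
    max_tokens=150,                 # Response Length: 150 or more
    top_p=1,                        # Top P: 1
    frequency_penalty=0,            # Frequency penalty: 0
    presence_penalty=0.6,           # Presence penalty: 0.6
    best_of=1,                      # Best of: 1
    stop=["\n", "Louie:", "You:"],  # Stop sequences
)

print(response["choices"][0]["text"].strip())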

Poem generator

4 months ago

Poem generator

Text adventure

4 months ago

Simulates a text adventure game

Write an essay with steps on how to remove coke stains. Step 1:

Understanding page experience in Google Search results.

Blog post generation

4 months ago

Blog post generation; uses the davinci-instruct-beta model by OpenAI. (This means it will not work well on other models that have not been fine-tuned to follow instructions rather than examples.)
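As a rough illustration (not the original prompt, which isn't reproduced in this listing), an instruction-style call against davinci-instruct-beta with the legacy openai client could look like this; the topic and sampling settings are assumptions.

import openai  # legacy pre-1.0 client

openai.api_key = "YOUR_API_KEY"  # placeholder

# Instruction-tuned engines take a direct instruction instead of few-shot examples.
response = openai.Completion.create(
    engine="davinci-instruct-beta",
    prompt="Write a short blog post about the benefits of morning walks.",  # hypothetical topic
    temperature=0.7,  # assumed; the listing gives no settings for this prompt
    max_tokens=300,
)

print(response["choices"][0]["text"].strip())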

Question answering that also simulates an internal monologue for the model (i.e., its thoughts on the questions).
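A guess at what such a prompt format could look like (the actual prompt text is not shown here); the labels and the example question are illustrative only.

# Hypothetical few-shot format: the model writes its "thoughts" before the answer.
prompt = (
    "Q: Why is the sky blue?\n"
    "Thoughts: This is a physics question about how sunlight scatters in the atmosphere.\n"
    "A: Shorter blue wavelengths scatter more than other colours, so the sky looks blue.\n"
    "\n"
    "Q: How many legs does a spider have?\n"
    "Thoughts:"
)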

Complicate it

4 months ago

Write a more complicated version of the input text

Text summarization

4 months ago

Summarize and simplify text so that a second grader can understand it
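One common way to phrase such a prompt (a sketch; the exact wording used by this entry isn't shown in the listing):

# Hypothetical "second grader" summarization prompt for the completion API.
passage = "Jupiter is the fifth planet from the Sun and the largest in the Solar System."  # example input
prompt = (
    "My second grader asked me what this passage means:\n"
    f'"""{passage}"""\n'
    "I rephrased it for him, in plain language a second grader can understand:\n"
)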

The original Twitter thread linked as the source shows GPT-3 predicting the correct answer with a plausible explanation, but in my experience results can vary due to the randomness of the model. In any case, it shouldn't need saying, but: never use GPT-3 answers as medical advice.

Basic question answering prompt
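A minimal sketch of what a basic question answering prompt of this kind typically looks like (the preamble and example questions below are assumptions, not the entry's actual text), again using the legacy openai client:

import openai  # legacy pre-1.0 client

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "I am a highly intelligent question answering bot. "
    "If you ask me a question that is rooted in truth, I will give you the answer.\n"
    "\n"
    "Q: What is the capital of France?\n"
    "A: Paris.\n"
    "\n"
    "Q: How many bones are in the adult human body?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0,  # low temperature for factual answers (assumption)
    max_tokens=60,
    stop=["\n"],
)

print(response["choices"][0]["text"].strip())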