Brief Prompting Advice
Some Ideas For Beginners (and we're all beginners)
When David Goodstein sat down to write his canonical physics textbook States of Matter, he opened it like this:
Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on his work, died similarly in 1933. Now it is our turn to study statistical mechanics. Perhaps it will be wise to approach the subject cautiously.
This is how I feel writing now.
ChatGPT was released right at the end of 2022, which means almost nobody on earth has more than about 3.5 years' experience prompting an LLM. That means I am, statistically speaking, a world-leading expert. Below is my expert advice.
Perhaps it will be wise for both of us to approach the subject cautiously.
Photography Class
Jerry Uelsmann’s photography class at the University of Florida became famous after its mention in James Clear’s Atomic Habits. On the first day of class, he divided the students into two halves. One half was the “quantity” group. They would be graded only on the amount of work they produced. The other half was all about “quality”. They were graded on excellence: they needed to produce only one photo during the whole semester, but it had to be outstanding.
At the end of term, all the photos were submitted and everyone was surprised to find that all of the best images were produced by the quantity group. This group experimented more, played with composition and light and bokeh, and iterated on their findings. They produced far more photos than the quality half of the class, and many were absolute crap, but there were also stunning masterpieces that were near perfection. The quality group, meanwhile, spent most of their time dreaming and speculating and talking about perfection and not enough time taking photos.
We’re suddenly in a world where we can build more of our half-baked ideas. This is a second-order effect, but the game we’re playing is clear now. And we have so many more at-bats, and so many more chances at a home run.
Accessibility
Back in 2011, there was Google Plus. And it was so good, and so easy to use, that one of Google’s most prolific and well-known engineers, Steve Yegge, wrote an epic internal criticism of the system AND accidentally cross-posted it to the general public.
That piece is known as Stevey’s Google Platforms Rant. If this is before your time, lock in, because it’s glorious. In and among the fabulous metaphors, Stevey makes his primary point:
It’s called Accessibility, and it’s the most important thing in the computing world.
The. Most. Important. Thing.
If you’re sorta thinking, “huh? You mean like, blind and deaf people Accessibility?” then you’re not alone, because I’ve come to understand that there are lots and LOTS of people just like you: people for whom this idea does not have the right Accessibility, so it hasn’t been able to get through to you yet. It’s not your fault for not understanding, any more than it would be your fault for being blind or deaf or motion-restricted or living with any other disability. When software — or idea-ware for that matter — fails to be accessible to anyone for any reason, it is the fault of the software or of the messaging of the idea. It is an Accessibility failure.
I was in the “huh” crowd too: accessibility was just 508 compliance to me. But this one little paragraph revolutionized my entire understanding of getting-your-idea-across. If someone else doesn’t get your idea, it’s not their fault. It’s your fault.
These days, that someone is an AI. And look, the AIs are 140 IQ smart and getting smarter. If they don’t get it, it’s probably not a them thing. You did not present the information properly. You did not give them the details, rules, and structure they needed to understand. If they do something wrong, it’s because you didn’t think of something you needed to provide.
Context is accessibility with a new name.
Context Is King
Thinking like this totally changes the game. Instead of getting frustrated or asking simple one sentence questions, you can start to build a picture, or tell a story, or explain what you’re getting at.
It still needs to be said: context is not just words. All modern LLMs are multimodal, and screenshots are some of the best debugging tools available. This is why we have so many MCPs too. Playwright, letting the agent run SQL queries, Bash commands: all of these are just additional tools for loading context. The more context, the better the response.
Take a look at the Claude Code System Prompts as an example. There are a couple of hundred of these, progressively loaded as needed based on the tools Claude wants to run. If it needs git status, it loads details about that. If it wants to grep around and find stuff, it’s given some useful hints for how to grep well. This is also the entire point of the Skills ecosystem: give the AI context for the tools it can use and how to use them.
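The progressive-loading idea is simple enough to sketch. The snippet below is a toy illustration of the pattern, not Claude Code's actual internals; the tool names and hint text are invented for the example.

```python
# Progressive context loading: tool-specific hints are appended to the
# prompt only when the agent actually reaches for that tool, keeping the
# base context small. All names here are hypothetical.

TOOL_HINTS = {
    "git_status": "Summarize staged vs. unstaged changes; never commit without asking.",
    "grep": "Prefer fixed strings over regexes; scope searches to the relevant directory.",
}

def build_context(base_prompt: str, requested_tools: list[str]) -> str:
    """Return the base prompt plus hints for only the requested tools."""
    parts = [base_prompt]
    for tool in requested_tools:
        hint = TOOL_HINTS.get(tool)
        if hint:
            parts.append(f"[{tool} usage hint] {hint}")
    return "\n\n".join(parts)

print(build_context("You are a coding agent.", ["grep"]))
```

The agent that asks only to grep never pays the token cost of the git guidance, which is the whole trick.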
Context windows now are 1 million tokens or greater. For the record, Charles Dickens’ Bleak House is like 320,000 words. Performance degrades somewhat in the second half of the window, but you can still stuff a lot in there. If you want a real answer on something, write a novel, throw a few thousand lines of code at it, some sample screenshots, and then watch the richness of the conversation you have next.
“Why” is context. Understanding the goal of what you’re doing will shape the response and provide better results. Treat your AI like a highly intelligent interlocutor who will understand your context, not a robot.
“Who you are” is context. Tell your AI where you’re coming from. Tell it how old you are, how you think about things, what philosophical background you maintain, what religion you believe. And sometimes tell it to forget all of that and suppose its user is a farmer from northern Louisiana who ended up on a scholarship at Oxford for linguistics (unless you are said farmer).
Voice Wins
In my 2020 predictions, I said that voice would never become a major input route. Today I’m eating my words. Voice is incredibly productive, for two reasons.
The first is simply that the voice models have gotten so incredibly good. Wispr adds a question mark to my sentences based on the smallest upward inflection in my voice. It gets proper nouns right, strips out my umms and coughs, and just generally has a remarkably low error rate.
And the second reason comes back to context. The more you give, the better the response. And let’s face it, it’s far easier to talk than it is to type. Unless you’re a stenographer, it’s faster too.
Order does not matter at all to an AI. As soon as I start trying to write something out, I try to order my statements and put them together logically like puzzle pieces, which slows me down at least as much as my typing. But when I talk, I just spit it out however it comes out. The machine loves word vomit. If I repeat myself, that’s fine, it’s additional signal that what I said twice was important. The AI is just a giant pattern matching machine, and it can do it across many minutes of text. So just spit it all out, and make sure you get it all.
I’ve flipped completely to spilling my guts by talking to my computer.
Metaprompt
Many times I’ve heard people say something like, “I want to get the AI to do this... but I have no idea what the prompt should be.”
You may not, but the AI sure does.
The AI has seen a gazillion prompts. All of the RL and evals and RLHF and other fancy things are all done by giving the AI an input (prompt) and verifying the output. The AI knows or can research all the domains you can imagine. Get the AI to write your prompt for you.
So if you want to do something, but you’re not sure how, it should look like this.
Use your voice and talk clearly into the microphone:
“Hey AI, I have an idea...
<three minutes of speech and garbled half-baked ideas about the thing you want to do>
So, can you make me a prompt that will get you to do that?”
In 30 seconds, you’ll have a clear, highly specified prompt. Clear your context, copy it in, and go.
Dialogue Not Answers
An AI can be a very helpful thinking partner if you allow it to be. It can give you a quick answer to a question, yes, but it can also partner with you and help shape an idea that might not be formed yet. This can be explicit like the Grill-Me skill or the AskUserQuestion tool.
Or you can just ask the AI to have a Socratic dialogue with you, or to give you problems until you approach mastery. The ancient world knew the best form of learning was a patient and learned individual tutor. We all have one now.
Seek And Borrow
I’ll show you one of my favorite simple prompts, which I pair with a large design document or a bunch of code and details on a repository:
what’s the single smartest and most radically innovative and accretive and useful and compelling addition you could make to the project at this point?
I grabbed this from someone on Twitter; I wish I remembered who. It’s remarkable how many different frames there are to see aspects of a project. You do not have a monopoly on the best prompts and, if you look, you’ll find stuff everywhere. Seek them out and build them into your workflows. My favorites all become commands.
Agreement Is Cheap
People accept the first answer they get because they can’t imagine a better one. So your job is to imagine a better one. AI isn’t a slightly improved Google. It’s something different and alien. AI is more like a giant library that talks back to you, like in Hogwarts. Or if you prefer, more like Borges, but the Library of Babel was not a happy place. All that knowledge is trying to get your attention. Why wouldn’t you talk to it? It seems wise. It feels good.
And that’s the risk. It’s trained so well on reinforcement signals that it will coherently reinforce any good or bad argument you give it. It will be confident, so your job is to disagree. The novelty that makes this risk worthwhile is that you can participate in debate or dialectic, depending on the perspective you need, and it will do either willingly. Make it argue both sides of a debate. Make it explore the trade-offs — as Thomas Sowell said, “There are no solutions, only trade-offs.” Ask for evidence to support the contrary position. Dispute its solutions. Have it refute your thesis.
But remember its first instinct is always, always a mirror. Make sure to get past your reflection.
Verifiability Wins
Adam Mastroianni brilliantly described the difference between subjective and objective intelligence. The main difference is verifiability:
Some problems have clear boundaries and verifiable solutions, like “What’s the cube root of 38,126?”. These problems require objective intelligence. Other problems are vague and squishy and it’s not clear whether you’ve solved them, or whether they exist at all, like “How do I live a good life?”. These problems require subjective intelligence. Objective intelligence can be trained, reinforced, and validated. Subjective intelligence cannot.
It’s amazing to think that these things are still just next token prediction machines. Who would have thought that such a simple idea could refactor a codebase, write a hit song, convince someone to join the Church, or carry through on their suicide? It will happily argue, echo, and support any argument no matter how silly, true, or evil.
Those last two are scary, certainly, but what makes them a problem is that they are unrooted in reality. Trusting the output immediately is as foolish as testing your code in production. The thing that connects output back to the world is a test.
An AI is only as good as its evals. Give it a test, e.g. “the output should always be in this format”, and it will pass it willingly. It wants to grade itself, because this is how it’s trained, and it wants to get an A. This is also why coding has so clearly become the first, best domain for AI. Code can be made rigorous against a set of tests. So if you describe the behaviors you want out of a program in tests, the AI will write towards that confidently.
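Concretely, a behavior like “prices must always render as dollars with two decimal places” becomes a handful of assertions. The function name and spec below are invented for illustration; the point is that the tests pin down the behavior before any implementation exists, and the AI writes toward them.

```python
# A behavior spec expressed as tests. format_price is a hypothetical
# function we want written; these assertions define "done".

def format_price(cents: int) -> str:
    """One implementation that satisfies the spec below."""
    sign = "-" if cents < 0 else ""
    cents = abs(cents)
    return f"{sign}${cents // 100}.{cents % 100:02d}"

def test_format_price():
    assert format_price(0) == "$0.00"       # zero is not a special case
    assert format_price(1999) == "$19.99"   # cents split correctly
    assert format_price(-50) == "-$0.50"    # sign precedes the dollar sign

test_format_price()
print("spec passed")
```

Hand the AI the tests instead of the adjectives, and “make it right” stops being a matter of opinion.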
Our job as human users of AI is to break down our subjectivity into passable objective criteria wherever possible, which has the additional side benefit of maintaining our taste and voice as the discriminator. This is why “Make this UI better and prettier” is such an unkempt prompt, whereas “All render states must load in under 100ms, the theme needs to use color wheel theory for primary, secondary, and background colors, and interaction points must follow the attached clear layout guidelines” will help curate your UX more cleanly.
In fact I would argue that this is the whole game of taste. The point is not just to have taste, but to be able to describe it. What makes a sentence good? Why is one painting garish and another ethereal? We’ve become such experts in the emotive experience that taste evokes that we’ve forgotten the skills leveraged by creatives to produce the experience.
Defining objective criteria from subjective ideas is all about the production of taste.
You Can’t Use AI without the I
Most of the advice so far has been about loading context with the goal of getting the AI to do The Right Thing. It’s been a recurring theme that the AI is trained on certain instincts based on reinforced behavior, and that means when it gets a lot of context it can develop its own opinions. It sounds smart and confident because it was trained on a lot of smart people. This is the right way to architect this database structure and this is the right way to rewrite a sentence — emdash anyone?
A lot of the time it will be on the money. But not always. And even when it gets the output verifiably right, that doesn’t mean the user has all the right understanding. The Right-est Thing in the list of Right Things an AI should do is give you all the context you need. This whole thing is a feedback loop. It can make you more efficient and do things on your behalf, yes, but that works best when you have the right understanding.
This is the part we’re all still trying to figure out. If you’re a coder you don’t need to write all the code anymore, but you do need to understand the functions and the tests and the shape of the data. Which parts do you need to know and which can you leave to the machine? Nobody has the right answers yet. When I’m right about this, I can flow and build very quickly. And when I’m wrong I still paint myself into dark corners and stand there blindfolded, unaware of the problems I’ve accumulated.
Maintaining your understanding of whatever it is you’re working on is the best barometer I’ve seen yet. It means you have taste and direction in hand and the tools you wield are working on your behalf.
The best way I’ve found to do this is to focus most carefully on properly loading your human context window.
An Example
Let’s use an example: this is one of my investing prompts. I run it as a command, like so:
/thesis “My harebrained moneymaking idea goes here”
It does excellent market research on a particular thesis I seed it with. I’ve been consistently happy with the research and output. Some prompt observations:
It’s 200 lines long and loads additional files for more details.
I borrowed some of the language from other great prompts I found.
I definitely had AI help me write this prompt.
My thesis is often dictated by voice (I use Wispr).
It intentionally reinforces seeking truth rather than confirming the thesis, and actively looks for disconfirming evidence.
It produces an extensive set of context I can load into MY brain to make decisions.
Most important: this prompt is always just a starting point. After this, we talk quite a bit more to explore whether the research and analysis holds or needs adjustment.
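If you haven’t built a command before: in Claude Code, a slash command is just a markdown file in `.claude/commands/`, and `$ARGUMENTS` is where your dictated thesis lands. This is a much-abbreviated, hypothetical sketch of the shape (the real prompt is 200 lines, and the referenced file path is invented):

```markdown
<!-- .claude/commands/thesis.md -->
Research the following investment thesis: $ARGUMENTS

Rules:
- Seek truth, not confirmation. Actively hunt for disconfirming evidence.
- Load additional diligence steps from the project's research notes as needed.
- End with open questions for me, not a verdict.
```

Once it’s saved, `/thesis "..."` runs the whole thing with your idea spliced in.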
That’s all for now! If you have items you’d add to this list, I’d love to hear them!
An Addendum For Developers Only
Even though I’ve been using some developer-ish examples, most of this has been general advice. When you’re using command line tools and harnesses that can add as many tokens as you want, the game changes a bit. It’s no longer about getting as much context as you possibly can. Now it becomes about making sure that you have the RIGHT context, and that you don’t OVERLOAD the model with too many directives. Context rot is a real thing. Reload whenever possible, and use summarization and memory (not just compaction) as helpers in this process.



