7 March 2026
AI & Prompting Guide
How to Actually Use AI — 99% of People Are Doing This Wrong
Most people get average results from AI and assume the models are overhyped. The problem isn't the model. It's how you're using it.
I want to start with something honest.
When I first started using AI, my outputs were average. I thought the models were impressive but overhyped. Useful for basic stuff, unreliable for anything that mattered.
Then I realised the problem wasn't the model. It was purely me.
Here's what I was doing wrong, and what changed everything for me.
The Mistakes (And Why They're Killing Your Results)
Treating AI like a search engine
This is the most common one and it gradually ruins everything.
People type a question, read the first answer, and move on. That's not how this works.
LLMs aren't databases. They're reasoning engines. A search engine finds existing information. An LLM synthesises, builds, and creates — but only if you engage with it like that.
The mental model shift: stop asking questions. Start giving briefs.
Vague inputs, vague outputs
“Write me some marketing copy.”
Copy for what? For who? What tone? What outcome? What platform? What length?
The model doesn't know. So it guesses. And its guess is the safest, most average, most generic interpretation of what you could possibly mean.
Every vague prompt produces a vague output. Not because the model is bad, but because you didn't give it the information it needed to be good.
Accepting the first output
This one is almost universal.
Most people read the first response, decide it's good enough, and move on. Or they decide it's not good enough and give up.
Neither is the right move.
The first output is a draft. That's it. Your job is to respond to it — tell the model what's working, what isn't, what needs to change, and why. The model gets dramatically better after one or two rounds of iteration.
People who get exceptional AI outputs aren't using better models. They're iterating more.
No context, no memory
LLMs have little-to-no memory between conversations. Every new chat, you're starting from zero.
Most people open a chat, type a request with no background, get a mediocre answer, and wonder why it doesn't understand their project.
It doesn't understand your project because you never explained it.
Before asking anything complex, give the model what it needs: what you're building, who it's for, what stack you're using, what constraints matter, what you've already tried. Two minutes of context saves twenty minutes of back-and-forth.
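Here's a minimal sketch of what that context might look like, kept as a snippet you paste at the top of a new chat. Every detail below is a placeholder; the categories are the point, not the specifics.

```typescript
// A hypothetical project brief, saved once and pasted at the start of any
// new conversation about this project. All specifics are placeholders.
const projectBrief = `
Project: invoice-reminder automation for a small accounting SaaS.
Audience: non-technical finance teams at 10-50 person companies.
Stack: TypeScript, Next.js 15, Postgres, deployed on Vercel.
Constraints: no new third-party services; must fit the existing cron jobs.
Already tried: a Zapier flow, which got too brittle once conditional logic was needed.
`.trim();

// Prepend it to whatever you actually want to ask.
const prompt = `${projectBrief}\n\nWith that context: how should I structure the reminder scheduling logic?`;
```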
One model for everything
There isn't a best model. There's a best model for each type of task.
Using ChatGPT for everything because it's familiar is like using a hammer for every job because you own a hammer. It works. It's rarely optimal.
The people getting the most out of AI are switching models based on the task, and they know which model excels at what.
Asking multiple things in one prompt
“Review my code, suggest improvements, rewrite it, and explain what you changed.”
That's four tasks. The model will attempt all four simultaneously and do each one worse than if you'd asked for them separately.
One prompt. One task. Then iterate.
How to Level Up
The four-layer prompt
Every strong prompt has four parts. Most people write one.
Role — tell the model who to be.
"You are a senior Next.js developer with deep experience in API design."
Context — tell it what it needs to know.
"I'm building a webhook handler for a SaaS product. Stack is TypeScript, Next.js 15, deployed on Vercel."
Task — be specific about what you want.
"Write an API route that validates the incoming signature, deduplicates events using a Redis key, and returns 200 immediately before processing."
Output format — tell it how to respond.
"Return the complete file only. No explanation unless the logic is non-obvious."
That's it. Four layers. The difference between a prompt that gets something usable and one that gets something you'd never actually use.
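If you're calling a model over an API rather than through a chat window, the four layers map directly onto the request. Here's a minimal sketch, assuming OpenAI's chat completions endpoint and an illustrative model name: the role becomes the system message, and the context, task, and output format make up the user message.

```typescript
// Four layers assembled into one chat completion request.
// The endpoint shape is OpenAI's chat completions API; the model name is illustrative.
const roleLayer = "You are a senior Next.js developer with deep experience in API design.";

const contextLayer =
  "I'm building a webhook handler for a SaaS product. " +
  "Stack is TypeScript, Next.js 15, deployed on Vercel.";

const taskLayer =
  "Write an API route that validates the incoming signature, deduplicates events " +
  "using a Redis key, and returns 200 immediately before processing.";

const formatLayer =
  "Return the complete file only. No explanation unless the logic is non-obvious.";

async function askForWebhookHandler(): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // illustrative: use whichever model fits the task
      messages: [
        { role: "system", content: roleLayer },                                          // layer 1: role
        { role: "user", content: `${contextLayer}\n\n${taskLayer}\n\n${formatLayer}` },   // layers 2-4
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}
```

The exact split matters less than making sure all four layers are present somewhere in the request.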
Make it think before it answers
For anything requiring real reasoning — architecture decisions, debugging, strategy — don't ask for the answer straight away. Ask it to think through the problem first.
Bad
“What's the best way to structure a multi-tenant database?”
Good
“Before answering, think through the tradeoffs between shared schema, schema-per-tenant, and database-per-tenant. Consider that I have 500–2000 tenants, data isolation is a legal requirement, and I'm on PostgreSQL. Then give me a recommendation with your reasoning.”
The second prompt forces the model through the logic before landing on a conclusion. The output is specific, reasoned, and actually applicable to your situation — not a generic answer that could apply to anyone.
Show, don't tell
If you want a specific style, tone, or format — show an example.
“Rewrite this in a direct, conversational tone. Short sentences, no corporate language. Example: 'We leverage cutting-edge AI to optimise workflows' → 'We build AI that removes the manual work from your business.' Now rewrite this: [your text]”
One example beats a paragraph of style instructions. The model is a pattern matcher. Give it the pattern.
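If you do this often, it's worth keeping the examples in one place and building the prompt from them. A minimal sketch of that few-shot pattern follows; the second example pair and the helper name are made up for illustration.

```typescript
// Few-shot prompting: show the transformation with before/after pairs
// instead of describing the style in the abstract. Both pairs are
// illustrative; you'd write your own in your own voice.
const tonePairs = [
  {
    before: "We leverage cutting-edge AI to optimise workflows",
    after: "We build AI that removes the manual work from your business",
  },
  {
    before: "Our solution empowers stakeholders to unlock synergies",
    after: "Our tool helps your team stop doing the same work twice",
  },
];

// Hypothetical helper that assembles the rewrite prompt around any input text.
function buildRewritePrompt(text: string): string {
  const shots = tonePairs
    .map((pair) => `Before: ${pair.before}\nAfter: ${pair.after}`)
    .join("\n\n");

  return [
    "Rewrite the text below in a direct, conversational tone.",
    "Short sentences, no corporate language. Follow the pattern of these examples:",
    shots,
    `Now rewrite this:\n${text}`,
  ].join("\n\n");
}
```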
Use system prompts for anything you use repeatedly
If you're using AI for the same type of task regularly — writing, coding, research, client work — stop re-explaining yourself every time.
Set up a system prompt that tells the model everything it needs to know about how you work. Your stack, your style, your rules, your constraints. Paste it at the start of every relevant conversation.
In tools like Cursor, Antigravity, or Claude Projects, you can save this permanently. Configure it once and the model behaves consistently every time without being re-briefed.
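What goes into that system prompt depends on your work, but a minimal sketch might look like the constant below. Every specific in it, including the stack, the ORM, and the tone rules, is a placeholder for your own setup.

```typescript
// A reusable system prompt: save it as a Claude Project instruction, a Cursor
// rule, or a constant you prepend to API calls. All specifics are placeholders.
export const SYSTEM_PROMPT = `
You are assisting a solo developer on a TypeScript / Next.js 15 codebase deployed on Vercel.

Rules:
- Prefer the App Router and server components; flag anything that forces a client component.
- Database access goes through the existing Drizzle schema. Never suggest raw SQL in route handlers.
- Write in a direct, conversational tone. No filler, no apologies.
- If a requirement is ambiguous, ask one clarifying question before writing code.
`.trim();
```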
Iterate like it's a conversation
The best prompts are rarely written in one go. They evolve.
After the first output
“Good. Now make the tone more direct — the second paragraph reads too formally. Cut it by half.”
After the second
“Better. The third section still isn't landing. The point I'm making is [X]. Try again with that framing.”
This is how you get from “decent” to “exactly what I needed.” It's not a different model. It's a different mindset.
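The same loop works over an API, where iteration is just message history. The API is stateless, so each turn resends the whole conversation with your feedback appended as the newest user message. A minimal sketch, with the draft text stubbed out:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// The running conversation so far. Chat APIs don't remember previous requests,
// so every turn sends the full history again.
const history: ChatMessage[] = [
  { role: "system", content: "You are a direct, no-fluff copy editor." },
  { role: "user", content: "Rewrite this landing page intro: ..." },
];

// Placeholder for whatever the model returned on the previous request.
const firstDraft = "<the model's first attempt at the intro>";

// Append the model's reply and your feedback, then send the whole array again.
function addTurn(messages: ChatMessage[], modelReply: string, feedback: string): ChatMessage[] {
  return [
    ...messages,
    { role: "assistant", content: modelReply },
    { role: "user", content: feedback },
  ];
}

const nextTurn = addTurn(
  history,
  firstDraft,
  "Good. Now make the tone more direct. The second paragraph reads too formally. Cut it by half.",
);
```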
Pick the right model for the job
Here's how I think about it: stop defaulting to one model, and match the model to the task. The right tool for the right job is still the right principle.
Treat it like a smart collaborator, not a magic button
This is the mindset shift that changes everything.
The people getting the most out of LLMs aren't treating them like a vending machine — put in a request, get out an answer. They're treating them like a capable collaborator who needs a good brief, responds to feedback, and gets better the more clearly you communicate.
You wouldn't hand a brief to a new employee that said “write some content” and walk away. You'd explain the project, the audience, the goal, the format you need, and check back in once they'd had a go.
Apply that same logic to AI. The model is only as useful as the quality of your direction.
TL;DR
Most people blame the model when the output is bad. The model is rarely the problem.
Vague in, vague out. Every time.
Give it a role. Give it context.
Be specific about the task.
Tell it how to respond.
Iterate on the output.
Use the right model for the job.
Treat it like a collaborator, not a button.
The gap between people who think AI is overhyped and people who can't imagine working without it isn't the tools they're using.
It's how they're using them.
Want AI systems built for your business?
We build AI-powered automation systems for service businesses — the kind that save 25+ hours a week and recover revenue you didn't know you were losing.
Book a Free Audit