I spent three hours trying to get ChatGPT to generate a specific JSON format.
Three. Hours.
I tried begging. I tried threatening. I tried saying "please" in all caps. Nothing worked.
Then I learned about prompt engineering, restructured my prompt in five minutes, and got exactly what I needed on the first try.
That's the difference between throwing words at an LLM and actually engineering your prompts.
Here's the thing: prompt engineering isn't magic. It's not about finding secret incantations or Claude-whispering techniques. It's a learnable skill with clear patterns and principles.
And it's probably the highest ROI skill you can learn right now. Ten minutes learning prompt engineering will save you hours of frustration.
Let me show you how to go from "ChatGPT won't do what I want" to "I can make any LLM do exactly what I need." No fluff. Just practical techniques that actually work.
Why Prompt Engineering Actually Matters
"Can't I just ask nicely and hope for the best?"
You could. But you'll waste a lot of time.
Bad prompting:
Takes 10+ attempts to get what you want
Results are inconsistent
You can't reproduce good outputs
Feels like you're fighting the AI
Good prompting:
First attempt usually works
Results are consistent and predictable
You can reuse successful patterns
Feels like you're directing the AI
Real example from my life:
Bad prompt:
Write a product description for a headphone
Result: Generic, boring, could be any headphone.
Good prompt:
You are a product copywriter for a premium audio brand.
Write a product description for noise-canceling headphones targeting remote workers who need to focus in noisy environments.
Format:
- Opening hook (1 sentence addressing the pain point)
- 3 key benefits with specific details
- Call-to-action
Tone: Professional but conversational. Emphasize productivity gains.
Example benefit format:
"[Feature]: [Specific outcome for user]"
Length: 100-150 words
Result: Exactly what I needed, first try.
The difference? Structure, specificity, and context.
The Anatomy of a Good Prompt
Every good prompt has five components. Not all prompts need all five, but knowing them gives you options.
1. Role/Context - Who should the AI be?
Give the AI a role. It sounds silly, but it works.
❌ Without role:
Explain quantum computing
✅ With role:
You are a CS professor explaining quantum computing to undergraduate students who have basic knowledge of classical computing but no physics background
Why this works: The AI adjusts its language, assumptions, and depth based on the role.
Other useful roles:
"You are a senior software engineer reviewing code"
"You are a patient teacher explaining to a 10-year-old"
"You are a technical writer creating documentation"
"You are a critical editor providing honest feedback"
2. Task - What exactly do you want?
Be specific. Painfully specific.
❌ Vague:
Make this email better
✅ Specific:
Rewrite this email to be more professional, concise (under 100 words), and include a clear call-to-action to schedule a meeting this week
The more specific, the better:
"Summarize" → "Summarize in 3 bullet points"
"Write code" → "Write Python code using pandas with error handling"
"Explain" → "Explain using an analogy from cooking"
3. Format - How should the output look?
Don't just describe the format. Show an example.
❌ Without format:
List the pros and cons
✅ With format:
List pros and cons in this format:
Pros:
- [Benefit]: [Specific explanation]
- [Benefit]: [Specific explanation]
Cons:
- [Drawback]: [Specific explanation]
- [Drawback]: [Specific explanation]
Common formats:
Bullet points
Numbered lists
Tables
JSON
Markdown
Code blocks
Step-by-step instructions
4. Examples - Show, don't just tell
This is the secret weapon. Give examples of what you want.
❌ Without examples:
Write engaging social media posts
✅ With examples:
Write 3 LinkedIn posts in this style:
Example 1:
"I spent 5 years avoiding Docker. Then I learned these 3 things and now I can't imagine deploying without it. Here's what changed my mind: [thread]"
Example 2:
"Hot take: Your ML model's accuracy doesn't matter if users don't trust it. Here's why explainability > performance: [thread]"
Your posts should:
- Start with a hook (personal story or hot take)
- Promise value
- End with [thread] to indicate more coming
5. Constraints - What are the boundaries?
Tell the AI what NOT to do, what limits exist, and any specific requirements.
❌ Without constraints:
Write a blog post
✅ With constraints:
Write a blog post with these constraints:
- Length: 800-1000 words
- Reading level: 8th grade (use simple language)
- Tone: Professional but not academic
- No jargon without explanation
- Include 2-3 code examples
- Don't use passive voice
- Target audience: Junior developers
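The five components above can be assembled mechanically. Here's a minimal Python sketch; the `build_prompt` helper and its parameter names are my own illustration, not any standard API:

```python
def build_prompt(role: str, task: str, fmt: str = "",
                 examples: str = "", constraints: str = "") -> str:
    """Assemble a prompt from the five components.

    Only role and task are required; the rest are optional,
    mirroring the advice that not every prompt needs all five.
    """
    parts = [role, task]
    if fmt:
        parts.append(f"Format:\n{fmt}")
    if examples:
        parts.append(f"Examples:\n{examples}")
    if constraints:
        parts.append(f"Constraints:\n{constraints}")
    return "\n\n".join(parts)

# Example usage: role + task + constraints, no format or examples
prompt = build_prompt(
    role="You are a product copywriter for a premium audio brand.",
    task="Write a product description for noise-canceling headphones.",
    constraints="- Length: 100-150 words\n- Tone: professional but conversational",
)
print(prompt)
```

Once your components live in variables like this, swapping roles or constraints for a new task is a one-line change.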
Zero-Shot vs Few-Shot Prompting
Two fundamental approaches to prompting.
Zero-Shot: Just Ask
You give instructions but no examples.
Classify this email as urgent, normal, or low-priority:
"Hey, just following up on the Q4 report. Let me know when you have a chance to review."
Classification:
When to use:
Simple, common tasks
Well-defined categories
Model already knows the pattern
Pros: Quick, simple
Cons: Less control over output format
Few-Shot: Show Examples
You give 1-5 examples, then your actual task.
Classify emails as urgent, normal, or low-priority.
Examples:
Email: "URGENT: Server down, customers can't access site"
Classification: urgent
Email: "Meeting notes from yesterday attached"
Classification: low-priority
Email: "Client asking about Q4 pricing, needs answer by Friday"
Classification: normal
Now classify this:
Email: "Board presentation needs final review before tomorrow's meeting"
Classification:
When to use:
Specific format needed
Custom classification
Unique style or tone
Model doesn't naturally know the pattern
Pros: Much more control, consistent outputs
Cons: Takes longer to write, uses more tokens
The rule: Start zero-shot. If results are inconsistent, add examples.
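Because a few-shot prompt is just labeled examples concatenated ahead of the real input, it's easy to generate from data. A sketch; the helper name and labels are illustrative:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]],
                    query: str) -> str:
    """Build a few-shot classification prompt from labeled examples."""
    lines = [instruction, "", "Examples:"]
    for email, label in examples:
        lines.append(f'Email: "{email}"')
        lines.append(f"Classification: {label}")
    # End with the real input and a dangling label for the model to complete
    lines += ["", "Now classify this:", f'Email: "{query}"', "Classification:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify emails as urgent, normal, or low-priority.",
    [("Server down, customers can't access site", "urgent"),
     ("Meeting notes from yesterday attached", "low-priority")],
    "Board presentation needs final review before tomorrow's meeting",
)
print(prompt)
```

This also makes the zero-shot-first rule cheap to follow: start with an empty `examples` list, and add cases only when outputs drift.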
Chain-of-Thought Prompting
This is where it gets interesting.
The problem: Sometimes LLMs give you the answer without showing their work. If the answer is wrong, you don't know where it went off track.
The solution: Make the LLM think step-by-step.
❌ Without CoT:
What's 27 × 34?
Answer: 918
✅ With CoT:
What's 27 × 34? Think step by step.
Let me break this down:
1. 27 × 30 = 810
2. 27 × 4 = 108
3. 810 + 108 = 918
Answer: 918
Why this is powerful:
You can see the reasoning. If the answer is wrong, you can spot where the logic broke.
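That's the whole point of CoT: each intermediate step is independently checkable. The decomposition in the example above verifies mechanically:

```python
# Verify each step of the chain-of-thought decomposition of 27 × 34
step1 = 27 * 30   # 810
step2 = 27 * 4    # 108
total = step1 + step2
assert step1 == 810
assert step2 == 108
assert total == 27 * 34 == 918
print(total)  # 918
```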
Better example - real-world use:
❌ Direct:
Should we use microservices or monolith for our startup?
Answer: Use microservices
✅ Chain-of-Thought:
Should we use microservices or monolith for our startup? Consider:
1. Team size (3 engineers)
2. Timeline (MVP in 3 months)
3. Expected traffic (1000 users initially)
Think through the tradeoffs step by step.
Analysis:
1. Team size: 3 engineers is small. Microservices require more coordination and DevOps knowledge. With 3 people, communication overhead of microservices might slow us down.
2. Timeline: 3 months is tight. Microservices mean setting up multiple repos, CI/CD pipelines, service discovery, etc. A monolith is faster to start.
3. Traffic: 1000 users is tiny. We don't need the scalability benefits of microservices yet. Premature optimization.
Conclusion: Start with a monolith. The operational complexity of microservices doesn't justify the benefits at this scale. Revisit when you have 10+ engineers or proven scaling needs.
When to use CoT:
Complex decisions
Math or logic problems
Debugging
Analysis requiring multiple factors
When you need to verify reasoning
How to trigger CoT:
"Think step by step"
"Let's break this down"
"First, consider... Then..."
"Walk through the logic"
Role Assignment
Giving the AI a specific role dramatically changes output quality.
Generic Prompts Get Generic Results
❌ No role:
Write an email about the project delay.
Dear Team,
I wanted to inform you that the project will be delayed by two weeks
Bland. Corporate. Soulless.
Specific Roles Get Specific Results
✅ With role:
You are a senior product manager who values transparency and has built trust with your team over 3 years.
Write an email to your team about a 2-week project delay caused by unexpected API changes from a vendor. Your team worked hard to hit the deadline.
Tone: Honest but optimistic. Acknowledge their effort. Explain the situation clearly. Outline next steps
Result:
Subject: Update on Launch Timeline - Here's What Happened
Hi team,
First, I want to acknowledge the incredible effort you've put in over the past month. You've crushed every milestone, and I've been consistently impressed by the quality of work.
Here's the situation: Our vendor partner pushed a breaking API change yesterday that affects our core integration. This isn't something we could have anticipated, and it's completely outside our control.
The reality: We need two extra weeks to adapt to the new API and re-test everything. I know this is frustrating after all your hard work to hit the original date.
Here's the plan:
- This week: API adaptation (backend team leading)
- Next week: Testing and QA
- Week of Feb 12: Launch
I've already communicated this to stakeholders. They understand the situation and support the new timeline.
Questions or concerns? My door is always open.
Thanks for your continued excellence.
[Name]
Much better. Empathetic, clear, professional.
Useful Role Templates
For writing:
You are a [role] with [experience] writing for [audience].
Examples:
- "You are a technical blogger with 5 years of experience explaining complex topics to junior developers"
- "You are a copywriter for a B2B SaaS company writing to enterprise CTOs"
- "You are a patient teacher explaining to students who struggle with math"
For analysis:
You are an [expert type] analyzing [topic] for [purpose].
Examples:
- "You are a data scientist analyzing this dataset to find actionable insights for the marketing team"
- "You are a security expert reviewing this code for vulnerabilities"
- "You are an experienced architect evaluating this system design for scalability issues"
For coding:
You are a [seniority] [language] developer who values [principles].
Examples:
- "You are a senior Python developer who values readability and comprehensive error handling"
- "You are a pragmatic JavaScript developer who ships working code over perfect code"
- "You are a Rust developer who prioritizes memory safety and performance"
Common Prompt Engineering Mistakes
Let me save you some time. Here are the mistakes everyone makes:
Mistake #1: Being Too Vague
❌ Vague:
Make this better
What "better" means:
More concise?
More detailed?
More professional?
More casual?
More technical?
The AI doesn't know. You'll get random improvements.
✅ Specific:
Make this more concise (under 50 words) while keeping the main points. Remove unnecessary adjectives and combine sentences where possible
Mistake #2: Not Providing Examples
❌ No examples:
Write a post in my style
How is the AI supposed to know your style?
✅ With examples:
Write in this style:
Example 1: "Docker is amazing once you stop fighting it. Here's what I wish I knew before wasting a week on networking issues."
Example 2: "Everyone says 'just use Kubernetes.' Nobody mentions you need a PhD to configure it properly."
Your style: Direct, slightly humorous, starts with a pain point, uses first-person, includes specific examples.
Now write a similar post about learning Rust
Mistake #3: Ignoring Format
❌ No format specified:
Give me the 5 key points
AI returns:
A paragraph
Bullet points
Numbered list
Table
Who knows?
✅ Format specified:
Give me the 5 key points in this exact format:
1. [Point]: [One sentence explanation]
2. [Point]: [One sentence explanation]
Mistake #4: No Constraints
❌ No boundaries:
Write a blog post about AI
AI writes:
500 words? 5000 words?
Technical level: PhD or beginner?
Tone: Academic or casual?
Structure: Essay or listicle?
✅ Clear constraints:
Write a 1000-word blog post about AI for non-technical business leaders.
Constraints:
- Avoid jargon (or explain it)
- Use real-world business examples
- Conversational tone
- Include 3 practical applications
- End with clear next steps
Mistake #5: Not Iterating
❌ Give up after one try:
[Prompt]
[Bad result]
"AI is useless."
✅ Refine based on output:
[Prompt v1]
[Okay result, but too formal]
[Prompt v2: Add "use a conversational tone"]
[Better result, but too long]
[Prompt v3: Add "keep under 200 words"]
[Perfect result!]
Good prompting is iterative. Your first prompt rarely works perfectly. Refine it.
Practical Prompting Patterns
Here are battle-tested templates you can use immediately.
Pattern 1: The Clarification Loop
When you're not sure what you want:
I need to [goal] but I'm not sure of the best approach.
Ask me 3-5 clarifying questions to understand:
- My constraints
- My audience
- My desired outcome
- Any relevant context
Then provide a recommendation based on my answers
Why this works: The AI helps you figure out what you actually need.
Pattern 2: The Expert Panel
Get multiple perspectives:
I need to decide whether to [decision].
Simulate a discussion between:
1. A pragmatic engineer (values shipping fast)
2. A perfectionist architect (values long-term quality)
3. A business-focused PM (values ROI and user impact)
Have each present their perspective, then summarize the key tradeoffs
Why this works: You see multiple angles without confirmation bias.
Pattern 3: The Structured Output
When you need consistent formatting:
Extract information from this text in JSON format:
{
  "name": "",
  "email": "",
  "role": "",
  "company": "",
  "key_points": []
}
Text: [your text here]
Return ONLY valid JSON. No additional text
Why this works: Easier to parse programmatically.
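If you're consuming the output in code, validate it instead of trusting the model. A minimal Python sketch; the model reply here is hard-coded for illustration, where a real one would come from an API call:

```python
import json

EXPECTED_KEYS = {"name", "email", "role", "company", "key_points"}

def parse_extraction(raw: str) -> dict:
    """Parse the model's JSON reply and check it has the expected shape."""
    data = json.loads(raw)  # raises a ValueError subclass on invalid JSON
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# Simulated model response
reply = ('{"name": "Ada", "email": "ada@example.com", "role": "CTO", '
         '"company": "Acme", "key_points": ["wants a demo"]}')
record = parse_extraction(reply)
print(record["name"])  # Ada
```

If the model wraps the JSON in extra prose despite the "Return ONLY valid JSON" instruction, this fails loudly, which is exactly when you want to tighten the prompt.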
Pattern 4: The Step-by-Step Guide
For processes or tutorials:
Create a step-by-step guide for [task].
Format each step as:
Step X: [Action verb] [What to do]
Why: [Brief explanation of why this step matters]
Example: [Concrete example]
Common mistake: [What people often get wrong]
Include 5-7 steps total
Why this works: Comprehensive, actionable, anticipates issues.
Pattern 5: The Critique Request
For improving your work:
Review this [code/writing/design] as a [role].
Provide feedback in this format:
What works well:
- [Specific strength]
- [Specific strength]
What needs improvement:
- [Specific issue]: [Why it's a problem] → [Suggested fix]
- [Specific issue]: [Why it's a problem] → [Suggested fix]
Priority fixes: [Top 2-3 most important changes]
Why this works: Balanced, actionable, prioritized feedback.
Advanced Technique: Meta-Prompting
This is next-level: use the AI to write better prompts.
I want to [goal].
Help me write a better prompt by:
1. Identifying what information I should provide
2. Suggesting a clear structure
3. Recommending specific constraints
4. Providing an example of a well-crafted prompt for this task
Then, write the optimized prompt for me
Example:
My vague request:
I want ChatGPT to help me write better emails
Meta-prompt:
I want to create a prompt that helps me write professional emails faster.
Help me write a better prompt by:
1. What information should I provide?
2. What structure works best?
3. What constraints should I include?
4. Show me an example prompt
AI response:
Here's an optimized prompt for email writing:
---
You are a professional communication coach helping me write effective emails.
For each email I need to write, I'll provide:
- Recipient's role and relationship to me
- Purpose of the email
- Key points to include
- Desired tone
Then create an email that is:
- Concise (under 150 words unless specified)
- Clear and actionable
- Appropriate for the context
- Includes a specific call-to-action
Template:
RECIPIENT: [role and relationship]
PURPOSE: [what I want to achieve]
KEY POINTS: [bullet list]
TONE: [professional/casual/urgent/etc.]
---
Use this template for your email requests
Boom. The AI just wrote you a better prompt.
The Prompt Voodoo Framework
After writing thousands of prompts, I developed a systematic approach. I call it The Prompt Voodoo Framework.
It's a 5-step process for crafting prompts that work:
Step 1: Define the Outcome
What EXACTLY do you want?
Not "help me with code" but "write a Python function that processes CSV files and handles missing data."
Step 2: Set the Context
Who is the AI? Who is the audience?
"You are a senior developer. The audience is junior engineers learning Python."
Step 3: Provide the Structure
What should the output look like?
Show examples. Specify format. Give constraints.
Step 4: Add Examples (If Needed)
Does the task require a unique format or a specific style?
Show 2-3 examples of what "good" looks like.
Step 5: Iterate and Refine
Run the prompt. Evaluate output. Refine prompt. Repeat.
This framework is the foundation of my Prompt Voodoo course, where I teach this systematically with 50+ examples across different use cases.
Prompt Engineering for Different Tasks
Let me show you how to apply these principles to common tasks.
For Code Generation
You are an experienced Python developer who writes clean, well-documented code.
Task: Write a function that [specific task]
Requirements:
- Include type hints
- Add comprehensive docstring
- Handle edge cases (empty input, None, etc.)
- Use descriptive variable names
- Add 2-3 inline comments explaining complex logic
- Include example usage
Code:
For Content Creation
You are a content writer for [audience].
Write a [type] about [topic].
Style guide:
- Tone: [professional/casual/humorous]
- POV: [first-person/third-person]
- Length: [word count]
- Include: [specific elements]
- Avoid: [things to avoid]
Format:
[structure template]
For Data Analysis
You are a data analyst presenting findings to [audience].
Analyze this data and provide:
1. Key insights (3-5 bullet points)
2. Trends or patterns
3. Actionable recommendations
4. Potential concerns or caveats
Present findings in a way that [non-technical stakeholders/executives/engineers] can understand and act on.
Data:
[your data]
For Debugging
You are a senior engineer debugging code.
I'm getting [error]. Here's my code:
[code]
Please:
1. Identify the likely cause
2. Explain why it's happening
3. Provide a fix with explanation
4. Suggest how to prevent this in the future
Be specific - point to exact lines and explain the logic
Measuring Prompt Quality
How do you know if your prompt is good?
Test 1: Consistency
Run the prompt 3-5 times. Do you get similar results?
❌ Bad: Wildly different outputs each time
✅ Good: Consistent structure and quality
Test 2: First-Try Success
Did you get what you wanted on the first attempt?
❌ Bad: Took 5+ iterations
✅ Good: First or second try worked
Test 3: Transferability
Can someone else use your prompt and get good results?
❌ Bad: Only works for you, with context only you have
✅ Good: Anyone can use it successfully
Test 4: Specificity
Is the output exactly what you need, or just "pretty good"?
❌ Bad: Close, but needs manual editing
✅ Good: Ready to use as-is
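The consistency test is easy to automate. Here's a sketch that runs a prompt several times through a model function and checks the outputs share a structure; `fake_model` is a stand-in for a real API call, and line count is a deliberately crude proxy for "same structure":

```python
def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call; always returns three bullets."""
    return "- point one\n- point two\n- point three"

def consistency_check(model, prompt: str, runs: int = 5) -> bool:
    """Run the prompt several times; pass if every output has the
    same number of lines."""
    outputs = [model(prompt) for _ in range(runs)]
    line_counts = {len(o.splitlines()) for o in outputs}
    return len(line_counts) == 1

print(consistency_check(fake_model, "List 3 key points"))  # True
```

Swap in stricter checks (regexes, JSON validation) as your format demands.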
Building Your Prompt Library
Don't reinvent the wheel. Save your good prompts.
Create a personal prompt library:
# Code Review Prompt
You are an expert [LANGUAGE] developer reviewing code for:
- Performance
- Security
- Best practices
- Readability
Review this code:
[CODE]
Format:
1. Critical issues (must fix)
2. Improvements (should fix)
3. Suggestions (nice to have)
4. What's good (positive reinforcement)
---
# Email Rewrite Prompt
Rewrite this email to be:
- Professional but warm
- Concise (under 100 words)
- Action-oriented (clear next step)
Original:
[EMAIL]
Rewritten:
---
# Explain Like I'm 5 Prompt
You are a patient teacher explaining complex topics to beginners.
Explain [TOPIC] using:
- Simple language (no jargon)
- A relatable analogy
- A concrete example
- Why it matters
Topic: [TOPIC]
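Stored as plain strings with placeholders, a library like the one above is trivial to load and fill from code. A sketch using Python's format-string placeholders; the library structure and `fill` helper are my own:

```python
# Condensed versions of two prompts from the library above
PROMPT_LIBRARY = {
    "code_review": (
        "You are an expert {language} developer reviewing code for "
        "performance, security, best practices, and readability.\n\n"
        "Review this code:\n{code}"
    ),
    "eli5": (
        "You are a patient teacher explaining complex topics to beginners.\n"
        "Explain {topic} using simple language, a relatable analogy, "
        "a concrete example, and why it matters."
    ),
}

def fill(name: str, **fields: str) -> str:
    """Look up a saved prompt and substitute its placeholders."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = fill("eli5", topic="recursion")
print(prompt)
```

Keeping the library in code (or a versioned file) means your best prompts improve over time instead of getting retyped from memory.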
Conclusion
Prompt engineering isn't about finding magic words. It's about:
Being specific - Say exactly what you want
Providing context - Give the AI a role and audience
Showing examples - Demonstrate good outputs
Setting constraints - Define boundaries
Iterating - Refine based on results
The Prompt Voodoo principles:
✅ Clarity over cleverness - Simple, direct prompts work best
✅ Examples over explanations - Show, don't just tell
✅ Structure over hope - Format your prompts systematically
✅ Iteration over perfection - Refine based on output
Your next steps:
Pick one task you do regularly - Email writing? Code reviews? Content creation?
Write a structured prompt - Use the 5-component framework
Test and refine - Run it 3-5 times, improve based on results
Save successful prompts - Build your personal library
Practice daily - Prompt engineering is a muscle
Want to go deeper?
I've created The Prompt Voodoo, a comprehensive course on prompt engineering that covers:
50+ proven prompt templates
Advanced techniques (chain-of-thought, few-shot, meta-prompting)
Domain-specific prompting (code, content, analysis, creative work)
Building custom GPTs and assistants
Systematic frameworks for any task
Learn more at: www.thepromptvoodoo.com
Questions? Let's connect:
Portfolio: jonathansodeke.framer.website
GitHub: github.com/Shodexco
LinkedIn: www.linkedin.com/in/jonathan-sodeke
Remember: The goal isn't to become a prompt engineer. The goal is to get AI to do what you want, when you want it, consistently.
Now go write some better prompts. Your future self will thank you for the time saved.
About the Author
Jonathan Sodeke is a Data Engineer, ML Engineer, and prompt engineering instructor. He's helped hundreds of professionals unlock AI's potential through systematic prompting techniques.
When he's not crafting the perfect prompt at 2am, he's building production AI systems and teaching others to work smarter with AI tools.
Portfolio: jonathansodeke.framer.website
GitHub: github.com/Shodexco
LinkedIn: www.linkedin.com/in/jonathan-sodeke