S2. Ep10 - The New Rules of Prompt Engineering in 2026
We first covered prompt engineering on the podcast almost exactly a year ago, and honestly, nearly everything we said has changed. So this week, Katie and Noel revisit the topic from scratch to break down what actually works in 2026.
The big shift is from rigid prompt structures to what we're calling context engineering. Instead of obsessing over the perfect sentence, it's about giving AI the right information upfront. We walk through how to use one conversation purely to build context, then bring that into your actual working conversation. It's the technique I used to build Clyde, and the results speak for themselves.
We also get into why JSON format produces dramatically better structured outputs, why telling AI what you don't want is just as important as telling it what you do, and why those "steal my prompts" posts are still terrible advice. Plus we compare how Claude and ChatGPT handle vague prompts completely differently, which changes how you should approach each one.
Oh, and we've got a Project Clyde update. Early access is live, and we've managed a 66% reduction in token usage since the beta. Pretty pleased with that one.
If you want to check out Clyde, here is the link to the website.
How to find us:
Join our membership over on Skool, where we support you on your AI and automation journey. We share exclusive content in the membership that shows you the automations we talk about in action and how to build them. Find out more about the AI Business Club here.
We have a free LinkedIn group, AI Automations For Business, which is open to all.
New for 2026, you can also find us on Substack: click here to subscribe and get all the latest news and updates from us.
If you would like dedicated help with your automations, or would like us to build them for you, then you can find our agency at makeautomations.ai
-
Katie (00:26)
Hello, welcome back to another episode. Hi, hello, I'm Katie. And as always, I've got Noel here with me today. How's it going, Noel?
Noel (00:36)
Yeah, it's going great this week. Thanks. Very busy. I've had lots of things going on in the Clyde world this week.
Katie (00:39)
Have you had a week? Yeah, do you want to give us a little update about Clyde and what's been going on?
Noel (00:50)
Yeah. So the early access version of Clyde is now available. Clyde can now create its own sub teams and manage them directly. There's also an updated organisation chart. There's a Kanban task board all integrated in there. But I think the biggest change I did was the entire prompting architecture. What I was finding with Clyde, and what people also find with OpenClaw, is it chews through AI tokens. So I was like, right, we need to fix this. The fix I've added in has saved about 66% of the AI token usage. So pretty big update. It's working really well at the minute. Lots more to come.
Katie (01:36)
Very impressive. And if anyone wants to go and check Clyde out, where do they go?
Noel (01:48)
So we've got projectclyde.app as the website and you can find all the links to GitHub to get the free access version and also the links to Patreon to get access to the early access version as well. All the links are there and we'll put it in the show notes as well.
Katie (02:04)
Amazing. Big update.
Noel (02:09)
Very big, yeah.
Katie (02:10)
So anything else going on in the AI world? Any updates?
Noel (02:20)
I've not seen anything that's raising any big red flags or anything like that. It seems fairly quiet this week. I'm sure they're cooking up something for the next week.
Katie (02:33)
Yeah, okay. So this week we are talking all about prompt engineering. Now, if you are a long time listener of the podcast, you're probably thinking, well, Katie, Noel, you've already done a podcast episode on this. And they would be right, wouldn't they, Noel?
Noel (02:54)
They would, yeah. That was way back in season one, episode two. That's when we first did prompt engineering. But lots of things have changed in the meantime. Lots of updates.
Katie (03:04)
Yeah, because that must have been almost a year ago. Almost to the week that we did that episode.
Noel (03:09)
Yes, pretty much. And it's kind of almost in that pre-reasoning model phase as well. I don't think we had those at that point. So yeah, lots of change.
Katie (03:21)
Yeah, I'm just trying to think how much AI has changed in the year that we've been doing the podcast.
Noel (03:29)
Yeah, I mean agents weren't really a thing, were they, this time last year? They were if you paid $200 a month to OpenAI. They'd give you access to an agent that didn't really work very well. But now they're just a common thing.
Katie (03:33)
Yeah, I can even remember in the summer we were talking to people and they were paying like £800 a month for an agent or something. Anyway, we're going off topic. So prompt engineering has changed so much in the last year. Do you want to tell us a little bit about that, Noel?
Noel (03:57)
Yeah, so I would say 12 months ago and previous to that, you had to be really on it when it came to prompt engineering. You had to have everything structured in the right kind of way in order for large language models like ChatGPT or Claude to understand what you wanted and give you roughly the right output.
Whereas now these days it's more chat orientated. The models are smarter and for the user, it's more of a two-way conversation to get to the endpoint. It doesn't mean prompt engineering is useless or outdated. It's still a really awesome skill to learn and it'll make your life with LLMs so much simpler. But the way in which we can use them is kind of changing and has changed a lot in the last few months. We'll dig into that in just a moment.
Katie (05:16)
Okay, so if you're still using prompts the way we did this time last year, it's outdated and there's a better way to prompt engineer, is what you're saying.
Noel (05:31)
Yeah, definitely. I have to think back to when I had my TikTok and I used to show ChatGPT prompts in GPT-3 and they had to be really rigorously structured to get the output you needed. You just don't really need to do that anymore. And the way we prompt has changed a lot. So the big change that we're going to talk about is context engineering.
Katie (06:02)
Okay, let's dig into that. So what even does that mean?
Noel (06:07)
So if you could go to any chat model right now and say, go off and build me a landing page for my website, this is my company name, and that's all you give it, it will go off and do that for you. It'll build you an HTML website. But obviously a lot of it is going to be made up because all it's got to work with is that I need a landing page and that's the company name. So it needs to make up everything.
The key these days is being able to provide the AI with enough context to understand what you want.
Katie (06:47)
And also, I guess, what you don't want. Because I think that's really important as well. We are so focused very often on what we want that actually we forget that putting in what we don't want as well is important.
Noel (06:50)
Definitely. It's really important.
Katie (07:05)
Because for me, one of my bugbears with Claude is, well, it's my fault, because I always forget to tell it not to include any Oxford commas. I do not use Oxford commas at all. Never have done. Probably never will. That's just my writing style. So it's even something really simple like that that we forget.
Noel (07:17)
Yeah.
Katie (07:34)
Because very often we put in the prompt, it comes out and then we're like, well, no, I didn't want it to be like this. I didn't want it to have Oxford commas. I don't want it to include this or that. Or I write in long sentences, not short sentences.
Noel (07:52)
Yeah, exactly. It's really important to get that right, isn't it? Otherwise you just end up in that loop going round and round. But I guess also sometimes you don't actually know what you don't want until you see it. I've had a lot of that recently. I didn't like that. I didn't know I didn't like it.
So with context engineering, there's lots of things. It kind of sounds probably quite complicated and you might think, I don't have the time to go through all of that. I just want something quick done. I don't have time to create all of that extra context to get what I need.
But what I've been finding recently is I would have an idea of what I need to do, and then I open up a fresh chat window and go, right, this is what I'm after doing. Go off and create me that context. So it'll create a document. We have a back and forth, talk about it, and then it'll create that output. So then when I come to do the job, I can just go, here's everything you need. Copy paste that in and off it goes.
That's kind of how I went off and built Clyde. I gave the language model all of the requirements and then it built a massive document based on that, which I could use as the context. And as soon as we started building it, it worked first time, which is amazing in itself. So it's really important to get that context right.
And obviously don't just, if you use an LLM to create that context, do read it first. A lot of people these days will just go, yeah, AI's created it, copy paste, whatever. And then they look at the output and go, hold on a minute, this isn't right. And then read the instructions again and it clearly says in there to do this particular thing. I think some people are getting a little lazy with that. But it's always important to double check all of your instructions before you start a project.
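Noel's two-conversation workflow above can be sketched in code. This is a hedged illustration only: the prompt wording, function names, and example project details are all invented for the sketch, not taken from the show.

```python
# Conversation 1 builds a context document; conversation 2 consumes it.

def build_context_request(goal: str, details: list[str]) -> str:
    """Prompt for conversation 1: ask the model to draft a context document."""
    bullet_lines = "\n".join(f"- {d}" for d in details)
    return (
        f"I want to do the following: {goal}\n"
        f"Here is everything I know so far:\n{bullet_lines}\n"
        "Ask me clarifying questions, then produce a single context "
        "document I can paste into a new conversation."
    )

def build_working_prompt(task: str, context_doc: str) -> str:
    """Prompt for conversation 2: the actual job, with the context pasted in."""
    return f"Here is everything you need:\n\n{context_doc}\n\nNow: {task}"

request = build_context_request(
    "build a landing page",
    ["Company: Example Ltd", "Audience: small business owners"],
)
# After conversation 1 finishes, paste its output in place of this placeholder.
working = build_working_prompt(
    "write the landing page copy", "<context doc from conversation 1>"
)
print(request)
print(working)
```

The point of splitting the work in two is exactly what Noel describes: the first conversation is free to go back and forth, while the second starts with everything it needs in one paste.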
Katie (10:10)
Yeah. I know sometimes I've typed too fast and I've put "can" instead of "cannot" and stuff like that. So then it's basically doing the opposite of what I want it to do. We're all trying to do things so fast, aren't we?
Noel (10:22)
Yeah. I also find when doing that initial context bit with AI, sometimes it might add stuff in. So depending on what chat model you're using, it might think, hey, this is a good idea, I'm just going to chuck that feature in there as well. So when you finally get to doing something, you look at the output and you're like, well, where did this come from? It's just gone off on a complete tangent, which is not very helpful.
Katie (11:00)
Not helpful at all.
Noel (11:06)
It tried to sneak some of those into my Clyde document. I was like, no, no, no, no, that's not what I was after. But yeah, giving AI as much context and information as you can is probably the most important thing to do these days. Once you start doing that, your outputs and your interactions with AI will be so much better. I've been finding that a lot recently in the last few months. Giving it all the information, the outputs, I'm just like, wow, okay, that's exactly what I wanted. Thank you very much. Move on to the next thing.
Katie (11:42)
Yeah. So can you give us some examples? Let's say you're writing an email newsletter, a weekly email to your email subscribers, and maybe you want help with some ideas on what to send that week. You know what you want to promote.
Noel (11:55)
Yeah.
Katie (12:11)
And you know you want the email to be around that particular subject that goes nicely into your offer. What would be a prompt that you would use now versus this time last year?
Noel (12:28)
So what I used to do was describe absolutely everything that I wanted manually. But what I find I do more these days is I'll go off and say, your particular offer is on a page on your website. I'll go and grab that URL for the website, add that into the conversation. I would give it examples of all of the previous emails so it understands the structure and your tone of voice as well. So I'm just feeding it as much information as I possibly can and arming it with all of that. So when I set it to work to create that email, it will understand everything and then create it first time, hopefully, or very close anyway. It will get you most of the way.
That's kind of how I've changed to using stuff. I find I'm going off, I'm copy pasting a lot of stuff these days. Whether it be links or documents, adding documents into the chat, I'm doing that far more than what I used to do. And getting far better outputs from it.
Katie (13:37)
Okay, so copying and pasting writing examples that you've actually written without AI. And then saying, I need some help putting a framework together for an email that I want to send out to my email subscribers. I send once a week. This is an example of a previous email that I have sent. The email needs to be around this offer on my web page, copy and paste the URL.
Noel (14:19)
Yeah.
Katie (14:35)
And then would you say, as you can tell from my writing, I have a warm and friendly tone? Something like that?
Noel (14:42)
Yeah, you can do that. I guess if you're using Claude and you've already got a brand voice skill, then reference that within the prompt. So write it using this specific skill. Claude would usually recognise that you're trying to create written content and will automatically apply it. But if you're using skills, I always add that in as well, just to make sure.
Katie (14:56)
Okay. So we've built a skill in Claude. We've referenced it. Is there anything else that you would put in before?
Noel (15:17)
No, I think that gives all of it. Because one thing you would normally find, especially when you're trying to promote an offer as a business owner, you'd just go, well, this is what it's called, this is the price. Whereas now if you give it the website, it understands more fully what's going on. So you'll get the nuances of what the actual offer is.
Katie (15:43)
Yeah, and how it's going to benefit your reader and how it's going to help them.
Noel (15:52)
Yeah, it can pick up on all of that just by looking at the website. It'll do a better job than we can.
Katie (15:59)
As long as your sales page on your website is good or has that information.
Noel (16:06)
Exactly. Because you might forget to tell the AI, well, actually, this is an introductory offer limited to a certain amount per day or whatever. In the old way you might have forgotten to include that. So when you got the output, you'd be like, oh, I've got to include that. Whereas now give it the website, it would find that anyway and incorporate it seamlessly into the text.
Katie (16:36)
Right. Okay. So hit return and it should come out with something that you're pretty pleased with first time round. And then if you don't want anything included as well, obviously you would have added that. So for me, don't include Oxford commas.
Noel (16:47)
Yeah, I've only started using them since we mentioned it on an earlier podcast. You called me out for not knowing what that was. I can't remember what we were talking about, but now I use it all the time.
Katie (17:03)
There's an amazing song by Vampire Weekend called Oxford Comma. It's one of my favourite songs of all time. But that might have been in your black hole. We call a period of Noel's life a black hole. It's when Noel was serving in the Royal Navy. So he doesn't know about a whole bunch of films that he hasn't watched, a whole bunch of music that he's not heard of. So I think that probably would have fallen into your music black hole.
Noel (17:38)
Yep. Yeah, that 10 year hole.
Katie (17:53)
It's always quite interesting when Netflix puts on old films and I'm like, my goodness, I used to love that film, I went to see it at the cinema. And Noel's like, never heard of it, never watched it.
Noel (18:04)
No. If no one had a pirated version on DVD, I'd never have watched it in the Navy.
Katie (18:19)
Or if it wasn't on the Christmas TV.
Noel (18:26)
Yeah, forces television. There's not much music on the BBC World Service either.
Katie (18:30)
No. So I guess you've started using them. Oh, fancy Noel.
Noel (18:43)
I have, yes. I think we were talking about brand voice for that one.
Katie (18:51)
Yeah, I think I then made it known about how much I dislike Oxford commas and you were like, what's an Oxford comma?
Noel (19:00)
It is, yeah. I mean, a client just taught me a new trick earlier. So I'm still learning.
Katie (19:05)
Every day is a learning day. I feel like it's always been a successful day if you've learned something new. People listening to this podcast might now have learned that Vampire Weekend had a song called Oxford Comma. Go and listen to it on Spotify because honestly, it's a banger.
Noel (19:22)
Yeah, I'll take your word for it. I have no idea still.
Katie (19:40)
That's something new you could also learn today. Okay. So is there anything else we need to know about the new prompt engineering of 2026?
Noel (19:46)
So one thing I'm finding, and it goes back to the architecture update I did for Clyde, is that AI works really well when you give it structured data. It can be a little bit techy, but basically there's a format called JSON, which is a very simple way of structuring data. It makes things really clear. So it would be like a parameter, like name, and then the value would be Noel. And then it could be like date of birth, and then the value. No, I'm not going to give that away.
Katie (20:38)
Your National Insurance number and just the three numbers on the back of your card.
Noel (20:45)
Yeah. But with JSON, it's really structured and straightforward to read. It can be quite difficult to create it manually. But what I tend to do, especially when it comes to image prompting, is let's say you want to recreate an image based on one reference image you've already got. You would supply that and then say to your LLM, right, give me the entire composition and style in JSON format. And it would go off and create that code snippet and you can read it all. It should be really straightforward.
But then you could feed that into the LLM again and go, right, now create me 20 different variations based on this JSON and give it to me back in JSON format. And then you can feed that into an image model and say, create me an image, and just give it that JSON file and it would go off and do it.
That kind of work doesn't just apply to images. It can also work for text because AI these days really loves structured data. They can't get enough of it. But unfortunately, that's not how we talk to chat models. So if you want something in a really rigorous structured way, start to look at getting it to provide it in a JSON format and then adding that into another conversation. Because then you'll find you get exactly what you want every single time, which is pretty awesome.
I guess it's been around for ages, but I think it's becoming more and more of a thing recently. A lot more prominent. A lot of people talking about it.
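The JSON image workflow Noel describes can be sketched like this. The field names (subject, style, camera, palette) are invented for illustration; in practice the LLM will choose its own structure when you ask it to describe a reference image.

```python
import json

# Step 1: a composition described as structured data, as an LLM might return it.
composition = {
    "subject": "a lighthouse at dusk",
    "style": "watercolour",
    "camera": {"angle": "low", "framing": "wide"},
    "palette": ["navy", "amber", "off-white"],
}

# Serialise with indentation so it's easy to read and paste into a chat.
snippet = json.dumps(composition, indent=2)
print(snippet)

# JSON round-trips without ambiguity, which is part of why models handle
# it so reliably compared with loose prose descriptions.
assert json.loads(snippet) == composition

# Step 2: the follow-up prompt Noel mentions, with the JSON embedded.
variation_prompt = (
    "Create 20 variations of this composition and return each one "
    "in the same JSON format:\n" + snippet
)
print(variation_prompt)
```

Each variation that comes back can then be handed straight to an image model, since it arrives in exactly the same structure every time.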
Katie (22:25)
Okay. So is there one thing that everyone gets wrong about prompt engineering in 2026?
Noel (22:53)
I guess it's just being too vague. I think we've still got that vagueness problem that we've had for years ever since chat models became a thing. There seems to be a perception where you could give it a really simple one line prompt and then expect miracles. And then you don't get it and you just go, well, AI is not for me. I think a lot of people fall into that trap.
You still have to be specific and it goes back to the context engineering as well. The more you give it, the more precise you are, the better it'll be. So don't be too vague when talking to LLMs.
Katie (23:40)
Do you feel like you have to change the prompt depending on what LLM you're using? So if you're using Gemini, Claude, ChatGPT, do you feel like you have to change the prompt for each model or do you use the same for each?
Noel (24:02)
So I would say if you're more along the vague side of things, you kind of have to give more structure and instructions when it comes to ChatGPT to be able to get the outputs you need. But if you've already created one that's got loads of context, they should work no matter which model you put it into.
If you've just got a quick question or a really vague thing that you need to ask, then Claude's kind of your man for that. Whereas with Gemini and ChatGPT, you do need to give it that extra little nudge. It can't be too vague otherwise they're just too eager to please and give you an answer.
Katie (24:58)
Yeah, I definitely agree with that. I feel like the prompt I have to put into ChatGPT versus what I have to put into Claude are actually quite different. I have to put in so much more information into ChatGPT.
Noel (25:03)
Oh yeah, 100%. I also find if you put something vague into Claude, it will come back and challenge you instead of coming back and going, oh, that's an amazing idea, you're so brilliant, here's some stuff that you're not really going to read. It will ask you follow-up questions to understand a bit more.
Katie (25:30)
Yeah, sometimes if you haven't given Claude enough information, it says, before I write this out for you, I've got a couple of questions. And I love that it then asks you a couple of questions to make sure it's actually got all the information it needs, whereas ChatGPT just doesn't do that.
Noel (26:01)
No, it's too eager to please, isn't it? But I think actually, going back to the follow-up questions that Claude asks, in the last month they added that new pop-up that comes up now. It gives you multiple choice options. And I find those are incredible.
Katie (26:16)
Yeah, love those.
Noel (26:38)
Because normally you would have to respond in numbered points. Yeah, but now you can just click.
Katie (26:45)
Yeah, you can just go click, click. Or skip even. If something's not relevant, you just go skip. Love that so much. It's so good.
Noel (26:51)
Yeah, or add in something else. I've been using that a lot in Claude Code last week.
Katie (27:06)
Whoever came up with that for Claude, 10 out of 10. I feel like you deserve a raise.
Noel (27:11)
Yeah. I'd rather just tap one, two, three or four on the keyboard or click any day of the week.
Katie (27:19)
Same. What would you say is the most overhyped prompting advice that's going around on the internet right now?
Noel (27:33)
That's a tricky one actually.
Katie (27:37)
Shall I say something that I see quite a lot? Because I'm very much in the entrepreneur, coaching, online business world. I get a lot of ads pointed my direction about AI and things like that. I don't see it quite as much, but it's definitely still out there. It's "steal my 10 prompts to build a 100K offer" or "steal my prompts and never be stuck for content again." Those are really my bugbear.
Noel (28:19)
Yeah. I don't get those that much these days. I think I've probably hidden those sort of ads.
Katie (28:38)
Yeah, I get it more from one or two different companies, but it seems to have faded out by a lot of other people. Can we just go into why, when you download someone else's prompts, that's actually not a very good idea?
Noel (28:45)
Yeah, what I found, because I used to download them just to see what they were back in the day, most of them were super vague. The responses you would then get back were kind of useful, but at the same time, it was like, well, this has got nothing to do with me. They're not tailored to you in any way, shape or form. They're incredibly generic. So if you think a thousand people have downloaded those prompts, you're going to sound exactly the same as those thousand people. And that's not really what we want.
I would still avoid them. I just don't find them that useful. And if you really wanted prompts to do specific things or to help you, just fire up ChatGPT or Claude and ask it. See what it comes back with. Because I tell you what, none of those people have written those prompts themselves. I'd bet my hat on that.
Katie (30:08)
Yeah. They're probably written by AI.
Noel (30:18)
Yeah, especially where there's some where they go "steal my 5,000 prompts." It's like, you haven't checked and verified any of those. So I would use AI to help you ideate those prompts and then take it from there and make them more personalised to you. Give it that extra context about you, what you do. These days it kind of already knows anyway with memories and things like that. It gets a picture of you as a person or business over time. But it's always good to reinforce those bits as well. They're just not good, not in my experience.
Katie (30:56)
Yeah. So this is sort of related to that. How do you know when a prompt is actually working for you versus just producing something that looks right?
Noel (31:20)
I guess it comes down to, would you repeat that time after time? If you feel comfortable thinking, yeah, if I run this next week, I'd be happy with that exact output every time, then that's when you know you've got it nailed down.
And from there, if you're on Claude, or even Gemini, you could then create skills based on that. You could say, well, look, this is what I usually ask for, this is the output I get, create a skill to do it. You could do the same in ChatGPT with custom GPTs if you're on a paid account. So from that point, all you've really got to do is supply the really vague prompt to say, just do this again this week, here's a file or whatever that you give it. And then it's all fully repeatable. So if you're comfortable with repeats, you've got it right.
Katie (32:27)
Awesome. Anything else you want to add about prompt engineering in 2026?
Noel (32:28)
Yeah, so in the olden days we used to have to have like...
Katie (32:43)
2025, when we were just getting electricity.
Noel (32:51)
Yeah, I was a bit less grey then, but not much. So there used to be prompt formats. You would always have the role, goals, expected outputs and workflows and all that sort of stuff. And you used to have to use that a lot in large language models to get what you wanted.
Whereas today, not really so much. But that skill is still really important when it comes to agents and configuring their prompts. If you go to Make.com or anything and create a new agent, you'll be left with a blank system prompt box. And it's then up to you to define those instructions. So having that skill of knowing what structure to use to get that agent working correctly, that's still really important.
But again, you could just ask your large language model and say, this is what I want this agent to do, create me a structured prompt for that agent, and it will go off and do a pretty good job. Obviously check and read. It used to be terrible at creating prompts for itself back in the day.
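The role/goals/outputs/workflow structure Noel describes for agent system prompts can be sketched as a Markdown-style template. The agent and its tasks below are made up purely for illustration; the structure, not the content, is the point.

```python
# A system prompt using headings and bold emphasis, which models parse well.
SYSTEM_PROMPT = """\
# Role
You are an email triage agent for a small business.

# Goals
- Sort incoming emails into: sales, support, spam.
- Draft a short reply for support emails.

# Expected output
Return JSON with keys "category" and "draft_reply".

# Workflow
1. Read the email body.
2. Pick exactly one category.
3. **DO NOT** reply to spam; set "draft_reply" to null.
"""

# Each heading marks one section, so both the model and you can scan it quickly.
sections = [line for line in SYSTEM_PROMPT.splitlines() if line.startswith("# ")]
print(sections)
```

Pasting something shaped like this into the blank system prompt box of an agent builder gives the model the same cues (role, goals, expected output, workflow) that rigid prompt formats used to provide by hand.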
Katie (34:03)
Back in the olden days. Back in 2025.
Noel (34:15)
Yeah, GPT-3.5 was absolutely awful for creating prompts. These days it's a lot better and it will give you a properly structured format. Usually it's in what we call Markdown format, which is another format AI really likes because it's got headings. So it understands what's really important. It's a bit like SEO with websites. You see the main title and then your subheadings. It'll pick up on bold text. You could have all caps, bold, "DO NOT DO THIS." And the AI will understand that a bit more. So yeah, still a really important skill to learn.
Katie (35:17)
Okay. Well, should we leave it there?
Noel (35:11)
Yeah, absolutely. Lots to get into there for anyone wanting to level up their prompt engineering.
Katie (35:17)
And if you've got any questions, please send us an email. We'll leave our email address below or come over to our free LinkedIn group. Anyone is welcome to join. We'll leave the link in the show notes for you. If you want to come along and join, let us know. If you liked this episode or if you want us to cover anything else on our future upcoming podcast episodes, thank you so much for listening and we will catch you next time for another episode very soon.