S1. Ep42 - Why Only 6% of Businesses Trust AI Agents (And How to Fix That)

In this episode, Noel and Katie dive into a shocking statistic from Harvard Business Review: only 6% of organisations fully trust AI agents. Despite an expected 86% increase in investment over the next two years, the trust simply isn't there.

They explore what AI agents actually are, why businesses are hesitant to adopt them, and the key concerns holding companies back, from cybersecurity and data quality to unclear processes and a lack of skills.

Most importantly, they share practical advice on how to build trust in AI agents, common reasons why agent projects fail, and when you should use a traditional automation instead of an agent.

If you would like to check out Episode 14 for more information on AI agents vs automations, you can listen now via the links below.

Apple Podcasts

Spotify

How to find us:

Join our membership over on Skool, where we support you on your AI and automation journey. We share exclusive content in the membership that shows you the automations we talk about in action and how to build them. Find out more about the AI Business Club here.

We have a free LinkedIn group (AI Automations For Business), the group is open to all.

If you would like dedicated help with your automations or would like us to build them for you then you can find our agency at makeautomations.ai

    Katie (00:26)

    Hello, welcome back to another episode. Hi, hello, I'm Katie. And as always, I've got Noel here with me today. How are you doing, Noel?

    Noel (00:36)

    Amazing as always.

    Katie (00:38)

    Good to hear. So, we were having a conversation about something that we read from Harvard Business Review and we thought this is going to make a really interesting podcast episode.

    Noel (00:53)

    Definitely. Yeah, I couldn't quite believe what I was reading, although I could in some ways.

    Katie (00:58)

    Yeah. So in a recent Harvard Business Review survey, only 6% of organisations say that they fully trust AI agents. So that's 94% of businesses don't fully trust AI agents, which to be honest, I didn't think it was going to be like only 6% of organisations that fully trust them. For some reason, I thought it was going to be a lot higher, not like over 50% or anything, but I did think it was going to be a lot higher than 6%.

    Noel (01:38)

    Yeah, I was hoping for at least 30-40%. That sort of area, maybe 20s, but yeah, six. I was like, hey, crazy.

    Katie (01:55)

    Yeah.

    Noel (01:55)

    But I guess before we kind of delve deeper into the episode, I guess we need to take a little step back and figure out what an AI agent is, for anyone that's brand new to this. Some people will maybe just be using ChatGPT, so I think we need to discuss what an AI agent actually is.

    And essentially what it is is a large language model which has access to tools and it makes up its own mind on how it does things. So you could give it access to, like, a Google Sheet, for example. And you could have tools there which would be search for rows or add rows or anything like that. So each of those would be a different tool. And then when you give it a plain language prompt, you say, could you add in a row for X, Y, Z? And then that agent will then figure out how it does that task. So it's quite fluid. So you do need to be kind of strict on the tools you give it access to and your system prompt for that agent so it understands. But essentially an agent or agentic AI is figuring it out for itself, which is a little scary. You're letting it loose to do things. And yeah, I can see why there is that distrust. But when you do it right, do it properly, then yeah, you're golden. But that's a rough overview of what agentic AI is.
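The Google Sheet example above can be sketched in a few lines of Python. This is a toy illustration of the pattern only, not a real model call: the tool names (`search_rows`, `add_row`) and the in-memory "sheet" are made up, and in a real build the tool schemas would be handed to a model API (OpenAI, Anthropic, etc.), which decides which tool to call and with what arguments.

```python
# Toy sketch of the agent pattern: tools are just functions the model is
# allowed to call; a dispatcher executes whatever call the model chooses.

sheet = []  # stands in for the Google Sheet

def search_rows(query: str) -> list[dict]:
    """Tool: return rows whose values contain the query string."""
    return [row for row in sheet if query in row.values()]

def add_row(name: str, status: str) -> dict:
    """Tool: append a row to the sheet and return it."""
    row = {"name": name, "status": status}
    sheet.append(row)
    return row

# The registry of tools the agent has been given access to.
TOOLS = {"search_rows": search_rows, "add_row": add_row}

def run_tool_call(tool_name: str, **kwargs):
    """Dispatch a tool call chosen by the model back to real code."""
    if tool_name not in TOOLS:
        raise ValueError(f"Agent asked for an unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

# In a real build, the model would return something like
# {"tool": "add_row", "args": {"name": "XYZ", "status": "new"}}
# in response to the prompt "could you add in a row for X, Y, Z?"
result = run_tool_call("add_row", name="XYZ", status="new")
```

The dispatcher rejecting unknown tool names is the "be strict on the tools you give it access to" point in miniature: the model can only ever reach the functions you registered.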

    Katie (03:33)

    Okay, so let's talk more then about that trust gap. Because I think it seems that the organisations that were surveyed for this Harvard Business Review report only really trust agents for routine limited tasks or supervised tasks.

    Noel (03:55)

    Yeah, and that's a good place to start. If you've never had an agent within your business, then that's a good place to start. You give it something that's routine, and you know what the expected outcomes should be. So when you start seeing those outcomes from the agent, you can easily spot whether it's doing right or wrong, and then go in and correct it if it needs to. So that's fine, that's a good starting point, but yeah, you do then need to start building upon that. So you're giving it a little bit more access to things and then giving it a little bit more trust than you would normally. But obviously you only do that in areas where it's not going to destroy anything within the business. It's not gonna send stuff out to clients that it shouldn't do and all that sort of stuff. So yeah, it's kind of starting small and building that trust step by step.

    Katie (04:55)

    Yeah, investment in agentic AI is expected to increase by 86% over the next two years. So people are really putting money into this, but somehow the trust isn't following.

    Noel (05:07)

    Yeah, that's also an odd stat. The first stat that shocked me was 6%. And then this 86% one, I was like, okay, I get why most businesses, big and small, want to have AI agents within there. They see the benefits, but if you're not implementing it in the right way, then you're basically going to waste that investment or not get the most out of that investment. It could increase your productivity by 10%, but if you did it properly, it could have done it by 60% or whatever. So yeah, interesting. There's a lot of money going into this.

    Katie (05:51)

    Yeah, it's almost like this stat with the 86% increase of investment going into it, and people not trusting it, is almost giving me NFT vibes. And I don't really know why.

    Noel (06:18)

    That's very true. I get that. Yeah, I could see that. Do I want to buy this thing of a monkey? Yes, I probably will. I don't quite trust that I'm going to be able to sell it, but I'm going to buy it anyway. Yeah, but this is going to be around a lot longer than NFTs.

    Katie (06:40)

    Let's hope so. Okay, so I just thought we could talk through some of the concerns that the report made. So people who were surveyed for this Harvard Business Review report said that their concerns were cybersecurity and privacy. So like, is this going to leak my data or do something rogue?

    Noel (07:16)

    Yeah, that's a very, very common question when you do anything with AI. But the majority of the agents, if you've built them on a no-code platform or you've built them in code, then you're using the API for those particular AI models that you want to use for that use case. And obviously check their terms and policies and all that sort of stuff.

    But the big ones like Anthropic and OpenAI, they're all fully secure. They're fully encrypted. They don't look at the data. They don't train from the data. So it's fine to use. It's just that there's not as much education in that sort of area for people to fully trust these AI providers.

    Katie (08:03)

    Okay, another concern was data output quality, so hallucinations and wrong answers in business workflows.

    Noel (08:12)

    Yeah, and I think we could probably attribute most of that to either feeding the agent the wrong context. So your system prompt might be wrong or it's called the wrong tool and it's brought in data that probably shouldn't have done to give the answer, even though it thought that was the right thing to do. So yeah, that's where you're going to get those sort of hallucinations and errors.

    What I find now with the later models that we've got at the end of 2025, the hallucinations are better, but it's just making sure that we've got the right context going in. Otherwise, you're just never going to get the right answer.

    Katie (08:47)

    Yeah, okay. And then processes not ready for automation. So maybe the process itself is messy. They haven't got an SOP. It's undocumented. Which surprises me a bit, if I'm honest with that one. I don't know. I expect most businesses or organisations to have some sort of organisation.

    Noel (09:06)

    Yeah, I think some businesses will obviously have the business processes, but is it ready to add an agent into it? Is it properly defined enough? If your team is going through a process and then they often go off and do their own thing halfway through, then that's probably not going to be ready for automation because your AI agent's going to do the same.

    Katie (09:42)

    Yeah, that's a really good point to make. Okay. And then the other top concern from the report was tech limitations, but this is not about using the tech itself. It's the systems not being able to talk to one another.

    Noel (09:58)

    Yes. If your system is completely away from the internet, it's not a cloud service and things like that, then that's when it can become kind of difficult to then start adding in agents and things like that. You're going to have to either host a large language model in-house, or you're going to have to find a way of getting around business firewalls. It gets kind of messy. So I agree, there are technical limitations, but if your stuff's already on the cloud, there are secure ways of getting around that sort of thing.

    Katie (10:54)

    I find as well, when it comes to tech limitations, a lot of what people are advertising or promoting as agents are actually just automations with a chatbot.

    Noel (11:13)

    Yeah, I do see this quite a bit. They'll say that something is an agent. And then you look at it and think, well, that's an end-to-end automation. Yes, it's got AI in there, but the AI is taking something in and giving the same thing out every single time. So that's not an agent. There's no thought process in there for it to say, well, I'm going to choose this route instead of that route. It's got one route.

    Katie (11:45)

    Yeah, and I know we talked about the difference between AI agents and automations in Episode 14, but can you just kind of do a brief overview to define what is an automation versus what is an AI agent? Just so if anyone is listening who is new and they've not listened to Episode 14, they've got a clear understanding because I feel like those lines can get blurred very easily.

    Noel (12:22)

    Yes. And it also doesn't help that the words AI agent are kind of like massive buzzwords. So everyone thinks that they need an AI agent because that's what everyone says. That's not true either really. So if your process is a repeatable process, let's say you're going to take a transcript from a call, and then you're going to go off and create transcription notes and then maybe send a quotation back to whoever you had that call with. That doesn't need to be an agent. Yes, you'll have AI in the middle of that, but you're going to have the same thing coming in and the same thing coming out. And that's what you want every single time. It's quite important that that bit's right. So for that, that would be an automation.

    Whereas if you had an agent, you could say, let's say you attach your agent into Slack and then you have a database in Airtable or Notion. You could then say to it, what's on my to-do list in Notion? And then it would use tools that it's been given access to and think, well, how can I get that information to best answer this question? So it'd be like, well, I'll probably use the search tool. That'd probably be a good place to start. And then it might go off and get information and that sort of stuff. So it's thinking about what it's gonna do.

    But it's never really 100% repeatable. In some cases, I would say like 99% of the time it's gonna be fine, providing your system prompt is good. If it's not good, then yeah, you're probably gonna run into trouble. But if it's good, it's gonna be perfectly fine. But if you need something where you need to talk to something like you would do with ChatGPT and have a two-way conversation with a bit of history in there, or you're happy for it to go off and do things in its own way, then that's where you need an agent. But if it's easy, same thing in, same thing out, automation every day of the week.

    Katie (14:40)

    Yeah. So automation is fixed steps. It's predictable input, output, great for when X happens do Y and Z exactly this way. Whereas AI agent has judgment, interprets context, chooses actions, may use tools.

    Noel (14:46)

    Yep. I mean, you can also put in some logic as well into your automation. You can say, within Make and n8n, you can have different routes within your automation based on conditions. So if something is mentioned within that call, it might send it one route or it might send it down different routes. So there's different logic that you can build in, but it is fixed. Whereas an agent would just figure it out itself.
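The fixed, condition-based routing Noel describes (like a router in Make or n8n) can be sketched as a plain function. The keywords and route names below are invented for illustration; the point is that every branch is written out in advance, so the same input always takes the same path:

```python
# Sketch of a fixed-route automation: conditional logic, but no "thinking".
# Unlike an agent, the set of possible routes is fully enumerated up front.

def route_transcript(transcript: str) -> str:
    """Pick a predefined route based on what was mentioned in the call."""
    text = transcript.lower()
    if "quote" in text or "pricing" in text:
        return "send_quotation"
    if "complaint" in text:
        return "escalate_to_support"
    return "file_meeting_notes"  # default route when nothing matched
```

Run the same transcript through a hundred times and it lands on the same route every time, which is exactly the predictability an automation gives you and an agent does not.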

    Katie (15:22)

    Yeah. Okay. The AI agent, you're kind of letting it loose a bit more. Whereas the automation, you have complete control over it.

    Noel (15:30)

    Yes, absolutely.

    Katie (15:42)

    Yeah, okay. So I guess that's why a lot of people don't fully trust it, isn't it? There's not the fixed outcome. It is kind of letting it loose.

    Noel (15:55)

    Yeah, they've probably put an agent in where they should have had an automation. It was mentioned in that report actually, where they were saying, look, a lot of repeatable processes, we couldn't trust it to do that. And they said, you shouldn't be using the agent for that. But then, all things considered, this is all new to everyone. This isn't something that's been around for 20 years.

    Katie (16:29)

    No, it's new and I guess a lot of people are learning all at the same time as well, aren't they?

    Noel (16:28)

    Yes, exactly.

    Katie (16:29)

    Okay so let's have a look at why agents fail. What would you say is the biggest reason why most agent projects would fail?

    Noel (16:35)

    I guess one of the big ones is garbage in, garbage out. You can't expect an agent to be a miracle worker. Yes, it's very clever. No, it might be able to figure things out, but you've got to make sure that the information that you're giving it and the instructions that you give it are absolutely spot on. You really need to get in and test it as well. So you need to think, if you've got a team, you might have somebody that will put lots of thought into the question they ask. And then you might have someone who's just going to ask a little five-word question and they expect great things. So you've got to test all of those bits out.

    Making sure that you're giving it the right context, the right tools as well. You don't want to be giving it access to the wrong things or giving it access to the wrong database. So you might have multiple and you'd be thinking, well, why is it talking about this? I want to be talking about that. But yeah, if you've got that wrong, then you will get the garbage out, unfortunately.

    Katie (17:56)

    Yeah, and we say this a lot as well, even with prompt engineering. If you're putting in a really basic prompt, then you're only ever going to get a very basic answer.

    Noel (18:02)

    Yeah, it's also important to put into your agent prompts expected outcomes. So if somebody asks this, I want you to go and use this tool followed by that tool and then give the response. So if there's things in there where you think, that should be fairly predictable, then bake that into that prompt. And then, depending on the AI model you've chosen, it should follow those instructions.
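Baking expected outcomes into an agent's system prompt, as suggested above, might look something like this. The scenario, tool names (`search_orders`, `get_shipping_status`), and wording are all hypothetical, just to show the shape of a prompt that spells out the predictable path:

```python
# Sketch of a system prompt that bakes in an expected tool sequence,
# so predictable requests follow a predictable path.

SYSTEM_PROMPT = """You are an assistant for the operations team.

When the user asks for the status of an order:
1. Call search_orders with the order number.
2. Call get_shipping_status with the result.
3. Reply with the status only. Do not call any other tools.

If the order number is missing, ask for it instead of guessing."""
```

How faithfully those numbered instructions are followed depends on the model chosen, which is why testing with both careful, detailed questions and vague five-word ones matters.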

    Katie (18:40)

    Yeah. Okay. What would be another reason why AI agents fail?

    Noel (18:48)

    I guess it comes back to no clear process. So if you're expecting it to go off and do something for your business, but really you and your team can't do that process effectively already, then it's never probably going to meet your expectations. You're going to be expecting it to produce great things and then it's not quite gonna fulfil that need and you'll be thinking well why isn't this working. But really it comes down to understanding exactly what you want it to do.

    Katie (19:23)

    Yeah, so I guess as well when people say these AI agents are going to take my job, they're not really, because someone still needs to be able to understand what the outcome is to be able to even set them up in the first place.

    Noel (19:43)

    Exactly, yeah, definitely. You've got to have someone there to tap the keys to ask the question. Or talk to it, whatever.

    Katie (19:56)

    Yep. Very good point. Any other reasons why AI agents might fail?

    Noel (20:03)

    It also comes down to guardrails and things like that. So you can put guardrails within the system prompt. You could say when someone asks this sort of thing, only do this. So that's really, really important. You don't want it to go off and use a ton of tools when it could have just used one.

    I had an example of that when I was testing an AI agent in Make.com and I was using an OpenAI model and it was just going off and it was using every single tool in its arsenal to answer the question. I was like, what's it doing? And I thought, I didn't want to blame Make for this sort of error. I thought, well, maybe it's something else. And I switched out for Anthropic and then used one of their models. And it just went, yeah, cool. Yeah, I'll just use that tool there. And then that gave you the answer.

    So having those sorts of guardrails is really important. It also helps with prompt injections as well. You could have people with nefarious intentions who want to maybe get the agent to do things that it's not supposed to do or access things that it shouldn't. So really important to have that sort of thing built in.

    And n8n have just brought out a module within their tool that does guardrails. I've not tested it out, but it is in there. And I'd expect Make to do something fairly similar, although theirs is pretty strict on what it does already.
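A code-level guardrail of the kind described above can be as simple as an allow-list of tools plus a scan for obvious injection phrases. This is a rough sketch with invented tool names and an invented phrase list; purpose-built guardrails (such as the n8n module mentioned) do considerably more:

```python
# Sketch of a pre-flight guardrail: check the model's requested tool
# against an allow-list, and screen the user message for blatant
# prompt-injection phrases, before anything actually runs.

ALLOWED_TOOLS = {"search_notion", "post_slack_message"}
INJECTION_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
)

def guard(user_message: str, requested_tool: str) -> bool:
    """Return True only if the request passes both checks."""
    lowered = user_message.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return False
    return requested_tool in ALLOWED_TOOLS
```

A static phrase list like this only catches the crudest injections, but even so it stops the agent reaching for tools it was never meant to have.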

    Katie (21:35)

    Yeah. And I guess with Maia coming out as well, maybe that would help with the guardrails.

    Noel (21:55)

    So yeah, Maia is just going to help you build the automation with the agent in it. But they do have tools to help you out with the system prompt. So you could say, well, rewrite it and then add in there, I want it to only do this and this. Then it will produce really, really good system prompts when you use their AI to do it.

    Katie (22:20)

    Hmm.

    Noel (22:23)

    And I guess the other thing is why they fail is you might give your agent too much. So you might say, well, I'm going to give you access to Notion. I'm going to give you access to Slack. I'm going to give you all of this information from all these different locations. I'm going to dump it in one big super agent. And at that point, you would have to really nail down the guardrails in your system prompt.

    But at the same time, you're kind of giving it too much. So if somebody comes in with a vague question into the agent, then it's going to go down all kinds of crazy different routes to try and answer your question as best as it can.

    So what I would say is, have agents that are specific. What you could have is like a classification agent. So that's the one that the user talks to. But then that classification agent has access to like three others. So one could be going off to Notion, one could be going to Slack. So when you ask a question about Slack, that first agent talks to just the Slack one and then it gets the response and then it's fine. Yeah, definitely if it's getting too big, think of ways that you could probably cut that down and then link them together. Top tip.

    Katie (23:57)

    Okay, that was a great top tip, thank you, Noel.

    Noel (23:53)

    You're welcome.

    Katie (23:57)

    Okay, so only 6% of businesses fully trust agents today, not because AI is doing something wrong or just going off and doing its own thing, but it could be that maybe the processes aren't in place, guardrails are missing, people are jumping to agents thinking that they need an agent, and sometimes actually all they need is an automation.

    Noel (24:28)

    Definitely, that's the big one.

    Katie (24:30)

    Yeah, for sure. Like I said, if you haven't listened to Episode 14, where we go through the difference between AI agents and AI automations, then definitely check that out. And of course, we will leave the link below for you. As always, we appreciate you so much for listening. If you like this podcast, please leave us a review or leave a rating because that really helps us in creating more amazing content for you. If you've got any questions that you would love for us to discuss or answer on the podcast, then please get in touch. You can email us hello@makeautomations.ai or you can come over to our free LinkedIn group which is called AI Automation for Business. Thank you so much again and we will catch you next time for another episode very soon.
