Ep.77 Making better decisions in the age of distraction and AI

Todd Battley
Chief Executive, Australia
Jason Mattingley
NHMRC Leadership Fellow, Queensland Brain Institute
21 August 2025
17 min

Maria Rampa: Hi I’m Maria Rampa and welcome to this episode of Engineering Reimagined.

Today’s episode is the second of a two-part series where Aurecon’s Chief Executive – Australia Todd Battley sits down with Professor Jason Mattingley from the Queensland Brain Institute exploring how to make better decisions in an age of distraction, bias and artificial intelligence.

Today, Todd and Jason discuss why all decisions are not created equal – and why the ones that really matter require more than gut instinct. Todd and Jason look at how to think more clearly, avoid mental traps, seek out diverse perspectives and use evidence over emotion.

As AI tools become more deeply woven into our decision-making processes, Todd and Jason also discuss the importance of keeping the human in the loop. They explore the benefits of using AI as a tool for cognitive offloading without surrendering our ability to reason, reflect, and learn.

If you've ever wondered why a rational choice can still feel wrong, or how to think more deeply when it counts, this episode is for you.

+++++

Todd Battley: We talk about emotions and as it relates to decision-making and one of the things you will often hear people say is they have a gut feel about something. And I've often wondered about what that actually means because I'm guessing the gut isn't telling us very much, it's actually something happening in our brains.

Jason Mattingley: Yeah, we use that term, the gut feeling. It's something that feels natural, kind of easy, feels right. It's a general feeling of certainty. It's not that our gut is literally telling us what to do. Of course, the brain is controlling the gut, amongst other things, but the signal is going in that direction rather than the other way around. But we're talking about intuition, basically. And it raises an interesting question: when we behave, when we make decisions, when we reason, how do we know whether the quality of our reasoning and our decision-making is good or not? Most of us would say, well, there's a feeling. If I say, what's 2 plus 2, you'll say 4, and you'll have high confidence that that's correct, because you've had a lifetime of being exposed to mathematics and you just know that's a fact. But then there are other decisions we have to make. These are the more realistic day-to-day decisions: well, what school should I choose for my child? There's no objective, correct answer to that. In those cases, you're evaluating lots and lots of evidence with different degrees of reliability, and uncertain, very long-term outcomes rather than a short-term outcome. There, we don't have such a strong intuition anymore. And so that's where you need to be very unemotional, and just look at the evidence and weigh it against your principles. For those kinds of decisions, I would generally try not to go on gut feeling, because gut feelings can often lead us astray. They're a bit more susceptible to bias, and to the availability of information. I use this example in some of my classes: people who have a fear of flying see these horrible crashes on the news and think, that's so horrible, I'm not going to fly anymore. Forgetting that actually it's much more dangerous, statistically speaking, to get in your car and drive to work.
And so that's an example of where a person's intuition has led them to irrational behaviour. You can take that to any other situation and say, well, I actually have to look at the evidence in a more dispassionate way, and I need to be aware of what biases I might be bringing to the table in the decisions that I'm making. So I always say, when you're making decisions that really matter for the future, it's important not to go on gut feel, but to go on evidence and to seek out differing opinions. That's the other thing. We often seek out views that jibe with our own, and there's this idea of being in an information bubble. It's always easier to talk to people who agree with you, but it's actually more productive to talk to people who don't agree with you, or who make you think in a different way. When it comes to making good decisions, I think that's a really important thing to bear in mind as well.

Todd Battley: So if we do make a really rational decision based on great data with high reliability, why can it sometimes feel wrong?

Jason Mattingley: That's something that we and many other people are investigating. One of the concepts that we're interested in is the concept of noise. Anyone who's done mathematics or is in engineering will know about noise: stochastic processes, things that can't be predicted, things that can keep you from the signal you're trying to access. People think of signal-to-noise problems and that kind of thing. When it comes to intuition or gut feel, we have a suspicion that what people are really reading out in their level of intuition or certainty about a decision is the amount of noisiness in their brain processing. We think that maybe what people are able to do is tap into how noisy their information processing is at any given moment. Maybe that's the key. Now, it's another question whether we can actually zoom out, if you like, float above ourselves and ask, how much noise is in my brain at the moment? That seems like a bizarre concept, but actually there is a bit of evidence. We record brain activity while we have people in the lab doing decision-making tasks, for example, and we do have some evidence that people can tap into how noisy their processing is, and that becomes a readout of how certain they are or how good they feel about the decision they've made. And sometimes that can go wrong. Sometimes people can get the wrong read on the noisiness, be very confident, and actually get the decision wrong. Of course it goes the other way as well: sometimes people can take a stab in the dark, just guess, and get it exactly right. So it's the amount of noise that's in the system. Forget about the gut; it's all in the head, and how noisy that system is. And maybe I can elaborate a bit. What is noise in the brain? What does that really mean? It's not auditory noise, it's not static.
All of the nerve cells that are connected together in this huge network communicate in a way that has a certain amount of uncertainty associated with it. There's a bit of noise built into the system, in the way that the chemicals and the electrical signals flow. There's always a bit of noise in there. I think it's that that might be the key to helping us understand whether we've got a decision right or wrong.

Todd Battley: So we shouldn't be saying perhaps, I've got a great gut feel on this one. Maybe the answer is, it feels like a quieter decision. I'd imagine that the old-fashioned gut feel or the quieter decision feels quicker, we forget about it quickly and we move on with our day, the other one probably stays with us a bit longer.

Jason Mattingley: That's right. The famous psychologist Daniel Kahneman, who won the Nobel Prize in Economics in 2002, wrote a whole book about this. Most people in business have read that book or know a little bit about it.

Todd Battley: That's Thinking, Fast and Slow.

Jason Mattingley: Thinking, Fast and Slow is the book. He spent his life trying to identify biases and why people jump to the wrong conclusions, feel like they've made the right decision when it's been wrong. He writes about this example, and I give it in my lectures because it's very simple and people get it wrong when they first hear it. He says, okay, imagine this problem: a bat and a ball together cost $1.10. Let's put it into modern terms and say it's $110 for a bat and a ball. That's a bit more realistic. The bat costs $100 more than the ball. What does the ball cost? Now, I won't put you on the spot and ask you to answer it. But the natural thing to do is to say, oh, the ball costs $10, because what people do is a subtraction: 110 minus 100 leaves 10. But that's the wrong answer, because the point is the bat costs $100 more than the ball. The correct answer is that the ball costs $5. People can work that out after they listen to this, but that is the correct answer. It's just an example of something that seems like a simple arithmetical problem. 110 minus 100, what's the answer? It's 10, everybody knows that. It feels good, but it's wrong. And that's an example of a bias to find the most effortless solution. You turn the question into a simple arithmetical problem, the answer occurs to you, and you move on, but it's the wrong answer. So that's why I say, and why Kahneman says, whenever you've got a really serious decision to make, step back from it and think slowly and deeply, use counterfactual reasoning, what ifs and so on, and you're less likely to make those errors.
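[For listeners who want to check the $5 answer, the bat-and-ball problem is really a pair of simultaneous equations, shown here in the $110 framing used above:

```latex
% b = price of the bat, c = price of the ball
\begin{aligned}
b + c &= 110 && \text{(bat and ball together cost \$110)} \\
b &= c + 100 && \text{(bat costs \$100 more than the ball)} \\
\Rightarrow\; (c + 100) + c &= 110 \;\Rightarrow\; 2c = 10 \;\Rightarrow\; c = 5,\; b = 105
\end{aligned}
```

The intuitive answer, $10, fails the second equation: a $100 bat would be only $90 more than a $10 ball.]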

Todd Battley: I saw an article that talked about brain activity, I think in some detail, and the effect on long-term AI users, really heavy users of the technology. What it showed was that even over a relatively short period, where things like ChatGPT and other large language models have become ubiquitous, they're on everybody's computer, they're on everyone's phone, even in that short period of time, really only a couple of years, people have got really reliant on it. I think what the article was trying to show is we'd lost some capacity for reasoning, or we just accepted the output and moved on really quickly. In our STEM professions, we see this as a bit of a risk, not just for graduates coming out of university, but for all of us as we, to take Kahneman's example, just quickly go for the easiest thing. What impact do you think AI will have on our cognitive ability and how our brains function?

Jason Mattingley: Any tool can have risks and benefits, and it depends how you use it. With a sort of uncritical reliance on a technology, there are potential pitfalls. I think AI is great for what we sometimes call cognitive offloading: you can get it to do perhaps some more routine things where there's little risk of an error but that would take precious cognitive resources from you, and it gives you the automatic answer. In the way that we use GPS in the car, we all do that now, I do anyway, or the diary on our phone that sends us alerts. For very simple processes, I think AI is good. As soon as we start to rely on it to make the decisions, as soon as you take the human out of the loop, then things can go wrong. There's always the opportunity for error. And of course, the databases that large language models and other AI systems are based on can be biased themselves. So I do think the risk is that people who slavishly use AI to do their job for them are definitely going to get out of the habit of thinking and reasoning for themselves. I don't think that capacity is then lost forever, but like any habit, you've got to keep at it. If you've spent years relying on AI to make all your decisions for you and suddenly that's taken away, it's going to take you some time to relearn what it's like to make decisions for yourself. So I think the key, perhaps, is keeping the human in the loop, making sure a human is responsible for arbitrating and making the final decision based on the output of a large language model.

Todd Battley: Many workplaces, in fact all the professional services globally, have really worked on an apprenticeship-style model. We don't use that term, but someone starts out as a graduate, coming in to work under the direction of someone a bit more senior. Over time you learn how the business works and how a particular function works, whether you're reviewing a contract or doing some calculations in our line of work, or whatever it might be. You start off with some really basic activities that, after a year or two, you look back on and think, I can't believe how much I struggled with that early on, but I can now do it in my head. Then you grow, and the complexity of the work often grows with the years. If you look at the great practitioners in any of the traditional professions, they've honed their craft over decades, and that's really valuable. What's fabulous about some of this AI is that it's going to take everything that's in the brains of those wonderful practitioners and make it available to everyone. If you had a message for our younger listeners in particular, who will still be going through an apprenticeship-style model in most of the professions, what can they do practically to make sure they're using AI only for cognitive offloading, without becoming reliant on it? Because we do want them to be able to deal with complexity, and to do that really well they have to understand what got them there.

Jason Mattingley: We're struggling with that at the university at the moment, because students have access to these tools. When we give them items of assessment, that might be writing a laboratory report or an essay, sitting an exam where you write answers to things, or even a multiple-choice test. All of those can be done with AI; all of those could be done with ChatGPT. Ultimately, what I would ask any expert, or any person studying to become an expert, is: at the end of the day, do you want to have that knowledge or not? How meaningful is it to you if you say, well, look, I do know about this area, but I can only answer your questions when I've got my computer and ChatGPT next to me? I wouldn't want to go to a doctor or a lawyer with a problem and have them say to me, look, I did pass my undergraduate degree in that field, but I used AI to get there, so actually I don't know any more than you do about this topic. But it doesn't matter, I've got my computer here, we'll get the answer together. I don't think people would go for that. I think we still want a person who's an expert. So I think it's worth asking a moral question of yourself, which is: if I'm at university, if I'm training in a job, if I'm an apprentice, is this something that I want? There's a satisfaction that comes from learning things. There is nothing like the feeling, that dopamine reward, when you crack the problem. It took you days or hours or whatever it is, and then you feel like you've got some competence. That can be in the workplace, as you were describing, but it can be anything: when I hit a golf ball and it goes straight, that's just so rewarding. I could get a machine to hit that ball for me, but that's not rewarding. So it's always important to ask, why am I doing this, and is it meaningful to me if I use AI as a crutch for everything? Where do I want to be in this system? I want to use the tools, but I want to be the expert. I want to have internalised that knowledge.

Todd Battley: One of the things that we do see in the modern workplace is just the sheer amount of data that is available to help. And I wonder if you had any advice. Well, firstly, how do we process that? We look at overwhelming amounts of data to make sometimes simple, sometimes complex decisions. What's going on in our brain there? Is that back to the sort of noisy, quiet bit? And then, have you got any advice for our listeners on what they might be able to do practically to weed their way through that and make a good decision, or at least be in the best position to make a good decision?

Jason Mattingley: We can only focus on perhaps one or two key things at any given moment. That's why we need to avoid distraction. And we have this concept of working memory: you can only juggle maybe two or three, perhaps four, things at a time and see how they relate to one another. Whereas automated systems, AI systems, can do amazing parallel processing, so they're actually great for jobs that require trawling through vast amounts of data. Humans are not good at that. What humans are good at is being flexible, being very adaptive, being able to change behaviour, learn things quickly and generalise from one situation to another. We're sort of a jack of all trades, maybe master of none, but we can do lots of different things. Just witness what the human race has done. AI is good at doing a deep analysis of complex, large datasets, trawling through them and looking for patterns to offer up to the expert to interpret. Radiology is a good example. The radiologist looks at an X-ray or MRI scan and says, oh yeah, this looks a bit abnormal. But that's a very fallible kind of system. Yes, you need many years of experience looking at X-rays, but human memory is fallible by its very nature. AI is actually much better at those kinds of things, trawling through millions of X-rays and pattern-matching the one it's seeing now against what it's got stored in its memory; humans are not great at that. So it can offer an opinion, and then the expert can take that and say, well, I can now look at this and say, I think that's a good bet, I'm happy to rely on that. Or, no, I can see it's made an error, my experience tells me where it's gone wrong, I can intervene and veto that diagnosis. Anything that involves extracting patterns from large datasets, models and algorithms are great at, and humans are not. Making high-level decisions, and these sorts of counterfactual-reasoning-type things, humans are great at, and LLMs are getting better, but they're still not there yet.

Todd Battley: Well, Jason, it's been fascinating to talk with you today. I guess what I've really taken from this is that our wonderful but fallible brains do remarkable things. And it's great that we have people like you building an understanding of that, because it can help us design systems around our lives that help us be more effective. I've loved the idea of noisy and quiet, perhaps better than gut feel, for why we make the decisions we make, and some great ideas around counterfactual thinking. That's something I'll be able to take into my life and work, and I thank you for it. Do you have a final thought for our audience?

Jason Mattingley: Well, to come back to the AI, which we've talked about quite a bit, I'm an advocate. I think it is a great tool. I think that it will revolutionise what we can do, how we do it, efficiency in the workplace, which is really important. But my tip would be, you know, use it as a way of augmenting what you do rather than replacing what you do. Maybe that's the best way to think about it. Don't become a slave to the tool, use it to your advantage, but retain your capacity to learn and to think and to reason as well.

Todd Battley: That's great advice. We can all take that on. I listen to quite a few podcasts. One of my favourite ones is The Howie Games. I love the guests he gets on. I love the fact that he can ask them questions and they open up to him. And he finishes off most of his episodes with this one question, and I'm going to borrow it shamelessly from him, Jason. And so, it's the question that divides the nation. So you ready?

Jason Mattingley: I'm ready.

Todd Battley: Here goes. Pineapple on pizza or not?

Jason Mattingley: All right, well, so as a Queenslander, it's true, I probably should say yes to the pineapple, but actually I'm not a big fan of pineapple on pizza. Maybe that's because I actually grew up in Melbourne, so pineapple wasn't such a big part of my growing up experience with pizzas. I prefer to keep sweet things for dessert and savoury things for pizza.

Todd Battley: Well, you heard it here first. Professor Jason Mattingley, thank you very much. 

Jason Mattingley: Thanks very much, Todd.

+++++

Maria Rampa: That’s it for today’s episode. Thanks for joining us as we unpacked the science of better decision-making.

Remember: slow thinking, diverse perspectives, and keeping the human in the loop are more important than ever in a world filled with data and digital noise. Use the tools, but don’t lose the thinker.

If you enjoyed this episode, don’t forget to check out part one where Todd and Jason cover the perils of distraction and strategies for carving out time for deep thinking in today’s modern work environment. Hit subscribe on Apple or Spotify and don’t forget to follow Aurecon on your favourite social media platform to stay up to date and join the conversation.

Until next time, thanks for listening.

How do we avoid cognitive traps and make informed decisions?

In the second instalment of this two-part series, Aurecon’s Chief Executive – Australia Todd Battley continues his thought-provoking conversation with Professor Jason Mattingley from the Queensland Brain Institute. Together they discuss how to make better decisions in an era defined by distraction, cognitive bias, and the increasing use of artificial intelligence.

In this episode, Todd and Jason explore why all decisions are not created equal, emphasising the importance of slow thinking, evidence over emotion and the value of seeking diverse perspectives, especially in complex situations.

“AI is great for what we sometimes call cognitive offloading, you can get it to do perhaps some more routine things where there's little risk of an error, but it takes precious cognitive resources from you and it gives you the automatic answer… I think AI is good for that. But as soon as we start to rely on it to make the decisions, as soon as you take the human out of the loop, then things can go wrong. There's always the opportunity for error.” – Professor Jason Mattingley

For additional insights, listen to part one of the series, where Todd and Jason discuss strategies to manage distraction and create space for deep thinking in today’s modern work environment.
