Technology

Artificial Intelligence: The ethical risks and challenges of AI

Danielle Bond and Anne Gregory | 11 September 2019 | 25:28

Podcast transcript: Artificial Intelligence: The ethical risks and challenges of AI

Kalay Maistry:

Welcome to the final episode of Season 1 of Engineering Reimagined. What a ride it has been – we have spoken to a French chef, environmental innovators and cross-cultural consultants, covering topics ranging from the role of engineers in natural disasters to the challenge of innovation in the field of law and humanity’s quest to populate Mars. Sit back for a very special episode today to close out our first season, one that touches on a topic that is increasingly important in today’s world.

+++++

Do you think of technology as your digital butler or digital stalker?

Technology as a digital butler is where your tech knows you so well that it completes actions that free up your time and brain to do other things. At the other end of the spectrum, technology is an anxiety-inducing stalker, running behind you, constantly tapping on your shoulder to show you things you don’t necessarily want to see.

For most of us, both descriptions ring true depending on the type of content, purpose and timing – the difference between booking a dentist’s appointment online versus being fed unwanted advertising.

Technology will continue to evolve and play an even greater role in our lives through the rise of artificial intelligence, or AI. AI will significantly change not only the type of work performed in many professions but also the roles themselves over the coming years.

But what about the ethical challenges that AI also brings? As described so aptly in the Netflix documentary ‘The Great Hack’ that shed light on the Cambridge Analytica scandal, there are rising concerns about how users’ data is leveraged for political and commercial gain. What role do professionals have as ethical guardians in the work they do? What’s at stake if we get it wrong? In many ways, the rise of AI is just as much a new frontier for ethics and risk as it is for emerging technology.

Join Professor Anne Gregory and Danielle Bond as they discuss this subject. Anne Gregory is Professor of Corporate Communication at the University of Huddersfield in the UK. She has led numerous specialist research and consultancy programmes for public and private sector clients and is a former chair of the Global Alliance for Public Relations and Communication Management.

Danielle Bond is Global Head of Marketing and Communications at Aurecon where she works closely with leadership to develop and execute integrated marketing and communications programmes. She is also a global board member of the International Association of Business Communicators.

+++++

Danielle Bond: Just a bit of housekeeping before we dive in. Anne, there are lots of definitions of artificial intelligence in use today, so for the sake of our discussion let's agree we're talking about how machines simulate human intelligence processes: machine learning, reasoning and self-correction. I heard you speak at a conference last year here in Melbourne on artificial intelligence and how it impacts the communication profession that we're both in, and how it will impact how we communicate and engage with the public – such a fascinating area. What inspired you to specialise in this field?

Prof Anne Gregory: Well, I suppose the first inkling I had that artificial intelligence was going to be really important was when, as soon as I went online, an advert popped up that said, “You need to buy X”, after I'd been searching online about X. And that alerted me to the fact that there's something going on here that is quicker than human.

The second thing is that I do quite a lot of work with government in the health area, where artificial intelligence is used for predicting health morbidities, looking at patterns of disease, and then looking at possible cures and predicting which parts of the population are going to be prone to flu or whatever.

Finally, I began to think about the power that artificial intelligence really has. Those organisations that hold data and are adept at artificial intelligence are potentially huge players in our lives and in society in the future.

Danielle Bond: It's fascinating, isn't it? The World Economic Forum talks about the fourth industrial revolution as a technological revolution that's going to alter the way we live, the way we work and the way we relate to one another. It brings up a couple of issues, one around the responsibility of those that collect data. Perhaps you might share your thoughts on ethics and data collection, particularly as the value of data continues to increase and it becomes a commercial asset. What are some of the ethical risks and challenges that you see AI could pose if unchecked?

Prof Anne Gregory: It's an incredibly powerful technology, and I guess the key question for me is, “Is the technology there to help human decision-making and enhance our lives, or is it going to control us?” So do we know what data is being collected about us? Do we know what uses that data is going to be put to?

Prof Anne Gregory: So we know, for example, that Donald Trump and his campaign back in 2016 had 5,000 data points about every single voting American. Did those Americans know how that data was collected? Do they have a right to know? I think they do. We both work for large organisations that collect data all the time. Are we transparent about what data we collect, and about how we're going to use that data, not just now but in the future? Do we give people the right to say, “Hang on a minute. I've given you my data for this purpose and that's where it's got to stop”? Along with that goes the power that it gives. So if we talk about commercial organisations, that means that when you and I are in discussion with a commercial organisation, they know far more about us than we think. And therefore the old discussion with a bank manager, where it was one to one and you knew what information they had about you – that's not the same anymore, because it's not just data about us that organisations collect. It's data about people like us.

These are incredibly powerful tools organisations have, and we need to be aware of the ethics of that. So the governance arrangements that organisations put in place to control the nature of data collection and how that data is collected and used are, I think, really, really important.

Danielle Bond: Do you think people understand the full reach of AI? If you were to visit Aurecon's website, you'd get a notice asking you to accept our cookies policy, so you know that we're tracking you, you get to accept or decline, and we point you to our privacy policy, which discloses what we capture. But when you walk into a building, or perhaps you're walking around your city, or you're driving along roads, are consumers aware of the data that's being collected about them? And should they be aware?

Prof Anne Gregory: It's really interesting, because in China now, for example, we know that sections of the population are under surveillance. The security cameras on the street corners capture where people move and what they buy in the shops. That's all collected. What transport they use, how they use their money – it's all collected, and therefore the power of the state is increasing.

For us in engineering and for people who work in engineering, I think there's a tremendous opportunity for good here. We can look at the flow of people around buildings and understand what it is that generates that flow. Understanding how people move and what generates their movements is, I think, really important for people who work in design engineering, because it means they can actually build spaces that are going to enhance people's lives. And using artificial intelligence to understand things like energy flows and the patterns in the way people live their lives is really helpful.

But there's a downside to that. As I say: who's collecting that data? It might be you as an organisation, but who else is tapped into the data that you're collecting? What is your system connected to?

Danielle Bond: So there's a lot of discussion, isn't there, that countries should create governing bodies that are responsible for saying: these are the standards, these are the guidelines for how AI will be used ethically in this country. In your view, would this work? Would it have the desired impact?

Prof Anne Gregory: I think there should be an overriding body, if you like, within countries to weed out what might be some rogue operators, but also to learn and to spread good practice, because AI is a force for good as well as a source of dangers. And if you have a national body that helps to guide and regulate, just as we have national energy organisations and national water organisations, then we can actually collect a lot of learning and generate information that's going to help good design. But there are a couple of issues around that. First of all, AI goes across borders. So you might be really well regulated in Australia, but across the water, somewhere else, it may be poorly regulated.

So we need to go a bit below that. Yes, national regulation, but actually it starts with you and me. How do we personally use the information that people give us? Then it starts within organisations. What systems and processes are put in place within organisations to make sure that there is good governance of AI? We need a diversity of views about how AI can impact people's lives, and we need to make sure that the whole organisation, from top to bottom, has a profound understanding of the AI applications the organisation has. There's a European bank that uses AI a lot to generate base information about clients. So if you or I go to see the bank manager – I know it's rare these days – but if you or I went to see the bank manager, that bank manager understands the algorithms that have generated the background information about you and me. So he or she will have information not just about our banking history, but where we live and the sorts of people who live in that area; our ethnicity and what our spending patterns are; our age and what our likely liabilities and risks are; our health conditions.

But at the end of the day, this is a transaction between human beings. The bank manager needs to make the decision about whether or not they will lend to you or me.

Because there could be all sorts of reasons why you or I went into debt last year. Humans have to drive AI, not the other way around.

Danielle Bond: So that's really interesting, because that is, of course, a great fear for people: with AI, what's the future for me as a human worker? And in fact it's the combination of artificial intelligence with some of those characteristics of people that machines, at least today, cannot replicate. So I'm interested in your thoughts around that human factor. If you were to advise professionals – engineers, but other professionals too – who are thinking about their future in an AI world, what are the things they should do to differentiate themselves from the machine?

Prof Anne Gregory: I think for future professionals, the things that define those individual interactions and the human side – the soft skills, if you like, and what makes us human – are going to become more and more important. What do we teach people in an age where the skills that we teach them in the first year of university courses, for example, are out of date by the time they get into the workforce?

Prof Anne Gregory: Well, we need to teach them what it means actually to take responsibility. What it means to be a really good analyst – of algorithms, yes, but also of human situations, and of what's going to be the best decision in a situation where you've got lots of choices. A machine might be able to give you a choice based on background information and on history, but what's right in this situation, right now, for these people? That's a human judgement, not just a machine judgement.

Danielle Bond: Well, there's a lot of talk, isn't there, about big data. The data might show there's a correlation between various data points, but that doesn't necessarily mean it's causation, nor does it mean that that's the right answer. So you need a diversity of thinking and experience. There will always be room, in your view, for that experienced professional?

Prof Anne Gregory: Absolutely. The way that algorithms work is that they tend to emphasise the norm, so what you lose are the diverse ends: the little people, the small voices, maybe those people in society who have less power, who may be disadvantaged, who are minorities with different points of view. We have got to go and seek those views out, and if we're going to make good decisions, we need that variety of voices. When we negotiate a project in a neighbourhood, the algorithm will tell us one thing: what the vast majority of people want and what their behaviours are likely to be. It won't give us the richness of human life, and we have to go and seek those voices out and make sure that we create environments that are enhancing for everybody who is involved.

Danielle Bond: So we talk a lot about human-centred design, and AI and visualisation tools can really help organisations like Aurecon, contractors and governments visualise things and engage communities in urban design in a way that was never possible before. Those are the exciting opportunities, aren't they?

Prof Anne Gregory: I think it's absolutely fantastic. What it points to for me is that AI and the visualisation element give us a great opportunity to do some really deep engagement with communities. You know, contractors tend to build individual buildings, but for me it opens up a whole new area: what does it mean to live in a place, with contractors and designers able to collaborate?

Prof Anne Gregory: Can we visualise what it's going to be like when these buildings are around us, and can we have a discussion with local communities about whether that's going to be something that's life-enhancing for them? Can we see this building at midnight, please, and can we see it at 6:00 in the morning? And can we see it in the winter and can we see it in the spring? And can we see it when the street is full? And can we actually go there and step through it and experience it? I think it's an amazing technology and something that must help us make better decisions.

The way that organisations make their decisions is changing. It's being informed by big data. Who is going to be the person that actually monitors the quality of that intelligence? Who is going to be asking the questions about who actually sorts that information?

For example, we know now that there are issues with driverless cars: they don't see black men. Well, why is that? Because the algorithms were created from data that did not encompass the whole range of ethnicities that cars will encounter on the road. So there's a bit of a blind spot there. What are the blind spots that a board is going to have to be aware of?

Danielle Bond: I've heard you talk before about how organisations will perhaps employ a chief ethics officer to help them navigate AI and big data. Client data, customer data, employee data – so it's not just personal data; sometimes it might be industrial data.

Prof Anne Gregory: Absolutely.

Danielle Bond: In your view, would this work?

Prof Anne Gregory: So these are the things I think an ethics officer would do within an organisation. Big organisations have a lot of information, and a lot of power. Just because we can, should we do this? Just because we know that we can put together certain arguments which will be persuasive, because it's to our material advantage, should we do that when we know that other groups, who may have a legitimate interest in this issue, are not able to collect the data that we're able to collect? And what about issues of transparency?

Danielle Bond: It seems that everywhere you look there's a lot of discussion about how artificial intelligence is going to impact the workforce of the future. What do you think the impact of AI will be on jobs?

Prof Anne Gregory: Recent work by organisations like McKinsey has indicated that it's the knowledge-based and case-based jobs that are highly at risk. People like insurance actuaries, whose work relies on lots of history and case history. Even lawyers – specialised lawyers whose knowledge is confined to particular criminal case histories – those are the jobs really highly at risk. And also accountants: you can get bots doing accounts very, very efficiently these days.

Prof Anne Gregory: The jobs that are going to be more highly valued and less easy to replace are those that involve high levels of human interaction. Care workers, for example – in fact, all those jobs that are really badly paid now are the ones that could well become well-paid. Teaching, too: we can do a lot of teaching online, but actually mentoring and coaching and human skills are going to be highly valued.

Danielle Bond: And jobs of the future? We mightn't be able to imagine what they are?

Prof Anne Gregory: No, I think it's quite difficult. But if we think about the engineering area, it's interesting – do you remember those “Mickey Mouse” degrees that people used to talk about, like gaming, visualisation, media?

Those are the jobs of the future. People with analytical skills, yes, but who are also able to put together virtual reality explorations of buildings and spaces – they're going to be highly valued in the engineering industry.

Danielle Bond: You mentioned that a lot of the skills we learn in the first year at university may be redundant by the time we graduate and enter the workforce. How important is lifelong learning, do you think, Anne? And what's the role of universities for mature workers, not just graduates?

Prof Anne Gregory: I think there's a really interesting discussion to be had here, because most entry-level graduates do those skills-based jobs, and those are at risk. Therefore lifelong learning is going to be increasingly important.

But I think there might be a resurgence – and this is just me saying this – of some of the old subjects that were taught at university. For example, philosophy: how do you put together a good, rational argument? What about ethics? What about the axioms of language? That's philosophy. What does it mean to create meaning for people? How do you create conversations that actually engage people? Good old Aristotle – you know, he thought about that before clever people like public relations and communications directors were around.

What about anthropology and how societies function? Are those subjects going to make a resurgence? I have a feeling they might. Somebody has to understand what's going on with the bots, and decide what sort of society we want and what sort of lives we want to live. Just because you're powerful and you have access to data, are you entitled to rule the world? Is a new elite going to be created? Possibly.

Danielle Bond: So what about the dialogue in society – between government, the private sector institutions and the community? I mean, how good is that dialogue at the moment? I sometimes fear that we're not having enough of a discussion around this industrial and technological transformation that we're experiencing.

Prof Anne Gregory: I would absolutely agree with you there. There are still a lot of people who think that Facebook and Google are just platforms. They're not. They're data agencies, and who is asking those big questions of Google and Facebook? They're beginning to be asked now, and people are beginning to become alert to the fact that ... do you know what? I've probably got something like 10,000 photos on Facebook, and I've done all these posts. How much do they know about me? How are they using this information? Who knows about me? These are profound questions that we've not been asking. They're beginning to be asked now.

Prof Anne Gregory: I think there are three players in this. First of all, there are commercial organisations and their use of data and artificial intelligence. Then there's the role of government – and some governments are using this to control their populations. Governments have an awful lot of power, and they know an awful lot about us. We have no choice about interacting with them. How are they going to use our data? I mentioned health, economics and so on – fantastic uses.

But there are malign uses as well, so who's questioning government?

And then there's the role of civil society, which of course is to question commercial organisations and government: raising these questions, holding government to account, holding organisations to account, and ensuring there is a fruitful discussion between those three to say, well, again, ultimately, what sort of society do we want?

Danielle Bond: So are you excited or scared about the future?

Prof Anne Gregory: Both. I'm excited because I think there are all sorts of possibilities for us to have enriched lives, and for things like ill health and poverty to be eradicated through the use of good information and through good governance. But I'm also scared, because the phrase “surveillance society” comes to mind, along with the power of some of the organisations like Google and Facebook if left unchecked – they're the obvious ones. Then there's Baby X, an experiment in New Zealand where they're actually growing a virtual child through artificial intelligence. They're teaching it human emotions; they're teaching it to see a spider and react to a spider, to be able to react to taste and smell and all sorts of things. This child will grow, and it has emotional intelligence as well as computing power in its brain.

Prof Anne Gregory: So I'm scared by some of that stuff. But I'm also quite excited by it, because it would be really good if we were able to clearly differentiate the roles of the human and the machine. There's no doubt at all that there will be a level of integration that can be either enriching and empowering for humans, or the alternative.

Kalay Maistry: And that wraps up our first season of Engineering Reimagined! Thank you for listening. We’ve had listeners from more than 91 countries tune in throughout the season, which is just fantastic. We’re delighted to announce that we are working on Season 2, so stay tuned for future episodes!

We will also be sharing a survey with our subscribers asking for your feedback on Season 1 – we would really appreciate your thoughts on what you would like to hear more or less of, so keep an eye out for this survey. It will hit our social media channels shortly.

Professor Anne Gregory dives into the advantages and disadvantages of artificial intelligence

In this final episode of Season 1 of Engineering Reimagined, Professor Anne Gregory, Professor of Corporate Communication at the University of Huddersfield, and Danielle Bond, Global Head of Marketing and Communications at Aurecon, discuss some of the ethical risks and challenges that AI could pose. They also explore the possibility of creating governing bodies to mandate standards and guidelines around how AI can be used ethically.

Meet our guest and host

Learn more about Danielle Bond and Anne Gregory.

Danielle Bond

Global Head of Marketing and Communications, Aurecon

An award-winning marketing and communications professional, she counts brand strategy as both a passion and a strength. She likes to work in a high-performing team and firmly believes that creativity and innovation thrive when people can have fun at work.

Anne Gregory

Professor of Corporate Communication, University of Huddersfield

In addition to her teaching and international research, Anne is also a Reviewer of Government Departmental Communications and an acknowledged authority on ethics, artificial intelligence, strategic communication, and capability. She has received awards for her exceptional contribution to the profession.
