A digital version of YOU: the opportunities and risks of AI

Slaven Marusic
Data Science Lead
1 February 2022
6 min read

“Alexa, take me to the moon.” A few months ago, that would have been a punchline from a cartoon or a meme, but now it’s happening for real: Alexa is going to the moon. This March, NASA’s Artemis 1 mission will take off with Alexa playing tourist in deep space. The expedition is part of a technology demonstration to see how the virtual assistant might one day benefit astronauts flying to far-off destinations, including Mars.

Deep space aside, Alexa and other voice-activated virtual assistants have been dishing up answers to our questions long enough for us earthlings to become quite used to talking to robots. As helpful as they might be in providing us with information or performing digitised tasks – “Alexa: set my alarm for 6 am” – what might be possible if we had our very own ‘digital twin’, a virtual you capable of learning and thinking the way we do?

In engineering, the concept of a digital twin refers to a digital representation of a built asset or system, one that may also ingest data from real-time operations and process it to assist in maintenance and decision making.
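To make the idea concrete, here’s a minimal sketch in Python of what a digital twin of a pump might look like: it mirrors live vibration readings from the physical asset and uses them to flag maintenance. The names and the threshold here are invented for illustration; real digital-twin platforms are far richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy 'digital twin' of a physical pump: it mirrors real-time
    sensor readings and processes them to support maintenance decisions."""
    asset_id: str
    vibration_limit_mm_s: float = 7.0  # hypothetical alarm threshold
    readings: list = field(default_factory=list)

    def ingest(self, vibration_mm_s: float) -> None:
        """Mirror a real-time reading streamed from the physical asset."""
        self.readings.append(vibration_mm_s)

    def needs_maintenance(self) -> bool:
        """Flag the asset if the recent average vibration exceeds the limit."""
        recent = self.readings[-10:]
        return bool(recent) and sum(recent) / len(recent) > self.vibration_limit_mm_s

# Simulated stream of operational data from the physical pump
twin = PumpTwin(asset_id="PUMP-07")
for reading in [5.2, 6.8, 7.5, 8.1, 7.9]:
    twin.ingest(reading)

if twin.needs_maintenance():
    print(f"{twin.asset_id}: schedule an inspection")
```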

If human activity and decision making could be represented in the same fashion, just imagine if the ‘digital you’ could complete tasks automatically – respond to texts and emails, draft documents, run personal errands. With a raft of routine life admin taken care of, we’d all have a lot more time to achieve things we otherwise couldn’t.

But how would the digital you function? Would it be a complete digitisation of the individual self, or a tool that becomes adept at understanding the tasks we do that could be automated? Would it be an assistant, or a substitute? And, if we could create digital yous capable of thinking and behaving in the way we do, would it even be wise to do so?

Hyper-personalised digital assistants

We’ve come a long way with predictive technology, developing various versions of it for years now. Natural language processing has found its way into our everyday lives, including predictive search requests on Google, and most of us have spent time communicating with chatbots on webpages.

Despite these advancements, many businesses that use bots have experienced poor adoption rates and user frustration, with almost 80 per cent of customers deciding not to purchase when they know they are not talking to a human.

Peter Voss, founder of advanced artificial general intelligence systems company Aigo.ai, suggests that personal assistants should offer a hyper-personalised experience and “be seen as the customer’s assistant, not the company’s”, in order to deliver value. The assistants should ‘interactively learn’ the individual preferences and situations of the humans they interact with.
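As a rough illustration of what ‘interactively learning’ a user’s preferences could look like in code (a sketch with invented names, not Aigo.ai’s actual design), an assistant could keep a per-user preference profile and nudge it every time the user accepts or rejects a suggestion:

```python
from collections import defaultdict

class PersonalAssistant:
    """Toy assistant that learns one user's preferences from feedback,
    rather than applying a single company-wide model to every customer."""

    def __init__(self, user: str):
        self.user = user
        # option -> score, nudged up or down by interactive feedback
        self.preferences = defaultdict(float)

    def suggest(self, options: list[str]) -> str:
        """Pick the option this particular user has preferred so far."""
        return max(options, key=lambda option: self.preferences[option])

    def feedback(self, option: str, accepted: bool) -> None:
        """Interactively learn: reinforce accepted suggestions,
        penalise rejected ones."""
        self.preferences[option] += 1.0 if accepted else -1.0

assistant = PersonalAssistant("slaven")
assistant.feedback("morning meetings", accepted=False)
assistant.feedback("afternoon meetings", accepted=True)
print(assistant.suggest(["morning meetings", "afternoon meetings"]))
# -> afternoon meetings
```

The point of the sketch is ownership: the profile belongs to the user and travels with them, rather than living inside any one company’s bot.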

One example that comes close (though perhaps not entirely) is Eugenia Kuyda’s Replika. The app creates a personal AI that offers helpful conversation, a space where a customer can safely share their thoughts, feelings, beliefs, experiences, memories and dreams. What if an app like Kuyda’s could be taken to the next level: learning a person’s innate style of management or decision making, then implementing it for certain tasks?

Learning our habits and behaviours may be one thing, but learning the level of our talents is a different beast altogether. Is the perceived convenience of a language-based user interface inadvertently fostering an over-reliance on underlying predictive systems for operational decision making – or is there something even greater at risk?

Will AI make us dumber?

In The Glass Cage: Automation and Us (2014), Nicholas Carr wrote that “automation severs ends from means. It makes getting what we want easier, but it distances us from the work of knowing.” If we allow a virtual assistant to perform tasks in our stead, we lose the insight and learning we would gain from completing those tasks ourselves, however boring or inconvenient they might be.

We all know that practising certain functions, whether physical or cognitive, gives us a certain depth of understanding. When we perform a new task, trial and error – or thinking things through – is a necessary element of learning and, in turn, of making judgements about how to progress. It’s one thing to use artificial intelligence for data entry, but quite another to have it create a new business pitch.

Replacing real human experience with AI works adequately in many contexts, and to a certain extent. But as we expand our use of such advancing technology into new fields and operations, we need to consider the risks of allowing automation to do our thinking for us.

Amazon’s Rekognition, one of the leading facial-recognition technologies, mistakenly matched 27 New England professional athletes to a database of mugshots in one test. The risks of naively deferring decision making to automated systems, without sufficient oversight, are already being widely observed, and the impacts of data bias and design bias are now acknowledged as fundamental challenges to the ethical and effective operation of such systems.

By distancing ourselves from certain tasks, we may risk losing touch with the important cognitive processes that are required to understand the whole picture. If we stop working through options manually, at what point are we no longer equipped to provide that oversight and final decision making?

This shifting dependence represents a step beyond human decision making, encroaching on the function and formation of thinking itself. Consider Alexa’s imminent visit to space, for example: what parts of an astronaut’s job can be automated without risking the loss of crucial systems knowledge?

Before automating more tasks and replicating areas of expertise, we should stop and think about how good judgement is learned and earned in specific contexts. And let’s not forget that AI functionality keeps developing: in the time it takes a human to think things through, many of our automated devices have already executed the task. How much trust are we willing to place in systems that seemingly have ‘minds’ of their own?

If our AI systems are getting ‘smarter’, we should too – in the way we both design and utilise them!

How much can we trust the digital you?

What makes the ‘digital you’ dynamic even more complicated is the range of contexts and scenarios in which this type of automation could exist. In science, we are already quite used to taking certain theories or precedents at face value.

Once something has been widely acknowledged as proven, we no longer deem it necessary to recreate the underlying experiments. With AI designed to become ever more capable, at what point do we lose track of how such authoritative precedents were established? The emerging domain of explainable AI is one avenue seeking to address this.

Our dependence on the technology can erode our doubts and our inherent ability to question things: because these systems are designed to be like us, we believe we can trust them. But do we even trust ourselves 100 per cent of the time?

If we were to become dependent on AI to the point of handing over all critical and creative faculties, would we be able to adequately evolve to the next iteration of human society? What’s to become of human beings if the digital yous are determining our ‘best’ lives for us?

As we continue to progress and develop further forms of AI, with varying capabilities and roles, it’s crucial we walk the fine line between creating partners and creating replacements. After all, walking on the moon and talking to robots once sounded impossible; who knows how far we will take AI (or it will take us) in future.


Written by
Danielle Bond

Danielle often wishes she had a digital twin to juggle her professional and personal workload.

Written by
Slaven Marusic

Slaven loves to invite people to consider “what if…?”

