Designing inclusive AI: where do machines end and we begin?

Nick Williams
Digital Capability Leader
23 November 2021
6 min read

"You become responsible, forever, for what you have tamed." So said the fox to the Little Prince in Antoine de Saint-Exupéry's famous novella. The fox goes on to say that if the Little Prince does indeed tame him, they will need each other; because in a way, the Little Prince, too, would have been tamed by the fox. 

Could it be that the more we tame, or train, machines, the greater our responsibility for them grows? And as we succeed at taming AI, to what degree do we allow it to tame us?

Digital technology has already become such a part of us that for most, it will be hard to remember what life was like without it. Perhaps ironically, we are continuing to explore the digitisation of those aspects of human intelligence that remain.

But the rise of artificial intelligence (AI), like many technologies that preceded it, is raising ethical questions all around us. Are we really able to capture and codify data for an intelligence that will drive cultural values we can relate to, and that will be inclusive of the humanity impacted by it?

Enter the diverse domains of design. When we shape the physical artefacts that will represent our culture when we are long gone, we lean into the future but learn from the past. However, while we are collectively transfixed by novelty and innovation, we struggle to recall the wisdom in lessons already learned many times over. 

Cultures underpin identities

Our cultural heritage and language form the basis of our identities. According to UNESCO, one-third of languages today have fewer than 1,000 speakers, and every two weeks, a language dies. To many, technology has the power to solve this problem. Case in point: the AI for Cultural Heritage programme was established to use AI to work with organisations and governments around the world in helping preserve the languages we speak and the cultural artefacts we treasure.

So, we might have better tools to work with, but AI is not something outside of us; it reflects who and what we are. Our human culture is embedded in both the explicit and tacit knowledge within AI systems. We therefore need to reflect the diversity of human cultures if we are to create an AI that is inclusive and indeed intelligent.

Many, however, have lamented the lack of diversity in the tech industry. The question of whose culture, and whose values, we feed into our systems is critical when we consider that AI is just as susceptible to bias as humans are. People from all walks of life, and the myriad of different races and cultures, should be represented in the group that exercises authority in this space.

We have a recent example of what can go wrong where this is not the case. Racist and misogynistic terms were found in an MIT image library of 80 million images – a dataset that was used to train AI systems. 

This sparked the founding of IVOW, an AI start-up whose purpose is to use the very technology that was misused to instead help preserve cultural context and indigenous knowledge. IVOW is developing a chatbot that will record and share cultural stories, with the aim of preserving cultural heritage for the next generations.

Technology as a 'pharmakon' – beyond an image classification problem

Philosopher Bernard Stiegler described digital technology as a pharmakon, simultaneously a cure and a poison. The cure promises more opportunities for human culture. The poison, meanwhile, likely includes the destructive power to endanger hermeneutic knowledge, the draining of one's capacity to reflect on experiences and the breaking of social solidarities.

On the surface, there is much promise – technology is being used to accelerate the recording of history. Semi-automatic scanners, robotic page-turners, and an automatic handwriting recognition system to transcribe annotations have been created to digitise millions of manuscripts. International research teams working on the open-source Time Machine project aim to digitise the last 5,000 years of European history and to use AI to reconstruct it with a "Large Scale Historical Simulator."

To many a data engineer, recording history might appear a simple matter of image classification, but not all knowledge conforms to classification. The Metropolitan Museum of Art, which houses over 1.5 million works of art, is undertaking the task of digitising, classifying and tagging each work in a scalable way with the use of AI. Unlike a document, art is inherently unstructured: visual elements need to be perceived, and the authenticity and authority that lie with the work are challenged.

If we want to use them authentically in the space of cultural production, our machines need to understand forms of knowledge in their many tacit and implicit forms. Some jobs are easily disrupted – specialised skills like accounting might be sought after, but they are also easier for AI systems to replicate and replace. 

A more generalist knowledge and skillset is where the interesting territory lies. According to historian Yuval Noah Harari, the hardest profession to automate would be that of the caveman, because cavemen are great generalists, finely tuned to all the subtle cues, gestures, twitches and smells their brains are processing all the time. This is good news for designers, as design is entwined with just such generalist features that defy codification.

Full-fledged design partners?

In his 1976 book, Soft Architecture Machines, MIT Media Lab founder Nicholas Negroponte proposes a kind of architecture without architects. He suggests an architecture of information and culture, one that brings a closer and more equal partnership between man and machine.

He forecast that humans would not merely use machines but be intimately coupled with them, as candid collaborators on design. In his proposal, designers become less necessary to individual projects and instead critical to collective outputs.

We tend to marvel at AI's convergent smarts, like its superior memory capacity and aptitude for complex calculations. But we are also starting to see that some of the real magic might lie in its divergent (or lateral) solutions to problems, oftentimes criticised as dim-witted blunders.

An entertaining collection of anecdotes on machine learning experiments showcases how evolving algorithms have creatively subverted researchers' expectations or intentions, exposed unrecognised bugs in their code, produced unexpected adaptations, and engaged in behaviours uncannily similar to ones found in nature.

Or what about the computer that wrote a "novel" narrating its own cross-country road trip? The resulting prose, as one article put it, is a hallucinatory, oddly illuminating account of a bot's life on the interstate; the Electric Kool-Aid Acid Test meets Google Street View, narrated by Siri.

Where do we sign the prenup?

AI is the new electricity: where we once electrified everything, we are now "cognifying" it. There in the periphery is a silent force attempting to make everything smarter. Designing where machines end and humans begin will probably depend on how inclusive we are in training this technology, and how intuitive we allow it to become.

We need to respect this technology as our future partner. That way, we can be optimistic about our future designs before we relinquish control to an algorithm.
