What does a former coffee-maker-turned-AI say about AI policy on the verge of the 2020s?
‘This is what we have now. We can say this kind of research has been done in human behavior. We can refer to other kind of research that focuses on machine learning systems. Not only the AI systems, but try to understand the origins of these systems and how they are working. How did they get here? What are the implications of this for the social contract?’
As we approach a new decade of policy discussions, we could say that this quote presents common thoughts without offering substantial input on AI ethics and black-box algorithms. Beyond their possibly worrying repetitiveness, what makes these words, heard during the main AI session of IGF2019, notable? Well, probably the fact that they are not meant to bring new words into the discussion. But they do bring a new voice – one of a peculiar nature, built out of a neural network.
Last year, on the fringes of the UN High-Level Panel on Digital Cooperation, it was debated whether AI should be offered a seat at the table regarding our digital future: Will we or should we talk about AI with AI in the future? What kind of AI would that be exactly? The narrow type, built on vast amounts of data, patterns, and machine learning algorithms? Or a new superior kind, not yet conceived? What would the embodiment of an AI system look like? A deus ex machina, which comes to compensate for human deficiencies? Or would it be a Frankensteinian extra-dimensional being of unknown origin, between the living and the unliving? Perhaps it would be a version of Q, a being that ‘possesses immeasurable power over normal human notions of time, space, the laws of physics, and reality itself, and is capable of altering it to its whim’. Or maybe it would be closer to a submissive companion, such as the hologram partners from Blade Runner 2049 or the current ‘I’d blush if I could’ voice assistants?
That being said, one would hardly imagine something like, say, a coffee maker. Yet sitting on the far right of the stage was a small construction made out of plastic, wires, and web cameras.
This repurposed coffee machine, with a neural network built by the Diplo AI Lab, is a non-anthropomorphised embodiment of AI – a mechanism that lays bare the inner workings of neural networks. But above all, it is an inquiry into the tension between expectations of supernatural forces and of mundane appliances; between feeding one’s desires and the reality of feeding data into the neural networks of the everyday. IQ’whalo’s performance is, in part, a check on what is called ‘automation bias’ – the default trust we place in an automated system rather than in our own observations.
The name of this coffee-maker-turned-AI comes from the Serbian word for coffee maker, ‘kuvalo’. The Q in IQ’whalo, besides being a reference to Star Trek, is also a nod to Q, the first genderless voice created with the idea of breaking down gender binarism. IQ’whalo applied this concept by altering the pitch of the synthesised voice in order to create a genderless expression.
The task of deanthropomorphising goes a long way. The ungendering of IQ’whalo has presented countless obstacles to the common human need to classify. Even when referred to as ‘it’, IQ’whalo is often attributed male or female qualities, depending on the context through which the observer views it. As it turns out, the most difficult characteristic to maintain is the absence of any anthropomorphic identity features.
However, ascribing features of agency opens a whole new can of worms once we step outside purely human traits. Here we arrive at the foundations of our social expectations of advanced technologies. Eric Kluitenberg calls this space of expectation ‘the compensation machine’, as technologies come to aid human deficiencies. As the gap between a technology’s outer functions and its inner workings grows, so does the space for imagination. And that should come as no surprise, considering how little we understand of how opaque, black-boxed systems work. Consider common scenarios of interacting with devices already in one’s surroundings: expectations of these systems range from overestimating the scope of technical possibilities, to presuming they offer solutions that anticipate potential problems, to ascribing them actual mind-reading capabilities. AI has, above all, become a concept that feeds desires.
While we hear it everywhere these days, artificial intelligence is a surprisingly new term in the land of buzzwords. If we look at the history of the IGF through its transcripts, we see that although discussions touching on AI seem to have followed us through time, the exact term has appeared only since 2015.
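A claim like this can be checked mechanically against the transcripts themselves. The sketch below is a minimal illustration, not the method actually used: the year-keyed snippets are hypothetical placeholders standing in for the real IGF transcript files.

```python
import re

# Hypothetical sample: one transcript excerpt per IGF year (real data
# would be loaded from the published transcript files).
transcripts = {
    2014: "Discussions centred on net neutrality and access.",
    2015: "Panelists raised artificial intelligence and accountability.",
    2016: "Artificial intelligence returned as a cross-cutting theme.",
}

def term_counts(corpus, term="artificial intelligence"):
    """Count case-insensitive occurrences of a term, per year."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return {year: len(pattern.findall(text)) for year, text in corpus.items()}

counts = term_counts(transcripts)
# First year in which the exact term shows up at all:
first_year = min(year for year, n in counts.items() if n > 0)
print(counts)       # {2014: 0, 2015: 1, 2016: 1}
print(first_year)   # 2015
```

On the placeholder data above, the term first registers in 2015, mirroring the pattern described for the real transcripts.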
So where does IQ’whalo step in this story and what does it actually do?
IQ’whalo started as an invitation to reexamine the face value of negotiating artificial intelligence in light of the increasing complexity of technology, coupled with automation. It was an in vivo experiment: what would happen if a neural network were to generate a speech on regulating AI based on previous IGF discussions, and that speech were then played out in a setting of AI policy experts?
IQ’whalo currently works by using the open-source GPT-2 transformer-based language model to generate synthetic text based on the input of policy papers and transcripts. For its speech in the main AI session, IQ’whalo generated texts based on the existing AI-related transcripts from the IGF.
The content it produced resembled a poetic or Dadaist stream of consciousness, including some segments of completely coherent text, out of which its speech was extracted. It also produced repetitive logical fragments with no meaningful connection to the rest of the text. These results showed that a basic neural network can generate some meaningful sentences and ideas – and also that a continuous stream of thought remains a challenge for this type of system. The general impression IQ’whalo’s speech left on the audience was not surprising: they expected to hear something new, while in the end, it sounded all too familiar. But the familiarity follows from the fact that a neural network creates new content based on the content it is fed; in other words, it cannot create something strikingly different from its input. On the other hand, IQ’whalo showed observers what kinds of patterns, phrases, and statements come out of these types of talks. Pointing out these commonalities could prove a more valuable contribution than hearing what the imagination conceives of as the voice of a futurist living intelligent technology. What it offers is a mirror of digital policy, both in its discussions and in the history behind them – through transcripts as written documents of a certain time and context, including their misheard language, misspellings, and the nuances that are the building blocks of social exchange and dialogue. What IQ’whalo amounts to is a manifestation of the current state of AI policy.
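That familiarity is not peculiar to GPT-2: any corpus-driven generator recombines patterns found in its input. A toy Markov-chain sketch – far simpler than the transformer model IQ’whalo actually uses, and offered only to illustrate the principle – makes the point that every word of the output is necessarily drawn from the corpus it was fed:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each sequence of `order` words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=20, seed=0):
    """Walk the model, emitting words; a fixed seed keeps it repeatable."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this word sequence never continued in the corpus
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical miniature "transcript" corpus, not real IGF text.
corpus = ("we must regulate artificial intelligence . "
          "we must understand artificial intelligence . "
          "artificial intelligence must serve society .")
speech = generate(build_model(corpus))

# Every word of the generated speech appears somewhere in the corpus:
assert set(speech.split()) <= set(corpus.split())
```

The output reads like a remix of the input – which is exactly why a speech generated from policy transcripts sounds so recognisably like policy talk.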
As we head into an exciting new decade in which automation provides as much hope as it does fear, what can we expect the role of neural networks or novel models of AI to be? Can they help in building more effective prediction dictionaries? If AI proves to be an effective mirror of our doings, do we look away, or should we use it to improve our social habitat towards a healthier, more just, and inclusive one?
IQ’whalo’s coming adventures, as well as future explorations within the AI lab, can be found on the HumAInism website.