Barbican ‘AI: More than Human’ – artificial agency, determinism and delusion of the cognitive self by Agata Kik
To be more than a human means to be more than his brain, to be more than the cognitive self; it means to be her instinct, to be like an AI machine, programmed by the body's memories, free from desire, control or will. Artificial Intelligence (AI) is intelligence exhibited by man-made machines that mimic the human brain, capable of learning and problem-solving. Yet although AI is applied to cognitive functions, its automatic behaviour is what belongs to the body, which the contemporary human has lost touch with far too profoundly of late, I think (feel).
The Barbican Centre has just closed the doors on its major exhibition AI: More than Human, a trajectory, almost a journey through time, tracing the history of the relationship between human and technology, from its extraordinary ancient roots in Japanese Shintoism, through Ada Lovelace and Charles Babbage's early experiments in computing, to AI's major developmental leaps from the 1940s to the present day. Executed through an interdisciplinary approach, the exhibition combined artistic and scientific perspectives. With guest curators Suzanne Livingston and Maholo Uchida on board, the project was probably the most ambitious that Barbican International Enterprises (BIE) has hosted in its space to date. The exhibition formed part of Life Rewired, the Barbican's 2019 season of events exploring a contemporaneity that has become fully dependent on technology.
The vast array of artworks in the exhibition established strong links with one another, pointing to those aspects of the human that AI has the greatest potential to reveal; in a way, the reason we created AI was to learn more about the human brain. As a result, most of the artworks involved the viewer's participation and interaction with the work on display. The viewer felt recognised by the other, not only by a living being but also by a machine. Among the most explored themes were ways of looking, seeing and knowing, as well as the nature and agency of human memory. What I have understood myself is that what we see is what we are programmed to see, and so human experience acts like an AI algorithm.
Artist Ben Grosser's Computers Watching Movies was computationally produced using software that employs computer vision algorithms and artificial intelligence routines, so that the system retained, to some extent, a will of its own. Just as the system's moves were guided by its programming, so the audience, under the delusion that human beings have control over their thinking, was invited to draw upon their own visual memory of a scene as they watched it, and to reflect once more on what they actually feel when they decide what to watch. Nexus Studios, meanwhile, collaborated with artist Memo Akten to present Learning to See, a work that pointed the audience towards how a computer interprets their image. Allowing visitors to manipulate the images, this collaborative work revealed that a computer can see only what it already knows, just like us humans.
To become more familiar with the human, artist Mario Klingemann created a work in which the machine itself performed the act of getting to know the human. Attracting the attention of the audience and welcoming its judgement, this constantly evolving piece of live art consisted of a neural network that the viewer was encouraged to train through the mutually responsive behaviour of audience member and machine. In collaboration with Google Arts & Culture, the artist created a machine that accumulates attention by generating imagery that is interesting to humans. The system uses artificial intelligence techniques first to acquire data and then to measure the effect of the visual output it generates, training itself to maximise the time humans are willing to spend with it. The work consists of three modules, the curatorial building blocks that exhibit the machine learning process: the acquisition of data, when the machine learns about the human; the collection of feedback, when it learns the viewer's preferences after the audience has interacted with the visual material it has created; and finally creation, when the machine generates the most interesting imagery it has learnt through the process.
Artist and designer Es Devlin, collaborating with Ross Goodwin and Google Arts & Culture as well, created PoemPortraits, a social sculpture that invites the visitor to donate a single word, which an algorithm then incorporates into a two-line poem alongside words donated by other visitors. The algorithm had been trained on 20 million words before the exhibition; at the end of the show, a collective PoemPortrait was generated from every contribution made by the audience.
Interaction between the human and a machine aware of the viewer's presence was also exhibited in the work of the collective teamLab: the interactive digital installation What a Loving and Beautiful World, an immersive environment whose changes are triggered by the shadows of the viewer's figure. Lauren McCarthy, on the other hand, experimented with the tensions between intimacy and privacy that one confronts while interacting with a machine at home; to explore this, the artist created a human version of a smart home intelligence system. Digital art and design collective Universal Everything took over the lobby space of the galleries, placing digital avatars that mimicked visitors' movements around the corridors of the entrance hall. Over the duration of the show, the system learnt ergonomic abilities, which began as rather childlike and evolved into a more developed idea of the moving human body by the end of their performance at the Barbican.
Interdependence and mutual influence between the organic and the synthetic were best visualised in the work of architect, designer and MIT professor Neri Oxman. The work originated in her research lab, The Mediated Matter Group at MIT, which explores the reciprocity of Nature-inspired design and design-inspired Nature, the synergy of thought and matter, and the science of materials, to enhance the relation between natural and man-made environments. Their research area, Material Ecology, enhances the mediation between objects and environment, between humans and objects, and between humans and the environment. At the Barbican Centre one could view Vespers, a collection of 3D-printed death masks whose designs cultivate new life after death: they contain microorganisms producing pigments and chemical substances such as vitamins, antibodies or antimicrobial drugs. Inhabited by living microorganisms synthetically engineered by Oxman's team, the death masks explore the concept of rebirth, granted to the dead human by the synthetic that outlives her or him.
Fascinated by the interrelation between the organic, the synthetic and the digital, Massive Attack encoded their album Mezzanine in strands of synthetic DNA held in a spray-paint can, pointing to the collision of music and technology and reflecting on contemporary solutions to data storage.
The presence of the medium of sound in relation to computer-driven contemporaneity was also marked by artist and electronic musician Kode9, who presented a newly commissioned sound installation on the golem, opening the show's discourse on mythical, artificial, man-made creatures in the form of an audio essay. Japanese art and technology specialist Daito Manabe of Rhizomatiks and neuroscientist Yukiyasu Kamitani presented Dissonant Imaginary, a collaborative research art project investigating the relationship between sound and images, which used fMRI-based brain decoding technology to generate visual imagery of brain activity as it changes in response to sound. Further, the collective Qosmo invited the visitor to make music together with AI, to point to the fact that the machine is there to extend human creativity, to expand the human creation, which can then be enhanced even further by humans themselves.
AI is, for me, a programmed body that takes the form of a synthetic machine, just as the human body is programmed by body memories orchestrated by the unconscious psyche. Giving in to the unconscious powers, letting go of control, joining the vibrational flow of energy might, in the end, be the only right way to live and to know. The perceptive power of intuition was highlighted in the exhibition, for example, in Resurrecting the Sublime by Christina Agapakis of Ginkgo Bioworks, Alexandra Daisy Ginsberg and Sissel Tolaas. This collaborative work focuses on the human sense of smell, pointing to embodied knowledge, which extends so much further than the cognitive. The artists asked questions about our relationship with nature and the decisions we take, the thoughts we think and the moves we make. Humans, as sentient beings, spend so much time in front of the screen; why are we not keen to know more about the world by incorporating all of the senses we were given at birth? Possibly it is just an interface that needs to be changed. I hope one day, as a curator, I will also know how smell can be documented and stored as digital data, to be shared with you, for example, now, in the ether of the online.
To conclude, it would be good to speculate about the future. Thankfully there is Lawrence Lek, who can imagine and visualise more than anyone else. During the AI show at the Barbican, the artist presented the open-world video game 2065, set in a time when, thanks to mechanistic automation, we no longer have to work and spend the whole day playing video games. The most important element of this idea is that every player starts the game with the same intention: 'they all want to be somewhere else'. I believe this is the message: when one has nothing to do but play, they just want to be somewhere else. Could I compare present-day life to a game? I am not sure, but the desire to escape current reality, to exchange the being of now and here for abstract dreaming, overthinking, or simply overdose and drinking, has been an antidote to having to cope with the material world for quite some part of present-day humanity. I am not at all against virtual reality; I am quite an advocate of multidimensionality. But I believe in the productivity of belonging to worlds other than material reality only when this is done in simultaneity, as expansion rather than mere replacement.
Have you heard that in the 17th century computers were human, or rather, that humans were called computers? During the 20th century, as a result of the havoc wartime wreaked on the male population, human computers were mostly female, calculating, for example, the number of rockets needed to make a plane airborne. To wrap up, it would be good to come back to the origin of AI, to the first algorithm intended to be carried out by a machine, to Ada Lovelace, the first to recognise that the machine could go well beyond mere counting. This extraordinary woman, who gave birth to the first computer algorithm, is believed to have got there thanks to her familiarity with patterns, compositions and repetitions, of textile designs for example. We now know from Lovelace's notes that it was also her sensitivity to and analytic approach to understanding music that led her to sense something not yet known, not seen by the eye of the male. I would like to point out, at the end of this piece, that the origin of Artificial Intelligence lies in the feminine instinct, the unconscious knowledge that moves the body and awakens the brain to fire neurons, so that the mind can welcome innovative ideas like this one, like AI: this lifeless material, this inanimate form, born only of the fertile analytical brain.