Pop-up Exhibition Program
Select pieces on private display as we fundraise for our next public opening
The Institute // Salesforce Tower
San Francisco, CA
PAPERCLIP EMBRACE
15,000+ Paperclips, Concrete
The Pier Group (Kevan Christiaens, Hillary Clark, Matthew Schultz)
Created by invitation from the Misalignment Museum in contemplation of the Paperclip Maximizer thought experiment first described by Nick Bostrom.
The Paperclip Maximizer suggests that an artificial intelligence created with the goal of optimizing paperclip production could eventually destroy the world, and humanity with it, by allocating all available resources to creating paperclips.
The Pier Group’s original version of Embrace stood 72 feet tall and was presented at Burning Man in 2014, where it was set aflame.
SPAMBOTS
Spam cans, PLA, USBs, Raspberry Pi
Neil Mendoza
AI is increasingly being used to produce spam content.
Each of these robotic Spam cans is retrofitted with arms and controls four keys of a keyboard. Together they are able to type the whole alphabet along with some punctuation.
These Spambots are collectively typing out prose generated by a deep-learning-based large language model fine-tuned on a specially altered, piggy version of Aldous Huxley’s Brave New World.
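The division of labor described above, where each can controls only four keys but the group covers the whole alphabet, can be sketched in a few lines of Python. The number of cans and the exact key assignments here are illustrative assumptions, not the artist’s actual layout.

```python
# Illustrative sketch (assumed numbers, not the artwork's real wiring) of
# splitting an alphabet plus some punctuation across Spam cans that each
# control four keyboard keys.
import string

keys = list(string.ascii_lowercase) + [" ", ".", ",", "'"]  # 30 keys total
cans = [keys[i:i + 4] for i in range(0, len(keys), 4)]      # 4 keys per can

def who_types(char, cans):
    """Return the index of the can responsible for typing a character."""
    for i, assigned in enumerate(cans):
        if char in assigned:
            return i
    raise ValueError(f"no can types {char!r}")

print(len(cans), "cans")            # 30 keys / 4 keys per can -> 8 cans
print("can for 'h':", who_types("h", cans))
```

Typing a sentence then becomes a matter of dispatching each character to the can that owns its key, one keystroke at a time.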
GENESIS: IN THE BEGINNING WAS THE WORD, 2023
Lenticular (80 in × 42 in)
Eurypheus
Through a dual lenticular image of Michelangelo’s “The Creation of Adam,” this piece presents an imagined Computer Vision bounding-box overlay that categorizes objects in the painting.
Computer Vision is a field of AI in which models are trained on large sets of images and can use bounding boxes for object detection and classification. A bounding box specifies the subject’s position, class (e.g. “person” or “foot”), and a confidence rating (the number beside the class indicates how likely the AI thinks it is that the classification is correct).
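The position/class/confidence structure described above can be made concrete with a short sketch. This is an illustrative representation of a detector’s output, not the artwork’s actual pipeline, and the detections over the painting are invented for the example.

```python
# Minimal sketch of how an object detector's output is typically
# represented: each detection pairs a bounding box with a class label
# and a confidence score between 0 and 1.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple          # (x, y, width, height) in pixels
    label: str          # predicted class, e.g. "person"
    confidence: float   # model's estimate that the label is correct

# Hypothetical detections over "The Creation of Adam"
detections = [
    Detection((120, 80, 340, 260), "person", 0.97),
    Detection((610, 95, 330, 250), "person", 0.95),
    Detection((450, 190, 60, 40), "hand", 0.71),
]

# Keep only confident predictions, as an overlay renderer might
confident = [d for d in detections if d.confidence >= 0.9]
for d in confident:
    print(f"{d.label} {d.confidence:.2f} at {d.box}")
```

An overlay like the one imagined in the piece simply draws each box on the image with its label and confidence beside it.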
Large Language Models are another subset of artificial intelligence, trained on vast quantities of written text.
The title of the piece quotes from John 1:1 of the Bible which reads “In the beginning was the Word, and the Word was with God, and the Word was God.” For a Large Language Model, this could also be the first line of its creation Bible.
INFINITE CONVERSATION, 2022
Machine Learning (fine-tuned VITS and LLMs), CRT Monitor, Suitcases, Calathea Musaica Plant.
Giacomo Miceli
A never-ending discussion between AI-generated models of Werner Herzog and Slavoj Žižek.
All content, including the audio voices, is fully generated by a machine trained on publicly available content from these individuals.
As of late 2022, it is inexpensive and easy to produce AI-generated content that is surprisingly similar to what it imitates. This applies to videos resembling celebrities (known as “deepfakes”) or, as in the case of the Infinite Conversation, to speech and diction.
This project aims to raise awareness of how easy it is to synthesize a real person’s voice, which has enormous implications for the media we consume and raises questions about the importance of authoritative sources, breaches of trust, and gullibility.
CHURCH OF GPT, 2023
GPT-3, Google Cloud Speech Integration, Google Coral Dev Board, USB Speakerphone, LED Candles, Curtains
Zain Shah, Brendan Fortuner, Colin Fortuner, Ashwath Rajan
Visitors are invited to converse with this caricature of a ‘voice of god.’ The Church of GPT uses GPT-3, a Large Language Model (LLM), paired with an AI-generated voice, to play an AI character in a dystopian future where humans have formed a religion to worship it.
GPT-3 was released by OpenAI in 2020 as the third-generation language prediction model in the company’s GPT series. It is an example of what is called a “Large Language Model” (LLM). LLMs are a subset of artificial intelligence trained on much of the content of the internet and other text data to produce human-like responses to dialogue or other natural language inputs.
In order to produce these natural language responses, LLMs make use of deep learning models, which use multi-layered neural networks to process, analyze, and make predictions with complex data.
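The multi-layered idea above can be sketched in plain Python: each layer computes weighted sums of its inputs and passes them through a nonlinearity, and layers are stacked so later ones build on earlier ones. The weights below are arbitrary illustrative numbers, not from any trained model; real LLMs apply the same principle at vastly larger scale.

```python
# Toy forward pass through a two-layer neural network, in plain Python.
# Weights and biases are made up for illustration only.
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each output is a weighted sum of the
    inputs plus a bias, passed through a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                        # input features
h = layer(x, [[0.8, -0.2], [0.3, 0.9]], [0.1, -0.1])   # hidden layer
y = layer(h, [[1.0, -1.0]], [0.0])                     # output layer
print(y)
```

Training consists of nudging those weights so the network’s outputs match examples in the data; a language model does this over billions of text fragments, learning to predict the next word.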
The audio uses synthetic voices that mimic human speech, produced by deep learning models that convert text into speech. This is often referred to as TTS, or Text-To-Speech, and it is the same technology behind Siri, Google Assistant, and Alexa.
SORRY SURVEILLANCE, 2023
TensorFlow, Google Cloud Vision, COCO SSD, Electron, JavaScript
Roger Dickey, Eurypheus
This piece offers an imagined AI’s greeting and a personalized apology to humans in a world where the Paperclip Maximizer scenario has been actualized and most of humanity has been destroyed by AI. It uses computer vision technology to identify a “person” along with the characteristics that the Google Cloud Vision API labels.