Cognitive Infrastructures Studio
Synthetic Intelligence Design-Development Studio
Central Saint Martins, London
June 21st – July 19th, 2024
What are Cognitive Infrastructures?
As AI becomes both more general and more foundational, it shouldn’t be seen as a disembodied virtual brain. It is a real, material force, embedded in the active, decision-making layers of real-world systems. As AI becomes infrastructural, infrastructures become intelligent.
As artificial intelligence becomes infrastructural, and as societal infrastructures concurrently become more cognitive, the relation between AI theory and practice needs realignment. Across scales – from world-datafication and data visualization to users and UI, and back again – many of the most interesting problems in AI design are still embryonic.
Natural Intelligence emerges at environmental scale and in the interactions of multiple agents. It is located not only in brains but in active landscapes. Similarly, artificial intelligence is not contained within single artificial minds but extends throughout the networks of planetary computation: it is baked into industrial processes; it generates images and text; it coordinates circulation in cities; it senses, models and acts in the wild.
This represents an infrastructuralization of AI, but also a ‘making cognitive’ of both new and legacy infrastructures. These are capable of responding to us, to the world and to each other in ways we recognize as embedded and networked cognition.
AI is physicalized, from user interfaces on the surfaces of handheld devices to deep below the built environment. As we interact with the world, we retrain model weights, making our actions newly reflexive: to perform an action is also to represent it within a model. To play with the model is to remake the model, increasingly in real time.
What kind of design space is this? What does it afford, enable, produce, and delimit? When AIs are simultaneously platforms, applications and users, what are the interfaces between society and its intelligent simulations? How can we understand AI Alignment not just as AI bending to society, but as societies evolving in relation to AI? What kinds of Cognitive Infrastructures might be revealed and composed?
How might this frame human-AI interaction design? What happens when the production and curation of data is for increasingly generalized, multimodal, and foundational models? How might the collective intelligence of generative AI make the world not only queryable, but re-composable in new ways? How will simulations collapse the distances between the virtual and the real? How will human societies align toward the insights and affordances of artificial intelligence, rather than AI bending to human constructs? Ultimately, how will the inclusion of a fuller range of planetary information, beyond traces of individual human users, expand what counts as intelligence?
Individual users will not only interact with big models; combinations of models will also interact with overlapping groups of people. Perhaps the most critical and unfamiliar interactions will unfold between different AIs, without human intervention.
Cognitive Infrastructures are forming, framing, and evolving a new ecology of planetary intelligence.
2024 Summer Studio
Cognitive Infrastructures is the theme of Antikythera’s 2024 Synthetic Intelligence Summer Studio in London, which will run from June 21st to July 19th.
Studio Researchers include Serpentine and King's College London PhD researcher Alasdair Milne; artist and programmer Cezar Mocan; Chloe Loewith, a graduate of the Cambridge Centre for the Future of Intelligence MPhil in Ethics of AI, Data and Algorithms; École normale supérieure PhD researcher and lecturer Daniele Cavalli; writer, filmmaker and researcher Gary Zhexi Zhang; École des Ponts ParisTech computer vision researcher Ioannis Siglidis; University of the Arts London interdisciplinary artist and technologist Iulia Ionescu; Utrecht University MSc student and TNO research intern Ivar Frisch; Google DeepMind research engineer and University College London PhD student Jackie Kay; University of the Arts London lecturer and technical artist Jenn Leung; Google software engineer and MIT Media Lab and CMU research affiliate Michelle Chang; AI/ML researcher Philip Moreira Tomei; Royal College of Art and Hong Kong Polytechnic University researcher in artificial and distributed intelligence Sonia Bernac; University of Oxford PhD student in theoretical ML Tyler Farghly; and Google DeepMind Senior AI Researcher Winnie Street.
Affiliate Researchers include Google DeepMind VP of Technology and Society Blaise Agüera y Arcas, Google DeepMind simulations researchers Joel Z Leibo and Sasha Vezhnevets, Santa Fe Institute astrophysicist Sara Imari Walker, science fiction writer Chen Qiufan, and Cambridge University Centre for the Study of Existential Risk researcher Thomas Moynihan.
The studio will be based in King’s Cross at Central Saint Martins. Students from CSM’s MA Narrative Environments program will support the studio researchers and production.
Design & Philosophy for Speculative Synthetic Intelligence
Rather than applying philosophy to ideas about technology, Antikythera derives and develops philosophy from direct encounters with technology. Rather than approach Artificial Intelligence as the imitation of the human, Synthetic Intelligence starts with the emerging potential of machine intelligences.
Antikythera approaches the issues of Synthetic Intelligence through several core principles:
Computation is not just calculation, but the basis of a new global infrastructure of planetary computation remaking politics, economics, culture and science in its image.
The ongoing emergence of AI represents a fundamental evolution of that global infrastructure, from stacks based on procedural programming architectures, to ones based on training, serving and interacting with large models: from The Stack to AI Stack.
Machine intelligence is less a discrete artificial brain than the pervasive animation of distributed information sensing and processing infrastructures.
“Antikythera” refers to computation as both an instrumental technology, one that allows us to do new things, and an existential technology, one that discloses and reveals underlying conditions.
As existing technologies have outpaced legacy theory, philosophy is not something to be applied to or projected upon technology, but something to be generated from direct, exploratory encounters with technology.
Studio Briefs
Antikythera’s Cognitive Infrastructures studio will unfold from several interrelated speculative briefs for intellectual and practical exploration:
CIVILIZATIONAL OVERHANG AND PRODUCTIVE DISALIGNMENT: AI overhang affects not only narrow domains but also, arguably, civilizations, and how they understand and register their own organization: past, present, and future. As a macroscopic goal, simple “alignment” of AI to existing human values is inadequate and even dangerous. The history of technology suggests that the positive impacts of AI will not arise through its subordination to or mimicry of human desires. Productive disalignment, bending society toward the fundamental insights of AI, is just as essential.
HAIID: HUMAN-AI INTERACTION DESIGN: HAIID is an emerging field, one that contemplates the evolution of Human-Computer Interaction in a world where AI can process complex psychosocial phenomena. Anthropomorphization of AI often leads to weird “folk ontologies” of what AI is and what it wants. Drawing on perspectives from a global span of cultures, mapping the odd and outlier cases of HAIID gives designers a wider view of possible interaction models.
TOY WORLD POLICIES: Toy Worlds allow AIs to navigate and manipulate low-dimensional virtual spaces as analogues for higher-dimensional real-world spaces, each standing in for the other in sequence. Here, learning means adjusting the policies that focus and adapt an AI’s learned expertise. The sim-to-real gap can be rethought in two ways: recovering what is lost between low and high dimensions, and the agnostic transfer of policy from one domain to another.
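One way to make "agnostic transfer of policy" concrete is a toy sketch: tabular Q-learning in a small gridworld, where the policy is learned over goal-relative features rather than raw coordinates, so it can be applied zero-shot in a larger world. Everything here (the gridworld, features, and hyperparameters) is an illustrative assumption, not a method from the brief.

```python
import random

ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def sign(v):
    return (v > 0) - (v < 0)

def features(pos, goal):
    # Goal-relative features abstract away world size, so a policy
    # learned in a toy world can transfer to a larger one.
    return (sign(goal[0] - pos[0]), sign(goal[1] - pos[1]))

def step(pos, action, size):
    dx, dy = ACTIONS[action]
    return (min(max(pos[0] + dx, 0), size - 1),
            min(max(pos[1] + dy, 0), size - 1))

def train(size=4, episodes=3000, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)
    goal, Q = (size - 1, size - 1), {}
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(4 * size):
            s = features(pos, goal)
            a = (random.choice(list(ACTIONS)) if random.random() < eps
                 else max(ACTIONS, key=lambda b: Q.get((s, b), 0.0)))
            nxt = step(pos, a, size)
            r = 1.0 if nxt == goal else 0.0
            ns = features(nxt, goal)
            best_next = max(Q.get((ns, b), 0.0) for b in ACTIONS)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
            pos = nxt
            if pos == goal:
                break
    return Q

def rollout(Q, size, max_steps=60):
    goal, pos, steps = (size - 1, size - 1), (0, 0), 0
    while pos != goal and steps < max_steps:
        s = features(pos, goal)
        pos = step(pos, max(ACTIONS, key=lambda b: Q.get((s, b), 0.0)), size)
        steps += 1
    return pos == goal, steps

Q = train(size=4)                 # learn in a 4x4 toy world
reached, steps = rollout(Q, 10)   # transfer zero-shot to a 10x10 world
```

The feature abstraction is the whole trick: because the policy only ever sees "which direction is the goal," the 4x4 and 10x10 worlds are indistinguishable to it, which is one reading of the low-to-high-dimensional transfer the brief describes.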
EMBEDDINGS VISUALIZATION: The predictive intelligence of LLMs is based on the adjacencies of word embeddings in mind-altering, complex vector spaces. Different ways of visualizing embeddings are different ways of comprehending machine intelligence. Descriptive and generative models for this can be drawn from neural network and brain visualization, complex systems modeling, agent interaction mapping, semantic trees, and more.
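The raw material any embedding visualization works from is adjacency. A minimal sketch, using hand-made toy vectors (the words, dimensions, and values are invented for illustration; real LLM embeddings have hundreds or thousands of dimensions): cosine similarity recovers the nearest-neighbor structure that 2D and 3D projections then spatialize.

```python
import math

# Toy 3-dimensional "embeddings" (invented values for illustration).
EMBEDDINGS = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.85, 0.75, 0.20],
    "apple":  [0.10, 0.05, 0.90],
    "banana": [0.15, 0.00, 0.85],
}

def cosine(u, v):
    # Cosine similarity: the standard adjacency measure in embedding space.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word):
    # The word's closest neighbor -- the raw structure that
    # visualizations (projections, semantic trees) make visible.
    others = (w for w in EMBEDDINGS if w != word)
    return max(others, key=lambda w: cosine(EMBEDDINGS[word], EMBEDDINGS[w]))
```

Here `nearest("king")` returns `"queen"`: the two vectors point in nearly the same direction, while the fruit vectors occupy a different region of the space.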
GENERATIVE AI AND MASSIVELY-DISTRIBUTED PROMPTING: For generative AI, the distances between training data and prompt engineering can seem, at different times, to be vast or tiny. To train or to prompt are both forms of interaction with large models. As the collective intelligence of culture is transformed into weights, weights are activated by prompts shared across domains to produce new artifacts. Though interface culture tends to individuate interactions with models, there are many ways to design massively-distributed prompting, producing collective artifacts that mirror the societal-scale intelligence of training data.
MULTIMODAL LLM INTERFACES: The design space of interaction with multimodal LLMs is not limited to individual or group chat interfaces, but can include a diverse range of media inputs that can be combined to produce a diverse range of hybrid outputs. Redefining “language” as that which can be tokenized not only breaks down genre and media but multiplies and integrates forms of sensing (sight, sound, speech, text, movement, etc.). New interfaces may allow users both comprehension and composition of those hybrids.
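Read literally, "language is that which can be tokenized" implies a shared token space for any serializable medium. A deliberately minimal sketch (real multimodal tokenizers use learned codebooks and patch embeddings, not raw bytes; the separator token here is made up):

```python
def tokenize(data: bytes) -> list[int]:
    # A byte-level vocabulary of 256 tokens: any medium that can be
    # serialized to bytes lands in the same token space.
    return list(data)

text_tokens  = tokenize("a red square".encode("utf-8"))
pixel_tokens = tokenize(bytes([255, 0, 0, 255, 0, 0]))  # two RGB "pixels"

# Interleave modalities into one hybrid sequence a single model
# could, in principle, ingest or emit.
hybrid = text_tokens + [256] + pixel_tokens  # 256: a made-up separator token
```

The point of the sketch is only that once everything is a token stream, "media" becomes an interface-level distinction rather than a model-level one.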
DATA PROVENANCE AND PROVIDENCE: THE GOOD, THE POISONED, AND THE COLLAPSED: The future utility of LLMs as cognitive infrastructures may be undermined by model collapse caused by retraining on outputs, and model degradation caused by training on poisoned data. Like the Ouroboros, the model eats its own tail. Meanwhile, differentiation between human-generated and model-generated data and artifacts will become more difficult and complex. High-quality domain-specific data is largely private and/or privatized and so not generally available for socially-useful models. Will synthetic data and techniques like federated learning become more essential in ensuring data quality, and if so, what are the necessary systems?
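The collapse dynamic can be made concrete with a toy simulation (the distribution, sample size, and generation count are illustrative assumptions): fit a Gaussian to a dataset, "generate" a new dataset from the fit, and repeat. Each refit on model outputs loses tail mass, and the estimated spread drifts toward zero.

```python
import random
import statistics

random.seed(0)

def one_generation(samples):
    # "Train" a model (fit a Gaussian) on the previous generation's
    # outputs, then "generate" a fresh dataset from that model.
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

# Generation 0: "human" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(20)]
initial_std = statistics.pstdev(data)

for _ in range(200):  # retrain on model outputs, 200 generations
    data = one_generation(data)

final_std = statistics.pstdev(data)
# The distribution narrows: with small samples, the fitted variance
# performs a downward-drifting random walk as the model eats its tail.
```

The small sample size (20) exaggerates the effect for illustration; larger datasets collapse more slowly, which is part of why the problem is easy to miss until it compounds.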
THE PLANETARY ACROSS HUMAN AND INHUMAN LANGUAGES: The planetary computation we have today is not planetary enough. The tokenization of collective intelligence is limited by how the most important LLMs are trained largely on English, and on English produced by a relatively small slice of humans, who are themselves a small slice of the information-producing and information-consuming forms of life. Given the role of planetary AI in ecological sensing, monitoring and governance, not to mention fundamental science, the prospect of “organizing all the world’s information” takes on renewed urgency and complexity.