Hi <<First Name>>! You've attended one of my Creative AI meetup events (or visited my website elluba.com) and opted in to receive occasional emails with updates on creative applications of artificial intelligence in art, music, design and beyond. Here's issue #10 😀
🎨 Art 🎨
The AI Art gallery that I curate every year for the NeurIPS Creativity Workshop includes 100+ new art, music, text and design projects; check it out! This year, I was also part of the jury for the AI Artathon in Saudi Arabia, where you can see the winners as well as the hackathon and bootcamp projects. Meanwhile, the Lumen Prize awarded its AI Prize to Christian Mio Loclair's AI-designed marble sculpture Helin, and Casey Reas & Jan St. Werner won in the Moving Image category with Compressed Cinema, whose videos were generated by AI.
New art projects include Golan Levin and Lingdong Huang's Ambigrammatic Figures, legible both upside-down and right-side up; Joel Simon and Tal Shiri's Derivative Works, AI-made image collages of faces; Philipp Schmitt's Declassifier, which overlays dataset images on the objects detected in a photograph; Everest Pipkin's Lacework, using AI to reinscribe a video dataset of everyday actions; Terence Broad's Teratome, made with network bending techniques; Daniel Ambrosi's Abstract Dreams; Vishal Kumaraswamy's Swaayattate (Autonomy), a set of films on human-machine relationships and gender, caste and labour; driessens & verstappen's Pareidolia, where facial recognition finds faces in grains of sand; Mario Klingemann's Appropriate Response, a series of changing letters exploring meaning, expectation and our relationship with AI; Guillaume Slizewicz's I can remember, about the creative potential of image analysis; Carrie Sijia Wang's An Interview with ALEX, a simulation of a job interview with an AI HR manager; Holly Grimm's Aikphrasis Project, where artists respond to AI-generated text; Alexander Reben's Am I AI (The New Aesthetic), artworks dreamed up by AI and produced in real life by the artist or others; Nina Rajcic's Mirror Ritual, a project on the human-machine co-construction of emotion; Vibe Check by Lauren Lee McCarthy & Kyle McDonald, which enacts another control system through the passive observation of our neighbours; and Entangled Others & Sofia Crespo's Artificial Remnants, a study of AI-generated insects.
Research. Devi Parikh predicted a creator's preferences from interactive generative art; Simon Colton covered adapting and enhancing evolutionary art for casual creation; Ramya Srinivasan reviewed biases in generative art through the lens of art history; Jon McCormack looked at understanding aesthetic evaluation with deep learning; Byunghwee Lee dissected landscape art history with information theory; Ziv Epstein considered who gets credit for AI art; Jason Bailey thought about predicting the price of art at auction; Xi Wang quantified ambiguities in artistic images; Amy Zhao synthesized time-lapse videos of paintings; Terence Broad optimised generated images for fakeness; Mark Hamilton's MosAIc found hidden links between works of art; Aaron Hertzmann thought about why line drawings work; and Lou Safra attempted to track changes in trustworthiness by analysing facial cues in paintings.
New music projects. Together with the Sister City hotel and Microsoft, Björk released the AI-powered composition Kórsafn, based on her choral arrangements and the sky. Jennifer Walshe's A Late Anthology of Early Music Vol. 1: Ancient to Renaissance is a speculative history of early Western music made using machine learning. Robert Laidlow made Alter using several generative models. Moisés Horta Valenzuela's Transfiguración applied the musician's own style to Antonio Zepeda's pre-Hispanic sounds. Dadabots made an infinite bass solo in the style of YouTuber Adam Neely. Shimon the Robot released an album on Spotify, music videos and a demonstration of his new rapping talent. Everest Pipkin made Shell Song, an interactive audio-narrative game exploring voice deep-fakes and their datasets.
Kjetil Golid's Sonant creates generative music based on random walks, Beat Sage generates custom beat maps for songs, Trap Factory makes beats, and thisdx7cartdoesnotexist produces preset cartridges for the Yamaha DX7. AiMi plays electronic music that adapts to you and your energy. LifeScore created a dynamic soundtrack to 'Artificial' based on the audience's reactions in the chat channel. CantoCocktail is an interactive karaoke generator, composing new medleys from excerpts of 120 Cantopop songs. Google's Blob Opera lets you generate operatic sounds using four blob figures. Infinite Bad Guy brings together fan YouTube covers to make a never-ending music video; the studio IYOIYO shares how they did it. Qosmo developed an automatic music generation system for Shiseido.
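For the curious, the random-walk idea behind generative music like Sonant is easy to sketch: pick a starting pitch and let each note step a small distance up or down a scale. Here's a minimal Python illustration; the pentatonic scale and step choices are my own assumptions for the sketch, not Sonant's actual parameters:

```python
import random

# Minimal random-walk melody: each note moves at most one step up or
# down within a pentatonic scale. Scale and step sizes are illustrative
# assumptions, not taken from Sonant.
SCALE = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]  # C pentatonic, MIDI note numbers

def random_walk_melody(length=16, start_index=4, seed=None):
    rng = random.Random(seed)
    idx = start_index
    melody = []
    for _ in range(length):
        melody.append(SCALE[idx])
        # step -1, 0, or +1 positions along the scale, clamped to its range
        idx = min(len(SCALE) - 1, max(0, idx + rng.choice([-1, 0, 1])))
    return melody

print(random_walk_melody(length=8, seed=42))
```

Because consecutive notes stay close on the scale, the output meanders rather than jumps, which is what gives random-walk music its wandering character.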
Research. Facebook released Demucs and Northwestern University released Cerberus, both for music source separation. Earlier in the year, OpenAI released Jukebox, which generates music as raw audio in a variety of genres and artist styles; here it is continuing the Windows 95 startup sound and Take On Me. Google Magenta has been developing user-friendly music experimentation tools like Tone Transfer and Lo-Fi Player. Nao Tokui developed M4L.RhythmVAE, which lets musicians train and use a rhythm generation model within music production software. Ryan Louie created a set of novice-friendly AI-steering tools for AI music creation. Yu-Siang Huang developed Pop Music Transformer, a model that generates expressive pop piano music. Yi Ren's DeepSinger generates singing voices in Chinese and English by training on data from music websites. Algoriddim released Neural Mix, which isolates beats, instruments and vocals in real time. Neural Beatbox generates beats and rhythms based on voices and claps recorded through a webcam. Sander Dieleman wrote about music generation in the waveform domain; Jean-Pierre Briot looked at history, concepts and trends in music generation; Jeremy Freeman surveyed AI music from the 1950s onwards; and François Pachet looked at the last 10 years of AI-assisted music composition.
Google Experiments showcase details in Japanese scrolls, and you can get AI-designed manga in the style of Osamu Tezuka. Asahi used AI-powered design for product packaging with a focus on ‘objectivity’ and ‘originality’. Cunicode created unexpected materials from texture and normal data. Javier Ideami visualised loss landscapes. CookGAN generates meal images from an ingredient list. Acne Studios collaborated with Robbie Barrat on their Fall/Winter 2020 collection. A ceramicist shared her experiments with AI, and Jon McCormack covered design considerations for real-time collaboration with creative AI. This New York Times article lets you play with generating fake human faces.
Tools. I made a list of 60+ machine learning tools for The Creative AI Lab; here is my interview providing an overview. There's now AI: A Museum Planning Toolkit, produced by Oonagh Murphy, to which I contributed a glossary with project examples. There is Anton Marini's Synopsis, a suite of open-source software for computational cinematography; Hugging Face's Transformers, with state-of-the-art NLP models and demos; Kritik Soman's machine learning plugins for GIMP; Pose Animator, which enlivens an illustration from real-time person movements; and PEmbroider, an open library for computational embroidery.
Andrej Karpathy made a minimal GPT training library in PyTorch. Anand Pawara made a real-time image animation application in OpenCV. Anastasia Opara made a genetic algorithm project for drawing. Sensity (formerly DeepTrace) has a deepfake detection tool. Elad Richardson released the official implementation of pixel2style2pixel, and Clova AI Research theirs of StarGAN v2, for diverse image synthesis across multiple domains. There's already an implementation by lucidrains of the newly released DALL-E, OpenAI's text-to-image transformer.
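The evolutionary-drawing idea behind projects like Opara's can be sketched as a loop that proposes random strokes and keeps only those that bring the canvas closer to a target image. The toy below uses rectangle fills on a tiny grayscale grid; it's a hill-climbing sketch of the general technique, not Anastasia Opara's actual implementation:

```python
import random

# Toy evolutionary drawing: approximate a grayscale target by proposing
# random rectangle "strokes" and keeping only those that reduce the
# pixel-wise error. A hill-climbing sketch of the idea, not Opara's code.

def error(canvas, target):
    return sum((c - t) ** 2 for c, t in zip(canvas, target))

def evolve(target, width, height, steps=500, seed=0):
    rng = random.Random(seed)
    canvas = [0.0] * (width * height)
    for _ in range(steps):
        # propose a random rectangle with a random shade
        x0, y0 = rng.randrange(width), rng.randrange(height)
        w, h = rng.randint(1, width - x0), rng.randint(1, height - y0)
        shade = rng.random()
        trial = canvas[:]
        for y in range(y0, y0 + h):
            for x in range(x0, x0 + w):
                trial[y * width + x] = shade
        # selection: keep the stroke only if it lowers the error
        if error(trial, target) < error(canvas, target):
            canvas = trial
    return canvas

# target: left half dark, right half light, on an 8x8 canvas
W = H = 8
target = [0.1 if x < W // 2 else 0.9 for y in range(H) for x in range(W)]
result = evolve(target, W, H)
```

Real projects replace the rectangles with brush strokes and the hill climber with a population-based genetic algorithm, but the propose-evaluate-select loop is the same.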
Calls for papers: ISMIR has a call for papers for its special issue on AI and Musical Creativity until 28th February; the second Workshop on Human-AI Co-Creation with Generative Models has a paper and demo submission deadline of 15th January; ICCC has a call for papers on computational creativity until 5th March; Urban Assemblage: The City as Architecture, Media, AI and Big Data has a call for abstracts until 1st April; SIGGRAPH has an art papers track (submit by 15th January); and the Conference on AI Music Creativity has a call for papers and music until 1st April.
Open calls: Science Gallery Detroit seeks artworks, interventions and research projects for its Tracked and Traced exhibition; apply by 5th February. Adobe has a Creative Residency Community Fund for visual creators. MediaFutures is looking for artists and startups eager to reshape the media value chain through applications of data and user-generated content; apply by 28th January. The AI Song Contest is back; submit your songs by 18th May. The EuropeanaTech Challenge seeks proposals for the assembly of suitable AI/ML datasets until 31st January.
To see in 2021: In Spain, LABoral's exhibition When the butterflies of the soul flutter their wings, on art, neuroscience and AI, is open until 24th April. London's Furtherfield Gallery is exhibiting UNINVITED, an art installation by Nye Thompson and UBERMORGEN that looks at what happens when networked surveillance tools and AI capabilities get sick in the head, until 31st January. Berlin's CTM will feature Apotome, a generative music environment for microtonal tuning systems created by Khyam Allami and Counterpoint. In Melbourne, Refik Anadol's Quantum Memories is on view for the NGV Triennial, while RMIT Gallery plans the Future U exhibition, featuring creative responses to the potential implications of rapid developments in AI and biotechnology, tentatively scheduled from 25th June until 9th October.
In the US, Honor Fraser Gallery explores the relationship between the visual arts and the cybernetic world in Thin as Thorns, In These Thoughts in Us, until 20th February. bitforms gallery is hosting Alchemical, a collaborative exhibition by Casey Reas and Jan St. Werner, until 14th February. Curatorial A(i)gents, a series of machine-learning-based experiments with museum collections and data, is scheduled to run for nine weeks beginning 1st February at Harvard's metaLAB.