In their 1944 book, Dialectic of Enlightenment, Theodor Adorno and Max Horkheimer offered a critique of the culture industry, accusing it of manufacturing a system in which people no longer need to be discerning about the cultural products they consume (music, books, films, television), and producers are no longer inventive or imaginative about the cultural products they create. In their words, differentiations between “A and B films or stories in magazines in different price ranges depend not so much on their subject matter, as on classifying, organising, and labelling consumers.” In this regard, there is nothing left for the consumer to classify; producers have done it for them. This was one of the earliest accounts of the commodification of culture.
Almost 80 years later, these consequences of the culture industry remain relevant, and one may argue that they have been further amplified by the conditions of the digital age. The digitisation of cultural products, together with the global adoption of the internet and the plethora of distribution platforms built on top of it, has led to two key developments in music commodification. The first is the invention of the digital playlist. This transformed the primary format of consumption from long-form media (albums) to collections of short-form media (songs) drawn from a seemingly bottomless pool of choices. In doing so, the power of curation shifted away from the artist and towards the consumer and the platform. The second is the ability to track and amass vast amounts of data on consumers’ habits as they interact with these distribution platforms – where were they when they listened to this song? When do they usually listen to this playlist? What other songs do people listen to during this activity? While the invention of “playlisting” gave rise to the “digital influencer” (i.e. tastemaker), advancements in data technology allowed algorithms to become the ultimate influencers. As Shiva Rajaraman, former VP of Product at Spotify, said of the Spotify Moments project, “Instead of orienting around this idea of having music which you put in a library, we orient more around your life.”
In this project, which aims to provide satirical commentary on the state of vibe capitalism, I created an algorithm that attempts to generate unique music from a person’s unique vibe. The person takes a 3-second video of themselves showing off their vibe in front of a webcam; the algorithm processes this video and composes a tune, which is played back to them. The algorithm is a black box to the person, and they are told that each song is unique to each individual’s vibe. However, as they explore songs generated for other people in the past, they may realise that the songs are quite similar and that the algorithm uses only broad-stroke features of each person’s video to write each song.
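On the capture side, the 3-second clip can be recorded entirely in the browser with the standard getUserMedia and MediaRecorder APIs. The sketch below is a minimal illustration of how that step might look; the helper name recordVibeClip and everything apart from the 3-second duration are assumptions for illustration, not the project’s actual code.

```javascript
// Sketch: record a 3-second webcam clip in the browser and return it as a Blob.
// recordVibeClip is a hypothetical helper name; only the 3-second duration
// comes from the project description.
async function recordVibeClip(durationMs = 3000) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);

  const clip = new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((track) => track.stop()); // release the camera
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
  });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  return clip; // resolves once recording stops, ready to send for analysis
}
```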
The algorithm makes use of the Google Vision API to track head movements and perform sentiment analysis on the user’s facial expressions. Sentiment prediction is output as a probability distribution over Joyful, Sorrowful, Surprise, Anger, and Neutral. The predicted sentiments are used to select and re-sample from a bank of sounds and soundscapes I created using a combination of analogue hardware (Prophet 6, Erica Synths Bassline, Eventide Space Reverb Pedal) and a variety of digital plugins. At runtime, the samples are pre-loaded into Tone.js, a JavaScript audio library, which is used to trigger and process them in real time. Parameters that modulate the quality and timing of the sounds are determined by the user’s head movements. Although the sound bank remains unchanged, no two vibes are alike! As the Vibe Machine is used, some of the code and research text eventually “leak” onto the page.
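To make the pipeline concrete, here is a minimal sketch of how the analysis results might drive playback with Tone.js. The five sentiment labels come from the description above; the sample file names, the choice to pick the single most likely sentiment, and the mapping of head movement onto playback rate and filter cutoff are illustrative assumptions, and the Vision API response is reduced to a plain object passed in by the caller.

```javascript
import * as Tone from "tone";

// Sketch only: sentiment labels match the project description, but the file
// names, the "pick the most likely sentiment" rule, and the head-movement
// parameter mapping are assumptions made for illustration.
const SAMPLE_BANK = {
  Joyful:    "samples/joyful-pad.wav",
  Sorrowful: "samples/sorrowful-drone.wav",
  Surprise:  "samples/surprise-stab.wav",
  Anger:     "samples/anger-bass.wav",
  Neutral:   "samples/neutral-texture.wav",
};

// Pick the sample for the most likely sentiment, e.g.
// { Joyful: 0.7, Sorrowful: 0.05, Surprise: 0.1, Anger: 0.05, Neutral: 0.1 }.
function pickSample(sentiments) {
  const [topLabel] = Object.entries(sentiments)
    .sort(([, a], [, b]) => b - a)[0];
  return SAMPLE_BANK[topLabel];
}

// headEnergy is a rough 0..1 measure of how much the head moved during the
// clip; it modulates the filter cutoff and playback rate of the sample.
async function playVibe(sentiments, headEnergy) {
  await Tone.start(); // the audio context must be resumed by a user gesture

  const filter = new Tone.Filter(400 + headEnergy * 4000, "lowpass").toDestination();
  const player = new Tone.Player().connect(filter);

  await player.load(pickSample(sentiments));
  player.playbackRate = 0.75 + headEnergy * 0.5; // calmer vibe -> slower playback
  player.start();
}

// Example: a mostly joyful, fairly animated vibe.
// playVibe({ Joyful: 0.7, Sorrowful: 0.05, Surprise: 0.1, Anger: 0.05, Neutral: 0.1 }, 0.6);
```

The point of a sketch like this is that the sound bank itself never changes; only the selection and the modulation parameters vary from person to person, which is exactly what lets two superficially “unique” vibes end up sounding so alike.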