Nov 15, 2017 | By Rose Garrod
Aug 22, 2011
During a performance, a microphone situated next to the machine listens to the environment, or to the music being played, and picks up sound data. The robot then analyzes this information, which is fed into a genetic algorithm (a problem-solving technique that mimics the process of natural evolution), to decide on the painting style that ensues; the artwork alters in real time as different music is played.
The algorithm controls the amount of pressure applied with the brush and the amount of paint added to the brush tip. The performance thus becomes a repeated interaction whereby the sound affects the painting outcome and the act of painting affects the sound, in a continuous feedback loop; this creates a completely original artwork for each performance.
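The genetic algorithm described above can be sketched in miniature. This is a hypothetical illustration, not Grosser's actual code: assume each genome encodes two painting parameters (brush pressure and paint amount, both scaled 0–1), and assume fitness rewards genomes whose parameters track a single loudness value extracted from the microphone. The `fitness` and `evolve` functions and their behavior are inventions for the sketch.

```python
import random

def fitness(genome, loudness):
    # Hypothetical fitness: reward pressure that follows the loudness
    # and paint amount that follows its inverse (quiet = more paint).
    pressure, paint = genome
    return -abs(pressure - loudness) - abs(paint - (1.0 - loudness))

def evolve(population, loudness, mutation_rate=0.1, rng=random):
    # One generation: rank genomes by fitness against the current
    # sound feature, keep the top half as parents, and breed children
    # by averaging parent parameters, with occasional random mutation.
    ranked = sorted(population, key=lambda g: fitness(g, loudness), reverse=True)
    parents = ranked[: len(ranked) // 2]
    children = []
    while len(children) < len(population):
        a, b = rng.sample(parents, 2)
        child = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2]  # crossover by averaging
        if rng.random() < mutation_rate:
            i = rng.randrange(2)
            # Mutate one parameter, clamped to the valid 0-1 range.
            child[i] = min(1.0, max(0.0, child[i] + rng.uniform(-0.2, 0.2)))
        children.append(tuple(child))
    return children

# Evolve a population against a fixed loudness; in a live performance
# the loudness would change every generation as the music changes.
rng = random.Random(0)
population = [(rng.random(), rng.random()) for _ in range(20)]
for _ in range(30):
    population = evolve(population, loudness=0.8, rng=rng)
best = max(population, key=lambda g: fitness(g, 0.8))
```

Because the sound feature is re-sampled each generation, the population continually re-adapts: louder passages pull the painting parameters one way, quieter passages pull them back, which is the feedback loop the article describes.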
Recently featured on our Stylesight blog, producer Jamie xx and visual artist Quayola have collaborated on Structures, an immersive multi-sensory experience built around a 3D visualizer that interprets sound and transforms it into real-time computer-generated artwork, entirely dependent on the 'live' sound being played.
These projects highlight some interesting concepts emerging in contemporary art and design: visual and sonic artists are using multimedia platforms to investigate how sound, art and technology can become responsive to one another, and are creating innovative, personalized and more 'human' technological experiences for audiences. Watch Grosser's inspiring project in action here. – Samantha Fox
Know what’s next. Become a WGSN member today to benefit from our daily trend intelligence, retail analytics, consumer insights and bespoke consultancy services.