The ‘Computacional_Synesthesia Project: between computational vision and algorithmic composition’ comprises a series of speculations built around the concept of signal transduction between digital information and sensory stimuli. In this work we subvert the concept, which originally refers to the transfer of genetic material between cells through an intermediary, and use it as a ‘symbolic translation’ between binary code (digital images) and audio/musical parameters (pitch, duration, amplitude and timbre). The compositional experiments also engage the notion of synesthesia between vision and hearing, since they produce a kind of affective translation between visual and audio languages.
An example is the compositional process in the work anthropophagy (a_person_sitting_on_a_chair), which rests on two axes. First, we symbolically translated the pixel vectors of the images into numerical values, which were then processed by a sound synthesizer. Second, we took the labels produced by commercial artificial-intelligence content classifiers (such as those from Google, Amazon and Microsoft) and used them as search terms in these same companies’ search engines. The labels assigned by computer vision were obtained through the project Art Decoder – a script developed by Gabriel Pereira, Bernardo Fontes and Rafael Tshu.
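The first axis – the symbolic translation of pixel vectors into synthesis parameters – can be illustrated with a minimal sketch. The actual mapping used in the project is not specified here, so the ranges below (MIDI pitch 36–96, amplitude 0–1, duration 0.05–0.5 s) and the choice of assigning the red, green and blue channels to pitch, amplitude and duration respectively are assumptions for illustration only.

```python
def pixel_to_note(r, g, b):
    """Map one RGB pixel (0-255 per channel) to (pitch, amplitude, duration).

    Hypothetical mapping: red drives pitch, green drives amplitude,
    blue drives duration. These ranges are illustrative assumptions,
    not the project's actual parameters.
    """
    pitch = 36 + round(r / 255 * 60)       # red  -> MIDI pitch 36..96
    amplitude = g / 255                    # green -> amplitude 0..1
    duration = 0.05 + (b / 255) * 0.45     # blue -> duration 0.05..0.5 s
    return pitch, amplitude, duration


def image_to_score(pixels):
    """Translate a flat vector of RGB pixels into a sequence of notes."""
    return [pixel_to_note(r, g, b) for (r, g, b) in pixels]


# Example: a tiny 2x2 "image" flattened into a pixel vector.
score = image_to_score([(0, 0, 0), (255, 128, 64),
                        (10, 200, 30), (255, 255, 255)])
print(score[0])  # (36, 0.0, 0.05): black gives the lowest, silent, shortest note
```

Each resulting tuple could then be fed to any synthesizer (for instance, rendered as sine tones or sent as MIDI events), which is the step the project delegates to its sound synthesizer.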