SoundScape

Description

Soundwaves as a platform for communication of three-dimensional data sets - SoundScape.

  • Concluded

Overview

Duration

Aug 2023 - Dec 2023

Type of action

ERA-NET Cofund

Project Abstract

The purpose of SoundScape was to communicate three-dimensional research data exclusively through sound. This contrasts with traditional dissemination, which typically relies on visual aids such as images, graphs, or tables. With this project, we wished to create a sound experience that linked directly to tangible three-dimensional data sets and sparked interest in understanding the underlying science. This led to six concrete objectives, all of which were met.

To collect three-dimensional data on hydrolysation from the BlueCC project

The data was collected from previous hydrolysation experiments on HPLC-SEC instruments. To transform the data while preserving its unambiguous nature, all data points had to be converted into ASCII files. Each individual data point represented a time value and an intensity value at a given wavelength. Wavelengths spanned from 190 nm to 800 nm in 1.2 nm intervals, and time recordings ran from 0 min to 50 min in 0.01 min (0.6 s) intervals. This resulted in data sets with 2.3 million individual data points.
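
As a rough illustration of the scale involved, the following Python sketch derives the grid size from the stated instrument settings; the variable names and the ASCII row format are assumptions, not the project's actual export script.

# Rough sketch of the acquisition grid described above; step counts follow from the
# stated instrument settings, and the ASCII row format is an assumption.
wl_start, wl_end, wl_step = 190.0, 800.0, 1.2      # wavelengths in nm
t_start, t_end, t_step = 0.0, 50.0, 0.01           # retention time in min

n_wavelengths = int(round((wl_end - wl_start) / wl_step)) + 1
n_timepoints = int(round((t_end - t_start) / t_step)) + 1
print(f"{n_wavelengths} wavelengths x {n_timepoints} time points "
      f"= {n_wavelengths * n_timepoints:,} data points")   # on the order of the ~2.3 million reported

# One hypothetical ASCII row: retention time (min), wavelength (nm), intensity
print(f"{12.34:.2f}\t{214.0:.1f}\t{0.0873:.4f}")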

To screen different audio software alternatives 

Many different software alternatives were screened. After generating an overview of their strengths and weaknesses, we concluded that the most appropriate and capable combination was the Highcharts Sonification Studio (HSS) for sound creation and Logic Pro for audio refinement.

To transfer data into the software and ensure compatibility

The data set had to be truncated for several reasons: 2.3 million data points proved impossible to work with in any reasonable audio format; most of the light absorption is centered around 190–280 nm, which is the region where differences between data sets can be seen; and the sample elution time is between 5 and 17 min, meaning that everything outside this window is noise. This reduction led to roughly 100,000 data points, which was still too many for the software to accommodate. The final reduction consisted of averaging across 10 wavelengths at a time, giving a manageable data set of about 11,000 points.
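
A minimal Python sketch of this kind of truncation and averaging is shown below; the array names, shapes, and placeholder data are assumptions, and the real reduction was performed on the exported HPLC-SEC data.

import numpy as np

# Hedged sketch of the truncation and averaging described above.
times = np.arange(0.0, 50.0 + 1e-9, 0.01)            # 0-50 min in 0.01 min steps
wavelengths = np.arange(190.0, 800.0 + 1e-9, 1.2)     # 190-800 nm in 1.2 nm steps
intensity = np.random.default_rng(0).random((times.size, wavelengths.size))  # stand-in for HPLC-SEC data

# Keep only the informative window: 5-17 min elution and 190-280 nm absorption.
t_mask = (times >= 5.0 - 1e-9) & (times <= 17.0 + 1e-9)
w_mask = (wavelengths >= 190.0 - 1e-9) & (wavelengths <= 280.0 + 1e-9)
window = intensity[np.ix_(t_mask, w_mask)]

# Average blocks of 10 adjacent wavelengths to reach a size the sonification software accepts.
n_keep = (window.shape[1] // 10) * 10
reduced = window[:, :n_keep].reshape(window.shape[0], -1, 10).mean(axis=2)
print(window.shape, "->", reduced.shape)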

To create a scientifically valid sonification approach for converting visual data to audio

The scientifically valid sonification was created using a sine wave and the truncated data set alone. This created a soundscape in which the different wavelengths could easily be discerned from each other. The frequencies selected to represent the spectroscopy wavelengths were chosen based on two criteria: (1) equal frequency spacing, mirroring the evenly spaced spectroscopy measurements and calculated averages, and (2) sonic and aesthetic audibility.
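
The sketch below illustrates one possible sine-wave sonification of this kind in Python; the audio frequency band, the playback duration per time step, and the mapping function are illustrative assumptions rather than the project's exact parameters.

import numpy as np

# Illustrative sine-wave sonification of one chromatogram time step.
SAMPLE_RATE = 44100
STEP_SECONDS = 0.5                 # playback time allotted to one chromatogram time step (assumed)
F_LOW, F_HIGH = 220.0, 880.0       # audible band chosen for this sketch (assumed)

def wavelength_to_freq(wl_nm, wl_min=190.0, wl_max=280.0):
    # Map spectroscopy wavelengths to equally spaced audio frequencies.
    frac = (wl_nm - wl_min) / (wl_max - wl_min)
    return F_LOW + frac * (F_HIGH - F_LOW)

def sonify_step(intensities, wavelengths_nm):
    # Render one time step as a sum of sine waves, amplitude-weighted by intensity.
    t = np.linspace(0.0, STEP_SECONDS, int(SAMPLE_RATE * STEP_SECONDS), endpoint=False)
    out = np.zeros_like(t)
    for amp, wl in zip(intensities, wavelengths_nm):
        out += amp * np.sin(2.0 * np.pi * wavelength_to_freq(wl) * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out   # normalize to avoid clipping

# Example: one time step with seven averaged wavelength bands
audio = sonify_step([0.1, 0.8, 0.4, 0.2, 0.05, 0.3, 0.6],
                    [195.0, 207.0, 219.0, 231.0, 243.0, 255.0, 267.0])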

To create a more subjectively aesthetic output that still efficiently communicates the results 

The aesthetically pleasing soundscape was created by taking the original scientifically valid soundscape and adding harmonics that expanded its audio space. In practice, this meant copying the scientific audio output several times, both to extend the duration and to add octave and fifth harmonics that fill the audio range and spectrum with pleasant frequency intervals and chords. This output was about 10 minutes in duration. Four versions of it were loaded into a hardware sampler and performance instrument, the Elektron Octatrack. There we manipulated the sound further with effects and time stretching, which enabled us to prepare the live performance intended to reach our target audience.
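
The following Python fragment sketches the idea of layering octave (2x) and fifth (1.5x) harmonics on top of a fundamental; the mix levels are assumptions, and the actual processing was done in the audio tools described above.

import numpy as np

# Layer a fundamental with its octave and perfect fifth; amplitudes are illustrative.
def add_harmonics(freq_hz, t, base_amp=1.0, octave_amp=0.5, fifth_amp=0.35):
    return (base_amp * np.sin(2 * np.pi * freq_hz * t)
            + octave_amp * np.sin(2 * np.pi * 2.0 * freq_hz * t)
            + fifth_amp * np.sin(2 * np.pi * 1.5 * freq_hz * t))

SAMPLE_RATE = 44100
t = np.linspace(0.0, 2.0, 2 * SAMPLE_RATE, endpoint=False)
layered = add_harmonics(440.0, t)              # A4 plus its octave and fifth
layered /= np.max(np.abs(layered))             # normalize before export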

To reach audiences otherwise uninterested in research results

We were accepted to perform both the scientific and the more subjectively aesthetic soundscape at the Insomnia electronic music and arts festival, where we were accredited as artists. The typical attendee at such events is not the audience we usually meet at scientific meetings or conferences. The engagement seen after the event has led us to believe that this is an important area to explore further in the future.

The performance was recorded and, after some audio refinement, released on all currently available streaming services. This maximized our impact and outreach beyond the location and timeframe of the project.

Consortium

Coordinator:

Runar Gjerp Solstad