
Data Sonification with JFugue and SpringMVC

Build a service to play data in real-time using SpringMVC, JFugue and WebAudioFont

Photo by Denisse Leon on Unsplash

We’re all accustomed to visualising data with images and charts, whether it’s the price of a company’s stock going up and down or the number of customer sales in a quarter. We are used to identifying patterns along the X and Y axes of a graph.

What about presenting data in ways other than images? Data Sonification, or auditory display, is the term for transforming data into sound. It arguably became a research field in 1992, when the book ‘Auditory Display’ was released as the proceedings of the International Conference on Auditory Display (ICAD).

Although the definition of sonification looks simple at first, it can blur into other things that produce sound. Not everything that emits sound counts as data sonification. Take music: is music an example of data sonification? As an analogy, music can be compared to sonified application response times in the same way a painting can be compared to a stock-market chart. Data sonification is all about conveying the data underneath; the sound must clearly allow a correct interpretation. Music and painting, in contrast, are more subjective, with more layers of interpretation, and focus on how the viewer or listener is inspired.

Thomas Hermann (2008) proposes the more elaborate definition below:

A technique that uses data as input, and generates sound signals (eventually in response to optional additional excitation or triggering) may be called sonification, if and only if

  • The sound reflects objective properties or relations in the input data.
  • The transformation is systematic. That means there is a precise definition of how the data (and optional interactions) cause the sound to change.
  • The sonification is reproducible: Given the same data and identical interactions (or triggers), the resulting sound has to be structurally identical.
  • The system can intentionally be used with different data, and also be used in repetition with the same data.

Sample Service

The service could be used by a support team that needs to monitor infrastructure or a network in real time. It reads the log files of another application (the one being monitored), converts certain values and events into notes, and streams the sound to the browser / web-service caller.

The log file content used in this sample is for demonstration purposes. It contains only response times in milliseconds and whether an Exception or Timeout occurred. A real-world log would contain many other things, but in the end one could parse it the same way to extract the required data.
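The parsing step can be sketched as below. The log format (lines carrying a `responseTime=` field, or containing "Exception" / "Timeout") is an assumption for illustration, not the service's actual format:

```java
import java.util.OptionalInt;

// Sketch of the log-parsing step. The line format is an assumption:
// e.g. "10:15:02 INFO responseTime=1250" or error lines containing
// "Exception" / "Timeout".
public class LogLineParser {

    /** Extracts the response time in ms, if the line carries one. */
    static OptionalInt responseTimeMs(String line) {
        int idx = line.indexOf("responseTime=");
        if (idx < 0) return OptionalInt.empty();
        int start = idx + "responseTime=".length();
        int end = start;
        while (end < line.length() && Character.isDigit(line.charAt(end))) end++;
        return end > start
                ? OptionalInt.of(Integer.parseInt(line.substring(start, end)))
                : OptionalInt.empty();
    }

    /** Flags events that should play a dedicated warning note. */
    static boolean isAlertEvent(String line) {
        return line.contains("Exception") || line.contains("Timeout");
    }

    public static void main(String[] args) {
        System.out.println(responseTimeMs("10:15:02 INFO responseTime=1250")); // OptionalInt[1250]
        System.out.println(isAlertEvent("10:15:03 ERROR SocketTimeoutException")); // true
    }
}
```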

The logic for converting each value and event into a sound is quite simple. Maybe, just maybe, because I’m far from being a musician and wasn’t feeling very creative. But I believe it is effective and follows the points described in the definition above.

These are the features in more detail:

  • Stream base64 representation of audio as the response from web service.
  • Play it in the browser
  • Convert response times and certain events to specific notes. High response time = high numeric note in the piano. Low response time = low numeric note. Exception and Timeout play specific sounds/notes. The intent is to call for attention to such events.
  • Normalize response times to the range 50–127. This range corresponds to the numeric note values one can play. Example: if 5000 ms is the maximum response time, 5000 maps to note value 127. See Figure 2 below.
  • Read the application log file as it is written.
Figure 2. Numeric note values
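The normalisation above is a simple linear mapping. A minimal sketch, assuming 5000 ms as the maximum expected response time (the method and constant names are mine, for illustration):

```java
// Minimal sketch of the response-time-to-note normalisation:
// linearly map 0..5000 ms onto the 50..127 note range.
public class NoteMapper {
    static final int MIN_NOTE = 50;        // lowest note value we play
    static final int MAX_NOTE = 127;       // highest MIDI note value
    static final int MAX_RESPONSE_MS = 5000; // assumed maximum response time

    /** Maps a response time in ms onto the 50-127 note range, clamping outliers. */
    static int toNote(int responseMs) {
        int clamped = Math.min(Math.max(responseMs, 0), MAX_RESPONSE_MS);
        return MIN_NOTE + (clamped * (MAX_NOTE - MIN_NOTE)) / MAX_RESPONSE_MS;
    }

    public static void main(String[] args) {
        System.out.println(toNote(5000)); // 127: slowest response, highest note
        System.out.println(toNote(0));    // 50: fastest response, lowest note
        System.out.println(toNote(2500)); // 88
    }
}
```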

The service uses the following technologies/frameworks: SpringMVC, JFugue and WebAudioFont.

Here are the main parts of the code.

Controller:
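The controller’s core step can be sketched as follows. To keep the sketch self-contained it builds the MIDI sequence with the standard `javax.sound.midi` API rather than JFugue, and omits the SpringMVC annotations (in the real service this logic would sit inside a `@GetMapping` handler and the `Sequence` would come from a JFugue `Pattern`); the class and method names are hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.util.Base64;
import javax.sound.midi.MidiEvent;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Sequence;
import javax.sound.midi.ShortMessage;
import javax.sound.midi.Track;

// Sketch of the controller's core step: turn numeric note values into a
// MIDI sequence and return it base64-encoded for the browser to play.
public class SonifyEndpoint {

    /** Encodes the given MIDI note values as a base64 type-0 MIDI file. */
    static String playNotes(int... notes) throws Exception {
        Sequence sequence = new Sequence(Sequence.PPQ, 4);
        Track track = sequence.createTrack();
        long tick = 0;
        for (int note : notes) {
            track.add(new MidiEvent(
                    new ShortMessage(ShortMessage.NOTE_ON, 0, note, 90), tick));
            track.add(new MidiEvent(
                    new ShortMessage(ShortMessage.NOTE_OFF, 0, note, 0), tick + 4));
            tick += 4;
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        MidiSystem.write(sequence, 0, out); // type-0 standard MIDI file
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }

    public static void main(String[] args) throws Exception {
        // e.g. two normalised response-time notes followed by an alert note
        String base64Midi = playNotes(88, 127, 50);
        System.out.println(base64Midi.length() + " base64 chars streamed");
    }
}
```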

Index.html

Conclusion

Data sonification is a great alternative to images for visualising data. When I first read about it, I admit, I was wowed and jumped straight to the internet to do more research. I had never heard of it before.

However, as I read and thought more, I began to question its practicality. Is it beneficial? When does it really justify its implementation?

In the two cases below, I think it’s not just a gimmick but a necessity; these are cases where I could see myself needing such technology (the second to a lesser extent):

  1. Visually impaired users.
  2. Highly critical systems/networks. Especially if the support team has other tasks apart from monitoring, the amount of data can be overwhelming. It is not practical to keep staring at a screen analysing the data in real time. Alerts/alarms often fire after the fact, generate false positives, and flood the email inbox.

The research field is still young and has a lot of room to evolve. As it is studied further, we’ll see more situations where it applies effectively.
