Music Notation and Data Visualization

I was learning a song from a YouTube video today, and found myself rewinding to watch the same few chords being played multiple times. The student in me started writing the chords down, except I used the “letter names” of each note instead of the usual “music notation” on a staff.

A notebook with the letter names of notes written on them lying on top of a Yamaha keyboard

As I was playing along to the video, I realized that I process this type of notation a lot slower than the “normal” music notation, where notes are represented by circles on a staff, and I can “guess” at the type of chord I’m playing just by the shape of the chords on a staff.

The letters above were arranged from low notes to high notes, but because the notation made no use of distance, mapping the interval between notes to physical distance on the keyboard, I had to read each letter individually. That’s a lot more cognitive work than approximating pitches from a visual map.

A certain clustering of notes tells me to expect a “major” chord, and another tells me to expect dissonance, and having played for long enough, these reactions happen almost intuitively.
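That pattern recognition can be made explicit. Here’s a minimal sketch (the function names, the note spelling, and the root-position assumption are all my own) that maps letter names to semitone positions, so the “distance” between notes becomes a number the way staff notation makes it a visible gap, and then checks for the major-triad clustering described above:

```python
# Map note letter names to semitone positions within an octave.
# (Sharps only, for brevity; a real version would handle flats too.)
NOTE_TO_SEMITONE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                    "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def semitone_intervals(notes):
    """Return the semitone gaps between consecutive note names,
    assuming the notes were written low-to-high."""
    positions = [NOTE_TO_SEMITONE[n] for n in notes]
    absolute = [positions[0]]
    for p in positions[1:]:
        # Each note is above the previous one, so wrap up an octave as needed.
        while p <= absolute[-1]:
            p += 12
        absolute.append(p)
    return [b - a for a, b in zip(absolute, absolute[1:])]

def looks_like_major_triad(notes):
    """A root-position major triad stacks a major third (4 semitones)
    under a minor third (3 semitones)."""
    return semitone_intervals(notes) == [4, 3]

print(semitone_intervals(["C", "E", "G"]))       # [4, 3]
print(looks_like_major_triad(["C", "E", "G"]))   # True
print(looks_like_major_triad(["C", "D#", "G"]))  # False (minor triad)
```

The intervals are the “shape” I read off the staff at a glance; the bare letter names on my notebook page force the reader (me) to recompute them note by note.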

I’m a decent sight-reader, but I rarely read every note on a staff anymore. Instead, I’ll look at the first couple of notes and judge the melody lines by their altitude: whether the notes seem to be moving up or down. The representation of the notes lets me make quick inferences without decoding each one.

At work, we do a lot of exploratory data analysis and visualization, and on occasion we’ll create custom representations of complex datasets. The connection between the two just occurred to me today: in the same way I use “patterns” in a symbolic representation of notes to make rapid inferences about where to place my hands and how to move around the keyboard, when we design dashboards for high-level overviews, we’re enabling that same kind of inference.

The flip side of visual representations assisting us with quick inferences, though, is that the representations we choose shape and constrain the narratives we’re able to access most easily. When I’m looking at the “shape” of a line of music, I guess approximately where it sits on the piano, what “next step” my fingers should take, and what sound to expect.

However, when a piece of music has highly chromatic passages (lots of notes moving in close proximity, deviating from what I expect them to sound like) and tightly clustered chords (whose visual clutter is harder to parse quickly), there are times I consider alternate representations. In fact, even if I don’t fully build one out, that’s usually how I grok a difficult passage: by notating it differently, and annotating the score with symbols and words that tell me what to expect and link it to concrete past experiences.

Who knows – I might build this out into a longer series of blog posts considering data visualization and representation through the lens of a musician. For now, I think I’ll put up a post-it with a couple of music notes on it, to remind me of the importance of visualization, representation and mapping when I’m lost in the code and statistics of my data.