“In one way or another, I’ve always been involved in music not just as an art form, but as a form of research,” explains Atau Tanaka, Professor of Media Computing at Goldsmiths, University of London.
In anticipation of the upcoming European Research Music Conference, taking place at the Universitat Pompeu Fabra in Barcelona on June 11-13, I had the opportunity to sit down with Professor Tanaka for an interview about his research into digitising humanity’s embodied relationship with music and sound.
I also took this opportunity to discuss how funding from the European Research Council made his research possible, and how the science behind our relationship with music and sound will have a lasting impact on our society.
To begin, I asked how he first became involved in the ERC, and what incentives inspired him to apply for funding in the first place.
“The funding this project received from the ERC provided me with a path for discovering situations and opportunities in which research could serve the making of music, and for thinking of music as something that could somehow once again become new, never before heard or experienced.”
Professor Tanaka has worked in this area in several contexts, including early mobile music research at Sony in Paris in the early 2000s. The lab in Paris had a strong culture of applying for EU grants under Framework Programme 7 (FP7), the predecessor of Horizon 2020.
“That’s where I got to know about EU collaborative projects,” he explains. “I applied for an ERC grant for the first time in 2008, when I was a professor at Newcastle University. Interdisciplinary research was a huge buzzword at the time, as was practice-based research, and therein lies the challenge of balancing scientific research methods with music, a creative yet highly technical subject within the humanities.
The ERC seemed apt, because it was PI-focused, and because it really demanded vision, and a kind of risk where you could take a subject area and push it beyond its normal boundaries. However, it took until 2012 for my grant to be awarded, which is actually an important message to anyone applying for funding for their research.
The only way to write a good grant is to get one rejected, so to those writing Starting Grant applications, my advice is not to wait until your eligibility is about to run out. Get started right away, and understand that getting rejected is quite often part of the process.
While the initial idea for a proposal must of course be rigorous and credible, one incentive to apply for funding from the ERC is its spirit of curiosity and risk-taking. A researcher needs to be able to think about where they might be in five years, and to talk about their goals with confidence, but they are still encouraged to leave open the possibility for discovery and the unknown.
Underlying that, there must be some kind of methodology, approach, and process that is solid and could eventually guarantee results. I think that is the nature of fundamental research, one that is currently being challenged and threatened in an increasingly neoliberal economy.
The ERC is a great scheme in that it continues to support fundamental research, in contrast to the shift from FP7 to Horizon 2020 and the increasing pressure for business-oriented innovation research to take over. Those programmes support economic development rather than knowledge, which isn’t necessarily a bad thing, but allocating funds meant specifically for research and knowledge in that way feels somewhat misplaced to me. I’m disappointed by that, as I do not find that industrial pressures are sufficiently curiosity-driven, which should be the basis of scientific research.”
To follow on from our discussion of the benefits of ERC-funded research, I asked how Professor Tanaka’s recently completed project, and its currently funded follow-up, would be represented at the upcoming European Research Music Conference in Barcelona.
“My recently completed ERC project was called Meta Gesture Music, and ran from 2012 to 2017. It was funded under the SH5 “Cultures and Cultural Production” panel of the ERC, and looked at three things: sound in culture, the body’s interaction with digital technologies, and cutting-edge machine learning technologies, all within the context of new musical performance.
So what does this mean, exactly? Essentially, it means looking at how we ourselves have a social and embodied relationship with sound in everyday life, and how these interactions condition our musical experiences.
Today, we have sophisticated human interface technologies designed to detect gesture, corporeal action, and body physiology as inputs to an interactive musical system, or ‘a new musical instrument’. We are able to be expressive with these technologies, as we would be with any traditional musical instrument.
The challenge was to rediscover the rich relationships we have with real musical instruments in the analog world, and to create digital technologies with that same kind of richness.
That project ended in 2017 with the publication of the usual academic papers and book chapters, but also with a concert in London and the release of an album on CD, Spotify, and iTunes. As part of that research, I created musical artefacts, recordings, and performances, which I consider research outputs as important as the traditional publications.
As of May 1st, I’m running an ERC Proof-of-Concept project. This is a shorter, eighteen-month project that takes the fruits of the bigger, previous project and explores possible commercialisation channels. Essentially, it takes the last five years’ worth of research and applies it to new and different kinds of products. Specifically, we are looking to produce not just a tool for music production, but a physiological instrument for gestural musical performance.
To give an example, the theremin is an early electronic instrument played by moving the hands in the air, and a well-known touchstone in the history of electronic music. I’m now developing a new instrument that directly accesses the electrical impulses of muscle contraction in the body as a musical input.
It is worth noting that in the time I’ve been doing this research, this technology has hit the mainstream and entered public consciousness, but because it’s generally product-driven, we don’t always understand the deeper individual human creative potential, nor endeavour to make each individual musically expressive. Now, with digital integration, we can take concepts from our basic research and make them more accessible to the general public.
Traditional instruments, for example, are notoriously difficult: mastering one takes years of practice, while digital instruments can be made easier to play. We want to navigate this space to create something that remains rewarding should someone become a digital musical instrument virtuoso, so to speak. In this way, I wonder whether embodied interaction with digital media can become more palpable, providing a richness of interaction that is easy to access but continuously rewarding.
Our proof of concept will be a bio-musical instrument. Building on the muscle interfaces from the previous project, it will be a ‘turnkey’, easy-to-use musical instrument for beginners, virtuosi, and eventually, for people with disabilities.
I originally became interested in this area of research when I was a student at Harvard in the 1980s, right on the cusp of the transition from analog to digital. Music was the first medium to go digital, before image and video.
In the early 80s, musical interfaces were being developed that would prove revolutionary. Even in computing, the mouse, for example, is an interface we now use every single day, but at the time the concept was new.
Musical instruments demand interaction through different processes, for example blowing, bowing, or strumming. Taking a cue from computer interface devices, I wanted to see if we could transfer the creation of sound, and eventually music, to the digital space. Since then, my research vision has been driven by visceral interaction with digital sound.
Today, of course, software exists for the creation of digital music, but the human element is somewhat removed. Virtual reality is trending now, but it is in its second wave, so to speak. The first wave was in the early 1990s, the same time I started my research into the real-time performance of digital music.
During that first wave of VR, I became interested in the concept of a virtual musical instrument. Ironically, this meant not taking away the real aspect, but making instruments with new technology that still went back to the essence of our bodily relationship with sound.”
“The ERC has been very good at reminding us that opportunities exist to apply for a second grant, and for follow-on Proof-of-Concept funding. Obviously, though, this must be distinct from the fundamental research, translating it into a commercial possibility, which is what I am working on now.”
Turning to the upcoming event in Barcelona, now the question becomes how this research fits in with that of the other grantees presenting and performing the outcomes of their own projects.
According to Professor Tanaka, digital music technologies have exploded in the last twenty years, especially with the arrival of Digital Audio Workstation products, and consumer services like Spotify. This means that the number of people working in and on music is vast, and the music industry has now gone almost completely digital.
“The research community in this area is a tight-knit family, and I’m lucky to know a number of the presenters. For example, Professor Xavier Serra at the Universitat Pompeu Fabra, who has organised the conference, has focused on analysing and re-synthesising sound. Professor Serra used his ERC grant to apply his signal processing and musical analysis techniques to music from different cultures around the world.
I think that’s a really interesting demonstration of how low-level signal processing can be mapped to a high-level, very exploratory cultural aspect, which the ERC allows.
Although I’m looking forward to all of the presentations, one new grantee I’m excited about is Pierre Alexandre Tremblay from the University of Huddersfield. A composer and electric bass player, Pierre has just started an ambitious ERC project called FluCoMa, which explores ‘new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process.’ It is also a great example of a grantee bringing their experience as a performer to an ERC-funded project, which is, of course, very close to my heart.”
The end of our discussion focused on the general public, who are also invited to take part in the event and witness the performances. As much of the research funded by the ERC follows one common theme, the betterment of humanity, it can sometimes be difficult for the general public to feel that progress is actually being made, and that the research they fund through their taxes has a meaningful, tangible impact on their lives.
In a results-driven society, this has proven a difficult hurdle for science to overcome, even with the collective intelligence of the world’s researchers and the funding that pours in from schemes such as the ERC. However, Professor Tanaka is optimistic.
“I think that the fundamental research we carry out does eventually have an impact on our daily lives, but it can take 20 to 30 years. The way we live today, with the technology we have, our environmental awareness, our quality of life, all these factors have their roots in fundamental, scientific, and humanistic research.
With music, that connection is even clearer. Music is part of everyday life, an art form that is universal and accessible, yet undeniably difficult at the same time. Bridging these worlds is the act of bringing research out of the lab and into the public domain.
Popular music today is really interesting in that the sounds being produced would have been considered experimental just a few years ago. Our sonic palette is growing, and the place of electronics within it is now pervasive and humanised.
I do research, and my music is highly experimental, but to see the things we discover being picked up by young musicians in pop and other mainstream music continues to amaze me.
I feel that the future has arrived, and that’s a nice thing, but it can actually be suffocating. To continue to be inventive and daring in our explorations when so many things are now possible is a real challenge and responsibility for young researchers: to go beyond. And within all that, we must also take stock of the human and ethical implications of things like privacy, data exploitation, and human rights.”
Atau Tanaka is Professor of Media Computing at Goldsmiths, University of London. He will be presenting the outcomes of his ERC-funded project, Meta-Gesture Music: Social, Embodied, Interactive Musical Instruments, at the European Research Music Conference in Barcelona on June 11-13.