Hi everyone, I'm really excited to share some great news with you all! Our community has reached an amazing threshold: you are now more than a million composers on the platform!
Last week, Corentin and I were at the first international Web Audio Conference, at IRCAM in France.
This is a completely new event, since audio in the browser had barely evolved over the previous ten years.
The main topic was the Web Audio API, a project started two years ago by Chris Rogers from Google. Nowadays, people like Chris Wilson (Google), Paul Adenot (Mozilla France), and Chris Lowis are working hard on it.
I won't pretend to explain why this project was started two years ago as well as Chris can. Here is his presentation from Google I/O 2012.
In my own words: before the Web Audio API, manipulating audio in the browser was extremely limited. This picture from Chris's presentation really conveys the enthusiasm of many developers for the project.
What happened during these three days?
People mostly went on stage to talk about their work, research, or products. The first to show up was an experimental project by a SoundCloud engineer, Jan Monschke (who is a great buddy). His project was a bit similar to Flat: a collaborative audio workstation allowing real-time editing. It worked very well, even if he mentioned some latency issues for live collaborative playing.
Later, we shared our thoughts on what we see as the future of music editing, and it was a fascinating conversation.
Some other projects struck me as wonderfully crazy, and I would love to contribute to them if I had more free time. But #FlatFirst.
In the morning, after Paul Adenot's presentation on the current limits of Web audio compared to native audio, we attended eight more concrete project presentations.
The freshest, to my eyes, was Hyperaudio, by Mark Boas. It's a tool for working on synchronized transcripts, video, and of course audio! It's a bit like a video editor, but driven by the transcript. The demo was really impressive: Mark copied and pasted sentences to make audio loops, or searched for a word in a speech recording and played all the matches as a list. Check out their demos if you want to learn more.
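To give an idea of how transcript-driven editing can work, here is a minimal sketch of my own (an illustration, not Hyperaudio's actual code), assuming each transcript word carries start and end timestamps:

```javascript
// Hypothetical word-aligned transcript: each entry maps a word to its
// time range in the audio, in seconds. Tools like Hyperaudio rely on
// this kind of alignment data.
const transcript = [
  { word: "web",   start: 0.0, end: 0.4 },
  { word: "audio", start: 0.4, end: 0.9 },
  { word: "is",    start: 0.9, end: 1.1 },
  { word: "fun",   start: 1.1, end: 1.5 },
  { word: "audio", start: 1.5, end: 2.0 },
];

// Find every occurrence of a word and return its time ranges,
// ready to be scheduled one after another as a playlist of clips.
function findClips(transcript, word) {
  return transcript
    .filter((w) => w.word === word.toLowerCase())
    .map((w) => ({ start: w.start, end: w.end }));
}

console.log(findClips(transcript, "audio"));
// → [ { start: 0.4, end: 0.9 }, { start: 1.5, end: 2 } ]
```

Once you have those ranges, playing them back to back is just a matter of scheduling buffer slices, which is exactly what the Web Audio API is good at.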
We then went to the Mozilla office, and I have to admit, their office rocks!
We spent the afternoon exchanging about projects and initiatives based on the Web Audio API.
One of the most interesting conversations we had was with Stéphane Letz from the GRAME institute.
He works on Faust, a language for building musical modules (synthesizers, filters, effects...). We actually tried to use Faust within Flat to generate realistic instrument sounds without any samples, thanks to digital waveguides. But we have shelved it for now, since it was CPU-intensive and not always reliable for most of our users.
But we didn't give up, and we talked a lot about how to improve performance and avoid critical bugs with Faust. Unfortunately, it's a bit slow to move forward, because some issues involve changes to the ECMAScript spec. In particular, we need a new annotation that lets a very small float value be treated as a float instead of being flushed to zero.
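As an aside, a common DSP workaround for this class of problem (a sketch of the general trick, not Faust's actual approach) is to add a tiny constant into a filter's feedback path, so the recursion never decays into the problematic range of very small values:

```javascript
// One-pole lowpass with an anti-denormal offset in the feedback path.
// Without the offset, a decaying feedback value can fall into the
// denormal range, which is very slow on some CPUs (or gets flushed
// to zero, changing the result).
const ANTI_DENORMAL = 1e-18; // far below audibility

function onePoleLowpass(input, coeff) {
  const out = new Float64Array(input.length);
  let state = 0;
  for (let i = 0; i < input.length; i++) {
    // y[n] = (1 - a) * x[n] + a * y[n-1], plus the tiny offset
    state = (1 - coeff) * input[i] + coeff * state + ANTI_DENORMAL;
    out[i] = state;
  }
  return out;
}

// Feed an impulse: the tail decays geometrically, but the offset
// keeps the feedback value out of the denormal range.
const impulse = new Float64Array(64);
impulse[0] = 1;
const y = onePoleLowpass(impulse, 0.5);
```

The offset is orders of magnitude below anything audible, so it changes nothing musically; it only keeps the arithmetic fast and predictable.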
In the evening we took part in crowd-audio gigs. Most of them involved real-time collaboration over WebSockets to produce music. A bit like Flat, but more live! I definitely loved that.
Here is a link to a video from @norbertschnell: Drops Concerts
We took part in an informal plenary session with the Audio Working Group. My feeling from that session is that there are still a lot of challenges, but they are well identified.
During the afternoon we came back to the Mozilla office to hack on some Web audio stuff. On our side, we tried to add infinite sustain to Flat's samples, to avoid glitches when a sample is too short. The main challenge was avoiding clicks in the looped section, so it sounds as natural as possible. We are actually close to our goal, but it's not perfect yet and will need some more work before landing.
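For the curious, one simple way to reduce looping clicks (a sketch of the general technique, not Flat's actual implementation) is to snap the loop points to rising zero crossings of the waveform, so the signal level doesn't jump when playback wraps around:

```javascript
// Find the first rising zero crossing at or after `index` in a mono
// sample buffer: the last non-positive sample before the signal goes
// positive. Snapping loop points to such crossings avoids an amplitude
// jump (an audible click) when the loop wraps.
function nearestZeroCrossing(samples, index) {
  for (let i = index; i < samples.length - 1; i++) {
    if (samples[i] <= 0 && samples[i + 1] > 0) return i;
  }
  return samples.length - 1; // fallback: end of buffer
}

// Example: a sine wave at 8 samples per period.
const buf = new Float64Array(32);
for (let i = 0; i < buf.length; i++) buf[i] = Math.sin((2 * Math.PI * i) / 8);

const loopStart = nearestZeroCrossing(buf, 0);
const loopEnd = nearestZeroCrossing(buf, 20);
```

In a real Web Audio setup, you would then convert these indices to seconds (dividing by the buffer's sample rate) and assign them to an `AudioBufferSourceNode`'s `loopStart` and `loopEnd`. For coarse material you would typically combine this with a short crossfade over the seam.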
The other teams did some really interesting work, and one even produced the first draft of a Web audio specification. I can say that a special interest in combining Web audio with WebSockets grew out of the previous night's gigs.
I didn't go into too much detail about everything we saw there; that's not the main purpose of this article. What I want to say is much simpler: in only three days, I finally understood what Corentin spends all his time on!
Most importantly, I understood the main purpose of the Web Audio API project. This is crazy and amazing! We're just at the beginning of this journey. Now I am definitely convinced that our audio rendering in Flat will get better and better over time. By the way, if you have some free time, give us a try :)
Have a great day!