I wrote an article on the MediaRecorder API for .net magazine issue 283.
This isn’t an exhaustive article, just a quick preview of what the API can do, as part of the Standards section that tracks new and upcoming APIs. If you want to learn more about the MediaRecorder API, there’s the much more detailed Hacks article I wrote: Record almost everything in the browser with MediaRecorder, or you can watch one of the talks I’ve given on the subject, like this one.
There’s been much talk of Web Animations for a long time, but it wasn’t quite clear what that meant exactly, when we could get that new thing working in our browsers, or whether we should actually care about it.
A few weeks ago, Belén, Sam and I met to brainstorm and build some demos to accompany the Firefox 48 release with early support for Web Animations. We were given very little time, but we reckoned we could build at least a handful of them.
We tried to look at this from a different, pragmatic perspective.
Most graphics API demos tend to be very technical and abstract, which tends to excite graphics people but means nothing to web developers. The problem web developers face is not making circles bounce randomly on the screen; it is displaying and formatting text on screen, writing CSS, and working with designers who want to play with the CSS code to give it as much ‘swooosh’ as they envision. Any help we can give these developers to evaluate new APIs quickly and decide whether they will make their work easier is going to be highly appreciated.
CSS’s syntax for animations has been with us for a while. We know how it works, there is good support across browsers and there is tooling to generate keyframes. But once the keyframes are defined, there’s no going back. Modifying them on the fly is difficult unless you start messing with strings to build styles.
This is where the JS API proves its worth. It can tap into live animations, and it can create new animations. And you can create animations that are very tedious to define in CSS, even using preprocessors, or just plain impossible to predict: imagine that you want custom colour animations starting from an unknown colour A and going to another unknown colour B. Trying to pregenerate all these unknown transitions from the beginning would result in an enormous CSS file which is hard to maintain, and would not even be as accurate as a newly created animation could be.
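As a rough sketch of what that looks like (assuming a `.box` element on the page — the selector and colours here are made up for illustration), you can build keyframes at runtime from whatever colours you happen to have, and hand them straight to `element.animate()`:

```javascript
// Build keyframes at runtime from two colours we only learn at this point.
// This part is plain JS, so it works for any pair of values.
function colourKeyframes(from, to) {
  return [
    { backgroundColor: from },
    { backgroundColor: to }
  ];
}

// Browser-only part, guarded so the snippet is inert elsewhere.
if (typeof document !== 'undefined') {
  var box = document.querySelector('.box'); // assumed element
  var animation = box.animate(
    colourKeyframes('#e91e63', '#3f51b5'),
    { duration: 1000, iterations: Infinity, direction: 'alternate' }
  );
  // The returned Animation object is live: you can pause() it,
  // seek with currentTime, or change its playbackRate on the fly.
  animation.playbackRate = 0.5;
}
```

No CSS file was harmed: the keyframes exist only in JS, which is exactly what makes the unknown-colour-A-to-unknown-colour-B case tractable.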
Finally, another cool feature of these native animations is that you can inspect them using the browser developer tools: you can scrub back and forth, change their speed to slow them down or speed them up, etc. You cannot tap into JS-library-defined animations (e.g. tween.js) unless the library itself provides that kind of support, such as its own inspector.
Essentially: Web Animations cheat by running at browser level. This is the best option if you want to animate DOM elements. For interpolating between plain values (e.g. the x, y, z properties of a three.js object’s position vector) you should use tween.js or similar.
The full API isn’t implemented in any browser yet, but both Firefox and Chrome stable have initial support already. There’s no excuse: start animating things already! I’m also excited that both browsers shipped this early support more or less simultaneously—makes things so much easier for developers.
With mega thanks to Belén for pushing this forward, and to Sam for going above and beyond his internship duties to help us 🙌🏼
I was sorting out a bunch of old digital files and folders and was quite amused at the realisation that it seems like I have written some sort of tracker module converter or reader in pretty much every programming language I’ve ever used.
NOTE: a tracker is a music creation program, and a module is what their files are called—they’re tightly optimised, compressed binary files that use all sorts of tricks to save data.
Other people write To-Do apps to learn new languages, and I guess I write tracker module conversion tools? 🙃
So, in order of creation, as far as I can remember:
2005: C++: IT to events (in XML)
I wrote a utility in C++ to pre-process and parse pattern data from Impulse Tracker’s IT files, outputting it as an XML file with event data; the C++ executable would then use tinyxml to load that file. I am not entirely sure why I didn’t just incorporate this event processing and reading directly into the executable instead of doing it at build time—I think either I didn’t have enough time to finish the exporter and ended up generating the important events with Audacity or something like that, or my parser was very slow and I didn’t want the demo to take ages to load.
I can’t find this code, but I remember it was very buggy and I wasn’t progressing very fast. I wrote about it back then! There was a lot of confusion and unwritten knowledge around the tracking format that I only finally understood by asking other people for help (e.g. how to calculate how much time each row and each ‘tick’ takes). Also, lots of segmentation faults and time spent thinking about memory and string allocations! I was using the wrong tool for the job.
GitHub and Google Code didn’t even exist back then and SourceForge was terrible, so this is probably collecting digital dust in some folder.
2007: Ruby: XM to events
Apparently I also wrote a FastTracker XM to timed events utility in Ruby, because I was going to work with a musician who used XM (hi JosSs!) and I was very into Ruby as a server-side and general utility language back then—the demo was going to be built in Flash, probably with ActionScript 2.
I never actually finished the demo, so the exporter is quite incomplete and I haven’t bothered publishing it, but it was interesting to use Ruby to process binary data from a file in a sort of stream-like way.
In the absence of a demo, here’s my favourite song from JosSs:
2008: PHP: IT to events (in a .h file)
Another iteration of this event list idea was to parse the data with PHP and generate a song.h file containing a big data array to be used in 64k intros, like this one.
I can’t find the code, but if I remember correctly, the nicest aspect of building this was that I could use some sort of text template to generate the .h file, and since string manipulation is way easier in PHP than in C, I got where I wanted to be really quickly and without tons of segfaults.
2009: PHP: XRNS to MOD
I found another twist on my data conversion efforts: xrns2mod takes a Renoise song and converts it into an Amiga MOD file.
This is not a trivial conversion: Renoise files are XML based, and the player has evolved significantly beyond the MOD format—it can have many channels with more than one track per channel, arbitrary-length patterns, and many instruments with more than one sample per instrument, whereas the most basic MOD files only have 4 tracks, use a limited note range, different base frequencies, fewer effects, and lower quality samples. So it’s not just moving numbers from one place to another.
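To give a flavour of the kind of narrowing such a converter has to do, here’s a sketch of a constraint check (this is not the actual xrns2mod code; the note-number scheme is hypothetical, with 36..71 standing in for the three octaves a basic MOD can represent):

```javascript
// Hypothetical pre-flight check for a Renoise→MOD conversion: a basic
// MOD file has 4 channels and roughly a three-octave note range, so
// anything outside that has to be dropped, folded, or transposed.
function fitsInMod(song) {
  var problems = [];
  if (song.channels > 4) {
    problems.push('too many channels: ' + song.channels);
  }
  song.notes.forEach(function (note) {
    // 36..71 is an assumed numbering covering MOD's C-1..B-3 range.
    if (note < 36 || note > 71) {
      problems.push('note out of MOD range: ' + note);
    }
  });
  return problems;
}
```

A real converter then has to decide what to do about each problem (transpose the note, merge tracks, resample…), which is where most of the actual work lives.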
I think I wanted to enter some MOD contest but I found all the MOD trackers quite uncomfortable compared to Renoise, and I wanted to have the best of both worlds: ease of use and also entering the contest. It’s incomplete and quite hardcoded in places, but it was already converting files quite decently the last time I ran it, so I put it on GitHub, just in case it maybe helps someone. Maybe.
Of course, minutes after I published it I found XRNS2XMOD, a similar tool which seems more complete as it also exports to XM in addition to MOD, but it requires Mono and whatnot.
~2010: Python + Lua: XRNS to events (in a .lua file)
I wrote an exporter in Python for generating a list of timed XRNS events and using them in a demo for ~~incredible synchronisation~~. The demo ran on Luisita, my programmable graphics environment built with Lua and C++, and since the exporter generated a .lua file, it was very easy to load it all into an array-like object at runtime.
This was used in this demo — as you can see the visuals are tightly related to the music:
Funnily enough, I remember being frustrated at not having tweening in the Lua version; it would be ‘easy’ to add to the web version using tween.js. But it’s not going to happen.
2013: node.js: XRNS to JSON
I also wrote a node.js module for extracting XRNS data into something the browser could use more easily (i.e. JSON). I guess not many people are interested in it, because three years later I haven’t heard of anyone using or improving it, but if you want, you can install it with npm:
npm install renoise
I used this to generate the events and note data for the slides for my JSConf.EU 2013 talk in which I programmed virtual instruments to play a tracked song in the browser, built their interfaces with Web Components and also used hardware controlled/read with Open Sound Control and node.js! Yay to mixing everything together! 😎
Here’s the video starting with the actual demo:
Well, I hope you have enjoyed this tour, and remember to always use the best tool for the job 😉
A couple of years ago I had to do some analysis of popular apps (basically peeking inside their guts, so to speak). One thing we found was that Instagram used a lot of native libraries for dealing with video and audio.
Rem suggests that Instagram could be almost 100% a website nowadays, rather than the strange hybrid it is. I would love to see that, but the web platform still needs a lot of features we don’t have yet, for example:
reliably being able to select which camera to use, and also being able to focus
encoding into a given video format, in a performant way that doesn’t drain your battery
manipulating videos in real time, sounds included
accessing the ‘photoroll’ easily
As usual, video is a huge issue on the web. Encoding is complex and expensive in battery terms, but thanks to MediaRecorder it is becoming more accessible. Still, the formats MediaRecorder outputs (vp8/webm) are only viewable in browsers; if you encode a video in the browser, you have to convert it locally to something else like mp4 if you want to upload it to sites like Twitter, and unless you’re tech-savvy and have installed VLC or similar, you can’t even watch the video you generated on your desktop with your system’s default video player.
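For reference, a minimal MediaRecorder sketch looks like this (assuming a page with camera permission; the MIME-type helper takes the support test as a parameter, since at the time only vp8/webm output was a safe bet):

```javascript
// Pick the first MIME type the browser says it can record to.
// isSupported is passed in (MediaRecorder.isTypeSupported in a browser).
function pickMimeType(candidates, isSupported) {
  for (var i = 0; i < candidates.length; i++) {
    if (isSupported(candidates[i])) return candidates[i];
  }
  return null;
}

// Browser-only part: record a few seconds of camera video into a Blob.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
    var type = pickMimeType(
      ['video/webm;codecs=vp8', 'video/webm'],
      function (t) { return MediaRecorder.isTypeSupported(t); }
    );
    var recorder = new MediaRecorder(stream, { mimeType: type });
    var chunks = [];
    recorder.ondataavailable = function (e) { chunks.push(e.data); };
    recorder.onstop = function () {
      var blob = new Blob(chunks, { type: type });
      // blob is a webm file: fine in a browser, but you would still
      // have to transcode it to mp4 for most other destinations.
    };
    recorder.start();
    setTimeout(function () { recorder.stop(); }, 3000);
  });
}
```

The recording part is genuinely easy now; it’s what you can do with the resulting webm file afterwards that remains the sticking point.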
Similarly, handling video and audio post-processing in real time is possible with WebGL + Web Audio, but recording it is only supported in Firefox. Chromium/Opera won’t give you a stream from a canvas, or record a stream from a Web Audio context either.
And being able to access the photoroll on mobile, or a directory on desktop, would require APIs for exposing directory listings so the app could do things such as building thumbnails of the images—but those APIs aren’t available.
It’s not impossible—it’s “just” a matter of implementation and adoption.
If apps such as Instagram were 100% web based, and had a reliable chance of working reasonably well on every device, there would be no distinctions between device operating systems, and we could just focus on the device features (price, camera quality, battery life, storage, design…). This would be huge for users, and quite probably for app makers too, who would only need one team to build the app, not one per operating system.
Writing this with a very slow connection in Thessaloniki, Greece (ahhh, hotel internet).
I am very excited that my Greek studies are finally coming in handy and I can read the street signs even though they’re written in the Greek alphabet. It reminded me of last year, when I was in Hong Kong and couldn’t read the signs… except now I can! Yay!
I arrived yesterday after a terribly early flight–though by the time I landed it was well past noon because of the +2h timezone.
Kostas, one of the organisers of DEVit, helped me get a SIM card, and he was OK with me telling the world that it took us an hour and a half to get it working with data, because of all the processes they require. I think we queued three times and still had time to go for lunch and come back. We also had to enable the Greek keyboard on my phone in order to send a text to the operator containing the Greek ‘y’ (which is not the same as the English keyboard ‘y’), and then it sent me 5 texts in response, confirming whatever the ‘y’ meant (I mentally read it as “why?”) with lots of ALL UPPERCASE Greek paragraphs.
I will just say that in London you can turn up at a shop and buy a SIM with pre-paid credit in 5 minutes; if I didn’t come from Spain, where these processes are similarly ridiculous, I would have been wildly amused.
I re-learnt the most important sentences:
παρακαλώ = parakaló = please
ευχαριστώ = efcharistó = thanks
and last but not least: χαχαχα = hahaha = jajaja 😃
This is also my first time in Greece so I don’t quite know what to expect, but so far it seems welcoming, nice and sunny, full of interesting things to look at, and also very busy. I did try a gyros and I’m looking forward to trying out the Greek coffee: “cooked!”. Will report.
Side note: for a Spanish speaker, hearing spoken Greek in the background is funny: your brain thinks it can understand it because the cadence is very similar to Spanish and many Spanish words are of Greek origin, but then it starts to pay attention and realises it actually can’t.
I’ll attend the NodeJS/Ruby synergy meetup this evening… be sure to say hi and bring all your synergies and thoughts out of the box!
Next week I’ll be in Denmark for AtTheFrontEnd, and the following week will see me in Norway for WebRebels. I’ve not been to either country before, so those will be another two firsts for me, but if they’re anything like Sweden or Finland I’ll be more than happy.