Robot Media's welcome party video

(head over to the YouTube page for the 1080p version!)

Robot Media just moved to a shiny new office in Barcelona's Eixample, and we threw a welcome party to celebrate with friends. While discussing the party "features", we had this crazy idea of assembling a video out of many other videos of robotic stuff. I immediately thought that was a job for ffmpeg, as cutting the videos manually would have been super tiring and we didn't have much time for it.

So I put together a quick script that read a text file with a list of collectively sourced YouTube URLs, used youtube-dl to download the videos, called ffmpeg to extract three-second slices and resize and pad them as required (to produce a Full HD version), and finally appended them all together with the amazing MP4Box.
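For the curious, here's a rough sketch of that pipeline, reconstructed from memory rather than copied from the actual script. It assumes a urls.txt file with one URL per line, takes each slice starting at an arbitrary ten-second offset, and expects youtube-dl, ffmpeg (a reasonably recent one, for the force_original_aspect_ratio option) and MP4Box to be on the PATH:

```python
#!/usr/bin/env python3
"""Sketch of the mash-up pipeline: download, slice, normalize, concatenate.

The file names, slice offset and filter settings here are assumptions,
not what the original script used.
"""
import subprocess

URL_LIST = "urls.txt"     # hypothetical file name: one YouTube URL per line
SLICE_SECONDS = "3"       # three-second slices, as described above
START_AT = "10"           # arbitrary offset into each video

with open(URL_LIST) as f:
    urls = [line.strip() for line in f if line.strip()]

slices = []
for i, url in enumerate(urls):
    source = f"source_{i}.mp4"
    piece = f"slice_{i}.mp4"

    # Download the video (-o sets youtube-dl's output file name).
    subprocess.run(["youtube-dl", "-o", source, url], check=True)

    # Cut a three-second slice, scale it to fit inside 1920x1080 and pad
    # the remainder with black so every slice ends up Full HD.
    subprocess.run([
        "ffmpeg", "-ss", START_AT, "-i", source, "-t", SLICE_SECONDS,
        "-vf", "scale=1920:1080:force_original_aspect_ratio=decrease,"
               "pad=1920:1080:(ow-iw)/2:(oh-ih)/2",
        "-y", piece,
    ], check=True)
    slices.append(piece)

# Glue all the slices together with MP4Box (-cat appends each file).
cmd = ["MP4Box"]
for piece in slices:
    cmd += ["-cat", piece]
cmd += ["-new", "mashup.mp4"]
subprocess.run(cmd, check=True)
```

One nice thing about using MP4Box for the final step is that -cat works at the container level, so the slices are appended without another re-encode.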

After that, I placed the half-gigabyte file in the very capable hands of Mr. Eyeclipse, who added the Robot Media watermark and a tint of "corporate blue" with After Effects. Maybe that could have been done with ffmpeg too, but getting the right tint hue wouldn't have been as interactive (or as immediate) as with AE.

We then put the video on loop on the office TV. Party guests were challenged to identify and tweet the names of ten robots appearing in the video. Only one person managed to do it, and he got a full set of robot stickers!

The script is slightly messy, so I haven't got round to publishing it yet. I'll do that soon, so you can create your own crazy randomized mash-ups from just a couple of YouTube URLs.

A few days after the party I remembered the supersupercut project that was unveiled at a past Seven on Seven event in New York. Back then, they mentioned they had used already-"cut" scenes in the project they showed, so the scene detection process was manual.

When I checked the supersupercut website this time, it turned out they are now using something called Shotdetect, an open-source piece of software that can detect scenes in videos! Damn! I should have checked it before! That way I would have been able to select slices from different scenes; sometimes several clips come from the same scene and it looks like the same clip is repeating.
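I haven't actually tried Shotdetect yet, so I won't pretend to show how it's driven. As a stopgap, though, ffmpeg itself can do rough scene-change detection with its select filter; here's a minimal sketch, with the 0.4 threshold being an arbitrary starting point to tune rather than a recommendation:

```python
import re
import subprocess

def scene_changes(path, threshold=0.4):
    """Return approximate scene-change timestamps (in seconds) for a video.

    Uses ffmpeg's select/showinfo filters rather than Shotdetect itself.
    """
    result = subprocess.run(
        ["ffmpeg", "-i", path,
         "-vf", f"select='gt(scene,{threshold})',showinfo",
         "-f", "null", "-"],
        stderr=subprocess.PIPE, text=True,
    )
    # showinfo logs one line per selected frame; pull out its pts_time.
    return [float(t) for t in re.findall(r"pts_time:([0-9.]+)", result.stderr)]

print(scene_changes("source_0.mp4"))
```

With those timestamps in hand, the slicing loop above could pick one slice per detected scene instead of always cutting from the same offset.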

It would be even more interesting if it could detect things such as faces or objects appearing in a scene and then filter scenes accordingly (e.g. return only the scenes that contain robots). Surely there's some sort of academic thesis on this waiting to be implemented by Adobe in the next CS iteration!
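I'm not volunteering to write that thesis, but just to make the idea concrete, here's roughly what a per-scene filter could look like with OpenCV's stock frontal-face Haar cascade. It finds human faces, not robots, and both the function and the idea of probing a single frame per scene are mine, not anything supersupercut or Shotdetect actually does:

```python
import cv2

def scene_has_face(path, timestamp):
    """Check whether a face appears at a given timestamp (seconds) of a video.

    Purely illustrative: one frame per scene, human faces only.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_MSEC, timestamp * 1000)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return False
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(cascade.detectMultiScale(gray, 1.3, 5)) > 0
```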

In any case: food (or software) for thought. It goes onto the list of "things to revisit" at some point; hopefully it will make it into my "bag of tricks" too, just as MP4Box and ffmpeg did :-)

By the way, it was a pleasure to find out that both ffmpeg and MP4Box are available in ArchLinux's repositories, and in very recent versions too. Excellent! No more compiling from SVN just to get things like WebM support! Beat that, Ubuntu :-P