Category Archives: Software

Migrating to a new laptop (or: Apple-inflicted misery, once again)

Yesterday I got my new laptop and the technician’s idea was to just migrate all my settings and stuff over from the old one for simplicity, using Mac OS X’s built-in Migration Assistant.

I actually didn’t want to do this because I liked the notion of a clean slate to get rid of old cruft that I didn’t want anymore, but I thought I would give the migration assistant the benefit of the doubt.

TL;DR: it doesn’t seem to be ready for migrating a laptop that has seen intensive use and holds plenty of small files (think: the contents of node_modules) as well as big ones (think: screencasts).

The new laptop is one of those ultra-light MacBooks with a USB-C connector, so it doesn’t have an Ethernet port to start with unless you add one via the semidock.

The initial attempt was to migrate the data over the wireless network. After three hours, with the progress barely changing from “29 hours” to “28 hours”, I gave up and started reaching for the Thunderbolt to Ethernet adapters. We stopped the process and set up both computers connected to the same switch with Ethernet cables. The estimate was now 4 hours. MUCH BETTER.

I calculated that it would be done at about 20h… so I just kept working on my desktop computer. I had a ton of emails to reply to, so it was OK not to use my normal environment—you can’t use the computer while it’s being migrated.

A little bit before 20h I looked at the screen and saw “3 minutes to finish copying your documents” and I got all stupidly excited. Things were going according to plan! Yay! So I started to get ready to leave the office.

Next time I looked at the screen it said something way less encouraging: “Copying Applications… 2 hours, 30 minutes left”

I was definitely not going to wait until 22:30… or even later, because the estimate kept going up–2 hours, 40 minutes now, 3 hours… I decided to go home, not without wondering if the developer in this classic XKCD cartoon was working at Apple these days:

Remaining time: Fifteen minutes... Six days... thirty seconds


Today I accidentally slept in (thanks, jetlag) and when I arrived at the office, all full of hope and optimism, I found the screen stuck at “359 hours, 44 minutes left”.

I turned around to Francisco and asked him: “hey, how many days is 359 hours?” He opened up the calculator and quickly found out.

About 14 days.

And 44 minutes, of course.

I gave the migration “assistant” some more benefit of the doubt and went for lunch. When I came back it was still stuck, so it was time to disregard this “assistance” and call rsync into action.

  • I enabled SSH on my old laptop (System Preferences – Sharing – Remote Login)
  • Created an SSH key on my new laptop so I didn’t have to type the old one’s password each time. Then I added the new key to ssh-agent (with ssh-add)–otherwise ssh doesn’t even bother trying to use it when connecting to remote hosts–and appended the new public key to the authorized_keys file on the old computer (the GitHub instructions for generating keys are very good at explaining all this; see the sketch after this list).
  • rsync was already installed on both computers–it ships with OS X–so there was no need to install it with brew
  • then to copy entire directories I would use something like rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/folderIWantToCopy/ /path/to/folder/parent
  • except when the folder would have a space in the name, in which case I had to escape it with a TRIPLE BACKSLASH. For example, to copy the VirtualBox VMs folder: rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/VirtualBox\\\ VMs/ .
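
In case it’s useful, the whole SSH key dance from the second step boils down to something like this (a sketch; the file names are just ssh-keygen’s defaults):

# generate a key pair on the new laptop (accept the default path)
ssh-keygen -t rsa -b 4096
# add it to ssh-agent, so ssh actually offers it when connecting
ssh-add ~/.ssh/id_rsa
# append the public key to the old laptop's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh sole@oldcomputer.local 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'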

I have most of my stuff in a ~/data directory, so migrating between computers should be as easy as copying that folder.

Whatever wasn’t in there, I copied manually. For example, the Google Chrome and Chrome Canary profiles, because I didn’t want to set them up from scratch–you can copy them and keep some of the history without having to sign into Google (some of my profiles just don’t have an associated Google ID, thank you very much). Unfortunately, things such as cookies are not preserved, so I need to log into websites again. Urgh, passwords.

cd ~/Library/Application\ Support
mkdir -p Google/Chrome
mkdir -p Google/Chrome\ Canary
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome/ ./Google/Chrome/
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome\\\ Canary/ ./Google/Chrome\ Canary/

I also copied the Thunderbird profiles, which live in ~/Library/Thunderbird. That way I avoided setting up my email accounts again, and kept my custom local rules too.
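
That was the same rsync pattern as before, something like:

rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Thunderbird/ ~/Library/Thunderbird/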

I logged into my Firefox account in Nightly and it just synced and picked up my history, bookmarks, saved passwords and stuff, so I didn’t even need to bother copying the Firefox profiles. It’s very smooth! You should try it too.

Note that I did all this copying before even downloading and running the apps, to avoid them creating a default profile on their first start.

While things were copying I had a look at the list of apps I had installed and carefully selected which ones I actually wanted to reinstall. Some of them I installed using Homebrew, others using the always-awkward, iTunesque in spirit and behaviour, App Store. Of note: Xcode has spent the whole afternoon installing.

I also took this chance to install nvm instead of just node stable, so I can experiment with various versions of node. Maybe. I guess. We’ll see if it’s more of a mess than not!
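
For the record, the nvm workflow is roughly this (the version numbers are just an example):

nvm install 4          # install a specific version of node
nvm use 4              # switch the current shell to it
nvm alias default 4    # make it the default in new shells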

In short, it’s now almost midnight but I’m done. I started copying things at 17h, and had a few breaks to do things such as laundry, dishwasher, tidying up my flat, grocery shopping, and preparing and eating dinner, so let’s imagine it just took me 4 hours to actually copy the data I was interested in.

Moral of the story: rsync all the things. Don’t trust Apple to know better than you.

tween.js mega changes

Yesterday I had my every-two-months lucky day in which I could sit down and work on tween.js, and DID I GET THINGS DONE!!!

The first thing I did was to get rid of the minified version. Since the build process wasn’t fully automated, I often forgot to produce and check in that minified version, and people who used it would get all sorts of weird errors that I didn’t see (especially in Safari and iOS, as the polyfills we added would be in the uncompressed version but not in the minified one, sigh!).

Then I started using semantic-release as my invaluable helper for producing releases. Each time a push to the git repository happens, another service (Travis) runs a battery of tests to make sure nothing is broken. If the tests pass, semantic-release springs into action and will (probably not in this order):

  • determine the next “semver” version for the package. This is a function of the type of commit you made (a bug fix, a new feature, docs, chore…): breaking commits bump the first digit, new features the second, and fixes the third (I suggest you read more on semver if you’re interested). The type of commit is specified by having the commit message follow a certain syntax, e.g. a feature will be feat: implement feature A (see the example after this list).
  • tag the commit with the version, e.g. v16.0.1. I believe this is what bower people use and desperately need, and which I never provided because I don’t use bower and so didn’t notice.
  • create a GitHub release changelog thingie; these go to the releases page on GitHub
  • publish the new version of the package to npm
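
To make the commit syntax concrete, here are some made-up example commits and the bumps they would trigger (assuming the Angular-style convention that semantic-release understands by default):

git commit -m 'fix: do not crash when update() is called before start()'   # patch bump: v16.0.0 -> v16.0.1
git commit -m 'feat: add yoyo repeat mode'                                  # minor bump: v16.0.1 -> v16.1.0
# and a commit with "BREAKING CHANGE: ..." in its body triggers a major bump: v16.1.0 -> v17.0.0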

I think semantic-release can do so much more than this, but just having all these steps performed for me is A W E S O M E. So once I established this “infrastructure” I could go on and fix many other long-standing issues and also merge PRs and address questions.

Since we don’t have to produce a minified version anymore, I got rid of gulp, which I was only using for that. Installing tween.js with npm is now very, very lean, because I also added an .npmignore, so essentially it installs just the code of the library. Your node_modules trees will not include the examples anymore. Not that they were incredibly big, but every byte counts to some people, it seems 😛
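
For reference, an .npmignore works just like a .gitignore: a list of paths npm should leave out of the published package. Something along these lines (the exact entries in the real file may differ):

examples/
test/
.travis.yml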

I also added jshint (for code correctness) and jscs (for code style) verification as part of the test suite. This was something that would put me off reviewing PRs… especially having to explain to people that it was not OK to change all the whitespace in a file, or that they should respect the existing guidelines (even if it’s all in the contributing file that very few people read). So the rules are now there, and everyone has to abide by them, or the tests don’t pass and the PRs are not accepted.

Interestingly, I added these steps using the advice in Kate Hudson’s Nordic.js talk “Front-end automation with npm scripts”, where she showed that you don’t actually need a task runner–I recommend watching it! Or check out her reading list on the topic.
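
The gist is that the “test” script in package.json simply chains the commands together, roughly like this (the file paths here are illustrative, not the actual tween.js layout):

jshint src && jscs src && node test/tests.js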

Next up was dealing with a ton of sorta old and sometimes outdated PRs that had been lying in the guts of github for months. As I explained in my previous tween.js post, something had happened and I hadn’t even seen the notifications for these.

I prioritised FIXES first. Many people are coming up with some novel ideas and features, and I’m grateful for that, but I decided to focus on accuracy and robustness for now. Some aspects of the code are a bit obscure and I am not sure I understand them well, mostly because I just merged them in when someone proposed them, and now I’m paying the price when strange edge case errors are reported.

Of course, I’m still not done by any means, because there was a massive backlog and the day only has so many hours, and I’d like to do human things such as sleeping, etc.

This brings us to an interesting paradox: many people use tween.js, including big agencies who charge a lot for their projects, but only a very few submit code or respond to my pleas for help. Maintaining a JS library has become way more demanding over the years. Back in 2011 people didn’t care about npm, bower, tags, releases and whatnot, so working on tween.js was way simpler and less time consuming. I could have just put a zip file on a website, and people would have been happy with that, for what it’s worth.

But since I changed roles at Mozilla I am travelling a lot more, and that literally devours your time. I am not complaining about it, but I need quiet time to sit down and get my head down in code, and I’m not having much of that lately. Mozilla supports me working on tween.js, but I do have my own work duties, which take priority. All my coding time is happening during working hours, and when I’m off work I like to enjoy my free time doing things such as being outdoors or just talking to people face to face, not via a bug tracker. Or despair about the list of issues and PRs growing and me not having time to even acknowledge them 😩

But before totally blaming open source for its toxicity, I decided to own this a little bit myself, and totally revamped the README file to make it more welcoming and clear (I took heavy inspiration from Express). I have also filed some bugs and tagged them as help needed or good first bug. If you enjoy tween.js and want to give back by contributing to these bugs, you’ll gain extra points of awesomeness.

People like roadmaps, so this is what I’d like to see next:

  • Review all pending bugs and PRs and resolve or close them.
  • Fix the things that have to be fixed and ensure all code is tested and clear before adding new features, because it is getting to a point where it is unwieldy and scary to even look at a diff (what even does this thing do!?). Hopefully the new automation will help here, and we can focus on logic and not on chores!
  • Divert all new feature ideas to the future ES6/ES2015/ESWHATEVER version of tween where everything will be super modular and you should be able to use parts of it as you need and hack other types of tweening engines as you see fit.

This is it for now. Thanks for reading, and happy tweening!

Organising my music collection with find and ffmpeg

Since I bought a Synology NAS I’ve been spending time sorting out my music library. Let’s say it’s been a very relaxing hobby: I get to listen to music I hadn’t listened to in years, it brings back great memories, and it leads me to research “what happened to that indie artist whose protected demos I downloaded from MySpace using Firebug in 2007”, at which point I end up in a YouTube hole looking for remixes and live versions of my favourite tracks. So that’s fun.

People always ask me why I bother keeping an MP3 collection when “I could just use a service like Spotify”. The answer is: I like indie music and obscure music and bootlegs and all sorts of things that won’t make it to Spotify, mostly because the record label has either disappeared or never existed to begin with (as is the case with unsigned artists). So curating this sort of collection is a very interesting hobby–sometimes I feel like I own the last copies of some artists’ music, as their MySpace pages have disappeared, etc.

But there are things I do not really enjoy doing, namely repetitive stuff.

So whenever I can automate something, I do. The terminal is really great for that–and this is a bit of an exercise to improve my bash skills too! 😎

Task 1: remove unneeded files from a folder and its subdirectories

Sometimes you download a bunch of music files and they’re organised in directories. And maybe there are a lot of .url or .rtf or .txt files you don’t care about. You could go and delete them manually, or you could just use find to remove them with a single command.

For example, to delete all .m3u playlists in a folder and its subdirectories:

find . -iname '*.m3u' -delete

You can run it without the -delete switch first to make sure you’re not deleting too much (there is NO going back with this! no recycle bin!):

find . -iname '*.m3u'
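
You can also zap several extensions in one go with find’s -o (“or”) operator–again, try it without -delete first:

find . \( -iname '*.url' -o -iname '*.rtf' -o -iname '*.txt' \) -delete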

Task 2: Rename cover [whatever].jpg to folder.jpg

Apparently these Synology devices will use folder.jpg to display the cover art of each album, assuming each album is “a folder” and there is an image file with that name in it. But these files are sometimes not named like that; they might be cover.jpg or random-string-of-letters-and-numbers.jpg. Again, you could rename them manually, or you can press the turbo pedal and go megafaster with this:

find . -iname '*.jpg' -execdir mv {} folder.jpg \;

Warning: this is a bit too “fire and forget”, and might fail if there are already existing folder.jpg files, or do the wrong thing if there is more than one .jpg file in a directory. I haven’t tested those scenarios, but what you can do before running the actual rename is use find to list the files that match the .jpg pattern. If you find one per folder, then you’re good to go:

find . -iname '*.jpg'

Probably an over-the-top improvement would be to use ImageMagick’s identify to list the sizes of the files and rename only the largest one to folder.jpg, but I am not that obsessed with perfection, and besides, I’m only running this on one artist’s folder. Also: if you find yourself needing that much logic, perhaps it’s better to write it in an intelligible scripting language such as Python or Node.js. Your future self will thank you forever, unlike with Bash code full of backticks and other mysterious power tricks painfully assembled from Stack Overflow, only to be forgotten immediately.

Task 3: convert the files to a format you actually like

I’ve also got music files in formats I don’t like. I’m a bit of a savage and don’t care much about FLAC or sound purity (there, I said it), because I don’t actually listen to music on high-end equipment. Besides, I used to listen to music on tapes that had been used and reused so many times you could still hear the previous three or four recordings, so even the crappiest digital encoding often sounds super state-of-the-art to my ears.

I tend to just convert FLAC stuff to 320k MP3, but this also works for converting those horrid M4A files to MP3, OGG to MP3, or WAV to MP3 (yeah, some artists think that WAV is a good distribution format!).

For example, this will convert all *.wav files in a directory to 320K MP3 while keeping the metadata. You’ll need ffmpeg installed (you can install it on a Mac with brew install ffmpeg).

(for FILE in *.wav; do ffmpeg -i "$FILE" -f mp3 -ab 320000 -map_metadata 0 -id3v2_version 3 "`basename "$FILE" .wav`.mp3" || break; done)

Replace .wav with the extension you hate the most, e.g. .m4a. For grittier results, and for even more savage souls than mine, you can drop the file size by reducing the bit rate to 128000–or use 64000 for a very convincing “MP3 player from the 90s with 64 megs of RAM and very shitty earphones” feeling.

It just occurred to me that maybe you could also use this one to convert a bunch of videos from talks into “podcasts” (by stripping out the video and just keeping the audio), but I haven’t tried that.
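
If you fancy trying that, it should just be a matter of dropping the video stream with ffmpeg’s -vn switch; something like this (untested, as I said):

(for FILE in *.mp4; do ffmpeg -i "$FILE" -vn -ab 128000 -map_metadata 0 "`basename "$FILE" .mp4`.mp3" || break; done)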

If you have cool power tricks for processing audio files, do send them my way in the comments section–I’d love to add them to my arsenal :-)

Solving the “multiple MacVim instances” confusion

I somehow started to experience this confusing situation where I had two instances of MacVim open and could Command-Tab between them. It was really annoying, because if I wanted to switch between different MacVim windows I first had to locate which of the instances contained the window I wanted to edit files in. See where I’m going with this? No? I understand, it was very confusing.

However, I found out what had happened: I had two copies of MacVim installed.

So when I opened something with the mvim command-line shortcut, one of the copies would launch, and when I opened a file with “Open with MacVim”, the other one would. Chaos ensued!
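
If you suspect you are in the same situation, something like this should reveal the duplicates (the paths assume a default Homebrew install):

which mvim
ls -d /Applications/MacVim.app /usr/local/opt/macvim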

The solution was to delete the copy in the /Applications folder, and uninstall the other one from Homebrew:

brew uninstall macvim

and install it again:

brew install macvim

Now I’m back to just one MacVim instance. Yay!