On Loop 2015

I was invited to join a panel about Open Source and Music at Loop, a slightly unusual (for my “standards”) event. It wasn’t a conference per se, although there were talks; most of the sessions were panels and workshops, and there were very few “individual” talk tracks. Lots of demos, unusual hardware to play with in the hall, a relaxed atmosphere, and very little commercialism—really cool!


Some additional thoughts on the recent discussion about “frameworks vs vanilla JS” on mobile

Some context for those of you who haven’t been following this thing:

Paul talked about his research on whether using a JavaScript framework on mobile devices was good or not, for a number of interpretations of what “good” could mean. I thought it was a really good starting point because it is nuanced, has data, considers a number of pros and cons and is, in short, way more constructive than the usual FRAMEWORKS SUCK message.

The conclusion is that vanilla JS is faster to start, as it doesn’t have to do all the bootstrapping that frameworks do. But perhaps using a framework can also be convenient for you as a developer—and for the user, if the developer can provide value to them instead of building their own ad-hoc framework. But ultimately the decision depends on each particular case, and each developer has to make their own judgment.

Unfortunately, people on Twitter started essentially responding that “hey, phones will just get faster (MOORE’S LAW!!! MOORE’S LAW!!!!), so deal with it and use a framework”, and, by the way, “actually Paul used an old version of Ember, and also React would never be used that way”, despite the fact that Paul said so himself in the post. Anyway…

Tom didn’t quite like the analysis and said that you would need to run the comparison with some app other than TodoMVC, because it’s too simple.

Dave added some more insights too, replying to Tom.

I want to add a number of additional reflections because I ALSO HAVE OPINIONS and since I’m done with talks for the year, I have time to manifest my thoughts now:

The “phones will get faster” fallacy

They might. But faster phones are not the reality for everyone who is not a developer or a techie. The constant, absurd, terrifying-to-witness delusion of developers who believe that everyone is accessing their websites using a 4G connection and the latest and greatest device is just painnnnnful.

Granted, some apps will be built for very niche segments, and you can “count on” their users having the appropriate device to run them. An example of this, somehow, could be games that only run on PS4. The crucial difference is that, ideologically and ideally, the web is not a closed, controlled platform the way a games console is, despite the multiple and insistent efforts from certain vendors and the irrational buy-in from many developers.

But I go out on the streets and I do not see everyone using the latest and greatest iPhone. I see a variety of devices. Tons of cheap Android phones which are suuuuper painful to see in action, because the hardware is ‘utilitarian’ but developers assume that everyone has a great Nexus phone like they do. And also—people have been taught that they have to install an app per service, because said services do not know how to build a website to provide the service. So it turns out that people do need to complete a lot of tasks with their phone, and the more apps they install, the more crippled their phones become.

Yes, you can get another phone which is faster, but it’s eventually going to get slow again as you reinstall your ‘task based’ apps. And the fact that it is “faster” doesn’t mean that it is actually giving 100% of its resources to you.

Also: apparently JS is not getting faster on Android, despite people buying ‘faster’ phones.

TodoMVC might be, in fact, good enough

While it might not be a super complex app, it is still an app, and something we can use to compare different ways of offering the same functionality. Dismissing it because it’s small ignores the fact that it gives us a lower bound on “how bad” things get even with small apps. If anything, it might get even worse as apps grow.

Code size is not necessarily bad by default

You can have a big code base that seems to run very fast on the phone—time from download to render being super small and the app being very responsive.

But you could also have a very small code base that blocks like there’s no tomorrow, and makes the users of an app cry in frustration because it makes them think that their phone is somehow broken (“WHY IS THIS SO SLOW argh I’ll need to get a new phone OMG NO”).

The sweet spot

Or: but what do you really need?

Framework fans often seem to get offended when the simple proposition of not using a framework (also known as “vanilla JS”) is brought into a conversation. “WHY WOULD YOU NOT WANT TO USE A FRAMEWORK IN YOUR SITE?!! ROWR ROWR ROWR GROOOOWL!!”

Well, actually, you might not need a framework. Or maybe you might. Or maybe not now, but soon. So it’s good to plan a little bit ahead, but it is not advisable to plan too far ahead, because you might be adding lots of additional baggage you won’t need after all.

I think that using a framework starts to make sense once you go past a certain level of complexity. Especially when you have URLs that need to be parsed with a router, and then you have to render templates and all that—at that point, possibly your best bet is to use an existing framework with a good community behind it, so that people have already tried doing things similar to what you might want to do, and there’s lots of literature you can find.

Those are probably far better tested than any “Frankenstein framework” you might write, and can also offer you great features such as rendering on both the server and the client side, so you get the best of both worlds. Developing this kind of thing yourself might take you a very long time.

But maybe you don’t need to do all that; perhaps you just need a router. Or just a template engine. In my ideal world you should be able to just pick bits and pieces individually, and use them. But that tends to only work for small projects.

Frameworks as a neutral option

It’s not explicitly discussed in the existing posts, but adopting an external framework developed, vetted and regulated by some other organisation can be one of the most neutral things to do in organisations which do not want developers to “get territorial”; i.e. if the whole organisation adopts the framework ONE of its developers wrote, that developer “wins”, and it creates a potentially problematic power dynamic. I guess that would be Conway’s law at its best.

Another great side effect of not using a framework developed ‘in-house’ is that developers who work on the code base later don’t need to deal with the terrible lack of documentation that is the usual case with in-house frameworks. To date I have yet to see an in-house framework that is not terribadly documented, and I’ve been building stuff for the web for more than 15 years…

Developers love to engage in loud, neverending discussions about framework technicalities, but it’s important to keep in mind that, more often than not, politics are what end up driving adoption.

History repeeeaaaating

I already wrote about libraries and frameworks back in 2007, when it was assumed that you had to choose a PHP framework. Ah, this never gets old…

Same conclusions apply, so I’m just going to literally quote myself:

[…] investigate […]


Migrating to a new laptop (or: Apple-inflicted misery, once again)

Yesterday I got my new laptop and the technician’s idea was to just migrate all my settings and stuff over from the old one for simplicity, using Mac OS X’s built-in migration assistant.

I actually didn’t want to do this because I liked the notion of a clean slate to get rid of old cruft that I didn’t want anymore, but I thought I would give the migration assistant the benefit of the doubt.

TL;DR: it doesn’t seem to be ready for migrating a laptop that has seen intensive use and has plenty of small files (think: the contents of node_modules) and big files too (think: screencasts).

The new laptop is one of those ultra light MacBooks with a USB-C connector, so it doesn’t have an Ethernet connector to start with unless you add one via the semidock.

The initial attempt was to migrate the data over the wireless network. After three hours of the progress barely changing from “29 hours” to “28 hours”, I gave up and started reaching for the Thunderbolt to Ethernet adapters. We stopped the process and set up both computers connected to the same switch with Ethernet cables. The estimate was now 4 hours. MUCH BETTER.

I calculated that it would be done at about 20h… so I just kept working on my desktop computer. I had a ton of emails to reply to, so it was OK not to use my normal environment—you can’t use the computer while it’s being migrated.

A little bit before 20h I looked at the screen and saw “3 minutes to finish copying your documents” and I got all stupidly excited. Things were going according to plan! Yay! So I started to get ready to leave the office.

Next time I looked at the screen it said something way less encouraging: “Copying Applications… 2 hours, 30 minutes left”

I was definitely not going to wait until 22:30… or even later, because the estimate kept going up–2 hours, 40 minutes now, 3 hours… I decided to go home, not without wondering if the developer in this classic XKCD cartoon was working at Apple nowadays:

Remaining time: Fifteen minutes... Six days... thirty seconds


Today I accidentally slept in (thanks, jetlag) and when I arrived at the office, all full of hope and optimism, I found the screen stuck at “359 hours, 44 minutes left”.

I turned around to Francisco and asked him: “hey, how many days is 359 hours?” He opened up the calculator and quickly found out.

About 14 days.

And 44 minutes, of course.

I gave the migration “assistant” some more benefit of the doubt and went for lunch. When I came back it was still stuck, so it was time to disregard this “assistance” and call rsync into action.

  • I enabled SSH on my old laptop (Preferences – Sharing – Enable remote login)
  • Created an SSH key on my new laptop so I didn’t have to type in the password of the old one each time. Then enabled the new key with ssh-agent, otherwise ssh doesn’t even bother trying to use that key when connecting to remote hosts, and copied and added the new public key to the authorized_keys file on the old computer (the GitHub instructions for generating keys are very good at explaining this; see the sketch after this list).
  • rsync was already installed on the computers I think, or maybe I installed it with brew, but I’d swear it was already there
  • then to copy entire directories I would use something like rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/folderIWantToCopy/ /path/to/folder/parent
  • except when the folder would have a space in the name, in which case I had to escape it with a TRIPLE BACKSLASH. For example, to copy the VirtualBox VMs folder: rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/VirtualBox\\\ VMs/ .
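
For reference, the whole key dance was something along these lines (just a sketch; the key type, file name and comment are examples, not necessarily what you should use):

# on the new laptop: generate a key pair
ssh-keygen -t rsa -b 4096 -C "sole@newcomputer"
# make sure ssh-agent is running and knows about the key, or ssh won't even offer it
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# append the public key to the old laptop's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh sole@oldcomputer.local 'cat >> ~/.ssh/authorized_keys'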

I have most of my stuff in a ~/data directory, so migrating between computers should be easy, by just copying that folder.
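
So that part boiled down to a single command following the pattern above (assuming the same host name as before):

rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/data/ ~/data/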

Whatever wasn’t, I copied manually. For example, the Google Chrome and Chrome Canary profiles, because I didn’t want to set them up from scratch–you can copy them and keep some of the history without having to sign into Google (some of my profiles just don’t have an associated Google ID, thank you very much). Unfortunately things such as cookies are not preserved, so I need to log into websites again. Urgh, passwords.

cd ~/Library/Application\ Support
mkdir -p Google/Chrome
mkdir -p Google/Chrome\ Canary
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome/ ./Google/Chrome/
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome\\\ Canary/ ./Google/Chrome\ Canary/

I also copied the Thunderbird profiles. They are in ~/Library/Thunderbird. That way I avoided setting up my email accounts, and also my custom local rules.
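
Again, the same rsync pattern did the job; something like:

rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Thunderbird/ ~/Library/Thunderbird/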

I logged into my Firefox account in Nightly and it just synced and picked up my history, bookmarks, saved passwords and stuff, so I didn’t even need to bother copying the Firefox profiles. It’s very smooth! You should try it too.

Note that I did all this copying before even downloading and running the apps, to avoid them creating a default profile on their first start.

While things were copying I had a look at the list of apps I had installed and carefully selected which ones I actually wanted to re-install. Some of them I installed using homebrew, others using the always-awkward, iTunesque in spirit and behaviour, App Store. Of note: Xcode has spent the whole afternoon installing.
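
The homebrew ones are the easy case; something like the following (package names purely illustrative, not my actual list):

# command line tools via homebrew
brew install git rsync
# GUI apps via the cask extension (may need brew tap caskroom/cask first)
brew cask install firefox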

I also took this chance to install nvm instead of just node stable, so I can experiment with various versions of node. Maybe. I guess. We’ll see if it’s more of a mess than not!
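
If you fancy doing the same, the gist of it is something like this (the install one-liner changes between versions, so check nvm’s README for the current incantation):

# install nvm
curl -o- https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# then install and switch between node versions as needed
nvm install stable
nvm install 0.12
nvm use 0.12
nvm ls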

In short, it’s now almost midnight but I’m done. I started copying things at 17h, and had a few breaks to do things such as laundry, dishwasher, tidying up my flat, grocery shopping, and preparing and eating dinner, so let’s imagine it just took me 4 hours to actually copy the data I was interested in.

Moral of the story: rsync all the things. Don’t trust Apple to know better than you.

Random thoughts on a jetlagged day

I went on holiday to NY last Monday, and it’s been almost two days since I landed back in London, but despite my constant nagging to “sync”, my “super smart watch” keeps insisting that it is 6pm (when it is, in fact, 11pm here).

I have been trying to convince myself that I am not, in fact, jetlagged, and that the fact that I slept 11 hours yesterday would make up for my erratic sleep routine, but since it’s 11pm and I’m quite awake I figured it’s a great time to dump random thoughts on your face!

Jeremy Keith is happy that Medium will let him post using an API, so he now posts to his journal and then to Medium. It sounds all fine and dandy, but… what about SEO, I wonder? Do we not care about Google rankings anymore? And also, what’s the position on ‘canonical’ content? How do we update all these copies? For the longest time, I’ve always preferred to post on my blog and then just put links to it in other services. The link to my content won’t change, but the content might.

Related: many people are taking to Twitter to dump their thoughts in a sort of timeline-stream-of-consciousness. I am guilty of this too, most often when I’m in an airport and having one of my inspiration (read: rant) moments, but can’t easily reach for a laptop and fire up my blog admin interface. I’ve got multiple issues with this:

a) things tend to get lost on Twitter
b) reading long ‘timelines’ is the worst; the Twitter UI is definitely not up to speed for these cases
c) I don’t want to read hate comments in the middle of a timeline

A patchy solution is to use Storify or something of the sort, but that means the author has to painstakingly collect those tweets and thread them back together in that service (I haven’t used it, so I’m not sure how easy that is to do). This is extra work. Also, I’m not sure for how long the stories are archived.

An alternative, if you’re using WordPress, is to copy and paste the links to your tweets, and it will archive the content. Still, the displayed UI is nowhere near as comfortable as reading individual sentences joined into paragraphs. It’s not WP’s fault; tweet-timeline-shaped rants feel very awkward in general. It sort of reminds me of Speakers’ Corner, where people try to make themselves heard by yelling short catchy pieces and hoping they attract someone’s attention.

My point, in other words, is that more people should be getting blogs (again).

Another related rant would be our modern over-reliance on centralised systems. I did take part in one of those while I was waiting for my flight last Sunday. It started with Adam Brault’s retweet of a post describing how Google, Outlook et al. penalise new mail systems from the start.

Adam essentially said that people are trading away freedom for convenience.

At this point I had to take part in the discussion. I am all for the security and decentralisation aspects, but frankly, trying to set up some of these systems requires a degree in Maths OR MORE. You can’t expect the average person to just go and do that. People have other priorities.

It’s 2015, and setting up a functional and robust mail server should not take a whole week for a person who does not work at an ISP or compile their own kernel. This is something that drives me nuts. Likewise with signed email and PGP and all that; it’s just so damn impossible for “normal people” to use, let alone understand why they should use it.


Another contentious issue of late seems to be the “modern web” developers vs “the old, progressive enhancement school”.

I see some people that I really like and know have very good intentions being all speakers-cornery to each other, hatin’ on each other’s approach and seeing everything in black and white, and I feel terribly sad about all of that. Both sides are right and wrong in different ways, and as with everything in life, there is no One True Approach, but people seem to be falling for these quasi-religious wars in a way that is really frustrating and silly, and very worrying when they confuse beginners with all that FUD.

If there’s something I could change in tech with a magic wand, it would be to stop all this loud hate and to start having meaningful, nuanced and researched discussions. Some examples:

  • “JS frameworks are killing the web!” would turn into “loading 300kb of JS before you render anything is a bad practice that leads to terrible experiences, consider using plain HTML, the async or defer attributes, and perhaps the minimum CSS and JS for this first render, and load the rest later as needed”
  • “Are you from the past?!!” would turn into “browsers can do a lot more nowadays than just render static content built by a server, and I’d like to offer that dynamic experience to people and also make it work for as many people as I possibly can, but I do not think I can progressively enhance things in a cost-effective manner past a certain point”

… and so on.

There are also a couple of other issues at play, which are that…

a) many developers do not understand how JS or CSS work at all, because they come from other non-webby developer backgrounds, and
b) many developers do not understand how some difficult aspects work, and even if they might not need to overcome those difficulties (because hey, maybe their project is never run on browsers that need to be “polyfilled”), they pull in a framework because that “might protect them”.

So that ends up adding a ton of bloat to websites, and there is also this unpalatable cargo-cult aura surrounding frameworks. As if everything you do using a framework were infinitely better and amazing, and as if you had to choose one as your one defining characteristic as a developer.

Likewise, old-school people who are used to just “vanilla everything” often don’t empathise with the sheer fear that working on the front end induces in new web developers–especially if they come from environments with more controlled runtimes.

So my proposal is: instead of attacking scared people and using your vast knowledge to distinguish yourself from “newbies” and “fake developers”, how about we try to make things less crap? For example, the fact that many aspects of CSS are super counterintuitive and do not make any sense until you read and understand the spec is something to be worried about, not something to brag about. Perhaps you should contribute to clarifying the confusing bits. Perhaps you should spend time trying to explain those bits to new developers, instead of yelling at them from your ivory towers.

Most ironically, all this time we spend arguing and debating about the virtues of this and that framework is essentially a waste of time. It’s almost like arguing about football. It makes pretty much no difference. What would actually change things for the better is to deliver…

  • fewer ads: fewer unoptimised JS scripts would block the rendering less and would load faster, especially over mobile connections with their extreme latency, so things would be snappier and more pleasant to interact with
  • and smaller images, or at least appropriately sized images for the current viewport, or SVG, or PNG where appropriate

And with that I’m done for today–it’s 1 AM and I might be starting to get moderately sleepy, so there’s hope. ZzZzzZz… maybe!

If you’re feeling a bit insomniac: I wrote another random thoughts post, but on a day off, in March, just in case you didn’t have enough sole-thoughts today.