Extreme decoupling or all-as-a-module

I opened my laptop in the morning and found one of my open tabs in Nightly was for Vue.js. I don’t even remember how I ended up there. Was I reading about frameworks? Did anyone send me the link? Who knows!

But I was curious. I am not a megafan of frameworks, but I like looking at them. One, because their idioms are often adopted by other developers, so it’s good to be aware of where things are going. And two, because frameworks do lots of “magic” in the background, and I always want to know how they implement their “magic”—maybe I’ll want to adopt some of it!

So instead of closing the tab, I perused the page. It has a virtual DOM, as React does, but they seem to take great pride in their overall minimalism (small file size, little intrusiveness). The examples are amongst the most readable I’ve found for frameworks when it comes to the JavaScript API; the HTML directives feel as alien as in most frameworks.

Later I was discussing this strange incident with friends (“I found an open tab in my browser—do you think this is a signal from the universe that I should get into Vue.js?”) and Irina also highlighted the fact that Vue.js “components” might be simpler to build than the equivalent in React, and also be less coupled.

This segued into talking about The Dream:

You know what the dream is? Have everything be an npm package that I can plug in to any framework I like. And everything is packages packages packages


Oprah giving free packages away to everyone
You get a package! And you get a package! And you get a package! And you get a package! And you get a package… everyone gets a package!

(Irina demanded an Oprah themed meme)

And of course this reminded me of earlier conversations with chameleonic Jen about modularising everything and maximising reuse. She would propose, for example, abstracting a card game into separate modules: one for handling the rendering, another for handling card games in an abstract way, and another one for handling a specific type of game. This way you could build multiple games just by providing an implementation for the specific game. (Games are notoriously not built this way.)

Likewise, Aria talked about radical modularity at Web Rebels and the notion that if your modules are small enough, they are done. Finished. You rarely need to ever touch them again (unless there’s a bug). Watch the talk: it’s very inspiring.

I really like this “pure” idea, and it can work very nicely as long as you keep your data and logic separate from your view.

Unfortunately, UI code often intermingles data and view, so you end up declaring your data as instances of whatever base component the UI is using, which is not very maintainable in the long run. If you want to change the UI, you will need to take the ‘data’ bits out of the UI code, or write some sort of adapter between “UI code” and “data”, so that you only have to change the “adapter” when you decide you don’t like your current view layer. The indirection could be a performance hit, though, so you might end up sacrificing flexibility for performance.
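For instance, a minimal sketch of that adapter idea in plain JavaScript (all names here are hypothetical): the data stays framework-free, and only the adapter function would need rewriting if you swapped view layers.

```javascript
// Hypothetical example: the card data knows nothing about the view layer.
var card = { title: 'Ace of Spades', suit: 'spades' };

// The adapter is the only place that knows how the current view layer
// wants to receive data. Swap view layers, rewrite only this function.
function toViewModel(card) {
  return {
    label: card.title.toUpperCase(),
    cssClass: 'card card--' + card.suit
  };
}

// Whatever view layer you use (DOM, React, Vue...) only ever sees
// view models, never the raw data objects:
var vm = toViewModel(card);
console.log(vm.label);    // 'ACE OF SPADES'
console.log(vm.cssClass); // 'card card--spades'
```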

But hey… everything in computing is always a trade-off!


Extensible Web Summit Berlin: notes and thoughts on some of the sessions

As mentioned in this previous post, I attended the Extensible Web Summit in Berlin last week, where I also gave a lightning talk.

We collaboratively wrote notes on the sessions using this etherpad-like installation. Question: how are we going to preserve these notes? Who owns OKSoClap?

Since writing everything about the summit in just one post would get unwieldy, here are my notes and thoughts, written afterwards, on the sessions I attended.

“Just turn it into a node module”, and other mantras Edna taught me

Here are the screencast and the write-up of the talk I gave today at One-Shot London NodeConf. As usual I diverged a bit from the initial narrative, and I forgot to mention a couple of topics I wanted to highlight, but I have had a horrible week and, considering that, this turned out pretty well!

It was great to meet so many interesting people at the conference and to see old friends again! Also, now I’m quite excited to hack on a few things. Damn (or yay!).


ScotlandJS 2014 – day 2

(With a bit of delay, because socialising happened)

I am typing this at Brew Lab, having my last flat white in Edinburgh (total count in two weeks = about 7, I think). They aren’t paying me for this free advertising, but I want to say that it’s a cool hipster place literally and metaphorically up my street, the coffee is quite good, and not only is the wi-fi free and working, but they also have power sockets. So there you go.

Keynote by Mikeal Rogers

Yesterday we barely made it in time for Mikeal Rogers’ keynote, and I am certainly glad we rushed! I didn’t take many notes because I was mesmerised by his lighthearted telling of the evolution of his own code, and how the improvements in node.js have brought benefits not only for node.js itself, but also for the modules that have evolved alongside it.

The whole talk is interesting to watch, but two things stood out for me as reasons why the node.js module ecosystem was so rich and growing:

  1. simplicity of interface:
    module.exports = function() { /* ... */ }

    –he explicitly insisted on exporting a function and not a plain object

  2. simplicity of callbacks signature:
    function(err, res) { /* ... */ }

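Put together, a hypothetical module following both conventions (the name and behaviour here are made up for illustration) might look like this:

```javascript
// add.js – the module's whole interface is a single exported function:
var add = function (a, b, callback) {
  // Error-first callback signature: (err, result); err is null on success.
  if (typeof a !== 'number' || typeof b !== 'number') {
    return callback(new Error('numbers only'), null);
  }
  callback(null, a + b);
};

module.exports = add;
```

A consumer would then just do `require('./add')(2, 3, function (err, sum) { … })`, which is exactly the kind of tiny, composable surface he was praising.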
I am pretty sure there was a third thing, but he changed to some other topic or I got distracted, and I forgot about it. So maybe it wasn’t that important!

“Refactoring legacy code” by Sugendran Ganess

What stood out for me: you want to have as many tests as possible and aspire to as much code coverage as possible, so that when (or if) you refactor and inadvertently change the existing behaviour of the system, you notice before it’s too late.

Also: you want to be in master as soon as possible.

Finally he pointed to a couple of tools I want to check out: istanbul.js, platojs.

“Beyond the AST – A Fr**ework for Learnable Programming” by Jason Frame

This was totally unlike what I was actually expecting. The AST part tricked me into believing we were going to hear about tree building and languages and grammars, but it was actually an enjoyable examination of how we generally teach programming and where we are doing it wrong. He said that natural language is taught incrementally, and once you have a base you can start “getting creative”. In contrast, programming is often taught by throwing a list of concepts and constructs at people, and we expect them to get creative instantly. Which doesn’t quite work.

He also showed a prototype (?) environment he’d built that would fade out blocks of code that weren’t in the current scope (according to where the cursor was), so new programmers could notice or get a feel that those things weren’t accessible. That would allow him to “remove most sources of frustrations” for beginners.

This environment was also able to execute the program at a slower pace, so you could see the drawings happen on the screen one at a time, instead of all in one go. This reminded me of learning programming with Logo and having to wait for the fill operation to complete, because it would raster each line horizontally, from left to right, pixel by pixel. And it wasn’t fast 😛 (Incidentally, I discussed whether slow computers might be better for learning a couple of years ago.)

Some interesting lessons here, since my team is focused on getting more people to build apps, and so we spend lots of time analysing the current offering of materials and tools, trying to identify where the pain points are, i.e. where the frustrations occur.

“High Performance Visualizations with Canvas” by Ryan Sandor Richards

This was a nice talk that delivered what the title says, quite succinctly and to the point. He showed how to build graphs using static and realtime data, and, importantly, how to nicely transition between existing and incoming data even when various latencies occur (“networks are going to be networks”). Some of the interesting suggestions that not everyone might know about were, for example:

  • do not redraw static content in each frame. Draw it once at the beginning and keep it on top with a higher z-index. In his example he would overlay an SVG element (holding the static content) on top of a Canvas element (holding the changing chart)
  • image copying is super fast. So if you’ve already drawn part of the content and can reuse it in the next frame, do it! Copy pixels!

However, I think he “cheated”. All the graphs and examples he showed were based on rectangles, and new data would just come in from the right and slide left, so bringing in new data was just a matter of copying the old canvas into the “display” canvas, shifted one unit to the left, and drawing the new data. When you don’t have much overlapping, it’s “easy” to increase performance “dramatically”. I was perhaps expecting to learn how to get crazy performance with complex visualisations where you can’t use this trick, but he didn’t cover that. Instead, he acknowledged that complex d3 visualisations would turn your computer “into a nuclear reactor”. A problem to solve, or a job for WebGL+GLSL?
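The slide-left trick, as I understood it, could be sketched like this (assuming a 2D canvas context; the function name is mine, not from the talk). Note that drawImage accepts a canvas as its source, which is what makes the pixel copy cheap:

```javascript
// Each frame: copy the existing pixels one `unit` to the left, then
// draw only the newly arrived column on the right edge.
function slideLeft(ctx, canvas, unit, drawNewColumn) {
  // Copy everything except the leftmost `unit` pixels, shifted left.
  ctx.drawImage(canvas,
    unit, 0, canvas.width - unit, canvas.height, // source rect
    0, 0, canvas.width - unit, canvas.height);   // destination rect
  // Clear the strip where new data appears, then draw just that strip.
  ctx.clearRect(canvas.width - unit, 0, unit, canvas.height);
  drawNewColumn(ctx, canvas.width - unit);
}
```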

He also released a library for doing all these things: Epoch.

“Cylon.js: The JavaScript Evolution Of Open Source Robotics” by Ron Evans and his friend whose name I didn’t make a note of

This was a really entertaining, amusing and awesome talk! Ron was a very eloquent speaker who gave plenty of brilliant quotes for me to tweet, and then they did lots of demos to SHOW, not tell, that their JS framework lets you write the same JS code once and run it on a myriad of physical devices: Arduino, Tessel, some sort of round robots that move by themselves, Pebble, drones, and even that sort of controller for “reading brain waves”. Sorry about not making a note of all of them–I was so amused by the display that I totally forgot.

This talk left me excited about building something with robots, which I guess is a measure of success. Good job!

“Make: the forgotten build tool” by James Coglan

An excellent introduction to make that actually answered the questions I wish someone had answered for me about 10 years ago. It really tempts me to go back to make; only, I know that Windows users always have issues with it, so we have to write the build system in node so it’s something they can execute without installing an additional tool (cygwin or whatever). Ah, I’m torn, because I really like make + bash…

“The Realtime Web: We’re Doing it Wrong” by Jonathan Martin

So–Jonathan was a good speaker, and the slides were fascinating to watch, because they had plenty of animations that at that point of the day seemed really cool to look at. However, after the talk I still don’t know what we are actually doing wrong. I was expecting some sort of utterly mind-blowing revelation, but it never happened.


With Mikeal Rogers, Angelina Fabbro and Mike MacCana, at a place with lots of pork meat and lots of drawings and watercolours of happy pigs on the walls. There’s absolutely nothing to complain about here.


“No more `grunt watch`: Modern build workflows with Broccoli” by Jo Liss

Revelation time: for the longest time, I thought Broccoli was a joke response to the myriad of task runners such as Grunt, Gulp, etc. So imagine my surprise when I learned that it was for real!

Jo went straight to the point and showed why Broccoli was better than Grunt in specific domains, and demonstrated how to use Broccoli to build a project. The syntax seemed way more imperative than Grunt’s–and way easier to understand. The core concept in Broccoli is trees of files, which, I confirmed with her afterwards, is why Broccoli is called Broccoli. I mean, just look at one and see how it branches into smaller broccolis!

“Build Your Own AngularJS” by Tero Parviainen

Being honest: Tero didn’t catch my attention during the first minutes so I zoned out and kept thinking about how to improve canvas performance. Related: this article about graphics performance in Firefox OS/mobile devices.

“Don’t Go Changin'” by Matt Field

This was a disappointment to me: I thought he’d show a magically clever hack for working with immutable objects in JavaScript, but instead he started describing how he’d do things in Clojure, where those things are actually native. I tried to stay, but my brain wasn’t patient enough for an impromptu short introduction to Clojure, so I left. But…

Impromptu conversation in the hallway!

… I happened to find Jo Liss and some other guy (whose name I didn’t catch, sorry) in the hallway, and somehow we engaged in a conversation about how to onboard new contributors to your project, or even how to get someone to contribute at all. A suggestion was to encourage newcomers to start by fixing things in the documentation—a simple “task” that shouldn’t break the code. I described how some teams do it at Mozilla: there’s the Bugs Ahoy! website, which keeps track of bugs marked as “good first bug” and bugs with assigned mentors. Also, at a past work week my team had a similar conversation, where we agreed that having an explicit and visible list of issues/to-do items made it easy for people to actually get their hands dirty.

Also: how do you steer the project in a direction that makes sense without frustrating contributors and/or maintainers, and without falling into the Feature Creep pit either? One suggestion was to make it very clear what the project is about. Don’t try to solve all the issues; rather, do something, and do it well.

And empower your contributors. Once someone steps in and submits something valuable, treat them as an equal peer. Give them commit access. People don’t rush to do stupid things when they get superpowers (unless they’re villains). Instead, they feel honoured and try to do the best they can. In other words: if you trust people, you’ll get way better outcomes than if you treat them as silly and irresponsible.

At this point we actually left the conference before it finished because… *~~~ BRAIN FRIED ~~~*

So sorry but I can’t comment on the rest of the talks.

However, I can tell you that I realised that yesterday was the Eurovision final, and since Angelina had never seen one, we had a fun time watching it, with me explaining all the politics in play. We really appreciated the impromptu booing, felt the pain of the pretty Finland guys losing to the Netherlands, and wholeheartedly agreed that

  1. the stage was one of the best parts of the show, and
  2. any act with keytars should get an extra bunch of points.


This was a good conference with WORKING WI-FI, lots of different topics—although not all of them catered to me, which I guess is OK—and a good atmosphere overall.

However, at times it felt a bit weird that there weren’t even 10 non-males in the room (that’s a guesstimate based on external looks, and it includes the three non-male speakers out of a total of 26 talks). The speakers were also quite homogeneously white males, so mmm. Often I went to the ladies’ toilet and the automatic light sensor would turn on the lights, because there were so few of us that they had turned off after a long period of no one entering. That’s unnerving. Is the ScotlandJS community really that skewed? I see wider diversity at London events, and I want to see wider diversity at other events too.

I also want to thank Pete Aitken and the rest of the team for organising this conference and being as accommodating, efficient and helpful as they were every time we asked them for anything. Thanks!

You can also read ScotlandJS 2014, day 1.

Modules in PhantomJS

PhantomJS has support for CommonJS modules, or as they call them, “user-defined modules”. This means that if your script main.js is in the same directory as another script, usefulScript.js, you can load the second one from main.js using require, in the same way you would in node.js.

However, the official post announcing this new feature says that you should use exports to define your module’s exposed functions, but that doesn’t work. The right way is to use module.exports, like this:

function a() {
    // Does a
}

function b() {
    // Does b
}

module.exports = {
    a: a,
    b: b
};
And then in main.js (which is executed with phantomjs main.js):

var useful = require('./usefulScript');

I thought you could only load npm-installed modules by referencing the module by its full path (including node_modules, and pointing at the ‘entry point’ script directly), but a super helpful commenter pointed out that maybe I could just require them as normal. And it works! Of course, the modules have to be compatible with the modules and API calls available in PhantomJS too.

First, an example referencing the whole path:

var nodeThing = require('./node_modules/nodeThing/main.js');

The same could be rewritten as:

var nodeThing = require('nodeThing');

as long as there’s a package.json file in the module that defines what the entry point of the nodeThing module is. For more information, here’s the documentation for the package.json files.
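For reference, a minimal package.json for that hypothetical nodeThing module would only need a main field pointing at the entry script:

```json
{
  "name": "nodeThing",
  "version": "1.0.0",
  "main": "main.js"
}
```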

Sorry if this seems like a silly post, but I’m quite new to PhantomJS and I’ve been unable to find this answer anywhere else, so I hope this will help other people looking for how to include or use external scripts in their PhantomJS code.