
Extensible Web Summit Berlin: notes and thoughts on some of the sessions

As mentioned in this previous post, I attended the Extensible Web Summit last week in Berlin, where I also gave a lightning talk.

We collaboratively wrote notes on the sessions using this etherpad-like installation. Question: how are we going to preserve these notes? Who owns OKSoClap?

Since writing everything about the summit in just one post would get unwieldy, here are my notes and thoughts on the sessions I attended.

Extensible web, huh?

This was held in the main room, as we felt it was important to establish what “Extensible Web” meant before we continued discussing anything else.

I am not sure we totally established what it meant, but at least I came to the conclusion that an “Extensible Web” is one where we can use its own low-level APIs to build and explain the essential HTML elements. Once you can do that, developers have the same “powers” as browser vendors, and we are on a level playing field for experimenting with new components / features / elements / patterns / APIs that might be included in the browser natively once the community has validated them.

For example, Chris Wilson brought up being able to build the <audio> tag using Web Audio (and a bunch of other APIs and technologies, of course). Domenic talked about a project that tries to build all existing HTML elements using Web Components, and how that highlighted that some/many aspects of native HTML elements are not specced.

As a developer this is the aspect that excites me most about the notion of the extensible web. Give me the same powers browsers have, so that I don’t need to wait on Vendor X to get its act together and implement the ABC API. Maybe I’ll come up with something that everyone likes, and it becomes a universal pattern, or a universal tag.

There were discussions about how to “standardise” components, and how we choose the one that is “better”. Speaking from experience, I can tell you the winning API/library/whatever will be the one that is easiest to adopt and has the best README/documentation/examples. You can write a super-efficient maths library, but if all you offer people is a JS file inside a ZIP file on a server that is not available all the time, people won’t adopt your work. Instead they will go for the maybe-less-efficient maths library hosted on GitHub, with a complete README and a bunch of examples that demonstrate how to use it.

It was mentioned that often different libraries are most popular in different regions, so how are we going to standardise a library that is very popular in China when we do not understand how it works, or even its comments, which are written in Chinese? Someone said that if the authors do not speak English, the lingua franca of the Internet, it will be their fault if we do not adopt their code.

I used to think like that, but experience has taught me that it’s a wrong view of the world. Being able to speak English on top of your mother tongue is an immense privilege not everyone has been gifted with. I have had amazing contributions to my open source projects from brilliant people who hardly speak English; it has been truly hard to communicate, but if we make an effort, we all win.

So my answer to this is to be patient, to try to write simpler English that is easier to machine-translate, to be even more patient, and to use all the tools we can to communicate. Maybe we need to invest in better machine translation tools. Maybe we need to learn other people’s languages. Maybe we need simpler, self-evident code examples.

The discussion moved on to how to involve more developers and get them on board with the Extensible Web, highlighting again the fact that those developers were not in the room, even though the work we do has an effect on their lives. Mailing lists being a scary place was mentioned again. Moderate them? Not moderate them? What about having something else instead of mailing lists?

Performance was a discussion topic too. I felt there was a bit of… FUD dropped by some of the participants, with talk of not being able to control memory in JS in as fine-grained a way as you can in C++, or of JS not being “as efficient as native code”. Show me benchmarks, and show me where this supposed slowness is affecting your particular use case.

Honestly, I have seen native code that is horribly inefficient when dealing with UI, and it’s way easier to generate leaks and mismanage memory in C than it is in JavaScript. Not to mention the developer time wasted tracking segfaults and leaks, and the fact that your code will only work on one platform unless you invest more money in porting your native code to other platforms.

Finally, there were discussions about having spec writers also write code that uses the spec, so they get an idea of whether it makes sense or not. I think this is a good idea, and it would help both with adoption and with validating the spec’s adequacy.

Some people argued that not everyone codes and we shouldn’t make it a requirement for participating in specs. I partially agree with this.

Future JS

Collaborative notes for this session.

This session was a bit of a Q&A with Domenic, as he’s the one most closely involved with the TC39 folks.

The main takeaways for me:

ES6 code breaks when you try to load it on platforms that do not support it (the syntax itself is new, so you cannot use feature detection). But you can use Traceur in the meantime! The idea being that Traceur is to JS what Sass is to CSS.
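To see why feature detection can’t help here, consider this minimal sketch of mine (not something from the session): a file containing new syntax fails at parse time, before a single statement runs, so any guard inside that file never executes.

// A pre-ES6 engine rejects this whole file with a SyntaxError while
// parsing it, so the guard below never even runs.
if (typeof window.someGuard !== 'undefined') {
  // too late: on old engines, execution never starts
}
var square = (x) => x * x; // new ES6 syntax: arrow function
console.log(square(3));    // 9, on engines that do support ES6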

Sadly, I’m slightly opposed to using transpilers: they add complexity to projects, and that extra compilation step sometimes prevents people from using or contributing to a project.

JS modules were discussed too. One reason they are not quite there yet is that they were specified but their loading mechanism was not, which is actually crucial when it comes to deduplication, dependencies, etc. So that has been holding them back. I’m glad I use Browserify, nanananana :-P

There was also a good point made about JS modules making it harder to overwrite/touch native objects. For example, if you do

import Math from 'Math';

// Mwhahaha
Math.sin = function(x) {
  return Math.cos(x);
};

then other scripts using the Math module should not be affected by your local modification. That is something that does happen right now, since Math is a global object: do the above evil reassignment and Math.sin is changed for every other part of your code. This is neat!

Parallel JS is coming! I just love doing data-intensive manipulation (an example is Animated GIF). Being able to run those calculations in parallel, using the GPU rather than Web Workers, would be amazing in terms of performance and battery life. ParallelJS transpiles some JavaScript code to something that can run on the GPU in parallel. Coincidentally, there was a talk about Parallel JS at JSConf later that weekend, but I didn’t know that at the time. I’ll cover it in my JSConf post.

Web Components

And this was the last session I attended, as I had to prepare for the Web Audio Hackday happening the next day!

Collaborative notes for this session here.

Since I had already laid out my opinions during the lightning talk, I decided to take a more observant role in this session, so I could take note of people’s concerns with regard to Web Components.

I’m a bit sad this session was slightly monopolised by the discussion/demonstration of somebody’s specific project, which did not use Web Components because it was built before Web Components ever existed. People tried to drive the conversation back to Web Components by asking why this project was not being rewritten with Web Components now (what was missing from the platform for that to happen?), but the attempts didn’t quite work.

Yes, your project is amazing. Yes, it is a nice technical achievement. But you should also understand that this is just a one-hour discussion, and you shouldn’t feel entitled to take over so much of it by talking about how much better and cooler your idea is instead of proposing solutions to make Web Components better, which is what we were there for.

Takeaways:

People want to use Web Components as an encapsulation method. They want to load these components and make them available to their HTML somehow, and then have everyone in their team use them in HTML without having to deal with configuration issues and stuff, getting instant quality code up and running.

I will refer to my previous article written after EdgeConf, and insist that Web Components are NOT a silver bullet.

I don’t see how loading a bunch of web components that declare config settings and then creating instances of objects in HTML is much better than doing the same with JavaScript, with fewer weirdnesses and less metamagic specific to your project, and the advantage that you can use jshint + uglify + browserify and all the rest of the proven and well-tested tools.

Some of the people in the room presented their work on Web Components that use non-visual tags at JSConf, so I will add my thoughts on that talk and link to them later too.

There was a mention of it being legitimate to have tags that do not produce visible output, for example the <title> tag or the various <meta> tags, which happen to help search engines or other sorts of bots. While that is a valid point, I also thought that you would need to teach bots the meaning of new custom components, and convince bot vendors to implement an interface that lets you tell them that those elements have a certain meaning. This is open to SEO abuse. Also, do search engines even trust titles and meta tags that much nowadays?

There was the customary reminder that DOM elements (including custom elements) are also JavaScript objects: you can interface with them, and they can have custom methods, attributes, setters, getters, etc. Not everything has to be totally imperative or totally declarative; you can combine the best of both worlds.
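As a minimal sketch of that point (the element and all names are made up, using the document.registerElement API we were discussing at the time):

// Custom elements are plain JS objects: they can carry methods,
// getters and setters like any other prototype.
var proto = Object.create(HTMLElement.prototype, {
  volume: {
    get: function() { return parseFloat(this.getAttribute('volume')) || 0; },
    set: function(v) { this.setAttribute('volume', v); }
  },
  mute: {
    value: function() { this.volume = 0; }
  }
});

document.registerElement('volume-knob', { prototype: proto });

// Declarative: <volume-knob volume="0.8"></volume-knob> in the HTML...
var knob = document.querySelector('volume-knob');
knob.mute(); // ...imperative: call methods on it like on any object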

Of course, data binding came up in the discussion. Several frameworks are proposing different implementations of data binding in different manners, and there doesn’t seem to be any work on implementing it as a standard. Whatever it is, it should be based on Object.observe, Domenic pointed out.
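For reference, here is a tiny sketch of what Object.observe-based binding could look like (my own assumption of the shape, not any framework’s actual implementation):

// Observe a model object and push changes into the DOM.
var model = { title: 'Hello' };
var heading = document.querySelector('h1');

Object.observe(model, function(changes) {
  changes.forEach(function(change) {
    if (change.name === 'title') {
      heading.textContent = model.title;
    }
  });
});

model.title = 'Updated!'; // the <h1> text updates asynchronously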

Then we were back to one of my favourite subjects: dependency management in Web Components. How do you make one component depend on others? One option is HTML Imports, which, as I mentioned already, are not that cool, but they can deal with HTML and CSS. Another option is just using npm + browserify. This is what I’ve been using for my audio UI component experiments, where I use custom elements to build complex GUIs for virtual instruments.
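Roughly, that pattern looks like this (module and tag names are made up for illustration):

// node_modules/audio-knob/index.js: a component as a CommonJS module
module.exports = function registerAudioKnob() {
  var proto = Object.create(HTMLElement.prototype);
  return document.registerElement('audio-knob', { prototype: proto });
};

// app.js, bundled with browserify: dependencies are resolved via npm
var registerAudioKnob = require('audio-knob');
registerAudioKnob();
document.body.appendChild(document.createElement('audio-knob'));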

There is an often-overlooked issue with document.registerElement: once you register a custom element with a certain tag name, that name cannot be registered again. So you can’t use two versions of the same element if they both try to register the same tag name.
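In code, the issue looks roughly like this:

var proto = Object.create(HTMLElement.prototype);
document.registerElement('x-button', { prototype: proto }); // fine
// A second registration under the same name throws, so two versions
// of 'x-button' cannot coexist on the same page.
document.registerElement('x-button', { prototype: proto });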

All in all: this is such a new technology that we don’t know how to use it, what the best use cases are, or what the best design patterns are, and we’re confused.

So, in other words: business as usual. We need to keep using and experimenting with it, polishing its rough edges until it’s all smooth and nice.

“Just turn it into a node module”, and other mantras Edna taught me

Here are the screencast and the write-up of the talk I gave today at One-Shot London NodeConf. As usual, I diverged a bit from the initial narrative, and I forgot to mention a couple of topics I wanted to highlight, but I have had a horrible week, and considering that, this turned out pretty well!

It was great to meet so many interesting people at the conference and seeing old friends again! Also now I’m quite excited to hack with a few things. Damn (or yay!).


ScotlandJS 2014 – day 2

(With a bit of delay, because socialising happened)

I am typing this at Brew Lab, having my last flat white in Edinburgh (total count in two weeks: about 7, I think). They aren’t paying me for this free advertising, but I want to say that it’s a cool hipster place literally and metaphorically up my street, the coffee is quite good, and not only is the wi-fi free and working, but they also have power sockets. So there you go.

Keynote by Mikeal Rogers

Yesterday we barely made it in time for Mikeal Rogers’ keynote, and I am certainly glad we rushed! I didn’t take many notes because I was mesmerised by his lighthearted telling of the evolution of his own code, and how the improvements in node.js have brought benefits not only for node.js itself, but also for the modules that have evolved alongside it.

The whole talk is interesting to watch, but two things stood out for me as reasons why the node.js module ecosystem is so rich and keeps growing:

  1. simplicity of the module interface:
    module.exports = function() { /* ... */ }

    –he explicitly insisted on exporting a function and not a plain object

  2. simplicity of the callback signature (both conventions are put together in the sketch after this list):
    function(err, res) { /* ... */ }
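Put together, the two conventions look like this (my own toy example, not Mikeal’s, with made-up file names):

// delay.js: the module exports a single function...
module.exports = function(ms, callback) {
  setTimeout(function() {
    // ...and callbacks take the error first, the result second
    callback(null, 'waited ' + ms + ' ms');
  }, ms);
};

// main.js
var delay = require('./delay');
delay(100, function(err, res) {
  if (err) throw err;
  console.log(res);
});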

I am pretty sure there was a third thing, but he moved on to some other topic, or I got distracted, and I forgot about it. So maybe it wasn’t that important!

“Refactoring legacy code” by Sugendran Ganess

What stood out for me: you want to have as many tests as possible and aspire to as much code coverage as possible, so that when/if you refactor and inadvertently change the existing behaviour of the system, you notice before it’s too late.

Also: you want your changes to land in master as soon as possible.

Finally he pointed to a couple of tools I want to check out: istanbul.js, platojs.

“Beyond the AST – A Fr**ework for Learnable Programming” by Jason Frame

This was totally unlike what I was expecting. The AST part tricked me into believing we were going to hear about tree building and languages and grammars, but it was actually an enjoyable examination of how we generally teach programming and where we are doing it wrong. He said that natural language is taught incrementally, and once you have a base you can start “getting creative”. In contrast, programming is often taught by throwing a list of concepts and constructs in people’s faces and expecting them to get creative instantly. Which doesn’t quite work.

He also showed a prototype (?) environment he’d built that would fade out blocks of code that weren’t in the current scope (according to where the cursor was), so new programmers could notice, or get a feel, that those things weren’t accessible. That would allow him to “remove most sources of frustration” for beginners.

This environment was also able to execute the program at a slower pace, so you could see the drawings happen on the screen one at a time instead of all in one go. This reminded me of learning programming with Logo and having to wait for the fill operation to complete, because it would raster each line horizontally, from left to right, pixel by pixel. And it wasn’t fast :P (Incidentally, I discussed whether slow computers might be better for learning a couple of years ago.)

Some interesting lessons here, since my team is focused on getting more people to build apps, and we spend lots of time analysing the current offering of materials and tools, trying to identify where the pain points are, i.e. where the frustrations occur.

“High Performance Visualizations with Canvas” by Ryan Sandor Richards

This was a nice talk that delivered what the title promises, succinctly and to the point. He showed how to build graphs using static and realtime data, and, importantly, how to transition nicely between existing and incoming data, even when various latencies occur (“networks are going to be networks”). Some of the interesting suggestions that not everyone might know about:

  • Do not redraw static content in each frame. Draw it once at the beginning and keep it on top with a higher z-index. In his example he would layer an SVG element (holding the static content) on top of a canvas (holding the changing chart).
  • Image copying is super fast. So if you have already drawn part of the content and can reuse it in the next frame, do it! Copy pixels!

However, I think he “cheated”. All the graphs and examples he showed were based on squares, and new data would just come in from the right and slide left, so bringing in new data was just a matter of copying the old canvas into the “display” canvas, shifted one unit to the left, and drawing the new data. When you don’t have much overlapping, it’s “easy” to increase performance “dramatically”. I was perhaps expecting how to get crazy performance with complex visualisations where you can’t use this trick, but he didn’t cover that. Instead, he acknowledged that complex d3 visualisations would turn your computer “into a nuclear reactor”. A problem to solve, or a WebGL+GLSL job?
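For what it’s worth, the sliding trick boils down to a single drawImage call copying the canvas onto itself (a sketch under my own assumptions, not his actual code):

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
var step = 10; // horizontal pixels per incoming data point

function pushSample(drawNewColumn) {
  // Shift the existing chart left by one step: copy pixels, don't redraw
  ctx.drawImage(canvas,
    step, 0, canvas.width - step, canvas.height, // source rectangle
    0, 0, canvas.width - step, canvas.height);   // destination rectangle
  // Clear the stale rightmost column and draw only the new data there
  ctx.clearRect(canvas.width - step, 0, step, canvas.height);
  drawNewColumn(ctx, canvas.width - step);
}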

He also released a library for doing all these things: Epoch.

“Cylon.js: The JavaScript Evolution Of Open Source Robotics” by Ron Evans and his friend whose name I didn’t make a note of

This was a really entertaining, amusing and awesome talk! Ron was a very eloquent speaker who gave plenty of brilliant quotes for me to tweet, and then they also did lots of demos to SHOW, not tell, that their JS framework lets you write the same JS code once and run it on a myriad of physical devices: Arduino, Tessel, some sort of round robots that would move by themselves, Pebble, and even drones and that sort of controller for “reading brain waves”. Sorry about not making a note of all of them; I was so amused by the display that I totally forgot.

This talk left me excited about building something with robots, which I guess is a measure of success. Good job!

“Make: the forgotten build tool” by James Coglan

An excellent introduction to make that actually answered the questions I wish someone had answered for me about 10 years ago. It really tempts me to go back to make, except I know that Windows users always have issues with it, so we have to write our build systems in node, so it’s something they can execute without having to install an additional tool (cygwin or whatever). Ah, I’m torn, because I really like make + bash…

“The Realtime Web: We’re Doing it Wrong” by Jonathan Martin

So: Jonathan was a good speaker, and the slides were fascinating to watch, because they had plenty of animations that, at that point in the day, seemed really cool to look at. However, after the talk I still don’t know what we are actually doing wrong. I was expecting some sort of utterly mind-blowing revelation, but it never happened.

Lunch

With Mikeal Rogers, Angelina Fabbro and Mike MacCana, at a place with lots of pork meat and lots of drawings and watercolours of happy pigs on the walls. There’s absolutely nothing to complain about here.

Yum.

“No more `grunt watch`: Modern build workflows with Broccoli” by Jo Liss

Revelation time: for the longest time, I thought Broccoli was a joke response to the myriad of task runners such as Grunt, Gulp, etc. So imagine my surprise when I learned that it was for real!

Jo went straight to the point and showed why Broccoli is better than Grunt in specific domains, demonstrating how to use Broccoli to build a project. The syntax seemed way more imperative than Grunt’s, and way easier to understand. The core concept in Broccoli is trees of files, which, I confirmed with her afterwards, is why Broccoli is called Broccoli. I mean, just look at one and see how it branches into smaller broccolis!
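For flavour, here is roughly what a minimal Brocfile looks like (my own sketch, not her demo; treat the exact plugin signature as an assumption):

// Brocfile.js: the build is described as transformations over trees
var mergeTrees = require('broccoli-merge-trees');

// In Broccoli, a plain directory path already counts as a tree
var html = 'app/html';
var assets = 'app/assets';

// Whatever tree this file exports is the build output
module.exports = mergeTrees([html, assets]);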

“Build Your Own AngularJS” by Tero Parviainen

Being honest: Tero didn’t catch my attention during the first few minutes, so I zoned out and kept thinking about how to improve canvas performance. Related: this article about graphics performance in Firefox OS/mobile devices.

“Don’t Go Changin'” by Matt Field

This was a disappointment to me: I thought he’d show a clever magic hack for working with immutable objects in JavaScript, but instead he started describing how he’d do things in Clojure, where those things are actually native. I tried to stay, but my brain wasn’t patient enough for an on-the-spot introduction to Clojure, so I left. But…

Impromptu conversation in the hallway!

… I happened to find Jo Liss and some other guy (whose name I didn’t catch, sorry) in the hallway, and somehow we got into a conversation about how to onboard new contributors to your project, or even how to get someone to contribute at all. One suggestion was to encourage newcomers to start by fixing things in the documentation, a simple “task” that shouldn’t break the code. I described how some teams do it at Mozilla: there’s the Bugs Ahoy! website, which keeps track of bugs marked as “good first bug” and bugs with assigned mentors. Also, at a past work week my team had a similar conversation, where we agreed that having an explicit, visible list of issues/to-do items makes it easy for people to actually get their hands dirty.

Also: how do you steer a project in a direction that makes sense without frustrating contributors and/or maintainers, and without falling into the Feature Creep pit either? One suggestion was to make it very clear what the project is about. Don’t try to solve all the issues; rather, do one thing, and do it well.

And empower your contributors. Once someone steps in and submits something valuable, treat them as an equal peer. Give them commit access. People don’t rush to do stupid things when they get superpowers (unless they’re villains). Instead, they feel honoured and try to do the best they can. In other words: if you trust people, you’ll get way better outcomes than if you assume they’re all silly and irresponsible.

At this point we actually left the conference before it finished because… *~~~ BRAIN FRIED ~~~*

So sorry but I can’t comment on the rest of the talks.

However, I can tell you that I realised that yesterday was the Eurovision final, and since Angelina had never seen one, we had a fun time watching it, with me explaining all the politics in play. We really appreciated the impromptu booing, felt the pain of the pretty Finland guys losing to the Netherlands, and wholeheartedly agreed that

  1. the stage was one of the best parts of the show, and
  2. any act with keytars should get an extra bunch of points.

Conclusion

This was a good conference with WORKING WI-FI, lots of different topics (although not all of them catered to me, which I guess is OK), and a good atmosphere overall.

However, at times it felt a bit weird that there weren’t even 10 non-males in the room (that’s a guesstimate based on external looks, and includes the three non-male speakers out of a total of 26 talks). The speakers were also quite homogeneously white males, so mmm. Often I went to the ladies’ toilet and the automatic light sensor would turn on the lights, because there were so few of us that they had turned off after a long period of no one entering. That’s unnerving. Is the ScotlandJS community really that skewed? I see wider diversity at London events, and I want to see wider diversity at other events too.

I also want to thank Pete Aitken and the rest of the team for organising this conference and being as accommodating, efficient and helpful as they were every time we asked them for anything. Thanks!

You can also read ScotlandJS 2014, day 1.

Modules in PhantomJS

PhantomJS has support for CommonJS modules, or as they call them, “user-defined modules”. This means that if your script main.js is in the same directory as another script, usefulScript.js, you can load the latter from main.js using require, the same way you would in node.js.

However, the official post announcing this new feature says that you should use exports to define your module’s exposed functions, and that doesn’t work. The right way to do it is to use module.exports, like this:

function a() {
    // Does a
}

function b() {
    // Does b
}

module.exports = {
    a: a,
    b: b
};

And then in main.js (which is executed with phantomjs main.js):

var useful = require('./usefulScript');
useful.a();
useful.b();

I thought you could only load npm-installed modules by referencing each module by its full path (including node_modules, and pointing at the ‘entry point’ script directly), but a super-helpful commenter pointed out that maybe I could just require them as normal. And it works! Of course, the modules must only rely on the modules and API calls available in PhantomJS.

First an example referencing the whole path:

var nodeThing = require('./node_modules/nodeThing/main.js');
nodeThing.doSomething();

The same could be rewritten as:

var nodeThing = require('nodeThing');
nodeThing.doSomething();

as long as there is a package.json file in the module that defines what the entry point of the nodeThing module is. For more information, here’s the documentation for package.json files.
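For illustration, the relevant bit is the main field; a minimal package.json for the hypothetical nodeThing module would be:

{
  "name": "nodeThing",
  "version": "1.0.0",
  "main": "main.js"
}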

Sorry if this seems like a silly post, but I’m quite new to PhantomJS and I was unable to find this answer anywhere else, so I hope it helps other people looking for how to include or use external scripts in their PhantomJS code.