Category Archives: Events

Extensible Web Summit Berlin: notes and thoughts on some of the sessions

As mentioned in this previous post, I attended the Extensible Web Summit in Berlin last week, where I also gave a lightning talk.

We collaboratively wrote notes on the sessions using this etherpad-like installation. Question: how are we going to preserve these notes? And who owns OKSoClap?

Since writing everything about the summit in just one post would get unwieldy, these are my notes and thoughts on the sessions I attended after the lightning talks.

Extensible web, huh?

This session was held in the main room, as we felt it was important to establish what “Extensible Web” meant before we continued discussing anything else.

I am not sure we fully established what it meant, but at least I came to the conclusion that an “Extensible Web” is one where we can use its own low-level APIs to build and explain the essential HTML elements. Once you can do that, developers have the same “powers” as browser vendors, and we are on a level playing field for experimenting with new components / features / elements / patterns / APIs that might be included in the browser natively once the community has validated them.

For example, Chris Wilson used the example of being able to build the <audio> tag using Web Audio (and a bunch of other APIs and techs, of course). Domenic talked about a project that tries to build all existing HTML elements using Web Components, and how that highlighted that some (or many) aspects of native HTML elements are not specced.
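To give a flavour of what “explaining <audio> with lower-level APIs” means, here is a very stripped-down sketch of a toy audio element built on Web Audio. It ignores buffering, seeking, events and everything else that makes the real thing hard, and MiniAudio is of course a made-up name:

// A toy “audio element”, explained in terms of Web Audio primitives
function MiniAudio() {
  this.context = new AudioContext();
  this.buffer = null;
}

// Roughly what setting src + preload does: fetch and decode the audio
MiniAudio.prototype.load = function (url, onReady) {
  var self = this;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    self.context.decodeAudioData(xhr.response, function (buffer) {
      self.buffer = buffer;
      if (onReady) onReady();
    });
  };
  xhr.send();
};

// Roughly what play() does: wire the decoded buffer to the speakers
MiniAudio.prototype.play = function () {
  var source = this.context.createBufferSource();
  source.buffer = this.buffer;
  source.connect(this.context.destination);
  source.start(0);
};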

As a developer this is the aspect that excites me most about the notion of the extensible web. Give me the same powers browsers have, so that I don’t need to wait on Vendor X to get its act together and implement the ABC API. Maybe I’ll come up with something that everyone likes, and it becomes a universal pattern, or a universal tag.

There were discussions about how to “standardise” components, and how we choose the one that is “better”. Speaking from experience, I can tell you the winning API/library/whatever will be the one that is easiest to adopt and has the best README/documentation/examples. You can write a super efficient maths library, but if all you offer people is a JS file in a ZIP on a server that is not available all the time, nobody will adopt your work. Instead they will go for the maybe-less-efficient maths library hosted on GitHub with a complete README and a bunch of examples that demonstrate how to use it.

It was mentioned that different libraries are often most popular in different regions, so how are we going to standardise a library that is very popular in China when we do not understand how it works, or even the comments, which are written in Chinese? Someone said that if the authors do not speak English, the lingua franca of the Internet, it is their fault if we do not adopt their code.

I used to think like that, but experience has taught me that it’s a wrong view of the world. Being able to speak English in addition to your mother tongue is an immense privilege not everyone has been gifted with. I have had amazing contributions to my open source projects from brilliant people who hardly speak English; it’s been truly hard to communicate, but if we make an effort, we all win.

So my answer to this is to be patient, and to try to write simpler English which is easier to machine-translate, and be even more patient, and try to use all the tools we can to communicate. Maybe we need to invest in better machine translation tools. Maybe we need to learn other people’s languages. Maybe we need simpler, evident code examples.

The discussion moved on to how to involve more developers and get them on board, i.e. involved in the Extensible Web and all that, highlighting again the fact that those developers were not in the room, even though the work we do has an effect on their lives. Mailing lists being a scary place was mentioned again. Moderate them? Not moderate them? What about having something else instead of mailing lists?

Performance was a discussion topic too. I felt there was a bit of… FUD dropped by some of the participants, with talk of not being able to control memory in JS in a fine-grained way as you can in C++, or of JS not being “as efficient as native code”. Show me benchmarks, and show me where this supposed slowness is affecting your particular use case.

Honestly, I have seen native code that is horribly inefficient when dealing with UI, and it’s way easier to create leaks and mismanage memory in C than it is in JavaScript. Not to mention the developer time wasted tracking segfaults and leaks, and the fact that your code will only work on one platform unless you invest more money in porting your native code to more platforms.

Finally there were discussions about having spec writers also write code that uses their specs, so they get an idea of whether the spec makes sense or not. I think this is a good idea; it would help both in driving adoption of the spec and in validating its adequacy.

Some people argued that not everyone codes and we shouldn’t make it a requirement for participating in specs. I partially agree with this.

Future JS

Collaborative notes for this session.

This session was a bit of a Q&A with Domenic, as he’s the one most closely involved with the TC39 folks.

The main takeaways for me:

ES6 breaks when you try to load ES6 code on platforms that do not support it: because the syntax itself is new, you cannot use feature detection. But you can use Traceur in the meantime! The idea is that Traceur is to JS what SASS is to CSS.
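To make the feature detection point concrete, here is a minimal sketch (the arrow function is just a stand-in for any ES6-only syntax):

// Detecting a new API is easy: the script still parses on old engines.
if (typeof Promise !== 'undefined') {
  // safe to use Promises here
}

// New *syntax* is different: a file containing `var f = x => x * 2;`
// fails to parse on an ES5 engine before a single line of it runs, so
// in-file detection is impossible. The workaround is to defer parsing
// to runtime with eval and catch the SyntaxError:
function supportsArrowFunctions() {
  try {
    eval('(function () { return x => x; })');
    return true;
  } catch (e) {
    return false;
  }
}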

Sadly I’m slightly opposed to using transpilers because they add complexity to projects, and that extra compilation step sometimes prevents people from using/contributing to a project.

JS modules were discussed too. One reason they are not quite there yet is that the modules themselves were specified but their loading mechanism was not, which is actually crucial when it comes to deduplication, dependencies, etc. So that has been holding them back. I’m glad I use Browserify, nanananana :-P
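To make that gap concrete, a small sketch (the './utils' module is hypothetical):

// The ES6 module *syntax* was specified:
//   import { clamp } from './utils';
// What was not specified yet is how './utils' gets located, fetched
// and deduplicated (the Loader). Browserify sidesteps the problem by
// resolving CommonJS requires at build time into a single bundle:
var clamp = require('./utils').clamp;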

There was also a good point made about JS modules making it harder to overwrite/touch native objects. For example, if you do

import Math from 'Math';

// Mwhahaha
Math.sin = function (x) {
  return Math.cos(x);
};

then other scripts using the Math module should not be affected by your local modification. That is something that does happen today, because Math is a global object: do the evil reassignment above, and Math.sin is broken in every other part of your code. This is neat!

Parallel JS is coming! I just love doing data-intensive manipulation (Animated GIF is one example). Being able to run those calculations in parallel, using the GPU rather than Web Workers, would be amazing in terms of performance and battery life. ParallelJS transpiles some JavaScript code into something that can run on the GPU in parallel. Coincidentally, there was a talk about Parallel JS at JSConf later that weekend, but I didn’t know that at the time. I’ll speak about it in my JSConf post.

Web Components

And this was the last session I attended, as I had to prepare for the Web Audio Hackday happening the next day!

The collaborative notes for this session are here.

Since I had already laid out my opinions during the lightning talk, I decided to take a more observant role in this session, so I could take note of people’s concerns regarding Web Components.

I’m a bit sad this session was somewhat monopolised by the discussion/demonstration of somebody’s specific project, which did not use Web Components because it was built before Web Components ever existed. People tried to steer the conversation back to Web Components by asking why the project had not been rewritten with Web Components now (what was missing from the platform for that to happen?), but the attempts didn’t quite work.

Yes, your project is amazing. Yes, it is a nice technical achievement. But you should also understand that this is just a one-hour discussion, and you shouldn’t feel entitled to take over so much of it by talking about how much better and cooler your idea is, instead of proposing solutions to make Web Components better, which is what we were there for.

Takeaways:

People want to use Web Components as an encapsulation method. They want to load these components and somehow make them available to their HTML, and then have everyone on their team use them in HTML without having to deal with configuration issues and the like, getting instant quality code up and running.

I will refer to my previous article written after EdgeConf, and insist that Web Components are NOT a silver bullet.

I don’t see how loading a bunch of Web Components that declare config settings and then creating instances of objects in HTML is much better than doing the same with JavaScript, with fewer weirdnesses and less metamagic specific to your project, and with the advantage that you can use jshint + uglify + browserify and all the rest of the proven and well-tested tools.

Some of the people in the room presented their work on Web Components that use non-visual tags at JSConf, so I will add my thoughts on that talk and link to them later too.

There was a mention of it being legit to have tags that do not produce visible output (for example the <title> tag, or the various <meta> tags, which happen to help search engines or any other sort of bots). While that was a valid point, I also thought that you need to teach bots the meaning of new custom components, and you also need to convince bot authors to implement an interface that lets you tell them that those elements have a certain meaning. This is open to SEO abuse. And do search engines even trust titles and meta tags that much nowadays?

There was the customary reminder that DOM elements (including custom elements) are also JavaScript objects: you can interface with them, and they can have custom methods, attributes, setters, getters, etc. Not everything has to be totally imperative or totally declarative; you can combine the best of both worlds.
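As a minimal sketch of that, using the document.registerElement API we had at the time (the x-player element and all of its members are made up for illustration):

var proto = Object.create(HTMLElement.prototype);

// A custom method...
proto.play = function () {
  console.log('playing', this.src);
};

// ...and a custom getter/setter pair, backed by an attribute
Object.defineProperty(proto, 'src', {
  get: function () { return this.getAttribute('src'); },
  set: function (value) { this.setAttribute('src', value); }
});

// registerElement returns a constructor for the new element
var XPlayer = document.registerElement('x-player', { prototype: proto });

// Declarative use: <x-player src="song.ogg"></x-player> in HTML.
// Imperative use: the same element, driven entirely from JS.
var player = new XPlayer();
player.src = 'song.ogg';
document.body.appendChild(player);
player.play(); // logs: playing song.ogg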

Of course data binding came up in the discussion. Several frameworks are proposing different implementations of data binding in different manners, and there doesn’t seem to be any work on implementing it as a standard. Whatever it ends up being, it should be based on Object.observe, Domenic pointed out.
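For reference, a bare-bones sketch of what Object.observe-based binding looks like, as the API was proposed at the time (the model object is hypothetical):

var model = { volume: 0.5 };

// Change records are delivered asynchronously, after the mutation
Object.observe(model, function (changes) {
  changes.forEach(function (change) {
    // a data binding layer would update the bound DOM nodes here
    console.log(change.name, 'changed from', change.oldValue,
                'to', model[change.name]);
  });
});

model.volume = 0.8; // later logs: volume changed from 0.5 to 0.8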

Then we were back to one of my favourite subjects: dependency management in Web Components. How do you make one component depend on others? One option is HTML Imports, which, as I mentioned already, are not that cool, but can at least deal with HTML and CSS. Another option is just using npm + Browserify. This is what I’ve been using for my audio UI component experiments, where I used custom elements to build complex GUIs for virtual instruments.
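A sketch of that approach, with hypothetical module and element names: each custom element is a CommonJS module that requires its dependencies and exports its constructor.

// oscillator.js: a custom element as a CommonJS module
var Knob = require('./knob'); // another custom element module;
                              // Browserify resolves and deduplicates
                              // this at build time

var proto = Object.create(HTMLElement.prototype);

proto.createdCallback = function () {
  // compose the GUI out of the dependency's elements
  this.appendChild(new Knob());
};

// export the constructor so other modules can depend on this element
module.exports = document.registerElement('x-oscillator', {
  prototype: proto
});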

There is an often-overlooked issue with document.registerElement: once you register a custom element under a certain tag name, that name cannot be registered again. So you can’t use two versions of the same element if they both try to register under the same tag name.
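In code (x-knob is a made-up name):

var protoV1 = Object.create(HTMLElement.prototype);
var protoV2 = Object.create(HTMLElement.prototype);

document.registerElement('x-knob', { prototype: protoV1 }); // fine
document.registerElement('x-knob', { prototype: protoV2 }); // throws:
// the 'x-knob' name is taken, so a second version cannot register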

All in all: this is such a new technology that we don’t know how to use it yet, or what the best use cases and design patterns are, and we’re confused.

So, in other words: business as usual. We need to keep using and experimenting with it, polishing its raw edges until it’s all smooth and nice.

Extensible Web Summit Berlin 2014: my lightning talk on Web Components

I was invited to attend the Extensible Web Summit held in Berlin last week and give a lightning talk, as part of the whole JSFest.berlin series of events.

The structure of the event consisted of a series of introductory lightning talks to “set the tone”, followed by a sort of unconference where people would suggest topics to talk about and then we would build a timetable collaboratively.

My lightning talk

The topic for my talk was… Web Components. Which was quite interesting, because lately I have been working with (and fighting against) various implementations at various levels of completeness at the same time, so I definitely had some things to add!

I didn’t want people to get distracted by slides (including myself) so I didn’t have any. Exciting! Also challenging.

These are the notes I more or less followed for my minitalk:

When I speak to the average developer they often cannot see any reason to use Web Components

The question I’m asked 99% of the time is “why, when I can do the same with divs? with jQuery even? what is the point?”

And that’s probably because they are SO hard to understand

  • The specs (all four of them) are really confusing and dense. Four specs you need to understand to fully grasp how everything works together.
  • Explainer articles too often drink the Kool-Aid, so readers are like: fine, this seems amazing, but what can it do for me, and why should I use any of this in my projects?
  • Libraries/frameworks built on top of Web Components hide too much of the underlying complexity by adding complexity of their own, further confusing matters (perhaps they are trying to do too many things at the same time?). Often, people cannot even distinguish between Polymer and Web Components: are they the same? Which one does what? Do I need Polymer? Or only some parts of it?
  • Are we supposed to use Web Components for visual AND non-visual components? Where do you draw the line? How can you explain to people that they let you write your own HTML elements, when the next thing you do is use an invisible tag that has no visual output but performs some ~~~encapsulated magic~~~?

And if they are supposed to be a new encapsulation method, they don’t play nicely with established workflows–they are highly disruptive both for humans and computers:

  • It’s really hard to parse in our minds what the dependencies of a component (described in an HTML import) and all of its transitive dependencies (described in possibly multiple nested HTML imports) are. Taking over a component-based project can easily get horrible.
  • HTML Imports totally break existing CSS/JS compression/linting chains and workflows.
  • Yes, there is vulcanize, a tool from Polymer that amalgamates all the imports into a couple of files, but it still doesn’t feel quite there: we get an HTML and a CSS file that still need a polyfill to be loaded.
  • We need this polyfill to use HTML Imports, and we will need it for a while, and it doesn’t work with file:/// URLs because it makes a hidden XMLHttpRequest that no one expects. In contrast, we don’t need a polyfill for loading JS and CSS locally.
  • HTML Imports create a need for tools that parse the imports and identify the dependencies and essentially… pretend to be a browser? Doesn’t this smell like duplication of effort? And why two dependency loading systems (ES6 module imports and HTML Imports)?

There’s also a problem with hiding too much complexity and encapsulating too much:

  • Users of “third party” Web Components might not be aware of the “hell” they are conjuring in the DOM when said components are “heavyweight” but also encapsulated, so it’s hard to figure out what is going on.
  • It might also make components hard to extend: you might have a widget that almost does all you need except for one thing, but it’s all encapsulated, and ooh, you can’t hook into any of the things it does, so you have to rewrite it all.
  • Perhaps we need to discuss more about use cases and patterns for writing modular components and extending them.

It’s hard to make some things degrade nicely, or even just work at all, when the specs are not fully implemented on a platform, especially the CSS side of the spec

  • For example, the Shadow DOM selectors are not implemented in Firefox OS yet, so a component that uses Shadow DOM needs doubled-up styling selectors and some weird tricks to work both in Gaia (Firefox OS’ UI layer) and on platforms that do have support for Shadow DOM

And not directly related to Web Components, but in relation to spec work and disruptive browser support for new features:

  • Spec and browser people live in a different bubble, where you can totally rewrite things from one day to the next. Throw everything away! Change the rules! No backwards compatibility? No problem!
  • But we need to be considerate with “normal” developers.
  • We need to understand that most people cannot afford to totally change their toolchain or workflows, or they just do not understand what we are getting at (for the reasons above)
  • Then, if they try to understand, they go to the mailing lists and they see those fights and all the politics and… they step back, or disappear for good. It’s just not a healthy environment. I am subscribed to several API lists and I only read them when I’m on a plane, so I can’t go and immediately reply.
  • If the W3C, or any other standardisation organisation, wants to attract “normal” developers to get more diverse input, they/we should start by being respectful to everyone. Don’t try to show everyone how superclever you are. Don’t be a jerk. Don’t scare people away, because then only the loud ones stay, while the quieter, shy people, or the people who have more urgent matters to attend to (such as, you know, keeping a business website working even if it’s not using the latest and greatest API) will just leave.
  • So I want to remind everyone that we need to be considerate of each other’s problems and needs. We need to make an effort to speak other people’s language, and technical people especially need to do that. Confusing or opaque specs only lead to errors and misinterpretations.
  • We all want to make the web better, but we need to work on this together!

Thanks

With thanks to all the people whose brains I’ve been picking lately on the subject of Web Components: Angelina, Wilson, Francisco, Les, Potch, Fred and Christian.

Functional JS, IRC servers and the internet of things

I attended the London Functional JS meetup last Wednesday. James Coglan gave a nice walkthrough of the approach he’s been experimenting with for writing functional JS. This doesn’t mean just using Array.map and other “functional JS” tools, but going way further and encapsulating data into unified types (for example Promises, or Streams), so that nothing can be synchronous, or maybe asynchronous, or both, sometimes, anymore (and so we can’t release Zalgo). Transducers also showed up.
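For those who haven’t met Zalgo: the problem is an API that calls back synchronously on some paths and asynchronously on others, so callers can’t rely on ordering. A minimal sketch, with a hypothetical doRequest helper:

// Releases Zalgo: synchronous when cached, asynchronous otherwise
var cache = {};
function fetchData(url, callback) {
  if (cache[url]) {
    callback(cache[url]); // fires before fetchData even returns
  } else {
    doRequest(url, function (data) { // fires on a later tick
      cache[url] = data;
      callback(data);
    });
  }
}

// Wrapping the value in a Promise makes delivery uniformly async:
function fetchDataSafe(url) {
  if (cache[url]) return Promise.resolve(cache[url]);
  return new Promise(function (resolve) {
    doRequest(url, function (data) {
      cache[url] = data;
      resolve(data);
    });
  });
}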

I must confess I am not really an expert in any of these things, which is precisely why I went. I didn’t get all the concepts he discussed, and I’m perfectly fine with that: it is food for thought for when I come back from Berlin. I like having undigested ideas mulling at the back of my brain; then, at some unpredictable point, they all come together and voilà, I have the solution to some woe that has been chasing me for weeks.

After the presentation proper there was time for a coding dojo. James had a skeleton starter for an IRC client written in a functional manner, and we had to implement more commands. Sadly I hadn’t brought my laptop, because I didn’t know there would be a dojo, and I generally don’t find value in carrying a laptop to a social meetup (I have my phone for tweeting or taking notes anyway). But there were some more people I knew from “the London JS scene” (Howard, Karolis), so we “tri-pair programmed” on Karolis’ laptop. Or might I more accurately say that they did most of the work and typing while I threw random ideas at them based on my limited knowledge of functional programming and CoffeeScript?

Anyway, it was fun, and it rekindled an idea that has been lingering in my mind since I attended NodeConf London: this notion of using IRC to have services communicate between themselves. Yes, you can connect them via a socket, or HTTP requests, or whatever invisible protocol, but when someone mentioned using Jabber as a protocol for connecting “things” in the Internet of Things, my brain somehow transfigured that into having these “things” use IRC instead, and I became interested in the idea of poking into the realtime conversation between machines. I’m not really sure what they would be talking about or what kind of messages they’d be exchanging, but it would be weirdly interesting to program them to reprogram themselves as they learn, and see where they would go when they all output their stuff to this common data bus, i.e. an IRC channel. And how would they react to whatever humans said to them?
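A minimal sketch of what one of these chatty “things” could look like, using the irc package from npm (the server, channel and sensor reading are all made up):

var irc = require('irc');

var client = new irc.Client('irc.example.net', 'sensor-01', {
  channels: ['#things']
});

// Periodically announce a (fake) reading on the shared data bus
setInterval(function () {
  var reading = (20 + Math.random() * 5).toFixed(1);
  client.say('#things', 'temperature=' + reading);
}, 60 * 1000);

// ...and eavesdrop on what other things (or humans) are saying
client.addListener('message#things', function (from, text) {
  console.log(from + ' says: ' + text);
});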

I tried playing with hubot a few months ago in a bit of spare time I had (about 6 hours), but I didn’t quite understand how to access “its brain”: is it just a massive set of key-value pairs? And how long does it persist?

There was also the issue of it being written in CoffeeScript, and of how inadequate and oldschool its module management system was, but I could deal with those if I closed my eyes and played the pragmatic game. Perhaps there are better bot-that-speaks-to-IRC alternatives, but I don’t know any; ideas are welcome!

I also envision being able to visualise any of these services’ brains while they are running, learning and remorphing their fact base. I imagine it would look something like this, which has nothing to do with that, but it’s how I imagine it:

Now to more Web Audio workshop preparations :D

Audio for the masses

The video above is from LXJS, the Lisbon JavaScript conference, which happened more than a month ago. I gave this talk again last week at VanJS, so I decided it was time for that belated write-up.

If you want to follow along, or play with the examples, the slides are online and you can also check out the code for the slides.

As I’ve given this talk several times, I keep changing bits of the content depending on what the audience seems most interested in, and I also sometimes improvise stuff that I don’t remember when writing the final write-up. So if you were at any of the talks and see that something’s missing or different, now you know why! I’ve also added a section at the end with questions I’m frequently asked; hope that’s useful for you too.


Berlin Web Audio Hack Day

As I hinted in my previous Berlin-related post, I’m going to be participating in another event from JSFest.berlin (note this is a real domain!).

The event is the Web Audio Hack Day, and I’m told that it’s sold out already (!!!) but you can try and add yourself to the waiting list just in case someone can’t attend.

This will be held at the SoundCloud office, so we’ve been promised an amazing sound system. We’ll have to produce something worthy!

CHALLENGE ACCEPTED.