Extreme decoupling or all-as-a-module

I opened my laptop in the morning and found one of my open tabs in Nightly was for Vue.js. I don’t even remember how I ended up there. Was I reading about frameworks? Did anyone send me the link? Who knows!

But I was curious. I am not a megafan of frameworks, but I like looking at them. One, because their idioms are often adopted by other developers, so it’s good to be aware of where things are going. And two, because frameworks do lots of “magic” in the background, and I always want to know how they implement their “magic”—maybe I’ll want to adopt some of it!

So instead of closing the tab, I perused the page. It has a virtual DOM as React does, but they seem to take great pride in their overall minimalism (small file size, little intrusiveness). The examples are amongst the most readable I’ve found for a framework when it comes to the JavaScript API; the HTML directives feel as alien as in most frameworks.

Later I was discussing this strange incident with friends (“I found an open tab in my browser—do you think this is a signal from the universe that I should get into Vue.js?”) and Irina also highlighted that Vue.js “components” might be simpler to build than the equivalent in React, and also less coupled.

This evolved into talking about The Dream:

You know what the dream is? Have everything be an npm package that I can plug in to any framework I like. And everything is packages packages packages

📦.js

[Meme: Oprah giving free packages away to everyone]
You get a package! And you get a package! And you get a package! And you get a package! And you get a package… everyone gets a package!

(Irina demanded an Oprah-themed meme)

And of course this reminded me of earlier conversations with chameleonic Jen about modularising everything and maximising reuse. She would propose, for example, abstracting a card game into separate modules: one for handling the rendering, another for handling card games in an abstract way, and another for handling a specific type of game. That way you could build multiple games just by providing an implementation for the specific game. (Games are notoriously not often built this way.)

Likewise, Aria talked about radical modularity at Web Rebels and the notion that if your modules are small enough, they are done. Finished. You rarely need to ever touch them again (unless there’s a bug). Watch the talk: it’s very inspiring.

I really like this “pure” idea, and it can work very nicely as long as you keep your data and logic separate from your view.

Unfortunately, UI code often intermingles data and view, so you end up declaring your data as instances of whatever base component the UI library provides, which is not very maintainable in the long run. If you want to change the UI, you will need to take the ‘data’ bits out of the UI code, or write some sort of adapter between “UI code” and “data”, so that you only have to change the adapter when you decide you don’t like your current view layer. That extra indirection can be a performance hit, though, so you might end up sacrificing flexibility for performance.

But hey… everything in computing is always a trade-off!

 

How to write a talk

Hey Sole, you have spoken at a lot of places and go to a lot of conferences, so maybe you have some advice on how to write a talk?

Yes, indeed I do! In fact, this question comes up so often that I figured it would be super useful to share my method with more people, rather than just individually 🙂

Before we start, allow me to highlight that this is my method, and it might not suit you. Talks come in many formats and shapes depending on their content, the audience and many other factors. I usually talk about technical stuff, and this guide is about writing that type of talk.

Also, if you’re the TL;DR type, I made you a flow chart (using draw.io):

[Flow chart: how to write a talk]

Continue reading “How to write a talk”

Why I won’t talk about being a woman in tech (and neither should you)

I won’t do talks on “being a female in tech” for a number of reasons.

First, because they prevent me from doing talks on tech, which is what I would actually like to do, because that’s what I am best at. If someone approaches me to talk somewhere just because I’m a woman, they haven’t done their job of finding out what my expertise is. Therefore, I am going to insta-decline.

Not only is it very insulting and distracting, it also pigeonholes you into “talking about being a woman in tech”, instead of “woman who knows her tech”. It feels like, once again, we’re delegating to women and other vulnerable groups the “caring for others” work, in addition to their normal job. That is not OK.

Second, it devalues the work of diversity and gender studies professionals, as it implies that just by virtue of being a woman, I can not only speak for all other women, but also know how to fix things for all other women (note: I don’t).

That, in turn, makes me the “token woman”: everyone assumes that I represent all other technical women. This handicaps my ability to actually do my job and be an excellent technical person, as it puts additional pressure on me to be “perfect or go home” (ref: tokenism).

And it also makes me lose precious opportunities to be a good role model for other women. If they see that women in tech are relegated to “speaking about being a female in tech” instead of building and talking about solid technical stuff, they’re going to be discouraged and uninspired. Why try hard at tech if people are just interested in you because of your physiognomy? It is, once again, focusing on women physically, instead of highlighting their work and personal achievements.

Finally, I really dislike that it so often feels like either…

  • a really lazy attempt to solve the lack of diversity in conferences by organisers who didn’t do their work, but also do not want an internet storm rightly complaining about their all-male line-up
  • or a feeble attempt at having a “trendy” topic in your conference, because “diversity is the hot new thing everyone is talking about”

No! The answer to an all-male line-up is not a talk on women in tech by a woman. The answer is diverse people in the line-up, talking about tech. And if they want someone to cover that “trendy topic”, they should reach out to qualified people. They need to do their homework, instead of reaching out to the first “tech woman speaker” they can think of, and asking her to do a talk on something she’s not qualified in (which, again, puts her in a vulnerable position).

This doesn’t mean that I do not care about other women in tech. Of course I care! I dream of the day we stop having these conversations because “being in tech” has become the normal default, but in the meantime I will help pave the road to equality by talking about tech, inspiring other women and normalising the presence of women in these environments. Which is tiring enough, with all the sexism already present in our industry/society.

The P-word

Remember “progressive house”? As in, the music genre? It emerged sometime during the 90s and increasingly became synonymous with “repetitive songs that last 8 or more minutes, with a 6-minute build-up to a 30-second chorus which repeats for the rest of the song”.

Or, as quoted from Dave Seaman on the Progressive House Wikipedia entry: “it had gone the same way as progressive rock before it. Pompous, po-faced and full of its own self importance. But basically was really quite boring”.

Also, to fully appreciate progressive house you needed a decent sound system, or the drumbeat got lost, muddled with the rest of the subtones, and the echoes and reverbs got conflated with the main lead, resulting in an awful amalgamation of things fighting for protagonism. A very confusing cacophony.

While I liked the spectacular pads and the synths, I was more into happy melodic pop songs. They would maybe build up in 15 seconds, and then go all in with a 💥BANG💥 of harmonies and fun, almost instadelivering feel-good moods and all sorts of positive vibes. You could listen to pop songs literally almost anywhere, from crappy earphones to headphones to fancy Bang & Olufsen setups (although if you have a B&O system, are you allowed to have fun with it, or just use it for austere minimalist making-a-statement decoration?). It always sounded good, to varying degrees of better or worse, depending on your particular setup.

This might be the most awkward introduction ever to the subject I wanted to cover, but the three things start with a P, and I kind of like the letter P because it’s the letter one of my last names starts with too. But I’m digressing. Sole, what is the third thing?

The third thing is…

(DRAMATIC… PAUSE… FOR… SUSPENSE)

Progressive Web Apps. Or PWAs as they’re sometimes affectionately called.

Don’t tell me you didn’t see it coming. It would flatter my non-existent thriller-writing skills far too much if I had tricked you into reading this far without you already guessing who was behind the curtain holding a deadly weapon, so to speak. And then I’d get into writing thrillers, and that would be really awkward.

ANYWAY… articles about PWAs are multiplying lately. It’s like late spring finally happened and the seeds are sprouting. Cool. 🌱 I like plants a lot (see, another thing that starts with P). There’s a new article, or more, every week: Ada‘s, Alex‘s, Andrew‘s.

Now, my name doesn’t start with an A, but I wanted to write something about this whole new “world” because all I have to say won’t fit in a couple of snarky tweets, and fitting my sophisticated thoughts into a “tweetstorm” would make them hard to reference, let alone read, and too easy to take out of context. And I have a few things to say.

So – this is my personal opinion. It has nothing to do with my employer:
Continue reading “The P-word”

This is why C is a useful language

My colleague Wilson Page tweeted a couple of days ago asking which was the most useful language we knew after JavaScript:

Since I like to add interesting information to conversations, I looked at the answers first, to make sure no one had mentioned the answer I had in mind. And surprisingly enough, no one had mentioned my suggestion: C!

Wilson also asked for the reasons why I would suggest C as a useful language. I tweeted some:

but Twitter’s interface is horrible, and I feel they deserve a whole entire post, sooooo…

This is why C is a useful language

All the operating systems we interact with on a daily basis are built with C at some level. If not directly written in C, they are built using C-inspired or C-derived languages, such as C++ or Objective-C.

There are newer languages such as Java, C#, Rust or Swift that build on the experience of C programming and try to solve its most common errors–often around memory allocation and string handling. If you understand how C works, you understand the motivation for these newer languages. You also understand security issues better–when someone talks about buffer overflows and dumps, it doesn’t just sound like obscure jargon to you. Things make sense.
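For instance, here is a minimal, purely illustrative sketch of a stack buffer overflow (a toy example of mine, not from any real code base): the destination buffer is too small, so strcpy writes past its end and silently corrupts whatever happens to live next to it.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char buffer[8];
    int authorised = 0;

    /* 26 characters plus the terminating '\0' do not fit in 8 bytes,
     * so strcpy writes past the end of `buffer`. This is undefined
     * behaviour: depending on compiler and platform it may clobber
     * `authorised`, crash, or appear to work. */
    strcpy(buffer, "abcdefghijklmnopqrstuvwxyz");

    printf("authorised = %d\n", authorised);
    return 0;
}
```

Exactly this kind of off-by-a-lot mistake is what those newer languages are designed to make hard or impossible.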

As web developers we are usually happy in our top-level abstraction layer: the browser or node.js, or maybe even Electron. But if you try to dig a little bit deeper you’ll soon stumble upon binary stuff, which is more often than not C (in node.js’s case) or, again, a C-derived language. Understanding how to interface with these parts is great! And often, modules that bind a library or API to JS are not entirely documented. Being able to look at the code and understand it is a great advantage. Not only does it empower you as a user, it could even help you modify the bindings and make them do whatever you need, by using your knowledge of the two worlds: JS and binary. The same goes for Ruby, Python… you name it.
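To give a flavour of what those bindings look like, here is a minimal sketch of a native node.js addon written in C against the N-API header. The function name and everything else here is made up for illustration, and the build configuration is left out:

```c
#include <node_api.h>

/* Native implementation of add(a, b): read two JS numbers, add them in C,
 * and hand a JS number back. */
static napi_value Add(napi_env env, napi_callback_info info) {
    size_t argc = 2;
    napi_value args[2];
    napi_get_cb_info(env, info, &argc, args, NULL, NULL);

    double a, b;
    napi_get_value_double(env, args[0], &a);
    napi_get_value_double(env, args[1], &b);

    napi_value result;
    napi_create_double(env, a + b, &result);
    return result;
}

/* Module initialisation: expose the C function as `exports.add`. */
static napi_value Init(napi_env env, napi_value exports) {
    napi_value fn;
    napi_create_function(env, "add", NAPI_AUTO_LENGTH, Add, NULL, &fn);
    napi_set_named_property(env, exports, "add", fn);
    return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```

Once compiled, JavaScript code would just require() the addon and call add(2, 3) like any other module; the C is only visible if you go looking for it.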

You can do really powerful and superoptimised things in C; a frequent example is processing lots of data with little overhead. Normally, in our abstracted world we delegate tasks such as dealing with the stack or allocating and deallocating memory to the script interpreter of choice. In C you have to be aware of this, which can be a huge hassle, but if you understand those aspects you can manipulate them to your advantage. Allocate the right amount of stack memory and inline a bunch of calls and you can churn away lots of data without incurring the overhead I mentioned above.
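As a toy illustration of that idea (invented for this post, assuming nothing beyond standard C), here the working buffer lives on the stack and the per-sample operation is a tiny function the compiler can inline, so the hot loop boils down to plain arithmetic with no allocations at all:

```c
#include <stdio.h>
#include <stdint.h>

/* Per-sample operation: small enough for the compiler to inline. */
static inline int32_t scale_and_clamp(int32_t sample) {
    int32_t v = sample * 3 / 2;
    if (v > 32767)  v = 32767;
    if (v < -32768) v = -32768;
    return v;
}

int main(void) {
    int32_t samples[1024];              /* stack allocation, no malloc */
    for (int i = 0; i < 1024; i++)
        samples[i] = (i % 200) - 100;   /* fake input data */

    int64_t total = 0;
    for (int i = 0; i < 1024; i++) {
        samples[i] = scale_and_clamp(samples[i]);
        total += samples[i];
    }

    printf("total = %lld\n", (long long) total);
    return 0;
}
```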

Much of that kind of data-crunching code is conceptually very simple–just a bunch of integer arithmetic operations. Which is also why it’s relatively easy to transpile it to highly optimised JavaScript via things such as Emscripten and asm.js. See? JS and C–it all ties together neatly.

Another example of why knowing C will open doors for you is WebGL shaders. They are not written in C, but their syntax and semantics are very similar. They are relatively small programs designed to run in parallel on many GPU cores at the same time. They are also uncannily similar to data-crunching algorithms, in that they are essentially a lot of arithmetic operations. But you need to understand the limitations: using the proper data and operator types, no dependencies on the results of other calculations, no loops, the cost of accessing memory for texture reads/writes, etc. You have to be quite close to the ‘metal’, but in exchange you get incredible performance.
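Real shaders are written in GLSL, but to show the shape of the computation in plain C (an invented stand-in, not actual shader code): each output value depends only on its own coordinates and never on a neighbour’s result, which is exactly what lets the GPU compute thousands of them in parallel.

```c
#include <stdio.h>
#include <stdint.h>

#define WIDTH  256
#define HEIGHT 256

/* "Fragment shader" stand-in: computes one pixel from its coordinates
 * alone, with no dependency on any other pixel's result. */
static uint8_t shade(int x, int y) {
    return (uint8_t)((x ^ y) & 0xff);   /* a cheap procedural pattern */
}

int main(void) {
    static uint8_t image[HEIGHT][WIDTH];

    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            image[y][x] = shade(x, y);  /* every pixel is independent */

    /* Dump as an ASCII PGM greyscale image so the output can be viewed. */
    printf("P2\n%d %d\n255\n", WIDTH, HEIGHT);
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++)
            printf("%d ", image[y][x]);
        printf("\n");
    }
    return 0;
}
```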

Likewise, if you want to get into hardware programming, you’ll probably go through an Arduino first, but once you outgrow their IDE you might want to remove that layer and go to the next one: it will be written in C, and you might be able to save a few KB of memory by removing abstractions here and there. Software for embedded systems is often written in C or C derivatives, which then gets compiled into some kind of binary or “transpiled” and written to programmable hardware.
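To make that concrete, here is a hedged sketch of the classic blinking-LED program with the Arduino layer removed, assuming an ATmega328-style board (the chip on many Arduinos), avr-libc, and compilation with avr-gcc rather than the Arduino IDE:

```c
/* Blink the LED on PB5 (pin 13 on an Uno-style board) by poking the port
 * registers directly, instead of calling pinMode()/digitalWrite(). */
#define F_CPU 16000000UL     /* assume a 16 MHz clock */
#include <avr/io.h>
#include <util/delay.h>

int main(void) {
    DDRB |= (1 << DDB5);             /* configure PB5 as an output */
    for (;;) {
        PORTB ^= (1 << PORTB5);      /* toggle the LED */
        _delay_ms(500);
    }
}
```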

And if you remove C you find assembler. C is the closest you can get to assembler without writing assembler. As I said, it exposes much of the metal, so much so that you can often find bits of assembler embedded inside C code. You would think that this is only for when you get really specialised, but not really. The Linux kernel has many parts written in C plus assembler, and some written in assembler only, e.g. the boot sequence. There are even operating systems entirely written in assembler, such as MenuetOS.
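As a small taste (an illustrative snippet of my own, nothing to do with the kernel), this is what assembler embedded inside C looks like with GCC-style extended inline assembly on x86-64:

```c
#include <stdio.h>

/* Add two integers with an explicit x86-64 `add` instruction instead of
 * letting the compiler generate it. Assumes GCC or Clang on x86-64. */
static long add_asm(long a, long b) {
    long result;
    __asm__("addq %%rbx, %%rax"
            : "=a"(result)        /* output: read the sum back from RAX */
            : "a"(a), "b"(b));    /* inputs: a goes in RAX, b in RBX */
    return result;
}

int main(void) {
    printf("%ld\n", add_asm(40, 2));  /* prints 42 */
    return 0;
}
```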

Learning assembler might well be totally useless on a day-to-day practical basis, but it will deepen your understanding of general computing, and concepts such as registers, RAM, instruction sets, CPU cores and memory speed will not sound alien to you any more; again, you understand what limitations exist. When you stop thinking of a computer as a “black box that does things” and understand all the different subsystems and how they relate to each other, you start thinking about programming in a completely different, more nuanced way.

And at some point you will reach the epiphany moment and realise that it is actually a miracle that anything works at all, with so many layers and moving parts written by so many different people. You will also understand why the Web is such a great space to be in, and why abstractions are good: because otherwise we would all be trying to debug why our mallocs are failing and not getting anything done!

Happy programming in whatever language you like! \o/