
Functional JS, IRC servers and the internet of things

I attended the London Functional JS meetup last Wednesday. James Coglan gave a nice walkthrough of the approach he's been experimenting with for writing functional JS. This doesn't mean just using Array.map and other "functional JS" tools, but going way further and encapsulating data into unified types (for example, Promises or Streams) so that nothing can be synchronous, or maybe asynchronous, or both, sometimes, anymore (and so we can't release Zalgo). Transducers also showed up.
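
To give a flavour of the idea: if every value that might arrive now or later is wrapped in a Promise, callers always observe it asynchronously, never "sometimes sync, sometimes async". A minimal sketch (the function name is mine, not from the talk):

```js
// Wrap a plain value or a promise so consumers always get it
// asynchronously. Promise.resolve() guarantees .then() callbacks run
// on a future tick, even when the value is already known.
function eventually(valueOrPromise) {
  return Promise.resolve(valueOrPromise);
}

eventually(42).then(function (v) {
  console.log('got', v); // always logs *after* the line below
});
console.log('this runs first, predictably');
```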

I must confess I am not really an expert in any of these things, which is precisely why I went. I didn't get all the concepts he discussed, and I'm perfectly fine with that: it's food for thought for when I come back from Berlin. I like having undigested ideas mulling at the back of my brain; then, at some unpredictable point, they all come together and voilà, I have the solution to some woe that has been chasing me for weeks.

After the presentation proper there was time for a coding dojo. James had a skeleton starter for an IRC client written in a functional manner, and we had to implement more commands. Sadly I hadn't brought my laptop because I didn't know there would be a dojo, and I generally don't find value in carrying a laptop to a social meetup (I have my phone for tweeting or taking notes anyway), but there were some more people I knew from "the London JS scene" (Howard, Karolis), so we "tri-pair programmed" on Karolis' laptop. Or should I more accurately say that they did most of the work and typing while I threw random ideas at them based on my limited knowledge of functional programming and CoffeeScript?

Anyway, it was fun, and it rekindled an idea that has been lingering in my mind since I attended NodeConf London: using IRC to have services communicate with each other. Yes, you can connect them via a socket or HTTP requests or whatever invisible protocol, but when someone mentioned using Jabber as a protocol for connecting "things" from the Internet of Things, my brain somehow transfigured it into having these "things" use IRC instead of Jabber, and I became interested in the idea of poking into the realtime conversation between machines. I'm not really sure what they would be talking about or what kind of messages they'd be exchanging, but it would be weirdly interesting to program them to reprogram themselves as they learn, and see where they would go when they all output their stuff to this common data bus, i.e. an IRC channel. And how would they react to whatever humans said to them?
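
For the curious: getting a "thing" onto an IRC data bus takes very little code with the irc package from npm. A rough sketch of what I have in mind (server, channel and message format are all made up for illustration):

```js
// A "thing" publishing readings to an IRC channel used as a data bus.
var irc = require('irc');

var client = new irc.Client('irc.example.net', 'sensor-bot', {
  channels: ['#machine-bus']
});

// Broadcast a fake sensor reading every 30 seconds.
setInterval(function () {
  var reading = { sensor: 'temp', value: 20 + Math.random() * 5 };
  client.say('#machine-bus', JSON.stringify(reading));
}, 30000);

// React to anything humans (or other machines) say on the bus.
client.addListener('message#machine-bus', function (from, text) {
  if (/status\?/.test(text)) {
    client.say('#machine-bus', from + ': still alive and measuring');
  }
});
```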

I tried playing with hubot a few months ago in a bit of spare time I had (that was like 6 hours), but I didn't quite understand how to access "its brain": is it just a massive set of key-value pairs? And how long does it persist?
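
From what I've gathered since (treat this as an educated guess, not gospel), the brain is indeed roughly a key-value store: in-memory by default, and persisted only if you plug in something like hubot-redis-brain. A minimal script, with keys I made up:

```js
// Hypothetical hubot script storing facts in robot.brain.
module.exports = function (robot) {
  robot.respond(/remember (\w+) is (.+)/i, function (res) {
    robot.brain.set(res.match[1], res.match[2]); // stored in the brain
    res.reply('OK, noted.');
  });

  robot.respond(/what is (\w+)\?/i, function (res) {
    var value = robot.brain.get(res.match[1]); // null if never stored
    res.reply(value ? res.match[1] + ' is ' + value : 'No idea.');
  });
};
```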

There was also the issue of it being written in CoffeeScript, and how inadequate and old-school its module management system was, but I could deal with those if I closed my eyes and played the pragmatic game. Perhaps there are better bot-that-speaks-to-IRC alternatives, but I don't know any; ideas are welcome!

I also envision being able to visualise any of these services' brains while they are running, learning and remorphing their fact base. I imagine it would look something like this, which has nothing to do with that, but it's how I imagine it:

Now to more Web Audio workshop preparations :D

“Just turn it into a node module”, and other mantras Edna taught me

Here are the screencast and the write-up of the talk I gave today at One-Shot London NodeConf. As usual I diverged a bit from the initial narrative and forgot to mention a couple of topics I wanted to highlight, but I've had a horrible week and, considering that, this has turned out pretty well!

It was great to meet so many interesting people at the conference and to see old friends again! Also, now I'm quite excited to hack on a few things. Damn (or yay!).


PyLadies’ Web scraping workshop at Mozilla London

I’ll be hosting this event at Mozilla London next Sunday, the 13th of July. If you’re interested in web/data scraping, this will be your thing!

Nicola Hughes will be leading the workshop. Bring your laptop and get scraping! :-)

To sign up and for more details, visit the event page.

On EdgeConf London 2014

This is one of those posts I’ve wanted to write since about… two months and three weeks ago? It’s so long ago that most of the details I wanted to comment on have been sort of lost in my mind, so I’ll content myself with being slightly vague.

EdgeConf London 2014 was mostly a good, albeit peculiar, conference. The organisers did a great job of taking care of speakers, bringing us together, speaking to the moderators in advance, etc., and selecting delegates (they had to apply first and were then accepted, or not; that’s why I say this is a “peculiar” conference). You could feel that the people in the room really cared about the web, or were at least deeply interested in learning more about it, for either entirely altruistic reasons (“to keep it open”) or just to do their job better (“to reach more customers”). All valid reasons, and equally respectable.

The great mix of attendees from different backgrounds, coupled with the “debate” style, meant there were higher chances than usual of hearing different informed perspectives. Lots of interesting conversations also happened in the hallway, and some afterwards; the “EdgeConf effect” kept going after the event finished, which is always nice.

My only concern is also a twofold piece of advice for anyone thinking of organising a similar open-discussion / moderated-table event:

  • a) you need a suitable layout: even if you have a moderator, being able to see the faces of your panel mates lets you pick up visual cues when your time is up, or when you’re being confusing or plainly straying into digressions. A semicircle is the answer to this.
  • b) you need a moderator who takes cultural differences into account. Some people will just wait until asked to speak, whereas others will take as much time as they can and eagerly interrupt other people. A moderator has to keep this in mind and act accordingly to give everyone on the panel a voice.

And yes, I voiced these concerns to the organisers, and they took it well, as I expected they would :-)

Registration for the next EdgeConf is already open. It will take place in San Francisco on the 20th of September, and you should totally attend if you can.

EdgeConf London, Audio Tags, and Web MIDI

EdgeConf

I am going to be on the Web Components panel at EdgeConf London tomorrow (21st of March).

Being the perfectionist, nit-picky person I am, and never having been on a panel of this type, I’m obsessed with getting ready and reading as much as I can on the subject. I still have a bunch of proposed questions to go through, but I hope I’ll do well. Wish me luck!

The conference is supposedly going to be streamed live, and talks will be recorded and professionally subtitled, which is awesome because speech recognition and my command of English don’t seem to mix.

Audio Tags

Also, I forgot to post about my latest Mozilla Hacks article: Audio Tags: Web Components + Web Audio = ♥. It is a write-up of my CascadiaJS 2013 talk of the same name, only with better-looking audio tags, less jetlag, and a lot of editing (thank Angelina for that!).

Web MIDI

Good things came out of the article: somehow some people learnt about my interest in Web Audio, so I showed them my existing experiments and begged for Web MIDI in Firefox so I didn’t have to use node.js as an intermediary. Then Kyle Machulis (also known as qDot) decided to take on the “implement the Web MIDI API in Firefox” bug. Which means that sometime soon we’ll have support for Web MIDI in Firefox. Which means that a little bit later we could have Web MIDI in Firefox OS and Firefox for Android. Which means… well, do you really need me to explain how cool that can be?

Imagine controlling synthesisers with your truly portable device, or using MIDI input devices to control things on your phone… all in JavaScript. There is a huge range of portable audio devices specifically targeted at iPads: there is a market for portable audio creation. Why not make it easier?
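
If you haven’t seen it yet, this is roughly what the API looks like as currently specified (a minimal sketch, not Firefox code; that’s exactly what still needs implementing):

```js
// Request MIDI access, list the available inputs, and log incoming
// messages. Only some browsers expose this at the moment.
navigator.requestMIDIAccess().then(function (midi) {
  midi.inputs.forEach(function (input) {
    console.log('MIDI input found:', input.name);
    input.onmidimessage = function (event) {
      // event.data is a Uint8Array, e.g. [144, 60, 100] for
      // "note on, middle C, velocity 100".
      console.log('MIDI message:', event.data);
    };
  });
}, function (err) {
  console.log('No MIDI for you:', err);
});
```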

But we need help. Kyle is also working on Gaia, and he can’t implement it all himself. I’m glad the ever-helpful Chris Wilson (one of my personal developer heroes) is always keen to give advice (example), but we need more than that. Building Web MIDI support implies building a Hardware Access Layer (HAL for friends) between the browser ‘core’ and the MIDI interfaces for each operating system. We’re talking about at least three different HALs: Linux, Mac OS, Windows. And this is where you can step in and become another Firefox hero! If you’re interested, ping me or Kyle, or probably just ask in the bug.