Explicit vs implicit consent

This morning I saw I had been added to somebody else's GitHub repository. I don't know this person beyond the fact that we interacted via one of my projects. They are not a coworker. They are not even in any of the GitHub organisations I am in.

That they could add me in the first place is a surprise. Not even a confirmation email? Not even a "hey, person A wants to add you to repository B. Are you sure you are OK with this?". No, just straight up added to somebody else's repository.

"So what?", you might ask. Well, so to start with I feel I'm not in control of my account. Anybody can add anyone to any repository, even repositories depicting practices with which you might not be OK with. But until you find out, you're associated to that repository. And also, you get pinged about all that happens on it. Get ready for that flurry of emails into your inbox!

This is akin to being tagged in pictures without your authorisation. You can be tagged "for reals", i.e. a real picture of you is tagged, or you can be tagged "for the lulz", i.e. somebody tags an offensive picture with your name. Either way, on other social networks you can disable this feature. On GitHub, you can't. Or if you can, it's hidden behind a preference pane reachable only from a far-from-obvious link in a not-quite-related subpane. Like this page for stopping GitHub from autosubscribing you to repositories. It seems deliberately tucked away so that people keep getting "pings" from the site.
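
As a stopgap, the same watch subscriptions can be pruned through GitHub's REST API rather than the buried preference pane. Here is a minimal sketch, not an official tool, assuming Python with the requests library and a personal access token in a GITHUB_TOKEN environment variable; the username check is a hypothetical placeholder:

```python
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}


def watched_repositories():
    """Yield every repository the authenticated user is watching."""
    url = f"{API}/user/subscriptions"
    while url:
        response = requests.get(url, headers=HEADERS)
        response.raise_for_status()
        yield from response.json()
        # Follow the pagination Link header until there is no "next" page.
        url = response.links.get("next", {}).get("url")


def unwatch(full_name):
    """Remove the watch subscription for an "owner/repo" name."""
    response = requests.delete(f"{API}/repos/{full_name}/subscription",
                               headers=HEADERS)
    response.raise_for_status()


if __name__ == "__main__":
    for repo in watched_repositories():
        full_name = repo["full_name"]
        # Keep your own repositories; ask before dropping anything else.
        if repo["owner"]["login"] != "your-username":  # hypothetical username
            if input(f"Unwatch {full_name}? [y/N] ").strip().lower() == "y":
                unwatch(full_name)
```

Note the interactive confirmation: unsubscribing should itself be an explicit choice, not something a script decides on your behalf.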

I wonder how much of the popularity of GitHub is due to its use of implicit rather than explicit consent.

I also wonder what everyday systems would look like if they were built with explicit consent in mind from the ground up. Not adding privacy or blocking features as an afterthought, but the other way round. Is this a consequence of these systems being built by people for whom those concerns are an afterthought, if they ever become a concern at all?

My worry: implicit consent leads to abuse. As soon as you reach a certain mass, things get nasty. This volume of email and notifications causes me a great deal of stress. Some dear friends of mine are afraid of sharing unfinished code because they instantly get forks, comments, and stupid suggestions that make me wonder what is wrong with life. Managing all this is time-consuming, exhausting, debilitating.

Of course, explicit consent results in slower adoption rates. More checks. More "bureaucracy". But also safer, less stressful environments, because someone has already thought about "what can go wrong", or, dare I even say it, someone has realised that not all users fit the same "can share by default, happy to be included in all groups, and OK with getting emails from anyone" profile, and so the system protects these vulnerable users by default, instead of defending them only in reaction to attacks.

Can we work on better, faster, more efficient checks so that using these explicit-consent systems does not become a chore? Can we design more inclusive systems where users control their experience, rather than being constantly assaulted by an infinite stream of unwanted incoming stimuli?

Maybe an answer to this is more self-hosted, federated systems, where people can configure the service to match their expectations and preferences, and nothing in their experience changes unless they deliberately choose to change it.

Or perhaps it's about demanding more of the systems we use. Or about switching to systems that let users control their experience, rather than ones that, perhaps unwittingly, force an abusive outcome on them by design.