Fixing a “git mess” with cherry pick (from the command line)

Yesterday we were remote pair programming (I was sharing my screen while my colleague Alex looked at it), and at some point I had to send a PR with a test change to validate that the process would trigger the automation he had set up… but it turned out he had changed something in the repo after I had forked and cloned it, so I had a conflict!

I somehow resolved it (note to self: talking while resolving conflicts is not a good idea), but when I then pushed my branch to GitHub and created the pull request, it would not run the automation because of that conflict (even though I had ‘resolved’ it).

I did not want to clone again and reapply the changes… and neither did Alex! He said that everything is fixable with Git, so he showed me how to get myself out of this situation.

First we add the upstream remote to our repository, so we can fetch from it:

git remote add upstream https://github.com/user/repo.git
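If you want to double-check the remotes at this point, git remote -v lists them; assuming the fork is still the default origin, you should now see both origin and upstream:

git remote -v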

We fetch the data from the upstream repo:

git fetch upstream

Run a git log to find the hash of the commit we do want to keep (the Good Commit):

git log

Say it produces something like this:

commit 0BAD0BAD0BAD0BAD0BAD0BAD0BAD0BAD0BAD0BAD
Author: sole
Date:   Thu Mar 2 16:02:03 2017 +0000

    AAAAARGHHHH

commit 1337133713371337133713371337133713371337
Author: sole
Date:   Thu Mar 2 15:55:26 2017 +0000

    testing at the request of @ochameau

commit 0b225a66cf3ad67b3c67360d0e7c1e329ca3ce34
Author: Alexandre Poirot
Date:   Tue Feb 28 11:57:23 2017 +0100

    Upload screenshot and status

We don’t want the most recent commit (0BAD0BAD); we want the previous one (13371337). So make a note of that hash somewhere.

Now we check out their master branch (which we want to use as the base for our modification):

git checkout upstream/master
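Note that checking out upstream/master like this leaves you on a detached HEAD, which is fine here because we will refer to HEAD explicitly when pushing later. If you would rather work on a named branch, something like this should do (the branch name is just an example):

git checkout -b clean-pr upstream/master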

And we tell git to apply our changes from The Good Commit, using the hash we found before:

git cherry-pick 1337133713371337133713371337133713371337

Since I didn’t even change the same file he changed in his other commit, this applied neatly. No conflicts!
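Had it not applied cleanly, git would have paused the cherry-pick so I could resolve the conflict; after fixing the files and git add-ing them, it is a matter of either continuing or backing out:

git cherry-pick --continue
git cherry-pick --abort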

The problem is that my local repository is now different from the GitHub copy, because I had pushed a version which had an additional commit to resolve ‘the conflict’ (I tell you, this was quite messy!).

The solution is to force push to my GitHub repository (gasp!):

git push origin HEAD:master -f

And you don’t need to update the PR that had “conflicts”–GitHub already picks up that you updated the repository, and since there are no conflicts anymore, the integration stuff works 🙂
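If force pushing makes you nervous, there is a slightly gentler variant (assuming a reasonably recent git) that refuses to overwrite the remote branch if someone else has pushed to it in the meantime:

git push --force-with-lease origin HEAD:master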

Using the currentColor CSS keyword

I learnt about this CSS keyword via Glen Maddern’s talk at Cold Front in Copenhagen, back in September, and I was super astonished I hadn’t heard about it before! I guess I do too much JavaScript 😏

Anyway, it represents the current value of the element’s color property (usually inherited from a parent), so you can use it to create borders and backgrounds and things like that, matching the colour of the element, but without actually writing it again! This can help avoid repetition and keep the CSS more manageable, or, in Glen’s use case, help with writing more responsive components.

Given this HTML code:

<p id="thing">hello <span>world</span></p>

and this CSS code:

#thing {
  color: blue;
  font-size: 3rem;
}

#thing span {
  padding: 3px;
  background-color: white;
  box-shadow: 0px 0px 20px currentColor;
}

the color for the box-shadow in the #thing span will be blue, because it uses currentColor, which at that point has inherited blue from #thing. If we change the color of #thing to something else, we do not need to update the code for #thing span. Beautiful!
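The same idea works for borders too; here is a tiny sketch with a made-up .note class, where the border automatically matches whatever text colour the element ends up with:

.note {
  color: teal;
  /* the border picks up the element's color property, teal in this case */
  border: 2px solid currentColor;
}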

You could even use CSS variables to set a global colour variable for the document, and the colour set with var() will trickle down to currentColor as well. For example:

body {
  --thecolor: red;
}

#thing {
  color: var(--thecolor);
  font-size: 3rem;
}

#thing span {
  padding: 3px;
  background-color: white;
  box-shadow: 0px 0px 20px currentColor;
}

… renders the text red, and the box shadow is red as well.

Fantastic!
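And since the variable is set in one single place, changing it there updates the text and the shadow at once. For example, this hypothetical override:

body {
  --thecolor: rebeccapurple;
}

…makes both the text and the shadow purple, with no other changes needed.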

Unfortunately it seems like calc() doesn’t accept colour values yet, which means we cannot do maths on them. Otherwise, we could do the kind of thing CSS pre-processors do, generating new colours with hsla() functions, etc.

Navigating Bash history

Today I was doing some command line based work and I found myself scrolling too many times to reuse the long command I had written, followed by a lot of ls’s and similar. I thought: this is ridiculous, there has to be a better way!

So I searched and I quickly got to history. If you enter history in a prompt, you get… a list of your last entered commands! And you don’t need to scroll past irrelevant ones until you find the one you actually wanted to use. Beautiful! I was excited so I tweeted it.
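The entries come out numbered, something like this (the commands here are made up):

  501  cd ~/projects/thing
  502  npm install
  503  npm test
  504  ls

and in bash you can re-run, say, entry 503 by typing !503.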

Then people who knew more than me told me to use CTRL+R. I had accidentally landed in the CTRL+R weirdness before, when I wanted to reload the current page and thought the browser window had focus when it was actually the bash window, but since I could just ESCape out of it, I didn’t pay too much attention to it.

The way CTRL+R works is similar to a “find in page” feature: you type some text and it starts showing a matching command as you type–no need to press a ‘find’ button. To move through the results, press CTRL+R again to jump to older matches (and, if your terminal isn’t using it for flow control, CTRL+S to come back towards newer ones).
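On screen it looks something like this (again, a made-up example): press CTRL+R, type a few letters, and bash shows the most recent matching command, which you can run with Enter or edit first:

(reverse-i-search)`npm': npm test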

Another alternative is to call history and then grep its output (grep is a filter: it will just show lines that match):

history | grep 'what you want to find'

So, for interactive searches: CTRL+R. To reconstruct what you’ve done (e.g. for a tutorial): history, maybe combined with grep.

I don’t quite understand how I have survived without it for so long, but it also proves that you don’t need advanced command line superskills to survive in life 😛

With thanks to Jan Lehnardt and Reza Akhavan for the tips!

Fixing VirtualBox guests losing access to the network when the host has been suspended

I am running VirtualBox guest instances of Linux on a Mac OS X host. Sometimes when I open the laptop lid I find that the guest instances are unable to connect to the network–everything simply times out!

The best solution I’ve found so far is to disable and re-enable the network interface in the guest, by right-clicking the little network icon in its window and selecting “Connect Network Adapter” twice (the first click ‘disconnects’ it, the second ‘reconnects’ it). Then the network comes back to normal.
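If you prefer doing it from the host’s terminal, VBoxManage should be able to toggle the virtual network cable too; something like this ought to work (the VM name is made up, and the 1 refers to the guest’s first network adapter):

VBoxManage controlvm "My Linux VM" setlinkstate1 off
VBoxManage controlvm "My Linux VM" setlinkstate1 on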

I’m guessing that perhaps the VirtualBox daemon, or whatever it uses on the Mac, gets killed when the host is suspended, but I’ve no idea how to restart it (and no will to dig into it). Other solutions could be closing and starting VirtualBox again, or even rebooting the laptop! But they are way more obtrusive, and this is faster. Like a gentle ‘turn it off and on again’ 😉


Migrating to a new laptop (or: Apple-inflicted misery, once again)

Yesterday I got my new laptop and the technician’s idea was to just migrate all my settings and stuff over from the old one for simplicity, using Mac OS X’s built-in migration assistant.

I actually didn’t want to do this because I liked the notion of a clean slate to get rid of old cruft that I didn’t want anymore, but I thought I would give the migration assistant the benefit of the doubt.

TL;DR: it doesn’t seem to be ready for migrating a laptop that has been given intensive usage and has plenty of small files (think: the contents of node_modules) and big files too (think: screencasts).

The new laptop is one of those ultra light MacBooks with a USB-C connector, so it doesn’t have an Ethernet connector to start with unless you add one via the semidock.

The initial attempt was to migrate the data over the wireless network. After three hours with the progress barely changing from “29 hours” to “28 hours”, I gave up and started reaching for the Thunderbolt to Ethernet adapters. We stopped the process and connected both computers to the same switch with ethernet cables. The estimate was now 4 hours. MUCH BETTER.

I calculated that it would be done at about 20h… so I just kept working on my desktop computer. I had a ton of emails to reply to, so it was OK not to use my normal environment–you can’t use the computer while it’s being migrated.

A little bit before 20h I looked at the screen and saw “3 minutes to finish copying your documents” and I got all stupidly excited. Things were going according to plan! Yay! So I started to get ready to leave the office.

Next time I looked at the screen it said something way less encouraging: “Copying Applications… 2 hours, 30 minutes left”

I was definitely not going to wait until 22:30… or even later, because the estimate kept going up–2 hours, 40 minutes now, 3 hours… I decided to go home, not without wondering if the developer in this classic XKCD cartoon was working at Apple nowadays:

Remaining time: Fifteen minutes... Six days... thirty seconds

Urgh.

Today I accidentally slept in (thanks, jetlag), and when I arrived at the office, all full of hope and optimism, I found the screen stuck at “359 hours, 44 minutes left”.

I turned around to Francisco and asked him: “hey, how many days is 359 hours?” He opened up the calculator and quickly found out.

About 14 days.

And 44 minutes, of course.

I gave the migration “assistant” some more benefit of the doubt and went for lunch. When I came back it was still stuck, so it was time to disregard this “assistance” and call rsync into action.

  • I enabled SSH on my old laptop (Preferences – Sharing – Enable remote login)
  • Created an SSH key on my new laptop so I didn’t have to type in the password of the old one each time. Then added the new key to ssh-agent (otherwise ssh doesn’t even bother trying to use that key when connecting to remote hosts), and copied the new public key into the authorized_keys file on the old computer (the GitHub instructions for generating keys are very good at explaining this); see the sketch after this list.
  • rsync was already installed on the computers I think, or maybe I installed it with brew, but I’d swear it was already there
  • then to copy entire directories I would use something like rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/folderIWantToCopy/ /path/to/folder/parent
  • except when the folder would have a space in the name, in which case I had to escape it with a TRIPLE BACKSLASH. For example, to copy the VirtualBox VMs folder: rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/VirtualBox\\\ VMs/ .
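For reference, the key setup from the second step looks roughly like this (default file names, adjust to taste); the last command appends the new public key to authorized_keys on the old laptop, typing its password one final time:

ssh-keygen -t rsa -b 4096
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub | ssh sole@oldcomputer.local 'cat >> ~/.ssh/authorized_keys'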

I have most of my stuff in a ~/data directory, so migrating between computers should be easy, by just copying that folder.

Whatever wasn’t, I copied manually. For example, the Google Chrome and Chrome Canary profiles, because I didn’t want to set them up from scratch–you can copy them and keep some of the history without having to sign into Google (some of my profiles just don’t have an associated Google ID, thank you very much). Unfortunately things such as cookies are not preserved, so I need to log into websites again. Urgh, passwords.

cd ~/Library/Application\ Support
mkdir -p Google/Chrome
mkdir -p Google/Chrome\ Canary
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome/ ./Google/Chrome/
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome\\\ Canary/ ./Google/Chrome\ Canary/
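A handy trick when you are not sure what a given rsync invocation is going to copy: add -n (dry run) to the flags and it prints the list of files it would transfer without actually copying anything, for example:

rsync -avzn --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/data/ ~/data/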

I also copied the Thunderbird profiles. They are in ~/Library/Thunderbird. That way I avoided setting up my email accounts, and also my custom local rules.

I logged into my Firefox account in Nightly and it just synced and picked up my history, bookmarks, saved passwords and stuff, so I didn’t even need to bother copying the Firefox profiles. It’s very smooth! You should try it too.

Note that I did all this copying before even downloading and running the apps, to avoid them creating a default profile on their first start.

While things were copying I had a look at the list of apps I had installed and carefully selected which ones I wanted to actually re-install. Some of them I installed using homebrew, others using the always-awkward, iTunesque in spirit and behaviour, App Store. Of note: Xcode has spent the whole afternoon installing.

I also took this chance to install nvm instead of just node stable, so I can experiment with various versions of node. Maybe. I guess. We’ll see if it’s more of a mess than not!

In short, it’s now almost midnight but I’m done. I started copying things at 17h, and had a few breaks to do things such as laundry, dishwasher, tidying up my flat, grocery shopping, and preparing and eating dinner, so let’s imagine it just took me 4 hours to actually copy the data I was interested in.

Moral of the story: rsync all the things. Don’t trust Apple to know better than you.