Journal tags: coding

Train coding

When I went up to London for the State of the Browser conference last month, I shared the train journey with Remy.

I always like getting together with Remy. We usually end up discussing sci-fi books we’re reading, commiserating with one another about conference-organising, discussing the minutiae of browser APIs, or talking about the big-picture vision of the World Wide Web.

On this train ride we ended up talking about the march of time and how death comes for us all …and our websites.

Take The Session, for example. It’s been running for two and a half decades in one form or another. I plan to keep it running for many more decades to come. But I’m the weak link in that plan.

If I get hit by a bus tomorrow, The Session will keep running. The hosting is paid up for a while. The domain name is registered for as long as possible. But inevitably things will need to be updated. Even if no new features get added to the site, someone’s got to install updates to keep the underlying software safe and secure.

Remy and I discussed the long-term prospects for widening out the admin work to more people. But we also discussed smaller steps I could take in the meantime.

Like, there’s the actual content of the website. Now, I currently share exports from the database every week in JSON, CSV, and SQLite. That’s good. But you need to be a tech nerd to do anything useful with that data.

The more I talked about it with Remy, the more I realised that HTML would be the most useful format for the most people.

There’s a cute acronym in the world of digital preservation: LOCKSS. Lots Of Copies Keep Stuff Safe. If there were multiple copies of The Session’s content out there in the world, then I’d have a nice little insurance policy against some future catastrophe befalling the live site.

With the seed of the idea planted in my head, I waited until I had some time to dive in and see if this was doable.

Fortunately I had plenty of opportunity to do just that on some other train rides. When I was in Spain and France recently, I spent hours and hours on trains. For some reason, I find train journeys very conducive to coding, especially if you don’t need an internet connection.

By the time I was back home, the code was done. Here’s the result:

The Session archive: a static copy of the content on thesession.org.

If you want to grab a copy for yourself, go ahead and download this .zip file. Be warned that it’s quite large! The .zip file is over two gigabytes in size and the unzipped collection of web pages is almost ten gigabytes. I plan to update the content every week or so.

I’ve put a copy up on Netlify and I’m serving it from the subdomain archive.thesession.org if you want to check out the results without downloading the whole thing.

Because this is a collection of static files, there’s no search. But you can use your browser’s “Find in Page” feature to search within the (very long) index pages of each section of the site.

You don’t need a web server to click around between the pages: they should all work straight from your file system. Double-clicking any HTML file should give you a starting point.

I wanted to reduce the dependencies on each page to as close to zero as I could. All the CSS is embedded in the page. Likewise with most of the JavaScript (you’ll still need an internet connection to get audio playback and dynamic maps). This keeps the individual pages nice and self-contained. That means they can be shared around (as an email attachment, for example).

I’ve shared this project with the community on The Session and people are into it. If nothing else, it could be handy to have an offline copy of the site’s content on your hard drive for those situations when you can’t access the site itself.

Codebar Brighton

I went to codebar Brighton yesterday evening. I hadn’t been in quite a while, but this was a special occasion: a celebration of codebar Brighton’s tenth anniversary!

The Brighton chapter of codebar was the second one ever, founded six months after the initial London chapter. There are now 33 chapters all around the world.

Clearleft played host to that first ever codebar in Brighton. We had already been hosting local meetups like Async in our downstairs event space, so we were up for it when Rosa, Dot, and Ryan asked about having codebar happen there.

In fact, the first three Brighton codebars were all at 68 Middle Street. Then other places agreed to play host and it moved to a rota system, with the Clearleft HQ as just one of the many Brighton venues.

With ten years of perspective, it’s quite amazing to see how many people went from learning to code in the evenings, to getting jobs in web development, and becoming codebar coaches themselves. It’s a really wonderful community.

Over the years the baton of organising codebar has been passed on to a succession of fantastic people. These people are my heroes.

It worked out well for Clearleft too. Thanks to codebar, we hired Charlotte. Later we hired Cassie. And it was thanks to codebar that I first met Amber.

Codebar Brighton has been very, very good to me. Here’s to the next ten years!

Schooltijd

I was in Amsterdam last week. Usually I’m in that city for an event like the excellent CSS Day. Not this time. I was there as a guest of Vasilis. He invited me over to bother his students at the CMD (Communications and Multimedia Design) school.

There’s a specific module his students are partaking in that’s right up my alley. They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

Seriously though, I genuinely get excited by the potential for progressive enhancement here. Sure, there’s the obvious approach of building in layers; HTML first, then CSS, then a sprinkling of JavaScript. But there’s also so much potential for enhancement within each layer.

Got your form fields marked up with the right input types? Great! Now what about autocomplete, inputmode, or pattern attributes?

Got your styles all looking good on the screen? Great! Now what about print styles?

Got form validation working? Great! Now how might you use local storage to save data locally?
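Something along these lines would do the job (a minimal sketch, assuming a form with named fields):

// Restore any previously saved values when the page loads.
const form = document.querySelector('form');
if (form && 'localStorage' in window) {
  Array.from(form.elements).forEach(field => {
    if (!field.name) return;
    const saved = localStorage.getItem(field.name);
    if (saved !== null) {
      field.value = saved;
    }
  });
  // Save each value whenever it changes.
  form.addEventListener('input', event => {
    if (event.target.name) {
      localStorage.setItem(event.target.name, event.target.value);
    }
  });
}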

As well as taking this practical module, most of the students were also taking a different module looking at creative uses of CSS, like making digital fireworks, or creating works of art with a single div. It was fascinating to see how the different students responded to the different tasks. Some people loved the creative coding and dreaded the progressive enhancement. For others it was exactly the opposite.

Having to switch gears between modules reminded me of switching between prototypes and production:

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

Here’s something I noticed: the students love using :has() in CSS. That’s so great to see! Whereas I might think about how to do something for a few minutes before I think of reaching for :has(), they’ve got it front of mind. I’m jealous!

In general, their challenges weren’t with the vocabulary or syntax of HTML, CSS, and JavaScript. The more universal problem was project management. Where to start? What order to do things in? How long to spend on different tasks?

If you can get good at dealing with those questions and not getting overwhelmed, then the specifics of the actual coding will be easier to handle.

This was particularly apparent when it came to JavaScript, the layer of the web stack that was scariest for many of the students.

I encouraged them to break their JavaScript enhancements into two tasks: what you want to do, and how you then execute that.

Start by writing out the logic of your script not in JavaScript, but in whatever language you’re most comfortable with: English, Dutch, whatever. In the course of writing this down, you’ll discover and solve some logical issues. You can also run your plain-language plan past a peer to sense-check it.

It’s only then that you move on to translating your logic into JavaScript. Under each line of English or Dutch, write the corresponding JavaScript. You might as well put // in front of the plain-language sentence while you’re at it to make it a comment—now you’ve got documentation baked in.
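A hypothetical example of how that might end up looking, with the plain-language plan preserved as comments (the field and message selectors are made up):

// When the user leaves the postcode field,
document.querySelector('#postcode').addEventListener('blur', event => {
  // check whether the value looks like a Dutch postcode: four digits, two letters.
  const valid = /^[0-9]{4}\s?[A-Za-z]{2}$/.test(event.target.value);
  // If it doesn't, show the warning message next to the field.
  document.querySelector('#postcode-warning').hidden = valid;
});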

You’ll still run into problems at this point, but they’ll be the manageable problems of syntax and typos.

So in the end, it wasn’t my knowledge of specific HTML, CSS, or JavaScript APIs that proved most useful to pass on to the students. It was advice like that on how to approach HTML, CSS, or JavaScript.

I also learned a lot during my time at the school. I had some very inspiring conversations with the web developers of tomorrow. And I was really impressed by how much the students got done just in the three days I was hanging around.

I’d love to do it again sometime.

Indie webbing

The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.

There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.

Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.

Day two was all about putting ideas into practice: coding, designing, and writing on our own websites. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.

I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal; a list of suggested follow-up posts to read based on the tags of the current post.

I wanted to do the same for my links; show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.

But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.

So now, if everything’s working correctly, then at the end of this post you will not only see related blog posts I’ve previously written, but also links related to the content of this post.

It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.

While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.

“It’s having your own website”, I said.

But surely there’s more to it than that, they wondered.

Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.

What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.

Federation syndication

I’m quite sure this is of no interest to anyone but me, but I finally managed to fix a longstanding weird issue with my website.

I realise that me telling you about a bug specific to my website is like me telling you about a dream I had last night—fascinating for me; incredibly dull for you.

For some reason, my site was being brought to its knees anytime I syndicated a note to Mastodon. I rolled up my sleeves to try to figure out what the problem could be. I was fairly certain the problem was with my code—I’m not much of a back-end programmer.

My tech stack is classic LAMP: Linux, Apache, MySQL and PHP. When I post a note, it gets saved to my database. Then I make a curl request to the Mastodon API to syndicate the post over there. That’s when my CPU starts climbing and my server gets all “bad gateway!” on me.

After spending far too long pulling apart my PHP and curl code, I had to come to the conclusion that I was doing nothing wrong there.

I started watching which processes were making the server fall over. It was MySQL. That seemed odd, because I’m not doing anything too crazy with my database reads.

Then I realised that the problem wasn’t any particular query. The problem was volume. But it only happened when I posted a note to Mastodon.

That’s when I had a lightbulb moment about how the fediverse works.

When I post a note to Mastodon, it includes a link back to the original note on my site. At this point Mastodon does its federation magic and starts spreading the post to all the instances subscribed to my account. And every single one of them follows the link back to the note on my site …all at the same time.

This isn’t a problem when I syndicate my blog posts, because I’ve got a caching mechanism in place for those. I didn’t think I’d need any caching for little ol’ notes. I was wrong.

A simple solution would be not to include the link back to the original note. But I like the reminder that what you see on Mastodon is just a copy. So now I’ve got the same caching mechanism for my notes as I do for my journal (and I did my links while I was at it). Everything is hunky-dory. I can syndicate to Mastodon with impunity.

See? I told you it would only be of interest to me. Although I guess there’s a lesson here. Something something caching.

Indie Web Camp Nuremberg

After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.

I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.

This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.

A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.

This time I attempted three bits of home improvement on my website.

Autolinking Mastodon usernames

The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.

I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s Mastodon username in a note: @briansuda@loðfíll.is.

That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.

Here’s the regular expression I came up with. It’s not foolproof by any means. Basically it looks for @something@something.something.
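For illustration, a rough approximation of that kind of pattern (not the actual expression) might look like this:

// Look for @something@something.something, allowing non-ASCII characters
// like the ones in @briansuda@loðfíll.is.
const mention = /@([^\s@]+)@([^\s@]+\.[^\s@]+)/gu;

// Turn any matches in a note into links to the corresponding profile.
const note = 'Chatting with @briansuda@loðfíll.is at border:none.';
const linked = note.replace(mention, (match, user, instance) =>
  `<a href="https://${instance}/@${user}">${match}</a>`
);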

Good enough. Ship it.

Related posts

My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions but that’s a very low bar.

I wanted to show related posts when you get to the end of one of my blog posts.

I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.

I don’t feel too bad about the hacky clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.

Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.

I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.

I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
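Sketched out in JavaScript (the real thing is PHP talking to MySQL), the scoring logic looks roughly like this:

// One lookup per tag; each result is a list of IDs of posts sharing that tag.
function relatedPosts(tags, postsByTag, currentPostId) {
  const scores = new Map();
  tags.forEach(tag => {
    postsByTag[tag].forEach(postId => {
      if (postId === currentPostId) return;
      scores.set(postId, (scores.get(postId) || 0) + 1);
    });
  });
  // A post needs to share at least three tags, and at most five are shown.
  return Array.from(scores)
    .filter(([, score]) => score >= 3)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5)
    .map(([postId]) => postId);
}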

It worked out pretty well. If you scroll down on my recent post about JavaScript, you’ll see links to related posts about JavaScript. If you read through a post on accessibility testing, you’ll find other posts about accessibility testing. If you make it to the end of this post about Mars colonisation you’ll see links to more posts about exploring our solar system.

Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.

Link rot

I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.

On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.

The other Jeremy at Indie Web Camp Nuremberg blogged about the session. Sebastian Greger was attending remotely and the session inspired him to spend the second day also tackling link rot.

In the end I decided to stick with Remy’s two-pronged approach:

  1. a client-side script that—as a progressive enhancement—intercepts outbound links and re-routes them to
  2. a server-side script that redirects to the Internet Archive if the link is broken.

Here’s the JavaScript I wrote for the first part.

It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and if it is, I pass on the date from the post’s dt-published value.
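A rough sketch of that kind of interceptor (the redirector endpoint name here is hypothetical):

document.addEventListener('click', event => {
  // Only intercept outbound links.
  const link = event.target.closest('a[href^="http"]');
  if (!link || link.href.startsWith(location.origin)) return;
  // If the link sits inside an h-entry, pass along the post's published date.
  const entry = link.closest('.h-entry');
  const published = entry ? entry.querySelector('.dt-published') : null;
  const date = published ? published.getAttribute('datetime') : '';
  event.preventDefault();
  location.href = '/links/redirect?url=' + encodeURIComponent(link.href) +
    (date ? '&date=' + encodeURIComponent(date) : '');
});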

Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing:

  • Check that the request is coming from my site.
  • There also has to be a URL provided in the query string.
  • Make a very quick curl request to get the response headers from the URL. The time limit is set to 1 second.
  • If there was any error (like a time out), give up and go to the URL.
  • Pick the response headers apart to get the HTTP status code.
  • If the response is OK, go to the URL.
  • If the response is a redirect, go around again but this time use the redirect URL.
  • Construct the archive.org search endpoint.
  • If we have a date, provide it. Otherwise ask for the latest snapshot.
  • Ping that archive.org URL. This time there’s no time limit; this might take a while.
  • If there’s an archived copy, redirect to that.
  • There’s no archived copy. Give up and go to the URL anyway.

Not perfect by any means, but it works for the most common cases of link rot.

For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.

Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.

The Internet Archive’s Wayback Machine really is a gift. I can’t imagine how it would be even remotely possible to try to address link rot on my site without archive.org.

I will continue to donate money to the Internet Archive and I encourage you to do the same.

event.target.closest

Eric mentioned the JavaScript closest method. I use it all the time.

When I wrote the book DOM Scripting back in 2005, I’d estimate that 90% of the JavaScript I was writing boiled down to:

  1. Find these particular elements in the DOM and
  2. When the user clicks on one of them, do something.

It wasn’t just me either. I reckon that was 90% of most JavaScript on the web: progressive disclosure widgets, accordions, carousels, and so on.

That’s one of the reasons why jQuery became so popular. That first step (“find these particular elements in the DOM”) used to be a finicky affair involving getElementsByTagName, getElementById, and other long-winded DOM methods. jQuery came along and allowed us to use CSS selectors.

These days, we don’t need jQuery for that because we’ve got querySelector and querySelectorAll (and we can thank jQuery for their existence).

Let’s say you want to add some behaviour to every button element with a class of special. Or maybe you use a data- attribute instead of the class attribute; the same principle applies. You want something special to happen when the user clicks on one of those buttons.

  1. Use querySelectorAll('button.special') to get a list of all the right elements,
  2. Loop through the list, and
  3. Attach addEventListener('click') to each element.
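In code, those three steps might look something like this:

// doSomethingSpecial is whatever behaviour you want to attach.
const buttons = document.querySelectorAll('button.special');
buttons.forEach(button => {
  button.addEventListener('click', doSomethingSpecial, false);
});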

That’s fine for a while. But if you’ve got a lot of special buttons, you’ve now got a lot of event listeners. You might be asking the browser to do a lot of work.

There’s another complication. The code you’ve written runs once, when the page loads. Suppose the contents of the page have changed in the meantime. Maybe elements are swapped in and out using Ajax. If a new special button shows up on the page, it won’t have an event handler attached to it.

You can switch things around. Instead of adding lots of event handlers to lots of elements, you can add one event handler to the root element. Then figure out whether the element that just got clicked is special or not.

That’s where closest comes in. It makes this kind of event handling pretty straightforward.

To start with, attach the event listener to the document:

document.addEventListener('click', doSomethingSpecial, false);

That function doSomethingSpecial will be executed whenever the user clicks on anything. Meanwhile, if the contents of the document are updated via Ajax, no problem!

Use the closest method in combination with the target property of the event to figure out whether that click was something you’re interested in:

function doSomethingSpecial(event) {
  if (event.target.closest('button.special')) {
    // do something
  }
}

There you go. Like querySelectorAll, the closest method takes a CSS selector—thanks again, jQuery!

Oh, and if you want to reduce the nesting inside that function, you can reverse the logic and return early like this:

function doSomethingSpecial(event) {
  if (!event.target.closest('button.special')) return;
  // do something
}

There’s a similar method to closest called matches. But that will only work if the user clicks directly on the element you’re interested in. If the user clicks on an element nested inside it, matches won’t help you, but closest will.

Like Eric said:

Very nice.

Automation

I just described prototype code as code to be thrown away. On that topic…

I’ve been observing how people are programming with large language models and I’ve seen a few trends.

The first thing that just about everyone agrees on is that the code produced by a generative tool is not fit for public consumption. At least not straight away. It definitely needs to be checked and tested. If you enjoy debugging and doing code reviews, this might be right up your street.

The other option is to not use these tools for production code at all. Instead use them for throwaway code. That could be prototyping. But it could also be the code for those annoying admin tasks that you don’t do very often.

Take content migration. Say you need to grab a data dump, do some operations on the data to transform it in some way, and then pipe the results into a new content management system.

That’s almost certainly something you’d want to automate with bespoke code. Once the content migration is done, the code can be thrown away.

Read Matt’s account of coding up his Braggoscope. The code needed to spider a thousand web pages, extract data from those pages, find similarities, and output the newly-structured data in a different format.

I’ve noticed that these are just the kind of tasks that large language models are pretty good at. In effect you’re training the tool on your own very specific data and getting it to do your drudge work for you.

To me, it feels right that the usefulness happens on your own machine. You don’t put the machine-generated code in front of other humans.

Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda how machine learning works under the hood: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

Web Audio API update on iOS

I documented a weird bug with web audio on iOS a while back:

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

I found a workaround but it was really hacky. By playing a one-second long silent mp3 file using audio, you could “kick” the sound into behaving. Then you can use the Web Audio API and it would play consistently.

Well, that’s all changed with the latest release of Mobile Safari. Now what happens is that the Web Audio stuff plays …for one second. And then stops.

I removed the hacky workaround and the Web Audio API started behaving itself again …but your device can’t be set to silent.

The good news is that the Web Audio behaviour seems to be consistent now. It only plays if the device isn’t muted. This restriction doesn’t apply to video and audio elements; they will still play even if your device is set to silent.

This discrepancy between the two different ways of playing audio is kind of odd, but at least now the Web Audio behaviour is predictable.

You can hear the Web Audio API in action by going to any tune on The Session and pressing the “play audio” button.

Syndicating to Mastodon

I’ve been contemplating a checkbox. The label for this checkbox reads:

This is a bot account

Let me back up…

In what seems like decades ago, but was in fact just a few weeks, Elon Musk bought Twitter and began burning it to the ground. His admirers insist he’s playing some form of four-dimensional chess, but to the rest of us, his actions are indistinguishable from a spoilt rich kid not understanding what a social network is.

It wasn’t giving me much cause for anguish personally. For the past eight years, I’ve only used Twitter as a syndication endpoint for my own notes. But I understand that’s a very privileged position to be in. Most people on Twitter don’t have the same luxury of independence. It’s genuinely maddening and saddening to see their years of sharing destroyed by one cruel idiot.

Lots of people started moving to Mastodon. I figured I should do the same for my syndicated notes.

At first, I signed up for an account on mastodon.cloud. No particular reason. But that’s where I saw this very insightful post from Anil Dash:

When it came time to reckon with social media’s failings, nobody ran to the “web3” platforms. Nobody asked “can I get paid per message”? Nobody asked about the blockchain. The community of people who’ve been quietly doing this work for years (decades!) ended up being the ones who welcomed everyone over, as always.

I was getting my account all set up and beginning to follow some other folks, when I realised that I actually already had an existing account over on mastodon.social. Doh! Turns out that I signed up back in 2017 to kick the tyres, but never did much else because there weren’t many other people around back then. Oh, how times have changed!

Anyway, I thought I had really screwed up by having two accounts but this turned out to be an opportunity to experience some of the thoughtfulness in Mastodon’s design. The process of migrating from one Mastodon account to another—on a completely different instance—was very smooth! It was clear that this wasn’t an afterthought. This is an essential part of the fediverse and the design of the migration flow reflects that.

This gives me enormous peace of mind. If I ever want to switch to a different instance and still keep my network intact, I know it won’t be a problem. Mastodon is like the opposite of the roach-motel mentality that permeates most VC-backed so-called social networks.

As I played around some more—reading, following, exploring—my feelings of fondness only grew stronger. I like this place a lot!

I definitely wanted to syndicate my notes to Mastodon. At first, I implemented a straightforward RSS-to-Mastodon syndication using IFTTT (IF This, Then That), thanks to Matthias’s excellent tutorial.

But that didn’t feel quite right. When I syndicate to Twitter, I make a conscious choice each time. There’s a “Twitter” toggle that I can enable or disable in my posting interface. Mastodon deserved the same level of thoughtfulness.

So I switched off the IFTTT recipe and started exploring the Mastodon API. It’s going to sound like a humblebrag when I tell you that I got cross-posting working in almost no time at all. That’s not a testament to my coding prowess (I’m really not very good), but rather a testament to the Mastodon API, which was a joy to work with.

  1. On your Mastodon instance, go to /settings/applications.
  2. Click on New Application.
  3. Fill in the details about your website and select write:statuses (and probably write:media) from the Scopes list.
  4. Copy Your access token to use in API calls.
  5. Write some sloppy code (in my case, PHP that uses cURL). There’s a rough sketch of this step below.
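That step might look something like this, sketched in JavaScript rather than PHP (the instance URL and access token are placeholders):

const instance = 'https://mastodon.example';
const token = 'YOUR-ACCESS-TOKEN';

// Post a status to the Mastodon API.
async function syndicateNote(text) {
  const response = await fetch(`${instance}/api/v1/statuses`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ status: text })
  });
  return response.json();
}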

I did hit a wall when it came to posting images. That took me a while to get working, and I couldn’t figure out why. Was it something at Mastodon’s end while it was struggling under the influx of new users? As it turns out, no. It was entirely down to me being an idiot. (You know that situation where you’re working on a problem for ages and you’ve become convinced it’s an extremely gnarly rocket-science problem, but then it turns out to be something stupid like a typo? Yeah. That.)

Then there’s the whole question of how to receive replies, likes, and reboosts from Mastodon here on my own site. Luckily, that was super easy, thanks to Brid.gy. One click and I was done. I love Brid.gy!

Take this note, for example. There’s a version on Twitter and a version on Mastodon. The original version on my own site gets responses from both places.

If I’m replying to a response on Twitter, I do not syndicate that to Mastodon.

Likewise, if I’m replying to a response on Mastodon, I do not syndicate that to Twitter.

Oh, one thing worth mentioning: if you’re sending a reply to something on Mastodon using the API, there’s an in_reply_to_id field for you to provide. But you should also include the full @username@instance of the person you’re replying to at the beginning of the message to ensure that it’s displayed as a reply rather than showing up as a regular post. Note the difference between this note on my site and its syndicated version on Mastodon.
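Building on the sketch above, a reply might be posted like this (the ID and username are made up):

// The username goes at the start of the status text, and the ID of the
// post being replied to goes in the in_reply_to_id field.
await fetch(`${instance}/api/v1/statuses`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${token}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    status: '@example@mastodon.social Thanks for the pointer!',
    in_reply_to_id: '109382085896'
  })
});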

Anyway, now I’m posting to Mastodon, but I’m doing it through the interface of my own website. Which brings me to that checkbox in Mastodon’s profile settings:

This is a bot account

The help text reads:

Signal to others that the account mainly performs automated actions and might not be monitored

If I were doing the automatic cross-posting from RSS, I’d definitely tick that box. But as I’m making a conscious decision whenever I syndicate to Mastodon, I think I’m going to leave that checkbox unticked.

My cross-posting is not automated and I’m very much monitoring my Mastodon account …because I’m enjoying my Mastodon experience more than I’ve enjoyed anything online for quite some time. Highly recommended!

JavaScript

A recurring theme in my writing and talks is “lay off the JavaScript, people!” But I have to make a conscious effort to specify that I mean client-side JavaScript.

I thought it would be obvious from the context that I was talking about the copious amounts of JavaScript being shipped to end users to download, parse, and execute. But nothing’s ever really obvious. If I don’t explicitly say JavaScript in the browser, then someone inevitably thinks I’m having a go at JavaScript, the language.

I have absolutely nothing against JavaScript the language. Just like I have nothing against Python or Ruby or any other language that you might write with on your machine or your web server. But as soon as you deliver bytes over the wire, I start having opinions. It just so happens that JavaScript is the universal language for client-side coding so that’s why I call for restraint with JavaScript specifically.

There was a time when JavaScript only existed in web browsers. That changed with Node. Now it’s possible to write code for your web server and code for web browsers using the same language. Very handy!

But just because it’s the same language doesn’t mean you should treat it the same in both circumstances. As Remy puts it:

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

I was reading something recently that referred to Eleventy as a JavaScript library. It really brought me up short. I mean, on the one hand, yes, it’s a library of code and it’s written in JavaScript. It is absolutely technically correct to call it a JavaScript library.

But in my mind, a JavaScript library is something you ship to web browsers—jQuery, React, Vue, and so on. Whereas Eleventy executes its code in order to generate HTML and that’s what gets sent to end users. Conceptually, it’s like the opposite of a JavaScript library. Eleventy does its work before any user requests a URL—JavaScript libraries do their work after a user requests a URL.

To me it seems obvious that there should be an entirely different mindset for writing code intended for a web browser. But nothing’s ever really obvious.

I remember when Node was getting really popular and npm came along as a way to manage all the bundles of code that people were assembling in their Node programmes. Makes total sense. But then I thought I heard about people using npm to do the same thing for client-side code. “That can’t be right!” I thought. I must’ve misunderstood. So I talked to someone from npm and explained how I must be misunderstanding something.

But it turned out that people really were treating client-side JavaScript no different than server-side JavaScript. People really were pulling in megabytes of other people’s code to ship to end users so that they could, I dunno, left pad numbers or something.

Listen, I don’t care what you get up to in the privacy of your own codebase. But don’t poison the well of the web with profligate client-side JavaScript.

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processers, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a category one tool, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “if I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax but the overwhelming majority of people used .scss.

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, but another of his creations, Haml, was not, comparatively speaking—Sass is a superset of CSS but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.

Web Audio API weirdness on iOS

I told you about how I’m using the Web Audio API on The Session to generate synthesised audio of each tune setting. I also said:

Except for some weirdness on iOS that I had to fix.

Here’s that weirdness…

Let me start by saying that this isn’t anything to do with requiring a user interaction (the Web Audio API insists on some kind of user interaction to prevent developers from having auto-playing sound on websites). All of my code related to the Web Audio API is inside a click event handler. This is a different kind of weirdness.

First of all, I noticed that if you pressed play on the audio player when your iOS device is on mute, then you don’t hear any audio. Seems logical, right? Except that if you press play on a video or audio element on the same device, still set to mute, the sound plays just fine. You can confirm this by going to Huffduffer and pressing play on any of the audio elements there, even when your iOS device is set to mute.

So it seems that iOS has different criteria for the Web Audio API than it does for audio or video. Except it isn’t quite that straightforward.

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

Following my theory that the browser needs a “kick” to get into the right frame of mind for the Web Audio API, I resorted to a messy little hack.

In the event handler for the audio player, I generate the “kick” by playing a second of silence using the JavaScript equivalent of the audio element:

var audio = new Audio('1-second-of-silence.mp3');
audio.play();

I’m not proud of that. It’s so hacky that I’ve even wrapped the code in some user-agent sniffing on the server, and I never do user-agent sniffing!

Still, if you ever find yourself getting weird but inconsistent behaviour on iOS using the Web Audio API, this nasty little hack could help.

Update: Time to remove this workaround. Mobile Safari has been updated.

Plumbing

On Monday, I linked to Tom’s latest video. It uses a clever trick whereby the title of the video is updated to match the number of views the video has had. But there’s a lot more to the video than that. Stick around and you’ll be treated to a meditation on the changing nature of APIs, from a shared open lake to a closed commercial drybed.

It reminds me of (other) Tom’s post from a couple of years ago called Pouring one out for the Boxmakers, wherein he talks about Twitter’s crackdown on fun bots:

Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.

A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.

Both Toms echo the sentiment in Anil’s The Web We Lost, written back in 2012:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

I know, I know. We’re a bunch of old men shouting at The Cloud. But really, Anil is right:

This isn’t our web today. We’ve lost key features that we used to rely on, and worse, we’ve abandoned core values that used to be fundamental to the web world. To the credit of today’s social networks, they’ve brought in hundreds of millions of new participants to these networks, and they’ve certainly made a small number of people rich.

But they haven’t shown the web itself the respect and care it deserves, as a medium which has enabled them to succeed. And they’ve now narrowed the possibilities of the web for an entire generation of users who don’t realize how much more innovative and meaningful their experience could be.

In his video, Tom mentions Yahoo Pipes as an example of a service that has been shut down for commercial and ideological reasons. In many ways, it was the epitome of what Anil was talking about—a sort of meta-API that allowed you to connect different services together. Kinda like IFTTT but with a visual interface that made it as empowering as something like the Scratch programming language.

There are services today that provide some of that functionality, but they’re more developer-focused. Trys pointed me to Pipedream, which looks good but you need to know how to write Node.js code and import npm packages. I’m sure it’s great if you’re into serverless Jamstack lambda thingamybobs but I don’t think it’s going to unlock the potential for non-coders to create cool stuff.

On the more visual pipes-esque Scratchy side, Cassie pointed me to Cables:

Cables is a tool for creating beautiful interactive content.

It isn’t about making mashups, but it does look like something that non-coders could potentially use to make something that looks cool. It reminds me a bit of Bret Victor and his classic talk on Inventing On Principle—always worth revisiting!

Indy maps

Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).

Alas, no such response was forthcoming. The hoped-for schooling never forthcame.

Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.

Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?

This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp where I had been speaking at Full Stack Europe, I started hacking away on making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.

I kept coming back to my original goal:

I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.

If you’d like to see the maps in action, click the “play” button on any of these maps:

You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.

First of all, the research department for adactio.com (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the adactio.com ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.
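For what it’s worth, here’s roughly what those eleven steps boil down to in JavaScript. This is a sketch put together after the fact (delta-encoding each point, as the full spec describes), not the code I was hunting for that day.

// A sketch of the Encoded Polyline Algorithm Format. Each latitude and longitude
// is scaled by 1e5, rounded, and encoded as the delta from the previous point.
function encodeSignedNumber(num) {
  // steps 3-5: left-shift the value one bit, invert it if the original was negative
  let value = num < 0 ? ~(num << 1) : (num << 1);
  let encoded = '';
  // steps 6-11: 5-bit chunks (lowest bits first), OR with 0x20 while more chunks
  // follow, add 63, then convert to the ASCII character
  while (value >= 0x20) {
    encoded += String.fromCharCode((0x20 | (value & 0x1f)) + 63);
    value >>= 5;
  }
  return encoded + String.fromCharCode(value + 63);
}

function encodePolyline(points) {
  let previousLat = 0;
  let previousLon = 0;
  let result = '';
  points.forEach(function ([lat, lon]) {
    // steps 1-2: take the signed value and multiply it by 1e5
    const latE5 = Math.round(lat * 1e5);
    const lonE5 = Math.round(lon * 1e5);
    result += encodeSignedNumber(latE5 - previousLat) + encodeSignedNumber(lonE5 - previousLon);
    previousLat = latE5;
    previousLon = lonE5;
  });
  return result;
}

// The worked example from the documentation:
// encodePolyline([[38.5, -120.2], [40.7, -120.95], [43.252, -126.453]])
// should give "_p~iF~ps|U_ulLnnqC_mqNvxq`@"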

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
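The gist of it looks something like this. It’s a simplified sketch rather than the actual script: the CDN URLs, the markup of the h-geo elements, and where the map ends up in the page are all placeholders.

// A simplified sketch: find h-geo coordinates, lazy-load Leaflet, then draw a
// polyline connecting the points. The real script differs in the details.
window.addEventListener('load', function () {
  const geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) return;
  const points = Array.from(geos, function (geo) {
    return [
      parseFloat(geo.querySelector('.p-latitude').textContent),
      parseFloat(geo.querySelector('.p-longitude').textContent)
    ];
  });
  // pull in Leaflet's stylesheet and script only once we know we need them
  const stylesheet = document.createElement('link');
  stylesheet.rel = 'stylesheet';
  stylesheet.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(stylesheet);
  const script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = function () {
    const container = document.createElement('div');
    container.style.height = '20em';
    document.body.appendChild(container);
    const map = L.map(container);
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);
    const line = L.polyline(points, {color: '#c00'}).addTo(map);
    map.fitBounds(line.getBounds());
  };
  document.head.appendChild(script);
});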

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.

I’ve put these maps on any of the archive pages that also have calendar heatmaps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Going Offline—the talk of the book

I gave a new talk at An Event Apart in Seattle yesterday morning. The talk was called Going Offline, which the eagle-eyed amongst you will recognise as the title of my most recent book, all about service workers.

I was quite nervous about this talk. It’s very different from my usual fare. Usually I have some big sweeping arc of history, and lots of pretentious ideas joined together into some kind of narrative arc. But this talk needed to be more straightforward and practical. I wasn’t sure how well I would manage that brief.

I knew from pretty early on that I was going to show—and explain—some code examples. Those were the parts I sweated over the most. I knew I’d be presenting to a mixed audience of designers, developers, and other web professionals. I couldn’t assume too much existing knowledge. At the same time, I didn’t want to teach anyone to suck eggs.

In the end, there was an overarching meta-theme to the talk, which was this: logic is more important than code. In other words, figuring out what you’re trying to accomplish (and describing it clearly) is more important than typing curly braces and semi-colons. Programming is an act of translation. Before you can translate something, you need to be able to articulate it clearly in your own language first. By emphasising that point, I hoped to make the code less overwhelming to people unfamiliar with it.

I had tested the talk with some of my Clearleft colleagues, and they gave me great feedback. But I never know until I’ve actually given a talk in front of a real conference audience whether the talk is any good or not. Now that I’ve given the talk, and received more feedback, I think I can confidently say that it’s pretty damn good.

My goal was to explain some fairly gnarly concepts—let’s face it: service workers are downright weird, and not the easiest thing to get your head around—and to leave the audience with two feelings:

  1. This is exciting, and
  2. This is something I can do today.

I deliberately left time for questions, bribing people with free copies of my book. I got some great questions, and I may incorporate some of them into future versions of this talk (conference organisers, if this sounds like the kind of talk you’d like at your event, please get in touch). Some of the points brought up in the questions were:

  • Is there some kind of wizard for creating a typical service worker script for any site? I didn’t have a direct answer to this, but I have attempted to make a minimal viable service worker that could be used for just about any site (there’s a rough sketch of what I mean after this list). Mostly I encouraged the questioner to roll their sleeves up and try writing a bespoke script. I also mentioned the Workbox library, but I gave my opinion that if you’re going to spend the time to learn the library, you may as well spend the time to learn the underlying language.
  • What are some state-of-the-art progressive web apps for offline user experiences? Ooh, this one kind of stumped me. I mean, the obvious poster children for progressive web apps are things like Twitter, Instagram, and Pinterest. They’re all great but the offline experience is somewhat limited. To be honest, I think there’s more potential for great offline experiences by publishers. I especially love the pattern on personal sites like Una’s and Sara’s where people can choose to save articles offline to read later—like a bespoke Instapaper or Pocket. I’d love to see that pattern adopted by some big publications. I particularly like that it gives so much more control directly to the end user. Instead of trying to guess what kind of offline experience they want, we give them the tools to craft their own.
  • Do caches get cleaned up automatically? Great question! And the answer is mostly no—although browsers do have their own heuristics about how much space you get to play with. There’s a whole chapter in my book about being a good citizen and cleaning up your caches, but I didn’t include that in the talk because it isn’t exactly exciting: “Hey everyone! Now we’re going to do some housekeeping—yay!”
  • Isn’t there potential for abuse here? This is related to the previous question, and it’s another great question to ask of any technology. In short, yes. Bad actors could use service workers to fill up caches unnecessarily. I’ve written about back door service workers too, although the real problem there is with iframes rather than service workers—iframes and cookies are technologies that are already being abused by bad actors, and we’re going to see more and more interventions by ethical browser makers (like Mozilla) to clamp down on those technologies …just as browsers had to clamp down on the abuse of pop-up windows in the early days of JavaScript. The cache API could become a tragedy of the commons. I liken the situation to regulation: we should self-regulate, but if we prove ourselves incapable of that, then outside regulation (by browsers) will be imposed upon us.
  • What kind of things are in the future for service workers? Excellent question! If you think about it, a service worker is kind of a conduit that gives you access to different APIs: the Cache API and the Fetch API being the main ones now. A service worker is like an airport and the APIs are like the airlines. There are other APIs that you can access through service workers. Notifications are available now on desktop and on Android, and they’ll be coming to iOS soon. Background Sync is another powerful API accessed through service workers that will get more and more browser support over time. The great thing is that you can start using these APIs today even if they aren’t universally supported. Then, over time, more and more of your users will benefit from those enhancements.
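To give you an idea of what I mean by a minimal viable service worker, here’s the general shape of one. It’s a rough sketch rather than the exact script I linked to; the cache name and offline page are placeholders.

// A rough sketch of a minimal service worker: go to the network, keep a copy of
// successful responses, and fall back to the cache (or an offline page) otherwise.
const cacheName = 'cache-v1'; // placeholder cache name
const offlinePage = '/offline.html'; // placeholder fallback page

addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(cacheName)
    .then(cache => cache.addAll([offlinePage]))
  );
});

addEventListener('fetch', function (event) {
  const request = event.request;
  if (request.method !== 'GET') return;
  event.respondWith(
    fetch(request)
    .then(function (responseFromFetch) {
      // stash a copy of this response in the cache for later
      const copy = responseFromFetch.clone();
      caches.open(cacheName)
      .then(cache => cache.put(request, copy));
      return responseFromFetch;
    })
    .catch(function () {
      // the network let us down: try the cache, then the offline page
      return caches.match(request)
      .then(responseFromCache => responseFromCache || caches.match(offlinePage));
    })
  );
});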

If you attended the talk and want to learn more about service workers, there’s my book (obvs), but I’ve also written lots of blog posts about service workers and I’ve linked to lots of resources too.

Finally, here’s a list of links to all the books, sites, and articles I referenced in my talk…

Books

Sites

Progressive Web Apps

A tiny lesson in query selection

We have a saying at Clearleft:

Everything is a tiny lesson.

I bet you learn something new every day, even if it’s something small. These small tips and techniques can easily get lost. They seem almost not worth sharing. But it’s the small stuff that takes the least effort to share, and often provides the most reward for someone else out there. Take, for example, this great tip for getting assets out of Sketch that Cassie shared with me.

Cassie was working on a piece of JavaScript yesterday when we spotted a tiny lesson that tripped up both of us. The script was a fairly straightforward piece of DOM scripting. As a general rule, we do a sort of feature detection near the start of the script. Let’s say you’re using querySelector to get a reference to an element in the DOM:

var someElement = document.querySelector('.someClass');

Before going any further, check to make sure that the reference isn’t falsey (in other words, make sure that DOM node actually exists):

if (!someElement) return;

That will exit the script if there’s no element with a class of someClass on the page.

The situation that tripped us up was like this:

var myLinks = document.querySelectorAll('a.someClass');

if (!myLinks) return;

That should exit the script if there are no A elements with a class of someClass, right?

As it turns out, querySelectorAll is subtly different to querySelector. If the selector you pass to querySelector doesn’t match anything, it returns a value of null. But querySelectorAll always returns a list (well, technically it’s a NodeList, not an array, but same difference mostly). So if the selector you pass to querySelectorAll doesn’t match anything, it still returns a NodeList, just an empty one. That means instead of just testing for its existence, you need to test that it’s not empty by checking its length property:

if (!myLinks.length) return;
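To see the difference for yourself, try something like this in your browser’s console (assuming nothing on the page actually has the class does-not-exist):

// querySelector gives you null; querySelectorAll gives you an empty (but truthy) NodeList
console.log(document.querySelector('.does-not-exist'));                    // null
console.log(document.querySelectorAll('.does-not-exist'));                 // NodeList []
console.log(Boolean(document.querySelectorAll('.does-not-exist')));        // true
console.log(Boolean(document.querySelectorAll('.does-not-exist').length)); // false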

That’s a tiny lesson.

Switching

Chris has written about switching code editors. I’m a real stick-in-the-mud when it comes to switching editors. Partly that’s because I’m generally pretty happy with whatever I’m using (right now it’s Atom) but it’s also because I just don’t get that excited about software like this. I probably should care more; I spend plenty of time inside a code editor. And I should really take the time to get to grips with features like keyboard shortcuts—I’m sure I’m working very inefficiently. But, like I said, I find it hard to care enough, and on the whole, I’m content.

I was struck by this observation from Chris:

When moving, I have to take time to make sure it works pretty much like the old one.

That reminded me of a recent switch I made, not with code editors, but with browsers.

I’ve been using Chrome for years. One day it started crashing a lot. So I decided to make the switch to Firefox. Looking back, I’m glad to have had this prompt—I think it’s good to shake things up every now and then, so I don’t get too complacent (says the hypocrite who can’t be bothered to try a new code editor).

Just as Chris noticed with code editors, it was really important that I could move bookmarks (and bookmarklets!) over to my new browser. On the whole, it went pretty smoothly. I had to seek out a few browser extensions but that was pretty much it. And because I use a password manager, logging into all my usual services wasn’t a hassle.

Of all the pieces of software on my computer, the web browser is the one where I definitely spend the most time: reading, linking, publishing. At this point, I’m very used to life with Firefox as my main browser. It’s speedy and stable, and the dev tools are very similar to Chrome’s.

Maybe I’ll switch to Safari at some point. Like I said, I think it’s good to shake things up and get out of my comfort zone.

Now, if I really wanted to get out of my comfort zone, I’d switch operating systems like Dave did with his move to Windows. And I should really try using a different phone OS. Again, this is something that Dave tried with his switch to Android (although that turned out to be unacceptably creepy), and Paul did it ages ago using a Windows phone for a week.

There’s probably a balance to be struck here. I think it’s good to change code editors, browsers, even operating systems and phones every now and then, but I don’t want to feel like I’m constantly in learning mode. There’s something to be said for using tools that are comfortable and familiar, even if they’re outdated.