Journal tags: back

Federation syndication

I’m quite sure this is of no interest to anyone but me, but I finally managed to fix a longstanding weird issue with my website.

I realise that me telling you about a bug specific to my website is like me telling you about a dream I had last night—fascinating for me; incredibly dull for you.

For some reason, my site was being brought to its knees anytime I syndicated a note to Mastodon. I rolled up my sleeves to try to figure out what the problem could be. I was fairly certain the problem was with my code—I’m not much of a back-end programmer.

My tech stack is classic LAMP: Linux, Apache, MySQL and PHP. When I post a note, it gets saved to my database. Then I make a curl request to the Mastodon API to syndicate the post over there. That’s when my CPU starts climbing and my server gets all “bad gateway!” on me.
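
For the curious, that syndication step amounts to a single authenticated POST to the statuses endpoint. Here’s a rough sketch using fetch rather than my actual PHP (the instance URL and access token are placeholders):

// A sketch of the syndication step, inside an async function.
// The instance URL and accessToken are placeholders.
const response = await fetch('https://mastodon.example/api/v1/statuses', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + accessToken,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    status: 'The text of the note …plus a link back to the original URL'
  })
});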

After spending far too long pulling apart my PHP and curl code, I had to come to the conclusion that I was doing nothing wrong there.

I started watching which processes were making the server fall over. It was MySQL. That seemed odd, because I’m not doing anything too crazy with my database reads.

Then I realised that the problem wasn’t any particular query. The problem was volume. But it only happened when I posted a note to Mastodon.

That’s when I had a lightbulb moment about how the fediverse works.

When I post a note to Mastodon, it includes a link back to the original note on my site. At this point Mastodon does its federation magic and starts spreading the post to all the instances subscribed to my account. And every single one of them follows the link back to the note on my site …all at the same time.

This isn’t a problem when I syndicate my blog posts, because I’ve got a caching mechanism in place for those. I didn’t think I’d need any caching for little ol’ notes. I was wrong.

A simple solution would be not to include the link back to the original note. But I like the reminder that what you see on Mastodon is just a copy. So now I’ve got the same caching mechanism for my notes as I do for my journal (and I did the same for my links while I was at it). Everything is hunky-dory. I can syndicate to Mastodon with impunity.

See? I told you it would only be of interest to me. Although I guess there’s a lesson here. Something something caching.

Button types

I’ve been banging the drum for a button type="share" for a while now.

I’ve also written about other potential button types. The pattern I noticed was that, if a JavaScript API first requires a user interaction—like the Web Share API—then that’s a good hint that a declarative option would be useful:

The Fullscreen API has the same restriction. You can’t make the browser go fullscreen unless you’re responding to a user gesture, like a click. So why not have button type="fullscreen" in HTML to encapsulate that? And again, the fallback in non-supporting browsers is predictable—it behaves like a regular button—so this is trivial to polyfill.

There’s another “smell” that points to some potential button types: what functionality do browsers provide in their interfaces?

Some browsers provide a print button. So how about button type="print"? The functionality is currently doable with button onclick="window.print()" so this would be a nicer, more declarative way of doing something that’s already possible.
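
As an illustration, polyfilling a hypothetical button type="print" could be as simple as listening for clicks. A sketch:

// A sketch of a polyfill for a hypothetical button type="print".
// In non-supporting browsers the button just acts like a regular
// button, so all the polyfill needs to do is listen for clicks.
document.addEventListener('click', function (event) {
  const button = event.target.closest('button[type="print"]');
  if (button) {
    window.print();
  }
});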

It’s the same with back buttons, forward buttons, and refresh buttons. The functionality is available through a browser interface, and it’s also scriptable, so why not have a declarative equivalent?

How about bookmarking?

And remember, the browser interface isn’t always visible: progressive web apps that launch with minimal browser UI need to provide this functionality.

Šime Vidas was wondering about button type="copy" for copying to clipboard. Again, it’s something that’s currently scriptable and requires a user gesture. It’s a little more complex than the other actions because there needs to be some way of providing the text to be copied, but it’s definitely a valid use case.
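
In the meantime, the scripted version might look something like this: a sketch using the Clipboard API, with a made-up data-copy attribute supplying the text.

// A sketch of a copy button, assuming a made-up data-copy
// attribute provides the text to be copied.
document.addEventListener('click', function (event) {
  const button = event.target.closest('button[data-copy]');
  if (button) {
    navigator.clipboard.writeText(button.dataset.copy);
  }
});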

  • button type="share"
  • button type="fullscreen"
  • button type="print"
  • button type="bookmark"
  • button type="back"
  • button type="forward"
  • button type="refresh"
  • button type="copy"

Any more?

JavaScript

A recurring theme in my writing and talks is “lay off the JavaScript, people!” But I have to make a conscious effort to specify that I mean client-side JavaScript.

I thought it would be obvious from the context that I was talking about the copious amounts of JavaScript being shipped to end users to download, parse, and execute. But nothing’s ever really obvious. If I don’t explicitly say JavaScript in the browser, then someone inevitably thinks I’m having a go at JavaScript, the language.

I have absolutely nothing against JavaScript the language. Just like I have nothing against Python or Ruby or any other language that you might write with on your machine or your web server. But as soon as you deliver bytes over the wire, I start having opinions. It just so happens that JavaScript is the universal language for client-side coding so that’s why I call for restraint with JavaScript specifically.

There was a time when JavaScript only existed in web browsers. That changed with Node. Now it’s possible to write code for your web server and code for web browsers using the same language. Very handy!

But just because it’s the same language doesn’t mean you should treat it the same in both circumstances. As Remy puts it:

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

I was reading something recently that referred to Eleventy as a JavaScript library. It really brought me up short. I mean, on the one hand, yes, it’s a library of code and it’s written in JavaScript. It is absolutely technically correct to call it a JavaScript library.

But in my mind, a JavaScript library is something you ship to web browsers—jQuery, React, Vue, and so on. Whereas Eleventy executes its code in order to generate HTML and that’s what gets sent to end users. Conceptually, it’s like the opposite of a JavaScript library. Eleventy does its work before any user requests a URL—JavaScript libraries do their work after a user requests a URL.

To me it seems obvious that there should be an entirely different mindset for writing code intended for a web browser. But nothing’s ever really obvious.

I remember when Node was getting really popular and npm came along as a way to manage all the bundles of code that people were assembling in their Node programmes. Makes total sense. But then I thought I heard about people using npm to do the same thing for client-side code. “That can’t be right!” I thought. I must’ve misunderstood. So I talked to someone from npm and explained how I must be misunderstanding something.

But it turned out that people really were treating client-side JavaScript no different than server-side JavaScript. People really were pulling in megabytes of other people’s code to ship to end users so that they could, I dunno, left pad numbers or something.

Listen, I don’t care what you get up to in the privacy of your own codebase. But don’t poison the well of the web with profligate client-side JavaScript.

Media queries with display-mode

It’s said that the best way to learn about something is to teach it. I certainly found that to be true when I was writing the web.dev course on responsive design.

I felt fairly confident about some of the topics, but I felt somewhat out of my depth when it came to some of the newer additions to browsers. The last few modules in particular were unexplored areas for me, with topics like screen configurations and media features. I learned a lot about those topics by writing about them.

Best of all, I got to put my new-found knowledge to use! Here’s how…

The Session is a progressive web app. If you add it to the home screen of your mobile device, then when you launch the site by tapping on its icon, it behaves just like a native app.

In the web app manifest file for The Session, the display property is set to “standalone.” That means it will launch without any browser chrome: no address bar and no back button. It’s up to me to provide the functionality that the browser usually takes care of.
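
The relevant part of the manifest looks something like this (the values other than display are purely illustrative):

{
  "name": "The Session",
  "short_name": "The Session",
  "start_url": "/",
  "display": "standalone"
}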

So I added a back button in the navigation interface. It only appears on small screens.

Do you see the assumption I made?

I figured that the back button was most necessary in the situation where the site had been added to the home screen. That only happens on mobile devices, right?

Nope. If you’re using Chrome or Edge on a desktop device, you will be actively encouraged to “install” The Session. If you do that, then just as on mobile, the site will behave like a standalone native app and launch without any browser chrome.

So desktop users who install the progressive web app don’t get any back button (because in my CSS I declare that the back button in the interface should only appear on small screens).

I was alerted to this issue on The Session:

It downloaded for me but there’s a bug, Jeremy - there doesn’t seem to be a way to go back.

Luckily, this happened as I was writing the module on media features. I knew exactly how to solve this problem because now I knew about the existence of the display-mode media feature. It allows you to write media queries that match the possible values of the display property in a web app manifest:

.goback {
  display: none;
}
@media (display-mode: standalone) {
  .goback {
    display: inline;
  }
}

Now the back button shows up if you “install” The Session, regardless of whether that’s on mobile or desktop.

Previously I made the mistake of inferring whether or not to show the back button based on screen size. But the display-mode media feature allowed me to test the actual condition I cared about: is this user navigating in standalone mode?
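
And if you ever need to test the same condition from JavaScript, there’s a matching method. A quick sketch:

// The JavaScript counterpart to the display-mode media query.
if (window.matchMedia('(display-mode: standalone)').matches) {
  // Launched as an installed app, without browser chrome.
}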

If I hadn’t been writing about media features, I don’t think I would’ve been able to solve the problem. It’s a really good feeling when you’ve just learned something new, and then you immediately find exactly the right use case for it!

Getting back

The three-part almost nine-hour long documentary Get Back is quite fascinating.

First of all, the fact that all this footage exists is remarkable. It’s as if Disney had announced that they’d found the footage for a film shot between Star Wars and The Empire Strikes Back.

Still, does this treasure trove really warrant the daunting length of this new Beatles documentary? As Terence puts it:

There are two problems with this Peter Jackson documentary. The first is that it is far too long - are casual fans really going to sit through 9 hours of a band bickering? The second problem is that it is far too short! Beatles obsessives (like me) could happily drink in a hundred hours of this stuff.

In some ways, watching Get Back is like watching one of those Andy Warhol art projects where he just pointed a camera at someone for 24 hours. It’s simultaneously boring and yet oddly mesmerising.

What struck myself and Jessica watching Get Back was how much it was like our experience of playing with Salter Cane. I’m not saying Salter Cane are like The Beatles. I’m saying that The Beatles are like Salter Cane and every other band on the planet when it comes to how the sausage gets made. The same kind of highs. The same kind of lows. And above all, the same kind of tedium. Spending hours and hours in a practice room or a recording studio is simultaneously exciting and dull. This documentary captures that perfectly.

I suppose Peter Jackson could’ve made a three-part fly-on-the-wall documentary series about any band and I would’ve found it equally interesting. But this is The Beatles and that means there’s a whole mythology that comes along for the ride. So, yes, it’s like watching paint dry, but on the other hand, it’s paint painted by The Beatles.

What I liked about Get Back is that it demystified the band. The revelation for me was really understanding that this was just four lads from Liverpool making music together. And I know I shouldn’t be surprised by that—the Beatles themselves spent years insisting they were just four lads from Liverpool making music together, but, y’know …it’s The Beatles!

There’s a scene in the Danny Boyle film Yesterday where the main character plays Let It Be for the first time in a world where The Beatles have never existed. It’s one of the few funny parts of the film. It’s funny because to everyone else it’s just some new song but we, the audience, know that it’s not just some new song…

Christ, this is Let It Be! You’re the first people on Earth to hear this song! This is like watching Da Vinci paint the Mona Lisa right in front of your bloody eyes!

But truth is even more amusing than fiction. In the first episode of Get Back, we get to see when Paul starts noodling on the piano playing Let It Be for the first time. It’s a momentous occasion and the reaction from everyone around him is …complete indifference. People are chatting, discussing a set design that will never get built, and generally ignoring the nascent song being played. I laughed out loud.

There’s another moment when George brings in the song he wrote the night before, I Me Mine. He plays it while John and Yoko waltz around. It’s in 3/4 time and it’s in a minor key. I turned to Jessica and said “That’s the most Salter Cane sounding one.” Then, I swear at that moment, after George has stopped playing that song, he plays a brief little riff on the guitar that sounded exactly like a Salter Cane song we’re working on right now. Myself and Jessica turned to each other and said, “What was that‽”

Funnily enough, when we told this to Chris, the singer in Salter Cane, he mentioned how that was the scene that had stood out to him as well, but not for that riff (he hadn’t noticed the similarity). For him, it was about how George had brought just a scrap of a song. Chris realised it was the kind of scrap that he would come up with, but then discard, thinking there’s not enough there. So maybe there’s a lesson here about sharing those scraps.

Watching Get Back, I was trying to figure out if it was so fascinating to me and Jessica (and Chris) because we’re in a band. Would it resonate with other people?

The answer, it turns out, is yes, very much so. Everyone’s been sharing that clip of Paul coming up with the beginnings of the song Get Back. The general reaction is one of breathless wonder. But as Chris said, “How did you think songs happened?” His reaction was more like “yup, accurate.”

Inevitably, there are people mining the documentary for lessons in creativity, design, and leadership. There are already Medium think-pieces and newsletters analysing the processes on display. I guarantee you that there will be multiple conference talks at UX events over the next few years that will include footage from Get Back.

I understand how you could watch this documentary and take away the lesson that these were musical geniuses forging remarkable works of cultural importance. But that’s not what I took from it. I came away from it thinking they’re just a band who wrote and recorded some songs. Weirdly, that made me appreciate The Beatles even more. And it made me appreciate all the other bands and all the other songs out there.

Upgrade paths

After I jotted down some quick thoughts last week on the disastrous way that Google Chrome rolled out a breaking change, others have posted more measured and incisive takes:

In fairness to Google, the Chrome team is receiving the brunt of the criticism because they were the first movers. Mozilla and Apple are on board with making the same breaking change, but Google is taking the lead on this.

As I said in my piece, my issue was less to do with whether confirm(), prompt(), and alert() should be deprecated but more to do with how it was done, and the woeful lack of communication.

Thinking about it some more, I realised that what bothered me was the lack of an upgrade path. Considering that dialog is nowhere near ready for use, it seems awfully cart-before-horse-putting to first remove a feature and then figure out a replacement.

I was chatting to Amber recently and realised that there was a very different example of a feature being deprecated in web browsers…

We were talking about the KeyboardEvent.keyCode property. Did you get the memo that it’s deprecated?

But fear not! You can use the KeyboardEvent.code property instead. It’s much nicer to use too. You don’t need to look up a table of numbers to figure out how to refer to a specific key on the keyboard—you use its actual value instead.
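
Here’s the difference side by side: a numeric lookup versus a readable name.

// The deprecated way: a number you have to look up in a table.
document.addEventListener('keydown', function (event) {
  if (event.keyCode === 13) {
    // That's the enter key, believe it or not.
  }
});

// The newer way: the actual value, no lookup required.
document.addEventListener('keydown', function (event) {
  if (event.code === 'Enter') {
    // Much nicer.
  }
});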

So the way that change was communicated was:

Hey, you really shouldn’t use the keycode property. Here’s a better alternative.

But with the more recent change, the communication was more like:

Hey, you really shouldn’t use confirm(), prompt(), or alert(). So go fuck yourself.

Foundations

There was quite a kerfuffle recently about a feature being removed from Google Chrome. To be honest, the details don’t really matter for the point I want to make, but for the record, this was about removing alert and confirm dialogs from cross-origin iframes (and eventually everywhere else too).

It’s always tricky to remove a long-established feature from web browsers, but in this case there were significant security and performance reasons. The problem was how the change was communicated. It kind of wasn’t. So the first that people found out about it was when things suddenly stopped working (like CodePen embeds).

The Chrome team responded quickly and the change has now been pushed back to next year. Hopefully there will be significant communication before that to let site owners know about the upcoming breakage.

So all’s well that ends well and we’ve all learned a valuable lesson about the importance of communication.

Or have we?

While this was going on, Emily Stark tweeted a more general point about breakage on the web:

Breaking changes happen often on the web, and as a developer it’s good practice to test against early release channels of major browsers to learn about any compatibility issues upfront.

Yikes! To me, this appears wrong on almost every level.

First of all, breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

I wasn’t the only one surprised by this message.

Simon says:

No, no, no, no! One of the best things about developing for the web is that, as a rule, browsers don’t break old code. Expecting every website and application to have an active team of developers maintaining it at all times is not how the web should work!

Edward Faulkner:

Most organizations and individuals do not have the resources to properly test and debug their website against Chrome canary every six weeks. Anybody who published a spec-compliant website should be able to trust that it will keep working.

Evan You:

This statement seriously undermines my trust in Google as steward for the web platform. When did we go from “never break the web” to “yes we will break the web often and you should be prepared for it”?!

It’s worth pointing out that the original tweet was not an official Google announcement. As Emily says right there on her Twitter account:

Opinions are my own.

Still, I was shaken to see such a cavalier attitude towards breaking changes on the World Wide Web. I know that removing dangerous old features is inevitable, but it should also be exceptional. It should not be taken lightly, and it should certainly not be expected to be an everyday part of web development.

It’s almost miraculous that I can visit the first web page ever published in a modern web browser and it still works. Let’s not become desensitised to how magical that is. I know it’s hard work to push the web forward, constantly add new features, while also maintaining backward compatibility, but it sure is worth it! We have collectively banked three decades’ worth of trust in the web as a stable place to build a home. Let’s not blow it.

If you published a website ten or twenty years ago, and you didn’t use any proprietary technology but only stuck to web standards, you should rightly expect that site to still work today …and still work ten and twenty years from now.

There was something else that bothered me about that tweet and it’s not something that I saw mentioned in the responses. There was an unspoken assumption that the web is built by professional web developers. That gave me a cold chill.

The web has made great strides in providing more and more powerful features that can be wielded in learnable, declarative, forgiving languages like HTML and CSS. With a bit of learning, anyone can make web pages complete with form validation, lazily-loaded responsive images, and beautiful grids that kick in on larger screens. The barrier to entry for all of those features has lowered over time—they used to require JavaScript or complex hacks. And with free(!) services like Netlify, you could literally drag a folder of web pages from your computer into a browser window and boom!, you’ve published to the entire world.

But the common narrative in the web development community—and amongst browser makers too apparently—is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today.

Absolute bollocks.

You can choose to make it really complicated. Convince yourself that “the modern web” is inherently complex and convoluted. But then look at what makes it complex and convoluted: toolchains, build tools, pipelines, frameworks, libraries, and abstractions. Please try to remember that none of those things are required to make a website.

This is for everyone. Not just for everyone to consume, but for everyone to make.

Employee experience design on the Clearleft podcast

The second episode of the second season of the Clearleft podcast is out. It’s all about employee experience design.

This topic came out of conversations with Katie. She really enjoys getting stuck into the design challenges of the “backstage” tools that are often neglected. This is an area that Chris has been working in recently too, so I quizzed him on this topic.

They’re both super smart people which makes for a thoroughly enjoyable podcast episode. I usually have more guests on a single episode but it was fun to do a two-hander for once.

The whole thing comes in at just under seventeen minutes and there are some great stories and ideas in there. Have a listen.

And if you’re enjoying listening to the Clearleft podcast as much as I’m enjoying making it, be sure to spread the word wherever you share your recommendations: Twitter, LinkedIn, Slack, your own website, the rooftop.

Unobtrusive feedback

Ten years ago I gave a talk at An Event Apart all about interaction design. It was called Paranormal Interactivity. You can watch the video, listen to the audio or read the transcript if you like.

I think it holds up pretty well. There’s one interaction pattern in particular that I think has stood the test of time. In the talk, I introduce this pattern as something you can see in action on Huffduffer:

I was thinking about how to tell the user that something’s happened without distracting them from their task, and I thought beyond the web. I thought about places that provide feedback mechanisms on screens, and I thought of video games.

So we all know Super Mario, right? And if you think about when you’re collecting coins in Super Mario, it doesn’t stop the game and pop up an alert dialogue and say, “You have just collected ten points, OK, Cancel”, right? It just does it. It does it in the background, but it does provide you with a feedback mechanism.

The feedback you get in Super Mario is about the number of points you’ve just gained. When you collect an item that gives you more points, the number of points you’ve gained appears where the item was …and then drifts upwards as it disappears. It’s unobtrusive enough that it won’t distract you from the gameplay you’re concentrating on but it gives you the reassurance that, yes, you have just gained points.

I think this is a neat little feedback mechanism that we can borrow for subtle Ajax interactions on the web. These are actions that don’t change much of the content. The user needs to be able to potentially do lots of these actions on a single page without waiting for feedback every time.

On Huffduffer, for example, you might be looking at a listing of people that you can choose to follow or unfollow. The mechanism for doing that is a button per person. You might potentially be clicking lots of those buttons in quick succession. You want to know that each action has taken effect but you don’t want to be interrupted from your following/unfollowing spree.

You get some feedback in any case: the button changes. Maybe the text updates from “follow” to “unfollow” accompanied by a change in colour (this is what you’ll see on Twitter). The Super Mario style feedback is in addition to that, rather than instead of.

I’ve made a Codepen so you can see a reduced test case of the Super Mario feedback in action.

See the Pen Unobtrusive feedback by Jeremy Keith (@adactio) on CodePen.

Here’s the code available as a gist.

It’s a function that takes two arguments: the element that the feedback originates from (pass in a DOM node reference for this), and the contents of the feedback (this can be a string of text or it can be HTML …or SVG). When you call the function with those two arguments, this is what happens:

  1. The JavaScript generates a span element and puts the feedback contents inside it.
  2. Then it positions that element right over the element that the feedback originates from.
  3. Then there’s a CSS transform. The feedback gets a translateY applied so it drifts upward. At the same time it gets its opacity reduced from 1 to 0 so it’s fading away.
  4. Finally there’s a transitionend event that fires when the animation is over. Once that event fires, the generated span is destroyed.
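
If you’d rather not click through, a stripped-down sketch of that sequence looks something like this (the real code in the gist handles more details):

// A stripped-down sketch of the feedback function.
function feedback(origin, content) {
  // 1. Generate a span and put the feedback contents inside it.
  const span = document.createElement('span');
  span.innerHTML = content;
  // 2. Position it right over the element the feedback comes from.
  const rect = origin.getBoundingClientRect();
  span.style.position = 'absolute';
  span.style.left = (rect.left + window.scrollX) + 'px';
  span.style.top = (rect.top + window.scrollY) + 'px';
  span.style.pointerEvents = 'none';
  span.style.transition = 'transform 1s ease-out, opacity 1s ease-out';
  document.body.appendChild(span);
  // 3. Apply the transform: drift upward while fading away.
  span.offsetWidth; // force a reflow so the transition kicks in
  span.style.transform = 'translateY(-2em)';
  span.style.opacity = '0';
  // 4. Destroy the generated span once the transition is over.
  span.addEventListener('transitionend', function () {
    span.remove();
  });
}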

When I first used this pattern on Huffduffer, I’m pretty sure I was using jQuery. A few years later I rewrote it in vanilla JavaScript. That was four years ago so I wonder if the code could be improved. Have a go if you fancy it.

Still, even if the code could benefit from an update, I’m pleased that the underlying pattern still holds true. I used it recently on The Session and it’s working a treat for a new Ajax interaction there (bookmarking or unbookmarking an item).

If you end up using this unobtrusive feedback pattern anyway, please let me know—I’d love to see more examples of it in the wild.

BetrayURL

Back in February, I wrote about an excellent proposal by Jake for how browsers could display URLs in a safer way. Crucially, this involved highlighting the important part of the URL, but didn’t involve hiding any part. It’s a really elegant solution.

Turns out it was a Trojan horse. Chrome are now running an experiment where they will do the exact opposite: they will hide parts of the URL instead of highlighting the important part.

You can change this behaviour if you’re in the less than 1% of people who ever change default settings in browsers.

I’m really disappointed to see that Jake’s proposal isn’t going to be implemented. It was a much, much better solution.

No doubt I will hear rejoinders that the “solution” that Chrome is experimenting with is pretty similar to what Jake proposed. Nothing could be further from the truth. Jake’s solution empowered users with knowledge without taking anything away. What Chrome will be doing is the opposite of that, infantilising users and making decisions for them “for their own good.”

Seeing a complete URL is going to become a power-user feature, like View Source or user style sheets.

I’m really sad about that because, as Jake’s proposal demonstrates, it doesn’t have to be that way.

CSS custom properties and the cascade

When I wrote about programming CSS to perform Sass colour functions I said this about the brilliant Lea Verou:

As so often happens when I’m reading something written by Lea—or seeing her give a talk—light bulbs started popping over my head (my usual response to Lea’s knowledge bombs is either “I didn’t know you could do that!” or “I never thought of doing that!”).

Well, it happened again. This time I was reading her post about hybrid positioning with CSS variables and max(). But the main topic of the post wasn’t the part that made me go “Huh! I never knew that!”. Towards the end of her article she explained something about the way that browsers evaluate CSS custom properties:

The browser doesn’t know if your property value is valid until the variable is resolved, and by then it has already processed the cascade and has thrown away any potential fallbacks.

I’m used to being able to rely on the cascade. Let’s say I’m going to set a background colour on paragraphs:

p {
  background-color: red;
  background-color: color(display-p3 1 0 0);
}

First I’ve set a background colour using a good ol’ fashioned keyword, supported in browsers since day one. Then I declare the background colour using the new-fangled color() function which is supported in very few browsers. That’s okay though. I can confidently rely on the cascade to fall back to the earlier declaration. Paragraphs will still have a red background colour.

But if I store the background colour in a custom property, I can no longer rely on the cascade.

:root {
  --myvariable: color(display-p3 1 0 0);
}
p {
  background-color: red;
  background-color: var(--myvariable);
}

All I’ve done is swapped out the hard-coded color() value for a custom property but now the browser behaves differently. Instead of getting a red background colour, I get the browser default value. As Lea explains:

…it will make the property invalid at computed value time.

The spec says:

When this happens, the computed value of the property is either the property’s inherited value or its initial value depending on whether the property is inherited or not, respectively, as if the property’s value had been specified as the unset keyword.

So if a browser doesn’t understand the color() function, it’s as if I’ve said:

background-color: unset;

This took me by surprise. I’m so used to being able to rely on the cascade in CSS—it’s one of the most powerful and most useful features in this programming language. Could it be, I wondered, that the powers-that-be have violated the principle of least surprise in specifying this behaviour?

But a note in the spec explains further:

Note: The invalid at computed-value time concept exists because variables can’t “fail early” like other syntax errors can, so by the time the user agent realizes a property value is invalid, it’s already thrown away the other cascaded values.

Ah, right! So first of all browsers figure out the cascade and then they evaluate custom properties. If a custom property evaluates to gobbledygook, it’s too late to figure out what the cascade would’ve fallen back to.

Thinking about it, this makes total sense. Remember that CSS custom properties aren’t like Sass variables. They aren’t evaluated once and then set in stone. They’re more like let than const. They can be updated in real time. You can update them from JavaScript too. It’s entirely possible to update CSS custom properties rapidly in response to events like, say, the user scrolling or moving their mouse. If the browser had to recalculate the cascade every time a custom property didn’t evaluate correctly, I imagine it would be an enormous performance bottleneck.
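
For instance, updating a custom property from JavaScript is a one-liner that could run on every pointer event (the property name here is made up):

// Update a (made-up) custom property in real time as the mouse moves.
document.addEventListener('mousemove', function (event) {
  document.documentElement.style.setProperty('--mouse-x', event.clientX + 'px');
});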

So even though this behaviour surprised me at first, it makes sense on reflection.

I’ve probably done a terrible job explaining the behaviour here, so I’ve made a Codepen. Although that may also do an equally terrible job.

(Thanks to Amber for talking through this with me and encouraging me to blog about it. And thanks to Lea for expanding my mind. Again.)

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

Periodic background sync

Yesterday I wrote about how much I’d like to see silent push for the web:

I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Today, John Holt Ripley responded on Twitter:

hi there, just read your blog post about Silent Push for the web, and wondering if Periodic Background Sync would cover a few of those use cases?

Periodic background sync looks very interesting indeed!

It’s not the same as silent push. As the name suggests, this is about your service worker waking up periodically and potentially fetching (and caching) fresh content from the network. So the service worker is polling rather than receiving a push. But I’ll take it! It’s definitely close enough for the kind of use-cases I’ve been thinking about.
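
Here’s roughly what the two halves look like in Chromium’s implementation: registering from the page, and responding in the service worker (the tag name, cache name, and URL are all placeholders).

// In the page, inside an async function: register a periodic sync.
const registration = await navigator.serviceWorker.ready;
const status = await navigator.permissions.query({
  name: 'periodic-background-sync'
});
if (status.state === 'granted') {
  await registration.periodicSync.register('fresh-content', {
    minInterval: 24 * 60 * 60 * 1000 // at most once a day
  });
}

// In the service worker: fetch and cache fresh content.
self.addEventListener('periodicsync', function (event) {
  if (event.tag === 'fresh-content') {
    event.waitUntil(
      caches.open('content').then(function (cache) {
        return cache.add('/latest');
      })
    );
  }
});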

Interestingly, periodic background sync also ties into the other part of what I was writing about: permissions. I mentioned that adding a site to the home screen could be interpreted as a signal to potentially allow more permissions (or at least allow prompts for more permissions).

Well, Chromium has a document outlining metrics for attempting to gauge site engagement. There’s some good thinking in there.

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
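
Boiled right down, the script does something like this. It’s a simplified sketch: it assumes Leaflet is already loaded, that a map container element exists, and that each h-geo element carries made-up data-latitude and data-longitude attributes.

// A simplified sketch of the map-drawing logic.
window.addEventListener('load', function () {
  const geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) return;
  const points = Array.from(geos, function (el) {
    return [parseFloat(el.dataset.latitude), parseFloat(el.dataset.longitude)];
  });
  const map = L.map('map', {zoomControl: false, attributionControl: false});
  L.tileLayer('https://tiles.example/{z}/{x}/{y}.png').addTo(map); // placeholder tiles
  const line = L.polyline(points).addTo(map);
  map.fitBounds(line.getBounds());
});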

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
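
For the record, the encoding boils down to something like this, sketched in JavaScript rather than the PHP I actually used, with successive points delta-encoded as the format requires:

// A sketch of the Encoded Polyline Algorithm Format.
// Each coordinate is encoded as the difference from the previous one.
function encodePolyline(points) {
  let output = '';
  let prevLat = 0;
  let prevLon = 0;
  points.forEach(function (point) {
    const lat = Math.round(point[0] * 1e5); // steps 1 and 2
    const lon = Math.round(point[1] * 1e5);
    output += encodeSignedValue(lat - prevLat) + encodeSignedValue(lon - prevLon);
    prevLat = lat;
    prevLon = lon;
  });
  return output;
}

function encodeSignedValue(num) {
  // Steps 3 to 5: left-shift one bit; invert if the value was negative.
  let sgn = num < 0 ? ~(num << 1) : (num << 1);
  let encoded = '';
  // Steps 6 to 11: 5-bit chunks, OR with 0x20 while more chunks follow,
  // add 63, and convert each value to its ASCII equivalent.
  while (sgn >= 0x20) {
    encoded += String.fromCharCode((0x20 | (sgn & 0x1f)) + 63);
    sgn >>= 5;
  }
  return encoded + String.fromCharCode(sgn + 63);
}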

I’ve put these maps on any of the archive pages that also have calendar heat maps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Inlining SVG background images in CSS with custom properties

Here’s a tiny lesson that I picked up from Trys that I’d like to share with you…

I was working on some upcoming changes to the Clearleft site recently. One particular component needed some SVG background images. I decided I’d inline the SVGs in the CSS to avoid extra network requests. It’s pretty straightforward:

.myComponent {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

You can basically paste your SVG in there, although you need to do a little bit of URL encoding: I found that converting # to %23 was enough for my needs.

But here’s the thing. My component had some variations. One of the variations had multiple background images. There was a second background image in addition to the first. There’s no way in CSS to add an additional background image without writing a whole background-image declaration:

.myComponent--variant {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>'), url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

So now I’ve got the same SVG source inlined in two places. That negates any performance benefits I was getting from inlining in the first place.

That’s where Trys comes in. He shared a nifty technique he uses in this exact situation: put the SVG source into a custom property!

:root {
    --firstSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
    --secondSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

Then you can reference those in your background-image declarations:

.myComponent {
    background-image: var(--firstSVG);
}
.myComponent--variant {
    background-image: var(--firstSVG), var(--secondSVG);
}

Brilliant! Not only does this remove any duplication of the SVG source, it also makes your CSS nice and readable: no more big blobs of SVG source code in the middle of your style sheet.

You might be wondering what will happen in older browsers that don’t support CSS custom properties (that would be Internet Explorer 11). Those browsers won’t get any background image. Which is fine. It’s a background image. Therefore it’s decoration. If it were an important image, it wouldn’t be in the background.

Progressive enhancement, innit?

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
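
Inside the service worker, that silent final step might look something like this (a sketch, assuming the push message contains nothing but the URL):

// A sketch of a push handler that caches silently
// instead of displaying a notification.
self.addEventListener('push', function (event) {
  const url = event.data.text();
  event.waitUntil(
    caches.open('pages').then(function (cache) {
      return cache.add(url);
    })
  );
});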

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with unwanted content. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.

Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ‘round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ‘round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

// In the service worker's fetch event handler,
// before any of the usual caching logic:
if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.

Frustration

I had some problems with my bouzouki recently. Now, I know my bouzouki pretty well. I can navigate the strings and frets to make music. But this was a problem with the pickup under the saddle of the bouzouki’s bridge. So it wasn’t so much a musical problem as it was an electronics problem. I know nothing about electronics.

I found it incredibly frustrating. Not only did I have no idea how to fix the problem, but I also had no idea of the scope of the problem. Would it take five minutes or five days? Who knows? Not me.

My solution to a problem like this is to pay someone else to fix it. Even then I have to go through the process of having the problem explained to me by someone who understands and cares about electronics much more than me. I nod my head and try my best to look like I’m taking it all in, even though the truth is I have no particular desire to get to grips with the inner workings of pickups—I just want to make some music.

That feeling of frustration I get from having wiring issues with a musical instrument is the same feeling I get whenever something goes awry with my web server. I know just enough about servers to be dangerous. When something goes wrong, I feel very out of my depth, and again, I have no idea how long it will take to fix the problem: minutes, hours, days, or weeks.

I had a very bad day yesterday. I wanted to make a small change to the Clearleft website—one extra line of CSS. But the build process for the website is quite convoluted (and clever), automatically pulling in components from the site’s pattern library. Something somewhere in the pipeline went wrong—I still haven’t figured out what—and for a while there, the Clearleft website was down, thanks to me. (Luckily for me, Danielle saved the day …again. I’d be lost without her.)

I was feeling pretty down after that stressful day. I felt like an idiot for not knowing or understanding the wiring beneath the site.

But, on the other hand, considering I was only trying to edit a little bit of CSS, maybe the problem didn’t lie entirely with me.

There’s a principle underlying the architecture of the World Wide Web called The Rule of Least Power. It somewhat counterintuitively states that you should:

choose the least powerful language suitable for a given purpose.

Perhaps, given the relative simplicity of the task I was trying to accomplish, the plumbing was over-engineered. That complexity wouldn’t matter if I could circumvent it, but without the build process, there’s no way to change the markup, CSS, or JavaScript for the site.

Still, most of the time, the build process isn’t a hindrance, it’s a help: concatenation, minification, linting and all that good stuff. Most of my frustration when something in the wiring goes wrong is because of how it makes me feel …just like with the pickup in my bouzouki, or the server powering my website. It’s not just that I find this stuff hard, but that I also feel like it’s stuff I’m supposed to know, rather than stuff I want to know.

On that note…

Last week, Paul wrote about getting to grips with JavaScript. On the very same day, Brad wrote about his struggle to learn React.

I think it’s really, really, really great when people share their frustrations and struggles like this. It’s very reassuring for anyone else out there who’s feeling similarly frustrated who’s worried that the problem lies with them. Also, this kind of confessional feedback is absolute gold dust for anyone looking to write explanations or documentation for JavaScript or React while battling the curse of knowledge. As Paul says:

The challenge now is to remember the pain and anguish I endured, and bare that in mind when helping others find their own path through the knotted weeds of JavaScript.

Navigating Team Friction by Lara Hogan

It’s day two of An Event Apart Seattle (Special Edition). Lara is here to tell us about Navigating Team Friction. These are my notes…

Lara started as a developer, and then moved into management. Now she consults with other organisations. So she’s worked with teams of all sizes, and her conclusion is that humans are amazing. She has seen teams bring a site down; she has seen teams ship amazing features; she has seen teams fall apart because they had to move desks. But it’s magical that people can come together and build something.

Bruce Tuckman carried out research into the theory of group dynamics. He published stages of group development. The four common stages are:

  1. Forming. The group is coming together. There is excitement.
  2. Storming. This is when we start to see some friction. This is necessary.
  3. Norming. Things start to iron themselves out.
  4. Performing. Now you’re in the flow state and you’re shipping.

So if your team is storming (experiencing friction), that’s absolutely normal. It might be because of disagreement about processes. But you need to move past the friction. Team friction impacts your co-workers, company, and users.

An example: two engineers passive-aggressively commenting on each other’s code reviews; they feign surprise at the other’s technology choices; one rewrites the other’s code; one ships to production without code review; a senior team member or manager has to step in. But it costs a surprising amount of time and energy before a manager even notices and steps in.

Brains

The Hulk gets angry. This is human. We transform into different versions of ourselves when we are overcome by our emotions.

Lara has learned a lot about management by reading about how our brains work. We have a rational part of our brain, the pre-frontal cortex. It’s very different to our amygdala, a much more primal part of our brain. It categorises input into either threat or reward. If a threat is dangerous enough, the amygdala takes over. The pre-frontal cortex is too slow to handle dangerous situations. So when you have a Hulk moment, that was probably an amygdala hijack.

We have six core needs that are open to being threatened (leading to an amygdala hijacking):

  1. Belonging. Community, connection; the need to belong to a tribe. From an evolutionary perspective, this makes sense—we are social animals.
  2. Improvement/Progress. Progress towards purpose, improving the lives of others. We need to feel that what we do matters, and that we are learning.
  3. Choice. Flexibility, autonomy, decision-making. The power to make decisions over your own work.
  4. Equality/Fairness. Access to resources and information; equal reciprocity. We have an inherent desire for fairness.
  5. Predictability. Resources, time, direction, future challenges. We don’t like too many surprises …but we don’t like too much routine either. We want a balance.
  6. Significance. Status, visibility, recognition. We want to feel important. Being assigned to a project you think is useless feels awful.

Those core needs are B.I.C.E.P.S. Thinking back to your own Hulk moment, which of those needs was threatened?

We value those needs differently. Knowing your core needs is valuable.

Desk Moves

Lara has seen the largest displays of human emotion during something as small as moving desks. When you’re asked to move your desk, your core need of “Belonging” may be threatened. Or it may be a surprise that disrupts the core need of “Improvement/Progress.” If a desk move is dictated to you, it feels like “Choice” is threatened. The move may feel like it favours some people over others, threatening “Equality/Fairness.” The “Predictability” core need may be threatened by an unexpected desk move. If the desk move feels like a demotion, your core need of “Significance” will be threatened.

We are not mind readers, so we can’t see when someone’s amygdala takes over. But we can look out for the signs. Forms of resistance can be interpreted as data. The most common responses when a threat is detected are:

  1. Doubt. People double down on the status quo; they question the decision.
  2. Avoidance. Avoiding the problem; too busy to help with the situation.
  3. Fighting. People create arguments against the decision. They’ll use any logic they can. Or they simply refuse.
  4. Bonding. Finding someone else who is also threatened and grouping together.
  5. Escape-route. Avoiding the threat by leaving the company.

All of these signals are data. Rather than getting frustrated with these behaviours, use them as valuable data. Try not to feel threatened yourself by any of these behaviours.

Open questions are a powerful tool in your toolbox. Asked from a place of genuine honesty and curiosity, open questions help people feel less threatened. Closed questions are questions that can be answered with “yes” or “no”. When you spot resistance, get some one-on-one time and try to ask open questions:

  • What do you think folks are liking or disliking about this so far?
  • I wanted to get your take on X. What might go wrong? What do you think might be good about it?
  • What feels most upsetting about this?

You can use open questions like these to map resistance to threatened core needs. Then you can address those core needs.

This is a good time to loop in your manager. It can be very helpful to bounce your data off someone else and get their help. De-escalating resistance is a team effort.

Communication ✨

Listen with compassion, kindness, and awareness.

  • Reflect on the dynamics in the room. Maybe somebody thinks a topic is very important to them. Be aware of your medium. Your body language; your tone of voice; being efficient with words could be interpreted as a threat. Consider the room’s power dynamics. Be aware of how influential your words could be. Is this person in a position to take the action I’m suggesting?
  • Elevate the conversation. Meet transparency with responsibility.
  • Assume best intentions. Remember the prime directive. Practice empathy. Ask yourself what else is going on for this person in their life.
  • Listen to learn. Stay genuinely curious. This is really hard. Remember your goal is to understand, not make judgement. Prepare to be surprised when you walk into a room. Operate under the assumption that you don’t have the whole story. Be willing to have your mind changed …no, be excited to have your mind changed!

These tips are part of mindful communication. amy.tech has some great advice for mindful communication in code reviews.

Feedback

Mindful communication won’t solve all your problems. There are times when you’ll have to give actionable feedback. The problem is that humans are bad at giving feedback, and we’re really bad at receiving feedback. We actively avoid feedback. Sometimes we try to give constructive feedback in a compliment sandwich—don’t do that.

We can get better at giving and receiving feedback.

Ever had someone say, “Hey, you’re doing a great job!” It feels good for a few minutes, but what we crave is feedback that addresses our core needs.

                       General    Specific and Actionable
  Positive Feedback
  Negative Feedback

The feedback equation starts with an observation (“Your emails are often short”)—it’s not how you feel about the behaviour. Next, describe the impact of the behaviour (“The terseness of your emails makes me confused”). Then pose a question or request (“Can you explain why you write your emails that way?”).

observation + impact + question/request

Ask people about their preferred feedback medium. Some people prefer to receive feedback right away. Others prefer to digest it. Ask people if it’s a good time to give them feedback. Pro tip: when you give feedback, ask people how they’d like to receive feedback in the future.

Prepare your brain to receive feedback. It takes six seconds for your amygdala to chill out. Take six seconds before responding. If you can’t de-escalate your amygdala, ask the person giving feedback to come back later.

Think about one piece of feedback you’ll ask for back at work. Write it down. When you’re back at work, ask about it.

You’ll start to notice when your amygdala or pre-frontal cortex is taking over.

Prevention

Talking one-on-one is the best way to avoid team friction.

Retrospectives are a great way of normalising talking about Hard Things and team friction.

It can be helpful to have a living document that states team processes and expectations (how code reviews are done; how much time is expected for mentoring). Having it written down makes it a North star you can reference.

Mapping out roles and responsibilities is helpful. There will be overlaps in that Venn diagram. The edges will be fuzzy.

What if you disagree with what management says? The absence of trust is at the centre of most friction.

                  Disagree                  Agree
  Commit          Mature and Transparent    Easiest
  Don’t Commit    Acceptable but Tough      Bad Things

Practice finding other ways to address B.I.C.E.P.S. You might not be able to fix the problem directly—the desk move still has to happen.

But no matter how empathic or mindful you are, sometimes it will be necessary to bring in leadership or HR. Loop them in. Restate the observation + impact. State what’s been tried, and what you think could help now. Throughout this process, take care of yourself.

Remember, storming is natural. You are now well-equipped to weather that storm.

Making progress

When I was talking about Async, Ajax, and animation, I mentioned the little trick I’ve used of generating a progress element to indicate to the user that an Ajax request is underway.

I sometimes use the same technique even if Ajax isn’t involved. When a form is being submitted, I find it’s often good to provide explicit, immediate feedback that the submission is underway. Sure, the browser will do its own thing, but a browser doesn’t differentiate between showing that a regular link has been clicked and showing that all those important details you just entered into a form are on their way.

Here’s the JavaScript I use. It’s fairly simplistic, and I’m limiting it to POST requests only. At the moment that a form begins to submit, a progress element is inserted at the end of the form …which is usually right by the submit button that the user will have just pressed.

While I’m at it, I also set a variable to indicate that a POST submission is underway. So even if the user clicks on that submit button multiple times, only one request is sent.
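
In outline, the pattern looks something like this. It’s a simplified sketch rather than the exact script running on my site, and the fallback text is just for illustration:

    // Show immediate feedback when a form is submitted via POST,
    // and guard against the same data being submitted twice.
    (function () {
        var submitting = false;
        var forms = document.querySelectorAll('form');
        Array.prototype.forEach.call(forms, function (form) {
            // Leave GET forms (like search forms) alone.
            if (form.method !== 'post') {
                return;
            }
            form.addEventListener('submit', function (event) {
                if (submitting) {
                    // A request is already on its way; swallow repeat submits.
                    event.preventDefault();
                    return;
                }
                submitting = true;
                // The progress element lands at the end of the form,
                // right by the submit button the user just pressed.
                var progress = document.createElement('progress');
                progress.textContent = 'Sending…';
                form.appendChild(progress);
            });
        });
    })();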

You’ll notice that I’m attaching an event to each form element, rather than using event delegation to listen for a click event on the parent document and then figuring out whether that click event was triggered by a submit button. Usually I’m a big fan of event delegation but in this case, it’s important that the event I’m listening to is the submit event. A form won’t fire that event unless the data is truly winging its way to the server. That means you can do all the client-side validation you want—making good use of the required attribute where appropriate—safe in the knowledge that the progress element won’t be generated until the form has passed its validation checks.

If you like this particular pattern, feel free to use the code. Better yet, improve upon it.