The Unraveling of Space-Time | Quanta Magazine
This special in-depth edition of Quanta is fascinating and very nicely put together.
What are your own scribbles, your own ordinary plenty, not worth much to you now but that someone in the future may treasure?
Snook’s Law in action:
Big, flashy things get noticed. Quiet, boring things don’t.
There isn’t much infrastructure in place to quantify the constant, silent, boring, predictable, round-the-clock passive successes of this aspect of design systems after the initial effort is complete.
A lack of bug reports, accessibility issues, design tweaks, etc. are all objectively great, but there are no easy datapoints you can measure here.
Beautiful writing from Rebecca Solnit that encapsulates what I’ve been trying to say:
You want tomorrow to be different than today, and it may seem the same, or worse, but next year will be different than this one, because those tiny increments added up. The tree today looks a lot like the tree yesterday, and so does the baby.
I love these black and white photos from the border:none event that just wrapped up in Nuremberg!
There are a lot of astute observations in here.
Temporal standards bodies.
Ethan highlights a classic case of the McNamara Fallacy—measuring adoption of design system components.
This describes how I iterate on The Session:
It comes down to this annoying, upsetting, stupid fact: the only way to build a great product is to use it every day, to stare at it, to hold it in your hands to feel its lumps. The data and customers will lie to you but the product never will.
This whole post reminded me of the episode of the Clearleft podcast on measuring design.
The problem underlying all this is that when it comes to building a product, all data is garbage, a lie, or measuring the wrong thing. Folks will be obsessed with clicks and charts and NPS scores—the NFTs of product management—and in this sea of noise they believe they can see the product clearly. There are courses and books and talks all about measuring happiness and growth—surveys! surveys! surveys!—with everyone in the field believing that they’ve built a science when they’ve really built a cult.
Reminder:
em and rem work with the user’s font size; px completely overrides it.
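A minimal sketch of the difference (not from the linked post), assuming a user has bumped their browser’s default font size from 16px to 20px:

```css
/* Hypothetical example: the user's browser default font size is 20px. */
html {
  font-size: 100%; /* respects the user's setting, so 1rem = 20px */
}

.respects-the-user {
  font-size: 1rem; /* scales up to 20px along with the user's preference */
}

.ignores-the-user {
  font-size: 16px; /* stays 16px no matter what the user chose */
}
```

The first rule tracks the user’s preference; the px rule silently throws it away.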
It sounds like Remix takes a sensible approach to progressive enhancement.
Spoiler: the answer to the question in the title is a resounding “hell yeah!”
Scott brings receipts.
A fascinating interactive journey through biometrics using your face.
I remember when Google Chrome launched. I still have a physical copy of the Scott McCloud explanatory comic knocking around somewhere. Now that comic has been remixed by Leah Elliott to explain how Google Chrome is undermining privacy online.
Laying bare the inner workings of the controversial browser, she creates the ultimate guide to one of the world’s most widely used surveillance tools.
One of my favourite episodes of the Clearleft podcast is on measuring design. This post from Chris complements that episode in a sensible and practical way.
What gets measured gets done. You are what you measure. Measurement eliminates argument. If you work in an environment that puts store in these oft-quoted business adages then I urge you to take a moment to challenge your calculations. Let’s review our metrics to ensure they can stand up and be counted.
Eric’s response to Chris’s question—“What is one thing people can do to make their website better?”—dovetails nicely with my own answer:
The two real problems here are:
- Third-party assets, such as the very analytics and CRM packages you use to determine who is using your product and how they go about it. There’s no real control over the quality or amount of code they add to your site, and setting up the logic to block them loading their own third-party resources is difficult to do.
- The people who tell you to add these third-party assets. These people typically aren’t aware of the performance issues caused by the ask, or don’t care because it’s not part of the results they’re judged by.
We’ve got click rates, impressions, conversion rates, open rates, ROAS, pageviews, bounce rates, ROI, CPM, CPC, impression share, average position, sessions, channels, landing pages, KPI after never-ending KPI.
That’d be fine if all this shit meant something and we knew how to interpret it. But it doesn’t and we don’t.
The reality is much simpler, and therefore much more complex. Most of us don’t understand how data is collected, how these mechanisms work and most importantly where and how they don’t work.
A good post by Andy on “the language of business,” which in most cases turns out to be numbers, numbers, numbers.
While it seems reasonable and fair to expect a modicum of self-awareness of why you’re employed and what business value you drive in the context of the work you do, sometimes the incessant self-flagellation required to justify and explain this to those who hired you may be a clue to a much deeper and more troubling question at the heart of the organisation you work for.
This pairs nicely with the Clearleft podcast episode on measuring design.
I’ve noticed a trend in recent years—a trend that I’ve admittedly been part of myself—where performance-minded developers will rebuild a site and then post a screenshot of their Lighthouse score on social media to show off how fast it is.
Mea culpa! I should post my CrUX reports too.
But I’m going to respectfully decline Phil’s advice to use any of the RUM analytics providers he recommends that require me to put another script element on my site. One third-party script is one third-party script too many.
An oldie but a goodie. If you think you’re getting statistically significant results from A/B testing, you should probably consider doing some A/A testing.
In an A/A test, you run a test using the exact same options for both “variants” in your test.
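As a rough illustration of why that’s worth doing (a sketch of my own, not from the linked article): if you simulate a pile of A/A tests, a naive significance check will still declare a “winner” some of the time, purely by chance.

```python
# Sketch: how often does an A/A test (two identical variants) look "significant"?
# Illustrative only; assumes a simple two-proportion z-test at p < 0.05.
import random
from statistics import NormalDist

def aa_test_significant(n=10_000, true_rate=0.05, alpha=0.05):
    """Run one A/A test where both arms share the same true conversion rate."""
    a = sum(random.random() < true_rate for _ in range(n))
    b = sum(random.random() < true_rate for _ in range(n))
    p_a, p_b = a / n, b / n
    pooled = (a + b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return False
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(z))
    return p_value < alpha

trials = 1000
false_positives = sum(aa_test_significant() for _ in range(trials))
print(f"{false_positives / trials:.1%} of identical-variant tests looked significant")
```

You should see roughly 5% of these identical-variant tests come up “significant”, which is exactly the false-positive rate the threshold promises. If your real A/B tests are finding winners far more often than your A/A tests do, it’s your measurement setup that deserves the scrutiny.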