Tuesday, February 18th, 2025

Naz Hamid • Your Site Is a Home

You can still have a home. A place to hang up your jacket, or park your shoes. A place where you can breathe out. A place where you can hear yourself think critically. A place you might share with loved ones who you can give to, and receive from.

Own what’s yours

Now, more than ever, it’s critical to own your data. Really own it. Like, on your hard drive and hosted on your website.

Is taking control of your content less convenient? Yeah–of course. That’s how we got in this mess to begin with. It can be a downright pain in the ass. But it’s your pain in the ass. And that’s the point.

Monday, February 17th, 2025

The Imperfectionist: Seventy per cent

If you’re roughly 70% happy with a piece of writing you’ve produced, you should publish it.

Works for me!

You’re also expanding your ability to act in the presence of feelings of displeasure, worry and uncertainty, so that you can take more actions, and more ambitious actions, later on.

Crucially, you’ll also be creating a body of evidence to prove to yourself that when you move forward at 70%, the sky stubbornly fails to fall in. People don’t heap scorn on you or punish you.

Sunday, February 16th, 2025

My Life in Weeks by Gina Trapani

This is one way of putting things into perspective.

The hardest working font in Manhattan – Aresluna

This is absolutely wonderful!

There’s deep dives and then there’s Marcin’s deeeeeeep dives. Sit back and enjoy this wholesome detective work, all beautifully presented with lovely interactive elements.

This is what the web is for!

Friday, February 14th, 2025

The Tyranny of Now — The New Atlantis

I’m not a fan of Nicholas Carr and his moral panics, but this is an excellent dive into some historical media theory.

What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

Thursday, February 13th, 2025

Putting the ink into design thinking | Clearleft

The power of prototyping:

Most of my work is a set of disposables rather than deliverables, and I celebrate this.

I like the three questions that Chris asks himself:

  1. What’s the quickest, cheapest thing I can create to help make the next design decision?
  2. What can I create to best demonstrate the essence of the concept?
  3. How can I most effectively share the thinking behind the design with decision-makers?

We Live Like Royalty and Don’t Know It — The New Atlantis

Strong Deb Chachra vibes in this ongoing series by Charles C. Mann:

The great European cathedrals were built over generations by thousands of people and sustained entire communities. Similarly, the electric grid, the public-water supply, the food-distribution network, and the public-health system took the collective labor of thousands of people over many decades. They are the cathedrals of our secular era. They are high among the great accomplishments of our civilization. But they don’t inspire bestselling novels or blockbuster films. No poets celebrate the sewage treatment plants that prevent them from dying of dysentery. Like almost everyone else, they rarely note the existence of the systems around them, let alone understand how they work.

Wednesday, February 12th, 2025

What happens to what we’ve already created? - The History of the Web

We wonder often if what is created by AI has any value, and at what cost to artists and creators. These are important considerations. But we need to also wonder what AI is taking from what has already been created.

Tuesday, February 4th, 2025

Here Come the Lionfish – James Bridle

A terrific article by James.

Saturday, January 25th, 2025

Blog Questions Challenge

I’ve been tagged in a good ol’-fashioned memetic chain letter, first by Jon and then by Luke. Only by answering these questions can my soul find peace…

Why did you start blogging in the first place?

All the cool kids were doing it. I distinctly remember thinking it was far too late to start blogging. Clearly I had missed the boat. That was in the year 2001.

So if you’re ever thinking of starting something but you think it might be too late …it isn’t.

Back then, I wrote:

I’ll try and post fairly regularly but I don’t want to make any promises I can’t keep.

I’m glad I didn’t commit myself but I’m also glad that I’m still posting 24 years later.

What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?

I use my own hand-cobbled mix of PHP and MySQL. Before that I had my own hand-cobbled mix of PHP and static XML files.

On the one hand, I wouldn’t recommend anybody to do what I’ve done. Just use an off-the-shelf content management system and start publishing.

On the other hand, the code is still working fine decades later (with the occasional tweak) and the control freak in me likes knowing what every single line of code is doing.

It’s very bare-bones though.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?

I usually open a Markdown text editor and write in that. I use the Mac app Focused which was made by Realmac Software. I don’t think you can even get hold of it these days, but it does the job for me. Any Markdown text editor would do though.

Then I copy what I’ve written and paste it into the textarea of my hand-cobbled CMS. It’s pretty rare for me to write directly into that textarea.

When do you feel most inspired to write?

When I’m supposed to be doing something else.

Blogging is the greatest procrastination tool there is. You’re skiving off doing the thing you should be doing, but then when you’ve published the blog post, you’ve actually done something constructive so you don’t feel too bad about avoiding that thing you were supposed to be doing.

Sometimes it takes me a while to get around to posting something. I find myself blogging out loud to my friends, which is a sure sign that I need to sit down and bash out that blog post.

When there’s something I’m itching to write about but I haven’t got ’round to it yet, it feels a bit like being constipated. Then, when I finally do publish that blog post, it feels like having a very satisfying bowel movement.

No doubt it reads like that too.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

I publish immediately. I’ve never kept drafts. Usually I don’t even save the Markdown file while I’m writing—I open up the text editor, write the words, copy them, paste them into that textarea and publish it. Often it takes me longer to think of a title than it takes to write the actual post.

I try to remind myself to read it through once to catch any typos, but sometimes I don’t even do that. And you know what? That’s okay. It’s the web. I can go back and edit it at any time. Besides, if I miss a typo, someone else will catch it and let me know.

Speaking for myself, putting something into a draft (or even just putting it on a to-do list) is a guarantee that it’ll never get published. So I just write and publish. It works for me, though I totally understand that it’s not for everyone.

What’s your favourite post on your blog?

I’ve got a little section of “recommended reading” in the sidebar of my journal:

But I’m not sure I could pick just one.

I’m very proud of the time I wrote 100 posts in 100 days and each post was exactly 100 words long. That might be my favourite tag.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

I like making little incremental changes. Usually this happens at Indie Web Camps. I add some little feature or tweak.

I definitely won’t be redesigning. But I might add another “skin” or two. I’ve got one of those theme-switcher things, y’see. It was like a little CSS Zen Garden before that existed. I quite like having redesigns that are cumulative instead of destructive.

Next?

You. Yes, you.

Tuesday, January 21st, 2025

On Transient Slash Pages • Robb Knight

This is a great idea that I’m going to file away for later:

I like the idea of redirecting /now to the latest post tagged as now so one could see the latest version of what I’m doing now.
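
A rough sketch of how that could work under the hood, assuming a database-backed blog: /now looks up the most recent post tagged “now” and issues a temporary redirect. The table and column names here are entirely hypothetical, not anyone’s actual setup.

```php
<?php
// A sketch of a transient /now page: find the most recent post tagged "now"
// and redirect to it. The database schema here is entirely hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=blog', 'user', 'password');

$statement = $pdo->prepare(
    'SELECT url FROM posts
     WHERE tag = :tag
     ORDER BY published DESC
     LIMIT 1'
);
$statement->execute(['tag' => 'now']);
$url = $statement->fetchColumn();

if ($url) {
    // A temporary redirect, because the target changes with every new "now" post.
    header('Location: ' . $url, true, 302);
    exit;
}

http_response_code(404);
```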

Wednesday, January 15th, 2025

Prescriptive and Descriptive Information Architectures | Jorge Arango

Interesting—this is exactly the same framing I used to talk about design systems a few years ago.

Tuesday, January 14th, 2025

A long-awaited talk

Back in 2019 I had the amazing experience of going to CERN and being part of a team building an emulator of the first ever browser.

Remy was on the team too. He did the heavy lifting of actually making the thing work—quite an achievement in just five days!

Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.

Remy and I were both keen to talk about the work, which is why we did a joint talk at Fronteers in Amsterdam that year. We’re both quite sceptical of talks given by duos; people think it means it’ll be half the work, when actually it’s twice the work. In the end we came up with a structure for the talk that we both liked:

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well.

You can watch the video of that talk in Amsterdam. You can also read the transcript.

After putting so much work into the talk, we were keen to give it again somewhere. We had the chance to do that in Nottingham in early March 2020. (cue ominous foreboding)

The folks from local Brighton meetup Async had also asked if we wanted to give the talk. We were booked in for May 2020. (ominous foreboding intensifies)

We all know what happened next. The Situation. Lockdown. No conferences. No meetups.

But technically the talk wasn’t cancelled. It was just postponed. And postponed. And postponed. Before you know it, five years have passed.

Part of the problem was that Async is usually on the first Thursday of the month and that’s when I host an Irish music session in Hove. I can’t miss that!

But finally the stars aligned and last week Remy and I finally did the Async talk. You can watch a video of it.

I really enjoyed giving the talk and the discussion that followed. There was a good buzz.

It also made me appreciate the work that we put into structuring the talk. We’ve only given it a few times but with a five year gap between presentations, I can confidently say that it’s a timeless topic.

Tuesday, January 7th, 2025

Progressive enhancement brings everyone in - The History of the Web

This is a great history of the idea of progressive enhancement:

It is an idea that has been lasting and enduring for two decades, and will continue.

Tuesday, December 31st, 2024

Words I wrote in 2024

People spent a lot of time and energy in 2024 talking about (and on) other people’s websites. Twitter. Bluesky. Mastodon. Even LinkedIn.

I observed it all with the dispassionate perspective of Dr. Manhattan on Mars. While I’m happy to see more people abandoning the cesspool that is Twitter, I’m not all that invested in either Mastodon or Bluesky. Or any other website, for that matter. I’m glad they’re there, but if they disappeared tomorrow, I’d carry on posting here on my own site.

I posted to my website over 850 times in 2024.

I shared over 350 links.

I posted over 400 notes.

I published just one article.

And I wrote almost 100 blog posts here in my journal this year.

Here are some cherry-picked highlights:

Tuesday, December 17th, 2024

Authors Apart

Another handy list of where you can get works published by A Book Apart authors.

Monday, December 16th, 2024

Choosing a geocoding provider

Yesterday when I mentioned my paranoia of third-party dependencies on The Session, I said:

I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.

(Geocoding, by the way, is when you provide a human-readable address and get back latitude and longitude coordinates.)
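
A minimal sketch of what that switchable set-up could look like, assuming one provider is OpenStreetMap’s Nominatim; the constant and function names are illustrative, not the actual code from The Session:

```php
<?php
// A rough sketch of the "go bag" idea (not the actual code from The Session):
// every provider implements the same signature, and a single setting decides
// which one gets called, so switching providers is a one-line change.

const GEOCODING_PROVIDER = 'nominatim';

// One concrete implementation, using OpenStreetMap's Nominatim service.
function geocodeWithNominatim(string $query): ?array
{
    $url = 'https://nominatim.openstreetmap.org/search?' . http_build_query([
        'q'      => $query,
        'format' => 'json',
        'limit'  => 1,
    ]);
    // Nominatim's usage policy asks for an identifying User-Agent.
    $context = stream_context_create([
        'http' => ['header' => "User-Agent: geocoding-test-kitchen\r\n"],
    ]);
    $response = file_get_contents($url, false, $context);
    if ($response === false) {
        return null;
    }
    $results = json_decode($response, true);
    if (empty($results)) {
        return null;
    }
    return [
        'latitude'  => (float) $results[0]['lat'],
        'longitude' => (float) $results[0]['lon'],
    ];
}

// The single entry point the rest of the site calls. Adding a provider means
// writing one more function and adding one more arm here.
function geocode(string $query): ?array
{
    return match (GEOCODING_PROVIDER) {
        'nominatim' => geocodeWithNominatim($query),
        // 'mapquest' => geocodeWithMapquest($query),
        default     => null,
    };
}
```

The rest of the site only ever calls geocode(), so moving to a different provider means writing one more small function and changing one constant.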

My paranoia is well-founded. I’ve been using Google’s geocoding API, which is changing its pricing model from next March.

You wouldn’t know it from the breathlessly excited emails they’ve been sending about it, but this is not a good change for me. I don’t do that much geocoding on The Session—around 13,000 or 14,000 requests a month. With the new pricing model that’ll be around $15 to $20 a month. Currently I slip by under the radar with the free tier.

So it might be time for me to flip that switch in my code. But which geocoding provider should I use?

There are plenty of slop-like listicles out there enumerating the various providers, but they’re mostly just regurgitating the marketing blurbs from the provider websites. What I need is more like a test kitchen.

Here’s what I did…

I took a representative sample of six recent additions to the sessions section of thesession.org. These examples represent places in the USA, Ireland, England, Scotland, Northern Ireland, and Spain, so a reasonable spread.

For each one of those sessions, I’m taking:

  • the venue name,
  • the town name,
  • the area name, and
  • the country.

I’m deliberately not including the street address. Quite often people don’t bother including this information so I want to see how well the geocoding APIs cope without it.
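
As an illustration of the shape of each test query (the venue here is made up), the four parts simply join into one free-form string:

```php
<?php
// Hypothetical example in the shape described above: venue, town, area and
// country, but deliberately no street address.
$session = [
    'venue'   => 'An Example Pub',
    'town'    => 'Ennis',
    'area'    => 'County Clare',
    'country' => 'Ireland',
];

// Join whichever parts are present into one free-form query string.
$query = implode(', ', array_filter($session));

echo $query; // "An Example Pub, Ennis, County Clare, Ireland"
```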

I’ve scored the results on a simple scale of good, so-so, and just plain wrong.

  • A good result gets a score of one. This is when the result gives back an accurate street-level result.
  • A so-so result gets a score of zero. This is when it’s got the right coordinates for the town, but no more than that.
  • A wrong result gets a score of minus one. This is when the result is like something from a large language model: very confident but untethered from reality, like claiming the address is in a completely different country. Being wrong is worse than being vague, hence the difference in scoring.

Then I tot up those results for an overall score for each provider.
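
The scoring itself is done by eye, but the totting up is mechanical. A sketch with placeholder numbers (not the real results):

```php
<?php
// Hand-assigned scores per provider: 1 = good, 0 = so-so, -1 = wrong.
// These numbers are placeholders, not the real results.
$scores = [
    'Provider A' => ['USA' => 1, 'England' => 1, 'Ireland' => 0, 'Spain' => 1, 'Scotland' => 0, 'Northern Ireland' => -1],
    'Provider B' => ['USA' => 0, 'England' => -1, 'Ireland' => 1, 'Spain' => 0, 'Scotland' => 1, 'Northern Ireland' => 0],
];

// Tot up a total for each provider and sort from best to worst.
$totals = array_map('array_sum', $scores);
arsort($totals);

foreach ($totals as $provider => $total) {
    echo "$provider: $total\n";
}
```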

When I tried my six examples with twelve different geocoding providers, these were the results:

Geocoding providers

| Provider      | USA | England | Ireland | Spain | Scotland | Northern Ireland | Total |
| ------------- | --- | ------- | ------- | ----- | -------- | ---------------- | ----- |
| Google        | 1   | 1       | 1       | 1     | 1        | 1                | 6     |
| Mapquest      | 1   | 1       | 1       | 1     | 1        | 1                | 6     |
| Geoapify      | 0   | 1       | 1       | 0     | 1        | 0                | 3     |
| Here          | 1   | 1       | 0       | 1     | 0        | 0                | 3     |
| Mapbox        | 1   | 1       | 0       | 1     | 1        | -1               | 3     |
| Bing          | 1   | 0       | 0       | 0     | 0        | 0                | 1     |
| Nominatim     | 0   | 0       | 0       | 0     | -1       | 1                | 0     |
| OpenCage      | -1  | 1       | 0       | 0     | 0        | -1               | -1    |
| Tom Tom       | -1  | -1      | 0       | 0     | -1       | 1                | -2    |
| Positionstack | 0   | -1      | 0       | -1    | 1        | -1               | -2    |
| Locationiq    | -1  | 0       | -1      | 0     | 0        | -1               | -3    |
| Map Maker     | -1  | 0       | -1      | -1    | -1       | -1               | -5    |

Some interesting results there. I was surprised by how crap Bing is. I was also expecting better results from Mapbox.

Most interesting for me, Mapquest is right up there with Google.

So now that I’ve got a good scoring system, my next question is around pricing. If Google and Mapquest are roughly comparable in terms of accuracy, how would the pricing work out for each of them?

Let’s say I make 15,000 API requests a month. Under Google’s new pricing plan, that works out at $25. Not bad.

But if I’ve understood Mapquest’s pricing correctly, I reckon I’ll just squeak in under the free tier.

Looks like I’m flipping the switch to Mapquest.

If you’re shopping around for geocoding providers, I hope this is useful to you. But I don’t think you should just look at my results; they’re very specific to my needs. Come up with your own representative sample of tests and try putting the providers through their paces with your data.

If, for some reason, you want to see the terrible PHP code I’m using for geocoding on The Session, here it is.