Sunday, August 11th, 2024

Aboard Newsletter: Why So Bad, AI Ads?

The human desire to connect with others is very profound, and the desire of technology companies to interject themselves even more into that desire—either by communicating on behalf of humans, or by pretending to be human—works in the opposite direction. These technologies don’t seem to be encouraging connection as much as commoditizing it.

Wednesday, April 10th, 2024

Ad revenue

It’s been dispiriting but unsurprising to see American commentators weigh in on the EU’s Digital Markets Act. I really wish they’d read Baldur’s excellent explainer first.

John has been doing his predictable “leave Britney alone!” schtick with regards to Apple (and in this case, Google and Facebook too). Ian Betteridge does an excellent job of setting him straight:

A lot of commentators seem to have the same issue as John: that it’s weird that a governmental body can or should define how products should be designed.

But governments mandate how products are designed all the time, and not just in the EU. Take another market which is pretty big: cars. All cars have to feature safety equipment, which varies from region to region but will broadly include everything from seatbelts to crumple zones. Cars have rules for emissions, for fuel efficiency, all of which are designing how a car should work.

But there’s one assumption in John’s post that Ian didn’t push back on. John said:

It’s certainly possible that Meta can devise ways to serve non-personalized contextual ads that generate sufficient revenue per user.

That comes with a footnote:

One obvious solution would be to show more ads — a lot more ads — to make up for the difference in revenue. So if contextual ads generate, say, one-tenth the revenue of targeted ads, Meta could show 10 times as many ads to users who opt out of targeting. I don’t think 10× is an outlandish multiplier there — given how remarkably profitable Meta’s advertising business is, it might even need to be higher than that.

It’s almost like an article of faith that behavioural advertising is more effective than contextual advertising. But there’s no data to support this. Quite the opposite. I wrote about this four years ago.

Once again, I urge you to read the excellent analysis by Jesse Frederik and Maurits Martijn.

There’s also Tim Hwang’s book, Subprime Attention Crisis:

From the unreliability of advertising numbers and the unregulated automation of advertising bidding wars, to the simple fact that online ads mostly fail to work, Hwang demonstrates that while consumers’ attention has never been more prized, the true value of that attention itself—much like subprime mortgages—is wildly misrepresented.

More recently Dave Karpf said what we’re all thinking:

The thing I want to stress about microtargeted ads is that the current version is perpetually trash, and we’re always just a few years away from the bugs getting worked out.

The EFF are calling for a ban. Should that happen, the sky would not fall. Contrary to what John thinks, revenue would not plummet. Contextual advertising works just fine …without the need for invasive surveillance and tracking.

Like I said:

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Tuesday, February 27th, 2024

JavaScript Bloat in 2024 @ tonsky.me

This really is a disgusting, exclusionary state of affairs.

I hate to be judgy, but I honestly wonder how the people behind some of these decisions can call themselves web developers.

Friday, February 10th, 2023

Stories on the Road - UK 23 | Storyblok

I’ll be speaking at this free early evening event with Arisa Fukusaki and Cassie in Brighton on Monday, February 27th. Grab a ticket and come along for some pizza and nerdiness.

Wednesday, May 25th, 2022

Alt writing

I made the website for this year’s UX London by hand.

Well, that’s not entirely true. There’s exactly one build tool involved. I’m using Sergey to include global elements—the header and footer—something that’s still not possible in HTML.
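
If you haven’t seen Sergey before, an include looks something like this (a minimal sketch from memory, assuming the default _imports folder for partials, so check Sergey’s docs for the exact syntax):

<!-- index.html: Sergey swaps this tag for _imports/header.html at build time -->
<sergey-import src="header" />

<main>
  <!-- the page content itself, written by hand -->
</main>

<sergey-import src="footer" />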

So it’s minimum viable static site generation rather than actual static files. It’s still very hands-on though, and I enjoy that a lot: editing HTML and CSS directly without intermediary tools.

When I update the site, it’s usually to add a new speaker to the line-up (well, not any more now that the line up is complete). That involves marking up their bio and talk description. I also create a couple of different sized versions of their headshot to use with srcset. And of course I write an alt attribute to accompany that image.
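
For each speaker, that means markup along these lines (a sketch: the file names, widths, and sizes value here are hypothetical):

<img
  src="/images/speakers/speaker-small.jpg"
  srcset="/images/speakers/speaker-small.jpg 300w,
          /images/speakers/speaker-large.jpg 600w"
  sizes="(min-width: 50em) 300px, 50vw"
  width="300"
  height="300"
  alt="This person looking at the camera, smiling.">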

By the way, Jake has an excellent article on writing alt text that uses the specific example of a conference site. It raises some very thought-provoking questions.

I enjoy writing alt text. I recently described how I updated my posting interface here on my own site to put a textarea for alt text front and centre for my notes with photos. Since then I’ve been enjoying the creative challenge of writing useful—but also evocative—alt text.

Some recent examples:

But when I was writing the alt text for the headshots on the UX London site, I started to feel a little disheartened. The more speakers were added to the line-up, the more I felt like I was repeating myself with the alt text. After a while they all seemed to be some variation on “This person looking at the camera, smiling” with maybe some detail on their hair or clothing.

  • Videha Sharma
    The beaming bearded face of Videha standing in front of the beautiful landscape of a riverbank.
  • Candi Williams
    Candi working on her laptop, looking at the camera with a smile.
  • Emma Parnell
    Emma smiling against a yellow background. She’s wearing glasses and has long straight hair.
  • John Bevan
    A monochrome portrait of John with a wry smile on his face, wearing a black turtleneck in the clichéd design tradition.
  • Laura Yarrow
    Laura smiling, wearing a chartreuse coloured top.
  • Adekunle Oduye
    A profile shot of Adekunle wearing a jacket and baseball cap standing outside.

The more speakers were added to the line-up, the harder I found it not to repeat myself. I wondered if this was all going to sound very same-y to anyone hearing them read aloud.

But then I realised, “Wait …these are kind of same-y images.”

By the very nature of the images—headshots of speakers—there wasn’t ever going to be that much visual variation. The experience of a sighted person looking at a page full of speakers is that after a while the images kind of blend together. So if the alt text also starts to sound a bit repetitive after a while, maybe that’s not such a bad thing. A screen reader user would be getting an equivalent experience.

That doesn’t mean it’s okay to have the same alt text for each image—they are all still different. But after I had that realisation I stopped being too hard on myself if I couldn’t come up with a completely new and original way to write the alt text.

And, I remind myself, writing alt text is like any other kind of writing. The more you do it, the better you get.

Sunday, March 13th, 2022

Building a Digital Homestead, Bit by Brick

A personal site, or a blog, is more than just a collection of writing. It’s a kind of place - something that feels like home among the streams. Home is a very strong mental model.

Friday, March 4th, 2022

Nelson’s Weblog: Goodreads lost all of my reviews

Goodreads lost my entire account last week. Nine years as a user, some 600 books and 250 carefully written reviews all deleted and unrecoverable. Their support has not been helpful. In 35 years of being online I’ve never encountered a company with such callous disregard for their users’ data.

Ouch! Lesson learned:

My plan now is to host my own blog-like collection of all my reading notes like Tom does.

Wednesday, April 21st, 2021

Lena @ Things Of Interest

The format of a Wikipedia page is used as the chilling delivery mechanism for this piece of speculative fiction. The distancing effect heightens the horror.

Friday, December 18th, 2020

Conditional JavaScript - JavaScript - Dev Tips

This is a good round-up of APIs you can use to decide if and how much JavaScript to load. I might look into using storage.estimate() in service workers to figure out how much gets pre-cached.
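
A sketch of what I have in mind (the cache name, file list, and 50% threshold are all placeholder assumptions, not a worked-out strategy):

// In a service worker: check available storage during the install
// event before deciding how much to pre-cache.
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    navigator.storage.estimate()
    .then( estimate => {
      const fractionUsed = estimate.usage / estimate.quota;
      // Arbitrary threshold: pre-cache more generously when less
      // than half the quota is already in use.
      const filesToCache = fractionUsed < 0.5
        ? ['/', '/offline', '/styles.css']
        : ['/', '/offline'];
      return caches.open('staticcache')
      .then( cache => cache.addAll(filesToCache) );
    })
  );
});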

Wednesday, December 2nd, 2020

Creating websites with prefers-reduced-data | Polypane Browser for Developers

There’s no browser support yet but that doesn’t mean we can’t start adding prefers-reduced-data to our media queries today. I like the idea of switching between web fonts and system fonts.
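
Something like this, perhaps (a sketch; the font name and file path are placeholders):

/* Default: system fonts and no font download */
body {
  font-family: system-ui, sans-serif;
}

/* Only request the web font if the user hasn’t asked for reduced data */
@media (prefers-reduced-data: no-preference) {
  @font-face {
    font-family: 'Some Web Font';
    src: url('/fonts/some-web-font.woff2') format('woff2');
  }
  body {
    font-family: 'Some Web Font', system-ui, sans-serif;
  }
}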

Wednesday, November 11th, 2020

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have some hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.

So I added this polyfill from Chris Ferdinandi.
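
If I recall correctly, it boils down to just a few lines (my sketch of the pattern; check Chris’s original for the real thing):

// Polyfill NodeList.prototype.forEach for older browsers
if (window.NodeList && !NodeList.prototype.forEach) {
  NodeList.prototype.forEach = Array.prototype.forEach;
}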

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ’90s.

Monday, August 17th, 2020

Netlify redirects and downloads

Making the Clearleft podcast is a lot of fun. Making the website for the Clearleft podcast was also fun.

Design wise, it’s a riff on the main Clearleft site in terms of typography and general layout. On the development side, it was an opportunity to try out an exciting tech stack. The workflow goes something like this:

  • Open a text editor and type out HTML and CSS.

Comparing this to other workflows I’ve used in the past, this is definitely the most productive way of working. Some stats:

  • Time spent setting up build tools: 00:00
  • Time spent wrangling the pipeline to do exactly what you want: 00:00
  • Time spent trying to get the damn build tools to work again when you return to the project after leaving it alone for more than a few months: 00:00:00

I have some files. Some images, three font files, a few pages of HTML, one RSS feed, one style sheet, and one minimal service worker script. I don’t need a web server to do anything more than serve up those files. No need for any dynamic server-side processing.

I guess this is JAMstack. Though, given that the J stands for JavaScript, the A stands for APIs, and I’m not using either, technically it’s Mstack.

Netlify suits my hosting needs nicely. It also provides the added benefit that, should I need to update my CSS, I don’t need to add a query string or anything to the link elements in the HTML that point to the style sheet: Netlify does cache invalidation for you!

The mp3 files of the actual podcast episodes are stored on S3. I link to those mp3 files from enclosure elements in the RSS feed, which is what makes it a podcast. I also point to the mp3 files from audio elements on the individual episode pages—just above the transcript of each episode. Here’s the page for the most recent episode.
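
Inside the feed, each item carries an enclosure element along these lines (a sketch; the title and length value are made up):

<item>
  <title>An episode of the Clearleft podcast</title>
  <enclosure
    url="https://clearleft-audio.s3.amazonaws.com/podcast/season01episode06.mp3"
    length="12345678"
    type="audio/mpeg"/>
</item>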

I also want people to be able to download the mp3 file directly if they want (or if they want to huffduff an episode). So I provide a link to the mp3 file with a good ol’-fashioned a element with an href attribute.

I throw in one more attribute on that link. The download attribute tells the browser that the URL in the href attribute should be downloaded instead of visited. If you give a value for the download attribute, it will override the file name:

<a href="/files/ugly-file-name.xyz" download="nice-file-name.xyz">download</a>

Or you can use it as a Boolean attribute without any value if you’re happy with the file name:

<a href="/files/nice-file-name.xyz" download>download</a>

There’s one catch though. The download attribute only works for files on the same origin. That’s an issue for me. My site is podcast.clearleft.com but my audio files are hosted on clearleft-audio.s3.amazonaws.com—the download attribute will be ignored and the mp3 files will play in the browser instead of downloading.

Trys pointed me to the solution. It turns out that Netlify can do some server-side processing. It can do redirects.

I added a file called _redirects to the root of my project. It contains one line:

/download/*  https://clearleft-audio.s3.amazonaws.com/podcast/:splat  200

That says that any URLs beginning with /download/ should redirect to clearleft-audio.s3.amazonaws.com/podcast/. Everything after the closing slash is captured with that wild card asterisk. That’s then passed along to the redirect URL as :splat. That’s a new one on me. I hadn’t come across that terminology, but as someone who can never remember the syntax of regular expressions, it works for me.
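
To make that concrete, a request for one of the episode files maps like this:

request: /download/season01episode06.mp3
serves:  https://clearleft-audio.s3.amazonaws.com/podcast/season01episode06.mp3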

Oh, and the 200 at the end is the status code: okay.

Now I can use this /download/ path in my link:

<a href="/download/season01episode06.mp3" download>Download mp3</a>

Because this URL is on the same origin, the download attribute works just fine.

Tuesday, July 28th, 2020

Google’s Top Search Result? Surprise! It’s Google – The Markup

I’ve been using Duck Duck Go for ages so I didn’t realise quite how much of a walled garden Google search has become.

41% of the first page of Google search results is taken up by Google products.

This is some excellent reporting. The data and methodology are entirely falsifiable so feel free to grab the code and replicate the results.

Note the fear with which publishers talk about Google (anonymously). It’s the same fear that app developers exhibit when talking about Apple (anonymously).

Ain’t centralisation something?

Wednesday, July 15th, 2020

The Shape Of The Machine « blarg?

On AMP:

Google could have approached the “be better on mobile” problem, search optimization and revenue sharing any number of ways, obviously, but the one they’ve chosen and built out is the one that guarantees that either you let them middleman all of your traffic or they cut off your oxygen.

There’s also this observation, which is spot-on:

Google has managed to structure this surveillance-and-value-extraction machine entirely out of people who are convinced that they, personally, are doing good for the world. The stuff they’re working on isn’t that bad – we’ve got such beautiful intentions!

Thursday, April 30th, 2020

Dams Public Website

I had the great pleasure of visiting the Museum Plantin-Moretus in Antwerp last October. Their vast collection of woodblocks is available to download in high resolution (and they’re in the public domain).

14,000 examples of true craftsmanship, drawings masterly cut in wood. We are supplying this impressive collection of woodcuts in high resolution. Feel free to browse as long as you like, get inspired and use your creativity.

Friday, February 28th, 2020

Why the GOV.UK Design System team changed the input type for numbers - Technology in government

Some solid research here. Turns out that using input type="text" inputmode="numeric" pattern="[0-9]*" is probably a better bet than using input type="number".
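
Spelled out as markup, that looks like this (the label text and name are mine, purely for illustration):

<label for="account-number">Account number</label>
<input type="text" inputmode="numeric" pattern="[0-9]*"
  id="account-number" name="account-number">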

Monday, February 3rd, 2020

Google Maps Hacks, Performance and Installation, 2020 By Simon Weckert

I can’t decide if this is industrial sabotage or political protest. Either way, I like it.

99 second hand smartphones are transported in a handcart to generate virtual traffic jam in Google Maps. Through this activity, it is possible to turn a green street red which has an impact in the physical world by navigating cars on another route to avoid being stuck in traffic.

Saturday, January 25th, 2020

Draw all roads in a city at once

A lovely little bit of urban cartography.

Saturday, August 3rd, 2019

Standard Ebooks: Free and liberated ebooks, carefully produced for the true book lover.

Books in the public domain, lovingly designed and typeset, available in multiple formats for free. Great works of fiction from Austen, Conrad, Stevenson, Wells, Hardy, Doyle, and Dickens, along with classics of non-fiction like Darwin’s The Origin of Species and Shackleton’s South!

Thursday, August 1st, 2019

Navigation preloads in service workers

There’s a feature in service workers called navigation preloads. It’s relatively recent, so it isn’t supported in every browser, but it’s still well worth using.

Here’s the problem it solves…

If someone makes a return visit to your site, and the service worker you installed on their machine isn’t already running, the service worker boots up and then executes its instructions. If those instructions say “fetch the page from the network”, then you’re basically telling the browser to do what it would’ve done anyway if there were no service worker installed. The only difference is that there’s been a slight delay because the service worker had to boot up first.

  1. The service worker activates.
  2. The service worker fetches the file.
  3. The service worker does something with the response.

It’s not a massive performance hit, but it’s still a bit annoying. It would be better if the service worker could boot up and still be requesting the page at the same time, like it would do if no service worker were present. That’s where navigation preloads come in.

  1. The service worker activates while simultaneously requesting the file.
  2. The service worker does something with the response.

Navigation preloads—like the name suggests—are only initiated when someone navigates to a URL on your site, either by following a link, or a bookmark, or by typing a URL directly into a browser. Navigation preloads don’t apply to requests made by a web page for things like images, style sheets, and scripts. By the time a request is made for one of those, the service worker is already up and running.

To enable navigation preloads, call the enable() method on registration.navigationPreload during the activate event in your service worker script. But first do a little feature detection to make sure registration.navigationPreload exists in this browser:

if (registration.navigationPreload) {
  addEventListener('activate', activateEvent => {
    activateEvent.waitUntil(
      registration.navigationPreload.enable()
    );
  });
}

If you’ve already got event listeners on the activate event, that’s absolutely fine: addEventListener isn’t exclusive—you can use it to assign multiple tasks to the same event.

Now you need to make use of navigation preloads when you’re responding to fetch events. So if your strategy is to look in the cache first, there’s probably no point enabling navigation preloads. But if your default strategy is to fetch a page from the network, this will help.

Let’s say your current strategy for handling page requests looks like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        // maybe cache this response for later here.
        return responseFromFetch;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

That’s a fairly standard strategy: try the network first; if that doesn’t work, try the cache; as a last resort, show an offline page.

It’s that first step (“try the network first”) that can benefit from navigation preloads. If a preload request is already in flight, you’ll want to use that instead of firing off a new fetch request. Otherwise you’re making two requests for the same file.

To find out if a preload request is underway, you can check for the existence of the preloadResponse promise, which will be made available as a property of the fetch event you’re handling:

fetchEvent.preloadResponse

If that exists, you’ll want to use it instead of fetch(request).

if (fetchEvent.preloadResponse) {
  // do something with fetchEvent.preloadResponse
} else {
  // do something with fetch(request)
}

You could structure your code like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    if (fetchEvent.preloadResponse) {
      fetchEvent.respondWith(
        fetchEvent.preloadResponse
        .then( responseFromPreload => {
          // maybe cache this response for later here.
          return responseFromPreload;
        })
        .catch( preloadError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    } else {
      fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
          // maybe cache this response for later here.
          return responseFromFetch;
        })
        .catch( fetchError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    }
  }
});

But that’s not very DRY. Your logic is identical, regardless of whether the response is coming from fetch(request) or from fetchEvent.preloadResponse. It would be better if you could minimise the amount of duplication.

One way of doing that is to abstract away the promise you’re going to use into a variable. Let’s call it retrieve. If a preload is underway, we’ll assign it to that variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
}

If there is no preload happening (or this browser doesn’t support it), assign a regular fetch request to the retrieve variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
} else {
  retrieve = fetch(request);
}

If you like, you can squash that into a ternary operator:

const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);

Use whichever syntax you find more readable.

Now you can apply the same logic, regardless of whether retrieve is a navigation preload or a fetch request:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
    fetchEvent.respondWith(
      retrieve
      .then( responseFromRetrieve => {
        // maybe cache this response for later here.
        return responseFromRetrieve;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

I think that’s the least invasive way to update your existing service worker script to take advantage of navigation preloads.

Like I said, navigation preloads can give a bit of a performance boost if you’re using a network-first strategy. That’s what I’m doing here on adactio.com and on thesession.org, so I’ve updated their service workers to take advantage of navigation preloads. But on Resilient Web Design, which uses a cache-first strategy, there wouldn’t be much point enabling navigation preloads.

Jeff Posnick made this point in his write-up of bringing service workers to Google search:

Adding a service worker to your web app means inserting an additional piece of JavaScript that needs to be loaded and executed before your web app gets responses to its requests. If those responses end up coming from a local cache rather than from the network, then the overhead of running the service worker is usually negligible in comparison to the performance win from going cache-first. But if you know that your service worker always has to consult the network when handling navigation requests, using navigation preload is a crucial performance win.

Oh, and those browsers that don’t yet support navigation preloads? No problem. It’s a progressive enhancement. Everything still works just like it did before. And having a service worker on your site in the first place is itself a progressive enhancement. So enabling navigation preloads is like a progressive enhancement within a progressive enhancement. It’s progressive enhancements all the way down!

By the way, if all of this service worker stuff sounds like gibberish, but you wish you understood it, I think my book, Going Offline, will prove quite valuable.