Tags: seamful

Sunday, March 19th, 2023

ongoing by Tim Bray · The LLM Problem

It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.

Yes! Like I said:

Expose the wires. Show the workings-out.

Tuesday, March 14th, 2023

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our heads.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
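As a very rough sketch of that content-design approach, assuming purely for illustration that each generated statement could somehow be paired with a confidence score (a big assumption, as discussed below), the marking-up could be as simple as this; the thresholds and the hedge function here are invented for the example:

```typescript
// Illustrative only: pair each generated statement with a hypothetical
// confidence score (0 to 1) and prefix the shakier ones with a hedge.
function hedge(statement: string, confidence: number): string {
  if (confidence > 0.9) {
    return statement;
  }
  if (confidence > 0.6) {
    return `Maybe: ${statement}`;
  }
  if (confidence > 0.3) {
    return `Perhaps: ${statement}`;
  }
  return `It's possible but unlikely: ${statement}`;
}

// Example output: "Perhaps: She wrote three novels before turning to poetry."
console.log(hedge("She wrote three novels before turning to poetry.", 0.4));
```

The same mapping could just as easily drive a typographic treatment instead of a phrase.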

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Wednesday, October 26th, 2022

Programming Portals

A terrific piece by Maggie Appleton that starts with a comparison of graphical user interfaces and command line tools—which reminds me of the trade-offs between seamless and seamful design—and then moves into a proposed paradigm for declarative design tools:

Small, scoped areas within a graphical interface that allow users to read and write simple programmes

Wednesday, March 31st, 2021

Design as (un)ethical illusion

Many, if not all, of our world’s most wicked problems are rooted in the excessive hiding of complexity behind illusions of simplicity—the relentless shielding of messy details in favor of easy-to-use interfaces.

Seams.

But there’s always a tradeoff between complexity, truth, and control. The more details are hidden, the harder it is to understand how the system actually works. (And the harder it is to control). The map becomes less and less representative of the territory. We often trade completeness and control for simplicity. We’d rather have a map that’s easy to navigate than a map that shows us every single detail about the territory. We’d rather have a simple user interface than an infinitely flexible one that exposes a bunch of switches and settings. We don’t want to have to think too hard. We just want to get where we’re going.

Seamful and seamless design are reframed here as ethical and deceptive design:

Ethical design is like a glove. It obscures the underlying structure (i.e. your hand) but preserves some truth about its shape and how it works. Deceptive design is like a mitten. It obscures the underlying structure and also hides a lot about its shape and how it works.

Saturday, March 27th, 2021

Middle Management — Real Life

The introduction to this critique of Keller Easterling’s Medium Design is all about seams:

Imagine the tech utopia of mainstream science fiction. The bustle of self-driving cars, helpful robot assistants, and holograms throughout the sparkling city square immediately marks this world apart from ours, but something else is different, something that can only be described in terms of ambiance. Everything is frictionless here: The streets are filled with commuters, as is the sky, but the vehicles attune their choreography to one another so precisely that there is never any traffic, only an endless smooth procession through space. The people radiate a sense of purpose; they are all on their way somewhere, or else, they have already arrived. There’s an overwhelming amount of activity on display at every corner, but it does not feel chaotic, because there is no visible strife or deprivation. We might appreciate its otherworldly beauty, but we need not question the underlying mechanics of this utopia — everything works because it was designed to work, and in this world, design governs the space we inhabit as surely and exactly as the laws of physics.

Friday, October 23rd, 2020

This page is a truly naked, brutalist html quine.

I think this is quite beautiful—no need to view source; the style sheet is already in the document.

Friday, August 28th, 2020

Revisiting Adaptive Design, a lost design movement (Interconnected)

This sounds like seamful design:

How to enable not users but adaptors? How can people move from using a product, to understanding how it hangs together and making their own changes? How do you design products with, metaphorically, screws not nails?

Make Me Think | Jim Nielsen’s Weblog

The removal of all friction shouldn’t be a goal. Making things easy and making things hard should be a design tool, employed to aid the end user towards their loftiest goals.

Friday, July 24th, 2020

Make me think! – Ralph Ammer

This is about seamful design.

We need to know things better if we want to be better.

It’s also about progressive enhancement.

Highly sophisticated systems work flawlessly, as long as things go as expected.

When a problem occurs which hasn’t been anticipated by the designers, those systems are prone to fail. The more complex the systems are, the higher are the chances that things go wrong. They are less resilient.

Friday, August 2nd, 2019

Seamful Design and Ubicomp Infrastructure (PDF)

Seams:

Seamful design involves deliberately revealing seams to users, and taking advantage of features usually considered as negative or problematic.

Wednesday, December 12th, 2018

Is Tech Too Easy to Use? - The New York Times

Seams!

Of all the buzzwords in tech, perhaps none has been deployed with as much philosophical conviction as “frictionless.” Over the past decade or so, eliminating “friction” — the name given to any quality that makes a product more difficult or time-consuming to use — has become an obsession of the tech industry, accepted as gospel by many of the world’s largest companies.

Tuesday, February 27th, 2018

On Weaponised Design - Our Data Our Selves

A catalogue of design decisions that have had harmful effects on users. This is a call for more inclusive design, but also a warning on the fetishisation of seamlessness:

The focus on details and delight can be traced to manifestos like Steve Krug’s Don’t Make Me Think, which propose a dogmatic adherence to cognitive obviousness and celebrates frictionless interaction as the ultimate design accomplishment.

Wednesday, July 26th, 2017

Seamfulness

I was listening to some items in my Huffduffer feed when I noticed a little bit of synchronicity.

First of all, I was listening to Tom talking about Thington, and he mentioned seamful design—the idea that “seamlessness” is not necessarily a desirable quality. I think that’s certainly true in the world of connected devices.

Then I listened to Jeff interviewing Matt about hardware startups. They didn’t mention seamful design specifically (it was more all cricket and cables), but again, I think it’s a topic that’s lurking behind any discussion of the internet of things.

I’ve written about seams before. I really feel there’s value—and empowerment—in exposing the points of connection in a system. When designers attempt to airbrush those seams away, I worry that they are moving from “Don’t make me think” to “Don’t allow me to think”.

In many ways, aiming for seamlessness in design feels like the easy way out. It’s a surface-level approach that literally glosses over any deeper problems. I think it might be driven by an underlying assumption that seams are, by definition, ugly. Certainly there are plenty of daily experiences where the seams are noticeable and frustrating. But I don’t think it needs to be this way. The real design challenge is to make those seams beautiful.

Monday, May 12th, 2014

Seams

You can listen to an audio version of Seams.

“The function of science fiction,” said Ray Bradbury, “is not only to predict the future, but to prevent it.”

Dystopias are the default setting for science fiction. It’s rare to find utopian sci-fi, and when you do—as in the post-singularity Culture novels of Iain M. Banks—there’s always more than a germ of dystopia; the ustopias that Margaret Atwood speaks of.

You’ve got your political dystopias—1984 and all its imitators. Then there are alien invasion dystopias, machine-intelligence dystopias, and a whole slew of post-apocalyptic dystopias: nuclear war, pandemic disease, environmental collapse, genetic engineering …take your pick. From the cosy catastrophes of John Wyndham to Cormac McCarthy’s The Road, this is the stock-in-trade of speculative fiction.

Of all these undesirable futures, the one that troubles me more than any other is the Wall·E dystopia. I’m not talking about the environmental wasteland depicted on Earth. I mean the dystopia depicted aboard the generation starship The Axiom. Here, humanity’s every need is catered to without requiring any thought. And so humanity atrophies, becoming physically obese and intellectually lazy.

It’s not a new idea. H. G. Wells had already shown us a distant future like this in his classic novel The Time Machine. In the far future of that book’s timeline, humanity splits into two. The savagery of the cannibalistic Morlocks is contrasted with the docile passive stupidity of the Eloi, but as Jaron Lanier points out, both endpoints are equally horrific.

In Wall·E, the Eloi have advanced technology. Their technology has been designed according to a design principle enshrined in the title of a Dead Kennedys album: Give Me Convenience Or Give Me Death.

That’s the reason why the Wall·E dystopia disturbs me so much. It’s all-too believable. For many years now, the rallying cry of digital designers has been epitomised by the title of Steve Krug’s terrific book, Don’t Make Me Think. But what happens when that rallying cry is taken too far? What happens when it stops being “don’t make me think while I’m trying to complete a task” and becomes simply “don’t make me think” full stop?

Convenience. Ease of use. Seamlessness.

On the face of it, these all seem like desirable traits in digital and physical products alike. But they come at a price. When we design, we try to do the work so that the user doesn’t have to. We do the thinking so the user doesn’t have to. Don’t make the user think. But taken too far, that mindset becomes dangerous.

Marshall McLuhan said that every extension is also an amputation. As we augment the abilities of people to accomplish their tasks, we should be careful not to needlessly curtail what they can do:

Here we are, a society hell bent on extending our reach through phones, through computers, through “seamless integration” and yet all along the way we’re unwittingly losing perhaps as much as we gain. The mediums we create are built to carry out specific tasks efficiently, but by doing so they have a tendency to restrict our options for accomplishing that task by other means. We begin to learn the “One” way to do it, when in fact there are infinite ways. The medium begins to restrict our thinking, our imagination, our potential.

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

I see this a lot in the world of web development. We’re constantly faced with challenges like dealing with users on slow networks or small screens. So we try to come up with solutions (bandwidth media queries, responsive images) that have at their heart an assumption that we know better than the end user what they should get.

I’m not saying that everything should be an option in a menu for the user to figure out—picking smart defaults is very much part of our job. But I do think there’s real value in giving the user the final choice.

I remember Jake giving a good example of this. If he’s travelling and he’s on a 3G network on his phone, or using shitty hotel WiFi on his laptop, and someone sends him a link to a video of some cats, he doesn’t mind if he gets the low-quality version as long as he gets to see the feline shenanigans in short order. But if he’s in the same situation and someone sends him a link to the just-released trailer for the new Star Trek movie, he’s willing to wait for hours so that he can watch in high-definition.

That’s a choice. All too often, these kinds of choices are pre-made by designers and developers instead of being offered to the end user. We probably mean well, but there’s a real danger in assuming that, just because someone is using a particular device, we can infer what their context is:

Mind reading is no way to base fundamental content decisions.

My point is that while we don’t want to overwhelm the user with choice overload, we also need to be careful not to unintentionally remove valuable choices that can empower people. In our quest to make experiences seamless, we run the risk of also making those experiences rigid and inflexible.
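To make that concrete, here is one way the smart-default-plus-override idea could look in practice. This is only a sketch: the storage key and quality labels are invented, and navigator.connection is a non-standard API that needs feature-testing.

```typescript
// Sketch of "smart default, user override" for choosing video quality.
type Quality = "low" | "high";

function defaultQuality(): Quality {
  // navigator.connection is non-standard (Chromium only), so feature-test it.
  const connection = (navigator as any).connection;
  if (connection && (connection.saveData || connection.effectiveType === "2g")) {
    return "low";
  }
  return "high";
}

function chooseQuality(): Quality {
  // An explicit choice made by the user beats whatever we guessed.
  const preference = localStorage.getItem("preferred-video-quality");
  if (preference === "low" || preference === "high") {
    return preference;
  }
  return defaultQuality();
}
```

The guess is still there as a starting point, but the final say stays with the person watching the cat video (or the Star Trek trailer).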

The drive for a “seamless experience” has been used to justify some harsh amputations. When Twitter declared war on the very developers it used to champion, and changed its API and terms of service so that tweets had to be displayed the same way everywhere, it was done in the name of “a consistent user experience.” Twitter knows best.

The web is made up of parts and there are seams between those parts: HTML, HTTP, and URLs. The software that can expose or hide those seams is the web browser. Web browsers are made by human beings, and it’s the mindset and assumptions of those human beings that determine whether browsers enable users to make use of those seams or prevent them from doing so.

“View source” is a seam that exposes the HTML lying beneath every web page. That kind of X-ray vision can be quite powerful. Clearly it’s not an important feature for most users, but it is directly responsible for showing people how web pages are made …and intimating that anyone can do it. In the introduction to my first book I thanked “view source” along with my other teachers like Jeff Veen, Steve Champeon, and Jeffrey Zeldman.

These days, browsers don’t like to expose “view source” as easily as they once did. It’s hidden amongst the developer tools. There’s an assumption there that it’s not intended for regular users. The browser makers know best.

There are seams between the technologies that make up a web page: HTML, CSS, and JavaScript. The ability to enable or disable those layers can be empowering. It has become harder and harder to disable JavaScript in the browser. Another little amputation. The browser makers know best.

The CSS that styles web pages can be overridden by the end user. This is not a bug. It is a very powerful feature. That feature is being removed:

I understand that vendors can do whatever they want to control how you experience the web, because it is their software, their product, but removing user stylesheets feels sooo un-web to me, which is irony. A browser’s largest responsibility is to give people access to the web. It’s like the web is this open hand, but software is this closed fist.
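The underlying capability is simply the cascade at work: any stylesheet applied after the author’s can override it. In the absence of built-in user stylesheets, a bookmarklet or extension can approximate the same seam; the rules below are arbitrary examples.

```typescript
// Approximate a user stylesheet by appending overriding rules to the page.
const userStyles = document.createElement("style");
userStyles.textContent = `
  body {
    font-size: 1.25rem !important;
    background-color: #fffff0 !important;
  }
`;
document.head.appendChild(userStyles);
```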

Then there’s the URL. The ultimate seam.

Historically, browsers have exposed this seam, but now—just as with “view source” and user stylesheets—the visibility of the URL is being relegated to being a power-user tool.

The ultimate amputation.

The irony here is that the justification for this change is not the usual mantra of providing “a more seamless user experience.” Instead, the justification is supposedly security.

This strikes me as really strange. Security is the one area where seamlessness is definitely not a desirable characteristic. A secure system requires people to be mindful and aware of their situation. This is certainly true on the web, as Tom points out:

Hiding information away makes me less able to make decisions: it makes me a less informed user.

The whole reason that phishing is a problem is because users don’t pay any bloody attention to what they see in their location bar. Putting less information in the location bar makes the location bar less useful and thus there’s less point paying any attention to it.

Tom has hit on the fundamental mismatch here. Chrome is a piece of software that wants to provide a good user experience—“don’t make me think!”—while at the same time trying to make users mindful of their surroundings:

Security requires educated, pro-active, informed thinking users.

Usability is about making the whole process of using the web seamless and thoughtless: a child should be able to do it.

So from the security standpoint, obfuscating the URL is exactly the wrong thing to do.

In order to actually stay safe online, you need to see the “seams” of the web, you need to pay attention, use your brain.

Chrome knows best.

Making it harder to “view source” might seem like an inconsequential decision. Removing the ability to apply user stylesheets might seem like an inconsequential decision. Heck, even hiding the URL might seem like an inconsequential decision. But each one of those decisions has repercussions. And each one of those decisions reflects an underlying viewpoint.

Make no mistake, all software is political. We talk about opinionated software but really, all software is opinionated, whether we like it or not. Seemingly inconsequential interface decisions are actually reflections of assumptions, biases and beliefs.

As Nat points out, like all political decisions, this is about power:

There’s been much debate about whether the URLs are ‘ugly’ or ‘beautiful’ and whether people really understand them. This debate misses the point.

The URLs are the cornerstone of the interconnected, decentralised web. Removing the URLs from the browser is an attempt to expand and consolidate centralised power.

If that’s the case, then it really doesn’t matter what we think about Chrome removing visible URLs. What appears to be a design decision about the user interface is in fact a manifestation of a much deeper vision. It’s a vision of a future where people can have everything their heart desires without having to expend needless thought. It’s a bright future filled with seamless experiences.

Welcome aboard The Axiom.

Buy n Large knows best.