
Tuesday, November 12th, 2024

The meaning of “AI”

There are different kinds of buzzwords.

Some buzzwords are useful. They take a concept that would otherwise require a sentence of explanation and package it up into a single word or phrase. Back in the day, “ajax” was a pretty good buzzword.

Some buzzwords are worse than useless. This is when a word or phrase lacks definition. You could say this buzzword in a meeting with five people, and they’d all understand five different meanings. Back in the day, “web 2.0” was a classic example of a bad buzzword—for some people it meant a business model; for others it meant rounded corners and gradients.

The worst kind of buzzwords are the ones that actively set out to obfuscate any actual meaning. “The cloud” is a classic example. It sounds cooler than saying “a server in Virginia”, but it also sounds like the exact opposite of what it actually is. Great for marketing. Terrible for understanding.

“AI” is definitely not a good buzzword. But I can’t quite decide if it’s merely a bad buzzword like “web 2.0” or a truly terrible buzzword like “the cloud”.

The biggest problem with the phrase “AI” is that there’s a name collision.

For years, the term “AI” has been used in science fiction. HAL 9000. Skynet. Examples of artificial general intelligence.

Now the term “AI” is also used to describe large language models. But there is no connection between this use of the term “AI” and the science-fictional usage.

This leads to the ludicrous situation of otherwise-rational people wanting to discuss the dangers of “AI”, but instead of talking about the rampant exploitation and energy usage endemic to current large language models, they spend the time talking about the sci-fi scenarios of runaway “AI”.

To understand how ridiculous this is, I’d like you to imagine if we had started using a different buzzword in another setting…

Suppose that when ride-sharing companies like Uber and Lyft were starting out, they had decided to label their services as Time Travel. From a marketing point of view, it even makes sense—they get you from point A to point B lickety-split.

Now imagine if otherwise-sensible people began to sound the alarm about the potential harms of Time Travel. Given the explosive growth we’ve seen in this sector, sooner or later they’ll be able to get you to point B before you’ve even left point A. There could be terrible consequences from that—we’ve all seen the sci-fi scenarios where this happens.

Meanwhile the actual present-day harms of ride-sharing services around worker exploitation would be relegated to the sidelines. Clearly that isn’t as important as the existential threat posed by Time Travel.

It sounds ludicrous, right? It defies common sense. Just because a vehicle can get you somewhere fast today doesn’t mean it’s inevitably going to be able to break the laws of physics any day now, simply because it’s called Time Travel.

And yet that is exactly the nonsense we’re being fed about large language models. We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

It’s almost as if the labelling of the current technologies was more about marketing than accuracy.

Sunday, June 16th, 2024

Your brain does not process information and it is not a computer | Aeon Essays

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Saturday, May 25th, 2024

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Thursday, May 2nd, 2024

It’s OK to Say if You Went Back in Time and Killed Baby Hitler — Big Echo

Primer was a film about a start-up …and time travel. This is a short story about big tech …and time travel.

Thursday, April 18th, 2024

Invisible success – Eric Bailey

Snook’s Law in action:

Big, flashy things get noticed. Quiet, boring things don’t.

There isn’t much infrastructure in place to quantify the constant, silent, boring, predictable, round-the-clock passive successes of this aspect of design systems after the initial effort is complete.

A lack of bug reports, accessibility issues, design tweaks, etc. are all objectively great, but there are no easy datapoints you can measure here.

Wednesday, April 10th, 2024

Ad revenue

It’s been dispiriting but unsurprising to see American commentators weigh in on the EU’s Digital Markets Act. I really wish they’d read Baldur’s excellent explainer first.

John has been doing his predictable “leave Britney alone!” schtick with regards to Apple (and in this case, Google and Facebook too). Ian Betteridge does an excellent job of setting him straight:

A lot of commentators seem to have the same issue as John: that it’s weird that a governmental body can or should define how products should be designed.

But governments mandate how products are designed all the time, and not just in the EU. Take another market which is pretty big: cars. All cars have to feature safety equipment, which varies from region to region but will broadly include everything from seatbelts to crumple zones. Cars have rules for emissions, for fuel efficiency, all of which are designing how a car should work.

But there’s one assumption in John’s post that Ian didn’t push back on. John said:

It’s certainly possible that Meta can devise ways to serve non-personalized contextual ads that generate sufficient revenue per user.

That comes with a footnote:

One obvious solution would be to show more ads — a lot more ads — to make up for the difference in revenue. So if contextual ads generate, say, one-tenth the revenue of targeted ads, Meta could show 10 times as many ads to users who opt out of targeting. I don’t think 10× is an outlandish multiplier there — given how remarkably profitable Meta’s advertising business is, it might even need to be higher than that.

It’s almost like an article of faith that behavioural advertising is more effective than contextual advertising. But there’s no data to support this. Quite the opposite. I wrote about this four years ago.

Once again, I urge you to read the excellent analysis by Jesse Frederik and Maurits Martijn.

There’s also Tim Hwang’s book, Subprime Attention Crisis:

From the unreliability of advertising numbers and the unregulated automation of advertising bidding wars, to the simple fact that online ads mostly fail to work, Hwang demonstrates that while consumers’ attention has never been more prized, the true value of that attention itself—much like subprime mortgages—is wildly misrepresented.

More recently Dave Karpf said what we’re all thinking:

The thing I want to stress about microtargeted ads is that the current version is perpetually trash, and we’re always just a few years away from the bugs getting worked out.

The EFF are calling for a ban. Should that happen, the sky would not fall. Contrary to what John thinks, revenue would not plummet. Contextual advertising works just fine …without the need for invasive surveillance and tracking.

Like I said:

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Tuesday, April 2nd, 2024

Revisiting Metadesign for Murph – Smithery

I’m really excited about John’s talk at this year’s UX London. Feels like a good time to revisit his excellent talk from dConstruct 2015:

I’m going to be opening up the second day of UX London 2024, 18th-20th June. As part of that talk, I’ll be revisiting a talk called Metadesign for Murph which I gave at dConstruct in 2015. It might be one of my favourite talks that I’ve ever given.

Tuesday, March 26th, 2024

Fidinpamp

If you’re a fan of gratuitous initialisms, you’ll love Google’s core web vitals. Just get a load of the obfuscation in the important-sounding metrics like CLS, FCP, LCP, and more.

To be fair to Google, this is a problem in the web performance world in general. Practitioners prefer to talk about TTFB rather than “time to first byte” even though both contain exactly the same number of syllables.

The big news in the web performance community this month is the arrival of a new initialism. INP sounds like one of those pseudo-scientific psychological profiles, but it’s meant to stand for Interaction to Next Paint (even if they were to swear off pointless initialisms, you’d still have to pry Pointless Capitalisation from Google’s cold dead hands).

This new metric is a welcome one. It’s replacing first input delay. Sorry, First Input Delay, or FID, one of the few web vital initialisms that can be spoken as a word, making it a true acronym (fortunately fid’s successor, inp, also works as an acronym).

First Input Delay has long outstayed its welcome. It was always an outlier in the core web vitals. It didn’t seem to measure anything actually useful. I know it sounds like it’s measuring the delay until the user can interact with a web page, but when you dive into what it actually does, it’s a mess:

FID measures the time from when a user first interacts with a page (that is, when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.

See that word “begin” in there? It’s doing a lot of work. First Input Delay doesn’t measure the lag between the user interaction and the browser response; it only measures the lag between the user interaction and the browser beginning to respond. The actual response could take ages, but that lag doesn’t get measured. Unlike the other core web vitals, this metric is very far removed from what actually matters to the user’s experience.
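If you want to see the difference for yourself, the underlying browser APIs make it fairly plain. Here’s a minimal sketch (not the official web-vitals library) using PerformanceObserver: the first-input entry type gives you FID as the gap between startTime and processingStart, while the event entry type that INP draws on reports a duration that runs all the way to the next paint.

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // FID: only the delay before the browser *begins* processing the interaction.
    console.log('FID:', entry.processingStart - entry.startTime, 'ms');
  }
}).observe({ type: 'first-input', buffered: true });

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Event timing: duration covers the interaction through to the next paint,
    // which is what INP is based on (the real INP calculation is more involved).
    console.log('Interaction duration:', entry.duration, 'ms');
  }
}).observe({ type: 'event', buffered: true, durationThreshold: 40 });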

What the fid were they thinking? How the fid did this measurement ever get included in core web vitals in the first place?

Well, feel free to take what I’m about to say as pure gossip, but I have my sources, I trust ’em, and no, I’m not going to reveal ’em…

It’s because of AMP.

Remember Google AMP? An acronym so pointless they eventually just forgot it ever stood for anything?

The AMP project ended up doing incredible damage to Google’s developer relations. By colluding with the search team to privilege the appearance of AMP pages in the top news carousel, Google effectively blackmailed the entire publishing industry into using their format.

In the end, it didn’t work. It was a shit format. All they did was foster resentment and animosity:

AMP seems to have faded away. Most publishers have started dropping support, and even Google doesn’t seem to care much anymore.

It turns out that Google search wasn’t the only team infected by AMP. The core web vitals team also had to play ball.

Originally they had a genuinely useful metric for measuring the lag between input and response. But guess which pages did terribly? That’s right: AMP pages.

Rather than ship an actually-useful measurement, the core web vitals team instead had to include the broken First Input Delay, brainchild of a certain someone on the AMP team.

Now it all makes sense.

So good riddance to FID. Welcome to INP. And here’s hoping it won’t be much longer till we’re finally burying AMP.

Thursday, February 22nd, 2024

PageSpeed Insights bookmarklet

I’m a little obsessed with web performance. I like being able to check a page’s core web vitals quickly and easily.

Four years ago, I made a Lighthouse bookmarklet. Whatever web page you were on, when you clicked on the bookmarklet you’d get the Lighthouse results for that page. Handy!

It doesn’t work anymore. This is probably because Google are in the loop. Four years is pretty good innings for anything involving that company.

I kid (mostly). Lighthouse itself is still going strong, despite being a Google product. But the bookmarklet needs updating.

Rather than just get Lighthouse results, I figured that the full PageSpeed Insights results would be even better. If your website is in the Chrome UX report, you get to see those CrUX details too.

So here’s the updated bookmarklet:

PageSpeed Insights

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you want to test the page you’re on.
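If you’re curious what’s inside a bookmarklet like that, it boils down to a one-liner. Here’s a rough sketch, assuming the PageSpeed Insights report page accepts the address of the page you want to test in a url query parameter:

javascript:(function () {
  // Open the PageSpeed Insights report for whatever page you're currently on.
  // The report URL format here is an assumption; adjust it if PageSpeed Insights changes.
  window.open('https://pagespeed.web.dev/report?url=' + encodeURIComponent(document.location.href));
})();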

Tuesday, November 14th, 2023

Pluralistic: The (open) web is good, actually (13 Nov 2023) – Pluralistic: Daily links from Cory Doctorow

The web wasn’t inevitable – indeed, it was wildly improbable. Tim Berners-Lee’s decision to make a new platform that was patent-free, open and transparent was a complete opposite approach to the strategy of the media companies of the day. They were building walled gardens and silos – the dialup equivalent to apps – organized as “branded communities.” The way I experienced it, the web succeeded because it was so antithetical to the dominant vision for the future of the internet that the big companies couldn’t even be bothered to try to kill it until it was too late.

Companies have been trying to correct that mistake ever since.

A great round-up from Cory, featuring heavy dollops of Anil and Aaron.

Sunday, November 12th, 2023

[this is aaronland] generative adversarial therapy sessions

Mobile phones and the “app economy”, an environment controlled by exactly two companies and designed to extract a commission from almost every interaction and to promote native and not-portable applications over web applications. But we also see the same behaviour from so-called “native to the web” companies like Facebook who have explicitly monetized reach, access and discovery. Facebook is also the company that gave the world React which is difficult not to understand as deliberate attempt to embrace and extend, to redefine, HTML itself.

Perversely, nearly everything about the mobile/app economy is built on, and designed to use, HTTP precisely because it’s a common and easy to implement standard free and unencumbered by licensing.

Wednesday, October 18th, 2023

One man playing flute with his eyes closed while two fiddlers listen.

Listening to tunes on the flute.

Thursday, October 12th, 2023

event.target.closest

Eric mentioned the JavaScript closest method. I use it all the time.

When I wrote the book DOM Scripting back in 2005, I’d estimate that 90% of the JavaScript I was writing boiled down to:

  1. Find these particular elements in the DOM and
  2. When the user clicks on one of them, do something.

It wasn’t just me either. I reckon that was 90% of most JavaScript on the web: progressive disclosure widgets, accordions, carousels, and so on.

That’s one of the reasons why jQuery became so popular. That first step (“find these particular elements in the DOM”) used to be a finicky affair involving getElementsByTagName, getElementById, and other long-winded DOM methods. jQuery came along and allowed us to use CSS selectors.

These days, we don’t need jQuery for that because we’ve got querySelector and querySelectorAll (and we can thank jQuery for their existence).

Let’s say you want to add some behaviour to every button element with a class of special. Or maybe you use a data- attribute instead of the class attribute; the same principle applies. You want something special to happen when the user clicks on one of those buttons.

  1. Use querySelectorAll('button.special') to get a list of all the right elements,
  2. Loop through the list, and
  3. Attach addEventListener('click') to each element.
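In code, those three steps look something like this:

const buttons = document.querySelectorAll('button.special');
buttons.forEach((button) => {
  // One listener per special button.
  button.addEventListener('click', (event) => {
    // do something
  });
});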

That’s fine for a while. But if you’ve got a lot of special buttons, you’ve now got a lot of event listeners. You might be asking the browser to do a lot of work.

There’s another complication. The code you’ve written runs once, when the page loads. Suppose the contents of the page have changed in the meantime. Maybe elements are swapped in and out using Ajax. If a new special button shows up on the page, it won’t have an event handler attached to it.

You can switch things around. Instead of adding lots of event handlers to lots of elements, you can add one event handler to the root element. Then figure out whether the element that just got clicked is special or not.

That’s where closest comes in. It makes this kind of event handling pretty straightforward.

To start with, attach the event listener to the document:

document.addEventListener('click', doSomethingSpecial, false);

That function doSomethingSpecial will be executed whenever the user clicks on anything. Meanwhile, if the contents of the document are updated via Ajax, no problem: the listener is attached to the document itself, so any new special buttons are covered automatically.

Use the closest method in combination with the target property of the event to figure out whether that click was something you’re interested in:

function doSomethingSpecial(event) {
  if (event.target.closest('button.special')) {
    // do something
  }
}

There you go. Like querySelectorAll, the closest method takes a CSS selector—thanks again, jQuery!

Oh, and if you want to reduce the nesting inside that function, you can reverse the logic and return early like this:

function doSomethingSpecial(event) {
  if (!event.target.closest('button.special')) return;
  // do something
}

There’s a similar method to closest called matches. But that will only work if the user clicks directly on the element you’re interested in. If the click lands on something nested inside that element, matches won’t find a match, but closest will.
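Picture a special button with an icon nested inside it. If the user clicks on the icon, event.target is the icon, not the button:

// Markup: <button class="special"><span class="icon">★</span> Do something special</button>
function doSomethingSpecial(event) {
  console.log(event.target.matches('button.special'));  // false: the target is the span
  console.log(event.target.closest('button.special'));  // the button: closest walks up from the span
}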

Like Eric said:

Very nice.

Monday, June 12th, 2023

A history of metaphors for the internet - The Verge

No one says “information superhighway” anymore, but whenever anyone explains net neutrality, they do so in terms of fast lanes and tolls. Twitter is a “town square,” a metaphor that was once used for the internet as a whole. These old metaphors had been joined by a few new ones: I have a feeling that “the cloud” will soon feel as dated as “cyberspace.”

Monday, May 8th, 2023

Tragedy

There are two kinds of time-travel stories.

There are time-travel stories that explore the many-worlds hypothesis. Going back in time and making a change forks the universe. But the universe is constantly forking anyway. So effectively the time travel is a kind of universe-hopping (there’s a big crossover here with the alternative history subgenre).

The problem with multiverse stories is that there’s always a reset available. No matter how bad things get, there’s a parallel universe where everything is hunky dory.

The other kind of time travel story explores the idea of a block universe. There is one single timeline.

This is what you’ll find in Tenet, for example, or for a beautiful reduced test case, the Ted Chiang short story What’s Expected Of Us. That gets straight to the heart of the biggest implication of a block universe—the lack of free will.

There’s no changing what has happened or what will happen. In fact, the very act of trying to change the past often turns out to be the cause of what you’re trying to prevent in the present (like in Twelve Monkeys).

I’ve often referred to these single-timeline stories as being like Greek tragedies. But only recently—as I’ve been reading quite a bit of Greek mythology—have I realised that the reverse is also true:

Greek tragedies are time-travel stories.

Hear me out…

Time-travel stories aren’t actually about physically travelling in time. That’s just a convenience for the important part—information travelling in time. That’s at the heart of most time-travel stories: information from the future travelling back to the past.

William Gibson’s The Peripheral—very much a many-worlds story with its alternate universe “stubs”—takes this to its extreme. Nothing physical ever travels in time. But in an age of telecommuting, nothing has to. Our time travellers are remote workers.

That book also highlights the power dynamics inherent in information wealth. Knowledge of the future gives you an advantage that you can exploit in the past. This is what Mark Twain’s Connecticut yankee does in King Arthur’s court.

This power dynamic is brilliantly inverted in Octavia Butler’s Kindred. No amount of information can help you if your place in society is determined by the colour of your skin.

Anyway, the point is that information flow is what matters in time-travel stories. Therefore any story where information travels backwards in time is a time-travel story.

That includes any story with a prophecy. A prophecy is information about the future, like:

Oedipus will kill his father and marry his mother.

You can try to change your fate, but you’ll just end up triggering it instead.

Greek tragedies are time-travel stories.

Monday, March 6th, 2023

Like

We use metaphors all the time. To quote George Lakoff, we live by them.

We use analogies some of the time. They’re particularly useful when we’re wrapping our heads around something new. By comparing something novel to something familiar, we can make a shortcut to comprehension, or at least, categorisation.

But we need a certain amount of vigilance when it comes to analogies. Just because something is like something else doesn’t mean it’s the same.

With that in mind, here are some ways that people are describing generative machine learning tools. Large language models are like…

Tuesday, January 24th, 2023

Efficiency over performance

I quite like this change of terminology when it comes to making fast websites. After all, performance can sound like a process of addition, whereas efficiency can be a process of subtraction.

The term ‘performance’ brings to mind exotic super-cars suitable only for impractical demonstrations (or ‘performances’). ‘Efficiency’ brings to mind an electric car (or even better, a bicycle), making effective use of limited resources.