Beach daydreams, lost at sea (Interconnected)
Matt’s beach thoughts are like a satisfying susurrus in my RSS reader.
I was at the State Of The Browser event recently, which was great as always.
Manu gave a great talk about colour in CSS. A lot of it focused on OKLCH. I was already convinced of the benefits of this colour space after seeing a terrific talk by Anton Lovchikov a while back.
After Manu’s talk, someone mentioned that even though OKLCH is well supported in browsers now, it’s a shame that it isn’t (yet) in design tools like Figma. So designers are still handing over mock-ups with hex values.
I get the frustration, but in my experience it’s not that big a deal in practice. Here’s why: oklch() isn’t just a way of defining colours with lightness, chroma, and hue in CSS. It’s also a function. You can use the magical from keyword in this function to convert hex colours to l, c, and h:
--page-colour: oklch(from #49498D l c h);
So even if you’re being handed hex colour values, you can still use OKLCH in your CSS. Once you’re doing that, you can use all of the good stuff that comes with having those three values separated out, something that was theoretically possible with hsl, but problematic in practice.
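For instance, once those three values are separated out, you can derive a whole family of colours from one hex value. Here’s a minimal sketch of my own (the muted and accent custom properties are hypothetical, not from Manu’s talk):

--page-colour: oklch(from #49498D l c h);
--page-colour-muted: oklch(from #49498D l calc(c / 2) h);
--accent-colour: oklch(from #49498D l c calc(h + 180));

Halving the chroma gives a washed-out variant; rotating the hue by 180 degrees gives a complementary colour. All without ever leaving the hex value you were handed.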
Let’s say you want to encode something into your CSS like this: “if the user has specified that they prefer higher contrast, the background colour should be four times darker.”
@media (prefers-contrast: more) {
  :root {
    /* divide lightness by four for a much darker background */
    --page-colour: oklch(from #49498D calc(l / 4) c h);
  }
}
It’s really handy that you can use calc() within oklch(): functions within functions. And I haven’t even touched on the color-mix() function.
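To give just a taste of it, here’s a rough sketch of mixing in the OKLCH colour space; the property name and proportions are my own illustration:

--page-colour-hover: color-mix(in oklch, #49498D 80%, white);

Because the interpolation happens in OKLCH, the change in perceived lightness tends to be smoother and more predictable than mixing in sRGB.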
Hmmm …y’know, this is starting to sound an awful lot like functional programming:
Functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values.
Make yourself a nice cup of tea and settle in with Julian Gough’s magnum opus:
How early, sustained, supermassive black hole jets carved out cosmic voids, shaped filaments, and generated magnetic fields
I can’t recommend React to any project or customer anymore.
Using almost any other modern alternative, you will save time, money and nerves, even if you haven’t used them before.
Don’t stick to technology just because you know it.
This sums up my experience of companies and products trying to inject AI into the products I use to communicate with other people. It’s always just in the way, making stupid suggestions.
Many of us got excited about technology because of the web, and are discovering, latterly, that it was always the web itself — rather than technology as a whole — that we were excited about. The web is a movement: more than a set of protocols, languages, and software, it was always about bringing about a social and cultural shift that removed traditional gatekeepers to publishing and being heard.
The web is open, apps are closed. The majority of web users have installed an ad blocker (which is also a privacy blocker). But no one installs an ad blocker for an app, because it’s a felony to distribute that tool, because you have to reverse-engineer the app to make it. An app is just a website wrapped in enough IP so that the company that made it can send you to prison if you dare to modify it so that it serves your interests rather than theirs.
The tech bros advocating for generative AI to take over art are at the same level of cultural refinement as the characters in Severance. They’re creating apps to summarize books to people, tweeting from accounts with Greek statue profile pictures.
GenAI would automate Lumon’s cultural mission, allowing humans to sever themselves from the production of art and culture.
I miss being excited by technology. I wish I could see a way out of the endless hype cycles that continue to elicit little more than cynicism from me. The version of technology that we’re mostly being sold today has almost nothing to do with improving lives, but instead stuffing the pockets of those who already need for nothing. It’s not making us smarter. It’s not helping heal a damaged planet. It’s not making us happier or more generous towards each other. And it’s entrenched in everything — meaning a momentous challenge to re-wire or meticulously disconnect. I’m slowly finding my own ways of breaking free to regain a sense of self and purpose.
You can think of flying to Mars like one of those art films where the director has to shoot the movie in a single take. Even if no scene is especially challenging, the requirement that everything go right sequentially, with no way to pause or reshoot, means that even small risks become unacceptable in the aggregate.
You do not have to use generative AI.
AI itself cannot be held to account.
If you use AI, you are the one who is accountable for whatever you produce with it.
There are contexts in which it is immoral to use generative AI.
Correcting or fact checking generative AI may take longer than just doing a task yourself, or with conventional AI tools.
You do not have to use generative AI.
My main problem with AI is not that it creates ugly, immoral, boring slop (which it does). Nor even that it disenfranchises artists and impoverishes workers (though it does that too).
No, my main problem with AI is that its current pitch to the public is suffused with so much unsubstantiated bullshit that I cannot banish from my thoughts the sight of a well-dressed man peddling a miraculous talking dog.
Also, trust:
They’ve also managed to muddy the waters of online information gathering to the point that even if we scrubbed every trace of those hallucinations from the internet – a likely impossible task – the resulting lack of trust could never quite be purged. Imagine, if you will, the release of a car which was not only dangerous and unusable in and of itself, but which made people think twice before ever entering any car again, by any manufacturer, so long as they lived. How certain were you, five years ago, that an odd ingredient in an online recipe was merely an idiosyncratic choice by a quirky, or incompetent, chef, rather than a fatal addition by a robot? How certain are you now?
I Feel Like I’m Going Insane
Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.
Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble.
We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings.
Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves.
Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.
Want to use all those great features that have been landing in browsers over the past year or two? View transitions! Scroll-driven animations! So much more!
Well, your coding co-pilot is not going to be of any help.
Large language models, especially those on the scale of many of the most accessible, popular hosted options, take humongous datasets and long periods to train. By the time everything has been scraped and a dataset has been built, the set is on some level already obsolete. Then, before a model can reach the hands of consumers, time must be taken to train and evaluate it, and then even more to finally deploy it.
Once it has finally been released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap: a period between the present and the AI’s training cutoff. This gap opens a window between when a new technology emerges and when AI systems can effectively support user needs regarding its adoption, meaning that models will not be able to serve users requesting assistance with new technologies, thus disincentivising their use.
So we get this instead:
I’ve anecdotally noticed that many AI tools have a ‘preference’ for React and Tailwind when asked to tackle a web-based task, or even to create any app involving an interface at all.
A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:
Robin takes a fair and balanced look at the ethics of using large language models.
That’s how it came across to me: fair and balanced.
Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?
Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).
Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:
There is no path from language modelling to super-science.
Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?
Well, no. Baldur debunks that in a post called Now I’m disappointed.
(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)
Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.
In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.
Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.
Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.
I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.
Michelle also weighs in, pointing out the flaw in Robin’s thinking:
AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.
LLMs are not this.
In other words, we’ve got a language collision:
We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.
This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.
There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:
The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.
You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.
Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:
When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.
Boom!
Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!
You know what? I could quote every single line. Just go read the whole thing. Please.
Being “in tech” in 2025 is depressing, and if I’m going to stick around, I need to remember why I’m here.
This. A million times, this.
I urge you to read what Miriam has written here. She has articulated everything I’ve been feeling.
I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?
AI has the same problem that I saw ten years ago at IBM. And remember that IBM has been at this AI game for a very long time. Much longer than OpenAI or any of the new kids on the block. All of the shit we’re seeing today? Anyone who worked on or near Watson saw or experienced the same problems long ago.
Heydon’s latest video is particularly good:
All of my videos are black and white, but especially this one.
Every UI control you roll yourself is a liability. You have to design it, test it, ship it, document it, debug it, maintain it — the list goes on.
It makes you wonder why we insist on rolling (or styling) our own common UI controls so often. Perhaps we’d be better off asking: What are the fewest components we have to build to deliver value to our users?