Tags: ethics

Thursday, April 17th, 2025

I Hate Wasting Time on Identifying AI Slop • Buttondown

It’s an annoying cognitive task: detecting weird photo artifacts, bizarre movement in videos, impossible animals and body horror, and reading through reams of anodyne text to determine if the person who prompted the synthetic media machine cared enough to dedicate time and energy to the task of communicating to their audience.

I hate that this is the bleak future which venture capitalists and AI boosters have gleefully laid out for us, that they consider this to be a “democratizing” technology in any real sense of the word. Far from strengthening democracy, these are technologies more apt at propping up scam capitalism and multi-level marketing schemes. I would like my time and mental space back.

Tuesday, April 8th, 2025

‘An Overwhelmingly Negative And Demoralizing Force’: What It’s Like Working For A Company That’s Forcing AI On Its Developers - Aftermath

Grim reading from the games industry, especially if you work at Shopify where the CEbrO has just mandated that you have to use this shite.

Monday, April 7th, 2025

Denial

The Wikimedia Foundation, stewards of the finest projects on the web, have written about the hammering their servers are taking from the scraping bots that feed large language models.

Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs.

Drew DeVault puts it more bluntly, saying Please stop externalizing your costs directly into my face:

Over the past few months, instead of working on our priorities at SourceHut, I have spent anywhere from 20-100% of my time in any given week mitigating hyper-aggressive LLM crawlers at scale.

And no, a robots.txt file doesn’t help.

If you think these crawlers respect robots.txt then you are several assumptions of good faith removed from reality. These bots crawl everything they can find, robots.txt be damned.
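
For what it’s worth, this is roughly what opting out looks like in robots.txt. The user-agent tokens below are the ones these companies document for their crawlers (treat the exact list as illustrative and check the current documentation), and the crucial point is that it’s a polite request rather than an enforcement mechanism:

    # robots.txt: a minimal sketch of asking LLM training crawlers to stay away
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: CCBot
    Disallow: /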

Free and open source projects are particularly vulnerable. FOSS infrastructure is under attack by AI companies:

LLM scrapers are taking down FOSS projects’ infrastructure, and it’s getting worse.

You try to do the right thing by making knowledge and tools freely available. This is how you get repaid. AI bots are destroying Open Access:

There’s a war going on on the Internet. AI companies with billions to burn are hard at work destroying the websites of libraries, archives, non-profit organizations, and scholarly publishers, anyone who is working to make quality information universally available on the internet.

My own experience with The Session bears this out.

Ars Technica has a piece on this: Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries.

So does MIT Technology Review: AI crawler wars threaten to make the web more closed for everyone.

When we talk about the unfair practices and harm done by training large language models, we usually talk about it in the past tense: how they were trained on other people’s creative work without permission. But this is an ongoing problem that’s just getting worse.

The worst of the internet is continuously attacking the best of the internet. This is a distributed denial of service attack on the good parts of the World Wide Web.

If you’re using the products powered by these attacks, you’re part of the problem. Don’t pretend it’s cute to ask ChatGPT for something. Don’t pretend it’s somehow being technologically open-minded to continuously search for nails to hit with the latest “AI” hammers.

If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.

Wednesday, April 2nd, 2025

Poisoning Well: HeydonWorks

Heydon is employing a different tactic to what I’m doing to sabotage large language model crawlers. These bots don’t respect the nofollow rel value …so now they pay the price.

Raising my own middle finger to LLM manufacturers will achieve little on its own. If doing this even works at all. But if lots of writers put something similar in place, I wonder what the effect would be. Maybe we would start seeing more—and more obvious—gibberish emerging in generative AI output. Perhaps LLM owners would start to think twice about disrespecting the nofollow protocol.
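
To sketch the general idea (and this is just a sketch, not Heydon’s actual implementation; the /poison/ path is a made-up example): you publish a link marked with rel="nofollow" that points at a decoy URL serving deliberately mangled text. Crawlers that honour nofollow never fetch it; scrapers that ignore it follow the link and ingest the gibberish.

    <!-- Hypothetical decoy link: compliant crawlers skip it, while LLM -->
    <!-- scrapers that ignore nofollow follow it and slurp up nonsense. -->
    <a href="/poison/this-article" rel="nofollow">Alternative version</a>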

Sunday, March 30th, 2025

Friday, March 28th, 2025

Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries - Ars Technica

As it currently stands, both the rapid growth of AI-generated content overwhelming online spaces and aggressive web-crawling practices by AI firms threaten the sustainability of essential online resources. The current approach taken by some large AI companies—extracting vast amounts of data from open-source projects without clear consent or compensation—risks severely damaging the very digital ecosystem on which these AI models depend.

Wednesday, March 26th, 2025

Go To Hellman: AI bots are destroying Open Access

AI companies with billions to burn are hard at work destroying the websites of libraries, archives, non-profit organizations, and scholarly publishers, anyone who is working to make quality information universally available on the internet.

Friday, March 21st, 2025

FOSS infrastructure is under attack by AI companies

More on how large language bots are DDOSing the web:

LLM scrapers are taking down FOSS projects’ infrastructure, and it’s getting worse.

My Web Values: Why I Quit X and Feed the Fediverse Instead | Cybercultural

  1. Support open source software
  2. Support open web platform technology
  3. Distribution on the web should never be throttled
  4. External links should be encouraged, not de-emphasized

Thursday, March 20th, 2025

Please stop externalizing your costs directly into my face

Over the past few months, instead of working on our priorities at SourceHut, I have spent anywhere from 20-100% of my time in any given week mitigating hyper-aggressive LLM crawlers at scale.

This matches my experience with The Session. In fact, while I had this article open in a tab, I had to go deal with a tsunami of large language model bots. It’s really fucking depressing.

Please stop legitimizing LLMs or AI image generators or GitHub Copilot or any of this garbage. I am begging you to stop using them, stop talking about them, stop making new ones, just stop. If blasting CO2 into the air and ruining all of our freshwater and traumatizing cheap laborers and making every sysadmin you know miserable and ripping off code and books and art at scale and ruining our fucking democracy isn’t enough for you to leave this shit alone, what is?

Tuesday, March 18th, 2025

Design processing

Dan wrote an interesting post with a somewhat clickbaity title, This Competition Exposed How AI is Reshaping Design:

I watched two designers go head-to-head in a high-speed battle to create the best landing page in 45 minutes. One was a seasoned pro. The other was a non-designer using AI.

If you can ignore the title (and the fact that Dan still actively posts on Twitter; something I find very hard to ignore), then there’s a really thoughtful analysis in there.

It’s less about one platform or tool vs. another more than it is a commentary on how design happens, and whether or not that’s changing in a significant way.

In particular, there’s a very revealing graph that shows the pros and cons of both approaches.

There’s no doubt about it, using a generative large language model helped a non-designer to get past the blank page. But it was less useful in subsequent iterations that rely on decision-making:

I’ve said it before and I’ll say it again: design is deciding. The best designers are the best deciders.

Dan finishes by saying that what he’d really like to see is an experienced designer/decider using these tools to turbo-boost their process:

AI raises the floor for non-designers. But can it raise the ceiling for designers?

Meanwhile, Matt has been writing about Vibe-designing. Matt is an experienced designer, but he’s not experienced with Figma. He’s found that he can work around that using a large language model:

Where in the past 30 years I might have had to cajole a more technically adept colleague into making something through sketches, gesticulating and making sound effects – I open up a Claude window and start what-iffing.

The “vibe” part of the equation often defaults to the mean, which is not a surprise when you think about what you’re asking to help is a staggeringly-massive machine for producing generally-unsurprising satisfactory answers quickly. So, you look at the output as a basis for the next sketch, and the next sketch and quickly, together, you move to something more novel as a result.

Interesting! Just as Dan insisted, the important work is making the decision and moving on to the next stage. If the actual outputs at each stage are mediocre, that seems to be okay, as long as they’re just good enough to inform a go/no-go decision.

This certainly seems more centaur-like than the usual boring uses of large language models to simply do what people are already doing.

Rich gets at something similar when he talks about using large language models for prototyping, where it’s okay if the code is kind of shitty:

If all you need is crappy code to try out a concept or a solution, then an LLM might well enable you (the designer) to do that.

Mind you, even if you do end up finding useful and appropriate ways to use these tools, you’re still using a tool built on exploitation and unfairness:

It’s hard (and reckless) to ignore the heartfelt and cogent perspective laid out by Miriam on the role of AI companies in the current geopolitical crisis:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Another uncalled-for blog post about the ethics of using AI | Clagnut by Richard Rutter

This is a really thoughtful piece by Rich, who’s got conflicted feelings about large language models in the design process. I suspect a lot of people can relate to this.

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

Monday, March 10th, 2025

Twittotage

I left Twitter in 2022. With every day that has passed since then, that decision has proven to be correct.

(I’m honestly shocked that some people I know still have active Twitter accounts. At this point there is no justification for giving your support to a place that’s literally run by a nazi.)

I also used to have some Twitter bots. There were Twitter accounts for my blog and for my links. A simple If-This-Then-That recipe would poll my RSS feeds and then post an update whenever there was a new item.

I had something similar going for The Session. Its Twitter bot has been replaced with automated accounts on Mastodon and Bluesky (I couldn’t use IFTTT directly to post to Bluesky from RSS, but I was able to set up Buffer to do the job).

I figured The Session’s Twitter account would probably just stop working at some point, but it seems like it’s still going.

Hah! I spoke too soon. I just decided to check that URL and nothing is loading. Now, that may just be a temporary glitch because Alan Musk has decided to switch off a server or something. Or it might be that the account has been cancelled because of how I modified its output.

I’ve altered the IFTTT recipe so that whenever there’s a new item in an RSS feed, the update is posted to Twitter along with a message like “Please use Bluesky or Mastodon instead of Twitter” or “Please stop using Twitter/X”, or “Get off Twitter—please. It’s a cesspit” or “If you’re still on Twitter, you’re supporting a fascist.”
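
The logic of that recipe is trivial. Something like this sketch captures it (this is just an illustrative equivalent in Python, not what IFTTT actually runs; the feed URL is a placeholder and it uses the feedparser library):

    # An illustrative equivalent of the modified IFTTT recipe, not the real thing.
    # Requires the feedparser package (pip install feedparser).
    import random
    import feedparser

    FEED_URL = "https://example.com/feed.rss"  # placeholder feed URL
    MESSAGES = [
        "Please use Bluesky or Mastodon instead of Twitter.",
        "Please stop using Twitter/X.",
        "Get off Twitter—please. It’s a cesspit.",
        "If you’re still on Twitter, you’re supporting a fascist.",
    ]

    feed = feedparser.parse(FEED_URL)
    latest = feed.entries[0]  # the newest item in the feed

    # Compose the update: the new item plus a randomly chosen nudge to leave.
    status = f"{latest.title} {latest.link}\n\n{random.choice(MESSAGES)}"
    print(status)  # in the real recipe, this is what gets posted to Twitter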

That’s a start but I need to think about how I can get the bot to do as much damage as possible before it’s destroyed.

Friday, February 14th, 2025

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

Tech continues to be political | Miriam Eric Suzanne

Being “in tech” in 2025 is depressing, and if I’m going to stick around, I need to remember why I’m here.

This. A million times, this.

I urge you to read what Miriam has written here. She has articulated everything I’ve been feeling.

I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?

Wednesday, February 12th, 2025

Is it okay?

Robin takes a fair and balanced look at the ethics of using large language models.

Tuesday, January 28th, 2025

Friday, January 17th, 2025

Changing

It always annoys me when a politician is accused of “flip-flopping” when they change their mind on something. Instead of admiring someone for being willing to re-examine previously-held beliefs, we lambast them. We admire conviction, even though that’s a trait that has been at the root of history’s worst atrocities.

When you look at the history of human progress, some of our greatest advances were made by people willing to question their beliefs. Prioritising data over opinion is what underpins the scientific method.

But I get it. It can be very uncomfortable to change your mind. There’s inevitably going to be some psychological resistance, a kind of inertia of opinion that favours the sunk cost of all the time you’ve spent believing something.

I was thinking back to times when I’ve changed my opinion on something after being confronted with new evidence.

In my younger days, I was staunchly anti-nuclear power. It didn’t help that, back then, nuclear power and nuclear weapons were conceptually linked in the public discourse. In the intervening years I’ve come to believe that nuclear power is far less destructive than fossil fuels. There are still a lot of issues—in terms of cost and time—which make nuclear less attractive than solar or wind, but I honestly can’t reconcile someone claiming to be an environmentalist while simultaneously opposing nuclear power. The data just doesn’t support that conclusion.

Similarly, I remember in the early 2000s being opposed to genetically-modified crops. But the more I looked into the facts, the less I found—other than vibes—to bolster that opposition. And yet I know many people who’ve maintained their opposition, often the same people who point to the scientific evidence when it comes to climate change. It’s a strange kind of cognitive dissonance that would allow for that kind of cherry-picking.

There are other situations where I’ve gone more in the other direction—initially positive, later negative. Google’s AMP project is one example. It sounded okay to me at first. But as I got into the details, its fundamental unfairness couldn’t be ignored.

I was fairly neutral on blockchains at first, at least from a technological perspective. There was even some initial promise of distributed data preservation. But over time my opinion went down, down, down.

Bitcoin, with its proof-of-work idiocy, is the poster-child of everything wrong with the reality of blockchains. The astoundingly wasteful energy consumption is just staggeringly pointless. Over time, any sufficiently wasteful project becomes indistinguishable from evil.

Speaking of energy usage…

My feelings about large language models have been dominated by two massive elephants in the room. One is the completely unethical way that the training data has been acquired (by ripping off the work of people who never gave their permission). The other is the profligate energy usage in not just training these models, but also running queries on the network.

My opinion on the provenance of the training data hasn’t changed. If anything, it’s hardened. I want us to fight back against this unethical harvesting by poisoning the well that the training data is drawing from.

But my opinion on the energy usage might just be swaying a little.

Michael Liebreich published an in-depth piece for Bloomberg last month called Generative AI – The Power and the Glory. He doesn’t sugar-coat the problems with current and future levels of power consumption for large language models, but he also doesn’t paint a completely bleak picture.

Effectively there’s a yet-to-be-decided battle between Koomey’s law and the Jevons paradox. Time will tell which way this will go.

The whole article is well worth a read. But what really gave me pause was a recent piece by Hannah Ritchie asking What’s the impact of artificial intelligence on energy demand?

When Hannah Ritchie speaks, I listen. And I’m well aware of the irony there. That’s a classic argument from authority, when the whole point of Hannah Ritchie’s work is that it’s the data that matters.

In any case, she does an excellent job of putting my current worries into a historical context, as well as laying out some potential futures.

Don’t get me wrong, the energy demands of large language models are enormous and are only going to increase, but we may well see some compensatory efficiencies.

Personally, I’d just like to see these tools charge a fair price for their usage. Right now they’re being subsidised by venture capital. If people actually had to pay out of pocket for the energy used per query, we’d get a much better idea of how valuable these tools actually are to people.

Instead we’re seeing these tools being crammed into existing products regardless of whether anybody actually wants them (and in my anecdotal experience, most people resent this being forced on them).

Still, I thought it was worth making a note of how my opinion on the energy usage of large language models is open to change.

But I still won’t use one that’s been trained on other people’s work without their permission.

Thursday, January 16th, 2025

Conference line-ups

When I was looking back at 2024, I mentioned that I didn’t give a single conference talk (though I did host three conferences—Patterns Day, CSS Day, and UX London).

I almost spoke at a conference though. I was all set to speak at an event in the Netherlands. But then the line-up was announced and I was kind of shocked at the lack of representation. The schedule was dominated by white dudes like me. There were just four women in a line-up of 30 speakers.

When I raised my concerns, I was told:

We did receive a lot of talks, but almost no women because there are almost no women in this kind of jobs.

Yikes! I withdrew my participation.

I wish I could say that it was a one-off occurrence, but it just happened again.

I was looking forward to speaking at DevDays Europe. I’ve never been to Vilnius but I’ve heard it’s lovely.

Now, to be fair, I don’t think the line-up is finalised, but it’s not looking good.

Once again, I raised my concerns. I was told:

Unfortunately, we do not get a lot of applications from women and have to work with what we have.

Even though I knew I was just proving Brandolini’s law, I tried to point out the problems with that attitude (while also explaining that I’ve curated many conference line-ups myself):

It’s not really conference curation if you rely purely on whoever happens to submit a proposal. Surely you must accept some responsibility for ensuring a good diverse line-up?

The response began with:

I agree that it’s important to address the lack of diversity.

…but then went on:

I just wanted to share that the developer field as a whole tends to be male-dominated, not just among speakers but also attendees.

At this point, I’m face-palming. I tried pointing out that there might just be a connection between the make-up of the attendees and the make-up of the speaker line-up. Heck, if I feel uncomfortable attending such a homogeneous conference, imagine what a woman developer would think!

Then they dropped the real clanger:

While we always aim for a diverse line-up, our main focus has been on ensuring high-quality presentations and providing the best experience for our audience.

Double-yikes! I tried to remain calm in my response. I asked them to stop and think about what they were implying. They’re literally setting up a dichotomy between having a diverse line-up and having a good line-up. Like it’s inconceivable you could have both. As though one must come at the expense of the other. Just think about the deeply embedded bias that would enable that kind of worldview.

Needless to say, I won’t be speaking at that event.

This is depressing. It feels like we’re backsliding to what conferences were like 15 years ago.

I can’t help but spot the commonalities between the offending events. Both of them have multiple tracks. Both of them have a policy of not paying their speakers. Both of them seem to think that opening up a form for people to submit proposals counts as curation. It doesn’t.

Don’t get me wrong. Having a call for proposals is great …as long as it’s part of an overall curation strategy that actually values diversity.

You can submit a proposal to speak at FFconf, for example. But Remy doesn’t limit his options to what people submit. He puts a lot of work into creating a superb line-up that is always diverse, and always excellent.

By the way, you can also submit a proposal for UX London. I’ve had lots of submissions so far, but again, I’m not going to limit my pool of potential speakers to just the people who know about that application form. That would be a classic example of the streetlight effect:

The streetlight effect, or the drunkard’s search principle, is a type of observational bias that occurs when people only search for something where it is easiest to look.

It’s quite depressing to see this kind of minimal-viable conference curation result in such heavily skewed line-ups. Withdrawing from speaking at those events is literally the least I can do.

I’m with Karolina:

What I’m looking for: at least 40% of speakers have to be women speaking on the subject of their expertise instead of being invited to present for the sake of adjusting the conference quotas. I want to see people of colour too. In an ideal scenario, I’d like to see as many gender identities, ethnical backgrounds, ages and races as possible.

Thursday, January 9th, 2025

Dark Patterns Detective

Deceptive design meets gamification in this explanatory puzzle game (though I wish it weren’t using the problematic label “dark patterns”).

I created this interactive experience to explore the intersection of design ethics and human psychology, helping us all make more informed choices while browsing the web.