Trains are offices | hidde.blog
This tracks (ahem) with my experience of coding on trains.
Hidde lists the potentially flaky connectivity as a downside, but for many kinds of deep work I’d say it’s very much a feature, not a bug.
I went to the UX Brighton conference yesterday.
The quality of the presentations was really good this year, probably the best yet. Usually there are one or two stand-out speakers (like Tom Kerwin last year), but this year, the standard felt very high to me.
But…
The theme of the conference was UX and “AI”, and I’ve never been more disappointed by what wasn’t said at a conference.
Not a single speaker addressed where the training data for current large language models comes from (it comes from scraping other people’s copyrighted creative works).
Not a single speaker addressed the energy requirements for current large language models (the requirements are absolutely mahoosive—not just for the training, but for each and every query).
My charitable reading of the situation yesterday was that every speaker assumed that someone else would cover those issues.
The less charitable reading is that this was a deliberate decision.
Whenever the issue of ethics came up, it was only ever in relation to how we might use these tools: considering user needs, being transparent, all that good stuff. But never once did the question arise of whether it’s ethical to even use these tools.
In fact, the message was often the opposite: words like “responsibility” and “duty” came up, but only in the admonition that UX designers have a responsibility and duty to use these tools! And if that carrot didn’t work, there’s always the stick of scaring you into using these tools for fear of being left behind and having a machine replace you.
I was left feeling somewhat depressed about the deliberately narrow focus. Maggie’s talk was the only one that dealt with any externalities, looking at how the firehose of slop is blasting away at society. But again, the focus was only ever on how these tools are used or abused; nobody addressed the possibility of deliberately choosing not to use them.
If audience members weren’t yet using generative tools in their daily work, the assumption was that they were lagging behind and it was only a matter of time before they’d get on board the hype train. There was no room for the idea that someone might examine the roots of these tools and make a conscious choice not to fund their development.
There’s a quote by Finnish architect Eliel Saarinen that UX designers like repeating:
Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.
But none of the speakers at UX Brighton chose to examine the larger context of the tools they were encouraging us to use.
One speaker told us “Be curious!”, but clearly that curiosity should not extend to the foundations of the tools themselves. Ignore what’s behind the curtain. Instead look at all the cool stuff we can do now. Don’t worry about the fact that everything you do with these tools is built on a bedrock of exploitation and environmental harm. We should instead blithely build a new generation of user interfaces on the burial ground of human culture.
Whenever I get into a discussion about these issues, it always seems to come back ’round to whether these tools are actually any good or not. People point to the genuinely useful tasks they can accomplish. But that’s not my issue. There are absolutely smart and efficient ways to use large language models—in some situations, it’s like suddenly having a superpower. But as Molly White puts it:
The benefits, though extant, seem to pale in comparison to the costs.
There are no ethical uses of current large language models.
And if you believe that the ethical issues will somehow be ironed out in future iterations, then that’s all the more reason to stop using the current crop of exploitative large language models.
Anyway, like I said, all the talks at UX Brighton were very good. But I wish just one of them had addressed the underlying questions that any good UX designer should ask: “Where did this data come from? What are the second-order effects of deploying this technology?”
Having a talk on those topics would’ve been nice, but I would’ve settled for having five minutes of one talk, or even one minute. But there was nothing.
There’s one possible explanation for this glaring absence that’s quite depressing to consider. It may be that these topics weren’t covered because there’s an assumption that everybody already knows about them, and frankly, doesn’t care.
To use an outdated movie reference, imagine a raving Charlton Heston shouting that “Soylent Green is people!”, only to be met with indifference. “Everyone knows Soylent Green is people. So what?”
This seems to be the attitude of many of my fellow nerds—designers and developers—when presented with tools based on large language models that produce dubious outputs from the unethical harvesting of other people’s work and require staggering amounts of energy to run:
This is the future! I need to start using these tools now, even if they’re flawed, because otherwise I’ll be left behind. They’ll only get better. It’s inevitable.
Whereas this seems to be the attitude of those same designers and developers when faced with stable browser features that can be safely used today without frameworks or libraries:
I’m sceptical.
A solid, detailed, in-depth report.
The sheer amount of resources needed to support the current and forecast demand from AI is colossal and unprecedented.
I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).
I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.
And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”
First of all, there’s no evidence to back that up.
If anything, as the well gets poisoned by their own outputs, large language models may well end up eating their own slop and getting their own version of mad cow disease. So this might be as good as they’re ever going to get.
And when it comes to energy usage, all the signals from NVIDIA, OpenAI, and others are that power usage is going to increase, not decrease.
But secondly, what the hell kind of logic is that?
It’s like saying “It’s okay for me to drive my gas-guzzling SUV now, because in the future I’ll be driving an electric vehicle.”
The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.
You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.
I suspect that most people know full well that the “they’ll get better!” defence doesn’t hold water. But you can convince yourself of anything when everyone around you is telling you that this is the future, baby, and you’d better get on board or you’ll be left behind.
Baldur reminds us that this is how people talked about asbestos:
Every time you had an industry campaign against an asbestos ban, they used the same rhetoric. They focused on the potential benefits – cheaper spare parts for cars, cheaper water purification – and doing so implicitly assumed that deaths and destroyed lives were a low price to pay.
This is the same strategy that’s being used by those who today talk about finding productive uses for generative models without even so much as gesturing towards mitigating or preventing the societal or environmental harms.
It reminds me of the classic Ursula Le Guin short story The Ones Who Walk Away from Omelas, which depicts:
…the utopian city of Omelas, whose prosperity depends on the perpetual misery of a single child.
Once citizens are old enough to know the truth, most, though initially shocked and disgusted, ultimately acquiesce to this one injustice that secures the happiness of the rest of the city.
It turns out that most people will blithely accept injustice and suffering not for a utopia, but just for some bland hallucinated slop.
Don’t get me wrong: I’m not saying large language models are without their uses. I love seeing what Simon and Matt are doing when it comes to coding. And large language models can be great for transforming content from one format to another, like transcribing speech into text. But the balance sheet just doesn’t add up.
As Molly White put it in AI isn’t useless. But is it worth it?:
Even as someone who has used them and found them helpful, it’s remarkable to see the gap between what they can do and what their promoters promise they will someday be able to do. The benefits, though extant, seem to pale in comparison to the costs.
Despite all of this hype, all of this media attention, all of this incredible investment, the supposed “innovations” don’t even seem capable of replacing the jobs that they’re meant to — not that I think they should, just that I’m tired of being told that this future is inevitable.
The reality is that generative AI isn’t good at replacing jobs, but commoditizing distinct acts of labor, and, in the process, the early creative jobs that help people build portfolios to advance in their industries.
One of the fundamental misunderstandings of the bosses replacing these workers with generative AI is that you are not just asking for a thing, but outsourcing the risk and responsibility.
Generative AI costs far too much, isn’t getting cheaper, uses too much power, and doesn’t do enough to justify its existence.
My colleague Chris has written a terrific post over on the Clearleft blog: Is the planet the missing member of your project team?
Rather than hand-wringing and finger-wagging, it gets down to some practical steps that you—we—can take on every project.
Chris finishes by asking:
Let me know how you design with the environment in mind. What practical advice would you suggest?
Well, here’s something that I keep coming up against…
Chris shows that the environment can be part of project management, specifically the RACI methodology:
We list who is responsible, accountable, consulted, and informed within the project. It’s a simple exercise but the clarity is useful for identifying what expertise and input we should seek from the named individuals.
Having the planet be a proactive partner in your project ensures its needs are considered.
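To make that a little more tangible, here’s a rough sketch of what a RACI list might look like as plain data, with the planet written in as a consulted party. The roles and concerns below are hypothetical examples of mine, not taken from Chris’s post:

```typescript
// A hypothetical sketch of RACI assignments as plain data, with the planet
// listed as a consulted party. Names and concerns are made up for illustration.
type RaciRole = "responsible" | "accountable" | "consulted" | "informed";

interface RaciEntry {
  concern: string;
  assignments: Partial<Record<RaciRole, string[]>>;
}

const raci: RaciEntry[] = [
  {
    concern: "Page weight budget",
    assignments: {
      responsible: ["Front-end developer"],
      accountable: ["Tech lead"],
      consulted: ["Designer", "The planet"],
      informed: ["Client"],
    },
  },
  {
    concern: "Hosting and energy sources",
    assignments: {
      responsible: ["Ops"],
      accountable: ["Project lead"],
      consulted: ["The planet"],
      informed: ["Whole team"],
    },
  },
];

// Flag any concern where nobody has been asked to speak for the environment.
for (const entry of raci) {
  if (!(entry.assignments.consulted ?? []).includes("The planet")) {
    console.warn(`"${entry.concern}" has no environmental input.`);
  }
}
```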
Whenever responsibilities are being assigned there are some things that inevitably fall through the cracks. One I’ve seen over and over again is responsibility for third-party scripts.
On the face of it this seems like another responsibility for developers. We’re talking about code here, right?
But in my experience it is never the developers adding “beacons” and other third-party embedded scripts.
Chris rightly points out:
Development decisions, visual design choices, content approach, and product strategy all contribute to the environmental impact of your website.
But what about sales and marketing? Often they’re the ones who’ll drop in a third-party script to track user journeys. That’s understandable. That’s kind of their job.
Dropping in one line of JavaScript seems like a victimless crime. It’s just one small script, right? But JavaScript can import more JavaScript. Tools like Request Map Generator can show just how much destruction third-party JavaScript can wreak:
You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.
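To see why those diagrams balloon the way they do, here’s a minimal sketch of the underlying pattern: one embedded tag that, once it runs, is free to inject further scripts of its own choosing. The vendor domains and filenames are made up for illustration:

```typescript
// A minimal sketch of how one third-party tag fans out into many requests.
// The vendor domains and script names here are hypothetical.
function injectScript(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// The one "harmless" line that gets dropped into the page…
injectScript("https://tags.example-vendor.com/beacon.js");

// …but once beacon.js executes, it can do exactly the same thing:
//   injectScript("https://tags.example-vendor.com/session-replay.js");
//   injectScript("https://ads.example-partner.com/sync.js");
// Each of those can inject yet more scripts that nobody on the team reviewed.
```

None of that cascade is visible in the page’s own codebase; it only shows up when you map the requests.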
Just to be clear, the people adding third-party scripts to websites usually aren’t doing so maliciously. They often don’t realise the negative effect the scripts will have on performance and the environment.
As is so often the case, this isn’t a technical problem. At root it’s about understanding people’s needs (like “I need a way to see what pages are converting!”) and finding a way to meet those needs without negatively impacting the planet. A good open-minded discussion can go a long way.
So I echo Chris’s call to think about environmental impacts from the very start of a project. Establish early on who will have the ability to add third-party scripts to the site. Do all of those people understand the responsibility that gives them?
I saw this lack of foresight in action on a project recently. The front-end development was going really well and the site was going to be exceptionally performant: green Lighthouse scores across the board. But when the site went live it had tracking scripts. That meant that users needed to consent to being tracked. That meant adding another third-party script to generate a consent banner. It completely tanked the Lighthouse scores.
I’m sure the people who added the tracking scripts and consent banners thought they had no choice. But there are alternatives. There are ways to get the data you need without the intrusive surveillance and performance-wrecking JavaScript.
The problem is that it’s not the norm. “Everyone else is doing it” was the justification for Flash intros two decades ago and it’s the justification for enshittification via third-party scripts now.
It doesn’t have to be this way.
Even the smallest of business websites now seems to have cookie popups simultaneously telling us they ‘value your privacy’ while harvesting data about who we are, where we are, what we’re looking for and what we were doing online before we landed there.
Tracking scripts have become so pervasive that they have effectively become an industry standard, and most businesses deploy them not only without question, but without consideration of what it means for customer privacy.
What we’re seeing is FOMO-driven dumb money thrown at technology by people who have no hope of understanding it. Just because everybody else is and because the GPTs and image generators have cool demos.
What’s going to happen, I’m pretty sure, is that AI/ML will, inevitably, disappoint; in the financial sense I mean, probably doing some useful things, maybe even a lot, but not generating the kind of profit explosions that you’d need to justify the bubble. So it’ll pop, and my bet is it takes a bunch of the finance world with it.
This is mostly about the intersection of finance, hype, and technology, but Tim mentions something that I’ve also been saying:
I’m super impressed by something nobody else seems to talk about: Prompt parsing.
Maybe it’s because I spent formative years playing text-only adventure games, but I am way more impressed by the way generative tools do natural language parsing than I am by their output.
Beautiful writing from Rebecca Solnit, that encapsulates what I’ve been trying to say:
You want tomorrow to be different than today, and it may seem the same, or worse, but next year will be different than this one, because those tiny increments added up. The tree today looks a lot like the tree yesterday, and so does the baby.
The opening of my talk Of Time And The Web deals with our collective negativity bias. The general consensus is that the world has become worse. Crime. Inequality. Poverty. Pollution. Most people think these things are heading in the wrong direction.
But they’re not. Every year the world gets better and better. But it’s happening gradually. Like I said:
If something changes gradually, we don’t notice it. We literally exhibit something called change blindness.
But we are hard-wired to notice sudden changes. We pay attention to moments of change.
“Where were you when JFK was assassinated?”
“Where were you on September 11th?”
Nobody is ever going to ask “where were you when smallpox was eradicated?”
I know it might seem obscene to suggest that the world is getting better given the horrific situation in Gaza and the ongoing quagmire in Ukraine. But the very fact that the world is united in outrage is testament to how far we’ve come.
I try to balance my news intake with more positive stories of progress. Reasons to Be Cheerful is one good source:
We tell stories that reveal that there are, in fact, a surprising number of reasons to feel cheerful. Many of these reasons come in the form of smart, proven, replicable solutions to the world’s most pressing problems. Through sharp reporting, our stories balance a sense of healthy optimism with journalistic rigor, and find cause for hope. We are part magazine, part therapy session, part blueprint for a better world.
Most news outlets don’t operate that way. If it bleeds, it leads.
Even if you’re not actively tracking positive news on a daily or weekly basis, the end of the year feels like a suitable time to step back and take note of our collective progress.
Future Crunch has 66 Good News Stories You Didn’t Hear About in 2023:
The American journalist Krista Tippett says that we’re all fluent enough by now in the language of catastrophe and dysfunction, and what’s needed are more of what she calls ‘generative narratives.’ This year, we found over 2,000 of those kinds of stories, and shared them with tens of thousands of readers in a weekly email. Not dog-on-a-surfboard, baby-survives-a-tornado stories, but genuine, world changing stuff about how millions of lives are improving, about human rights victories, diseases being eliminated, falling emissions, how vast swathes of our planet are being protected and how entire species have been saved.
The Progress Network reports that something good happened every week of 2023:
Despite the wars, emergencies, and crises of 2023, the year was full of substantive good news.
Positive.news has its own round-up. What went right in 2023: the top 25 good news stories of the year:
The ‘golden age of medicine’ arrived, animals came back from the brink, the renewables juggernaut gathered pace, climate reparations became reality and scientists showed how to slow ageing, plus more good news.
On the topic of climate change, the BBC has nine breakthroughs for climate and nature in 2023 you may have missed:
Record-setting spending on clean energy in the US. A clean energy milestone in the world’s power sector. A surge in lawsuits against polluters. A treaty for the oceans 40 years in the making.
This year has seen some remarkable steps forward in tackling the nature and climate crises.
That’s the kind of reporting we need more of. As Kate Marvel wrote in the New York Times, “I’m a Climate Scientist. I’m Not Screaming Into the Void Anymore.”:
In the last decade, the cost of wind energy has declined by 70 percent and solar has declined 90 percent. Renewables now make up 80 percent of new electricity generation capacity. Our country’s greenhouse gas emissions are falling, even as our G.D.P. and population grow.
There’s a pernicious myth that a crisis mindset is necessary to drive change. I think that might be true for short-term emergencies, but it’s counter-productive for long-term problems.
Speaking for myself, I am far more likely to take action if I can see that progress has already been made, and that my actions won’t be pointless. Constant doomerism isn’t just lazy, it’s demotivational. See my excoriating words when reviewing Paolo Bacigalupi’s The Water Knife:
Instead of asking what the future might actually be like, it instead asks “what’s the absolute worst that could happen?” Frankly, it’s a cop-out.
As we head into 2024 it’s worth taking stock of the big-picture improvements we’ve collectively made so that we can continue the work.
If the news headlines continue to get you down, take some time to browse around Our World In Data.
And if you find yourself instinctively rejecting all these reports of progress, ask yourself why that might be. As I said in my talk:
We have this phrase: “sounds too good to be true.”
But we don’t have this phrase: “sounds too bad to be true.”
Humans are allergic to change. And, as Jeremy impressively demonstrated, we tend to overlook the changes that happen more gradually. We want the Big Bang, the sudden change, the headline that reads, “successful nuclear fusion solves climate change for good.” But that’s (usually) not how change works. Change often happens gradually, first very slowly, and then, once it reaches a certain threshold, it can happen overnight.
Earlier this month, Jeremy Keith posed the question: “How green is my server?”. As Jeremy notes, it’s surprisingly hard to get that information! So how do you ensure that you’re hosting your website on a green server?
The Session does very well in terms of performance. You can see the data from the Chrome UX Report (CRUX).
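If you’d rather pull those numbers programmatically than browse a dashboard, the Chrome UX Report API will return the field data for an origin. Here’s a minimal sketch, assuming you’ve generated your own API key; the placeholder key and the deliberately loose response handling are mine, not part of CrUX:

```typescript
// A minimal sketch of querying the Chrome UX Report (CrUX) API for an origin.
// Assumes you have generated an API key in Google Cloud; the value below is a placeholder.
const API_KEY = "YOUR_CRUX_API_KEY";

async function cruxMetrics(origin: string): Promise<void> {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor: "PHONE" }),
    },
  );
  if (!response.ok) {
    throw new Error(`CrUX request failed: ${response.status}`);
  }
  const data = await response.json();
  // Each metric reports a 75th-percentile value from real-world visits.
  for (const [metric, details] of Object.entries<any>(data.record.metrics)) {
    console.log(metric, details.percentiles?.p75);
  }
}

cruxMetrics("https://thesession.org").catch(console.error);
```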
What’s good for performance is good for the environment. Sure enough, The Session gets a very high score from the website carbon calculator:
Hurrah! This web page achieves a carbon rating of A+
This is cleaner than 99% of all web pages globally
But under the details about hosting it says:
Oh no, it looks like this web page uses bog standard energy
The Session is hosted on DigitalOcean, who tend to be quite tight-lipped about their energy suppliers. Fortunately others have done some sleuthing to figure out which facilities are running on green energy.
One of the locations to get the green thumbs up is the Amsterdam facility housed by Equinix. That’s where The Session is hosted.
I’m glad that I was able to find out that the site is running on 100% renewable energy, but I wish I didn’t need to go searching to find this out. DigitalOcean need to be a lot more transparent about the energy sources for their hosting facilities.
You might think that any individual effort to reduce the web’s environmental impact is a drop in the ocean. But as tech workers, we are in a position of relative power compared to other industries. We build products that might be used by thousands, even millions of users. Any improvements we make have the potential for a vast impact when scaled up to that level.
A good overview from Michelle.
If we’re serious about creating a sustainable future, perhaps we should change this common phrase from “Form follows Function” to “Form – Function – Future”. While form and function are essential considerations, the future, represented by sustainability, should be at the forefront of our design thinking. And actually, if sustainability is truly at the forefront of the way we create new products, then maybe we should revise the phrase even further to “Future – Function – Form.” This revised approach would place our future, represented by sustainability, at the forefront of our design thinking. It would encourage us to first ask ourselves, “What is the most sustainable way to design X?” and then consider how the function of X can be met while ensuring it remains non-harmful to people and the planet.
Web performance is an unalloyed good. No one has ever complained that a website is too fast.
So the benefit is pretty obvious. Users like fast websites. But there are other benefits to web performance. And they don’t all get equal airtime.
A lot of good web performance practices come down to the first half of Postel’s Law: be conservative in what you send. Images, fonts, JavaScript …remove what you don’t need and optimise the hell out of what’s left.
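To make that concrete, here’s a minimal sketch of one such optimisation: don’t ship, parse, or execute a heavy script until the user actually asks for the feature it powers. The module name is hypothetical; on-demand import() is the general pattern:

```typescript
// A minimal sketch of being conservative in what you send:
// the heavy charting module is only fetched, parsed, and executed
// if and when the user opens the report panel.
// "./heavy-charts.js" is a hypothetical module name.
const button = document.querySelector<HTMLButtonElement>("#show-report");

button?.addEventListener(
  "click",
  async () => {
    const { renderCharts } = await import("./heavy-charts.js");
    renderCharts(document.querySelector("#report")!);
  },
  { once: true },
);
```

Fewer bytes shipped, less work for the device, and nothing lost for the people who never open that panel.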
That can translate to savings. If you’re paying for the bandwidth every time a hefty file is downloaded, your monthly bill could get pretty big.
So apart from the indirect business benefits of happy users converting to happy customers, there can be a real nuts’n’bolts bottom-line saving to be made by having a snappy website.
This is related to the cost-savings benefit. If you’re shipping less stuff down the wire, and you’re optimising what you do send, then there’s less energy required.
Whether less energy directly translates to a smaller carbon footprint depends on how the energy is being generated. If your servers are running on 100% renewable energy sources, then reducing the output of your responses won’t reduce your carbon footprint.
But there’s an energy cost at the other end too. Think of all the devices making requests to your server. If you’re making those devices work hard—by downloading, parsing, executing lots of JavaScript, for example—then you’re draining battery life. And you can’t guarantee that the battery will be replenished from renewable energy sources.
That’s why sites like the website carbon calculator have so much crossover with web performance:
From data centres to transmission networks to the billions of connected devices that we hold in our hands, it is all consuming electricity, and in turn producing carbon emissions equal to or greater than the global aviation industry. Yikes!
There comes a point when a slow website isn’t just inconvenient, it’s inaccessible.
I’ve always liked the German phrase for accessible: barrierefrei—free of barriers. With every file you add to a website’s dependencies, you’re adding one more barrier. Eventually the barrier is insurmountable for people with older devices or slower internet connections. If they can no longer access your website, your website is quite literally inaccessible.
I’ve noticed that when it comes to making the argument in favour of better web performance, people often default to the business benefits.
I get it. We’re always being told to speak the language of business. The psychology seems pretty straightforward; if you think that the people you’re trying to convince are mostly concerned with the bottom line, use the language of commerce to change their minds.
But that’s always felt reductive to me.
Sure, those people almost certainly do care about the business. Who doesn’t? But they’re also humans. I feel like if you really want to convince them, speak to their hearts. Show them the bigger picture.
Eliel Saarinen said:
Always design a thing by considering it in its next larger context; a chair in a room, a room in a house, a house in an environment, an environment in a city plan.
I think the same could apply to making the case for web performance. Don’t stop at the obvious benefits. Go wider. Show the big-picture implications.
It’s a popular myth that a Bitcoin’s value is based on nothing, just pulled out of thin air by math. But that’s not true—Bitcoin is a way to commoditize energy consumption without accidentally producing anything useful. Other energy-intensive industries tend to convert energy into useful materials like aluminum or cement. Bitcoin converts electricity into waste heat and records its destruction in the form of numbers, which can then be traded for other numbers but not used to make anything people need or converted back into energy.
Solarpunk and synthetic biology as a two-pronged approach to the future:
Neither synbio nor Solarpunk has all the right answers, but when they are joined in a symbiotic relationship, they become greater than the sum of their parts. If people could express what they needed, and if scientists could champion those desires — then Solarpunk becomes a will and synbio becomes a way.