email from Nathan: 31 (The (Absolute) State of Technology Criticism)

What's up
My experience as an artist lately has been much the same as everyone else's: I apply for a funding opportunity or a residency, wait a few weeks, and then learn that the organisation in question has received a huge volume of applications and turned down mine with a generic email. I wish I could say I'm stuck in a loop, but unfortunately the circle never completes itself; the last part never feeds back into the first. In the end I need to connect the two points myself with imaginary data and take a guess at where my work is lacking. I'm obviously frustrated by this process and losing interest, so I find myself looking to alternative avenues to produce work I care about.
I think a lot about non-commercial or anti-commercial artworks. The art I have the most respect for is a practice which exists without interests, serving no rational purpose outside of its creator's compulsion. At the risk of delving into some superficially right-wing ideas, I care a lot about work which has demanded some sacrificial commitment from its creator. Which sounds ominous, but is simply to say that one often infers more authentic meaning from a small piece of art which a working person has devoted an evening a week to creating at their own cost than from grandiose work which has been brought into being by a trust fund.
This, for me at least, provides the notional rationale to keep churning away at my regular digital design/motion/interaction work, and I've been working on a lot of it lately. Some highlights over the past few months have been a video for Budos Band built in Unreal Engine, this video for Squid's latest, a large-scale video installation at Outernet for Gay Times' pride programming, this Sapling artwork, and finally launching the app I've been developing for EHFM for ages (available on Android and iOS).
In the meantime, I've established the window gallery space in my studio I've been talking about for a while, with an exhibition in April of WIP work from several artists, followed by a really successful exhibition with Daydream Nation throughout May. As I mentioned in my last email, I joined Hybrid Realities Lab for a 9-week programme and used this space to develop the studio into a hybrid virtual gallery, which you can view here, or watch a walkthrough where I discuss the work. These things can only run when I have the time and capacity to pour into them, so there's nothing on display right now while I'm working on a proper online presence for the studio and gallery. Hopefully I'll be back with some new work next month!
Rose and I (finally) released a new Town Centre track, and have been playing a lot of gigs. We'll be playing at Hand of God 002 in Glasgow this Saturday, and Kelburn Garden Party in a couple of weeks. We did a chaotic Radio Buena Vida show last week featuring Rose's nephews. Now this intro has gotten too long! I'll save some reading/watching/listening recommendations for next time.
The (Absolute) State of Technology Criticism
I need to write and I'm not sure what to cover; everything has been too big and too mundane recently: the issues are too massive to discuss with the comprehension I'd like, and at the same time their crushing enormity has become deflating. For much of this newsletter series I've discussed AI and its boundaries, and there are precious few interesting thoughts being published in that field right now. Critics have been ossifying into an ever-stricter booster-versus-doomer dichotomy since roughly six months ago, when formerly credible journalist Casey Newton decided that nuance and criticism were no longer relevant to a discussion of technology, declaring that only two schools of thought exist: "AI is fake and sucks" and "AI is real and dangerous."
If you're new to this, then I'm sorry, but no, that's not a joke, and the quotation marks I've used are unfortunately literal. Newton planted his flag in the latter camp, openly joining the cadre of true believers who emphasise AI's danger and invoke it as evidence of AI's importance. I imagine it might seem obvious to a casual observer that AI could be both real and simultaneously suck, or perhaps even exhibit combinatorial diversity: in some cases fake and/or dangerous, in others real and/or safe. Regardless of any such logical coherence, however, it seems that throughout this year tech criticism has been characterised by an increased entrenchment within these camps rather than a rejection of their false dichotomy.
I'm deeply, deeply bored by this trend. Thoughtful authors and critics who had developed solid (read: salary-paying) Substack followings over the past few years have neatly slotted themselves into their chosen battle lines, posting well-rehearsed rationalisations for their side week in, week out, while leaving little room for curiosity and nuanced critique.
Being generous, I'd surmise that this is in part the inevitable result of a nascent field, once full of opportunity and optimistic potential, maturing into an established industry in which those exciting affordances have vanished in favour of corporate slop and cynical profiteering. I'll discuss in a couple of paragraphs why I think that naivete should be inexcusable, but being less generous, I think it's also the result of the difficult finances of independent publishing and the need to appeal to a certain audience which finds the validation of its fears and anger so deeply gratifying that, as long as you keep writing it, they'll keep buying it. In some ways it's a Twitter-ification of the Substack sphere, where we visit the platform not really to discover new information or to learn, but to massage our atrophying minds with layer upon layer of content which affirms conclusions we have already committed ourselves to.
Popular tech commentariat-cum-journo figure Ed Zitron wrote a blistering takedown of Casey Newton back in January, complete with detailed receipts of his journalistic missteps, and I'd be lying if I told you it isn't a delight to read. Yet Zitron himself is a prototypical example of what I'm describing, having developed a sort of cult following around such blistering takedowns, most often directed at tech companies, tech policies and their apologists. An Englishman living in the US (in Las Vegas of all places? Immediate red flag), he thrives on playing the role of the mouthy Brit: a well-rehearsed character played by such exports as Piers Morgan, Simon Cowell, Jeremy Clarkson, John Oliver and Ricky Gervais. Audiences seem to be entranced by the witty man with a disarming accent and a propensity to 'tell it like it is', appearing in podcasts and columns and turning pithy concepts into lengthy tomes by repeating ideas over and over, each round increasing in breathless exasperation and in the frequency of titillating swearwords that hint at the raw authenticity of a figure so unencumbered by the genteel restraints of the establishment.
Don't get me wrong: I agree politically and structurally with the thrust of Ed Zitron's takes, but I'm unsure of the utility of this brand of critique. The only way I can imagine you would enjoy reading the entirety of one of Zitron's 10,000+ word newsletter screeds is if your only purpose for reading things is to achieve that ha ha ha yes sicko moment where your mind is gently tickled by the self-congratulatory feeling of being one of the sensible and reasonable smart guys while those depraved normies flail around in pathetic idiocy.
Very often, commentators within the technology sector reduce these questions into a simple battle between machines and humans. Either the forces of "progress" will prevail against retrograde Luddite tendencies, or on the other hand, human beings will successfully resist the inhuman encroachment of artificial technology. Not only does this fail to appreciate the complexities of past distributional struggles, struggles that long predate the computer, but it ignores the many different possible paths that future progress might take, each with its own mix of technological possibilities and choices.
Farrell et al., Large AI models are cultural and social technologies, Science
I'd appreciate a nuanced criticism that does more legwork than simply affirming Casey Newton's frankly brain-dead 'either-or' view; I'm not sure even Newton would have thought that by describing this dichotomy he was throwing down the gauntlet, issuing some sort of perverse challenge to the critics to dig their trenches even deeper. This dichotomy is harmful at the very least because it inevitably pushes the more sceptical of us into a losing position; technology (and especially this technology) is in fealty to capital, and serving the public good is not its default purpose. Rather, egalitarian benefits need to be wrested from the tight fists of industry, and this is not a battle which can be fought by a pigeonholed criticism stuck in the simplistic rhetorical rut of denial, rejection and doomerism.
It's massively beneficial for tech's boosters to trap critics in this echoing bunker; it feels to me that this is where the crypto criticism of the early 2020s failed so risibly. Sure, a lot of popular perception was negatively influenced, and undoubtedly the term crypto is largely pejorative in 2025, but in real terms this represents no actual failure for the industry. Despite all their volatility, popular cryptocurrencies like Bitcoin and Ethereum hold values an order of magnitude greater than they did five years ago, and perhaps more crucially, in 2025 Bitcoin mining alone uses more energy than the entire AI industry, without even taking into account the rest of crypto. The industry marches onward unfazed, and the voices that railed so vehemently against its structures seem to have faded years ago, sinking into their own irrelevance.
The endless reiterations of being simply aghast and appalled at the outrageous state of things from commentators (I don't want to dunk on Zitron too hard, so see also: Paris Marx, Dan McQuillan, everybody on Bluesky) are all too reminiscent of the reactions of liberal commentators to every emblem of social decline of the past decade or two: the astonishment at the Brexit referendum result, the shocking election of Trump in 2016, the appalling success of misogynists like Jordan Peterson or Andrew Tate. Beyond a certain point, continuing to be surprised by these developments perhaps doesn't signify that you're a conscientious member of society but rather that you're grossly unaware of the society you're a member of. Not one of us should be surprised by the capacity of private industry to appropriate new technologies and use them for exploitative purposes. Implicit in every critique of modern AI, whether discussing DeepMind research at Google a decade ago or more recently tackling the multiple AI companies founded by the wealthiest man in the world, should always have been the expectation that this was a capitalist project with extractivist aims. Any alternative, more egalitarian purposes would always have required tactical manipulation and countervailing intervention to wrench it out of the grasp of exploitative industry, and we should demand at least this level of awareness from critique.
So, what?
What might these interventions look like? Obviously I don't have a definitive answer (if I did, I'd have told you long before now), but I have ideas, and frameworks of ideas, and I feel like there are doors to start pushing open. I suppose this is why I write and why I spend time with these concepts, but it's taken far too long for me to recognise that the feeling of malaise when I've dipped into my reading list this year hasn't been my own apathy, but rather a lack of imagination from the texts to explore ideas more radical than a pacifying and over-simplistic self-congratulatory luddism.
There are threads of discussion I catch glimpses of but which remain elusive: ideas which offer a spark of countervailing intervention against the trend towards what historian Jeffrey Herf terms (via John Ganz) reactionary modernism. Two potentially outrageous ideas interest me right now, both catalysed by the technological landscape: first, a radical change in how we perceive individual ownership, hopefully in favour of a new and robust public domain. Second, a fundamental shift in what we understand intelligence to be, hopefully in favour of a humanitarian and anti-individualist challenge to the myth of genius. On the former, Molly White has a good piece prompting some thoughts on this. On the latter, I hesitate to recommend anything from Benjamin Bratton because it often seems like pseudo-intellectual bullshit, but I love that defining intelligence is Antikythera's current focus.
I'm desperate to read and talk about these things. Do you have these glimpses of ideas? What are they?
Utopian or dystopian, both extremes of AI hype are united in one tragic sense: they are responding to the only vision of a future anybody is offering at all. Casting these technologies in terms chiefly limited to specific kinds of endpoints and trajectories of AI oriented toward the future (whether it is faith in market crashes or AGI, or casting forward dystopian or utopian narratives) is part of the hype, which muddies critical work in the present. It is a form of wishful thinking, a Deus Ex Machina, that should not displace the real work of addressing immediate and concrete harms.
Eryk Salvaggio, Future Fatigue: How Hype has Replaced Hope in the 21st Century, Tech Policy Press
Thanks for taking the time to read or listen. Please get in touch if you have thoughts, and if you have friends who you think would be interested in joining the discussion then pass my email on.