28: Translucent Technology

I've been thinking about a concept which I might unimaginatively call translucent technology or invisible technology (or shadow technology or ghost tech, idk, something like this; hopefully someone else has already determined a much better name for it). The rough gist is that as a tech bubble deflates, the technology that fuelled it will continue to exist, function and even develop, but will work mostly or entirely **invisibly** to its users. In this scenario, the technology in question might even abandon its nomenclature, becoming subsumed into the ordinary mélange of utilities we just call technology.

Obviously I'm thinking about this in reference to AI, but contextualising it by considering some past waves of invention and hype and normalisation, like the way that online shopping eventually just became shopping, or how social media transitioned from a distinct technological form of relationship to a standard mediator of normal friendships.

Making a couple of assumptions here: 1) we're looking beyond the past couple of years of AI boom towards what might come next, i.e. an inevitable refactoring of the industry (don't call it a bust; nobody can decide what a bust looks like), and 2) we're understanding that 'AI' is less a technology and more a brand. If so, it stands to reason that once the brand declines, a hard technological product will still remain. It'll be a product that can't achieve the wild ambitions its boosters promised, but a substantial thing nonetheless, much of which is open-source and freely available.

I think we can see some of this in motion, maybe in an accelerated fashion, but wobbling along an uncertain path. Looking at the end of 2024 and the start of 2025, a lot of art and tech critics tried to move away from AI discourse, sensing a general mood shift towards fatigue with the topic. Fast forward a few months, however, and it still dominates discourse, to the begrudging acquiescence of its commentariat: no matter how sick of the subject we might be, the technology and industry steam ahead undaunted, forging new headlines, crises and controversies. This pushes me to wonder whether the process of a technology becoming ubiquitous occurs because the technology satiates, stabilises and becomes more complete, or whether it happens simply through tiredness and weak acquiescence.

The term AI is at a difficult juncture, as ambiguity surrounds its PR efficacy; it is an umbrella term, and while there is ongoing consternation around how AGI might be defined, we've hardly settled on what counts as normal AI yet. Mozilla's Firefox dev team recently previewed a new tab-grouping feature which is simultaneously the most boring and most interesting AI tool I've seen in a while.

All this feature does is run Mozilla's own very small, efficient language model when you group some tabs into a folder; it checks the headers of the pages and quickly (almost imperceptibly) suggests a name for the group. The model runs locally on the user's device, doesn't share any data, and takes up only about 90MB of disk space. Mozilla announced this feature as AI-Powered Tab Grouping, presumably to rival the many other browsers and extensions which do this by uploading your personal data to OpenAI or Google, and a mind-numbing shitstorm ensued as dweebs on the internet instantly decried this "AI bullshit nobody asked for", "sending all our data to the cloud", amid claims of Mozilla killing the planet by funnelling everything through massive datacentres, despite none of these statements being true of the product in question.
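For the curious, the flow being described is genuinely this simple: take the titles of the tabs in a group, hand them to something that can summarise, get a label back. Here's a minimal, hypothetical sketch of that flow in Python. The function name and keyword heuristic are my own inventions (the real feature uses Mozilla's small on-device language model, not a word counter); the heuristic just stands in for the model so the sketch is self-contained.

```python
import re
from collections import Counter

# A few filler words to ignore when picking a label (illustrative, not exhaustive).
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "with", "how"}

def suggest_group_name(tab_titles):
    """Suggest a name for a tab group from its page titles.

    Stand-in for the local language model: picks the most frequent
    non-stopword across the titles and capitalises it.
    """
    words = []
    for title in tab_titles:
        words.extend(w for w in re.findall(r"[a-z]+", title.lower())
                     if w not in STOPWORDS)
    if not words:
        return "New group"
    return Counter(words).most_common(1)[0][0].capitalize()

print(suggest_group_name([
    "Python Tutorial - W3Schools",
    "The Python Standard Library",
    "Learn Python in 10 Minutes",
]))  # → Python
```

Everything happens on-device, nothing leaves the machine, and from the user's side all that appears is a sensibly named folder.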

It's easy to call these claims the brain-dead alarmist proclamations of people who read the tagline and not the text, but I can't help but empathise in some way if we view the crisis as being primarily semantic. If the technical definition of AI is so loose, but conceptually it is defined by being tightly linked to things like invasion of privacy, corporate enshittification and climate crisis, maybe we have to admit that these things are what 'AI' is. Max Read and Sam Biddle recently asked, *What if "A.I." is just more surveillance?*, and questions like this are apt; I'm not sure we can conceptually extricate all these bad things from AI.

We might conjecture then that this categorises Mozilla's AI-Powered Tab Grouping as in fact *not* AI despite being based on an LLM, and I could get on board with this. To the user, a feature like this bears few hallmarks of anything we've experienced from AI technologies so far: there's no chatbot interface, no account login or personal data sharing, no whirling notification encouraging us to marvel at how magical the AI-powered future is going to be. So what is AI-Powered Tab Grouping if it's not AI? I guess it's just… tab grouping.

So this is what I mean by translucent technology. The literal tech might be there, but its overtures aren't; we see through the technology and describe it by its function. I'm not a forecaster and making predictions feels cringe, but I'm looking ahead assuming we'll see many potentially useful tools using this approach to supplant their current strategy, which seems to be plastering the ✨ emoji wherever possible.

There's another, more scintillating angle on this which I want to explore: the technology working to erase itself. Imagine using an AI tool unknowingly, where the mechanism preventing the intervention from being noticeable is the AI tool itself concealing its own existence. If you have any thoughts on that, please let me know!