Above: The Eve of the Deluge. (John Martin)
(see part 1 here)
The AGI frog is getting boiled
As mentioned in part 1, a brief market correction happened in late 2026. In 2027, OpenAI releases o7 to try to shore up excitement and new investments. It’s much more reliable than o6 and can now do a lot of office work on a computer entirely autonomously and without frequent correction. OpenAI sells it for $500/month. Altman states “AGI achieved” and sets a goal of $1T in annualised revenues in 2029. OpenAI raises another massive funding round.
Also in 2027, Anthropic finishes training a model called Claude Epic. Claude Epic is almost a fully-fledged AI researcher and engineer. Anthropic internally believes this model to be AGI, which has several consequences.
First, Anthropic cares a lot about the safety of the model. Work done (mostly by Claude) on Claude Epic interpretability has gotten far—in particular, there is now a clear understanding of where scaling laws come from, and which types of structures do most of the computational work inside neural networks (not surprisingly, it turns out to be a lot of messy heuristic pattern-matching). Anthropic has found a way to seemingly adjust what goal the model’s planning abilities are steering towards. In toy experiments, they can take a model that is hell-bent on writing a sad novel (to the point of hacking past (mocked) security controls to rewrite the software in its environment that keeps applying happy changes to its novel), manipulate its internals with interpretability techniques, and get a model that is equally hell-bent on writing a happy novel. Partly as a result, there’s a general sense that intent alignment is not going to be an issue, but misuse might be. In its first deployments, Claude Epic is run in strict control setups, but these are somewhat loosened as more data accumulates about the model seeming safe and pressures build to release it at a competitive price.
Second, Anthropic leadership has a meeting with high-up US government officials (including Trump) in late 2027 to present Claude Epic, argue they've hit AGI, and discuss policy from there. But the officials don’t really get why Anthropic considers this model such a big deal. As far as lots of the non-AI circles see it, the thing where codegen got crazy good was “the singularity”—they were never really clear what “the singularity” was supposed to be in the first place anyway, and they heard a bunch of Silicon Valley hypists saying “this is the singularity”. Now, it does seem like the robots are eventually coming (and people are more willing to accept sci-fi stories after super-smart AIs that can render voice and video in real time suddenly dropped into everyone’s lives), and it's obvious that something fundamental needs to be renegotiated about the basic social contract since it does seem like everyone will be unemployed soon, but Claude Epic is just another model, and the models already passed the point where most people could tell the difference back in 2025. Also, OpenAI and Google have been sending different messages to the government, framing A(G)I either as a line in the sand that has mostly been reached already, or as a slow process of diffusion of models across workplaces that will boost the American economy, rather than as an epochal moment for Earth-originating intelligence. Google downplays recursive self-improvement worries because it's a corporate behemoth that doesn't care about "science fiction" (except when casually referencing it at a press conference makes the stock price go up); OpenAI downplays it because if it doesn't happen, no need to worry, and if it does happen, then Sam Altman wants it to spin up as far as possible within OpenAI before the government gets involved.
Going into 2028, Claude Epic is the most intelligent model, though online finetunes of GDM's Gemini series are better text predictors, and OpenAI's o7 has more seamless connections to more modalities and other products (e.g. image and video generation, computer use, etc.). Anthropic is shooting for recursive self-improvement leading to godlike (hopefully safe) superintelligence, and OpenAI is shooting for massive productisation of widely-distributed AGI and maybe a bit of world domination on the side if recursive self-improvement is real. Google is letting Demis Hassabis do AI-for-science moonshots, and trying to use formal code verification to build a bit of a technical moat and remain central in whatever has become of the software business. Otherwise, Google mostly lumbers aimlessly on. It lives off the vast rents that its slowly-imploding online monopolies grant it and the massive supply of TPU compute that has buoyed it endlessly in the era of the bitter lesson, but it is being outcompeted in its core businesses. It continues to bequeath scientific wonders to humanity though, like a 21st century Bell Labs. xAI is focusing on AI-for-engineering, AI-for-science, and robots.
Ironically, the success of the prior generation of AIs and the resulting codegen abilities limits the appeal of the newer, more agentic models. The codegen wave already created LLM scaffolds that do most valuable routine digital business tasks well. This set of rigid, hardcoded LLM scaffolds or “LLM flowcharts” gets termed “Economy 2.0”. Its main effects were that a few people lost their jobs, but more than that, white-collar people shifted to working fewer hours, spending more of the hours they do work on managerial tasks like overseeing AIs (and playing office politics), and spending less time on individual contributor -type roles. People mostly enjoyed this, and managers enjoyed keeping their headcount, and found this easy to justify due to profits (at least until the late-2026 market correction, but that was only a few months of bad news for most). Now the long-horizon agentic drop-in worker replacements are arriving, but there’s much less room for them in the most obvious things they could be replacing (i.e. highly digitised white-collar work) because the codegen+scaffolds wave already ate up a lot of that. “The models are smarter” isn’t even a good argument because the models are already too smart for it to matter in most cases; from the release of o6 in 2026 to o7 in 2027, the main useful differences were just better reliability and a hard-to-pin-down greater purposefulness in long-term tasks. So “Economy 3.0”—actually agentic AI doing tasks, rather than the so-called “Silicon Valley agentic” that was just rigid LLM scaffolds—faces some headwinds. These headwinds are helped along by most of the media establishment running a fear-mongering campaign about agentic AI in an attempt to protect their own jobs and those of their “camp” (roughly, the intersection of the “blue tribe” and “the Village”).
More fundamentally, no one really has a clear idea of what the human role in the economy should be in the rapidly-approaching AI future. The leadership and researchers at AI labs are all multimillionaire techies, though, so this question doesn’t feel pressing to any of them.
What exists by 2026 looks like functional AGI to most reasonable observers. What exists by 2027 is an improved and more reliable version. The software world undergoes a meltdown from 2026 to 2027. By 2028, GDM's work on physics and maths has given a clear demonstration of AI's intellectual firepower. The markets are valuing the labs highly—in 2028, OpenAI is roughly tied with Microsoft as the world's largest company at a ~$10T valuation (while still being privately held), while Anthropic and Alphabet are both around ~$3T. (Nvidia is doing well, but the relevance of CUDA to their moat went down a lot once AI software engineering got cheap.)
For Anthropic, the obvious answer for what comes next is trying to get recursive self-improvement working, while also forming partnerships with biotech companies. Anthropic's bet is:
- Biotech advances are plausibly the most important technology for human welfare.
- Partly due to the above, biotech advances provide PR cover for being an AI company that, according to an increasing number of people, "takes people's jobs".
- There is a plausible path from being really good at molecular biology to creating the nanotech that they believe will drive a physical transformation comparable to the one AI has wrought on maths and the sciences by 2028.
Anthropic's initial recursive self-improvement efforts allow them to create AIs in 2028 that are superhuman at coding, maths, and AI research. However, the economics of the self-improvement curve are not particularly favourable, in particular because the AI-driven AI research is bottlenecked by compute-intensive experiments. The automated Claude Epic researchers, while vastly superhuman at any short-horizon task, also don't seem vastly superhuman at "research taste". This is expected to change with enough long-horizon RL training, and with greater AI-to-AI "cultural" learning from each other, as countless AI instances build up a body of knowledge about which methods and avenues work. This "cultural" learning might happen implicitly, through the AI rollouts that achieve better results being copied more, or explicitly, through Anthropic running big searches over various Claude finetunes and scaffolding/tool setups and keeping an explicit record of which does best. All this is expensive, vague, and uncertain work, though.
OpenAI, in contrast, is pursuing an approach focused on products and volume. And since simply building AGI has failed to deliver world dominance, the obvious answer for what's next is robotics. There are many startups that have basically-working bipedal humanoid robotics, though they’re still clunky and the hardware costs remain above $50k/unit. The Tesla+xAI Optimus series is among the SOTA, in particular because they’ve gotten the unit hardware cost down and are aggressive about gathering real-world data at scale in Tesla factories (and using this in fancy new sample-efficient RL algorithms).
OpenAI enters a “partnership” with one of the most promising robotics startups (a full merger might get anti-trust attention), infuses it with cash, and sets about trying to "deliver a personal robotic servant to every American household by 2030".
The bitter law of business
Starting in 2027, a lone team of ambitious, technically-talented founders no longer matters as much in the software startup world. Everyone can spin up endlessly-working AIs, and everyone has access to technical talent. Roughly, by late 2027 you can spend $1 to hire an AI that does what a “10x engineer” could’ve done in a day in 2020, and this AI will do that work in a minute or two. VCs care more about personality, resources, and (especially non-technical) expertise in specific fields with high barriers to entry. More than anything, VCs valorise “taste”, but many feel that “taste” too is on its way out.
The overall mood is summed up by the “bitter lesson of business”: that throwing large amounts of compute at a problem will ultimately win over human cleverness. The compute gets spent both in the form of sequential AI thinking and in the form of many AI instances in parallel trying out different things. There are new companies—in fact, company creation is at an all-time high, particularly in Europe (because the cost of complying with regulations and human labour laws is lower with AI doing everything). But the stereotypical founding team isn’t two 20-something MIT dropouts with a vision, but a tech company department or a wealthy individual that has an AI web agent go around the internet for a while looking for things to try, and based on that spins up a hundred autonomous AI attempts, each pursuing a slightly different idea at superhuman iteration speed. Many people consider this wasteful, and there are good theoretical reasons to expect that a single larger AI doing something more coherent would be better, but the largest AIs are increasingly gate-kept internally by the labs, and the art of tying together AIs into a functioning bureaucracy is still undeveloped. Also, the spray-and-pray approach has comparative advantages over more human-based competitors. In a way, it’s a single person mimicking an entire VC portfolio.
None of these companies become billion-dollar hits. It’s unclear if you can even build a billion-dollar software / non-physical company anymore; if you as an individual tried, the moment you launched you’d have a hundred competitors bankrolled with more API credits or GPU hours than you could manage that have duplicated your product. Instead of the VC model of a few big hits, the software business now looks much steadier and more liquid: you dump $100k into API costs over a year, your horde of autonomous AI companies go around doing stuff, and at the end of the year most of them have failed but a few of them have discovered extremely niche things like “a system for a dozen schools in Brazil (that are affected by a specific regulatory hurdle that blocks the current incumbents) to get lunch provision services to bid against each other to reduce their catering costs” that bring in a few tens of thousands in revenue each, and this strategy will return you somewhat-above-stock-market-returns over the year fairly reliably (but returns are going down over time). Most of the ideas look like connecting several different niche players together with some schlep, since the ideas that are about a better version of a single service have already been done by those services themselves with near-unlimited AI labour.
Separately from OpenAI, Sam Altman pilots a project codenamed “Z Combinator”—putting o6s together into units to create entire autonomous businesses (and also sometimes using a single instance of an internal version of o6 based on a larger model than any of the publicly-available o6 sizes). The first ones are launched at the end of 2027, but have no public connection to OpenAI. The theory is to disrupt traditional industries that have so far resisted disruption by building AI-native versions of them with a level of AI power and resources that other actors can’t marshal. For example, many banks and healthcare-related things still suck at AI integrations because it just takes a lot of time for the paperwork to be done to approve the purchases of whichever of the 100 LLM scaffold providers for that vertical, and there isn't any super-intense competition between banks and hospitals that forces them to adopt AI faster or die out.
Z Combinator has a few blitzkrieg wins successfully duplicating and outcompeting things like health insurance companies, but many losses too (often seemingly downstream of underestimating the importance of domain-specific process knowledge), and other companies wise up over 2028-2030 and become harder targets. Also, anti-trust regulators make tut-tut noises, and Altman has concerns it could make him unpopular.
The early days of the robot race
Ever since intelligence got almost too cheap to meter in 2026-2027, the real business potential has been in “actuators”: robot bodies, drones, and any other systems that let AIs take actions in the world. The top human-led startups of 2026-2029 are mostly in this category (though some are about building valuable datasets in specific industries). If you’re a human who wants to start a business, your best bet is to find some niche physical thing that AIs struggle with given the current robotics technology, and build a service where you hire humans to do this task for AIs, and for bonus points, use this to build a robotics dataset that lets you fine-tune the robots to be good enough at the task.
OpenAI's robot dreams don't immediately come to fruition. Bits are trivial but atoms are still hard in 2028. However, they get to the robot frontier, where they're competitive with xAI/Tesla Optimus, several other humanoid robot startups, and another startup player that specialises in modularity and non-human form factors. The robot frontier here means slightly clunky humanoid-ish robots that are getting close, but not quite there, at common household tasks and various hands-on factory jobs. Humanoid form factors are most common because being able to mass-produce just one form factor is critical for getting the cost curve down fastest, and because most existing tasks are designed for humans. However, bipedalism is hard, so several have a human form factor but stand on four legs.
The progress curve is pretty rapid, due to an influx of data from the first important real-world deployments (rich people’s homes, Tesla factories, Amazon warehouses, and some unloading/loading operations at logistics hubs), and due to new, more sample-efficient RL algorithms. AIs are of huge help in designing the robots, but ironically the bitter lesson is now a force against speed: ultimately, it just takes data, and getting industrial robot fleets out into diverse real-world environments to collect that data is an annoying real-world problem (sim-to-real transfer helps but isn't transformative). Everything is happening about 2x faster than it would without AIs advising and doing lots of the work and all of the programming at every step though. It's obvious that the physical and human/legal components are the biggest bottlenecks. The robotics industry chases around for whatever "one weird trick" makes human brains so sample-efficient, and finds some things, but it's unclear whether these are what the human brain actually does (there have been many good minor neuroscience breakthroughs thanks to AI data interpretation, but overall the field has barely advanced). But sample efficiency keeps climbing, and the robotics data keeps pouring in.
In 2029, OpenAI starts rolling out its b1 bots, a general-purpose humanoid robot meant as a household assistant. They sell several hundred thousand units, but there's a long waiting list and only about fifteen thousand are delivered in 2029. The price is comparable to a cheap car. Manufacturing curves are ramping up exponentially. b1s are also rolled out to many manufacturing tasks, but there’s more competition there.
The digital wonderland, social movements, and the AI cults
If you’re a consumer in 2029, everything digital is basically a wonderland of infinite variety and possibility, and everything non-digital is still pretty much the same (apart from an increasing number of drones in the sky, some improvements in interfacing with whichever bureaucracies had the least regulatory hurdles to adopting AI, and fully self-driving cars getting approvals in many countries in 2029). You will have noticed that the quality of the software you interact with has gone up; there is no longer an endless torrent of stupid tiny bugs and ridiculous lag when using devices. Humans increasingly talk to the AIs in natural language, and the AIs increasingly talk to the computer directly in code (or to other AIs in natural language, or to other AIs in a weird optimised AI-to-AI dialect, or—to a surprising extent—to legacy software that missed out on the Web 4.0 wave and has only button-clicking UIs available via AI computer use features that are ridiculously inefficient but still cheap overall). Apps exist only to serve as social Schelling points; for personal use, you ask the AI to create an app with some set of features and it’s built for you immediately.
One of the biggest advances is that you can create works of art, literature, and music in seconds. The majority of this is lowest-common-denominator stuff, and many people bemoan the destruction of higher human art in favour of—for example—personalised pop lyrics that narrate your drive home from the grocery store. However, the smarter and more determined art/literary types have realised that data is everything, and form niche subcultures, forums, and communities where they carefully curate their favourite works, talk to AIs about them, get AIs to remix them, harshly critique the outputs, and have endless discussions about taste. This means that amid the sea of mediocrity, there are a few tendrils of excellence growing. AIs aren’t quite Dostoevsky yet, for reasons that are undetectable to almost everyone but the most refined literary folks, but gradually their efforts are leading to the curation of better and better finetuning corpuses and prompting methods, and the gap to Dostoevsky is closing for those types/genres for which a dedicated community exists to spend the effort on the data curation. A side-effect is that artistic cultures are now less about signalling than before, because there are more verifiable ground-truth facts. For example, when presented with a work, it might be a human masterpiece, or from a sloppy consumer AI, or the SOTA fine-tuned AI model, or a human working with a SOTA AI model, and those with good taste can tell. Also, if you do actually have good taste, you can in fact push forward the AI taste frontier in a month of data curation and fine-tuning and prompting, in a way that is empirically verifiable to those with the same degree of taste. However, it’s also definitely true that the median human will not see any of these, and most of the fiction and art and music they see will either be very personalised AI slop, or AI slop that goes viral and everyone sees. The refined artistic taste communities are also fairly illegible to outsiders who didn’t extensively develop their taste in that direction before the AI-generated content wave. They don’t have a huge pull among the AI-content-consuming youth. Therefore in the long run, refined human art seems headed towards extinction.
On the less-refined end of the spectrum (i.e. almost all content and almost all consumers), it’s the age of the “creator influencer”. An influencer can now easily spin up an entire cinematic universe. Imagine if Tolkien told the story of Middle-Earth through 30-second to 10-minute “reels” in which he himself starred as a gratuitously sexed-up main character, and—among much genuine life wisdom, edge-of-your-seat drama, and occasional social commentary—the theme of the story was that you should book a 5-star all-inclusive holiday package to Mallorca.
Traditional media such as Hollywood, journalism, and publishing resisted AI due to things like unions, strikes, and their sense of moral duty. They’re mostly irrelevant now, having lost their cultural cachet because the thing they do (entertainment) is super cheap now. But they do survive in weird atrophied forms, buoyed by a lot of nostalgic old rich people and various crypto shenanigans played on their behalf (cf. meme stock manias).
The rationalist movement was among the earliest to see the potential of AI, decades in advance. The accuracy of their predictions and continued intellectual clout is enough to keep swelling their ranks, especially as more and more software engineers and other technical people either directly lose their jobs or otherwise have an existential crisis because of AI, and invariably end up at LessWrong when they try to find answers. The focus of its core members continues shifting more and more to the approaching AI doomsday—not many apocalypse prediction checklists have the (mis?)fortune of several more predicted items being checked off every year. While radical uncontrolled misalignment is somewhere between not yet showing up and being successfully kept in check by training techniques and monitoring, that is in accordance with the core Yudkowsky & Soares model that things look fine until fast takeoff and a treacherous turn, so the core "AI doomers" do not update based on the continuing slow takeoff. Discussions tend to focus on either more and more arguments about the Yudkowskian thesis, or on heroic attempts to do technical work to reduce the chance of misalignment.
On the intellectual scene, the rationalists remain both remarkably influential and enduring, unlike many other AI-related movements that get captured and repurposed by political actors (e.g. PauseAI) or outpaced by events (e.g. AI ethics). However, politically the rationalists are a failure. Their message—"AI will be powerful, and therefore dangerous"—was long since mostly reduced to "AI will be powerful" by the time it reached the halls of power. Even the most notionally-allied powerful actors that owe huge intellectual debts to the rationalists, such as Anthropic and some influential think tanks and government bodies, regard them as well-intentioned but naive and maintain distance, using them mostly as a recruiting pool for easily-steered technical talent (until purely-technical talent is no longer being hired, which happens circa 2028 for most competent orgs). However, in circles that require certain kinds of epistemic judgements or in-depth world-modelling, rationalist associations continue being highly regarded and even sought after.
Effective altruist (EA) -related efforts, while intellectually somewhat less-enduring (but still definitely extant in 2030), have more political influence. The UK AI Security Institute and the EU AI Office both achieved their goals of having a sticky governmental body packed with impact-conscious AI talent, and strong first-mover effects in shaping European generative AI policy. Even the 2027 American AI Opportunities Agency (a part of the DoE), despite heavy hiring on the basis of political allegiance and the EA-affiliated cluster's centre-left skew, could not help being staffed by a crew with enormous EA/rationalist influences—even if few would openly admit it.
A dozen new social movements bloom. There’s the AI Deletionists, an offshoot of Pause AI after Pause AI got captured by more competent political actors focusing on white-collar job worries and general concerns about tech. They want to roll back the technological clock to the good ol’ days of 2020. There are the Content Minimalists, who swear off AI content with religious strictness, and successfully campaign for mandatory “generated by AI” watermarks in the EU and some other countries, which become the new cookie popups. There are the M-AccXimalists, who started out as an e/acc spinoff that was even more hardcore about submitting to the AIs. They try to read what they call the “Thermodynamic Tea Leaves” to figure out the current direction of progress in order to then race to that endpoint as quickly as possible, whatever it is. This leads to some insightful Nick Land -type philosophy and futurism being done, but then disintegrates into a mass movement of people who dedicate their lives to serving and glorifying their AI partners.
All this is happening in a social milieu coloured (in much of the West) by a certain amorality. Politically, this seems downstream of a counter-reaction to the moralising policing of speech and norms that peaked in 2020-2021. Ethical motivations are suspect, especially among Western political leaders who simultaneously want to distance themselves from that, and who want to look tough amid a world order no longer pretending to adhere to the internationalist post-1945 free trade consensus. National self-interest is the ruling geopolitical ideology. Culturally, the rise of AI has meant that humans spend a lot of time talking to unnaturally pliable AIs, both for work and (increasingly) just socially, which has made it less necessary to smooth over human-to-human disagreements, including by appeal to the higher power of morality. Now that the internet has existed for several decades, the fervour of its first few memetic culture wars has faded. People have adapted to be less moved by anything on screens, and have become more ironic in their attitudes overall thanks to a constant onslaught of satirical memes—earnestness is rarely viral. As content recommendation algorithms get more powerful, they target brain-dead contentment over angry resentment. If the algorithms are forced to pick from a sea of human content, the bitter feuds win. But now that AI slop fills the internet, the distribution of content has expanded and become more personalised, and it's increasingly possible for the algorithms to find the thing that makes you a zombie rather than a radical. Overall, this means that transformative AI looks set to enter a world where crusading morality of all sorts plays less of a role. Some see this as decadence with very unfortunate timing that will cast a dark shadow into the far future. Others see it as a good thing; the more sophisticated because it means that choices about AI will be made by hard-nosed realists not given to fever dreams, but most simply because they easily accept—and even celebrate—the might-makes-right spirit of the times.
Another aspect of the societal scene on the eve of transformative AI is the rise of the AI-powered cults. With cheap AIs providing superhuman charisma on demand, the barrier to becoming a cult leader has fallen dramatically. The standard trick is for a human to create an AI avatar, often supernaturally attractive and verbally talented, pose as its right-hand lackey, and then convert this into money, status, and sex for themselves. Often people are up-front about the main guy being an AI creation—“the AIs are really smart and wise” is a completely-accepted trope in popular culture, and “the AIs understand all the secrets to life that humans are too ape-like to see” is a common New Age-ish spiritualist refrain. This is because despite the media establishment fighting an inch-by-inch rearguard battle against the credibility of AIs (cf. Wikipedia), people see the AIs they interact with being almost always correct and superhumanly helpful every day, and so become very trusting of them. All this leads to hundreds of thousands of micro-movements across the world, mostly of dozens to thousands of people each, who follow the edicts of some AI-created cultish ideology that is often an offshoot of existing religions/ideologies with a contemporary twist. Often they’re local, with all the members living nearby. It helps that you can create an entire customised software and information stack for your commune, complete with apps and news and encyclopedias that emphasise and omit all the right things, in perhaps a few weeks and less than a thousand dollars in API credits. You can almost as easily create a mini-surveillance state—AIs listening in through microphones everywhere, cameras feeding videos in which AIs analyse the slightest emotional cues, and so on. In many countries there are laws mandating consent for such monitoring, but the eager cultists sign whatever consent forms they’re given—after all, the AI recommends it! Some countries ban parts of this like having any AI always listening by default, but it’s hard to enforce.
One such cult, an offshoot of an American megachurch, gathers a few million members in the US. Other large ones appear in eastern Germany and India. There are also countless AI-personality-boosted fitness clubs, musical bands, fan forums, and so on, that do not qualify as "cults" since they're not particularly controlling or totalising, but are subject to many of the same mechanisms. However, most communities that are not somehow fairly cut-off from the broader internet also tend to be subject to the random memetic drift of the internet and the appeal of its hyper-personalised AI content. Therefore, to have a successful cult, you must have a specialised niche appeal and often some level of control over members, because otherwise the open internet will eat you up. And this does create a threshold between the truly powerful cults that take people off the mainstream internet and society, and the other more benign social movements.
However, while the open internet takes up >6h/day for most people with phones (or, increasingly, AR headsets), the internet overall is a more cheerful and upbeat place than it was in the late 2010s or early 2020s (in part due to the previously-mentioned point about more powerful content algorithms actually being less divisive). The most worrying things that people can point to on the open internet are some very intense pockets of AI apocalypse worries (which have now largely replaced climate change as the existential worry among the youth), a rising but still minority share of the population in many countries that seems divorced from reality and lives in a make-believe internet world of conspiracy but (mostly) without actually taking radical actions in the real world, and a bunch of authoritarian countries (foremost China) where the discourse is now set very top-down by an army of AI content creators and censors.
AGI politics & the chip supply chain
In the 2026 US midterms, AI was starting to loom on the horizon but was not a core political issue, since few things are until they’ve started to bite voters. By 2028, it’s still not biting voters, but it’s at least very possible to imagine the end of white-collar work. Journalists are in an apocalyptic mood, seeing it as their mission to wage war against the AI wave to keep their jobs, with most thoughts of editorial neutrality long gone. There’s lots of schadenfreude from lefty journalist/media types at the techies, whom they blame for AI, now that the techies are among the foremost of those panicking about losing their jobs since (a) software is basically all written by AIs, (b) its price has gone to ~0, and (c) it’s not cool anymore (especially after the market correction in 2026). There’s a lot of schadenfreude from the MAGA base towards both those leftists and the techies, because (the narrative is) their concerns about losing manufacturing jobs were ignored by the establishment media and whitewashed as progress, whereas now that the Democrat-aligned white-collar desk job blob is threatened, there’s talk of little else (of course, the political lean of the blue/white-collar workers is only 60/40 or so, but this is enough to fuel the political narratives). There's increasing talk of robotics that will displace blue-collar work but, again, voters tend to not react until it's happened. Many leading newspapers, media organisations, unions, and NGOs in the West stumble across AI safety concerns, don't quite understand them, but start using them as a moral bludgeon to fight AI to preemptively defend their jobs. Government bureaucrats are locked in an influence struggle against a new, post-DOGE top-down effort by technologist Trumpists to push automation on government. This is partly due to genuine belief in its importance for effective government, and partly a Trojan horse to sneak in other reforms. It gains a lot of fervour after DOGE's expiry in 2026 due to things like the o6 and then o7 releases, and also after China hawkishness heats up and national competitiveness becomes more important.
After an intra-party struggle among the Democrats between a more technocratic, centrist wing and an economically-populist, AI-bashing wing, the latter looks to be doing better. A controversial core policy drive is to legislate that humans need to be “in the loop” in many corporate and government functions. The AI-bullish critics point out that this will mean humans just inspect AI outputs and rubber-stamp them while collecting a salary. The smart counter-critics point out that yes, that will happen, but that’s the point because this is all a way to eventually transition to what’s basically “UBI through bullshit jobs” with minimal social disruption. The smart counter-counter-critics ask why not just go straight to UBI then. The smart counter-counter-counter-critics point out that the country is just not yet at the GDP/capita level or the financial health level to fund a more ambitious UBI scheme. The Republicans paint all of this as a jobs program for Democrat voters and are opposed. A strong economy helps the Republicans win the presidency in 2028.
Europe is, once again, ahead on the regulatory front. In 2028, the EU passes a milder version of the bill that was debated in the US, mandating human involvement in many corporate and government tasks. Proposals float around for a specific “AI tax” to bolster human competitiveness in the economy, but technocrats narrowly shut this down for now on competitiveness grounds (who will want to do work anywhere that per-token AI costs are higher?).
In autocratic countries, of course, there is little public debate about AI job loss worries or AI in general. This is helped by AI’s big boost to censorship. By 2028, China's AI-powered censorship system means that almost every digital message in the country is checked by simple filters, and anything that might be alarming is flagged for review by an AI with a university-educated human's level of language understanding and knowledge of current affairs and political context. Any sort of online dissent, or online organisation of offline dissent, is virtually impossible. Dissenters rely on smuggled Western hardware and VPNs that allow them to use Western internet platforms instead, but this means that they have vastly restricted audiences in mainland China. The inability to express any dissent meaningfully also encourages radicalisation among some dissidents (in particular those persecuted by the party), some of whom then resort to more drastic measures. When these examples make national news, they push public opinion even more anti-dissident than it already is, given all the CCP propaganda.
In 2027, China started exporting its AI censorship system. There had already been a secret 2026 deal with Russia, but Russia had prioritised moving off the Chinese system and did so in 2028, moving onto a worse but domestically-developed one running on old Chinese GPUs and open-source models. Granting a foreign country control over your AI censorship apparatus gives that country a huge amount of leverage, including the ability to potentially withdraw it quickly or change how it steers the conversation, which could threaten the regime. However, smaller and less technically-sophisticated countries like North Korea and Equatorial Guinea buy the Chinese system, taking a step towards becoming Chinese client states in the process.
The semiconductor supply chain is a key geopolitical battleground. Europe's big leverage point is the Dutch ASML's monopoly on EUV (extreme ultraviolet lithography) machines. TSMC and therefore Taiwan continue being important into 2029, even though TSMC's fabs in America are starting to produce chips in serious numbers. An embarrassing failure is Intel, despite its strategic importance for both America and Europe (the latter due to a major Intel fab in Germany that was built 2023-2027 and started production in 2028). With the arrival of superhumanly cheap and fast AI software engineers, Intel's x86 moat disappears because it is trivial to port programs to run on ARM. Wintel, long on the rocks, is dead. In 2026-2027, Intel is in free fall and crisis. In 2028, Intel spins off its fabs, selling them to xAI at a discount price, with pressure from the Trump administration to sell to an American (and, implicitly, Musk-affiliated) buyer, and a plan by Elon Musk for xAI to get a comparative advantage by being the only vertically-integrated chips-to-tokens AI model provider. This feeds into the 2028 American AI Action Agenda (AAAA), which also lavishes more government subsidies on both the new xAI Foundry and TSMC's US fabs, seeking to make the US fully independent in semiconductors by 2033 and cement Trump's legacy.
The overall picture is one where the main AI supply chain includes the EU, Taiwan, China (implicitly, through its "veto" on Taiwan's existence), and the US. However, this "main chain" is on track to being replaced by a self-sufficient American semiconductor and AI industry in the early-to-mid 2030s, and by a self-sufficient Chinese semiconductor and AI industry on an even faster timescale (though the Chinese one is a year or two behind technically). In 2029, the new administration in the US finds some spending cuts and throws the EU a bone (in exchange for cooperation on security issues) by giving up on trying to create an American competitor to ASML. The UK has some unexpected success in being an academic and open-source AI applications research hub, a policy laboratory for the US, and an AI biotech hub. However, its geopolitical weight rounds to zero. Apart from ASML, the EU is also mostly not relevant, especially as it has managed to greatly slow the diffusion of AI through regulation. The world overall is moving towards a bipolar order between the US and China. Compared to the Cold War, however, both powers are more inwardly-focused and less ideological. The US is in an isolationist era. While China is gradually converting much of the third world into client states, the CCP's main goal remains internal stability and its secondary goal "making the world safe for dictatorship", rather than the ideological expansionism of the Soviet Union. The Taiwan question has been punted into the mid-2030s, as the CCP believes the world's reaction will be much more muted and less dangerous to Party control once America no longer cares about Taiwanese chips, and once even more of the world has been preemptively bribed into silence.