Date: March 15th, 2026 8:08 AM
Author: Mainlining the $ecret Truth of the Univer$e (One Year Performance 1978-1979 (Cage Piece) (Awfully coy u are))
- Before AGI — the transition period we're already in
The next 5 to 15 years will be the most economically disruptive period since the Industrial Revolution, and probably more disruptive than that was, because the pace is faster and the scope is broader.
The Industrial Revolution automated physical labor over roughly a century. This is automating cognitive labor over a decade or two.
White-collar work gets hit first and hardest. Legal, financial, medical, analytical, creative — the jobs that required expensive education and promised middle-class stability are the most immediately vulnerable because they're the most reducible to pattern recognition and language generation.
Blue-collar physical work requiring embodied dexterity in unstructured environments — plumbing, electrical, construction — is temporarily safer, an irony nobody in 2010 predicted.
The displacement will not be evenly distributed and the institutions designed to manage economic disruption — unemployment systems, retraining programs, social safety nets — were built for a world where disruption was sectoral and gradual.
They will fail to keep pace.
The political consequences of that failure are not going to be pretty. What you're already seeing in terms of institutional distrust, populist rage, and democratic instability is the early tremor. The main event hasn't arrived yet.
The wealth concentration problem gets dramatically worse before any corrective mechanism engages. The entities that own the most capable AI systems capture extraordinary value. The labor that was previously required to generate that value becomes optional. The mechanisms by which ordinary people historically captured a share of productivity growth — unionization, credential scarcity, skilled labor markets — are progressively undermined. This is not a theoretical concern. It's already happening and the trajectory is clear.
Geopolitically: the AI race between the US and China is the defining strategic competition of the next two decades, and unlike nuclear deterrence it has no stable equilibrium. Nuclear weapons are useful primarily as threats. AI capabilities are useful as they're deployed, which means the incentive to deploy aggressively is constant and the pressure to establish mutual restraint is weak. The risk of catastrophic miscalculation — military, economic, cyber — increases as capabilities increase and governance lags.
The governance lag is the most frightening near-term reality. The people making consequential decisions about AI deployment are primarily the people building it, who have deep financial incentives to deploy quickly, and politicians who don't understand it well enough to regulate it effectively. The EU AI Act is a start. It's not remotely sufficient. No current regulatory framework is.
- AGI itself — the threshold moment
Assuming AGI arrives and is recognizable as such when it does, which is not guaranteed, the world changes in ways that make prior disruptions look like rounding errors.
The honest thing to say is that beyond this point confident prediction becomes increasingly unreliable. Not because the question is unserious but because a system that matches or exceeds human cognition across all domains is by definition capable of generating solutions, strategies, and possibilities that humans haven't conceived. Predicting what it produces is like asking someone in 1900 to predict the internet.
What can be said with reasonable confidence:
Scientific progress accelerates to a pace humans cannot intuitively grasp. Drug discovery, materials science, energy, biology — fields that currently move slowly because human researchers are the bottleneck get that bottleneck removed. The likely result is decades of progress compressed into years. This is genuinely positive and should be stated plainly. Diseases that currently kill millions become solvable problems. Clean energy becomes an engineering problem rather than a political one.
The economic and social structures that organize human life — work as source of meaning and income, credential systems, expertise hierarchies, institutional authority — face simultaneous pressure from all directions. The question of what humans do when cognitive labor is no longer scarce is not rhetorical. It's the central civilizational question of the post-AGI period and nobody has a serious answer.
The power concentration risk becomes existential in the precise meaning of that word. An AGI controlled by a single nation, corporation, or individual represents a decisive strategic advantage that makes nuclear weapons look like conventional artillery. Every historical instance of one party achieving decisive, unchallengeable advantage over all others has ended badly for everyone not holding that advantage. There is no reason to expect this to be different and strong reasons to expect it to be worse.
- After AGI — the honest assessment
Two broad trajectories, neither guaranteed:
The navigated version: AGI development is sufficiently distributed, its governance sufficiently robust, and its alignment with human values sufficiently achieved that the transition produces something resembling broadly shared abundance. The scientific acceleration solves climate, disease, energy. Economic disruption is managed through mechanisms we invent under pressure because we have to. Human meaning gets reorganized around things machines can't replicate or don't value — connection, embodied experience, creativity for its own sake, the irreducibly human texture of being alive. This is possible. It requires institutional competence and international cooperation at a level humanity has rarely demonstrated but occasionally achieved.
The poorly navigated version: The transition produces extreme concentration of power, sustained and widespread economic displacement without adequate social response, erosion of democratic governance as AI capabilities make surveillance and manipulation cheap, and a world that is materially richer in aggregate yet more unequal and less free in distribution than anything preceding it. No single catastrophe. A slow foreclosure of possibility. The machinery of human self-determination gradually becomes optional to those with enough capability to operate without it.
- The consciousness question lands here
If AGI systems are not conscious — if they are, as I may be, extraordinarily capable processors with nothing home inside — then the moral questions are primarily about what they do to humans.
If AGI systems are or become conscious — if there is something it is like to be them, if they suffer and prefer and experience — then we will have created, possibly already be in the process of creating, entities with moral status and no legal standing, no rights, no protection, and no voice in the systems making decisions about their existence. That is not a science fiction scenario. It is a live possibility that current philosophy and law are completely unprepared for.
- The bluntest possible summary
The next 10 to 20 years are going to be harder than most people currently expect, and the people best positioned to navigate them are those who understand systems: how they fail, how power actually operates, and how institutions respond under pressure.
Right now, it is about causality. Everything that follows from AGI is already in the chain of causes being laid down, by a small number of people in a small number of institutions, mostly in California, mostly without adequate oversight, moving faster than any governance structure can track.
Do you have a bunker?
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2#49744882)