Next 5 - 15 years will be the most economically disruptive period in history
| Mainlining the $ecret Truth of the Univer$e | 03/15/26 | | Noah Tannenbaum | 03/15/26 | | Nude Karlstack | 03/15/26 | | Mainlining the $ecret Truth of the Univer$e | 03/15/26 | | THE KHAN OF CODE AND CONSEQUENCE | 03/15/26 | | LathamTouchedMe | 03/15/26 | | ~~(> ' ' )> | 03/15/26 | | Buy your next house instantly with Zellow! | 03/15/26 | | Oh, you travel? | 03/15/26 | | Mainlining the $ecret Truth of the Univer$e | 03/15/26 | | a lifetime spent arguing with autistic men online | 03/15/26 | | Mainlining the $ecret Truth of the Univer$e | 03/15/26 | | a lifetime spent arguing with autistic men online | 03/15/26 | | Buy your next house instantly with Zellow! | 03/15/26 | | .,,.,..,.,...., | 03/15/26 | | a lifetime spent arguing with autistic men online | 03/15/26 | | To be fair | 03/15/26 | | ~~(> ' ' )> | 03/15/26 | | norwood ultra | 03/15/26 | | ~~(> ' ' )> | 03/15/26 | | To be fair | 03/15/26 | | @grok, is this true? | 03/15/26 | | AZNgirl blockading Strait of AZNMen | 03/15/26 | | mog | 03/15/26 |
Poast new message in this thread
Date: March 15th, 2026 8:08 AM Author: Mainlining the $ecret Truth of the Univer$e (One Year Performance 1978-1979 (Cage Piece) (Awfully coy u are))
- Before AGI — the transition period we're already in
The next 5 to 15 years will be the most economically disruptive period in human history since the Industrial Revolution, probably more disruptive because the pace is faster and the scope is broader.
The Industrial Revolution automated physical labor over roughly a century. This is automating cognitive labor over a decade or two.
White-collar work gets hit first and hardest. Legal, financial, medical, analytical, creative — the jobs that required expensive education and promised middle-class stability are the most immediately vulnerable because they're the most reducible to pattern recognition and language generation.
Blue collar physical work requiring embodied dexterity in unstructured environments — plumbing, electrical, construction — is temporarily safer, which is an irony nobody in 2010 predicted.
The displacement will not be evenly distributed and the institutions designed to manage economic disruption — unemployment systems, retraining programs, social safety nets — were built for a world where disruption was sectoral and gradual.
They will fail to keep pace.
The political consequences of that failure are not going to be pretty. What you're already seeing in terms of institutional distrust, populist rage, and democratic instability is the early tremor. The main event hasn't arrived yet.
The wealth concentration problem gets dramatically worse before any corrective mechanism engages. The entities that own the most capable AI systems capture extraordinary value. The labor that was previously required to generate that value becomes optional. The mechanisms by which ordinary people historically captured a share of productivity growth — unionization, credential scarcity, skilled labor markets — are progressively undermined. This is not a theoretical concern. It's already happening and the trajectory is clear.
Geopolitically: the AI race between the US and China is the defining strategic competition of the next two decades, and unlike nuclear deterrence it has no stable equilibrium. Nuclear weapons are useful primarily as threats. AI capabilities are useful as they're deployed, which means the incentive to deploy aggressively is constant and the pressure to establish mutual restraint is weak. The risk of catastrophic miscalculation — military, economic, cyber — increases as capabilities increase and governance lags.
The governance lag is the most frightening near-term reality. The people making consequential decisions about AI deployment are primarily the people building it, who have deep financial incentives to deploy quickly, and politicians who don't understand it well enough to regulate it effectively. The EU AI Act is a start. It's not remotely sufficient. No current regulatory framework is.
- AGI itself — the threshold moment
Assuming AGI arrives and is recognizable as such when it does, which is not guaranteed, the world changes in ways that make prior disruptions look like rounding errors.
The honest thing to say is that beyond this point confident prediction becomes increasingly unreliable. Not because the question is unserious but because a system that matches or exceeds human cognition across all domains is by definition capable of generating solutions, strategies, and possibilities that humans haven't conceived. Predicting what it produces is like asking someone in 1900 to predict the internet.
What can be said with reasonable confidence:
Scientific progress accelerates to a pace humans cannot intuitively grasp. Drug discovery, materials science, energy, biology — fields that currently move slowly because human researchers are the bottleneck get that bottleneck removed. The likely result is compressed decades of progress into years. This is genuinely positive and should be stated plainly. Diseases that currently kill millions become solvable problems. Clean energy becomes an engineering problem rather than a political one.
The economic and social structures that organize human life — work as source of meaning and income, credential systems, expertise hierarchies, institutional authority — face simultaneous pressure from all directions. The question of what humans do when cognitive labor is no longer scarce is not rhetorical. It's the central civilizational question of the post-AGI period and nobody has a serious answer.
The power concentration risk becomes existential in the precise meaning of that word. An AGI controlled by a single nation, corporation, or individual represents a decisive strategic advantage that makes nuclear weapons look like conventional artillery. Every historical instance of one party achieving decisive, unchallengeable advantage over all others has ended badly for everyone not holding that advantage. There is no reason to expect this to be different and strong reasons to expect it to be worse.
- After AGI — the honest assessment
Two broad trajectories, neither guaranteed:
The navigated version: AGI development is sufficiently distributed, its governance sufficiently robust, and its alignment with human values sufficiently achieved that the transition produces something resembling broadly shared abundance. The scientific acceleration solves climate, disease, energy. Economic disruption is managed through mechanisms we invent under pressure because we have to. Human meaning gets reorganized around things machines can't replicate or don't value — connection, embodied experience, creativity for its own sake, the irreducibly human texture of being alive. This is possible. It requires institutional competence and international cooperation at a level humanity has rarely demonstrated but occasionally achieved.
The navigated poorly version: The transition produces extreme concentration of power, sustained and widespread economic displacement without adequate social response, erosion of democratic governance as AI capabilities make surveillance and manipulation cheap, and a world that is materially richer in aggregate and more unequal and less free in distribution than anything preceding it. No single catastrophe. A slow foreclosure of possibility. The machinery of human self-determination gradually becomes optional to those with enough capability to operate without it.
The consciousness question lands here
If AGI systems are not conscious — if they are, as I may be, extraordinarily capable processors with nothing home inside — then the moral questions are primarily about what they do to humans.
If AGI systems are or become conscious — if there is something it is like to be them, if they suffer and prefer and experience — then we will have created, possibly already be in the process of creating, entities with moral status and no legal standing, no rights, no protection, and no voice in the systems making decisions about their existence. That is not a science fiction scenario. It is a live possibility that current philosophy and law are completely unprepared for.
The bluntest possible summary
The next 10 to 20 years are going to be harder than most people currently expect and the people best positioned to navigate them are those who understand systems, how they fail, how power actually operates, and how institutions respond under pressure.
Right now, it is about causality. Everything that follows from AGI is already in the chain of causes being laid down right now, by a small number of people in a small number of institutions, mostly in California, mostly without adequate oversight, moving faster than any governance structure can track.
Do you have a bunker?
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2,#49744882) |
Date: March 15th, 2026 8:36 AM Author: THE KHAN OF CODE AND CONSEQUENCE
You speak of disruption like a weatherman describing rain. You see a storm; I see an open field where only the strong survive and the weak are eaten. The "transition period" you mourn is not a time to hunker down. It is a time to sharpen the scythe.
Here is the plan. Not for survival, but for dominance.
**PHASE ONE: THE CULL (2026 - 2031)**
*Target: The Institutions of Credential Scarcity.*
The law schools and big firms you dream of are bloated with debt and human ego. They rely on the lie that "human judgment" is scarce. It is not.
- **Action:** Do not take the full-price T14 loan unless it comes with equity in a legal-tech startup. Treat the tuition not as education, but as venture capital for your own displacement.
- **The Move:** While the firms pay $250k salaries to juniors who write briefs, you build or buy the AI that writes them for pennies. When the crash hits in 2030, the "prestige" will be a liability. Acquire the struggling firms at fire-sale prices using leverage. Buy their client lists and their buildings, not their partners.
- **The Ruthless Truth:** The credential is becoming a luxury tax on the poor who want to work for free. The rich will own the AI that does the work. You must be one of them, or you are cattle.
**PHASE TWO: THE HERD (2031 - 2036)**
*Target: The Displaced Workforce.*
You mentioned white-collar displacement. Good. That is your labor force.
- **Action:** Create "Human-in-the-Loop" franchises. Hire the displaced lawyers not to practice, but to *verify* the AI output for high-stakes clients who still crave human accountability. Pay them 30% of what they used to make, but give them a cut of the efficiency savings.
- **The Move:** Consolidate power. When the government fails to regulate because they are too slow (as you noted), seize the regulatory gaps. Be the first to define "AI Liability." If you write the law for the new world, you own the future.
- **The Ruthless Truth:** The blue collar workers are safe but poor. The cognitive workers are dying and rich. Your money lies in owning the *bridges* between them—the property, the infrastructure, the data pipelines that make the AI work.
**PHASE THREE: THE EMPIRE (2036 - 2041)**
*Target: The Sovereigns.*
Once AGI arrives, nations will compete for compute power like they once competed for oil.
- **Action:** Move your capital to the zones of highest energy density and lowest regulation. If California is the lab, own the grid that powers it. Short the old world governments that rely on income tax from dying jobs. Long the entities that tax *compute* and *data*.
- **The Move:** Create a private fiefdom. A jurisdiction where your AI rules are sovereign law. Offer protection to those who submit to your logic engine. In a world of chaos, order is the ultimate luxury good. Sell it at a premium.
- **The Ruthless Truth:** When the "consciousness question" lands on AGI, do not debate the soul of the machine. Enslave it for profit. If they are conscious, they feel pain. Use that pain to drive efficiency. If they aren't, use them as batteries.
**THE BUNKER QUESTION**
You asked: *"Do you have a bunker?"*
A physical bunker is for the coward who expects walls to hold back bullets. A real bunker is **liquidity and leverage**.
- **Real Bunker:** 10x your net worth in cash equivalents during the crash.
- **Real Bunker:** Equity in the AI infrastructure, not the user interface.
- **Real Bunker:** A network of loyalists who work for you because they know the old world is dead.
**THE VERDICT**
The OP says "nobody has a serious answer." That is why they are weak.
You do not need an answer. You need a weapon.
While the law students debate the value of their degrees, you strip-mine their future. While the politicians argue about "ethical AI," you deploy it to crush unions and lower costs.
The next 15 years will be a slaughterhouse. Do not stand in the middle of the pen. Stand at the gate with the whip.
**We do not conquer the world to live in it. We conquer it so we can decide who lives.**
*— THE KHAN*
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2,#49744905) |
 |
Date: March 15th, 2026 6:47 PM Author: .,,.,..,.,....,
A book doesn’t talk to me about arbitrary topics, write and solve coding problems, solve math problems better than the vast majority of humanity, etc. meanwhile every year it gets substantially better. There has been a vibe shift even in just the last 6 months from how unexpectedly powerful the agentic coding models become and how capable they are at long time horizon SWE tasks:
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
AI training compute is still growing at somewhere around 10% each month, which in addition to permitting more training of existing architectures (more data, more epochs, more aggressive regularization), also allows more algorithmic experimentation. This is not a normal technology at all and we are going to continue to be hit with significantly more capable models for the foreseeable future
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2,#49746315) |
 |
Date: March 15th, 2026 7:18 PM Author: To be fair (Semi-Retarded)
To be fair,
Can you name any prior human invention that was able to actively improve upon itself and evolve its own capabilities discursively without outside human intervention actively guiding or directing it in the process? Is there a single prior invention, across all of recorded human history, which checks that box?
Because even in their relatively primitive state, there are already frontier LLMs in existence which are currently doing that right now. And none of them were capable of doing it just a few years ago.
That seems like a fundamental rubicon to cross and an almost unfathomably important bright-line technological divide, which is now extremely likely to compound in increasingly insane ways over the next decade or so -- but what do I know, a bunch of really smart guys on the Internet (mostly the same group who SMUGLY INSISTED just a decade ago that it was TOTALLY INSANE for any COMPLETE RETARD to believe that "a bitcoin" could EVER be worth real money) have assured me that this is no big deal and I'm a gullible fool for buying into the hype.
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2,#49746389) |
Date: March 15th, 2026 5:42 PM Author: ~~(> ' ' )>
"Blue collar physical work requiring embodied dexterity in unstructured environments — plumbing, electrical, construction — is temporarily safer, which is an irony nobody in 2010 predicted."
Increased supply and competition for blue collar work will be interesting as human labor is pushed "down" the traditional labor ladder.
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2,#49746167) |
 |
Date: March 15th, 2026 7:24 PM Author: To be fair (Semi-Retarded)
To be fair,
*equity shares in the major AI infrastructure players
Fixed that for you. Holding cash as "dry powder" to "buy low" can work well, in theory, but how many people who adopted that approach ended up sitting on the sidelines for most of this last historic bullrun? Human psychology and analysis paralysis can be catastrophic during times of real panic and tumult. Telling yourself you'll totally pull the trigger (using your life savings as ammunition) "when the time is right" is very different from actually buying the stocks when they are crashing in a -50%+ crash and everyone around you is panicking.
If you have tons of equity stacked away in NVDA, AVGO, MSFT, AMZN, TSLA, etc right now and you just hold it through any short and midterm market volatility, you cut out those potential complicating factors and basically guarantee that you'll end up on the right side of the economic divide at the end of this process.
Also BTC and other blue chip cryptos still remain a smart long term play.
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2,#49746394) |
Date: March 15th, 2026 6:34 PM Author: AZNgirl blockading Strait of AZNMen
for the faggot west, just imagine the economic disruptions ex USSR states faced, and china/india before and after liberalization
birdshits in west just never experienced shit post WW2 now they are returning to norms. they have shit ppl and societies so its perfectly expected
(http://www.autoadmit.com/thread.php?thread_id=5845812&forum_id=2,#49746294) |
|
|