The Problem With Tech's Latest “Something Big Is Happening” Manifesto
This piece first appeared at Forbes on February 13, 2026.
When Matt Shumer, co-founder and CEO of applied AI company OthersideAI, published his essay “Something Big Is Happening,” it spread fast. Within days, it was dissected and amplified across social media and the press. Screenshots ricocheted through group chats. The message was clear: brace yourself. AI is about to upend the labor market at a pace few expect. But is Shumer’s piece a sober assessment of technological change, or one more self-interested AI founder feeding the doom-and-hype machine that powers the industry? The viral response tells us something important. The public debate is shifting from whether AI matters to how disruptive it will be and who will bear the cost.
That shift mirrors what I argued in my 2026 predictions: AI is moving from boardrooms to kitchen tables. Concerns about automation and job displacement are no longer abstract. They are becoming electoral issues. Candidates are beginning to test language around AI and work. Governors are asking what reskilling looks like at scale. Parents are asking what their children should study.
Shumer taps directly into that anxiety. He sketches a near-term world where AI agents displace white-collar workers, startups scale with minimal staff, and productivity jumps sharply. His prescription is blunt: get ahead of it. Learn to use AI. Become indispensable. Invest wisely. Prepare for volatility.
Some of that advice is sound. We should learn how these systems work. Not in theory, but in practice. Understanding prompting, model limits and data quality is now basic literacy for professionals. Financial prudence also makes sense in any period of technological transition. So does rethinking education. Teaching students how to collaborate with AI tools will matter more than memorizing static content.
But Shumer’s essay is not just a warning. It reads at times like a sales pitch. He urges readers to subscribe to the most advanced AI tools. He implies that those with access to premium models will outpace those without. He frames paid AI subscriptions as a form of insurance against obsolescence. The underlying message is subtle but consistent: the solution to looming disruption is to spend more on AI.
That is where the argument begins to blur.
There is a long tradition in technology of crisis narratives that conveniently align with commercial incentives. The dot-com era had its manifestos. The crypto boom had its white papers. Today’s AI cycle has essays that mix insight with urgency and urgency with monetization.
To be clear, the technology is real. The progress is tangible. But the framing matters.
Shumer’s tone leans toward inevitability. If you do not move now, you will be left behind. If you are not paying for the best models, you are already at a disadvantage. The labor market shock is imminent and the only rational response is rapid adoption. That framing benefits AI companies and founders. It channels anxiety into subscriptions and funding rounds. It reinforces the idea that scaling faster and spending more on compute is the path forward.
It also risks oversimplifying a complex transition.
AI will reshape tasks. It will compress certain roles and expand others. It will increase productivity in some sectors and create friction in others. But large labor shifts rarely happen in a straight line. Regulation, corporate governance and social norms intervene. Enterprises move cautiously. Workers adapt in uneven ways.
The debate about whether machines can replace human intelligence is not new. What feels different today is scale and speed. Large language models do not just calculate. They generate text, code and images at near-zero marginal cost. They mimic conversation and pattern recognition. That creates the impression of agency, even when the systems remain statistical engines. We should be careful not to conflate fluency with autonomy.
Shumer is right that white-collar automation deserves serious attention. In my own writing, I have argued that AI’s integration into enterprise software will make job transformation visible in 2026 and beyond. Finance, marketing and software development will change. Politics will respond.
But responsible analysis requires separating three threads: technological capability, commercial incentive and social adaptation. Technological capability is advancing. Commercial incentive is pushing hard. Social adaptation will determine the outcome.
If every AI manifesto ends with a call to buy more AI, we should pause. The most important investments may not be subscriptions. They may be in governance, workforce training and institutional safeguards. They may be in public policy that balances innovation with resilience.
History shows we can build guardrails around powerful technologies. Industrialization brought labor laws. Aviation brought safety standards. The internet brought privacy frameworks, however imperfect. None of these emerged automatically. They were contested and constructed.
AI is here to stay. It will alter how we work, learn and compete. The opportunity is real. So are the risks. The choice is not between blind optimism and fatalism. It is between allowing the hype cycle to dictate our response or building the social structures that channel this technology toward human flourishing.
We have faced moments like this before. We adapted.