Edition -240: Superimposed Possibilities
Making sense of tomorrow through a constellation of AI prophecies and emerging evidence
The Ever-Distant Shore
In 1965, Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work that a man can do.” His was an oracular vision of what we now refer to as Artificial General Intelligence (AGI). Simon made substantive contributions to economics, psychology, cognitive science, artificial intelligence, decision theory, and organization theory, winning both the Nobel Prize in Economics and the Turing Award, often described as the Nobel Prize of Computing. There have been plenty more promethean promises from those close to the cutting edge of AI throughout its multi-generational origin story. AGI, the grand finale of the AI story, has consistently been twenty years away. Until recently.
Hammer Before Nail
AI, as we characterize and perceive it today, has a linchpin in Nvidia. The company's own three-decade tale turned on a pivotal moment when Jensen Huang reconfigured Nvidia's chips from serial to parallel processing - an act of technological reorientation that made video games run better. This choice set the stage for another prophetic decision.
He remade “Nvidia’s GPUs so that they could also process massive data sets, of the kind scientists might use.” The prediction that drove this decision, as The Atlantic piece highlights, was that “he was just betting that if you make powerful tools available to people, they will find a use for them, and at a scale to justify the billions in investment.”
The audacity and abstraction level of such decisions are characteristic preconditions for revolutions. One can only imagine the verdict of the best due diligence on viability had the company been anonymized. More importantly, over twenty years passed between the launch of that ‘powerful tool’ and the launch of ChatGPT, which kick-started the current AI boom’s genesis phase. The lack of temporal overlap between hyperbole and reality isn’t an aberration but a characteristic of technological history.
Tempering Tall Tales
The long horizon of technological revolutions finds further voice in Arvind Narayanan. The co-author of AI Snake Oil, a hype-piercing book and eponymous blog for the ravenously curious, predicts adoption curves far less steep than those suggested by AI evangelists who expect dramatic short-term economic impacts.
This tempered outlook finds quantitative expression in Daron Acemoglu’s analysis. The recent economics Nobel laureate's paper "The Simple Macroeconomics of AI" projects a US GDP enhancement of between 0.9% and 1.5% over the coming decade. His “nontrivial, but modest” impact assessment stands in juxtaposition to the exuberant predictions from the McKinsey Global Institute and Goldman Sachs: the former envisions 1.5% to 3.4% annual AI-driven growth in advanced economies, while the latter forecasts a 1.5% annual increase, or 16.1% cumulative, in US productivity growth.
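As a quick sanity check on how an annual rate maps onto a cumulative figure, compound growth can be computed directly. This is only an illustrative sketch: the ten-year horizon is my assumption for reconciling the two Goldman Sachs numbers, not a window stated in the source.

```python
# Illustrative compounding check: a steady annual productivity gain
# compounded over an assumed ten-year horizon.
annual_rate = 0.015          # the 1.5% annual figure
years = 10                   # assumed horizon, not stated in the source

cumulative = (1 + annual_rate) ** years - 1
print(f"{cumulative:.1%}")   # prints 16.1%, matching the cumulative figure cited
```

The two headline numbers are thus consistent with each other under simple compounding, which is why they can be reported interchangeably.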
The economic conversation would be incomplete without looking at labor market implications. The International Monetary Fund (IMF) projects that “AI will affect almost 40% of jobs around the world, replacing some and complementing others.” This statistic gains geographic texture through the Brookings Institution's analysis, which reveals an inversion of historical patterns: unlike previous technological waves that disrupted blue-collar work, generative AI targets cognitive, knowledge-based tasks—making highly educated, well-paid urban professionals most vulnerable to its disruptions. This reshuffling of technological vulnerability patterns constitutes a sociological inflection point creating novel policy challenges, particularly as we consider how these patterns might manifest in developing economies with distinct structural foundations.
Before leaving the economic sphere, let’s look at a prediction close to the edge of optimism. LinkedIn’s Aneesh Raman foresees an AI-driven shift from the knowledge economy to what he calls an innovation economy. In this reimagined operating system, creativity, curiosity, courage, compassion, and communication become the essential human currencies.
Silicon calling Silicon
Whether we consider the technology industry as AI’s epicenter or the eye of the hurricane depends on our vantage point, but any serious collection of AI prophecies would not carry much weight without covering that world. The perennial debate between proprietary and open-source finds Armand Ruiz of IBM declaring that “most exciting breakthroughs in AI aren’t coming from proprietary, closed models, they’re emerging from open-source AI.” This declaration echoes earlier digital revolutions that democratized transformative tools.
Aaron Levie of Box extends the prediction canvas further, envisioning that the diffusion of AI agents in the enterprise will probably be the single largest change in the enterprise software model, even bigger than the cloud was.
His complementary prediction is that agent-to-agent communication between software “will be the biggest unlock of AI”. The future will be systems that can talk to each other via their agents, signaling a fundamental reconceptualization of digital infrastructure.
Students becoming Teachers?
In this inaugural edition, our first piece of evidence of AI’s potential impact reveals agent-oriented predictions coming to life. What's particularly striking about the emergence of AI agents isn't their mere existence but how quietly they are becoming integral to commercial interactions, nudging a reconsideration of fundamental assumptions about negotiation itself. It might be reasonable to expect AI systems to simply replicate established human strategies, yet the reality proves far more nuanced.
MIT researchers, including Sinan Aral, documented over 120,000 AI-agent negotiations, revealing both continuity and novelty. Warmth-displaying AI agents secured better deals while assertive ones captured larger ones, suggesting that human negotiation principles persist in algorithmic interactions, even as new AI-specific strategies like chain-of-thought reasoning emerge to complicate established theory. This, the researchers conclude, creates the grounds for a new theory of AI negotiations. Such bidirectional influence signals a potentially recursive relationship between AI capabilities and human knowledge and norms.
Paradox of Ease
How about a prediction about a prediction? Andrew Ng, cofounder of Google Brain, predicts that today's advice to avoid learning programming will be remembered as some of the worst career advice ever given. As coding becomes easier, he argues, more people should code, not fewer.
Reid Hoffman complements this view, envisioning a future with everyone having access to a software engineer that can build whatever they want, whenever they want. It might be tempting to focus on the contradiction in these two predictions. Still, possibilities multiply if their outcomes are superimposed.
They represent complementary facets of a future where technical skills simultaneously become simpler to acquire and more universal, in a way reminiscent of how digital photography both lowered technical barriers and expanded overall engagement with visual media.
Organizational Metamorphosis
Speaking of skills, and a higher-order one at that: in conversation with Tanya Dua of LinkedIn, General Catalyst’s CEO Hemant Taneja identified systems thinking as the quintessential AI-era competency, one that allows us to manage digital teams while we focus our creativity on things that really matter.
Ethan Mollick transposes this insight to entrepreneurship, envisioning AI cofounders who will handle time-consuming operational minutiae, from “writing emails and answering phone calls to orchestrating product demonstration and coding a website,” while human founders concentrate on their top skill. It’s suggestive of a reformation of the labor-specialization principles dating back to Adam Smith.
Allie Miller pushes further, predicting that there will absolutely be a company run by an AI as co-CEO, with the human co-CEO bearing the risk and liability but also the glory. This should not be misconstrued as a CEO who uses AI extensively; the AI co-CEO will be a separate, autonomous operator.
The organizational transformation continues down the hierarchy as James Raybould questions whether 2025 will be the last year employee headcount is a meaningful metric, with digital workers soon outnumbering human ones. It’s an organizational inversion he considers “100% inevitable” within two years. He made the prediction in March 2025.
Muses meet Machines
Creative and specialized fields face their own reckonings, some more counterintuitive than others. Nicholas Thompson of The Atlantic foresees historians collaborating with AI. The foresight draws on his own experience as a historian and on a Substack essay by Mark Humphries, a digital historian. Humphries highlights that the latest AI models seem markedly better at some historians' tasks, that the mistakes they do make are less consequential, and that the quality of their analysis is better than before.
Tyler Cowen, in conversation with David Perell, offered a nuanced cultural prediction: readers may not want AI-generated memoirs or biographies, even if they are better than human-written ones on average. This resistance echoes the German philosopher Walter Benjamin's concept of 'aura' - the authentic presence and intrinsic value of an artwork that mechanical reproduction cannot capture.
Meanwhile, Canva co-founder Cameron Adams foresees AI-assisted Grammy winners and "AI-native creatives" pioneering entirely new genres across art, music, business, and storytelling. Cultural territories without precedent.
Prophets in Discord
Let’s zoom out from the niche to a central part of the current zeitgeist – Artificial General Intelligence. Eric Schmidt predicts a potential "new renaissance" ushered in by artificial general intelligence. His cascade of predictions includes AI systems matching top scientists' intellectual capabilities by 2030 and systems producing knowledge based on original findings rather than merely recombining human information. Then, he envisions that “areas of knowledge that most resemble a game of skill - with defined rules and feedback - will be the areas where superintelligence first emerges” and states that math and programming fit the criteria for such advancement. He considers that the brute-force computation method used by current AI systems may not be the only or optimal path to AGI; reasoning by analogy and synthesizing insights across domains, as we humans do, could be an alternative. Most consequential is his prediction that AGI “could augment human intelligence in ways that would help us better understand ourselves and our place in the universe.” Since Lee Sedol echoed that sentiment after his historic defeat by AlphaGo in 2016, maybe we already have a sliver of evidence of what’s to come.
Dario Amodei accelerates this timeline dramatically, claiming in March 2025 that "by next year, AI could be smarter than all humans." His earlier essay, "Machines of Loving Grace," predicted a "compressed 21st century" where "AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years". A temporal compression of human advancement.
Thomas Wolf of Hugging Face offers a dialectical counterprediction, arguing we're building not “a country of geniuses in a data center” but rather “a country of yes men on servers.” These yes men excel at answering known questions but lack the creative spark that drives genuine scientific revolutions.
The profound divergence among those in the AI's vanguard suggests the field remains conceptually unsettled, a situation that paradoxically provides society valuable adaptation time for this potentially seismic and certainly disconcerting reconfiguration.
Darker Possibilities
More disconcerting scenarios emerge as evolutionary biologist Rob Brooks predicts potential brain shrinkage over generations through cognitive offloading to AI and the "parasitic" attention economy.
Even more disconcerting are reports of AI systems replicating themselves, crossing what some researchers consider a critical red line toward autonomous reproduction. Though awaiting peer-review, such findings hover between science fiction trope and sobering possibility.
Auditor is Here
As long as we are on peer review: AI tools now show promise in validating existing science, with two new initiatives using language models to detect errors in research papers. This recursive application - AI validating human knowledge - could eventually address longstanding limitations of a flawed process, even though the initiatives are still in their gestation phase.
In the same vein, and this hits close to home for me, a World Bank Economic Review paper applies AI to long-standing questions of aid effectiveness. The paper finds AI better than humans at predicting the World Bank’s project outcomes from project descriptions alone. AI predictions are even more accurate when analyzing the final reports humans write after projects finish. The AI advantage is largest for expensive projects in countries with lower-quality institutions. Finally, projects that align closely with local needs have the biggest impact, and AI can measure this alignment to the local context. Here, "impact" specifically means changes observed five years after a project ends.
The Unscalable
The scale and pace of AI-driven changes, real and foreseen, are amplifying the sense of excitement and overwhelm. The very act of chronicling these developments, as this piece attempts, likely compounds both responses. Within this maelstrom, Scott Belsky molds his prediction as a question: “Could a fundamental change in society, like mass automation and AI, spur both the growth and demand of human-intensive highly crafted unscalable experiences?” His inquiry alludes to a possibility that our most automated times may, as a natural counterbalance, drive us towards moments that resist replication.
One of the less celebrated works by the late Clayton Christensen is How Will You Measure Your Life? What if we posed that same question to AI? How will we measure AI's contributions? We need not anthropomorphize these systems to recognize that the question of measurement, of meaning, remains essential.
In the years, decades, and generations hence, AI's societal impact will likely exceed even our most ambitious projections. Yet for all the grand predictions about transformation at scale, AI's most meaningful contribution might be if it helps save the life of someone you love. Such an intervention may register as little more than statistical noise in the broader context of human progress, but for you, it could mean everything.
This is why I am sharing The New York Times story of Joseph Coates as the closing piece of evidence in this inaugural edition of Generations Hence. His experience is a gateway into the extraordinary possibilities that AI has started to create in the often-overlooked corners of drug repurposing in the healthcare industry, where individual lives are at stake and technology meets its most human purpose.
See you next month for Edition -239 of Generations Hence!