For this month's "Closing Thought," our team chose to highlight the great Rabindranath Tagore, who epitomizes our philosophy of Service.
As part of our mission for our Visions Property, we curated the following: a look at companies that had the vision but did not see "beyond the now"; a view on AI, with thoughts courtesy of Jacobin Magazine; a piece on social media addiction from the team at the Financial Times; and a "Vision of the Possible," courtesy of Goldman Sachs:

“Celebrate Me” by IngaRose, a fully AI-generated R&B artist with 228K Instagram followers and a fake backstory, topped both the U.S. and global charts. Olivia Rodrigo knocked her off hours later, but the precedent is set.
This official prompting guide for Nano Banana covers camera control, lighting setups, film stock references, and multi-image composition.
After years of riding shotgun on the robotaxi wave, Uber is getting behind the wheel. According to the Financial Times, the company has committed over $10B to partners including Baidu, Rivian, and Lucid. It's a sharp pivot from the gig-economy model that built Uber's empire, driven by fear that robotaxi rivals could disrupt the business it spent a decade building.
The AI revolution could usher in a new age of stagnation
Critics of generative AI have for the most part been obsessed with a single question: What if the several-hundred-billion-dollar bet on the future of the world economy fails? This isn't just a concern about the benefits of the technology. Bottlenecks exist at seemingly every stage: energy supply is severely constrained by regional war in West Asia; information is limited by copyright laws; fewer than half of planned data centers are actually being built; and chips, too, may be in short supply. Meanwhile, the usefulness of actually existing AI has proved hard to calculate. A paper by Nobel Prize–winning economist Daron Acemoglu calculated that the new technology has had little effect on productivity and is unlikely to do so in the future. For day-to-day users who employ large language models at work, the experience is often one of picking through inaccuracies and confusions caused by machine “hallucinations.” Given the hype surrounding AI, it is hard to avoid the feeling that the whole US economy is balancing rather precariously on a house of cards.

For enthusiasts, AI promises to usher in something that socialists have long dreamed of: a world without scarcity in which human beings can finally move from the realm of necessity to the realm of freedom. While cynicism is an understandable response to this valuation-boosting hype, it shouldn't prevent us from taking the possibility seriously. What if AI actually works?
Smoking among the rich has declined dramatically — and digital dependency could follow a similar pattern

By Qianer Liu, Jing Yang and Juro Osawa

How World Models Could Take AI Beyond Text Prediction
By George Lee, Co-Head, and Dan Keyserling, Managing Director, Goldman Sachs Global Institute
After a decade defined by systems that recognize patterns and predict text, the frontier of artificial intelligence (AI) is shifting toward models that understand how the world works. The next advances in AI may come less from bigger models and more from systems that can simulate reality, test actions before taking them, and reason about consequences. This new category of models, known as world models, represents a quiet but decisive change in how machines become intelligent.

For the past few years, AI has been defined by large language models (LLMs). Trained on vast swathes of text, they learned to predict the next word with uncanny accuracy. From that simple objective emerged systems that write, translate, code, and converse with startling fluency. That achievement is real and transformative, but it also reveals a limitation of the current generation of AI models. LLMs are powerful at completing patterns, but they lack the internal sense of the world those patterns describe. They respond well to prompts but struggle to reason through consequences or act reliably in environments where mistakes carry costs.

This limitation has become clearer as these systems have been pushed beyond text. When they're asked to control robots, manage entire supply chains, or coordinate complex enterprise workflows, prediction alone proves insufficient. Intelligence, in these settings, requires more than correlation. It requires an internal model of how the world works. In the full white paper, Lee and Keyserling outline how world models are already being used and ask whether they could unlock another wave of AI spending.
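The "simulate reality, test actions before taking them" loop described above can be sketched in a few lines. Everything below is illustrative and assumed, not the authors' method or any vendor's API: a toy one-dimensional world, a stand-in for a learned transition model, and a short-horizon planner that lets the agent imagine futures and score their consequences before committing to a real action.

```python
# Minimal sketch of planning with a world model (all names here are
# illustrative assumptions, not part of any real system).

def world_model(state: int, action: int) -> int:
    """Stand-in for a learned transition model: predicts the next state."""
    return state + action  # toy dynamics: position shifts by the action taken

def reward(state: int, goal: int) -> int:
    """Negative distance to the goal; higher is better."""
    return -abs(goal - state)

def plan(state: int, goal: int, actions=(-1, 0, 1), depth: int = 3) -> int:
    """Pick the action whose best imagined rollout scores highest.

    Instead of acting greedily, the agent simulates short futures with the
    world model and sums the rewards along each imagined trajectory,
    reasoning about consequences before acting in the real environment.
    """
    def rollout_value(s: int, d: int) -> int:
        v = reward(s, goal)
        if d > 0:
            v += max(rollout_value(world_model(s, a), d - 1) for a in actions)
        return v

    return max(actions, key=lambda a: rollout_value(world_model(state, a), depth - 1))

state, goal = 0, 5
trajectory = [state]
for _ in range(6):
    state = world_model(state, plan(state, goal))
    trajectory.append(state)
print(trajectory)  # the agent walks to the goal, then stays: [0, 1, 2, 3, 4, 5, 5]
```

The key design point is that `world_model` is queried many times in imagination but only once per real step; in an actual world model that transition function is learned from video, sensor, or simulation data rather than hand-written.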