Thoughts on “AI 2027”

  1. Home page: https://ai-2027.com/

  2. Authors addressing some critiques, explaining “Why America Wins” (Ed.: Would that even matter much in an endgame between AI and humans? I doubt it; it rather seems like a bias.)

  3. A YouTube video summary: AI 2027: A Realistic Scenario of AI Takeover

My take:

  • It’s good and important to think about AI, the use/misuse/abuse of AI, and scenarios for why, how and when AI could take over power (and, obviously, how to prevent this).
  • I’m astonished that one can spot quite a few biases (e.g. the good old USA-vs.-China power struggle, good government vs. bad private companies, perceived technical or intellectual superiority) and seemingly simplistic assumptions (e.g. that further AI development is merely a matter of compute, ruling out fundamental theoretical and practical limitations on the path to AGI, or even some obvious potential geopolitical events and decisions, some of which have already manifested -> e.g. student visas, wars). Sure, the scenarios have to be largely based on known knowns, but they should account more for known unknowns and the possibility of unknown unknowns.
  • Is AGI achievable in the next 5 years? I doubt it, mainly due to the still-valid fundamental theoretical and practical constraints of the prevalent approach of reaching AGI via LLMs, like the symbol grounding problem: LLMs depend on sharply defined digital symbols (tokens), but the analogue world is exactly the opposite – there are no symbols, except those introduced by living beings, i.e. humans and animals (see the sketch after this list).
    Then again, I doubt this matters much for the described scenarios: thanks to widespread digitisation, the states they describe are probably achievable with mere control over the digital world. And for that, true (universal) AGI isn’t required; it’s sufficient to build an AI that is close enough to “digital AGI” to convince humans that digital AGI has been achieved (we are that dependent on the digital world already).
    This assumption is both frightening and reassuring: on the one hand, it makes far less sensational but equally dangerous scenarios of (partial) AI dominance more likely; on the other hand, it opens up a possible ‘escape route’ for humanity: we must “simply” ensure that the analogue world always retains the upper hand, so that we could exploit this in our favour in a final battle against AI if necessary.
  • That said, we should urgently make our analogue supply and communication networks more resilient – and, ironically, not via digitalisation, as we have done so far, but precisely through a deliberate, explicit renunciation of digitalisation.
    In my worst-case scenario, we would thus have to assume that an AI – even one lacking true/universal AGI – could take over the digital world (and with it many highly powerful and dangerous interfaces to the analogue world -> weapon control systems, bio-labs, the food chain, water supply systems, power grids, hardware/robot/clone factories, air traffic control, traffic lights, navigation systems, satellites, emergency networks, life-saving medical devices, subtle but decisive influence over political decisions and communication, etc.), but not a purely analogue world (thanks to the assumed lack of universal AGI).
  • Once true, universal AGI has been achieved, it’s probably “game over”, unless humanity can somehow convince the AGI of a lasting, sustainable win-win deal (though in the end, the fight for resources and power will be real).
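To make the token argument above concrete, here is a minimal sketch of the discrete-symbol bottleneck, assuming a toy bucketing “tokenizer” (all names are illustrative): a continuous, analogue signal is forced through a small symbol vocabulary, and everything between the bucket boundaries is simply lost.

```python
import math

# A toy "tokenizer": the analogue world offers a continuous signal,
# but an LLM only ever sees a finite sequence of discrete symbols.
VOCAB = {sym: i for i, sym in enumerate(["LOW", "MID", "HIGH"])}

def sample_analogue_signal(t: float) -> float:
    """Stand-in for the analogue world: a continuous, real-valued signal."""
    return math.sin(t)

def discretise(value: float) -> int:
    """Quantise a continuous value into one of a few sharply defined tokens.
    Everything between the bucket boundaries is lost -- the grounding gap."""
    if value < -0.33:
        return VOCAB["LOW"]
    if value < 0.33:
        return VOCAB["MID"]
    return VOCAB["HIGH"]

# The model never sees sin(t); it only sees token IDs.
tokens = [discretise(sample_analogue_signal(t / 10)) for t in range(20)]
print(tokens)
```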

NVIDIA developer workshop: “s1: Simple test-time scaling” paper

At the NVIDIA developer workshop days I attended last week, the following paper was highly recommended:

“s1: Simple test-time scaling” by Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès and Tatsunori Hashimoto
(PDF)

GitHub project: https://github.com/simplescaling/s1
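The paper’s core trick, “budget forcing”, is simple enough to sketch: to extend test-time compute, suppress the model’s end-of-thinking delimiter and append “Wait” so it keeps reasoning; to cap compute, inject the delimiter early. A minimal sketch, assuming a hypothetical generate(prompt, stop=...) completion function rather than the repo’s actual API:

```python
# Hedged sketch of s1-style budget forcing (not the repo's actual code).
# Assumes a hypothetical generate(prompt, stop=...) -> str completion function.

END_OF_THINKING = "</think>"  # delimiter assumed for illustration

def budget_forced_think(generate, prompt: str, min_waits: int = 2) -> str:
    """Force the model to keep reasoning: each time it tries to end its
    thinking, swallow the stop and append 'Wait' to extend test-time compute."""
    trace = prompt
    for _ in range(min_waits):
        chunk = generate(trace, stop=END_OF_THINKING)
        trace += chunk + "Wait"   # suppress the stop, nudge further reasoning
    # Finally let the model close its thinking block and answer.
    trace += generate(trace, stop=END_OF_THINKING) + END_OF_THINKING
    return trace
```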

Not directly related, but also presented: an interesting application of the NVIDIA RAPIDS Accelerator for Apache Spark: https://aws.amazon.com/blogs/industries/accelerating-fraud-detection-in-financial-services-with-rapids-accelerator-for-apache-spark-on-aws/
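For context, the RAPIDS Accelerator plugs into Spark via configuration rather than code changes; a minimal PySpark sketch (jar path and resource settings are placeholders, not the blog post’s exact setup):

```python
from pyspark.sql import SparkSession

# Minimal sketch: enabling the RAPIDS Accelerator for Apache Spark.
# Jar path and resource settings are placeholders for illustration.
spark = (
    SparkSession.builder
    .appName("fraud-detection-gpu")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  # RAPIDS plugin class
    .config("spark.rapids.sql.enabled", "true")
    .config("spark.jars", "/opt/sparkRapidsPlugin/rapids-4-spark_2.12.jar")
    .config("spark.executor.resource.gpu.amount", "1")
    .getOrCreate()
)

# Ordinary DataFrame code; supported operators are transparently run on the GPU.
df = spark.range(1_000_000).selectExpr("id", "id % 97 AS bucket")
df.groupBy("bucket").count().show(5)
```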

An open-source AI hedge fund team by @virattt

“The AI Hedge Fund is an educational model using AI agents to simulate trading decisions, emphasizing various investment strategies and analyses.”

Looks very interesting and inspiring, and @virattt seems to be a cool guy.

https://github.com/virattt/ai-hedge-fund/tree/main
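The agent pattern itself is straightforward; below is a hypothetical sketch of the idea (analyst agents emit signals, a portfolio manager aggregates them) – illustrative names only, not the repo’s actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the multi-agent idea behind the repo,
# not its actual classes or function names.

@dataclass
class Signal:
    agent: str
    action: str        # "buy" | "sell" | "hold"
    confidence: float  # 0.0 .. 1.0

def value_agent(ticker: str) -> Signal:
    # A real agent would prompt an LLM with fundamentals; stubbed here.
    return Signal("value", "buy", 0.7)

def sentiment_agent(ticker: str) -> Signal:
    return Signal("sentiment", "hold", 0.4)

def portfolio_manager(ticker: str, agents: list[Callable[[str], Signal]]) -> str:
    """Confidence-weighted vote over the agents' signals."""
    scores = {"buy": 0.0, "sell": 0.0, "hold": 0.0}
    for agent in agents:
        s = agent(ticker)
        scores[s.action] += s.confidence
    return max(scores, key=scores.get)

print(portfolio_manager("NVDA", [value_agent, sentiment_agent]))  # -> "buy"
```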

Caveat/Disclaimer: “Can LLM-based Financial Investing Strategies Outperform the Market in Long Run?” by Weixian Waylon Li, Hyeonjun Kim, Mihai Cucuringu, Tiejun Ma – 11 May 2025 (PDF)


Paper by DeepSeek: “Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures”

“This paper presents an in-depth analysis of the DeepSeek-V3/R1 model architecture and its AI infrastructure, highlighting key innovations such as Multi-head Latent Attention (MLA) for enhanced memory efficiency, Mixture of Experts (MoE) architectures for optimized computation-communication trade-offs, FP8 mixed-precision training to unlock the full potential of hardware capabilities, and a Multi-Plane Network Topology to minimize cluster-level network overhead.”

Paper: https://www.alphaxiv.org/abs/2505.09343
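Of the techniques listed, MoE routing is the easiest to illustrate: a learned gate selects the top-k experts per token, so only a fraction of the model’s parameters is active per forward pass. A toy NumPy sketch (shapes and k are illustrative, not DeepSeek-V3’s actual configuration):

```python
import numpy as np

# Toy top-k MoE routing sketch (illustrative, not DeepSeek-V3's actual config).
rng = np.random.default_rng(0)
d_model, n_experts, k = 16, 8, 2

experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a token to its top-k experts, weight outputs by gate scores."""
    logits = x @ gate_w                      # one score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (16,) -- only 2 of 8 experts were computed
```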