- Home page: https://ai-2027.com/
- The authors address some critiques, explaining “Why America Wins” (Ed.: Would that even matter much in an endgame between AI and humans? I doubt it; it reads more like a bias.)
- A YouTube video summary: “AI 2027: A Realistic Scenario of AI Takeover”
My take:
- It’s good and important to think about AI, its use/misuse/abuse, and scenarios of why, how and when AI could take over power (and, obviously, how to prevent this).
- I’m astonished that one can spot quite a few biases (e.g. the good old USA vs. China power struggle, good government vs. bad private companies, perceived technical or intellectual superiority) and seemingly simplistic assumptions (e.g. that further AI development is merely a matter of compute, which rules out fundamental theoretical and practical limitations on the path to AGI, or even some obvious potential geopolitical events and decisions, some of which have already manifested -> e.g. student visas, wars). Sure, the scenarios have to be largely based on known knowns, but they should account more for known unknowns and the possibility of unknown unknowns.
- Is AGI achievable in the next 5 years? I doubt it, mainly due to still-valid fundamental theoretical and practical constraints of the prevalent approach to achieving AGI via LLMs, such as the symbol grounding problem. LLMs depend on symbols (tokens), which are sharply, digitally defined; the analogue world is exactly the opposite: it contains no symbols, except those introduced by living beings, i.e. humans and animals (see the toy sketch at the end of this note).
Then again, I doubt this matters much here: given widespread digitisation, the end states these scenarios describe are probably achievable with mere control over the digital world. For that, true (universal) AGI isn’t required; it suffices to build an AI close enough to “digital AGI” to convince humans that digital AGI has been achieved (we’re that dependent on the digital world already).
This assumption is both frightening and reassuring: on the one hand, it makes far less sensational but equally dangerous scenarios of (partial) AI dominance more likely; on the other hand, it opens up a possible ‘escape route’ for humanity: we must “simply” ensure that the analogue world always retains the upper hand, so that we could exploit this in our favour in a final battle against AI if necessary. - That said, we should urgently make our analogue supply and communication networks more resilient, ironically not via digitalisation, as we have done so far, but precisely through its deliberate, explicit renunciation.
In my worst-case scenario, we would thus have to assume that an AI, even one lacking true/universal AGI, could take over the digital world (and therefore also many highly powerful and dangerous interfaces to the analogue world -> weapon control systems, bio-labs, food chains, water supply systems, power systems, hardware/robot/clone factories, air traffic control, traffic lights, navigation systems, satellites, emergency networks, life-saving medical devices, subtle but decisive influence over political decisions and communication, etc.), but not a purely analogue world (thanks to the assumed lack of universal AGI). - Once true universal AGI has been achieved, it’s probably “game over”, unless humanity could somehow convince the AGI of a lasting, sustainable win-win deal (though in the end, the fight for resources and power will be real).
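To make the discrete-vs-analogue contrast above concrete, here is a minimal Python sketch: a toy word-level tokenizer with an invented vocabulary (not a real BPE implementation, and not how any particular LLM tokenizes), just to show that LLM inputs are forced onto a finite set of sharply defined symbols, while analogue quantities have no such boundaries.

```python
# Toy sketch (hypothetical vocabulary, not a real LLM tokenizer):
# LLMs operate on a finite set of sharply defined, discrete token IDs.

VOCAB = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}  # invented toy vocabulary

def tokenize(text: str) -> list[int]:
    """Map each word to a discrete integer ID; unknown words collapse to <unk>."""
    return [VOCAB.get(word, VOCAB["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat"))  # [0, 1, 2] -- every input lands on a symbol
print(tokenize("The dog sat"))  # [0, 3, 2] -- "dog" has no symbol of its own

# An analogue quantity, by contrast, is continuous: any digital reading of it
# (here a float) is already a quantised approximation, not the thing itself.
temperature_celsius = 21.374921  # the physical value has no sharp cut-off
```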