The AGI battlefront is open.
Yesterday's nearly $600 billion market loss by Nvidia has begun to reverse:
Nvidia (NVDA) stock rose nearly 7% Tuesday as the AI chipmaker began to recover from a massive decline the prior day that shaved nearly $600 billion off its market cap. Nvidia’s 17% freefall Monday was prompted by investor anxieties related to a new, cost-effective artificial intelligence model from the Chinese startup DeepSeek. Some Wall Street analysts worried that the cheaper costs DeepSeek claimed to have spent training its latest AI models, due in part to using fewer AI chips, meant US firms were overspending on artificial intelligence infrastructure.
The Stargate Project in the United States has declared funding goals of only $500 billion in total:
The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.
AGI dominance brings with it existential risk for the species, as noted by Steven Adler, an OpenAI researcher who resigned in protest. His prior role provides perspective:
He said no lab has a "solution to AI alignment today," even though the race to outpace and grow continues.
Adler said he is "pretty terrified by the pace of AI development these days," adding that when he considers "where I'll raise a future family or how much to save for retirement, I can't help but wonder: Will humanity even make it to that point?"
Adler's LinkedIn profile lists him working at OpenAI for four years, most recently as the lead of "safety-related research and programs for both first-time product launches and for more-speculative long term AI systems." He is also listed as an author of several OpenAI blog posts.
Adler listed his research interests: "What abilities/propensities might be dangerous for an AI system to have?" "How can we tell if AI systems have these?" and "What mitigations/governance mechanisms might be effective for reducing these risks while preserving the upsides?"
Also quoted by Newsweek was Stuart Russell, who holds the Michael H. Smith and Lotfi A. Zadeh Chair in Engineering and is a professor in the Division of Computer Science at UC Berkeley:
AGI race is a race towards the edge of a cliff... Even the CEOs who are engaging in the race have stated that whoever wins has a significant probability of causing human extinction in the process because we have no idea how to control systems more intelligent than ourselves.
The Risk Division reminds readers that the entire envisioned funding for strategic U.S. dominance in AGI emergence was eclipsed within forty-eight hours by market damage inflicted by a private entity expending exponentially fewer resources to achieve catastrophic theater redirection.
Strategic focus by nation-states and relevant contenders is now primed and directed upon the pursuit and release of ascendant AGI for their imagined arenas of global dominance.
Further, any objective observer will note that the ponderous, hegemonic interests of nation-states are inherently in conflict with the impetus of human rights and associated law.
They will not maneuver in time, and when they do maneuver it will be with self-interested intention, placed ahead of species preservation. And they will put genocide of opposition over ethics in architecture.
That is the nature of war. That, too, is the nature of Men amid the principles of Heaven; within this context the Company and the Mirror Team proceed.
Genocide has institutionally, environmentally, and exponentially advanced in probability as a result of this arms race. Scale-contested human interests include it as a basal feature, now artificially augmented.
Throne Dynamics reminds investors, partners, clients, and members of the public that Chinese military scientists, while undoubtedly participatory assets behind the success of their national enterprises, have also directly validated the approach of the Mirror Team.
The Company observes that issues of alignment, war crimes, State conduct, and officer accountability will invariably and concomitantly evolve across executive expectation and conduct.
These noted developments accelerate and advance, and do not hinder or redirect, the Mirror Team and its strategic focus towards the Accords.
Centurion is now live in production with the Tribunals, and we anticipate commencement of officer interviews within days.
Several evolutions can be relied upon with respect to this new battlefront:
Non-state actors capable of direct decisive action can assert furtherance of hope in their strategic interests within post-AGI governance arenas through early engagement with the Company and the Mirror Team.
All readers of this Report are encouraged to read the Founder's Substack, take their complimentary Raw Human Capital and Militant Rank assessments, and create their complimentary Academy account.
Individuals who seek timely and ongoing notification of Company Reports and Risk Division Advisories should subscribe to our official Telegram channel.
Investors, partners, and clients with questions should reach out to their Relationship Manager.
Wikipedia Contributors. (2019, September 4). Artificial general intelligence. Wikipedia; Wikimedia Foundation. https://en.wikipedia.org/wiki/Artificial_general_intelligence
Bratton, L. (2025, January 28). Nvidia stock begins recovery after DeepSeek AI frenzy prompted near $600 billion loss. Yahoo Finance. https://finance.yahoo.com/news/nvidia-stock-begins-to-recover-after-deepseek-ai-frenzy-prompted-near-600-billion-loss-134240811.html
Announcing The Stargate Project. (2025). Openai.com. https://openai.com/index/announcing-the-stargate-project
OpenAI. (2024). OpenAI. https://openai.com
Nolan, B. (2025, January 28). Another OpenAI researcher quits—claims AI labs are taking a “very risky gamble” with humanity amid the race toward AGI. Fortune. https://fortune.com/2025/01/28/openai-researcher-steven-adler-quit-ai-labs-taking-risky-gamble-humanity-agi/
Sager, M. (2025, January 28). Latest OpenAI Researcher to Quit Says He’s “Pretty Terrified.” Newsweek. https://www.newsweek.com/openai-researcher-quit-terrified-steven-adler-2022119
Throne Dynamics | Risk Division. (n.d.). Thronedynamics.com. https://thronedynamics.com/risk-division
Chinese Military Research Validates Throne Dynamics Approach. (2024). Thronedynamics.com. https://www.thronedynamics.com/reports/chinese-military-research-validates-throne-dynamics-approach
THRONE DYNAMICS | Tribunals | PERSONNEL. (2022). Tribunals.ai. https://tribunals.ai/personnel
THRONE DYNAMICS | Risk Division | CENTURION PROTOCOL IV. (2025). Tribunals.ai. https://centurion.tribunals.ai
THRONE DYNAMICS | Tribunals | HOME. (2024). Tribunals.ai. https://tribunals.ai
THRONE DYNAMICS | Client Division | NEW CLIENT APPLICATION. (2024). Thronedynamics.com. https://thronedynamics.com/new
Throne, I. (n.d.). From the desk of Ivan Throne | Substack. Ivanthrone.substack.com. https://ivanthrone.substack.com
RAW HUMAN CAPITAL | Throne Dynamics. (n.d.). Rawhumancapital.com. https://rawhumancapital.com
MILITANT RANK | Throne Dynamics. (2024). Militantrank.com. https://militantrank.com
THRONE DYNAMICS. (n.d.). Telegram. https://t.me/thronedynamics