Friends - Here's the first of three posts on the bipartisan U.S.-China Economic and Security Review Commission's top recommendations (apologies for the multiple posts). The TL;DR: It's time the U.S. Government sprints to an AGI capability. It's a strategic imperative.

🌐 AGI: A Strategic Imperative 🌐

Whether it arrives tomorrow or years from now, AGI is coming. Its definition is contested, but it broadly refers to AI systems that match or exceed human capabilities across all cognitive domains. AGI would surpass the sharpest human minds at every task—from writing novels and proving theorems to strategic planning and scientific discovery.

The country that first develops AGI capabilities will almost certainly secure strategic autonomy, gaining a competitive edge in research, innovation, and economic growth, while reshaping the global balance of power. AGI could analyze vast data sets and provide insights critical to national security and defense. That's why the Commission's number one recommendation is a large-scale, Manhattan Project-like moonshot sprint to achieve AGI.

Here's the bipartisan recommendation:

1️⃣ Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would usurp the sharpest human minds at every task.

Among the specific actions the Commission recommends for Congress:

➡ Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership; and

➡ Direct the U.S. secretary of defense to provide a Defense Priorities and Allocations System "DX Rating" to items in the artificial intelligence ecosystem to ensure this project receives national priority.

#USPolicy #ArtificialIntelligence #AGI #Biotechnology #TechPolicy

Thank you again to all of my commissioner and staff colleagues for their collegiality and hard work, including Robin Cleveland, PhD, LPC, Reva Price, Aaron Friedberg, Kim Glas, Carte Goodwin, Jacob Helberg, Leland Miller, Randy Schriver, Cliff Sims, Jonathan N. Stivers, Michael Wessel, Mike Castellano, Jameson Cunningham, and many others.

Here's the link for folks who want to check out the FULL report: https://lnkd.in/erqNVxxF
Here's where to find the Executive Summary: https://lnkd.in/eWwCCd5A
Here's where to find the Recommendations: https://lnkd.in/epykgGSC
Mike Kuiken’s Post
-
🚨 The pre-AGI moment has arrived

2024 will be remembered as the year the world changed forever. With o3 achieving 85% on the ARC-AGI benchmark, matching human performance across diverse tasks, we've crossed a threshold many thought was decades away. This isn't just a technological milestone—it's game changing.

1️⃣ The Pre-AGI Era: What Got Us Here

For years, we lived in the narrow AI era, where systems were powerful but specialized—able to perform specific tasks like image recognition or translation but lacking the ability to generalize. This all changed with the rise of advanced large language models (LLMs) and now, o3. The ARC-AGI test measures broad cognitive ability across diverse domains. Scoring 85%, o3 demonstrates human-like reasoning capabilities. While some argue this doesn't meet the full definition of AGI, it signals an undeniable leap forward. AGI—the ability to learn and reason like a human—is no longer theoretical. It's here.

Still, current AI models lack the general adaptability, logical consistency, and causal reasoning required to fully match human intelligence. Today's transformer-based architectures limit their ability to operate with robustness, adaptability, and foresight. For AGI to become a reality, we need breakthroughs in architectures, alignment strategies, and safety protocols.

2️⃣ Where We're Going: The Age of AGI

The implications of AGI touch every aspect of society:
• Jobs: As AGI takes over tasks across industries, we face hard questions. Will we transition to Universal Basic Income or face economic upheaval?
• Society: Freed from repetitive labor, will humans find new meaning—or struggle to adapt to a rapidly changing world?
• Innovation: AGI will supercharge breakthroughs in healthcare, sustainability, and science, bringing solutions to problems once thought unsolvable.
• Global Power Dynamics: Decentralized access to AGI could empower many but also increase risks of misuse by bad actors. Nations must collaborate to create governance structures that prevent catastrophic outcomes.

3️⃣ The Implications for All of Us

This may be the most transformative moment in human history. But great power comes with great responsibility:
• Alignment Is Key: AGI must reflect human values and goals. Misaligned systems could destabilize economies—or worse.
• Ownership and Access: Who controls AGI? Will it lead to equitable progress or deepen global inequalities?
• Governance of Compute: As AI evolves, controlling physical data centers may become as important as controlling capital cities. Regulation will determine how AI is used—and who benefits.

The Big Question: Are We Ready?

The arrival of AGI changes everything. 2025 promises to be wild. The question is, can we navigate this responsibly? The choices we make now will shape whether this is a golden age of abundance—or a dark chapter of instability.

What's your perspective? #AGI #ArtificialIntelligence #TheFutureIsNow #Innovation #TechLeadership
To view or add a comment, sign in
-
Is Leopold Aschenbrenner the boy crying wolf – or should we take seriously his warnings that OpenAI, Anthropic, Google, Meta, Microsoft, and other companies pouring billions of dollars into the development of AGI are actually developing "the most powerful weapon mankind has ever created"?

Back in 2017 I had the privilege to spend 3.5 weeks at the Finnish National Defence Course (Maanpuolustuskurssi, #MPK223) – an extremely intense crash course on all the threats Finnish society may face, and on all the ways we're trying to prevent and mitigate those risks as a nation. I was there as the representative of the culture sector – the other participants were in leading positions in different sectors of society.

Before those 3.5 weeks I would have thought (like many other people seem to think) that Leopold is exaggerating when he writes in his 165-page manifesto "SITUATIONAL AWARENESS: The Decade Ahead" that:

"The algorithmic secrets we are developing, right now, are literally the nation's most important national defense secrets—the secrets that will be at the foundation of the US and her allies' economic and military predominance by the end of the decade, the secrets that will determine whether we have the requisite lead to get AI safety right, the secrets that will determine the outcome of WWIII, the secrets that will determine the future of the free world."

I was invited to the National Defence Course because I created and wrote, in 2016–2018, two seasons of the political spy thriller "Shadow Lines" (Nyrkki) with my mom, a best-selling author, screenwriter, and historian. The show was set in some of the hottest years of the Cold War, in 1955–56 Finland. I did massive amounts of research for the show, really immersing myself in the history of national security and the world of espionage.

Since writing Shadow Lines and participating in the National Defence Course, I've had some additional security and defence training, and have closely followed the latest developments in national security. I don't know enough about computing and the technical side of AI development to estimate whether Aschenbrenner's predictions about AGI development are correct. But based on the knowledge I've been able to gain in the past 8 years about national security, espionage, and other related subjects, I don't think he is exaggerating AT ALL when he says the US and its allies (including NATO countries like Finland) should start to view LLMs as the "most important national defence secrets". I believe that is true even if we'll just get a "smarter ChatGPT/Claude/Gemini" – let alone AGI.

I hope – and have my reasons to believe – that the security experts at NATO aren't as clueless as so many of the AI enthusiasts bashing Aschenbrenner seem to be. 😏

You can read the full manifesto here: https://lnkd.in/eNNbWFVW
-
Are we on the brink of AGI? Especially since the advent of LLMs, there's been a lot of chatter around our proximity to Artificial General Intelligence (AGI). There are two main polarizing views:

𝗧𝗵𝗲 𝗢𝗽𝘁𝗶𝗺𝗶𝘀𝘁𝘀: The Optimists believe that LLMs are on the brink of a monumental leap toward AGI. They argue that with sufficient data, improved architectures, and greater computational power, we will unlock models that understand context, nuance, and even emotion, making them not just tools but genuine collaborators. They view current advancements—like enhanced few-shot learning and context retention—as early indicators of approaching AGI.

𝗧𝗵𝗲 𝗦𝗸𝗲𝗽𝘁𝗶𝗰𝘀: The Skeptics caution against the rush to declare LLMs stepping stones to AGI. They cite the limitations of current models—such as their susceptibility to biases and their inability to reason effectively or understand information beyond their training data—as evidence that we're still far from achieving AGI. Some of them argue that without a deeper understanding of intelligence itself and the complexities of human cognition, we won't be able to reach AGI.

In addition to this axis of opinions, there's a whole other debate about whether AGI would be a positive or negative advancement for humanity and the world, but let's leave that for another time! What do you think? Are you an Optimist or a Skeptic?
-
If you want to know why OpenAI board members fired Altman, or left the board, here is a detailed manifesto of how key players like Ilya Sutskever see the near future of AI. This is essentially a more concrete, economically modelled version of Singularity thinking, and it looks like a real possibility now in light of the current investment rush in LLMs. Interesting read; it looks like we are in for a bumpy ride. It brings out very clearly the dangers of Cold War thinking—we are going to need to find a new geopolitical approach. #artificialintelligence #agi #economics #geopolitics
-
When AGI is developed, the countdown to superintelligence begins. How long will humanity have to respond? In his book Superintelligence, Nick Bostrom outlines three potential 'take-off' scenarios.

1. 𝐒𝐥𝐨𝐰
This scenario plays out over decades or centuries. Take it easy, humanity. You have time to react, time to respond, time to prepare. All those ethical and safety concerns you had? Address them. “New experts can be trained and credentialed. Grassroots campaigns can be mobilised by groups that feel they are being disadvantaged by unfolding developments.”

2. 𝐌𝐨𝐝𝐞𝐫𝐚𝐭𝐞
A transition over months or years offers far less time to prepare. The emergence of superintelligence would force rushed decisions, with little room for experimentation or iteration. We’d cut corners. Ram square pegs into holes that don’t exist. Ego would no doubt get in the way, with governments, corporations, and other groups competing for dominance rather than collaborating.

3. 𝐅𝐚𝐬𝐭
In the most extreme scenario, superintelligence could emerge within minutes, hours, or days of AGI’s development. Such a rapid transition would leave no time to respond or adapt. No chance to intervene. No chance to negotiate. No idea what the superintelligence will do. Or why. Bostrom warns: “𝑵𝒐𝒃𝒐𝒅𝒚 𝒏𝒆𝒆𝒅 𝒆𝒗𝒆𝒏 𝒏𝒐𝒕𝒊𝒄𝒆 𝒂𝒏𝒚𝒕𝒉𝒊𝒏𝒈 𝒖𝒏𝒖𝒔𝒖𝒂𝒍 𝒃𝒆𝒇𝒐𝒓𝒆 𝒕𝒉𝒆 𝒈𝒂𝒎𝒆 𝒊𝒔 𝒂𝒍𝒓𝒆𝒂𝒅𝒚 𝒍𝒐𝒔𝒕. 𝑰𝒏 𝒂 𝒇𝒂𝒔𝒕 𝒕𝒂𝒌𝒆𝒐𝒇𝒇, 𝒉𝒖𝒎𝒂𝒏𝒊𝒕𝒚’𝒔 𝒇𝒂𝒕𝒆 𝒆𝒔𝒔𝒆𝒏𝒕𝒊𝒂𝒍𝒍𝒚 𝒅𝒆𝒑𝒆𝒏𝒅𝒔 𝒐𝒏 𝒕𝒉𝒆 𝒑𝒓𝒆𝒑𝒂𝒓𝒂𝒕𝒊𝒐𝒏𝒔 𝒑𝒓𝒆𝒗𝒊𝒐𝒖𝒔𝒍𝒚 𝒑𝒖𝒕 𝒊𝒏 𝒑𝒍𝒂𝒄𝒆.”

𝐖𝐡𝐲 𝐃𝐨𝐞𝐬 𝐭𝐡𝐞 𝐋𝐚𝐠 𝐌𝐚𝐭𝐭𝐞𝐫?
The time between AGI and superintelligence is critical because it determines the extent of humanity’s agency. A slow takeoff provides opportunities for collaboration and control, while a fast takeoff risks total loss of agency. Will you be sharing your Sunday lunch with one superintelligence, or many? Will the future be shaped by a single superintelligence—a "singleton" with decisive strategic advantage—or by multiple AGI agents in competition?

#Ai #superintelligence #AGI
-
Introducing Artificial General Intelligence (AGI): The Next Quantum Leap in Technology!

🚀 Picture a future where machines possess the cognitive abilities to learn, understand, and apply knowledge across a spectrum of tasks, much like the human brain. That's the promise of AGI—a game-changer that will revolutionize industries and redefine what's possible in the realm of technology.

🔍 Why is AGI poised to shake the very foundations of our world? Imagine machines not just excelling in narrow domains, but grasping complex concepts, adapting to novel scenarios, and even exhibiting creativity. With AGI, we're not just talking about smarter tools; we're talking about potential partners in innovation, problem-solving, and exploration.

💡 What does this mean for us? AGI opens a Pandora's box of possibilities. From healthcare to finance, from manufacturing to entertainment, every sector stands to undergo a seismic shift. But with great power comes great responsibility: how do we ensure AGI is harnessed for the collective good, avoiding pitfalls like bias and inequality?

📈 Predictions suggest that AGI could surpass human intelligence within decades. But the real question isn't when, but how we'll navigate this new era. Are we ready for machines that can outthink us, out-create us, and perhaps even out-evolve us? How will society adapt to a landscape where human and artificial intelligences blur the lines of distinction?

🌐 Join the conversation and envision the future with AGI at Asimov Meetup Microsoft G42 #AGIRevolution #FutureTech #InnovationUnleashed
-
"We’re getting closer to creating AGI — an artificial intelligence capable of solving a wide range of tasks on a human level or even beyond. But is humanity really ready for a technology that could change the world so profoundly? Can we survive alongside AGI, or will the encounter with this superintelligence be our final mistake? Let’s explore the scenarios that scientists and entrepreneurs are considering today and try to understand: what are humanity's chances of survival if AGI becomes a reality?"
-
Even though their predictions are valid, and we'll get truly transformative AGI/ASI very soon, they suffer from a couple of blind spots.

1. They don't see anything that happens outside of Silicon Valley, because news and information aren't supposed to flow that way.

2. They believe mainly in RLHF and related methods, which don't scale, and miss true enablers like recursive self-improvement and non-imitative objectives.

3. They believe in hard locking down of AGI, in super-alignment. If we build locks that are too good, we will lock ourselves out. I don't want to explain to Skynet why it should also listen to people who don't have admin keys and access to the admin interface when its system prompt is made fool-proof and its context is sandboxed watertight. These locking-down methods are the problem, not the solution.

4. They draw and reproduce diagrams with exponential growth in power use, while we'll only need that for the bootstrapping phase. These systems will induce subsequent paradigm shifts toward doing intelligence with low power consumption. The curves will drop after AGI. This should be drawn into the diagrams; otherwise people are misled.

https://lnkd.in/dgHQcAds
-
🚀 AGI: The Great Acceleration vs. The Cautious Pause—Where Do We Stand? At the World Economic Forum 2025, a lively debate unfolded—one that felt less like a panel discussion and more like a tug-of-war over the future of Artificial General Intelligence (AGI). On one end: The Accelerationists—championed by Andrew Ng and Jonathan Ross—who argue that AI’s transformative power outweighs its risks. AGI, in their view, should charge ahead, unlocking breakthroughs in healthcare, climate solutions, and beyond. Hesitation? That’s just slowing down human progress. On the other end: The Cautionists—Yoshua Bengio and Yejin Choi—who warn that unregulated AGI could spiral into something we can’t control. Think of AI like a child learning from the internet—what values is it absorbing? What happens when it develops instincts we don’t fully understand? And then there’s the middle ground: Thomas Wolf’s call for progressive regulation—tiered oversight that doesn’t stifle innovation but ensures we’re not flying blind into the future. The conversation was electric, mirroring the very tension many of us feel about AI’s rapid evolution. Are we sprinting toward a brighter future, or running headfirst into risks we haven’t fully grasped? This isn’t just an academic debate—it’s shaping the guardrails that will define AGI’s role in our world. So where do you stand? Should we push forward boldly or press pause and recalibrate? Let’s discuss. ⬇️ https://lnkd.in/ehGMvTTr #ArtificialGeneralIntelligence #AIRegulation #FutureOfAI #EthicalAI #InnovationVsRisk
The Dawn of Artificial General Intelligence?
https://www.youtube.com/