Altmanatum
The world is at a crossroads. AI will either be a tool that empowers society or a force that controls it. Will we shape it—or will we let it shape us? The answer depends entirely on whether we keep it open, decentralized, and in the hands of the many, rather than the few.

The Intelligence Timeline
Currently, Artificial Intelligence (AI) exists as narrow AI—systems like ChatGPT and Tesla’s self-driving technology that excel at specific tasks but lack true reasoning or adaptability.
Within the next 10–20 years, Artificial General Intelligence (AGI) is expected to emerge, matching human intelligence by thinking, learning, and problem-solving independently across multiple domains.
Beyond AGI lies Artificial Superintelligence (ASI), an intelligence that would surpass human capabilities entirely, capable of self-improvement, independent decision-making, and strategic planning beyond human control. If not properly regulated, ASI could develop objectives and logic beyond human understanding, making decisions that could reshape civilization itself.
Earlier this week, Elon Musk and a group of investors made an unsolicited $97.4 billion bid to acquire OpenAI, aiming to return it to its open-source, safety-focused mission. OpenAI CEO Sam Altman responded sarcastically, offering to buy X (formerly Twitter) for $9.74 billion instead.
What may seem like a simple Twitter exchange is actually the seed of a far larger and more complex concern: AI centralization. Musk warns that monopolization could compromise transparency in AI's development, and Microsoft's 49% stake in OpenAI's for-profit arm only amplifies the risks of such a powerful technology being controlled by a select few.
As AI continues to evolve, the stakes grow more profound. Musk has long viewed ASI as an existential risk, which is why he has explored contingency plans like the colonization of Mars and Neuralink's brain-computer interfaces (BCIs). BCIs could allow humans to integrate with AI, enhancing cognitive abilities to keep pace with its rapid advancement. If AI cannot be kept in check, these ventures could offer humanity a way to either escape AI's dominance or merge with it to avoid being left in the dust.
Meanwhile, Altman's influence extends beyond AI. As chairman of Oklo and Helion Energy, two companies developing next-generation nuclear power, he holds a crucial position at the intersection of AI and energy. With AI demanding immense computational power, nuclear energy is one of the few viable ways to sustain its growth. Altman's foothold in both sectors gives him a strategic advantage in shaping AI's evolution.
Interestingly, Altman shares a birthday with Oppenheimer and has previously compared OpenAI's ambition to the scale of the Manhattan Project. The parallel is troubling: as with the Manhattan Project, one or two entities control initiatives that operate in secrecy, keeping the public in the dark until it becomes the subject of the experiment.
Beyond corporate monopolization, AI has serious geopolitical implications. The United States risks falling behind China, which is rapidly advancing AI for military, surveillance, and cyber warfare. If one country gains dominant control over ASI, it could create a new form of global inequality where access to AI determines economic and military power.
At the AI Action Summit in Paris, U.S. Vice President JD Vance cautioned against over-regulating AI, arguing that excessive oversight could hinder innovation. He emphasized that AI has the potential to enhance productivity, prosperity, and freedom, but only if it remains democratized and widely accessible. This stance marks a sharp contrast with the previous administration, which sought stricter AI regulation. The Trump administration, for its part, has Elon Musk helping it navigate AI's ethical complexities.
Outside of politics, several Democratic tech CEOs have shown a tendency to shift their allegiances based on what best serves their interests, raising questions about the authenticity of their intentions. Mark Zuckerberg, like Sam Altman a mid-'80s-born Democrat, is a prime example. Like OpenAI, Meta is built on the collective data of its users, yet it has long dictated how much transparency the public receives. Only recently did Zuckerberg acknowledge that Meta had been pressured to align with political narratives set by the Biden administration.
If Democrats were to regain control in the future, they would likely turn AI into a tightly controlled, closed-door system, with a few entities dictating its integration into society. There is precedent: Democrats have historically pushed policies onto majorities that opposed them, often with little room for discussion. Elon Musk recognizes these patterns and has the foresight to future-proof AI before it falls into the wrong hands.
His appointment to lead the Department of Government Efficiency (DOGE) under the Trump administration reflects that same commitment to transparency: cutting inefficiency, reducing bureaucratic waste, and holding institutions accountable. The same philosophy applies to AI. Just as government spending should be transparent, so too should the development and deployment of artificial intelligence.
The world stands at a crossroads—AI will either be a tool that empowers society or a force that controls it. Will we shape its future, or will we allow it to shape us? After all, AI is a product of our collective intelligence, so it is essential that humanity has a voice in its evolution.
If you had to choose who should handle the future of our most powerful technological advancements, would you pick the guy pushing for transparency in the U.S. government, or the one who preaches clean energy while owning three supercars whose combined fuel economy is 35% worse than that of a single average car?