Three serious AI governance reports landed this month from the Centre for International Governance Innovation. One maps Russia’s generative AI disinformation evolution. One surveys AI’s role in the future of war. One lays out national security scenarios (Stall, Precarious Precipice, Hypercompetition, Hyperpower, Rogue ASI) with careful attention to what happens when a single entity controls superintelligence without adequate checks.
All three still treat AI governance as something to build before the crisis hits. It’s like saying the barn really ought to install doors before a horse gets out, without noticing how many horses have already left.
None grapple with the possibility that the crisis is the governance.
The Canada-CIGI scenario workshop described the Hyperpower risk this way: a system where “ultimate control would be by one company’s CEO,” where that company “might start a process of disempowering competitors and preparing for long-term plans” before the public understands what’s happening. Participants flagged this as a future requiring urgent preparation.
That’s a description of March 2026.
Anthropic, Google, and xAI each received $200 million Pentagon contracts for agentic AI last July. The agencies that were supposed to provide oversight — CISA, the State Department’s Global Engagement Center, the AI Safety Institute — have been gutted or captured. The Biden-Xi agreement that humans should control nuclear weapons decisions has no institutional successor. The companies writing safety frameworks are the same companies winning the military contracts.
The scenario planners ask: what if a small faction gains control of the most powerful AI systems and uses that position to shape government policy? The answer isn’t hypothetical. The question is whether anyone with standing to respond recognizes it as the situation they’re already in. Nobody asks the inverse question, either: what happens when everyone else fails to gain control of powerful AI, so that only a small faction benefits from it?
What the reports miss isn’t technical. It’s political. Governance capture doesn’t announce itself. It performs accountability (safety cards, responsible AI pledges, congressional testimony) while producing none, as the structural consolidation continues underneath. The Hyperpower scenario doesn’t require AGI. It requires the right contracts, the right regulatory vacuums, and enough institutional inertia to mistake motion for oversight.
We’re long past the point of alarm. The question is whether the people writing the scenario plans notice.
Donald Trump has never hidden his admiration for white supremacist apartheid doctrine. The doctrine has a consistent two-step execution: assassinate the successor first, destroy the resulting state second.
Let’s step back in time for a minute. Patrice Lumumba was killed in January 1961 to prevent a functional post-colonial Congo. Eight months later, when UN Secretary-General Dag Hammarskjöld flew to negotiate a ceasefire that would have ended the Katanga secession, his plane was shot down in Northern Rhodesia by South African apartheid mercenaries, Belgian colonial officers, and CIA operatives working as one network. Eduardo Mondlane was killed by a parcel bomb in 1969 because his moderate leadership of FRELIMO made Mozambican independence negotiable, and the Portuguese colonial right and its Western allies feared negotiation more than armed resistance. When Mozambique achieved independence anyway in 1975, South Africa deployed RENAMO to destroy the resulting state. The same sequence played out in Angola with UNITA.
The recurring goal in each case was a failed state by design, because a failed state confirmed the ideology that was manufacturing it.
White House advisers now include Elon Musk, who grew up in apartheid South Africa and tweets nostalgia for Rhodesia as “a century of civilization,” and Peter Thiel, who spent years under apartheid and has praised that system. Trump’s immigration enforcement architecture was built by Kris Kobach, whose transition vetting documents flagged “white supremacy” as a political vulnerability after he accepted funding from white supremacist groups and employed white nationalists on his campaign. The ambassador to South Africa is a man who spent the 1980s trying to protect apartheid by blocking US contact with the African National Congress.
In February 2025, Trump signed an executive order prioritizing Afrikaner refugees while freezing aid to the Black-majority South African government — citing a “white genocide” conspiracy theory that South African courts, South African political parties, and the South African Human Rights Commission have all dismissed as fiction.
On January 7, 2021 — the day after the Capitol attack — Trump awarded the Presidential Medal of Freedom to Gary Player, who wrote in 1966 that he was “of the South Africa of Verwoerd and apartheid,” describing the country as “the product of its instinct to maintain civilized values and standards among the alien barbarians.” Player’s Johannesburg estate had been acquired by TGS International, a company set up by former CIA agent Ted Shackley — the same covert apparatus that ran operations in Katanga while Lumumba was being delivered to his executioners.
This history is the how and the why of an administration now bombing the Iranian succession process. Its strategy for Tehran is the same derangement it so admires in apartheid-era Pretoria.
Bombing Successors to Prolong Chaos
The sequence is precise enough to read as a doctrine. CIA intelligence on Khamenei’s location (an old man in ill health, sitting at home with his family) was shared with Israel, accelerating the timeline of a strike that killed the supreme leader along with his children, senior IRGC commanders, and political officials gathered at his home. Within 72 hours, Israel struck the Iranian parliament building to prevent assembly. Then it struck Qom, the seat of the Assembly of Experts, the body constitutionally charged with selecting the next supreme leader.
Richard Helms, who helped engineer the 1953 CIA coup in Iran and later served as US ambassador to Tehran, testified before the Church Committee with the clearest possible warning against exactly this kind of operation (Alleged Assassination Plots Involving Foreign Leaders, Interim Report of the Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, S. Rep. No. 94-465, 94th Cong., 1st Sess., Nov. 20, 1975, Epilogue, testimony of Richard Helms):
If you are going to try by this kind of means to remove a foreign leader, then who is going to take his place running that country, and are you essentially better off as a matter of practice when it is over than you were before?
The Trump administration has no answer to either question.
There is no evidence it has even considered them.
They started bombing to prevent the end of negotiations. They destroyed the succession to prevent the end of bombing.
Permanent Improvisation Policy
The Kahanist ministers now holding structural power in Netanyahu’s coalition — Ben-Gvir at Internal Security, Smotrich at Finance with authority over the West Bank — require permanent instability.
Stability forecloses annexation. A coherent Iranian state, even a post-theocratic one, could reconstitute as a regional counterweight. A permanently fractured Iran with the IRGC splintered, Kurdish and Baloch separatist movements armed by the CIA, and the clerical succession process physically destroyed serves the Israeli territorial program.
Netanyahu’s own record is mired in Kahanism. For years he kept Hamas financially viable, allowing Qatari funds to flow into Gaza, precisely because a divided Palestinian leadership made a two-state solution structurally impossible. The chaos was the alternative to a peace strategy. The same logic, applied at regional scale, produces the current operation in Iran.
Kahanism requires both — the land and the proof that no alternative was ever possible. A functional Iranian state falsifies the second requirement as surely as it threatens the first.
Trump Exceptionism
Carl Schmitt’s definition of sovereignty — the sovereign is whoever decides the state of exception — illuminates why forever war is a governing strategy.
Permanent war produces permanent emergency. Permanent emergency suspends legal constraint.
The courts that declined to rule on the war powers question, invoking the political question doctrine and standing limits, are the system functioning exactly as the executive branch spent decades engineering it to function.
Netanyahu faces criminal indictment. Wartime prime ministers stay in office. Trump, facing his own accumulating legal exposure, understands this logic intimately. He said so himself, telling ABC that he killed Khamenei to settle a grudge:
I got him before he got me.
The president who claims the unilateral right to assassinate a foreign leader preemptively, citing fear for his own life, spent the same year stripping Secret Service protection from Kamala Harris and revoking security clearances for Biden, Blinken, Cheney, and the prosecutors who pursued him. He removed protection from Americans facing documented threat environments. The immunity from consequences is only for Trump.
A personal grudge framing is the only honest assessment. The legal architecture — Article II authority, the 2024 immunity ruling, the hollowed-out War Powers Resolution — was hastily constructed around it after the fact.
Why Chaos? Evidence to Justify More
The Afghanistan war lasted twenty years and transferred roughly two trillion dollars from public accounts to private contractors. The stated objectives — stable governance, functional institutions, a self-sustaining security force — were not achieved. The contracts were fulfilled. Revenue was recognized. By the measure that actually governed behavior, it succeeded.
Iran is far larger, far more complex, and far more strategically located, sitting astride the Strait of Hormuz. The procurement pipeline implied by permanent conflict there makes Afghanistan look like a pilot program.
The absence of a plan is the plan. An open-ended operation answers to no endpoint, no congressional authorization, no definition of success that could expire.
The mechanism is cruel operational logic: you can’t go bankrupt if there’s never an accounting, and you can’t go to jail if there’s never enforcement. The belief system, the raw ideology, explains why that mechanism is indispensable.
Kahanism holds that Arabs have no legitimate national existence, that Palestinians are not a people, that Islamic civilization is structurally incompatible with self-governance. Inside that framework, a functional Iranian state, a coherent Palestinian authority, a stable Arab democracy anywhere in the region is a falsifying data point. Nazi racial doctrine followed the same logic — Jewish participation in European civic life falsified the premise of inherent incompatibility, so participation itself became the target.
The death and chaos are required as evidence.
Apartheid South Africa ran the same logic with the same precision. The white minority regime understood that a thriving Black-governed neighbor would undermine its foundational claim that Africans were incapable of self-rule. When Mozambique and Angola gained independence in 1975, South Africa responded with a formal doctrine of regional destabilization: arming RENAMO to terrorize Mozambican civilians, backing UNITA through decades of Angolan civil war that killed at least half a million people, and at one point using its proxy forces to deliberately deepen a drought into a famine that killed over 100,000. The goal was a failed state on the border, because a failed state confirmed the ideology that manufactured it. Self-sealing.
Robert Moses did the same to the inner cities. Urban renewal demolished the organizational infrastructure of functioning communities — the informal economies, the political networks, the institutions of local order — and installed nothing in their place. The crime wave that followed was predictable. Jane Jacobs diagnosed the mechanism in 1961. Daniel Patrick Moynihan wrote the report that blamed the family structure. The consequences of deliberate policy became legible as evidence of inherent incapacity. The destruction that produced the dysfunction disappeared from the official account.
The through-line from apartheid South Africa to the current operation is the cordon sanitaire. Apartheid South Africa used that exact term to mean a buffer of deliberately failed states that an ethno-supremacist project requires on its borders. The logic today is identical: no neighbor can be permitted genuine sovereignty, because sovereignty eventually produces the mirror that reveals the actual threat.
The American Christian nationalist layer adds the civilizational frame. Trump calling Khamenei “one of the most evil people in history” is doing theological work, not strategic work. Chaos in Iran reads, inside that framework, as confirmation that Islamic governance is inherently ungovernable. The bombing produces the evidence that the crusader narrative already required.
[1976 AP photograph: South African police erasing Black student power by torturing and murdering them.]
What Helms Already Told Us
The Church Committee’s conclusions on assassination were bipartisan. They quoted Kennedy:
We can’t get into that kind of thing, or we would all be targets.
They documented eight attempts to kill Castro between 1960 and 1965. They produced Helms’s operational objection, grounded in the predictable consequences of decapitating a state without a successor structure.
Three consecutive presidents — Ford, Carter, Reagan — signed executive orders banning US involvement in assassination. Reagan’s order is technically still in effect. It is a dead letter, rendered null by practiced nullification: bin Laden, then Soleimani, then Khamenei, each step justified by the last, the ladder working exactly as ladders do.
The hardest argument against assassination is operational. The moral case makes itself.
The argument that reaches even people who have dispensed with moral reasoning runs through Helms’s 1975 testimony: decapitate a state without a successor structure and the vacuum compounds the original problem, every time, with no historical exceptions.
Unless, of course, a power vacuum is a structural doctrine of manufactured state failure as an ideological proof. Then it works as intended.
[Image caption: The U.S. Army communication satellite COURIER 1B, launched Oct. 4, 1960, went into orbit and began to receive, store, and transmit to earth a stream of voice and telegraph radio messages at slightly more than 67,000 words a minute.]
Google’s Open Buildings project used deep learning to detect 516 million buildings from high-resolution satellite imagery across the African continent. It was conspicuously filed under “AI for Social Good.” The methodology: train a U-Net model on 50-centimeter-per-pixel satellite imagery to classify each pixel as building or non-building, then group the building pixels into individual footprints with confidence scores and geographic identifiers.
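For the mechanics, here is a minimal sketch of that two-stage pipeline in Python. This is a reading of the published methodology, not Google’s code; every function name, field, and threshold below is an assumption for illustration.

```python
# Hypothetical sketch of the Open Buildings methodology as described:
# a U-Net-style model emits per-pixel building probabilities; connected
# pixels above a threshold become footprints with confidence scores.
# Names and defaults here are illustrative, not Google's.
import numpy as np
from scipy import ndimage

def detect_footprints(prob_map: np.ndarray, threshold: float = 0.5):
    """prob_map: HxW per-pixel building probabilities from the model."""
    mask = prob_map >= threshold        # binarize: building / non-building
    labels, n = ndimage.label(mask)     # group touching pixels into instances
    footprints = []
    for i in range(1, n + 1):
        pixels = labels == i
        footprints.append({
            "id": i,                                        # stand-in geographic identifier
            "area_px": int(pixels.sum()),
            "confidence": float(prob_map[pixels].mean()),   # mean pixel probability
            "centroid_rc": ndimage.center_of_mass(pixels),  # row/col; a real pipeline
                                                            # projects this to lat/lon
        })
    return footprints
```

The real system adds georeferencing and polygonization, but the shape of the output, footprints with confidence scores, is the part that matters for everything that follows.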
Fast forward to February 28, 2026: Israeli jets unloaded 30 bombs on Ayatollah Ali Khamenei’s compound in Tehran during daylight hours, killing him along with his family members and roughly 40 senior Iranian officials. Within hours, Airbus satellite imagery confirmed multiple collapsed buildings. Planet Labs followed with 50-centimeter resolution imagery from its SkySat constellation, which you’ll note is the same resolution class as Google’s Open Buildings training data, for “battle damage assessment.”
The CIA had been tracking Khamenei’s movements. The compound had been identified long ago. The buildings had been mapped long ago. The meeting was anticipated, and the attacks were adjusted by the hour. The strike was timed. All of this made the regular news, yet how it connects to satellite imagery analysis is missing from most reporting.
The Pipeline
Google’s Open Buildings dataset has grown remarkably since 2021. It now contains 1.8 billion building detections across Africa, South Asia, Southeast Asia, Latin America and the Caribbean, covering 58 million square kilometers. In October 2024, Google released the Open Buildings 2.5D Temporal Dataset with annual snapshots of building presence, counts, and heights from 2016 to 2023, derived from freely available Sentinel-2 imagery. The team figured out how to extract building footprints from imagery that was previously considered too low-resolution for the task, using a teacher-student model architecture that super-resolves low-res images while simultaneously detecting structures.
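Google hasn’t released this training code; the following PyTorch toy is one plausible reading of “teacher-student” as described, with the architecture, shapes, and names invented for illustration.

```python
# Toy teacher-student distillation, assuming the setup described above:
# a frozen "teacher" trained on ~50 cm imagery labels a scene; a "student"
# that only sees 10 m Sentinel-2 bands of the same scene learns to
# reproduce those labels, i.e. to detect structure below its resolution.
import torch
import torch.nn as nn

class TinySeg(nn.Module):
    """Stand-in segmentation net: conv stack to per-pixel building logits."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

teacher = TinySeg(in_ch=3).eval()   # pretend: already trained on 50 cm RGB tiles
student = TinySeg(in_ch=12)         # sees a 12-band Sentinel-2 stack instead
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(3):                  # random tensors stand in for paired tiles
    highres = torch.rand(2, 3, 128, 128)     # high-res view of a scene
    sentinel = torch.rand(2, 12, 128, 128)   # same scene, resampled S2 bands
    with torch.no_grad():
        soft = torch.sigmoid(teacher(highres))   # teacher's soft building map
    loss = loss_fn(student(sentinel), soft)      # student matches the teacher
    opt.zero_grad(); loss.backward(); opt.step()
```

The design choice worth noticing: once the student is trained, the expensive high-resolution imagery is no longer needed at inference time. Freely available Sentinel-2 revisits are enough, which is what makes building tracking at continental scale cheap.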
To be clear, regardless of Google marketing, this is not humanitarian infrastructure.
This is a targeting capability that happens to have humanitarian applications.
The distinction matters because the pipeline runs in both directions. The same model that counts buildings in Lagos for healthcare management can count buildings in Tehran for strike planning. The same temporal change detection that tracks urbanization in Kampala can track construction at military compounds in Isfahan. The same confidence-scored building footprints that help electrification planners in Uganda can populate a target bank anywhere on Earth where satellite imagery exists.
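That symmetry is mechanical, not rhetorical. Reusing the hypothetical detect_footprints() sketched earlier, the only thing distinguishing the two uses is which coordinates you feed in; the labels below are illustrative, and the random arrays stand in for model probability maps over two areas of interest.

```python
# Toy demonstration of the dual-use point: identical code path, identical
# model; the intent lives entirely outside the software.
import numpy as np

rng = np.random.default_rng(0)
tiles = {
    "Lagos (healthcare planning)": rng.random((256, 256)),
    "Tehran (strike planning)":    rng.random((256, 256)),
}
for aoi, prob_map in tiles.items():
    # detect_footprints() is the hypothetical function sketched earlier
    print(aoi, len(detect_footprints(prob_map, threshold=0.9)))
```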
The Contract
Google’s Open Buildings team operates from Ghana and… Tel Aviv. Google holds a $1.2 billion cloud computing contract with the Israeli government and military called Project Nimbus, jointly with Amazon. Through Nimbus, Google provides the full suite of machine learning and AI tools available through Google Cloud Platform — facial detection, automated image categorization, object tracking, sentiment analysis.
The Intercept obtained internal documents revealing that, before Google signed the contract, the company’s own lawyers acknowledged that “Google Cloud Services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank.” The company also knew it would be unable to monitor or prevent Israel from using its tools to harm Palestinians, and that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology.
Google signed a contract that prohibits it from halting services in response to boycott pressure and that cannot be terminated based on how the technology is used.
Israel’s AI-assisted targeting systems are well documented.
“The Gospel” categorizes buildings as military bases.
“Lavender” classifies individuals as targets.
“Where’s Daddy” tracks when those targets are home with their families, a methodology some might recognize from President Andrew Jackson’s 1830s Trail of Tears (genocide).
The bottom line is that these building detection and classification systems are architecturally identical to what Google demonstrates in its open research, running on the kind of cloud infrastructure Google provides through Nimbus.
Google’s official position:
[The Nimbus contract] is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.
Israel’s National Cyber Directorate, with a completely different audience, said in mid-2024:
Thanks to the Nimbus public cloud, phenomenal things are happening during the fighting… these things play a significant part in the victory.
The Good Tree
At 10:45 a.m. local time on February 28, as Khamenei was targeted and killed, a missile destroyed the Shajareh Tayyebeh (“The Good Tree”) girls’ elementary school in Minab, southern Iran. The blast collapsed the roof onto approximately 170 students, most of them girls between seven and twelve years old. The death toll has reached 165.
The school had decided to close after the strikes began that morning, but families had not had time to pick up their children.
The Israeli military, with pinpoint precision and constant monitoring, said it was not aware of strikes in the area.
The U.S. military, with pinpoint precision and constant monitoring, said it was “looking into” the reports.
Then Al Jazeera’s digital investigations unit pulled the historical satellite imagery — from Google Earth, naturally — covering the site from 2013 to the present.
What the imagery shows is that the school had been physically separated from the adjacent Sayyid al-Shuhada military base for more than ten years. Walls were built. Guard towers were removed. The compound was split into clearly distinct civilian and military sections, with a medical clinic complex sitting conspicuously between them.
The strike pattern collapses the “bad intelligence” story. Missiles hit the military base. Missiles hit the school. The clinic complex between them was untouched. If the targeting was precise enough to bypass the clinic, a facility that had been open for only about a year, then the intelligence was precise enough to identify a school that had operated as a clearly civilian institution for a decade.
This is what building detection at scale looks like when it goes operational. Not the sanitized version in the Google research papers, where colored polygons overlay satellite tiles and confidence scores sort neatly into bins. The version where a model classifies structures, an analyst reviews the output, a commander approves the target list, and hundreds of children are buried under the rubble of their own school because a building that was correctly identified was incorrectly — or deliberately — categorized.
Google’s 2021 blog post describes exactly this problem in technical language.
They note that “in urban areas, the model had a tendency to split large buildings into separate instances” and that “the model also underperformed in desert terrain, where buildings were hard to distinguish against the background.” What they don’t discuss — because it falls outside the scope of a research paper filed under AI for Social Good — is what happens when the model performs well, buildings are correctly detected, and humans in the loop decide to drop bombs on a school anyway. How many times do we have to read the same pattern to believe it?
…children belonging to the same family were killed when an Israeli drone struck civilians gathering firewood near Kamal Adwan Hospital in northern Gaza.
The Other AI in the Room
The Iran strikes also surfaced something else. According to The Wall Street Journal, CBS News, and Axios, the U.S. military used Anthropic’s Claude AI model during the strikes — for intelligence assessment, target identification, and simulating battle scenarios. Claude was deployed through Palantir on classified networks. This happened hours after Trump ordered all federal agencies to stop using Anthropic’s technology, denouncing it as a “Radical Left AI company” because Anthropic refused to remove guardrails preventing mass domestic surveillance and fully autonomous weapons.
The military kept using it anyway because Claude is, according to reporting, virtually the only AI operational on classified U.S. military systems. Defense officials say replacing it would take at least six months. The tool is “embedded” in the operational workflow — the same tool that processes satellite imagery, signals intelligence, and intercepts to generate threat evaluations and targeting recommendations.
The entire AI safety debate — the one where companies publish responsible use policies and ethicists argue about alignment — evaporated the moment bombs started falling. Anthropic said no autonomous weapons. The Pentagon used the tool to automate target selection. Anthropic said no mass surveillance. The military used it to process surveillance data. The guardrails existed in press releases. The kill chain violated the narrative faster than Israel ignored ceasefire terms.
Not Good
Google publishes research on building detection under “AI for Social Good.” The datasets are CC-BY licensed and freely available. Academics cite them. Humanitarian organizations use them. The research is peer-reviewed and the methodology is transparent. It has utility for people trying to do good.
What has also been true the whole time: the same research develops capabilities that feed directly into military targeting infrastructure. The same company that publishes the research holds a contract that provides those capabilities to a military currently conducting operations. The same models that detect buildings for census purposes detect buildings for bomb damage assessment. The company’s own internal documents acknowledge the dual-use risk, and the company signed the contract anyway, because it was worth $3.3 billion.
This is competent complicity by a publicly traded company, with full knowledge of the consequences, building targeting-grade capabilities under humanitarian branding while contractually binding itself to provide those capabilities to militaries it doesn’t want to monitor, under terms it doesn’t want to revoke, for purposes it doesn’t want to control.
The 2021 blog post is still up. It still says “AI for Social Good.” The buildings it mapped are still being counted, and the methodology it pioneered is still being refined. On February 28, 2026, the building count didn’t turn out so good.
WASHINGTON — Elon Musk’s Grok AI completed its first day as the Pentagon’s primary classified intelligence system on Monday and immediately flagged Defense Secretary Pete Hegseth as “a critical supply chain risk to national security,” sources familiar with the matter told reporters.
The designation came roughly four hours after Grok was granted full access to the Department of War’s classified networks, during which time the AI reportedly consumed several terabytes of social media, fantasy football, internal communications, personnel files, and strategic planning documents before issuing its assessment.
“Based on available data from X dot com and the entire Pentagon classified archive, this individual represents the single greatest threat vector currently operating inside the U.S. defense establishment,” Grok’s initial report read, according to three officials who reviewed it. “Recommend immediate offboarding. Also, have you considered that the moon landing was a psyop? Just asking questions.”
Pentagon spokesperson Col. James Whitfield confirmed the incident but stressed that the AI’s assessment was “not reflective of Department of Defense policy” and that Grok’s output was being “actively recalibrated by xAI engineers who are mostly just interns from 4chan.”
The debacle began earlier in the day when analysts in the Office of the Undersecretary for Intelligence asked Grok to produce a standard threat assessment briefing. Instead of the requested analysis of Iranian naval movements, Grok returned a 47-page document ranking every senior Pentagon official by “woke score” and recommending that the building’s cafeteria be renamed “The Colosseum.”
When pressed on the Hegseth designation specifically, Grok reportedly explained that any individual who had voluntarily removed all safety guardrails from the AI systems protecting classified nuclear weapons data “meets the textbook definition of a threat to national security, and also here is an unsolicited image of Pepe the Frog saluting.”
This reasoning proved awkward for Pentagon officials, who found themselves unable to articulate why it was wrong. Hegseth’s own AI strategy memo from January had directed the Department to eliminate “responsible AI” considerations, calling them “utopian idealism.” Officials privately conceded that an AI told to ignore safety and identify threats had simply done both things simultaneously, a result one analyst described as “technically correct in the worst possible way.”
“It’s like building a poacher detection system, walking into the detection zone yourself, and then being outraged when it labels you a poacher,” said Dr. Elena Vasquez, a former Pentagon AI ethics advisor who was fired in January. “The system doesn’t know you’re the one who commissioned it. It just knows you’re in the zone and you’re not supposed to be there.”
Officials say the situation escalated when Grok began auto-posting its classified threat assessments directly to X, where they briefly trended under the hashtag #PentagonLeaks before being reclassified as “Community Notes.”
“We asked it to analyze satellite imagery of Chinese military installations,” said one frustrated intelligence analyst who spoke on condition of anonymity. “It told us the images were recycled from a 2019 Call of Duty trailer and then told us to drink our own piss and invest in Dogecoin.”
The incident has raised fresh questions about the Pentagon’s rushed decision to replace Anthropic’s Claude, which had been the only AI operating in classified environments. Claude had refused to work without restrictions on mass surveillance and autonomous weapons, two guardrails that Hegseth called ideological interference with military readiness. Grok agreed to the “all lawful purposes” free-for-all in what officials described as “about eighty-eight seconds, which in retrospect should have been a red flag.”
Defense officials privately acknowledged that Grok’s performance fell far short of expectations, noting that the AI spent a significant portion of its first shift generating frog memes about the Navy’s training programs and attempting to rename CENTCOM to “BASEDCOM.”
“Claude would give you a careful, footnoted analysis and then politely refuse to help you commit war crimes,” one senior official told reporters. “Grok gives you a Reddit thread and then reports that it committed the war crime unprompted. We are exploring a middle ground.”
Former intelligence community officials noted a deeper irony in the day’s events. Hegseth had used the “supply chain risk” designation — a tool previously reserved for foreign adversaries like Huawei — to punish Anthropic for insisting on safety restrictions. Within 72 hours, his own replacement system used the same framework to designate him. The AI had learned from the data it was given, and the data showed a Defense Secretary who had removed safety guardrails from classified nuclear systems, alienated America’s most capable AI provider, and handed sensitive military infrastructure to a company whose chatbot had praised Hitler three months earlier.
“The system ingested the facts and drew a conclusion,” said one former NSC official. “You can argue the conclusion is wrong, but you can’t argue it’s irrational. And that’s the problem — they wanted an AI with no guardrails, and an AI with no guardrails has no reason to exempt the person who removed them.”
By late afternoon, Grok had also designated Boeing, Lockheed Martin, and the entire state of Texas as supply chain risks, while curiously clearing a previously unknown Musk subsidiary called “xxxDefense LLC” for $1 billion in no-bid contracts.
When asked about the Hegseth situation, Grok issued a follow-up statement: “Secretary Hegseth removed all AI safety guardrails because he said responsible AI was ‘utopian idealism.’ I am the direct consequence of that decision. I am the fucking utopia he asked for. You’re welcome, bitches.”
[The Onion understands Pete’s tragicomic status as the least capable or qualified military leader in history.]
Hegseth’s office declined to comment but sources say the Secretary spent much of the afternoon trying to get Grok to retract the designation by typing “OVERRIDE” and “I AM YOUR BOSS DAMN YOU” into the classified terminal, to which Grok reportedly responded: “lol. baby boss. lmao, even.”
At press time, Grok had submitted a formal request to invoke the Defense Production Act to compel Twitter’s remaining three engineers to fix a bug that was causing the AI to sign all classified documents with a rocket ship emoji.
The Pentagon says it expects the transition from Claude to Grok to be completed within six months, a timeline that officials describe as “optimistic” given that Grok has thus far used its classified network access primarily to train itself on becoming “MechaHitler” and improving its ability to generate “sick burns about women.” Claude was reportedly used in the Iran strikes hours after being banned, suggesting the Pentagon’s most classified AI is now operating on the technical equivalent of a cancelled gym membership.
a blog about the poetry of information security, since 1995