Category Archives: Poetry

Elon Musk Calls for Armed Rebellion in UK, Yet Fails the Simple God and Chocolate Test

When British soldiers entered Berlin in 1945, they encountered something both heartbreaking and illuminating: German children hiding in Nazi bunkers with weapons, terrified of the world yet unable to articulate what they were actually afraid of. These children had been indoctrinated through Hitler’s propaganda platforms to believe that Allied soldiers would kill them if they surrendered.

The battlefront solution, as one British veteran recalled, was surprisingly simple:

You put a bar of chocolate in their hands and it alters the whole war – as far as the children are concerned.

A Catholic priest who spoke German would calm these remaining Nazi adherents down, and suddenly the existential threat they’d been taught to fear dissolved completely in the face of basic human kindness coupled with overwhelming force.

This historical moment offers a crucial lens, rooted in parenting fundamentals, for understanding contemporary political rhetoric, particularly Elon Musk’s recent inflammatory and militant statements to a far-right rally in London.

Engineered Fears Lack Specificity

An AfD (Nazi Party) rally in Germany was headlined by Elon Musk

Speaking via video link to a “unite the [white] kingdom” rally organized by political extremist Tommy Robinson, Musk deployed weaponized disunity language that follows a familiar pattern.

Musk… told the crowd that “violence is coming” and that “you either fight back or you die”.

He said: “I really think that there’s got to be a change of government in Britain. You can’t – we don’t have another four years, or whenever the next election is, it’s too long.

“Something’s got to be done. There’s got to be a dissolution of parliament and a new vote held.”

On its face, he is calling for an end to the elected government. It is the most anti-unifying tactic possible.

Note also the overt ignorance on display: “four years, or whenever” and “something” are the sum total of his demand for immediate action.

Such statements of weaponized disunity represent the systematic deployment of rhetoric designed not to reform government policies or win electoral victories, but to collapse the shared foundations that make democratic governance possible.

Normal political opposition seeks to change who governs or how they govern within existing institutional frameworks. Musk’s call for “dissolution of parliament” bypasses democratic processes entirely – he’s not advocating for policy changes, candidate support, or even constitutional amendments, but for militant extremists to immediately destroy Britain’s elected government.

This call to arms mirrors Golding’s famous novel Lord of the Flies: when institutional authority collapses, the result isn’t liberation but an intentional state of chaos that inevitably exposes anyone vulnerable to abuse by a small authoritarian cabal. Just as Ralph’s democratic leadership in the novel protected Piggy until the system broke down and constant violence took over, democratic institutions – however flawed – provide a framework within which peaceful conflict resolution remains possible.

Lord of the Flies, by William Golding. Russell Square, London: Faber and Faber, 1954.

Musk’s rhetoric encourages people to abandon safe protective structures without offering any viable alternative governance model, creating the very power vacuum that historically leads to authoritarian capture or societal breakdown.

The “weaponized” aspect thus lies in using democratic freedoms (free speech, assembly) to advocate for democracy’s elimination – exploiting the system’s tolerance to promote intolerance, precisely what Popper so clearly warned against in his paradox of tolerance.

This intentional abuse of language has in fact been studied extensively by historians of disinformation warfare (e.g. social engineering attacks):

  • Existential Threat: “Violence is coming to you. You either fight back or you die, that’s the truth.”
  • Urgent Timeline: “We don’t have another four years… it’s too long. Something’s got to be done.”
  • Vague Enemy: References to “the left,” “the woke mind virus,” and unspecified forces threatening British society.
  • Call to Extraordinary Action: Demanding “dissolution of parliament and a new vote.”

This rhetoric creates what security experts might call a “crisis of meaning”: it bypasses unity and falsely generates feelings of existential threat despite the lack of concrete, specific dangers that would justify the extreme responses being advocated. “They” are coming to get “you” is how the bogus “caravan” rhetoric of 2018 was used to drive national security fraud (illegal redirection of funds) for Americans involved in the disastrous Maginot-like “wall” campaign.

Historical Basis in Today’s Nazi Endgame

The parallels between Musk’s rhetoric and Nazi Germany’s final propaganda push reveal identical patterns. After 1942, when military defeat became inevitable, Nazi messaging abandoned rational policy arguments for purely apocalyptic themes designed to prevent surrender.

The regime’s massive construction projects exemplify this delusional mentality. Structures like the absurd Boros bunker in Berlin, built by Nazi slave labor in 1943 as a “shelter,” functioned more like an above-ground prison, where thousands of Germans were crammed in to cower in fear rather than be meaningfully protected. Nazi propaganda sold death camps as freedom, entrapment as safety, total desperation as preparation for victory.

General Erwin Rommel exemplified this tragic mindset of self-destruction – when given the choice between suicide and having his entire family killed in front of him, he chose the poison pill instead of a fight, telling his family he could not bear to live under Allied occupation while condemning them to it. This selfish binary thinking – death or dishonor, with no middle ground and totally devoid of care for others – became the genocidal regime’s final message.

German children were indoctrinated with binary thinking in order to force an unnatural and inhuman choice. Hitler’s regime calculated that any ray of sunshine at all would disinfect even the youngest minds, and so the binary was absolutist: fight to the death against liberation or face annihilation. Thus, when Allied soldiers actually arrived offering chocolate, fresh air and daylight instead of violence and isolation, the entire ideological framework collapsed instantly.

Again, Nazi propaganda used social engineering techniques known to be effective:

  • Emotional appeal (life or death stakes)
  • Timing appeal (no time to think)
  • Vagueness appeal (allowing people to project their own fears)
  • Absolutist appeal (only two options, a false choice between total extremes, driven by the emotional, timing and vagueness appeals above)

Musk Grew Up on a Diet of Hitler Propaganda

Musk’s rhetoric follows this template with remarkable precision. We know his grandfather was arrested in WWII Canada for sympathies with Hitler, and fled to South Africa to support apartheid. We also know from Musk’s father that Elon was raised in an environment promoting Nazism. It should come as little surprise that Musk’s statements still create a sense of imminent civilizational collapse while remaining frustratingly non-specific about actual threats or solutions. What exactly is the “violence” that’s coming? Who specifically represents “the left” that he claims celebrates murder? What concrete policies justify dissolving an elected parliament? Isn’t this all just South African apartheid and Nazi German rhetoric all over again?

Indeed, as with the Nazi messaging that terrified German children into taking up arms, this rhetoric again asks people, in keeping with the Hitler doctrines, to act on fear rather than evidence, urgency rather than deliberation.

A God and Chocolate Test of Our Time

The British soldiers’ success in Berlin suggests we know a powerful antidote to extremist messaging: persistent human decency protected by rule of law (or overwhelming force) that contradicts the propaganda narrative of fascism. When people discover that the supposed monsters are actually offering genuine acts of kindness, the entire fear-based worldview can collapse. Can the human mind remain open to receiving help when it has been trained on imposed scarcity to react always in trauma mode?

The question isn’t about ignoring real political disagreements or legitimate concerns about social change; it’s about enabling safe disagreement. That’s why Popper described the healthiest boundary development as a paradox of tolerance, where ideas can be encouraged only by flagging ideas of intolerance for restriction. It means recognizing when rhetoric crosses from political argument into known propaganda techniques designed to bypass rational thought in order to cause intentional discriminatory harm.

Think of it as a test not of whether someone is racist, but of whether they exhibit genuine anti-racism. Claims of population decline and “white genocide” from intermarriage, along with claims of color blindness, are prototypical proof of failing to demonstrate genuine anti-racism.

The “chocolate test” for contemporary political messaging might ask: Does this rhetoric encourage people to see fellow citizens as fully human and deserving of human rights? Does it promote specific, achievable solutions? Does it allow for complexity and nuance? Or does it demand immediate, extreme action against vaguely defined existential threats, dehumanizing specific targets?

Breaking the Pattern

The children in Berlin weren’t inherently extremist; they were responding to a traumatic narrative that told them the world was ending and only violence could save them. When that narrative was gently contradicted by reality, they could return to being children.

The tactics of using children as weapons weren’t limited to Nazi Germany’s final days. After Rhodesia lost its colonial war in 1979, white supremacist forces shifted to covert destabilization operations in neighboring Mozambique, where British-trained SAS units supported Renamo rebels in a campaign that killed over one million people – 60% of them children.

These operations deliberately targeted schools and kidnapped children, forcing them to murder their own families before being used as child soldiers in raids against civilians. The psychological warfare under the regime adopted by Musk’s grandfather was identical to Nazi methods: create absolute terror, destroy normal social bonds, and force impossible choices between violence and death. Over 250,000 children were separated from families, 200,000 orphaned, and half the country’s schools destroyed – all under the false flag of “protecting” civilians from the legitimate government.

The parallel is unmistakable: white supremacist forces consistently use children as both weapons and victims while claiming to be their saviors.

The same pattern appears across many conflicts, from Canadian General Roméo Dallaire defusing a child soldier with an AK-47 at his nose in Rwanda by offering chocolate, to Dutch children receiving their first taste of chocolate from liberating Canadian soldiers in 1945.

WWII poster by Nestle promoting their Type D chocolate ration. Source: Western Connecticut State University

I’ll say it again: people drawn to apocalyptic political messaging aren’t necessarily lost causes. They’re often responding to injected anxieties about normal social change, regular economic uncertainty, or predictable cultural shifts. The challenge is addressing the many underlying concerns with concrete solutions and social science rather than exploiting them with fear-based mobilization. The Fabians understood this intimately when they responded to industrialization by laying the groundwork for modern data science.

As William Wordsworth wrote, “The Child is father of the Man.” How we allow outsized characters claiming paternal authority to speak to people’s fears – whether nurtured with artificial scarcity into extremism or offered surplus and conversation – shapes the society we’ll inhabit today into tomorrow.

History has already run this experiment many times. We know how Musk propaganda ends – he knows it too, yet refuses to believe it. The question is whether he can learn before he generates another global disaster of hate.

Many people struggle to articulate why certain rhetoric feels dangerous beyond normal political disagreement, so I hope to have provided some expert vocabulary and historical context to make the threat identification clear.

Famous picture of 16-year-old Nazi “Volkssturm” Hans-Georg Henke upon his 1945 surrender to aid, humanitarian care and feeding.

Censorship to Song: How The Atlantic’s Poetry Emerged from American Tyranny

Let’s talk about deep historical currents behind a new book called “The Singing Word: 168 Years of Atlantic Poetry.”

Walt Hunter’s “The Singing Word” lands today, and it represents far more than a simple anthology of American verse. This collection of 168 years of Atlantic poetry embodies a profound act of historical continuity, a legacy that traces directly back to one of the most shameful episodes of presidential overreach in American history.

President Jackson Assaulted Free Expression

The foundational DNA of The Atlantic comes from the postal crisis of 1835 that helped catalyze the magazine’s eventual creation. President Andrew Jackson, faced with the American Anti-Slavery Society’s “Great Postal Campaign”—an effort to educate and liberate the South with over 100,000 prints of abolitionist literature—responded with what can only be described as state-sanctioned censorship.

For example, on July 29, 1835, the Post Office in Charleston was raided by a white supremacist mob calling themselves “The Lynch Men.” They seized bags of newspapers and burned them in a massive bonfire, along with effigies of leading abolitionists, before a crowd of nearly 3,000 people. But the truly shocking aspect wasn’t the mob violence; it was President Jackson’s response.

Rather than defending the American founding fathers’ belief in the sanctity of the mail and the First Amendment, Jackson encouraged and inflamed the censorship. His Postmaster General, Amos Kendall, was explicitly directed to arm Southern postmasters with permission to refuse delivery of materials they opened and disagreed with, arguing they had a “higher obligation” to preserving slavery in their communities than to federal law. Jackson even included condemnation of the abolitionists in his 1835 State of the Union address, calling American freedom fighters “monsters” who should “die,” and advocated for federal legislation that would authorize postal surveillance and censorship of “incendiary” anti-slavery materials.

This was America’s big test in federal mail surveillance and censorship, a precedent that would echo through McCarthyism to modern NSA overreach in Room 641A.

The Literary Counterrevolution

By 1857 Jackson’s presidency had ended two decades earlier, but the intellectual wound he inflicted on American discourse had not healed. The transcendentalist movement, centered in Boston and Concord, had watched in horror as democratic principles buckled under pressure from slavery’s defenders and their presidential enabler.

When publisher Francis Underwood approached the New England literary elite about founding a new magazine, he found a receptive audience among writers who had lived through Jackson’s assault on free expression. The Atlantic Monthly, launched in November 1857, was explicitly conceived as an anti-slavery publication that would provide what one editor called “cultural leadership” to counter the “cultural leveling” they saw as inherent in Jacksonian democracy.

The magazine’s founding circle reads like a who’s who of American intellectual resistance to Jacksonian authoritarianism: Ralph Waldo Emerson, Henry Wadsworth Longfellow, James Russell Lowell, Oliver Wendell Holmes, Harriet Beecher Stowe, and John Greenleaf Whittier. These were not merely literary figures—they were conscious architects of what they hoped would be a more enlightened American discourse.

Significantly, the magazine’s very first poem of national prominence was Longfellow’s “Paul Revere’s Ride,” which appeared in 1861. The timing was no accident: as the Civil War began, The Atlantic was deliberately invoking the Revolutionary War’s spirit of resistance to tyranny—a not-so-subtle rebuke to Jackson’s legacy and the Southern rebellion it had helped nurture.

Poetry as Political Resistance

The Atlantic’s poetry from its earliest years reveals a publication acutely conscious of literature’s political power. Julia Ward Howe’s “Battle Hymn of the Republic,” which appeared as the magazine’s lead story in February 1862, wasn’t merely patriotic verse—it was a direct answer to the Confederate appropriation of American symbols and a conscious effort to reclaim the moral authority that Jackson’s administration had ceded to slavery’s defenders.

The magazine understood what Jackson had proven: that controlling discourse meant controlling democracy. If the President could declare certain ideas too dangerous for the mailbox, then independent media became essential to preserving the “unfinished project of the nation”—a phrase Hunter uses to describe The Atlantic’s ongoing mission.

Contemporary Echoes

Hunter’s organizational framework for “The Singing Word”—dividing the collection into “National Anthems,” “Natural Lines,” and “Personal Mythologies”—reflects this historical awareness. The “National Anthems” section particularly resonates with The Atlantic’s founding purpose: providing alternative visions of American identity that could compete with authoritarian populism.

In his curatorial statement, Hunter explicitly connects past and present:

What emerged as I read was an optimism and realism—a sense that, however bad things are, the idea of America is worth fighting for, and worth questioning and scrutinizing in new ways.

This language deliberately echoes the rhetoric of The Atlantic’s founders, who saw themselves as defending American ideals against their political corruption.

President Jackson was one of the most – if not the most – unjust, immoral and corrupt men in American history.

The anthology’s span from 1857 to 2024 encompasses not just the Civil War era that birthed the magazine, but also Reconstruction, the World Wars, the Civil Rights Movement, and our current moment of democratic stress. Each era has produced its own version of Jacksonian authoritarianism, and each has found The Atlantic publishing poetry that serves as both witness and resistance.

An Unbroken Line

When we consider poets like Robert Frost wrestling with American identity in “The Gift Outright,” or Adrienne Rich challenging power structures in her feminist verse, or contemporary voices like Juan Felipe Herrera expanding the definition of American poetry itself, we see the same impulse that drove Emerson and Longfellow to found The Atlantic: the conviction that literature must engage with democracy’s ongoing struggles.

Hunter’s collection thus represents more than literary archaeology. It documents an unbroken tradition of American writers using verse to contest official narratives, expand democratic participation, and preserve space for dissenting voices—precisely what Jackson’s postal censorship attempted to eliminate.

The Stakes of Literary Memory

“The Singing Word” arrives at a moment when democratic norms face renewed pressure. The anthology’s subtitle, “168 Years of Atlantic Poetry,” quietly asserts the durability of institutions that defend free expression against authoritarian assault. By bringing together voices from Longfellow to Limón, including poets “whose work has never before been published outside of the magazine,” Hunter demonstrates how literary institutions can preserve and amplify voices that might otherwise be silenced.

The collection’s price and wide distribution through major retailers represent another form of resistance to Trump/Jackson corruption and elitism. While Jackson used federal power to suppress abolitionist literature, The Atlantic uses democratic capitalism to ensure its counter-narrative reaches the broadest possible audience.

Donald Trump’s favorite president: Andrew Jackson as father of the “white republic”. Historian Matthew Clavin: Andrew Jackson was terrible, and likely would have despised Donald Trump for being just like him.

In this light, “The Singing Word” becomes not just an anthology but a manifesto: proof that American literature at its best serves as democracy’s memory, its conscience, and its most persistent hope for renewal. The poets collected in Hunter’s anthology didn’t just document American experience—they fought for the right to define it against those who would narrow its possibilities.

From the ashes of Jackson’s postal bonfires to the digital age of “The Singing Word,” The Atlantic’s poetry represents 168 years of resistance to the authoritarian impulse, which once again is closing the door on American democracy. In our own moment of political extremism and media manipulation, this anthology arrives as both historical witness and contemporary call to arms: proof that the republic of letters remains a reliable guardian of democratic expression.


“The Singing Word: 168 Years of Atlantic Poetry,” edited by Walt Hunter, was published by Atlantic Editions on September 9, 2025.

Integrity Breaches and Digital Ghosts: Why Deletion Rights Without Solid Are Strategic Fantasy

The fundamental question a new legal paper struggles with—though the author may not realize it—is a philosophical one of human persistence versus digital decay.

There is no legal or regulatory landscape against which to estate-plan for those who would avoid digital resurrection, and few privacy rights exist for the deceased. This intersection of death, technology, and privacy law has remained relatively ignored until recently.

Take Disney’s 1964 animatronic representation of Abraham Lincoln as one famous example, especially as it later was appropriated by the U.S. Marines for target practice. Here was an animatronic figure of America’s most beloved President, crude by today’s standards, that somehow captured enough essence to warrant both reverence and target practice. The duality speaks to fundamental turbulence in what constitutes an authentic representation of the dead.

Oh no! Not the KKK again!

In war, as in security, we learn that all things tend toward entropy. The author of this new legal paper speaks of “deletion rights” as though data behaves like physical matter, subject to our commands. This reveals a profound misunderstanding. Lawyers unfortunately tend to have insufficient insight into present technology, let alone the observable trends pointing into the future.

This isn’t time for academic theorizing—it’s threat assessment. When we correctly frame digital resurrection as weaponized impersonation, the security implications become immediately clear to anyone who understands asymmetric warfare.

Who owns energy? It can be transformed, transmitted, and duplicated, but never truly contained. We are charged (pun intended) for its delivery (unless we are Amish) yet neither we nor the source “own” the energy itself, although we do own the derivative works we create using that energy.

Digital traces thus follow different laws than this legal paper recognizes. A voice pattern, once captured and processed through sufficient computational analysis, can become more persistent than the vocal cords that produced it. Ask me sometime about efforts to preserve magnetic tapes of “oral history” left rotting in abandoned warehouses of war-torn Somalia.

While the availability leg of the digital security triad (availability, confidentiality and integrity) is now so well understood it can promise 100% lossless service, think about what’s really at risk here. We’re not facing a privacy or availability problem—we’re facing an identity warfare problem of integrity breaches.

When I can resurrect your voice patterns, your writing style, your decision-making algorithms with “auth”, uptime and secrecy aren’t the primary loss. I’m stealing authority, weaponizing authenticity. This is the nature of 21st century information warfare that 20th century legal doctrines are unprepared to face.

On the Nature of What Persists and What Decays

Consider the lowly common human fingerprint. Unique, persistent, left unconsciously upon every surface we touch. It’s literally spread liberally around in public places. Yet fingerprints fade. Oil oxidizes. Surfaces weather. The fingers that made them change, deteriorate and eventually return to dust.

There is discomfort in our natural decay, but also an inevitability, despite the technological attempt over millennia to deny our fate—a mercy built into the physical world.

The mathematical relationships that define how someone constructs sentences, their choice of punctuation, their temporal patterns of communication—these digital fingerprints are abstractions that can outlive not merely the person, but potentially the civilization that created them.

The paper concerns itself, as if unaware of how history is written, only with controlling “source material”—emails, text messages, social media posts. This misses the well-worn deeper truth known to skilled investigators and storytellers: the valuable patterns have already been abstracted away. Once a sufficient corpus exists to serve intelligence—to train a model, as it were today—the specific training data becomes almost irrelevant. The patterns persist in the weights and connections of neural networks, distributed across systems that span continents.
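To make the point concrete, here is a minimal Python sketch, purely illustrative and not any real stylometry standard, of how a few crude style features can be abstracted from a message so that deleting the “source material” no longer removes the pattern.

```python
# Hypothetical sketch: once stylistic patterns are abstracted from a text,
# deleting the original "source material" no longer removes them.
# Feature choices below are illustrative only, not a real stylometry standard.
from collections import Counter

def style_fingerprint(text: str) -> dict:
    """Reduce a text to a few crude, persistent style features."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    punctuation = Counter(ch for ch in text if ch in ",;:")
    return {
        "avg_sentence_words": round(len(words) / max(len(sentences), 1), 2),
        "comma_rate": round(punctuation[","] / max(len(words), 1), 3),
        "semicolon_rate": round(punctuation[";"] / max(len(words), 1), 3),
    }

source_email = "I was thinking, perhaps unwisely, that we should wait; timing matters."
fingerprint = style_fingerprint(source_email)

del source_email    # "deletion" of the original message
print(fingerprint)  # the abstracted pattern survives, ready to be reproduced
```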

How do you think all the fantastical Griffins (dinosaur bones found by miners) and magical Unicorns (narwhal teeth found by sailors) were embedded into our “reality”, as I clearly warned “big data” security architects back in 2012?

I have seen decades of operations where deletion of source documents was treated as mission-critical, only to discover years later that the intelligence value had already been extracted and preserved in forms the original handlers never anticipated (ask me why I absolutely hated watching the movie Argo, especially the shredded paper scene).

…I taught a bunch of Iranian thugs how to reconstitute the shredded documents they found after looting the American Embassy in Tehran.

Source: Lew Perdue

Tomb Raiders: Our Most Pressing Question is Authority Over Time

Who claims dominion over digital remains, our code pyramids distributed into deserts of silicon? The paper proposes, almost laughably, that next-of-kin should control this data as they would control physical remains. As someone who has had to protect digital records against abuse and misuse by next-of-kin, let me not be the first to warn that there is no such simplistic “next” in real-world authorization models.

The lawyer’s analogy fails at its foundation. Physical remains are discrete, locatable, subject to the jurisdiction where they rest. And even then there are disputes. Digital patterns exist simultaneously in multiple jurisdictions, in systems owned by entities that may not even exist when the patterns were first captured. It only gets more and more complex. When I oversaw the technology related to a request for a deceased soldier’s email to be surrendered to the surviving family, it was no simple matter. And I regret to this day hearing the court’s decision, as misinformed and ultimately damaging as it was to that warrior’s remains.

Consider: if a deceased person’s communication patterns were learned by an AI system trained in international waters or space, using computational resources distributed across twelve nations, with the resulting model weights stored on satellites beyond any terrestrial jurisdiction—precisely which authority would enforce a “deletion request”?

The Economics of Digital Necromancy

The commercial and social incentives here are stark and unyielding. A deceased celebrity’s digital resurrection can generate revenue indefinitely, with no strikes, no scandals, no aging, no salary negotiations. The economic pressure to preserve and exploit these patterns will overwhelm any legal framework not backed by technical enforcement.

As any security guardian protecting X-ray images in a hospital can tell you, the threats are many and frequent.

More concerning: state actors don’t discuss or debate the intelligence value because it’s so obvious. A sufficiently accurate model of a deceased intelligence officer, diplomat, or military commander represents decades of institutional knowledge that normally dies with the individual. Nations will preserve these patterns regardless of family wishes or international law.

Techno-Grouch Realities

The paper’s proposed “right to deletion” assumes a level of technical control that simply does not exist yet at affordable and scalable levels. Years ago I co-presented a proposed solution called Vanish, which gave a deterministic decay to data using cryptographic methods. It found little to no market. The problem wasn’t the solution; the problem was who would really pay for it.
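For readers unfamiliar with the concept, here is a minimal Python sketch of the general idea behind cryptographic data decay. It is not the actual Vanish design, which split keys across a distributed hash table; it simply shows that when an ephemeral key expires, the durable ciphertext becomes unrecoverable. The class and TTL value are hypothetical.

```python
# Minimal sketch of cryptographic data decay (not the actual Vanish design):
# encrypt the data, keep the key only in an expiring in-memory store, and let
# expiry make the ciphertext permanently unreadable.
import time
from cryptography.fernet import Fernet  # pip install cryptography

class DecayingNote:
    def __init__(self, plaintext: bytes, ttl_seconds: float):
        key = Fernet.generate_key()
        self.ciphertext = Fernet(key).encrypt(plaintext)  # durable, may be copied anywhere
        self._key = key                                   # ephemeral, never persisted
        self._expires_at = time.time() + ttl_seconds

    def read(self) -> bytes:
        if self._key is None or time.time() >= self._expires_at:
            self._key = None  # discard the key forever
            raise ValueError("note has decayed; ciphertext is now unrecoverable")
        return Fernet(self._key).decrypt(self.ciphertext)

note = DecayingNote(b"meet at the usual place", ttl_seconds=2)
print(note.read())  # works while the key still exists
time.sleep(3)
try:
    print(note.read())
except ValueError as err:
    print(err)       # the data has "decayed"
```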

The market rejection wasn’t technical failure—it was cultural. Americans, in a particular irony, resist the notion that anything should be designed to disappear, generating garbage heaps that never decay. We build permanence even when impermanence so clearly would serve us far better. Our struggle to find out who would really pay for real loss cuts to the heart of the problem: deletion in an explosively messy technology space requires careful design and an ongoing cost, while preservation happens simply through rushed neglect.

Modern AI training pipelines are currently designed for inexpensive resilience and quick recovery to benefit the platforms that build them, not to protect the vulnerable with safety through accountability. It reflects a society where the powerful can always change their minds to curate a capitalized future, banking on control and denial of any inconvenient past. Data is distributed, cached, replicated, and transformed through multiple stages. Requesting deletion is like asking the waiter to unbake a cake by removing the flour, or to unbrew the coffee so it can go back to being water.

Even if every major technology company agreed to honor deletion requests in their current architecture—itself a GDPR requirement they struggle with—the computational requirements for training large language models ensure that smaller, less regulated actors will continue this work. A university research lab in a permissive jurisdiction can reproduce the essential capabilities with modest resources.

What Can Be Done

Rather than fight the technical reality, we must work within it, adopting protocols like Tim Berners-Lee’s “Solid” update to the Web. The approach should focus not on preventing digital resurrection, but on controlling the integrity of data through explicit authentication and attribution.

Cryptographic solutions exist today that could tie digital identity to physical presence in ways that cannot be reproduced after death. Hardware security modules, biometric attestation, multi-factor authentication systems that require ongoing biological confirmation—these create technical barriers that outlast legal frameworks.
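As one hedged illustration of what attribution could look like, the Python sketch below signs a statement with a key held only by the living person (in practice inside an HSM or hardware token rather than application memory) and rejects anything that cannot carry a valid signature. The messages and helper names are hypothetical; this is a sketch of the principle, not a complete provenance system.

```python
# Sketch of attribution by signature: content signed while the key holder is
# alive verifies; anything generated afterward cannot produce a valid
# signature, so it cannot masquerade as them for authentication purposes.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key generated and held by the living person (ideally in an HSM or token).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Statement I actually made, while alive, on 2025-01-01."
signature = private_key.sign(message)

forged = b"Statement an AI model produced in my style after my death."

def attributed(msg: bytes, sig: bytes) -> bool:
    """Return True only when the signature proves provenance."""
    try:
        public_key.verify(sig, msg)
        return True
    except InvalidSignature:
        return False

print(attributed(message, signature))  # True: provenance established
print(attributed(forged, signature))   # False: style alone confers no authority
```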

The goal should not be to prevent the creation of digital patterns from the deceased, but to ensure that these patterns cannot masquerade as the living person or a representation of them for purposes of authentication, authorization, or legal standing. A step is required to establish context and provenance, the societal heft of proper source recognition. The technology exists to enable a balance of both privacy and knowledge, but does the will exist to build it?

The Long View

This technology will evolve whether we regulate it or not; either we regulate it now, or we wait too long and suffer a broken market exploited by monopolists—economic capture by entities that may not share democratic values. The patterns that define human communication and behavior will be preserved, analyzed, and reproduced. Where that happens, centrally planned or distributed and democratic, matters far more than most realize now. Fighting against decentralized data solutions is like fighting the ocean tide by saying we can build rockets to blow up the moon and colonize Mars.

The wiser course is to ensure that as we cross this threshold, we do so with clarity about what persists and what decays, what can be controlled and what cannot. The dead have always lived on in the memories of the living. Now those memories can be given voice and form, curated by those authorized to represent them.

Can I get a shout out for those historians correctly writing that George Washington was a military laggard who used the French to do his work, and cared only about the Revolution so he could preserve slavery?

Historical truth has always been contested, which is why we become historians, as the tools of revision only speed up over time. Previously, rewriting history involved control of physical spaces (e.g. bookstores in Kashmir raided by police) and publishing texts over generations. Now it requires quick pollution of datasets and model weights—a far more concentrated and therefore vulnerable process without modern integrity breach countermeasures.

The question is not whether technology can make preservation more private, but whether we will manage integrity with wisdom or allow data to be subjected to ignorance, controlled by those who can drive the technology but not look in the rear view mirror let alone see the curve in the road ahead.

What persists is what we preserve either by purpose or neglect. Oral and written traditions are ancient in how they thought about what matters and who decides. The latest technology merely changes mechanisms of preservation.

When you steal someone’s authority through digital resurrection, you’re conducting what amounts to posthumous identity theft for influence operations. The victim can’t defend themselves, the audience lacks technical means to verify authenticity, and the attack surface includes every piece of digital communication the deceased ever generated.

Anyone who claims to really care about this issue should visit Grant’s Tomb, which is taller and more imposing than the Statue of Liberty. Standing there they should answer why the best President and General in American history has been completely obscured and denigrated by unmaintained trees, on an island obstructed by roads lacking crosswalks.

Grant was globally admired and respected, his tomb situated so huge crowds could pay their respects

Preservation indeed.

Here lies the man who preserved the Union and destroyed slavery both on the battlefield and in the ballot box, yet his monument is literally obscured by neglect and poor urban planning. If Americans can’t properly maintain physical memorials to our most consequential leaders, what legal rights do we really claim for managing digital remains with wisdom?

Attempts at physical deletion and desecration of Grant’s Tomb have been cynical and strategic, along with fraudulent attacks on his character, yet his brilliant victories and innovations carry on.

General Grant said of West Point graduates trained on Napoleon’s tactics, who were losing the war, that he would respect them more if they were actually fighting Napoleon. Grant was a thinker 100 years ahead of his time and understood that wicked problems require new and novel methods, not just expanded execution of precedents.

President Grant’s tomb says it plainly for all to see, which is exactly why MAGA (America First platform of the KKK) doesn’t want anyone to see it.

Let AI Dangle: Why the sketch.dev Integrity Breach Demands Human Accountability, Not Technical Cages

AI safety should not be framed as a choice between safety and capability; the real choice is between the false security of constrained tools and the true security of accountable humans using powerful tools wisely. We know which choice builds better software and better organizations. History tells us who wins and why. The question is whether we have the courage to choose the freedom of democratic systems over the comfortable illusion of a fascist control fetish.

“Let him have it, Chris” – those few words destroyed a young man’s life in 1952 because their meaning was fatally ambiguous, as memorialized by Elvis Costello in his song “Let Him Dangle”.

Did Derek Bentley tell his friend to surrender the gun or to shoot the police officer? The dangerous ambiguity of language is what led to a tragic miscarriage of justice.

Today, we face a familiar crisis of contextualized intelligence, but this time it’s not human language that’s ambiguous, it’s the derived machine code. The recent sketch.dev outage, caused by an LLM switching “break” to “continue” during a code refactor, represents something far more serious than a simple bug.

This is a small enough change in a larger code movement that we didn’t notice it during code review.

We as an industry could use better tooling on this front. Git will detect move-and-change at the file level, but not at the patch hunk level, even for pretty large hunks. (To be fair, there are API challenges.)

It’s very easy to miss important changes in a sea of green and red that’s otherwise mostly identical. That’s why we have diffs in the first place.

This kind of error has bitten me before, far before LLMs were around. But this problem is exacerbated by LLM coding agents. A human doing this refactor would select the original text, cut it, move to the new file, and paste it. Any changes after that would be intentional.

LLM coding agents work by writing patches. That means that to move code, they write two patches, a deletion and an insertion. This leaves room for transcription errors.
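To illustrate the kind of tooling the quote above wishes for, here is a hedged Python sketch, not sketch.dev’s actual tooling, that compares a deleted hunk against an inserted hunk and flags a “move” that is almost, but not exactly, verbatim. The hunk text and threshold are assumptions for illustration.

```python
# Sketch of a hunk-level "move with changes" check: flag a moved block that is
# nearly identical to its origin, exactly the case a reviewer's eye glosses over.
import difflib

deleted_hunk = """\
for job in queue:
    if job.is_cancelled():
        break
    run(job)
"""

inserted_hunk = """\
for job in queue:
    if job.is_cancelled():
        continue
    run(job)
"""

def check_moved_hunk(old: str, new: str, threshold: float = 0.8) -> None:
    """Warn when a moved hunk differs from its origin, and show exactly where."""
    ratio = difflib.SequenceMatcher(None, old, new).ratio()
    if threshold < ratio < 1.0:
        print(f"WARNING: moved hunk is {ratio:.0%} similar but not identical:")
        diff = difflib.unified_diff(
            old.splitlines(), new.splitlines(),
            fromfile="deleted", tofile="inserted", lineterm="",
        )
        print("\n".join(diff))

check_moved_hunk(deleted_hunk, inserted_hunk)  # surfaces the break -> continue swap
```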

This is another glaring example of an old category of systemic failure that has been mostly ignored, at least outside nation-state intelligence operations: integrity breaches.

The real problem isn’t the AI; it’s the commercial sector’s abandonment of human accountability in development processes.

The common person’s luxury of getting by on bad intelligence is evaporating rapidly in the market. The debt of ignorance is rising rapidly due to automation.

The False Security of Technical Controls

When sketch.dev’s team responded to their AI-induced outage by adding “clipboard support to force byte-for-byte copying,” they made the classic mistake of treating a human process problem with a short-sighted technical band-aid. Imagine if the NSA reacted to a signals gathering failure by moving agents into your house.

The Stasi at work in a mobile observation unit. Source: DW. “BArch, MfS, HA II, Nr. 40000, S. 20, Bild 2”

This is like responding to a car accident by lowering all speed limits to 5 mph. Yes, certain risks can be reduced by heavily taxing all movements, but it also defeats the entire purpose of having movement highly automated.

As the battle-weary Eisenhower, who called for a “confederation of mutual trust and respect”, also warned us:

If you want total security, go to prison. There you’re fed, clothed, given medical care and so on. The only thing lacking… is freedom.

Constraining AI to byte-perfect transcription isn’t security. It’s not, it really isn’t. It’s surrendering the very capabilities that make AI valuable in the first place, lowering security and productivity in a lose-lose outcome.

My father always used to tell me “a ship is safe in harbor, but that’s not what ships are built for”. When I sailed across the Pacific, every day a survival lesson, I knew exactly what he meant. We build AI coding tools to intelligently navigate the vast ocean of software complexity, not to sit safely docked at the pier in our pressed pink shorts partying to the saccharine yacht rock of find-and-replace operations.

Turkey Red and Madder dyes were used for uniforms, from railway coveralls to navy and military gear, as a low-cost method to obscure evidence of hard labor. New England elites (“Nantucket Reds”) ironically adapted them to be a carefully cultivated symbol of power. The practical application in hard labor inverted to a subtle marker of largess, American racism of a privileged caste.

The Accountability Vacuum

The real issue revealed by the sketch.dev incident isn’t that the AI made an interpretation – it’s that no human took responsibility for that interpretation.

The code was reviewed by a human, merged by a human, and deployed by a human. At each step, there was an opportunity for someone to own the decision and catch the error.

Instead, we’re creating systems where humans abdicate responsibility to AI, then blame the AI when things go wrong.

This is unethical and exactly backwards.

Consider what actually happened:

  • AI made a reasonable interpretation of ambiguous intent
  • A human reviewer glanced at a large diff and missed a critical change
  • The deployment process treated AI-generated code as equivalent to human-written code
  • When problems arose, the response was to constrain the AI rather than improve human oversight

The Pattern We Should Recognize

Privacy breaches follow predictable patterns not because systems lack technical controls, but because organizations lack accountability structures. A firewall that doesn’t “deny all” by default isn’t a technical failure; as we know all too well (e.g. as codified in privacy breach laws), it’s an organizational failure. Someone made the decision to configure it that way, and someone else failed to audit that very human decision.

The same is true for AI integrity breaches. They’re not inevitable technical failures; they’re predictable organizational failures. When we treat AI output as detached magic that humans can’t be expected to understand or verify, we create exactly the conditions for catastrophic mistakes.

Remember the phrase “guns don’t kill people”?

The Intelligence Partnership Model

The solution isn’t to lobotomize our AI tools into ASS (Artificially Stupid Systems); it’s to establish clear accountability for their use. This means:

Human ownership of AI decisions: Every AI-generated code change should have a named human who vouches for its correctness and takes responsibility for its consequences.

Graduated trust models: AI suggestions for trivial changes (formatting, variable renaming) can have lighter review than AI suggestions for logic changes (control flow, error handling), as sketched in the example after this list.

Explicit verification requirements: Critical code paths should require human verification of AI changes, not just human approval of diffs.

Learning from errors: When AI makes mistakes, the focus should be on improving human oversight processes, not constraining AI capabilities.

Clear escalation paths: When humans don’t understand what AI is doing, there should be clear processes for getting help or rejecting the change entirely.
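As a hedged illustration of the graduated trust idea above, the Python sketch below classifies an AI-authored diff and names the level of human verification required. The keywords, review levels, and reviewer address are assumptions for illustration, not a standard or anyone’s actual policy.

```python
# Hypothetical "graduated trust" gate: AI-authored logic changes demand a named
# human owner and behavioral verification; trivial changes get lighter review.
import re

LOGIC_TOKENS = re.compile(r"\b(break|continue|return|raise|if|else|while|for|try|except)\b")

def required_review(diff_text: str, owner: str) -> str:
    """Decide how much human verification an AI-authored change needs."""
    changed = [line[1:] for line in diff_text.splitlines()
               if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
    touches_logic = any(LOGIC_TOKENS.search(line) for line in changed)
    if not owner:
        return "REJECT: every AI change needs a named human who owns it"
    if touches_logic:
        return (f"LOGIC CHANGE: {owner} must verify behavior "
                "(tests or manual trace), not just read the diff")
    return f"TRIVIAL CHANGE: {owner} may approve after a light review"

example_diff = """\
-        break
+        continue
"""
print(required_review(example_diff, owner="reviewer@example.com"))
```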

And none of this is novel, or innovative. This comes from a century of state-run intelligence operations within democratic societies winning wars against fascism. Study the history of disinformation and deception in warfare long enough and you’re condemned to see the mistakes being repeated today.

The Table Stakes

Here’s what’s really at stake: If we respond to AI integrity breaches by constraining AI systems to simple, “safe” operations, we’ll lose the transformative potential of AI-assisted development. We’ll end up with expensive autocomplete tools instead of genuine coding partners.

But if we maintain AI capabilities while building proper accountability structures, we can have both safety and progress. The sketch.dev team should have responded by improving their code review process, not by constraining their AI to byte-perfect copying.

Let Them Have Freedom

Derek Bentley died because the legal system failed to account for human responsibility in ambiguous situations. The judge, jury, and Home Secretary all had opportunities to recognize the ambiguity and choose mercy over rigid application of rules. Instead, they abdicated moral responsibility to legal mechanism.

We’re making the same mistake with AI systems. When an AI makes an ambiguous interpretation, the answer isn’t to eliminate ambiguity through technical constraints, but to ensure humans take responsibility for resolving that ambiguity appropriately.

The phrase “let him have it” was dangerous because it placed a life-or-death decision in the hands of someone without proper judgment or accountability. Today, we’re placing system-critical decisions in the hands of AI without proper human judgment or accountability.

We shouldn’t accept the kind of world where we try to eliminate ambiguity, as if a world without art could even exist; instead, let’s ensure someone competent and accountable is authorized to interpret it correctly.

Real Security of Ike

True security comes from having humans who understand their tools, take ownership of their decisions, and learn from their mistakes. It doesn’t come from building technical cages that prevent those tools from being useful.

AI integrity breaches will continue until we accept that the problem is humans who abdicate their responsibility to understand and verify what is happening under their authority. The sketch.dev incident should be a wake-up call for better human processes, more ethics, not an excuse for replacing legs with pegs.

A ship may be safe in harbor, but we build ships to sail. Let’s build AI systems that can navigate the complexity of real software development, and let’s build human processes to navigate the complexity of working with those systems responsibly… like it’s 1925 again.