Scientists: Napoleon’s Mistreated Army Was Dying Faster Than Enemies Could Kill Them

600,000 troops were destroyed by Napoleon’s mistreatment, leaving barely 20,000 alive. This scene captures the desperation of their existence, burning whatever they could find for warmth, including regimental standards and flags. These weren’t just pieces of cloth; they were sacred symbols of military honor and unit identity that French soldiers burned for basic survival, absent of any pride. Source: Wojciech Adalbert Kossak’s woodcut depicting French retreat on 29 November 1812.
For all the extravagant jewelry and fine dining the ruthless Napoleon loved to shower himself in, his troops basically died as disposable slaves.

Binder says. “We have these paintings in the museums of soldiers in shiny armors, of Napoleon on his horse, fit young men marching into battle.”

“But in the end, when we look at the human remains, we see an entirely different picture,” she says.

It’s a picture of lifelong malnutrition, broken feet from marching too far, too quickly, and bodies riddled with disease.

Napoleon was truly a horrible human. The Grande Armée marched without adequate supply lines because his plan was literally to rape and pillage the land—as if his soldiers could sustain themselves while marching hundreds of miles into hostile territory. When Russia came up empty, hundreds of thousands of his own men starved and froze to death. Meanwhile, his baggage train advanced and retreated with his expansive silver dinnerware and fresh steaks.

Scientists are thus proving a subtext of the well-known disasters, that Napoleon never was building a professional army. He was instead rapidly extracting every ounce possible from expendable human material in a hopeless imperial ambition that couldn’t last.

Authoritarian systems consistently demonstrate this pattern of toxic leadership that treats humans as disposable, while maintaining elaborate fake performances of power and legitimacy to hide their dangerous extraction.

The gap that emerges between the story telling of museum paintings, and the facts from modern bone pathology, isn’t just about artistic license; it’s evidence of horribly corrupted power systematically erasing human cost in projects and logs.

Devastating supply line failure killing his own men wasn’t from logistical incompetence—it was a strategy of “efficiency” coming to bear. The fail faster doctrine of Napoleon, in fact failed faster, to the tune of 400,000 and more of his own soldiers destroyed for… nothing.

Charles Minard’s renowned graphic of Napoleon’s 1812 march on Moscow. The tremendous numbers of casualties suffered shows in thinning of the lines (1 millimeter of thickness is equal to 10,000 men) through space and time.

Napoleon is still framed falsely as a military genius rather than as mass murderer, someone who burned everything he touched, destroyed human lives at an industrial scale and then “efficiently” lost it all. His “strong man” propaganda continues to work centuries later, which should make us deeply skeptical of how current authoritarian systems (e.g. Trump) present their own real costs.

Tesla FSD Shows AI Getting Worse Over Time

The great myth of AI is that it will improve over time.

Why?

I get it, as I warned about AI in 2012, people want to believe in magic. A narwhal tusk becomes a unicorn. A dinosaur bone becomes a griffin. All fake, all very profitable and powerful in social control contexts.

What if I told you Tesla has been building an AI system that encodes and amplifies worsening danger, through contempt for rules, safety standards, and other people’s lives?

People want to believe in the “magic” of Tesla, but there’s a sad truth finally coming to the surface. Elon Musk has been promising for ten years AI can make his cars driverless “a year from now”, as if Americans can’t recognize snake oil of the purest form.

Back in 2016 I gave a keynote talk about Tesla’s algorithms being murderous, implicated in the death of Josh Brown. I predicted it would get much worse, but who back then wanted to believe this disinformation historian’s Titanic warnings?

Source: My 2016 BSidesLV keynote presentation comparing Tesla autopilot to the Titanic

If there’s one lesson to learn from the Titanic tragedy, it’s that designers believed their engineering made safety protocols obsolete. Musk sold the same lie about algorithms. Both turned passengers into unwitting deadly test subjects.

I’ll say it again now, as I said back then despite many objections, Josh Brown wasn’t killed by a malfunction. The ex-SEAL was killed by a robot executing him as it had been trained.

Ten years later and we have copious evidence that Tesla systems in fact get worse over time.

NHTSA says the complaints fall into two distinct scenarios. It has had at least 18 complaints of Tesla FSD ignoring red traffic lights, including one that occurred during a test conducted by Business Insider. In some cases, the Teslas failed to stop, in others they began driving away before the light had changed, and several drivers reported a lack of any warning from the car.

At least six crashes have been reported to the agency under its standing general order, which requires an automaker to inform the regulator of any crash involving a partially automated driving system like FSD (or an autonomous driving system like Waymo’s). And of those six crashes, four resulted in injuries.

The second scenario involves Teslas operating under FSD crossing into oncoming traffic, driving straight in a turning lane, or making a turn from the wrong lane. There have been at least 24 complaints about this behavior, as well as another six reports under the standing general order, and NHTSA also cites articles published by Motor Trend and Forbes that detail such behavior during test drives.

Perhaps this should not be surprising. Last year, we reported on a study conducted by AMCI Testing that revealed both aberrant driving behaviors—ignoring a red light and crossing into oncoming traffic—in 1,000 miles (1,600 km) of testing that required more than 75 human interventions.

Let’s just start with the fact that everyone has been saying garbage in, garbage out (GIGO) is a challenge to overcome in AI, since forever.

And by that I mean, even common sense standards should have forced headlines about Tesla being at risk of soaking up billions of garbage data points and producing dangerous garbage as a result. It was highly likely, at face value, to become a lawless killing machine of negative societal value. And yet, its stock price has risen without any regard for this common sense test.

Imagine an industrial farmer announcing he was taking over a known dangerous superfund toxic sludge site to suddenly produce the cleanest corn ever. We should believe the fantasy because why? And to claim that corn will become less deadly the more people eat it and don’t die…? This survivor fallacy of circular nonsense from Tesla is what Wall Street apparently adores. Perhaps because Wall Street itself is a glorified survivor fallacy.

Let me break the actual engineering down, based on the latest reports. The AMCI Testing data (75 interventions in 1,000 miles) provides a quantifiable deterioration rate. That’s a Tesla needing intervention every 13 miles.

Holy shit, that’s BAD. Like REALLY, REALLY BAD. Tesla is garbage BAD.

Human drivers in the US average one police-reported crash every 165,000 miles. Tesla FSD requires human intervention to prevent violations or crashes at a rate roughly 12,000 times higher than human baseline crash rates.

Elon Musk promised investors a 2017 arrival of a product superior to “human performance”, yet in 2025 we see code that is still systematically worse than a drunk teenager.

And, it’s actually even worse than that. Tesla re-releasing a “Mad Max” lawless driving mode in 2025 is effectively a cynical cover up operation, to double-down on deadly failure as normalized outcomes on the road. Mad Max was a killer.

I’ve disagreed with GIGO for as long as I’ve pointed out Tesla will get worse over time. I could explain, but I am not sure a higher bar even matters at this point. There’s no avoiding the fact that the basic GIGO tests show how Tesla was morally bankrupt from day one.

The problem isn’t just that Tesla faced a garbage collection problem, it’s that their entire training paradigm was fundamentally flawed on purpose. They’ve literally been crowdsourcing violations and encoding failures as learned behavior. They have been caught promoting rolling stop signs, they have celebrated cutting lanes tight, and even ingested a tragic pattern of racing to “beat” red lights without intervention.

That means garbage was being relabeled “acceptable driving.” Like picking up an old smelly steak that falls on the floor and serving it anyway as “well done”. Like saying white nationalists are tired of being called Nazis, so now they want to be known only as America First.

This is different from traditional GIGO risks because the garbage is a loophole that allows a systematic bias shift towards more aggressive, rule-breaking, privileged asshole behavior (e.g. Elon Musk’s personal brand).

The system over time was setup to tune towards narrowly defined aggressive drivers, not the safest ones.

What makes this particularly insidious is the feedback loop I identified back in 2016. “Mad Max” mode from 2018 wasn’t just marketing resurfacing in 2025, it’s a legal and technical weapon deployed by the company strategically.

Source: My presentation at MindTheSec 2021

Explicitly offering a “more aggressive” option means Tesla moves the Overton window while creating plausible deniability: “The system did what users wanted.”

This obscures that their baseline behavior was degraded by training on violations, and reframes the failures within a worse option. Disinformation defined.

Musk’s snake oil promises – that Teslas would magically become safer through fleet learning – require people to believe that more data automatically equals better outcomes. Which is like saying more sugar is going to make you happier. It’s only true if you have labeled ground truth, to know how close to diabetes you are. It needs a reward function aligned with actual safety, and the ability to detect and correct for systematic biases.

Tesla has none of these.

They have billions of miles of “damn, I can’t believe Tesla got away with it so far, I’m a gangsta cheating death” which is NOT the same as if its software drove the car legally let alone safely.

Tesla claimed to be doing engineering (testable, falsifiable, improvable) while actually doing testimonials (anecdotal, survivorship-biased, unfalsifiable). “My Tesla didn’t crash” is not data about safety, it’s absence of negative outcome, which is how drunk drivers justify their behavior too… like a teapot orbiting the sun (unfalsifiable claims based on absence of observed harm).

OpenAI CISO Admits They Have Become the Theranos of AI

A CISO announces a dangerous “unsolved security problem” in his product when it ships. We’ve seen this playbook before.

“Theranos’ Elizabeth Holmes Plays the Privileged White Female Card” Source: Bloomberg Law

OpenAI’s Chief Information Security Officer (CISO) Dane Stuckey just launched a PR campaign admitting the company’s new browser exposes user data through “an unsolved security problem” that “adversaries will spend significant time and resources” exploiting.

This is a paper trail for intentional harm.

The Palantir Playbook

Dane Stuckey joined OpenAI in October 2024 from Palantir – the self-licking ISIS cream cone that generates threats to justify selling surveillance tools to anti-democratic agencies (e.g. Nazis in Germany). His announcement of joining the company emphasized his ability to “enable democratic institutions” – Palantir double-speak for selling into militant authoritarian groups.

The timeline:

  • Oct 2024: Stuckey joins OpenAI from Palantir surveillance industry
  • Jan 2025: OpenAI removes “no military use” clause from Terms of Service
  • Throughout 2025: OpenAI signs multiple Pentagon contracts
  • Oct 2025: Ships Atlas with known architectural vulnerabilities while building liability shield

He wasn’t hired to secure OpenAI’s products. He was hired to make insecure products acceptable to government buyers.

The Admission

In his 14-tweet manifesto, Stuckey provides a technical blueprint for the coming exploitation:

Attackers hide malicious instructions in websites, emails, or other sources, to try to trick the agent into behaving in unintended ways… as consequential as an attacker trying to get the agent to fetch and leak private data, such as sensitive information from your email, or credentials.

He knows exactly what will happen. He’s describing attack vectors with precision – like a Titanic captain announcing “we expect to hit an iceberg, climb aboard!” Then:

However, prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks.

“Frontier, unsolved security problem.”

OpenAI’s CISO is telling you: we cannot solve this, attackers will exploit it, we’re shipping anyway.

Technical Validation

Simon Willison – one of the world’s leading experts on prompt injection who has documented this vulnerability for three years – immediately dissected Stuckey’s claims. His conclusion:

It’s not done much to influence my overall skepticism of the entire category of browser agents.

Simon builds with LLMs constantly and advocates for their responsible use. When he says the entire category might be fundamentally broken, that’s evidence CISOs must heed.

He identified the core problem: “In application security 99% is a failing grade.” Guardrails that work 99% of the time are worthless when adversaries probe indefinitely for the 1% that succeeds.

We don’t build bridges that only 99% of cars can cross. We don’t make airplanes that only land 99% of the time. Software’s deployment advantages should increase reliability, not excuse lowering it.

Simon tested OpenAI’s flagship mitigation – “watch mode” that supposedly alerts users when agents visit sensitive sites.

It didn’t work.

He tried GitHub and banking sites. The agent continued operating when he switched applications. The primary defensive measure failed immediately upon inspection.

Intentional Harm By Design

Look at what Stuckey actually proposes as liability shield:

Rapid response systems to help us quickly identify attack campaigns as we become aware of them.

Translation: Attacks will succeed. Users will be harmed first. We’ll detect patterns from the damage. That’s going to help us, we’re not concerned about them.

This is the security model for shipping spoiled soup at scale: monitor for sickness, charge for cleanup afterwards, repeat.

“We’ve designed Atlas to give you controls to help protect yourself.”

Translation: When you get pwned, it’s your fault for not correctly assessing “well-scoped actions on very trusted sites.”

As Simon notes:

We’re delegating security decisions to end-users of the software. We’ve demonstrated many times over that this is an unfair burden to place on almost any user.

“Logged out mode” – don’t use the main feature.

The primary mitigation is to not use the product. Classic abuser logic: remove your face if you don’t want us repeatedly punching it. That’s not a security control. That’s an admission the product cannot be secured, is unsafe by design, like a Tesla.

Tesla officially says relying on their warning system for warnings is deadly. Unofficially they upsell it as AI capable of collision avoidance. Source: Tesla

Don’t want to die in a predictable crash into the back of a firetruck with its lights flashing? Don’t get in a Tesla, since it says its warning system can’t be trusted to warn.

Juicy Government Contracts

Why would a CISO deliberately ship a product with known credential theft and data exfiltration vulnerabilities?

Because that’s a feature for certain buyers.

Consider what happens when ChatGPT Atlas agents – with unfixable prompt injection vulnerabilities – deploy across:

  • Pentagon systems (OpenAI has multiple DoD contracts)
  • Intelligence agencies (the board has NSA links)
  • Federal government offices (where “AI efficiency” mandates are coming)
  • Critical infrastructure (where “AI transformation” is being pushed)

Every agent becomes an attack surface. Every credential accessible. Every communication interceptable.

The NSA, sitting on the board of OpenAI must be loving this. Think of the money they will save by pushing backdoors by design through OpenAI browser code.

Who benefits from an “AI assistant” that can be exploited to exfiltrate data but has plausible deniability because “prompt injection is unsolved”?

State actors. Intelligence services. The surveillance industry Stuckey was getting rich from.

Paul Nakasone on the board goes beyond decoration and becomes their new business model.

The Computer Virus Confession

Stuckey closes with this analogy:

As with computer viruses in the early 2000s, we think it’s important for everyone to understand responsible usage, including thinking about prompt injection attacks, so we can all learn to benefit from this technology safely.

He’s describing the business model: Ship broken, ship often, let damage accumulate, turn security into rent-seeking over years.

Rent seeking. Look it up.

Source: Reddit

Remember Windows NT? Massive security holes from first release, on purpose. “Practice safe computing.” Cracked in 15 seconds on a network. Viruses everywhere. Years of damage before hardening.

But here’s the difference: Microsoft eventually had to patch vulnerabilities under regulatory pressure from governments, as well as stop making a monopolistic browser. How LLMs process instructions isn’t yet regulated at all. So this vulnerability is architectural, the NSA is going to drive hard into it, and there’s little to no prevention on the horizon. It’s not a bug to fix – it’s a loophole so big it prevents even acknowledging the risk.

Stuckey knows this. That’s why he calls it “unsolved” and invokes Stanford-sounding revisionist rhetoric about the “frontier.”

Typical American frontier town signs banned guns and promoted healthy drinks. OpenAI, for being so unsafe and unhealthy, likely would have been banned in the 1800s frontier.

Documented Mens Rea

Let me be explicit what I see in the OpenAI strategy:

  1. Knowingly shipping a system that will be exploited to steal credentials and exfiltrate private data
  2. Documenting this in advance to establish legal cover
  3. Marketing to government and enterprise customers with a Palantir veteran providing security rubber stamp
  4. Responding to exploitation reactively, after damage occurs, while collecting revenue
  5. Treating infinite user harm as acceptable externality

This isn’t a CISO making a difficult call under pressure. This is a surveillance industry plant deliberately enabling vulnerable systems for undisclosed predictable harms.

That’s documented mens rea.

Theranos Comparison They Fear

Elizabeth Holmes got 11 years for shipping blood tests that gave wrong results, endangering patients.

Dane Stuckey is shipping a browser that his own documentation says will be exploited to steal credentials and leak private data – at internet scale, including government systems.

The difference? Holmes ran out of money. Stuckey has a $157 billion company backing him. And unlike Holmes and Elon Musk who claimed their technology worked, Stuckey admits it doesn’t work and ships anyway.

That’s not fraud through deception. That’s disclosure.

“Buyer beware. I warned it would hurt you. You used it anyway. Now where’s my bonus?”

It’s a police chief announcing a sundown town: Our officers will commit brutality. There will be friendly fire. This is an unsolved problem in policing. We apologize in advance for all the dead people and deny the pattern of it only affecting certain “poor” people. Sucks to be them.

The Coming Harm

Stuckey’s own words describe what’s coming:

  • Stolen credentials across millions of users
  • Exfiltrated emails and private communications
  • Compromised government systems
  • Supply chain attacks through developer accounts
  • State-sponsored exploitation (he admits “adversaries will spend significant time”)

OpenAI will respond reactively. Publish more specific attack patterns after exploitation and deploy temporary fixes. Issue updates to “safety measures.” Settle lawsuits with NDAs to undermine justice.

But the fundamental problem – that LLMs cannot distinguish between trusted instructions and adversarial inputs – remains unregulated and insufficiently challenged.

The Myanmar Precedent

I’ve documented before how CISOs can be held liable for atrocity crimes when they enable weaponized social media.

Facebook’s CSO during the Rohingya genocide similarly:

  • Was warned repeatedly about platform misuse enabling violence
  • Responded with “hit back” PR campaigns claiming critics didn’t understand how hard the problems were
  • Argued that regulation would lead to “Ministry of Truth” totalitarianism
  • Enabled nearly 800,000 people to flee for their lives while saying consequences matter

New legal research on Facebook established that Big Tech’s role in facilitating atrocity crimes should be conceptualized “through the lens of complicity, drawing inspiration… by evaluating the use of social media as a weapon.”

Stuckey is wading into the same dangers and with government systems.

Professional Capture

This isn’t about one vulnerable product. It’s about what it represents: the security industry doesn’t have any prevention let alone detection standards for a captured CISO.

The first CSO role was invented by Citibank after a breach as a PR move, but there was hope it could grow into genuine protection. Instead, we’re seeing high-dollar corruption – an extension of the marketing department. CISOs are paid more than ever for leaning into liability management layers that simply document concerns regardless of harms. When I was hacking critical infrastructure in the 1990s, I learned about a VP role the power companies called their “designated felon” who was paid handsomely to go to jail when regulators showed up.

Stuckey could have refused. He could have resigned. He could have blown the whistle.

Instead, he joined OpenAI from Palantir to enable this, shipped it with a useless warning label, and is planning to collect equity in mass harms.

That’s not a security professional making a hard call. That’s a paper trail of safety anti-patterns (reminiscent of the well-heeled CISO of Facebook enabling genocide from his $3m mansion in the hills of Silicon Valley).

When Section 83.19(1) of the Canadian Criminal Code says knowingly facilitating harmful activity anywhere in the world, even indirectly, is itself a crime – and when legal scholars argue we should conceptualize weaponized technology “through the lens of complicity” – Stuckey’s October 22, 2025 thread becomes evidence… of documented intent to profit from failure regardless of harms.

And what does a BBC reporter think about all this?

OpenAI says it will make using the internet easier and more efficient. A step closer to “a true super-assistant”. […] “Messages limit reached,” read one note. “No available models support the tools in use,” said another. And then: “You’ve hit the free plan limit for GPT-5.”

Rent seeking, I told you.

School AI System SWATs Kid for Eating Doritos: Handcuffed With Guns in His Face for What?

Police descended on a school, guns drawn, and handcuffed a kid for eating a bag of chips, because AI.

Armed police handcuffed and searched a student at a high school in Baltimore County, Maryland, this week after an AI-driven security system flagged the teen’s empty bag of chips as a possible firearm.

Baltimore County officials are now calling for a review of how Kenwood High School uses the AI gun detection system and why the teen ended up in handcuffs despite school safety officials quickly determining there was no weapon.

This is a foreshadowing of Elon Musk’s robot army. Teens will face stiff competition and lethal threats from armies of centrally planned and controlled machines. It’s basically the plot of Red Dawn come to life.

Red Dawn was John Milius’ (Apocalypse Now screenwriter) comic book vision of how teenagers could stop huge waves of mechanized conventional forces attacking America.