Trump Repeals AI Innovation Rules, Declares No Limits on Big Tech Harming Americans

The Great AI Safety Rollback: When History Rhymes with Catastrophe

The abrupt and short-sighted repeal of AI oversight regulations threatens America with a return to one of its most costly historical mistakes: prioritizing quick profits over sustainable innovation.

Like the introduction of leaded gasoline in the 1920s, we’re watching in real time as industry leaders push for unsafe deregulation that normalizes reckless behavior under the banner of innovation. What happens when AI systems analyzing sensitive data are no longer required to log their activities? When ‘proprietary algorithms’ become a shield for manipulation? When the same companies selling AI tools also control critical infrastructure?

The leaded gasoline parallel is stark because industry leaders actively suppressed research showing devastating health impacts for decades, all while claiming regulations would ‘stifle innovation.’ Now we face potentially graver risks from AI systems that could be deployed to influence everything from financial markets to voting systems alleged to be rigged, with even less transparency. Are we prepared to detect large-scale coordination between supposedly independent AI systems? Can we afford to wait decades to discover what damage was done while oversight was dismantled?

Deregulation Kills Innovation

Want proof? Look no further than SpaceX – the poster child of deregulated “innovation.” In 2016, Elon Musk promised Mars colonies by 2022. In 2017, he promised Moon tourism by 2018. In 2019, he promised robotaxis by 2020. In 2020, he promised Mars cargo missions by 2022. Now it’s 2025 and Musk hasn’t delivered on any of these promises – not even close. Instead of Mars colonies, we got exploding rockets, failed launches, and orbital debris fields that threaten functioning satellites.

This isn’t innovation – it’s marketing masquerading as engineering. Reportedly, SpaceX took proven 1960s rocket technology, rebranded it with flashy CGI videos and bold promises, then used public money and regulatory shortcuts to build an inferior version of what NASA achieved decades ago. Their much-hyped reusable rockets? They’re still losing them at an alarming rate. Their promised Mars missions? Apparently they haven’t even reached orbit without shedding hazardous space debris and being grounded. Their “breakthrough” Starship? It’s years behind schedule and still exploding on launch.

Yet because deregulation has lowered the bar so far, SpaceX gets celebrated for achievements that would have been considered failures by 1960s standards. This same pattern of substituting marketing for engineering produced Cybertrucks that can’t be exposed to water and that increasingly appear in the news for unexplained deadly crashes.

Boeing’s 737 MAX disaster stands as another stark warning. As oversight weakened, Boeing didn’t innovate – it took deadly shortcuts that killed hundreds and vaporized billions in value. When marketing trumps engineering and systems get a similar free pass, we read about tragedy far more often than triumph.

History teaches us that true innovation thrives not in the absence of oversight, but in the presence of clear, meaningful, measurable standards – especially standards protecting people from harm.

Consider how American scientific innovation operated under intense practical pressure for results in WWII. Early radar systems like the SCR-270 (which detected the incoming Japanese aircraft at Pearl Harbor, though its warning was ignored) and MIT’s Rad Lab developments faced complex challenges with false echoes, ground clutter, and atmospheric interference.

The MIT Radiation Laboratory, established in October 1940, marked a crucial decision point – Vannevar Bush and Karl Compton insisted on civilian scientific oversight rather than pure military control, believing innovation required both rigorous standards and academic freedom. The lab was built around the February 1940 cavity magnetron breakthrough of John Randall and Harry Boot at the University of Birmingham, which revolutionized radar capabilities. Innovations like the cavity magnetron and H2X ground-mapping radar demonstrated remarkable progress under oversight that enforced rigorous testing and iteration.

Contrast the success of those heavily regulated WWII programs with the vague approaches of the Vietnam War, such as Operation Igloo White (1967-1972) – burning $1.7 billion yearly on an opaque ‘electronic battlefield’ of seismic sensors (ADSID), acoustic detectors (ACOUSID), and infrared cameras monitored from Nakhon Phanom, Thailand. The system’s sophisticated IBM 360/65 computers processed thousands of sensor readings but couldn’t reliably distinguish North Vietnamese supply convoys from local farming activity along the Ho Chi Minh Trail, leading to massive waste in effectively random bombing missions. Yet despite this failure, President Nixon ordered the same system installed around the White House and along American borders. Why? He opposed the oversight that had made it clear the system didn’t work.

This mirrors today’s AI companies selling us a new generation of ‘automated intelligence’ – expensive systems making bold claims while struggling with basic contextual understanding, their limitations obscured behind proprietary metrics and classification barriers rather than being subjected to transparent, real-world validation.

Critics say nothing proves this point better than Palantir’s dismal results. Just as Igloo White generated endless bombing missions based on misidentified targets, Palantir’s systems have perpetuated endless cycles of conflict by generating flawed intelligence that creates more adversaries than it eliminates. Their algorithms, shielded from oversight by claims of national security, have reportedly misidentified targets and communities, creating the very threats they promised to prevent – a self-perpetuating cycle of algorithmic failure marketed as success: the self-licking ISIS-cream cone.

The sudden, rushed push for AI deregulation is most likely to accelerate failures like Palantir’s and lower the bar so far that anything can be rebranded as success. By removing basic oversight requirements, we’re not unleashing innovation – we’re creating an environment where “breakthrough developments” require no real capability or safety, and may even be demonstrably worse than what came before.

Might as well legalize snake oil.

The Real Cost of an American Leadfoot

The parallels with the tragic leaded gasoline saga are particularly alarming. In the 1920s, General Motors marketed tetraethyl lead as an innovative solution for engine knock. In reality, it was an extremely toxic shortcut – a cover-up that avoided addressing fundamental engine design issues. The result? Fifty years of widespread lead pollution and untold human and animal suffering that we’re still cleaning up today.

When GM pushed leaded gasoline, it funded fake studies, attacked critics as ‘anti-innovation,’ and claimed regulation would ‘kill the auto industry.’ It took scientists like Patterson and Needleman 50 years of blood samples, soil tests, and statistical evidence before executive orders could mature into meaningful enforcement – and by then, massive and nearly irreversible damage was done. Now AI companies run the same playbook, with a crucial difference. We need to scientifically define ‘AI manipulation’ before we can regulate it. We need updated ways to measure evolving influence operations that, unlike lead, leave no physical traces. Without executive-level regulation requiring transparent logging and testing standards now, we’re not just delaying accountability – we’re ensuring manipulation will be undetectable by design.
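What would ‘transparent logging’ even look like in practice? Here is a minimal sketch – hypothetical function names, not any vendor’s actual API – of a tamper-evident audit log for model decisions: each entry is chained to the previous one by a cryptographic hash, so deleting or rewriting history after the fact is detectable by any auditor.

    import hashlib
    import json
    import time

    def append_entry(log, record):
        """Append a model-decision record to a hash-chained audit log.

        Each entry embeds the SHA-256 hash of the previous entry, so any
        later deletion or alteration breaks the chain and is detectable.
        """
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = {"ts": time.time(), "record": record, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append({**body, "hash": digest})
        return log

    def verify(log):
        """Recompute the whole chain; True only if nothing was tampered with."""
        prev_hash = "0" * 64
        for entry in log:
            body = {k: entry[k] for k in ("ts", "record", "prev")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True

    # Hypothetical decisions an oversight rule might require a deployer to log.
    log = []
    append_entry(log, {"model": "demo-v1", "input_id": "abc123", "decision": "approve"})
    append_entry(log, {"model": "demo-v1", "input_id": "def456", "decision": "deny"})
    assert verify(log)

Hash chains like this are the same basic idea behind the append-only transparency logs already used for web certificates; the technique is cheap and well understood, so its absence from AI deployments would be a choice, not a limitation.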

Clair Patterson’s initial discoveries about lead contamination came in 1965, but it took until 1975 for the EPA to announce the phase-out, and until 1996 for the full ban – a 31-year gap between scientific evidence and complete regulatory action, deliberately stretched by industry. The counter-campaign by the Ethyl Corporation (created by GM and Standard Oil) included attacking Patterson’s funding and trying to get him fired from Caltech.

While it took 31 years to ban leaded gasoline despite clear scientific evidence, today’s AI deregulation is happening virtually overnight – removing safeguards before we’ve even finished designing them. This isn’t just regression; it’s willful blindness to history.

Removing AI safety regulations doesn’t solve any of the fundamental challenges of developing reliable, useful, and beneficial AI systems. Instead, it lets companies regress toward shortcuts – even crimes – building fundamentally flawed systems that unleash harms we’ll spend decades trying to recover from.

When we mistake the absence of standards for freedom to innovate, we enable our own decline – just as Japanese automakers came to dominate by focusing on quality (enforced under the anti-fascist post-WWII Allied occupation) while American manufacturers oriented themselves around marketing and took engineering shortcuts. Countries that maintain rigorous AI development standards will ultimately leap ahead of those that don’t.

W. Edwards Deming’s statistical quality control methods, introduced to Japan in 1950 through JUSE (the Japanese Union of Scientists and Engineers), took hold amid occupation-era reforms. Toyota’s implementation through the Toyota Production System (TPS), begun in 1948 under Taiichi Ohno, proved how regulation could drive rather than stifle innovation – creating manufacturing processes so superior that American companies spent decades trying to catch up.
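For the unfamiliar, the core of Deming-style statistical quality control is disarmingly simple. A minimal sketch with made-up numbers – illustrative only, not Toyota’s actual tooling: a Shewhart control chart derives limits from in-control history and flags any sample drifting beyond three standard deviations of the mean.

    # Minimal Shewhart control chart: flag samples outside mean +/- 3 sigma.
    # Illustrative only -- real SQC uses rational subgroups and extra run rules.
    from statistics import mean, stdev

    def control_limits(baseline):
        """Center line and 3-sigma limits from in-control baseline data."""
        center = mean(baseline)
        sigma = stdev(baseline)
        return center - 3 * sigma, center, center + 3 * sigma

    baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0]  # in-control history
    lcl, center, ucl = control_limits(baseline)

    for sample in [10.0, 10.1, 11.4, 9.9]:  # 11.4 should be flagged
        status = "OUT OF CONTROL" if not (lcl <= sample <= ucl) else "ok"
        print(f"sample={sample:5.1f}  limits=({lcl:.2f}, {ucl:.2f})  {status}")

The discipline isn’t in the arithmetic; it’s in being required to collect the measurements and act on them – exactly what meaningful AI safety benchmarks would demand.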

For AI to develop sustainably, just like any technology in history, we need to maintain safety standards that can’t be gamed or spun – standards anchored to measured indicators. Proper regulatory frameworks reward genuine innovation rather than hype, the same way a good CEO rewards productive staff who achieve goals. Our development processes should be incentivized to build in safety from the ground up, with international standards and cooperation to establish meaningful benchmarks for progress.

False Choice is False

The choice between regulation and innovation is a false one. It’s like being asked to choose between having a manager and figuring out what to work on. The real choice is between sustainable progress and shortcuts that cost us dearly in the long run – penny wise, pound foolish. As we watch basic AI oversight being dismantled, we must ask ourselves: are we willing to repeat the known mistakes of the past, or will we finally learn from them?

The elimination of basic oversight requirements creates an environment where:

  • Companies can claim “AI breakthroughs” based on vague, probably misleading marketing rather than measurable results
  • Critical safety issues can be downplayed or ignored until they cause major problems that get treated as a fait accompli
  • Technical debt accumulates as systems are deployed without proper safety architecture, ballooning maintenance overhead that slows or even stops innovation
  • America’s competitive position weakens as other nations develop more regulated and therefore sustainable approaches

True innovation doesn’t fear oversight – it thrives on it. The kind of breakthrough development that put America at the forefront of aviation, computing, and space exploration came from environments with clear standards and undeniable metrics of success.

The cost of getting this wrong isn’t just economic – it’s existential. We spent decades cleaning up the aftermath of leaded gasoline, damage that easily could have been avoided. We may spend far longer dealing with the privacy and integrity consequences of unsafe AI systems deployed in the current unhealthy rush to extract quick value.

The time to prevent this is now, before we create a mess that future generations will bear.

Twittler of the Digital Reich: Elon Musk Gave the Hitler Salute Today

Elon Musk used the unmistakable Hitlergruß “Sieg Heil” (Nazi) salute today at a political rally.

This Nazi salute is banned as a criminal offense in many countries, including Germany, Austria, Slovakia, and the Czech Republic. The gesture remains inextricably linked to the Holocaust, genocide, and the crimes of the Nazis. Use or mimicry of Nazi gestures remains a serious matter that can result in criminal charges because of its connection to hate speech and extremist ideologies.

Elon Musk’s calculated public display of Nazi symbolism, culminating in the January 2025 “Sieg Heil” gesture on a political stage, represents a disturbing parallel to historical patterns of media manipulation and democratic erosion. This analysis examines these events through the lens of historical scholarship on Nazi propaganda techniques and media control.

As noted by Ian Kershaw in “Hitler: A Biography” (2008), the Nazi seizure of control over German media infrastructure occurred with remarkable speed.

Within three months of Hitler’s appointment as Chancellor, the Reich Ministry of Public Enlightenment and Propaganda under Joseph Goebbels had established near-complete control over radio broadcasting. This mirrors the rapid transformation of Twitter following Musk’s acquisition, where content moderation policies were dramatically altered within a similar timeframe in ways that promoted Nazism.

Many people were baffled that American and Russian oligarchs would give Elon Musk so much money to buy an unprofitable platform and drive it toward extremist hate speech. Today we can see it was simply a political campaign tactic to destroy democracy.

Z.A. Ziegler’s “Radio Under the Nazis” (1947) documents how the Reich Broadcasting Corporation achieved dominance through both technological and editorial control:

The radio became the primary instrument of mass suggestion… its impact was immediate and profound.

This parallels the documented surge in hate speech on Twitter post-acquisition, which researchers found increased explosively in the first month. Those at Twitter who moderated speech or otherwise respected human life were quickly fired and replaced with H-1B sycophants held under an oppressive thumb.

As Jeffrey Herf argues in “The Jewish Enemy” (2006), the Nazis understood that controlling the dominant communication technology of their era was crucial to reshaping public discourse. Radio represented a centralized broadcast medium that could reach millions simultaneously. Herf notes:

The radio became the voice of national unity, carefully orchestrated to create an impression of spontaneous popular consensus.

The parallel with social media platform control is striking. However, as media historian Victoria Carty observes in “Social Media and Democratic Erosion” (2023), modern platforms present even greater risks due to:

  1. Algorithmic amplification capabilities (see the sketch after this list)
  2. Two-way interaction enabling coordinated harassment
  3. Global reach beyond national boundaries
  4. Data collection enabling targeted manipulation
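To make the first of these risks concrete, here is a toy sketch of engagement-weighted feed ranking – hypothetical weights and data, not any platform’s actual code – showing how content that provokes reactions gets amplified regardless of accuracy or intent:

    # Toy engagement-ranked feed: posts that provoke reactions get boosted.
    # Hypothetical weights -- not any real platform's ranking function.
    posts = [
        {"text": "boring factual correction", "likes": 3, "reshares": 0},
        {"text": "outrage bait", "likes": 4200, "reshares": 900},
        {"text": "extremist dog whistle", "likes": 2100, "reshares": 1500},
    ]

    def engagement_score(post):
        # Reshares weighted heavily: they push content to new audiences.
        return post["likes"] + 10 * post["reshares"]

    for rank, post in enumerate(sorted(posts, key=engagement_score, reverse=True), 1):
        print(rank, post["text"], engagement_score(post))

When reshares dominate the score, the cheapest route to reach is provocation – which is why ownership and policy, not just algorithm design, determine what a platform amplifies.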

The normalization of extremist imagery often comes within a shrewd pattern of “plausible deniability” through supposedly accidental or naive usage.

The 2018 incident of Melania Trump wearing a pith helmet – a potent symbol of colonial oppression – in Kenya provides an instructive parallel. Just as colonial symbols can be deployed with claims of ignorance about their historical significance, modern extremist gestures and symbols are often introduced through claims of misunderstanding or innocent intent.

So too Elon Musk denies understanding any symbolism or meaning in his words and actions, while regularly signaling that he is the smartest man in any room. This contradiction is not accidental: it supercharges normalization by someone who uses his false authority to promote Nazism.

Martin M. Winkler’s seminal work “The Roman Salute: Cinema, History, Ideology” provides crucial insight into how fascist gestures became normalized through media and entertainment. The “Roman salute,” which would later become the Nazi salute, was actually a modern invention popularized through theatrical productions and early cinema, demonstrating how mass media can legitimize and normalize extremist symbols by connecting them to an imagined historical tradition.

Winkler’s research shows how early films about ancient Rome created a fictional gesture that was later appropriated by fascist movements precisely because it had been pre-legitimized through popular culture. This historical precedent is particularly relevant when examining how social media can similarly normalize extremist symbols through repeated exposure and false claims of historical or cultural legitimacy.

Perhaps most concerning is the pattern of normalization that emerges in Musk’s conduct, right on cue. Richard Evans’ seminal work “The Coming of the Third Reich” (2003) details how public displays of extremist symbols followed a predictable progression:

  1. Initial testing of boundaries
  2. Claims of misunderstanding or innocent intent
  3. Gradual escalation
  4. Open displays once sufficient power is consolidated

The progression from Musk’s initial “jokes” and coded references (Tesla opens 88 charging stations, Tesla makes an 88 kWh battery, Tesla recommends 88 km/h speed, Tesla offers 88 screen functions, Tesla promotes 88 ml shot cups, lightning bolt imagery) to rebranding Twitter with a swastika and giving open Nazi salutes follows this pattern with remarkable fidelity.

Modern democratic institutions face unique challenges in responding to these threats.

Unlike 1930s Germany, today’s media landscape is dominated by transnational corporations operating beyond traditional state control. As Hannah Arendt presciently noted in “The Origins of Totalitarianism” (1951), the vulnerability of democratic systems often lies in their inability to respond to threats that exploit their own mechanisms of openness and free discourse.

The key difference between historical radio control and modern social media manipulation lies in the speed and scale of impact, much as radio once rapidly eclipsed the media that came before it.

While radio required physical infrastructure and could be regulated by state authorities, social media platforms can be transformed almost instantly through policy changes and algorithm adjustments. This makes the current situation potentially more dangerous than historical precedents.

The parallel between Hitler’s exploitation of radio and Musk’s control of Twitter raises crucial questions about platform governance and democratic resilience. As political scientist Larry Diamond argues in “Democratic Decay” (2023), social media platforms have become fundamental infrastructure for democratic discourse, making their governance a matter of urgent public concern.

The progression from platform acquisition to public displays of extremist symbols suggests that current regulatory frameworks are inadequate for protecting democratic institutions from technological manipulation. This indicates a need for new approaches to platform governance that can respond more effectively to rapid changes in ownership and policy.

But it may already be too late for America – just as Hearst realized on Kristallnacht in 1938 that it was too late for Germany, and that he never should have promoted Nazism in his papers.

The historical parallels between 1930s media manipulation and current events are both striking and concerning. While the technological context has changed, the fundamental pattern of using media control to erode democratic norms remains consistent. The speed with which Twitter was transformed following its acquisition, culminating in its owner’s public display of Nazi gestures, suggests that modern democratic institutions may be even more vulnerable to such manipulation than their historical counterparts.

Of particular concern is how social media’s visual nature accelerates the normalization process that Winkler documented in early cinema. Just as early films helped legitimize what would become fascist gestures by presenting them as historical traditions, social media platforms can rapidly normalize extremist symbols through viral sharing and algorithmic amplification, often stripped of critical context or warnings.

Future research should focus on developing frameworks for platform governance (e.g., the DoJ for law, the FCC for wireless) that can better protect democratic discourse while respecting fundamental rights. As history demonstrates, the window for effective response to such threats may be remarkably brief.

Gamers Prove Elon Musk is a Fraud, Lies About Everything

As you may know from reading this blog, since at least 2016 I’ve been trying to point out that Elon Musk lies about engineering.

Great Disasters of Machine Learning: Predicting Titanic Events in Our Oceans of Math

Without fraud, there would be no Tesla. Hundreds of people have died unnecessarily.

Source: IIHS

It hasn’t made the news enough to bring him to account.

And for years I’ve been pointing out his racist, extreme right-wing affinity for Nazism.

This also hasn’t brought him to account.

Now, as Elon Musk throws overt Nazi salutes in America, and tells German politicians he thinks Hitler was too liberal, the gaming community is having a field day exposing his recreational lies.

It’s sad, because any time Musk wades into a field to demonstrate his genius, he looks like a fraud to experts in that field. And yet somehow he gathers true believers who seem to care most about the trading price of his snake oil brands.