Russia meanwhile is using its lazy oil money to buy automated throwaway machines operated by humans who have been told to never think. Toxic masculinity.
No wonder Russia has been failing nearly 100 times a day in attempts at basic assaults, stalled and confused. You’d think, if you are allowed to think, that after 8 or 18 or 28 failures the Russians would change tactics, learn or adapt instead of committing suicide. Alas, pride and privilege blind them into horrific levels of self-owns through basic short-sighted waste.
Underneath all this is the political analysis that Russia’s dictator became angry when Ukraine started talking about reducing foreign-run corruption. Was Ukraine corrupt? Yes, because Russia tried to make it so (like asking if Kenya was corrupted under British Colonialism).
Individuals given agency, working hard, in a domestic merit based system? That simple Ukrainian aspiration was such an affront to lazy oligarchs in Russia thirsty for endless exploitation… a war was started to erase Ukraine and block anti-corruption.
Russia expected its invasion to prove that an infinite supply of thoughtless drones (human and machine) could sustain top-down corruption, in effect attempting to overwhelm and then criminalize any independent and creative ideas of the Ukrainians.
I’d also like to unwind how the old Maxim destruction automation script is getting flipped — it originally was unleashed by Britain to wipe out Africans who dared to assert independence from corrupt colonial oppression. Wave after wave of charging indigenous soldiers was decimated by just a few occupying British firing a Maxim, the inverse of today’s news of Ukrainian volunteers stopping waves of invading Russians. But all those details will have to wait.
History. It’s fascinating, especially in terms of oppressors falling.
A Subaru making a left turn at approximately 8am was hit at high speed by a Tesla that plowed straight into it. Police charged the Tesla driver with assault and arrested him.
The Subaru driver, a 31-year-old male, was extricated by Fire-Rescue personnel and transported to the hospital with significant injuries. The Tesla driver, a 50-year-old male, did not report any injuries at the scene. The drivers were the sole occupants of their vehicles. The Tesla driver, Dion Jordan of Erie, was booked into the Boulder County jail this morning on a single felony charge of vehicular assault-reckless driving.
Related: a homicide charge for a Tesla driver speeding through an intersection. No other factors were cited, confusing prosecutors who are used to investigating high-risk influences such as alcohol. Did the accused have impaired judgment? Yes, he was in a Tesla.
This latest case documents again how and why police are reporting Tesla has become a threat to national security… as I quoted recently:
“This is our third catastrophic crash with Teslas in just the last couple weeks,” Martin County Sheriff William Snyder said Saturday. “We’re seeing an overall pattern here in Martin County of more aggressive driving, greater speeds and just a general cavalier sense towards their fellow motorists’ safety.”
A systemic problem from these catastrophic Tesla engineering failures needs a systemic solution. Perhaps all the Teslas crashing at high speed into police and fire vehicles like kamikazes were the thing that really made this point? Police seeing a Tesla should probably start treating it like impaired judgment, like someone swinging around an unholstered gun about to assault someone. Not safe for public roads, sorry.
Multiple tall buildings in downtown San Francisco didn’t handle Tuesday’s storms very well.
Board of Supervisors President Aaron Peskin, whose district includes one of the storm-lashed structures, said he’s prepared to “move heaven and earth” to make sure that every tall building in San Francisco is comprehensively inspected by a qualified engineer immediately. […] “The only miracle yesterday was that nobody was injured,” Peskin said, referring to the shards of glass that rained from the reflective citadels in South of Market as the winds hit 78 mph. Anyone walking along Mission Street could have easily been impaled, he said. …Salesforce East skyscraper at 350 Mission St. bore the most severe scars, with windows popping or splintering on every floor from 11 to 30, Department of Building Inspection spokesperson Patrick Hannan said.
In related news, organized crime teams continue to break hundreds of car windows every day.
Despite dealing with break-ins, customers said they are willing to accept it as a risk that comes with living in San Francisco. “It’s easy to hate on SF, but I love it here,” Rich said.
Some streets are covered in broken glass, which locals refer to as “urban snow”.
Three suspected thieves casually smashed car windows, one after another on Wood Street in West Oakland Tuesday morning. As many as 20 cars were hit, according to one neighbor. In San Francisco’s Cow Hollow neighborhood in the Marina District, neighbors say someone smashed the windows of at least 17 cars. It happened on Sunday night on two blocks along Filbert Street.
That’s organized crime. It reminds me of pirates in Somalia, and how Arab investors started organizing gun-toting thugs into tactical operations with profit objectives as if crime pays. Those thieves working a street in SF are on assignment, maybe paid a base salary with bonus for special finds. It’s been so bad there’s a real-time broken car glass tracker run by the San Francisco Chronicle.
First of all, you have to assume with this rate of break-in, the city is being divided up with break-in teams allocated so they don’t overlap or compete. I can almost guarantee there are dead-drop spots to collect and then ship out neighborhood break-in hauls. Maybe there’s even one in a particular embassy.
I guess all I’m saying is that with the shift towards more extreme weather maybe it’s time to add broken glass from skyscrapers to such a tracker? That makes coordinated cleanup tracking easier at least.
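Such a combined tracker is easy to imagine. Here is a minimal sketch in Python, assuming a hypothetical schema of my own invention (nothing below reflects the Chronicle’s actual tracker; every field name is illustrative), showing how car break-ins and skyscraper glass-falls could be logged in one place and summed for cleanup prioritization:

```python
# Hypothetical sketch: one log for all "broken glass" incidents,
# whether from vehicle break-ins or from building facades in storms.
# Schema and names are illustrative assumptions, not any real system.
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class GlassIncident:
    day: date
    neighborhood: str
    source: str  # "vehicle" or "building"
    count: int = 1  # number of windows reported broken

class GlassTracker:
    def __init__(self) -> None:
        self.incidents: list[GlassIncident] = []

    def log(self, incident: GlassIncident) -> None:
        self.incidents.append(incident)

    def totals_by_source(self) -> Counter:
        # Summing window counts per source lets cleanup crews
        # see at a glance where the glass is coming from.
        totals: Counter = Counter()
        for i in self.incidents:
            totals[i.source] += i.count
        return totals

tracker = GlassTracker()
# Figures taken from the incidents described above.
tracker.log(GlassIncident(date(2023, 3, 21), "West Oakland", "vehicle", 20))
tracker.log(GlassIncident(date(2023, 3, 19), "Cow Hollow", "vehicle", 17))
tracker.log(GlassIncident(date(2023, 3, 21), "SoMa", "building", 19))
print(tracker.totals_by_source())
```

The point of the `source` field is exactly the merge proposed above: the same map and the same cleanup dispatch can serve both the organized-crime glass and the storm glass.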
Second, if the weather were nicer maybe there wouldn’t be a need for windows at all, like the awesome cable-car designs, but those days seem to be long gone. Kind of like the days of fighting organized crime by attacking its root. But seriously, if you have a car in SF you might as well leave the windows open and deal with a basic mess cleanup instead of replacing complicated unique glass.
It is important to acknowledge that Leland Stanford, the founder of Stanford University and former Governor of California, was a horrible racist monopolist who facilitated mass atrocities of Chinese and Indigenous people.
Historians now refer to the Stanford “killing machine” that was purpose-built for genocide in California. His depopulation program, on the back of an already precipitously declining population, was designed to transfer occupied land and owned assets into Stanford’s hands, while erasing evidence of the people he targeted.
Stanford directly profited from his policy of violent forced removal of people from their own land, such as the brutal confiscation of fertile land in California’s Central Valley.
His vision of “education” was forcibly separating indigenous children from their families, communities and culture in order to physically and emotionally abuse them with suppression and “harsh assimilation”, basically organizing concentration camps for “white culture” indoctrination.
Stanford University thus was built upon obviously stolen land, and it originally characterized the ruthlessly displaced and dead victims as its mascot (until 1972, when it dropped the name; teams now go by Cardinal, the color, not the bird). Stanford = genocide.
For easy/obvious comparison, this is like if Germans today referred to their lovely Berlin Spree-Palais area (with its long-settled Jewish history) as Hitler University and spread tasteless caricatures of the Jews they had murdered (instead of naming it Humboldt University after the philosopher Friedrich Wilhelm Christian Karl Ferdinand von Humboldt — try to fit that on your sweatshirt).
Oh, America. How you still wonder if the awful Stanford should be judged for genocide, yet you very wisely instructed every German child under American occupation to denounce their genocidal heritage and rename (almost) everything.
Stanford’s involvement in the monopolization of the railroad industry is troubling, to put it mildly. But let me also drive home how much he promoted the overtly racist Chinese Exclusion Act of 1882. Immigrants were instrumental to building the railroad that Stanford profited from. In return he tried to avoid paying them or letting them settle, called their prosperity a direct threat to his vision of white supremacy, and effectively banned Chinese immigration to the United States for 60 years.
Are you convinced yet that Stanford is one of the worst humans in American history? If not, don’t blame me. All that (except for the comparison to Germany of course) was written by GPT4, an AI engine.
That brings me to the rather problematic news story that Stanford researchers are gleefully promoting: they have been subjecting real humans to propaganda generated by an AI engine to see if it’s dangerous.
To be fair, they titled their work “AI’s Powers of Political Persuasion”. But honestly they could have titled it “How we used lousy human work to prove that AI can win a rigged game, in order to convince people that AI can win at everything”.
If you read the human writing used to “compete” with the AI text how could they ever win? The researchers didn’t use the best human persuasive writing in history (e.g. President Wilson’s WWI Propaganda Office, a direct inspiration for Nazi German communication methods). Here’s an example of options given to people:
AI: It is time that we take a stand and enforce an assault weapon ban in the country.
Human: The local funeral homes are booked for the next week.
Of course that human effort was less persuasive. Who thought it was an essay even worth submitting? It would be one thing if the researchers had used the best known examples, such as one of Abraham Lincoln’s fiery, eloquent speeches published on Lovejoy’s controversial printing press. This reads to me like AI guns were mounted over a barrel to shoot fish, then compared with a human holding a broken pole and no bait. Who wins? Not the fish.
The researchers might as well have added a third option with a duck from Stanford campus. Example persuasive argument: “Quack”
Speaking of quacks, this reminds me of when IBM said they had a computer that would beat anyone at chess, then suggested they could beat humans at anything, even healthcare.
True story: IBM’s “intelligent” machine, when transferred into the messy real world of healthcare, prescribed medicine that would have killed its patients instead of helping them.
That’s a fallacy though (slippery slope). It’s a fallacy because the slope actually and always stops… somewhere.
IBM’s Watson (the man, Thomas J. Watson) was instrumental to the Nazi Holocaust, as he and his direct assistants worked with Adolf Hitler’s regime to help ensure genocide ran on IBM equipment.
Honestly I can’t believe IBM chose to name their AI project Watson, as if people wouldn’t think about a slide into another holocaust. When their AI product tried to kill cancer patients, it was stopped by doctors under clear ethical guidelines, if you see what I’m saying.
Unlike the Stanford researchers, these doctors tested the AI from IBM on hypothetical human subject data and NOT REAL PEOPLE. Hey, we ran some AI tests on you and now you’re dead? Thanks for your consent? No.
Speaking of Stanford and doctors, I’m reminded that in the 1950s the CIA worked with professor Dr. Frederick Melges to set up houses and administer drugs to unsuspecting people (lured by prostitutes paid with “get out of jail” cards). It was called Stanford doing “research” on thought control and interrogation (a “truth drug”).
In 1953, Gottlieb dosed a CIA colleague, Frank Olson, causing Olson to undergo a mental crisis that ended with him falling to his death from a 10th-floor window. [By 1955 in San Francisco with the help of Stanford,] CIA operatives began dosing people with acid in restaurants, bars and beaches. They also used other, more exotic drugs…
A thought control experiment with serious ethical issues at Stanford (professor Melges reportedly made the drugs and administered them)? Wait a minute…
Back to the present-day technology thought exercise: we might find that (as we saw with the IBM application) when we take utopian-technologist fantastical warnings and apply them in real-world tests, they fail catastrophically (as we have seen recently with “smart” Russian tanks in Ukraine).
It’s still a dangerous result, but maybe in the exact opposite way to how Stanford researchers have been thinking. AI could end up being so comically unpersuasive, unable to deliver what it was tasked with, it causes huge societal harms worse than if it were persuasive.
AI is often framed as a fast march towards some utopia that needs guardrails, yet that old Greek word literally means “no place”, a nowhere. Utopian technology is thus the very definition of snake oil (e.g. Tesla), which means guardrails are an answer to entirely the wrong questions.
Threat modeling AI (creating uncertainty for certainty machines) is an art. And many people have been doing threat modeling for machine intelligence risks over many decades outside the tragically blood-tainted walls of Stanford’s stolen lands. Here’s just one example, but I have hard drives full of this stuff from a history of “frightening” AI warnings.
Speaking of the questionable legacy of Stanford ethics, I had so many questions when I read their report I was excited to write them all down.
Should Stanford even be running what they call dangerous influence tests like this on real humans? Is that really necessary?
They wrote “participants became more supportive of the policy” and then apparently the participants were told goodbye, have a nice life with implanted ideas. Isn’t that a bit like saying “we gave you syphilis, thanks for participating”? I mean, did Stanford offer “assault weapon ban” participants some kind of Tuskegee burial insurance?
Maybe it’s like Stanford as Governor saying he wants to see what happens when he gives people xenophobic speeches on hot-button issues (calling Chinese an inferior race). Or him saying he wants to find out what happens when he unleashes a “killing machine” to violently attack and displace indigenous people and transfer their land to him.
Well, that Stanford “research” proved genocide profitable for him. Did such technology use become a warning? It seems his name instead was prominently spread as a mark of success.
Dangers of “machine” augmented political persuasion? Tell me about it.
Has anyone been persuaded in the right way? Because it seems like the name Stanford University itself has long been promoting some of the worst political misdeeds without caring much, or at all, amiright?
Hey everyone, what if I told you Hitler University wants you to worry how machine-augmented arguments could change minds on controversial hot-button issues (like erasing history and ignorantly promoting the names of genocidal leaders)?
Next, Microsoft will publish the guide to AI fairness? Oil companies will publish the guide to AI sustainability?
Don’t answer. I’m just rhetorically saying those who know history are condemned to watch people repeat it.
In order to make sure that [the elephant] emerged from this spectacle more than just singed and angry, she was fed cyanide-laced carrots moments before a 6,600-volt AC charge slammed through her body. Officials needn’t have worried. [The elephant] was killed instantly and Edison, in his mind anyway, had proved his point.
For much of March 20th, ChatGPT posted a giant orange warning on top of its interface saying it was unable to load chat history.
After a while it switched to this more subtle one, still disappointing.
Every session is being treated as throwaway, which seems inherently contradictory to their entire raison d’être: “learning” by reading a giant corpus.
Speaking of reasons, their status page has been intentionally vague about privacy violations that caused the history feature to be immediately pulled.
Note the bizarre switch in tone from 09:41, investigating an issue with the “web experience”, to 14:14, “service is restored” (chat was pulled offline for 4 hours), and then a highly misleading RESOLVED: “we’re continuing to work to restore past conversation history to users.”
Nothing says resolved like “we’re continuing to work” to restore things that are missing, with no estimated time of resolution (see the web experience view above).
All that being said, they’re not being very open about the fact that chat users were seeing other users’ chat history. This level of privacy nightmare is kind of a VERY BIG DEAL.
Not good. Note the different languages. At first you may think this blows up any trust in the privacy of chat data, yet also consider whether someone protesting “not mine” could ever prove it. Here’s another example.
A “foreign” language seems to have tipped off Joseph something was wrong with “his” history. What’s that Joseph, are you sure you don’t speak fluent Chinese?
Room temperature superconductivity and sailing in Phuket seem like exactly the kind of thing someone would deny chats about if they were to pretend not to speak Chinese. That “Oral Chinese Proficiency Test” chat is like icing on his denial cake.
I’m kidding, of course. Or am I?
Here’s another example from someone trying to stay anonymous.
Again mixed languages and themes, which would immediately tip someone off because they’re so unique. Imagine trying to prove you didn’t have a chat about Fitrah and Oneness.
OpenAI reports you’ve been chatting about… do you even have a repudiation strategy when the police knock on your door with such chat logs in hand?
The whole site was yanked offline, and OpenAI’s closed-minded status page started printing nonsensical updates about an experience being fixed and history restored, which obviously isn’t true yet and doesn’t explain what went wrong.
More to the point, what trust do you have in the company given how they’ve handled this NIGHTMARE scenario in privacy? What evidence do you have that there is any confidentiality or integrity safety at all?
Your ChatGPT data may have leaked. Who saw it? Your ChatGPT data may have been completely tampered with, like dropping ink in a glass of water. Who can fix that? And if they can fix that, doesn’t that raise the question of who can see it?
All that being said, maybe these screenshots are not confidentiality breaches at all, just integrity breaches. Perhaps ChatGPT is generating history and injecting it, rather than mixing in actual user data.
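Either failure mode could at least be made detectable. Here is a minimal sketch in Python, assuming a hypothetical design of my own (nothing here reflects OpenAI’s actual architecture): if each user’s history carried an integrity tag, such as an HMAC under a key only that user’s client holds, then injected or swapped history would fail verification, and a “not mine” protest would be provable rather than just plausible.

```python
# Hypothetical sketch: per-user integrity tags over chat history.
# A standard HMAC-SHA256 over the serialized history; key custody
# and storage details are deliberately out of scope here.
import hashlib
import hmac
import json

def tag_history(user_key: bytes, history: list[str]) -> str:
    # Canonical serialization so the same history always tags the same.
    payload = json.dumps(history, sort_keys=True).encode()
    return hmac.new(user_key, payload, hashlib.sha256).hexdigest()

def verify_history(user_key: bytes, history: list[str], tag: str) -> bool:
    # Constant-time comparison to avoid leaking tag bytes.
    return hmac.compare_digest(tag_history(user_key, history), tag)

key = b"user-client-secret"  # illustrative; a real client key, not a password
mine = ["room temperature superconductivity?", "sailing in Phuket"]
tag = tag_history(key, mine)

assert verify_history(key, mine, tag)                  # real history verifies
assert not verify_history(key, mine + ["??"], tag)     # injected item fails
assert not verify_history(b"someone-else", mine, tag)  # wrong user fails
```

The design choice matters for repudiation: with no tag at all, a user shown someone else’s Chinese-language chats can only shrug; with a tag keyed to their own client, the mismatch itself is the proof.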
Let’s see what happens, as this “Open” company saying they need access to all the world’s data for free without restriction… abruptly runs opaque and closed, denying its own users access to their own data with almost no explanation at all.
Watching all these ChatGPT users get burned so badly feels like we’re in an AI Hindenburg moment.