The Wall Street Journal has rushed to print a breathless report about the “growing security risks” of the LLM, painting a picture of unstoppable AI threats that companies must face “on their own” due to slow-moving government regulation in America.
Reading it, you’d think we were facing an unprecedented crisis with no solutions in sight.
*sigh*
There’s a problem with this 100,000 foot view of the battle-fields some of us are slogging through every day down on earth: actual security practitioners have been solving the exact challenges for decades that they are talking about as theory.
Let’s break down the article’s claims versus reality:
**Claim**: “LLMs create new cybersecurity challenges” that traditional security can’t handle
**Reality**: Most LLM “attacks” fail against basic input validation, request filtering, and access controls that have existed since the 1970s. As just one security researcher (Marcus Hutchins) easily demonstrated last month, 90% of documented LLM exploits are blocked by standard web application firewalls (WAF). Perhaps it’s time to change the acronym for this dog of an argument to Web Warnings Originating Out Of Outlandish Feudal Fears (WOOF WOOF).
**Claim**: Companies must “cope with risks on their own” without government help
**Reality**: The ISO 42001:2023 framework years ago published standards for AI management system (AIMS) related to ethical considerations and transparency. Major cloud providers operating in a global market (e.g. GCP Vertex, AWS Bedrock and Azure… haha, who am I kidding, Microsoft fired their entire LLM security team) have LLM-specific security controls documented because of global regulations (and because regulation is the true mother of innovation). These aren’t experimental future concepts, they’re production-ready and widely deployed to meet customer demand for LLMs that aren’t an obvious dumpster fire by design.
And even more to the point, today we have trusted execution environment (TEE) providers delivering encrypted enclave LLMs as a service… and while that sentence wouldn’t make any sense to the WSJ, it proves how reality is far, far away from the fairy-tales of loud OpenAI monarchs trying to scare the square pegs of society into an artificially round Silicon Valley “eat your world” hole.
Om nom nom again? No thanks, I think we’ve had enough feudal tech for now.
**Claim**: The “unstructured and conversational nature” of LLMs creates unprecedented risks
**Reality**: This one really chaps my hide, as the former head of security for one of the most successful NoSQL products in history. We’ve been securing unstructured data and conversational interfaces for years. I’ve personally spearheaded and delivered field-level encryption and I’m working on even more powerful open standards. Ask any bank managing any of their chat history risks or any healthcare provider handling free-text medical records including transcription systems. These same human language principles in tech, applied for decades, apply to LLMs.
The article quotes exactly zero working security engineers. Instead, we get predictions from a former politician and a CEO selling LLM security products. It’s like writing about bridge safety but only interviewing people selling car insurance.
Here’s what actual practitioners are doing right now to secure LLMs:
- Rate limiting and anomaly detection catch repetitive probe attempts and unusual interaction patterns – the same way we’ve protected APIs for years. An attacker trying thousands of prompt variations to find a weakness looks exactly like traditional brute force that we already detect.
- OAuth and RBAC don’t care if they’re protecting an LLM or a legacy database – they enforce who can access what. Proper identity management and authorization scoping means even a compromised model can only access data it’s explicitly granted. We’ve been doing this since SAML days.
- Input validation isn’t rocket science – we scan for known malicious patterns, enforce structural rules, and maintain blocked token lists. Yes, prompts are more complex than SQL queries, but the same principles of taint tracking and context validation still apply. Output filtering catches anything that slips through, using the same content scanning we’ve used for DLP.
- Data governance isn’t new either – proven classification systems already manage sensitive data through established group boundaries and organizational domains. Have you seen SolidProject.org by the man who invented the Web? Adding LLM interactions to existing monitoring frameworks just means updating taxonomies and access policies to respect long-standing natural organizational data boundaries and user/group trust relationships. The same principles of access grants, control and clear data sovereignty that have worked for decades apply here, yet again.
These aren’t theoretical – they’re rather pedestrian proven security controls that work today despite the bullhorn-holding soap-box CEOs trying to sell Armored Cybertrucks that in reality crash and kill the occupants at a rate 17X worse than a Ford Pinto. Seriously, the “extreme survival” truck pitch of the “cyber” charlatan at Tesla has produced the least survivable thing in history. Exciting headlines about AI apocalypse drive the wrong perceptions and definitely foreshadow the fantastical failures of 10-gallon hat wearing snake-oil salesman of Texas.
The WSJ article, when you really think about it, brings to mind mistakes being made in security reporting since the 15th century panic about crossbows democratizing warfare.
Yes, crossbows at first glance wielded by unskilled over-payed kids serving an unpopular monarch were powerful weapons that could radically shift battlefield dynamics. Yet to the expert security analyst (career knight responsible for defense of local populations he served faithfully) the practical limitations (slow reload times, maintenance requirements, defensive training) meant technology had a supplement effect rather than replacement to existing military tactics. A “Big Balls” teenager who shot his load and then sat on the ground struggling to rewind the crossbow presented easy pickings, thus wounded or killed with haste. The same is true for skids with LLMs as they shift security considerations by re-introducing old vulnerabilities, which don’t magically bypass experts who grasp fundamental security principles.
When journalists publish theater scripts for entertainment value instead of practical analysis, they do our security industry a disservice. Companies need accurate information about real risks and proven solutions, not hand-waving vague warnings and appeals to fear that pump up anti-expert mysticism.
The next time you read an article about “unprecedented” AI security threats, ask yourself: are they describing novel technical vulnerabilities, or just presenting tired challenges through new buzzwords? Usually, it’s the latter. The DOGEan LLM horse gave a bunch of immoral teenagers direct access to federal data as if nobody remembered why condoms are called Trojans.
And remember, when someone tells you traditional security can’t handle LLM threats, they’re probably rocking up with a proprietary closed solution to a problem that repurposed controls or open standards could solve.
Stay salty, America.