Blog

Mythos - Myth or Reality?

2025-04-15

NULLPOINT - Issue #1

Drops when something actually matters.


WAIT, WHAT?

An AI That's Too Dangerous To Release

Anthropic built a model called Mythos that's so good at finding security holes in software, they refused to release it publicly. It found a 27-year-old bug hiding in OpenBSD — an OS famous for its obsessive code auditing and near-spotless security record — and chained together vulnerabilities in every major operating system and browser, writing complete working exploits entirely on its own, with no human steering. The good news: Microsoft, Google, Apple and 40 others get early access to patch things before the bad guys find out. The bad news: Anthropic's own head of offensive cyber research expects competitors to have comparable capabilities within six to twelve months — at which point "too dangerous to release" becomes someone else's problem.

Read: Anthropic / Project Glasswing


But Wait — Who Benefits?

TechCrunch noticed something worth sitting with: limiting Mythos to 40 giant enterprises is a remarkably good sales strategy that also, conveniently, keeps the model where competitors can't study it. The safety narrative and the revenue model point in exactly the same direction. That doesn't mean the threat isn't real — but it's worth noting that Anthropic gets to be both the hero and the vendor here.

Read: TechCrunch


The "Oops, We Open-Sourced Our Code" Incident

When Anthropic accidentally leaked Claude Code's source, people expected pure neural network magic all the way down. What they found was weirder: buried at the centre is a 3,167-line file called print.ts — not AI at all, but old-fashioned deterministic code. 486 IF-THEN branches, 12 levels of nesting, the kind of explicit rule-following logic that symbolic-AI pioneers like John McCarthy and Marvin Minsky built their careers on back in the 1950s and 60s. Anthropic essentially decided: for the parts that really need to be right every time, don't trust the AI. Use a rulebook. That marriage of neural networks and explicit symbolic logic has a name — neurosymbolic AI — and the researchers who spent decades arguing it was the future, while being laughed at by the "just scale the neural net" crowd, are currently insufferable at parties. Rightfully so.
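The leaked file itself isn't public to quote from, so here's a loose, hypothetical Python sketch of the pattern being described — model output treated as untrusted data, with plain deterministic branches making the final call on what gets shown. Every name below is invented for illustration:

```python
# Hypothetical sketch of the "rulebook" pattern: the model proposes an
# event, but explicit deterministic branches decide exactly what gets
# rendered. No neural network is consulted past this point.

def render_event(event: dict) -> str:
    """Deterministically format a model-produced event for display."""
    kind = event.get("kind")

    if kind == "error":
        # Errors must always be loud and unambiguous -- no model creativity.
        return f"[ERROR] {event.get('message', 'unknown error')}"
    elif kind == "file_edit":
        path = event.get("path", "<unknown file>")
        lines = event.get("lines_changed", 0)
        if lines == 0:
            return f"{path}: no changes"
        elif lines == 1:
            return f"{path}: 1 line changed"
        else:
            return f"{path}: {lines} lines changed"
    elif kind == "status":
        # Only a fixed vocabulary is allowed through to the user.
        allowed = {"running", "done", "cancelled"}
        status = event.get("status")
        return status if status in allowed else "unknown"
    else:
        # Anything unrecognised falls back to a safe default.
        return "[unrenderable event]"


print(render_event({"kind": "file_edit", "path": "main.py", "lines_changed": 1}))
# main.py: 1 line changed
```

The point is the shape, not the details: where the output must be exactly right every time, the model's text never reaches the screen unfiltered.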

Read: Gary Marcus / Substack


Vibeware: When Hackers Stopped Trying to Be Smart

A year ago, everyone predicted AI would write malware. Nobody predicted this.

For decades, security tools have been remarkably good at catching malicious code — they've seen so much C, Python and Java malware that they've built fingerprints for all of it. Spot the signature, kill the program, done. Hackers knew this. So they asked AI a simple question: what if we wrote the same malware in a language nobody's ever heard of?
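To see why the trick works, here's a toy Python sketch of signature matching. Real engines are vastly more sophisticated, and these byte patterns and samples are invented for illustration — but the failure mode is the same: a signature learned from one ecosystem matches nothing written in another.

```python
# Toy signature scanner. Signatures and samples are made up for
# illustration -- real AV databases are far larger and smarter.

SIGNATURES: dict = {
    b"powershell -enc": "encoded PowerShell loader",
    b"/bin/sh -i >& /dev/tcp/": "classic bash reverse shell",
}

def scan(blob: bytes) -> list:
    """Return the names of any known signatures found in the blob."""
    return [name for pattern, name in SIGNATURES.items() if pattern in blob]

# A sample built from well-known tradecraft lights up immediately...
old_school = b"cmd.exe /c powershell -enc SQBFAFgA..."
print(scan(old_school))   # ['encoded PowerShell loader']

# ...while the same behaviour expressed through an unfamiliar toolchain
# shares none of those bytes, so the scanner sees nothing at all.
unfamiliar = b"nimExfil: httpclient.post(cfgUrl, payload)"
print(scan(unfamiliar))   # []
```

Swap the language and you reset the defenders' entire pattern library to zero — which is the whole pitch.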

Enter Nim and Zig. Yes, those are real programming languages. No, your security tools have no idea what malicious Nim code looks like — because until recently, nobody was writing malicious Nim code. The same old stealing and spying — and, if your laptop happens to be connected to critical infrastructure, potentially much worse — now runs completely undetected. Not because the code is clever. It isn't. It's often broken. One sample had a placeholder where the server address should be, meaning it could never actually steal anything. Another always reported itself as "online" because it reset its own timestamp every time it checked in.

Doesn't matter. Undetectable beats elegant, every time.

And yes — researchers will eventually catch up, using the same AI to write detection signatures. The hackers thought of that too. Pakistan-aligned APT36 is churning out new variants daily, in different languages, using different communication channels — Slack, Discord, Google Sheets — so the malicious traffic looks like your colleagues having a busy afternoon. By the time a signature exists for Monday's variant, Tuesday's is already inside the network. Bitdefender calls it "Distributed Denial of Detection." Flood the defenders until something gets through.

The surprising part isn't that AI writes malware. It's that AI made every obscure programming language simultaneously dangerous overnight.

Read: Bitdefender Research · Dark Reading


FROM THE VAULT

North Korea Almost Stole $1 Billion. A Printer Stopped Them.

In 2016, North Korea's elite hacker group — the Lazarus Group — spent a year quietly inside Bangladesh Bank's systems before making their move: transfer orders for nearly $1 billion through the bank's account at the New York Federal Reserve. All but $81 million was stopped because one of the transfer instructions contained the word "Jupiter" — which also happened to be the name of a sanctioned Iranian shipping vessel. The $81 million that slipped through was laundered through Manila casinos. And the bank was slow to catch on partly because the 10th-floor printer that automatically spat out transfer confirmations had stopped working — when staff finally got it running again, the fraudulent orders came pouring out. The BBC turned this into a 10-episode podcast that is genuinely unputdownable — better than most crime fiction.

Listen: The Lazarus Heist — BBC World Service


CYBERCOM

[Comic: internet_is_born.png]

Read the full comic.


You're reading Nullpoint — a newsletter that drops when something actually matters. nullpoint.academy

← All Posts