Have you heard of Axios?

Most people haven't. That's the point of today's post.

Back in March, somebody hacked it. The fix was in place within three hours. Almost no normal person noticed. And yet, depending on which apps you have on your phone, which websites you visit, and which company processes your paycheck, there's a real chance the code that briefly went bad was running somewhere in your life that day.

This is a different kind of hack. Worth understanding, because it's the kind that's getting more common.


The thing nobody told you about apps

When someone builds an app - a banking app, a weather app, a school grading system, anything - they don't write every single piece from scratch.

They use building blocks. Other people's code, freely available, that handles common jobs. Sending data back and forth between the app and the server. Reading a date. Drawing a chart.

One of the most popular of these building blocks is called Axios. It does one job: it lets an app talk to the internet.

About one hundred million times a week, somewhere in the world, a developer downloads a copy of Axios to put inside something they're building.

That's not a typo. A hundred million downloads a week.

So when somebody breaks into Axios, they don't break into one app. They potentially break into everything built with it that week.
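If you're curious what that borrowing actually looks like, it's usually a single line in a project's configuration file. Here's a sketch of what a hypothetical app's package.json might contain - the app and the other two library names are made up for illustration; only the axios line refers to the real library:

```json
{
  "name": "example-banking-app",
  "version": "2.3.0",
  "dependencies": {
    "axios": "^1.14.0",
    "chart-drawing-lib": "^4.1.0",
    "date-reader-lib": "^2.0.2"
  }
}
```

That one axios line is the whole transaction: the developer names the building block, and their tools download whatever code currently answers to that name.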


What actually happened

On March 31, attackers tricked the person who maintains Axios into giving them access to his account. Not by hacking his computer in some dramatic way - by social engineering him. A convincing message, the right pressure, the right urgency. Same playbook as the voice clone scam from Monday. Different target.

Once they had his account, they uploaded a poisoned version of Axios - version 1.14.1 - and waited.

For about three hours, anyone who downloaded a fresh copy of Axios got the booby-trapped one. The malicious code quietly installed something called a remote access trojan - basically, a way for the attackers to come back later and do whatever they wanted on that computer. Mac, Windows, Linux, didn't matter.
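Why did fresh downloads automatically pick up 1.14.1? Because most projects ask for a version range rather than an exact version: "^1.14.0" means "give me 1.14.0 or anything newer with the same first number." Here's a toy sketch of that matching rule - the real rule (called semver) has more cases, and this only handles the common caret form:

```javascript
// Toy model of npm's "^" (caret) version range, the common default.
// "^1.14.0" accepts any version >= 1.14.0 with the same major number.
// The real semver spec has more cases (pre-releases, 0.x majors, etc.).
function satisfiesCaret(range, version) {
  const base = range.slice(1).split(".").map(Number); // "^1.14.0" -> [1, 14, 0]
  const v = version.split(".").map(Number);           // "1.14.1"  -> [1, 14, 1]
  if (v[0] !== base[0]) return false;                 // major version must match
  if (v[1] !== base[1]) return v[1] > base[1];        // a newer minor is accepted
  return v[2] >= base[2];                             // a newer patch is accepted
}

// An app that asked for "^1.14.0" would accept the poisoned 1.14.1 automatically.
console.log(satisfiesCaret("^1.14.0", "1.14.1")); // true
console.log(satisfiesCaret("^1.14.0", "2.0.0"));  // false
```

So nobody chose the bad version. Their tools, doing exactly what they were designed to do, chose it for them.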

Then the maintainer caught it. The bad version was pulled. The internet exhaled.

Google's threat intelligence team later attributed the attack to a group connected to North Korea.


Did it touch you?

Almost certainly not in a way you can see.

The three-hour window meant only developers who happened to update their software that afternoon would have grabbed the poisoned version. Most of them caught it before pushing anything to real users. Most apps you actually use were never affected.

But "most" isn't "all," and that's the uncomfortable part. Somewhere out there is a piece of software you depend on - one you'll never hear about by name - built by a small team that did update that day, and they may or may not have noticed before shipping the result to their customers.

For an individual home user, the practical risk from this specific incident is very low.

The reason I'm writing about it isn't this one event. It's the category.


Why this matters more than the headline does

Most people think about cybersecurity in terms of their own behavior. Did I click a bad link? Did I use a weak password?

That's still important. But it's only half the picture.

The other half is that every app you use was built out of borrowed parts, and you have no way to inspect any of them. You're trusting:

  • The app's developer
  • Every building block they used
  • Every person who maintains those building blocks
  • Every account those maintainers log into

That's a long chain. And every link in it is one social engineering attempt away from going sideways.

The Axios story is the cleanest illustration of how that can happen. It almost certainly will happen again. The question is whether the next one gets caught in three hours or three months.


What you can actually do about it

Here's the honest answer: you can't audit the supply chain of your apps. Nobody can, really.

What you can do is make yourself less exposed when one of these inevitably slips through:

Keep your phone and computer set to update automatically. When a building block goes bad and gets fixed, that fix only protects you if you actually receive it. Auto-updates are not optional anymore.

Don't install apps you don't need. Every app on your phone is a separate supply chain you're trusting. The smaller the surface, the less exposure.

Get apps from the official store, not from a link in an email or a text. Most supply chain attacks ride along inside legitimate-looking software that came from somewhere it shouldn't have.

Pay attention when a service emails you about an incident. These notices are easy to ignore. They contain the only signal you'll get that something built with shared parts had a bad week.


The pattern across this week

Monday: somebody you trust on the phone wasn't actually them.

Tuesday: a school system you trust got breached, and the data is fuel for the next round of impersonations.

Today: a piece of code that thousands of companies trust got briefly poisoned, by way of social-engineering the one person who could approve a new version.

Three different stories. Same shape. Trust is the attack surface now.

Tomorrow I'm going to lay out the unifying playbook - the small set of habits that protects you across all three of these patterns, without you needing to understand any of the technical bits.

Friday's newsletter has the printable version. If you're not subscribed yet, this is the week.

Subscribe to the PCRescue weekly →

If you want a clean review of what's on your devices — including the apps and extensions you've forgotten about, which are the most common silent supply-chain risks — that's a great use of a remote session.

Request a callback → | Schedule a remote session →

Why a Hack You've Never Heard Of Probably Touched Your Phone

In March, attackers slipped malicious code into a building block used inside 100 million app downloads a week. You didn't see a headline — and it still matters.