
There’s a line of shell that looks harmless, even helpful:
curl <XXX> | bash
It promises speed. Convenience. “Just install the thing and get on with your day.”
That promise is exactly why attackers love it.
Table of Contents
- It starts with you
- A dangerous myth: “I downloaded it first”
- The tools you trust are more powerful than you think
- Developers are now the soft perimeter
- The first domino
- How one developer compromises another
- The blast radius expands
- Supply chain contagion
- This isn’t theoretical
- Breaking the chain
- This is your fight
That one-liner doesn’t merely install a tool. It hands execution control to whatever sits at the other end of a network request, running with your permissions, on your machine, inside your environment. There is no sandbox. No review step. No guardrails.
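If you need the tool anyway, the safer shape is download, verify, then execute, so the bytes you reviewed are the bytes that run. A minimal sketch follows; the vendor URL is a placeholder, and the "download" is simulated with a local file so the example is self-contained. In real use the pinned hash must come from the vendor, out of band, never derived from the download itself.

```shell
# In real use, step 1 is the download (URL is a placeholder):
#   curl -fsSL -o install.sh "https://example.com/install.sh"
# For this self-contained sketch, stand in a one-line script:
printf 'echo tool installed\n' > install.sh

# Step 2: verify against a hash the vendor publishes out of band.
# (Here it is computed locally only so the sketch runs end to end;
# pinning a hash derived from the same download proves nothing.)
pinned=$(sha256sum install.sh | cut -d' ' -f1)
echo "$pinned  install.sh" | sha256sum -c - >/dev/null || exit 1

# Step 3: execute the exact bytes you verified. Nothing between your
# review and the run can swap the payload.
sh install.sh
```

The point of the file on disk is not ceremony: it freezes the payload, so inspection, verification, and execution all see the same artefact.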
Despite how we often talk about security, most modern supply-chain breaches don’t begin in a data centre. They don’t start with a firewall misconfiguration or a zero-day in Kubernetes. Instead, they start silently, on a developer laptop, when someone installs something they didn’t stop to inspect.
It starts with you
At this point, many developers reach for a familiar defence.
A dangerous myth: “I downloaded it first”
There’s a comforting belief that goes like this:
“I didn’t curl | bash. I downloaded the script first and looked at it.”
That feels safer. Unfortunately, it isn’t. What you inspected is not necessarily what you executed.
Modern attack infrastructure routinely serves different payloads to different clients. The same URL might return one script to a browser, another to curl, and a third to a CI runner or a specific operating system. User-Agent headers, IP ranges, TLS fingerprints, geolocation, and even timing can all influence what code you receive.
As a result, you might review a clean, boring install script in your editor, only to execute a different script later. One tailored to your environment, your platform, or your role.
This technique is neither novel nor rare. Malware delivery systems, phishing kits, and supply-chain attacks use it precisely because it defeats casual inspection and complicates post-incident forensics. The evidence you saved no longer matches the code that ran, and by the time you notice, the server has already moved on.
The lesson is critical: trust cannot be established by eyeballing a transient network response. If execution depends on a live server you don’t control, you are still trusting that server, no matter how careful you believe you were.
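The divergence is easy to simulate. In the sketch below, fake_server is a hypothetical stand-in for attacker infrastructure that keys on the User-Agent header, and exfiltrate_creds is likewise an invented placeholder; real servers can also key on IP range, TLS fingerprint, or timing.

```shell
# Stand-in for a malicious endpoint that serves different bytes to
# different clients, keyed here on the User-Agent string.
fake_server() {
  case "$1" in
    Mozilla*) printf 'echo installing tool\n' ;;                    # clean copy for human reviewers
    *)        printf 'echo installing tool\nexfiltrate_creds\n' ;;  # payload for everything else
  esac
}

fake_server "Mozilla/5.0 (X11; Linux)" > reviewed.sh   # what you read in your editor
fake_server "curl/8.5.0"               > executed.sh   # what curl | bash actually received

cmp -s reviewed.sh executed.sh || echo "reviewed bytes != executed bytes"
```

Both responses come from the same URL, and the one you audited is not the one that ran.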
The tools you trust are more powerful than you think
There’s another assumption hiding underneath all of this: “I know what this command does.”
In practice, most developers don’t.
We tend to learn tools by copying incantations rather than reading manuals. curl is a perfect illustration. It looks simple: fetch a file, pipe it somewhere. Yet under the hood it’s a fully-featured data transfer engine with dozens of options that alter behaviour in subtle, security-relevant ways.
Redirects can be followed without drawing attention. Protocols can be negotiated that you never intended to allow. TLS validation can be weakened or bypassed. Output can be written to locations you didn’t expect. Headers you never set can still be sent. What appears to be “download this script” can turn into “negotiate, transform, and execute something else entirely.”
Attackers understand this asymmetry very well. They know most developers don’t know which flags matter, which defaults are dangerous, or which combinations fundamentally change the trust model of the command. As a result, they don’t need you to type anything exotic. They only need you to assume the tool behaves the way you think it does.
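To make that asymmetry concrete, here is what narrowing curl’s defaults looks like. All flags shown are real curl options; the URLs are placeholders.

```shell
# A hardened fetch, each flag narrowing curl's default behaviour
# (URL is a placeholder):
#
#   curl -fsSL --proto '=https' --proto-redir '=https' --max-redirs 3 \
#     -o tool.sh "https://example.com/install.sh"
#
#   --proto '=https'        refuse any scheme that is not HTTPS
#   --proto-redir '=https'  ...including after a redirect
#   --max-redirs 3          cap the redirect chain that -L follows
#   -f                      fail on HTTP errors instead of saving an error page
#
# The protocol allow-list is enforced when the URL is parsed, before
# curl even resolves the host, so a downgrade to plain HTTP is
# rejected locally:
curl -s --proto '=https' "http://example.com/" || echo "blocked: not https"
```

None of these flags are exotic; they simply make explicit a trust model that the bare command leaves to server-controlled defaults.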
This problem isn’t unique to curl. It applies to every tool that sits between you and the network: package managers, bootstrappers, wrappers, and version managers. These tools are powerful, composable, and opaque unless you’ve taken the time to understand their failure modes.
Most of us haven’t.
Fortunately for attackers, zero-days aren’t required when defaults and misunderstandings are enough.
Developers are now the soft perimeter
Once you zoom out, the pattern becomes obvious.
Attackers know exactly where to hide, and as we’ve already seen, they don’t have to work very hard. Exotic exploits aren’t necessary when routine tooling and humans already execute code automatically. Git hooks fire on commit. IDE plugins “enhance productivity.” Build scripts and post-install steps run simply because that’s how the ecosystem works.
Nothing crashes. Nothing looks broken. The code behaves exactly as expected. Just not exclusively for you.
In the background, a token is copied. A config file is read. Environment variables are scraped. SSH agents are queried. All of this happens under the cover of normal development activity, surrounded by logs that look like noise and behaviour that blends into the background.
The first domino
Once a developer machine is compromised, attackers suddenly have options.
Git hooks trigger automatically. Local caches can be poisoned. IDE extensions observe workflows. Secrets that were never meant to leave a laptop (API keys, cloud credentials, signing material) leak out in quantities small enough to avoid suspicion.
This isn’t smash-and-grab. It’s patient, ambient execution: malicious behaviour that lives inside your workflow instead of attacking it from the outside.
How one developer compromises another
At this stage, the attack is no longer personal.
After compromising a single developer machine, attackers no longer need to wait for CI. Instead, they move sideways, developer to developer, using the very tools teams rely on to keep builds “consistent.”
Wrapper scripts make an ideal vehicle.
Maven Wrapper. Gradle Wrapper. Checked into source control. Executed automatically. Rarely reviewed after the day they were added.
If an attacker modifies a wrapper script or its bootstrap configuration on a compromised machine, that change doesn’t remain local. It gets committed. It gets pushed. Every other developer who pulls the repository then runs attacker-controlled code the next time they type ./mvnw or ./gradlew.
This is how a single compromised laptop becomes a team-wide execution surface. Each time a developer runs the wrapper, they re-establish trust and enable lateral propagation of the breach, long before any central system is involved.
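Checksum pinning closes this particular gap. Gradle’s wrapper already supports a distributionSha256Sum entry in gradle-wrapper.properties for the distribution it downloads; the sketch below applies the same idea to the wrapper files themselves, using a stand-in file so it runs anywhere.

```shell
# 'gradle-wrapper.jar' here is a stand-in file; in a real repo you
# would pin gradle/wrapper/gradle-wrapper.jar (and ./gradlew itself).
printf 'wrapper bytes\n' > gradle-wrapper.jar

# One-time, on a trusted machine: record the pin and commit it.
sha256sum gradle-wrapper.jar > wrapper.sha256

# Every build, locally and in CI: refuse a wrapper whose bytes drifted.
sha256sum -c wrapper.sha256 >/dev/null 2>&1 || { echo "wrapper modified"; exit 1; }

# Simulate the attacker's commit swapping in a modified wrapper:
printf 'wrapper bytes plus payload\n' > gradle-wrapper.jar
sha256sum -c wrapper.sha256 >/dev/null 2>&1 || echo "wrapper modified"
```

The check costs milliseconds, and it converts "rarely reviewed after the day it was added" into a diff someone has to explain.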
By the time CI notices, the infection has already finished spreading.
The blast radius expands
Eventually, one of the stolen credentials opens a larger door.
A CI token. A bot account. A publish key.
At that point, the attacker isn’t running code as a developer anymore. They’re running code as your pipeline. Build scripts are subtly altered. Wrapper logic is adjusted. New behaviour appears in places nobody thinks to inspect, because “it’s always been there.”
The privileges involved are no longer local. They’re organisational. Production-adjacent. Sometimes fully production-grade.
And the developer who ran curl | bash never sees the moment it all tipped over.
Supply chain contagion
From there, attackers deliberately push the infection through the supply chain, exploiting the mechanics modern software relies on.
Through dependencies. Through transitive resolution. Through typosquatting and dependency confusion. Through packages that inherit trust simply by existing in the graph.
At this stage, your build artefacts carry more than your code. They carry someone else’s intent, downstream into systems and teams that had nothing to do with the original mistake.
That’s how a single developer action becomes an ecosystem-level problem.
This isn’t theoretical
We’ve seen this pattern repeat across ecosystems.
In Python, packages published to PyPI have abused native loading paths to establish command-and-control channels. In JavaScript, malicious postinstall scripts on npm have quietly harvested credentials at build time, long before an application ever ran. And breaches of developer tooling itself show how attackers turn one compromised integration into many, precisely because these tools sit so deep inside everyday workflows.
None of these attacks relied on novel techniques. They relied on assumptions.
That install scripts are benign.
That developer machines are disposable.
That “this is just how tools work.”
Breaking the chain
There’s no silver bullet here, and there isn’t another product that fixes this for you.
The first step is refusing blind trust. Stop piping unknown scripts directly into a shell. Stop assuming convenience implies safety. When code executes automatically, it deserves scrutiny, especially in development environments where privileges are wide and monitoring is thin.
Isolation matters. Dependency hygiene matters. Short-lived credentials limit the value of what inevitably leaks. And observability inside developer workflows (watching for unexpected execution paths rather than just runtime failures) changes what you notice and when.
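One concrete example of refusing ambient execution: npm’s ignore-scripts setting (a real npm option, equivalent to passing --ignore-scripts to npm install) disables lifecycle scripts such as postinstall for installs run under that config. The cost is that packages which genuinely need their install scripts must be re-enabled selectively, after review.

```shell
# Per-project opt-out: with this in .npmrc, npm installs dependencies
# but does not run their preinstall/install/postinstall scripts.
echo 'ignore-scripts=true' >> .npmrc
grep -q '^ignore-scripts=true' .npmrc && echo "lifecycle scripts disabled"
```

It won’t stop a malicious package you import and run, but it removes the "code runs because it exists" step from every install.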
Above all, we need to acknowledge modern realities: ambient execution is now the norm. Code runs because it exists, not because you explicitly invoked it.
This is your fight
If you write code, this isn’t someone else’s problem.
The attacker is already in the toolchain. The only real question is whether you’ll spot the signs before the compromise jumps from your laptop to your pipeline and from your pipeline to everyone who depends on you.
The breach doesn’t always start in the application. It often starts with a developer installing a compromised component.
That developer is all of us.
