The New Stack Podcast

How Do We Protect the Software Supply Chain?

21 min • November 8, 2022

DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years.

 

“And now we're playing catch-up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less-than-ideal practices have taken root in the past five years. We're trying to help educate everybody now.”

 

Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps-ish newsletter.

 

“We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”

 

Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.

 

Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.

 

This podcast episode was sponsored by AWS.

‘Trust, but Verify’

For our podcast guests, “trust, but verify” is a slogan more organizations need to live by.

 

A lot of the security problems that plague the software supply chain, Black said, are companies — especially smaller organizations — “just pulling software directly from upstream. They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.”

 

That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push it back upstream.
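
As a concrete illustration of the "verify" step Black describes, here is a minimal Python sketch (the file path and pinned digest are hypothetical) that checks a downloaded artifact's SHA-256 hash against a value recorded when the artifact was first reviewed, and refuses to promote it if the digests differ:

```python
import hashlib
import sys

# Hypothetical pinned digest, recorded when the artifact was first reviewed.
PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256_of(path: str) -> str:
    """Stream the file so large artifacts don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    artifact = sys.argv[1]  # e.g. a downloaded binary or image tarball
    actual = sha256_of(artifact)
    if actual != PINNED_SHA256:
        print(f"hash mismatch for {artifact}: got {actual}", file=sys.stderr)
        sys.exit(1)  # do not promote the artifact to staging or production
    print(f"{artifact} matches the pinned digest")
```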

 

That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”
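
In the same spirit, the "approved source" safeguard Short mentions can be pictured as a simple allowlist check; the host names below are purely hypothetical and stand in for an internal, firewalled mirror:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of internal, firewalled mirrors.
APPROVED_HOSTS = {"mirror.internal.example.com", "registry.internal.example.com"}

def is_approved_source(url: str) -> bool:
    """Allow a pull only if the package or image comes from an approved mirror."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_HOSTS

for source in (
    "https://mirror.internal.example.com/pypi/simple/requests/",
    "https://pypi.org/simple/requests/",
):
    verdict = "allowed" if is_approved_source(source) else "blocked: not an approved source"
    print(f"{source} -> {verdict}")
```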

 

Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted.

 

More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job,” he said. “If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”

GitBOM and the ‘Signal-to-Noise Ratio’

As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. And even such highly touted tools as software bills of materials, or SBOMs, fall short of giving teams all the information they need to determine code’s safety.

 

“Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.”
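
To make the depth problem concrete, here is a small sketch, assuming a CycloneDX-style SBOM whose dependencies section lists "ref"/"dependsOn" entries, that walks the declared graph and reports how many layers down the longest chain goes (the file name is hypothetical):

```python
import json

# A minimal sketch, assuming a CycloneDX-style SBOM whose "dependencies"
# section lists {"ref": ..., "dependsOn": [...]} entries; "sbom.json" is hypothetical.
with open("sbom.json") as f:
    sbom = json.load(f)

graph = {d["ref"]: d.get("dependsOn", []) for d in sbom.get("dependencies", [])}

def depth(ref, seen=()):
    """Length of the longest chain of transitive dependencies below this component."""
    if ref in seen:  # guard against cycles in the declared graph
        return 0
    children = graph.get(ref, [])
    if not children:
        return 0
    return 1 + max(depth(child, seen + (ref,)) for child in children)

deepest = max(graph, key=depth, default=None)
if deepest is not None:
    print(f"longest chain starts at {deepest} and goes {depth(deepest)} layers down")
```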

 

Short brought up another issue with SBOMs: “There's nothing you can act on. The biggest thing for Ops teams or security teams is actionable information.”

 

While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.”
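
One way to picture what "actionable" might mean in practice is a filter that keeps only findings whose fix version is newer than what is actually installed; the packages, versions, and CVE identifiers below are invented for illustration:

```python
# All data below is invented for illustration: keep only findings whose fixed
# version is newer than what is actually installed, so teams see actionable items.
installed = {"bash": (5, 1, 16), "openssl": (3, 0, 7)}

findings = [
    {"id": "CVE-AAAA-0001", "package": "bash", "fixed_in": (4, 4, 0)},
    {"id": "CVE-AAAA-0002", "package": "bash", "fixed_in": (5, 2, 0)},
    {"id": "CVE-AAAA-0003", "package": "zlib", "fixed_in": (1, 2, 12)},
]

actionable = [
    f for f in findings
    if f["package"] in installed and installed[f["package"]] < f["fixed_in"]
]

for finding in actionable:
    print(f'{finding["id"]}: {finding["package"]} is older than the fix ({finding["fixed_in"]})')
```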

 

One project aimed at addressing the situation is GitBOM, a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high fidelity signal to defense teams,” said Black, who has worked on the project and produced a white paper on it this past January.

 

GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology builds a hash table connecting all of the dependencies in a project, forming what GitBOM’s creators call an artifact dependency graph.
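
A rough sketch of that idea, not the project's actual implementation, is to identify every build input and output by its Git-style object id and record which inputs each artifact was built from; the source files and binary below are hypothetical:

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """Git-style object id for a blob: SHA-1 over 'blob <size>\\0' plus the contents."""
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# Hypothetical build inputs and output, just to show the shape of the graph.
inputs = {
    "util.c": b"int add(int a, int b) { return a + b; }\n",
    "main.c": b"int main(void) { return 0; }\n",
}
output = b"\x7fELF...pretend-binary..."

# Artifact dependency graph: the output's id maps to the ids of everything
# that went into building it, so any input can be traced from the final artifact.
adg = {git_blob_id(output): sorted(git_blob_id(src) for src in inputs.values())}

for artifact_id, dependency_ids in adg.items():
    print(artifact_id, "was built from", dependency_ids)
```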

 

“We have a team working on a couple of proofs of concept right now,” Black said. “And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.”

 

In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel, need to do a better job of moving teams in the right direction as far as action,” he said.

 

At DevOps Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers.

 

“And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. “So there's a lot of complexity there, and we’ve got to help people understand the need for it, and how to implement it.”
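
As a toy illustration of the least-privilege rules that Kubernetes policy engines automate, the sketch below scans a hypothetical pod spec for containers that run privileged or without runAsNonRoot:

```python
# A toy illustration of the kind of least-privilege rule a policy engine would
# enforce automatically at admission time; the pod spec below is hypothetical.
pod_spec = {
    "containers": [
        {"name": "app", "securityContext": {"runAsNonRoot": True}},
        {"name": "sidecar", "securityContext": {"privileged": True}},
    ]
}

def violations(spec):
    """Yield human-readable reasons this spec violates a least-privilege policy."""
    for container in spec.get("containers", []):
        ctx = container.get("securityContext", {})
        if ctx.get("privileged"):
            yield f'{container["name"]}: privileged containers are not allowed'
        if not ctx.get("runAsNonRoot"):
            yield f'{container["name"]}: must set runAsNonRoot'

for problem in violations(pod_spec):
    print(problem)
```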

 

Listen to the whole podcast to learn more about the state of software supply chain security.
