
An Auditor’s Perspective on Addressing OSS Vulnerabilities for PCI DSS v4

Learn how your organization can achieve PCI DSS v4 compliance for managing open source software vulnerabilities with reachability-based SCA and more.

Written by
Jenn Gile, Director of Product Marketing at Endor Labs
Published on
May 2, 2024

PCI DSS version 4.0 contains a host of new practices that will become requirements on March 31, 2025. In this article, we focus on a vulnerability management change that looks — at first glance — to be minor, but in reality could have significant implications for Application Security (AppSec) teams. We talked with Joe O’Donnell (Senior Manager at Schellman, a leading provider of attestation and compliance services) about this new requirement and how his firm approaches PCI DSS compliance. 

PCI DSS Requirement 11.3.1.1

Requirement 11 covers the regular identification, prioritization, and addressing of external and internal vulnerabilities. Sub-requirement 11.3 narrows the focus to internal vulnerability scans, and it is here that we find an impactful new requirement, 11.3.1.1, which reads:

All other applicable vulnerabilities (those not ranked as high-risk or critical per the entity’s vulnerability risk rankings defined at Requirement 6.3.1) are managed as follows:

  • Addressed based on the risk defined in the entity’s targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.
  • Rescans are conducted as needed.

To summarize the change, previously PCI DSS only required that high and critical vulnerabilities be addressed. This new requirement is an acknowledgement that all vulnerabilities, regardless of criticality, provide a potential avenue of attack and must be dealt with. To achieve compliance in this area, auditors will seek to verify whether all vulnerabilities have been addressed. 

What Does it Mean to “Manage Vulnerabilities”?

The concept of “managing vulnerabilities” (also referred to as “addressing vulnerabilities”) can be surprisingly controversial because there are several ways to interpret both “managing” and “vulnerabilities.” We will provide definitions that align to the context of PCI DSS and how auditors evaluate programs for compliance.

In the security industry, a “vulnerability” can be defined narrowly or broadly depending on who is defining it. For example, many AppSec teams consider a vulnerability to equal known vulnerabilities, a.k.a. CVEs. But with PCI DSS, the definition goes far beyond CVEs. O’Donnell says, “In the context of PCI DSS, a vulnerability is anything that potentially can cause serious harm to the environment.” This definition includes known vulnerabilities as well as a host of other risks, for example malware. A vulnerability, in and of itself, is simply an opportunity for exploitation.

For PCI DSS compliance, the analysis of a vulnerability matters much more than its face value. It’s often assumed that managing or addressing a vulnerability is synonymous with remediating it, meaning that the vulnerability has to be fixed. However, there are several ways a vulnerability can be managed without requiring a fix (which we will get into shortly). The point of this new PCI DSS requirement is to ensure organizations don’t make remediation decisions based solely on CVSS scores. Instead, the new expectation is that the organization determines whether each vulnerability has a major impact on the environment. According to O’Donnell, “If an organization has performed a detailed assessment of a vulnerability, and determined it does not require remediation (for example, because it is unreachable), then we consider it to have been managed in alignment with PCI DSS.”

When it comes to managing vulnerabilities, there is also a time component. Requirement 6.3.3 indicates that anything your organization has determined to be a critical or high severity vulnerability must be patched or updated within one month of release, whereas lower-tier risks must be managed within an appropriate timeframe determined by your organization’s processes.
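That time component can be made concrete in policy automation. Here is a minimal sketch of a deadline calculator: only the one-month window for critical/high findings comes from Requirement 6.3.3; the medium and low windows are hypothetical placeholders your organization would define itself.

```python
from datetime import date, timedelta

# Only the 30-day window for critical/high is mandated by Requirement 6.3.3.
# The medium/low windows below are illustrative, org-defined examples.
REMEDIATION_WINDOWS = {
    "critical": timedelta(days=30),
    "high": timedelta(days=30),
    "medium": timedelta(days=90),   # example org-defined window
    "low": timedelta(days=180),     # example org-defined window
}

def remediation_deadline(severity: str, released: date) -> date:
    """Return the date by which a finding must be addressed."""
    return released + REMEDIATION_WINDOWS[severity.lower()]

print(remediation_deadline("high", date(2025, 3, 31)))  # 2025-04-30
```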

To summarize, managing vulnerabilities can vary from organization to organization depending on the system you use to assess and classify risk. We’re going to dive deep into how to assess and triage risks for open source software (OSS) vulnerabilities.

Open Source Vulnerabilities Present Greatest Threat

Section 11.3 is focused on internal vulnerability scans for first-party software and OSS. To detect risks and facilitate compliance in application code, most organizations use two types of tools:

  • SAST (Static Application Security Testing) tools examine source or binary code to discover coding errors that represent actual or potential security risks.
  • SCA (Software Composition Analysis) tools examine a software project’s dependencies for known vulnerabilities and related risks.

Recent estimates from the likes of GitHub indicate that anywhere from 70-97% of modern code bases are OSS, so you can expect that the majority of your organization’s vulnerabilities will come from those OSS packages. Given this prevalence, it’s no wonder that OSS is considered to generate more risk than first-party code. But don’t be alarmed! OSS is generally good for business — in fact, research by Harvard Business School indicates that developing this code from scratch would cost firms $8.8 trillion — so the answer clearly isn’t to eradicate it. A better path is to set up guardrails that encourage selection of safe OSS and leverage an SCA tool that can identify and prioritize vulnerabilities already in your code.

Why Noisy SCA Tools Prevent Compliance

Before we jump into the tactics for achieving compliance with this new standard, it’s important to understand a big problem faced by many enterprises: SCA tools are notorious for producing noise. What does this mean, why does it happen, and why does it matter to compliance?

  • What is SCA noise? SCA noise is vulnerability findings that have to be manually researched, usually by a software or AppSec engineer, and triaged accordingly.
  • What causes SCA noise? SCA noise happens when a tool flags software components that contain vulnerable code, even though that code is unused or unexploitable in the application.
  • Why does SCA noise matter for PCI DSS? When you can’t tell which findings are real, you’re left with two options: manually research each one, or throw them all over the fence for engineering to sort out. Best case, you stay compliant but overextend your engineering team. Worst case, risks get remediated in the order they’re discovered, regardless of severity, leaving open the possibility of exploit or PCI DSS failure.

The root cause of SCA noise is a lack of context to accompany the results. The method an SCA tool uses to determine what’s in use largely determines whether context is available to enable fast decision making. Several factors lead to poor context:

  • Relies mainly on CVSS, which measures vulnerability severity, not risk
  • Has a “guilty until proven innocent” approach (you have the package, therefore you're vulnerable)
  • Fails to consider likelihood of exploit and whether the issue relates to releasable code (instead of e.g. test code)
  • Does not factor in the risks of remediation actions (i.e. the tool tells you to fix something in a way that actually introduces more overall risk to the org just to fix a security risk)

How the SCA tool gets information on your dependencies dictates how much useful context is provided. Most SCA tools scan only the OSS package manifest to figure out your dependencies, but this is problematic because manifests present an incomplete picture. A manifest can be compared to a party guest list: it tells you which OSS packages were invited to your application, but not which dependencies (bonus party guests) they invited along. A manifest isn’t guaranteed to be accurate either; it’s commonplace for manifests to become outdated, and a developer can use a component that’s not listed. So in addition to producing noise, SCA tools that only use the manifest also miss dependencies. And you can’t secure what you can’t see.
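The guest-list analogy above can be sketched in a few lines. This is illustrative only: the package names and the dependency graph are made up, and a real SCA tool would resolve transitive dependencies from the ecosystem rather than a hardcoded table.

```python
# Illustrative only: shows why a manifest alone under-counts dependencies.
declared = {"requests", "flask"}  # what the manifest "invited"

# Transitive dependencies each package pulls in (the "bonus guests").
resolved = {
    "requests": {"urllib3", "charset-normalizer", "idna", "certifi"},
    "flask": {"werkzeug", "jinja2", "click", "itsdangerous"},
}

def full_inventory(declared, resolved):
    """Expand declared packages into the complete dependency set."""
    inventory = set(declared)
    for pkg in declared:
        inventory |= resolved.get(pkg, set())
    return inventory

everything = full_inventory(declared, resolved)
invisible = everything - declared
print(f"Manifest lists {len(declared)} packages; "
      f"{len(invisible)} transitive dependencies are invisible to it.")
```

Every package in `invisible` is a potential risk vector the manifest-only scan never sees.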

With this understanding, just imagine how difficult it is for organizations to address vulnerabilities in their OSS code, nevermind achieve PCI DSS compliance!

Reduce Noise with Reachability-Based SCA

The solution to the SCA problem is to adopt an SCA tool that provides full visibility into OSS dependencies and gives teams the necessary context to de-prioritize false alarms and issues that won’t cause serious harm to the environment. An SCA tool should help organizations determine which OSS vulnerabilities need to be fixed using several factors beyond CVSS, including:

  • Is the function containing the vulnerability “reachable”? (more on that below)
  • Is there a fix available?
  • Is it in production code (not test code)?
  • Is there a high probability of exploitation (high EPSS)?
  • Is the risk of performing the upgrade lower than the security risk it repairs?
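The factors above can be combined into a triage rule. The sketch below is a hypothetical illustration, not a prescribed PCI DSS algorithm: the `Finding` fields mirror the bullet list, and the EPSS threshold is an arbitrary example value.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float
    reachable: bool       # is the vulnerable function reachable?
    fix_available: bool
    in_production: bool   # releasable code, not test code
    epss: float           # estimated probability of exploitation (0..1)

def needs_fix(f: Finding, epss_threshold: float = 0.1) -> bool:
    """Hypothetical triage rule combining factors beyond CVSS.
    The threshold is illustrative, not prescribed by PCI DSS."""
    return (f.reachable and f.in_production
            and f.fix_available and f.epss >= epss_threshold)

findings = [
    Finding("CVE-2024-0001", 9.8, reachable=False, fix_available=True,
            in_production=True, epss=0.6),   # critical but unreachable
    Finding("CVE-2024-0002", 6.5, reachable=True, fix_available=True,
            in_production=True, epss=0.3),   # medium but exploitable
]
print([f.cve for f in findings if needs_fix(f)])  # ['CVE-2024-0002']
```

Note how the CVSS 9.8 finding drops out of the fix queue while the CVSS 6.5 finding stays in, which is exactly the inversion a CVSS-only policy would miss.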

This is where relatively new concepts come into play: Program Analysis and Reachability. 

Accurate Dependency Inventories with Program Analysis

As we established above, relying on the OSS manifest is problematic because it doesn’t represent an accurate list of OSS dependencies. An SCA tool that uses program analysis instead creates a working model of your application’s behavior to identify exactly which dependencies are actually being used and how they connect to each other. This produces an accurate source of truth, so you can be confident that your organization has identified all the OSS risk vectors.
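To give a flavor of what “analyzing the program rather than the manifest” means, here is a toy static-analysis sketch using Python’s standard `ast` module: it parses source code and records which modules are actually imported. Real program analysis goes far beyond this (call graphs, cross-package resolution), but the principle of deriving facts from the code itself is the same.

```python
import ast

# Example source under analysis (a made-up snippet for illustration).
SOURCE = """
import json
from collections import OrderedDict

def handler(payload):
    return json.dumps(OrderedDict(sorted(payload.items())))
"""

def imported_modules(source: str) -> set:
    """Collect module names the code actually imports."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module)
    return mods

print(sorted(imported_modules(SOURCE)))  # ['collections', 'json']
```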

Faster Remediation with Reachability

With a program analysis approach in place, the concept of reachability is unlocked. More specifically, function-level reachability, which tells you whether a vulnerable package is exploitable at the function level. Function-level reachability is the closest we can get to certainty that a finding is a false positive, and therefore provides tremendous value. At Endor Labs, we find that function-level reachability reduces false positives by an average of 80%, with some customers achieving 98+% noise reduction. The AppSec team can automatically apply a reachability filter to weed out the noise, which both accelerates remediation of vulnerabilities and helps meet the standard of “managing vulnerabilities.” After all, if a vulnerability can’t be exploited, there’s no need to remove it from the code base.

For more on this topic, watch The Basics of Dependency Management and SCA (and Why Your SCA is Wrong), in which Endor Labs CTO Dimitri Stiliadis breaks down the basics of dependency management and how SCA tools work.

Implement Safe OSS Selection

As part of the strategy to address — and prevent — vulnerabilities, organizations implement initiatives like Open Source Program Offices (OSPO) to govern the selection process. Early activities for OSPOs usually tackle legal elements of OSS, including licensing, inventory, and developer education. For this stage to be successful, a combination of strategy and tool automation is necessary.

For people who aren’t intimately familiar with OSS, it might not be readily apparent how to determine the organization’s risk tolerance in a way that isn’t based solely on opinion. We recommend adopting a scoring system (like the Endor Score) that looks at overall package health across four categories:

  • Security: Packages with poor security scores (covering issues not limited to CVEs) can be expected to have more security-related issues than packages with better scores.
  • Activity: Packages that are active are presumably better maintained and are therefore more likely to respond quickly to future security issues.
  • Popularity: Widely used packages tend to have the best support and are more likely to have latent security issues discovered and reported by the community.
  • Quality: Using best practices for code development will help you align with standards including PCI DSS and SSDF.

Next, an OSPO (or similar initiative that has engineering buy-in) should implement pre-commit checks to warn developers when they’ve selected a dependency that has unacceptable health scores and could expose your environment to a new vulnerability.
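A pre-commit gate of this kind could look like the following sketch. The four categories mirror the list above, but the threshold values and the example package are entirely made up; your OSPO would set its own numbers.

```python
# Illustrative pre-commit gate: warn on new dependencies whose health
# scores fall below org-defined thresholds. All numbers are made up.
THRESHOLDS = {"security": 6.0, "activity": 5.0,
              "popularity": 4.0, "quality": 5.0}

def check_dependency(name: str, scores: dict) -> list:
    """Return a list of threshold violations (empty = acceptable)."""
    return [f"{name}: {cat} score {scores.get(cat, 0.0)} < {minimum}"
            for cat, minimum in THRESHOLDS.items()
            if scores.get(cat, 0.0) < minimum]

violations = check_dependency(
    "leftpad-ng", {"security": 3.2, "activity": 2.1,
                   "popularity": 7.5, "quality": 5.5})
for v in violations:
    print("BLOCKED:", v)  # a real hook would exit non-zero here
```

Wiring this into a pre-commit hook means a risky dependency is flagged before it ever lands in the code base, rather than discovered later by a scanner.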

In other words, keep your volume of vulnerabilities, well, manageable, so your team isn’t chasing down preventable risk.

Next Steps for PCI DSS v4 Compliance

Vulnerability management is easily one of the more difficult requirements to demonstrate compliance with during a PCI assessment. The issue is often that security teams lack the right kind of visibility with their current tools to adequately manage vulnerabilities (in fact, too much visibility into the wrong things is the status quo). A traditional tracking system may not suffice and could be too burdensome to manage. It is also likely that current tools do not provide enough visibility into the vulnerabilities that are actually present, causing confusion and potential gaps in compliance.

Let’s hear from O’Donnell one more time: “At Schellman, we are constantly trying to improve the user experience for our clients. A tool or set of tools such as what Endor Labs provides can greatly reduce the confusion around managing vulnerabilities by security teams. For an audit, we would be able to use the platform provided by Endor Labs to evidence some of this activity. The most challenging part of these requirements is trying to provide evidence that vulnerabilities are actively being managed. Having the ability and visibility for a tool to use analytics to make the lives of security teams easier and show us that evidence, is extremely helpful to us.”

Get Started with Endor Labs

Endor Labs is SOC 2 Type II certified and can help you achieve PCI DSS compliance, as well as FedRAMP, DORA, and more.

To get started with Endor Labs, start a free 30-day trial or contact us to discuss your use cases.
