This is Part 2 of a blog post series. Part 1 delved into the emerging trends and technologies in vulnerability management.
Organizations need to work with a cybersecurity partner that delivers highly effective AI-based threat intelligence: no true threats go undetected, and confidence is high enough to limit false positives.
False positives create noise that wastes SOC analysts' time. Chasing them becomes a continuous "Chicken Little, the sky is falling" exercise, triggering wasted cycles of unnecessary investigation and escalation.
The opportunity cost is steep: chasing false positives diverts SOC analyst attention away from investigating and responding to true threats and incidents.
Highly correlated, real-time cybersecurity events are delivered with AI threat intelligence, and 3D vector graphing helps SOC analysts visualize the entire attack, dramatically accelerating investigation and response. Automated preventions close the exposure window.
Actionable intelligence and context become critical. The ability to focus on the vulnerabilities that are being actively exploited, as well as those likely to be exploited, allows resources to be allocated more efficiently.
Organizations should seek out an exposure management solution that brings together 360-degree attack surface visibility enriched with complete context into downstream compensating controls, and continuous validation to confirm the validity and potential impact of a given exposure and/or vulnerability.
New regulations and standards are significantly impacting vulnerability management, especially in highly regulated industries like healthcare and defense. These regulations, including HIPAA, NIST, DFARS, and CMMC, are placing increased emphasis on proactive vulnerability identification, risk-based prioritization, and timely remediation.
Organizations are now required to implement more robust vulnerability management processes, conduct regular assessments, and demonstrate compliance through detailed documentation.
This regulatory landscape is driving the adoption of advanced technologies like AI and machine learning for more accurate threat detection and automated vulnerability management.
The focus has shifted from simply identifying vulnerabilities to prioritizing them based on their potential real-world impact and aligning with specific industry compliance requirements. As a result, vulnerability management has become a critical component of overall cybersecurity strategy, directly tied to regulatory compliance, risk mitigation, and maintaining business continuity in these sectors.
There are several approaches organizations can take to reduce the noise of false positives and prioritize risk effectively, ensuring that attention and resources are directed toward addressing genuine threats.
Utilizing frameworks like the Exploit Prediction Scoring System (EPSS) allows for contextual prioritization, focusing remediation efforts on vulnerabilities more likely to be exploited.
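As a sketch of what EPSS-based triage can look like in practice (the vulnerability records and the 0.1 threshold below are illustrative assumptions, not part of the EPSS standard):

```python
# EPSS assigns each CVE a probability of exploitation in the wild
# within the next 30 days. Sorting by that probability puts the
# vulnerabilities most likely to be exploited at the top of the queue.
# The records and the 0.1 triage threshold are illustrative only.
findings = [
    {"cve": "CVE-2023-0001", "epss": 0.02, "cvss": 9.8},
    {"cve": "CVE-2023-0002", "epss": 0.89, "cvss": 7.5},
    {"cve": "CVE-2023-0003", "epss": 0.45, "cvss": 6.1},
]

def prioritize(findings, epss_threshold=0.1):
    """Return findings at or above the EPSS threshold, most likely first."""
    urgent = [f for f in findings if f["epss"] >= epss_threshold]
    return sorted(urgent, key=lambda f: f["epss"], reverse=True)

for f in prioritize(findings):
    print(f["cve"], f["epss"])
```

Note that the highest-CVSS finding drops out of the urgent list entirely: contextual prioritization is about likelihood of exploitation, not raw severity.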
Another way of reducing noise and enhancing detection accuracy is to optimize the configuration of scanning tools to align with the organization's specific environment, minimizing redundant or irrelevant alerts.
Additionally, implementing automated validation processes confirms vulnerabilities before escalation, streamlining workflows and decreasing the manual effort required to sift through potential false positives.
To manage false positives effectively, organizations should adopt intelligent prioritization through risk-based scoring systems that account for exploitability, asset criticality, and threat intelligence, while incorporating environmental context to evaluate the actual impact of vulnerabilities within their infrastructure.
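A minimal sketch of such a risk-based score, combining the three factors named above; the weights and 0-to-1 input scales are assumptions made for illustration, not an industry standard:

```python
# Illustrative risk-based score weighting exploitability, asset
# criticality, and threat-intelligence signal. Weights are assumptions.
def risk_score(exploitability, asset_criticality, threat_intel,
               w_exploit=0.5, w_asset=0.3, w_intel=0.2):
    """All inputs normalized to [0, 1]; returns a 0-100 risk score."""
    raw = (w_exploit * exploitability
           + w_asset * asset_criticality
           + w_intel * threat_intel)
    return round(100 * raw, 1)

# An actively exploited vuln on a critical asset outranks a
# higher-severity vuln on a low-value, isolated machine.
print(risk_score(0.9, 1.0, 1.0))  # critical asset, active exploitation
print(risk_score(1.0, 0.2, 0.0))  # severe but isolated, no intel hits
```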
Consolidating a unified view of assets and exposures, supported by deduplicated data from all tools in the environment, provides an accurate, up-to-date perspective and helps identify discrepancies in asset or vulnerability information, significantly reducing false positives.
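The deduplication step can be sketched as follows; the field names and the choice of hostname as the match key are assumptions for illustration (real tools may correlate on IP, MAC, or cloud instance ID):

```python
# Sketch: merge asset records from multiple tools into one deduplicated
# view, keyed on a normalized hostname. Field names are assumptions.
def unify_assets(*sources):
    """Merge asset dicts from several tools; later sources fill gaps."""
    unified = {}
    for source in sources:
        for asset in source:
            key = asset["hostname"].lower()      # normalize before matching
            merged = unified.setdefault(key, {})
            for field, value in asset.items():
                merged.setdefault(field, value)  # first-seen value wins
    return unified

scanner = [{"hostname": "DB01", "ip": "10.0.0.5"}]
cmdb    = [{"hostname": "db01", "owner": "dba-team", "ip": "10.0.0.5"}]
assets = unify_assets(scanner, cmdb)
print(assets["db01"])  # one record combining scan data and CMDB owner
```

Two tools reporting the same host under different casing collapse into a single record, which is exactly the class of discrepancy that otherwise surfaces as a false positive.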
Additionally, clear communication with remediation owners is essential, as they often hold critical insights about the environment that may not be captured in the data.
This collaboration enhances prioritization accuracy and ensures false positives are addressed more efficiently.
Instead of relying solely on CVSS scores, enterprises should adopt risk-based vulnerability management (RBVM) to prioritize vulnerabilities based on real-world exploitability, asset criticality, and business impact rather than severity alone.
For IT systems beyond the Security team's core expertise, such as storage and backup systems, IoT devices, or OT solutions, relying on superficial scans can create a false sense of security: teams come to believe they are well protected when nothing could be further from the truth.
Threat actors are notorious for obtaining privileges to user accounts and finding their way into storage and backup systems. From there, they can wreak havoc.
Our research shows that, on average, about 20% of storage and backup devices are currently exposed, leaving them wide open to attack from ransomware.
What I see is a lack of cyber situational awareness and visibility.
All too often, security teams try to identify and fix every vulnerability, which is neither sustainable nor aligned with the business's top-level requirement of continuity.
Focusing on the key issues allows the security team to maximize productivity while properly balancing the organization’s objectives and its risk tolerance.
Our research has consistently found that the most commonly overlooked gaps are also the most basic:
41% of incidents Rapid7 MDR observed in 2023 were the result of missing or unenforced multi-factor authentication (MFA) on internet-facing systems, particularly VPNs and virtual desktop infrastructure.
MFA should be universally implemented, tested, and enforced as a top priority.
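A simple sketch of the kind of audit behind that recommendation: flag internet-facing accounts where MFA is missing or unenforced. The account records and field names here are illustrative assumptions, not the output of any particular identity provider:

```python
# Flag internet-facing accounts without enforced MFA -- the failure
# mode behind 41% of the incidents cited above. Records are illustrative.
accounts = [
    {"user": "alice", "service": "vpn", "internet_facing": True,  "mfa": True},
    {"user": "bob",   "service": "vdi", "internet_facing": True,  "mfa": False},
    {"user": "carol", "service": "git", "internet_facing": False, "mfa": False},
]

def mfa_gaps(accounts):
    """Internet-facing accounts lacking MFA, in priority order for fixing."""
    return [a for a in accounts if a["internet_facing"] and not a["mfa"]]

for gap in mfa_gaps(accounts):
    print(gap["user"], gap["service"])  # only the exposed, MFA-less accounts
```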
Metrics should not only measure vulnerability detection but also drive remediation efforts and risk reduction. Key metrics we recommend include:
Unpatched software remains a significant vulnerability, as outdated applications can harbor known exploits. Configuration weaknesses, such as improperly set-up devices or services, can create entry points for attackers. Additionally, a lack of network segmentation allows threats to move laterally easily across systems.
Organizations should conduct regular audits and attack simulations to identify and remediate these vulnerabilities before exploitation.
Leveraging benchmarks like the Center for Internet Security (CIS) controls provides best practices for securing systems.
Implementing these measures enhances the organization's ability to detect and address security gaps proactively.
Unscanned assets pose a critical challenge by introducing unknown risks.
As environments evolve and vulnerability scanners are updated, gaps may arise, leaving new assets, shadow IT, or misconfigured scanners unmonitored. These gaps create security vulnerabilities and expose organizations to unforeseen risks.
To address this, proactive monitoring of asset inventory for unscanned devices or applications is essential.
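At its core, that monitoring is a diff between the authoritative asset inventory and the set of assets the scanner actually covered; the hostnames below are illustrative assumptions:

```python
# Surface unscanned assets by diffing the authoritative inventory
# against scanner coverage. Hostnames are illustrative only.
inventory = {"web01", "web02", "db01", "backup01", "nas01"}
scanned   = {"web01", "web02", "db01"}

def unscanned(inventory, scanned):
    """Assets present in the inventory that no scan has touched."""
    return sorted(inventory - scanned)

print(unscanned(inventory, scanned))  # -> ['backup01', 'nas01']
```

Unsurprisingly, in this sketch it is the storage and backup devices that fall through the coverage gap, echoing the pattern described earlier.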
Moreover, assessing remediation efficiency requires a broader approach than simply counting vulnerabilities by severity. This limited perspective fails to account for crucial factors such as ongoing remediation efforts and the root causes of vulnerabilities, which are key to improving security outcomes.
The major gaps include:
Traditional vulnerability scanning tools often rely on installing agents on target systems to perform in-depth analysis. However, certain systems like storage arrays and backup appliances typically do not support the installation of third-party agents due to their specialized operating systems and proprietary architectures.
Each storage and backup system may utilize its own operating system, APIs, and command-line interfaces (CLIs), creating a heterogeneous environment that complicates scanning efforts.
In addition, many common vulnerability scanning tools do not include storage and backup systems in their support matrices. This absence results in outdated vulnerability databases for these critical systems, leaving organizations unaware of potential security risks. This is especially concerning given the growing number of storage & backup vulnerability exploits.
I highly recommend complementing existing vulnerability scanners with specialized tools designed specifically for storage and backup platforms, IoT devices, or unmanaged SaaS applications.
These vulnerability scanners understand the unique architectures, operating systems, and underlying technologies of these systems, perform authenticated scans, and ensure comprehensive coverage across your network.
Part 3 examines the critical security metrics organizations can use to measure success and present to the Board.
©2024 Continuity Inc. All rights reserved.