Q&A Blog Post #2

Vulnerability Management Trends & Innovations to Watch in 2025

This is the second post in the series. Part 1 delved into the emerging trends and technologies in vulnerability management.

Closing Security Gaps

How can organizations avoid being overburdened by false positives and focus instead on vulnerabilities with real-world impact?

Micki Boland

Organizations need to work with a cybersecurity partner that delivers a high degree of efficacy in AI-based threat intelligence, meaning no true threats go undetected and there is a high degree of confidence that false positives are limited.

False positives create noise that wastes SOC analysts' time. Chasing them becomes a continuous "Chicken Little, the sky is falling" exercise, triggering wasted cycles of unnecessary investigation and escalation.

The opportunity cost of chasing false positives is that SOC analyst attention is diverted away from investigating and responding to true threats and incidents.

Highly correlated, real-time cybersecurity events are delivered with AI threat intelligence, and 3D vector graphing helps SOC analysts visualize the entire attack, dramatically accelerating investigation and response. Automated preventions close the exposure window.


Raj Samani

Actionable intelligence and context become critical. The ability to focus on vulnerabilities that are not only being actively exploited but are also likely to be exploited allows resources to be allocated more efficiently.

Organizations should seek out an exposure management solution that brings together 360-degree attack surface visibility, enriched with complete context on downstream compensating controls, and continuous validation to confirm the validity and potential impact of a given exposure or vulnerability.


Joe Petrocelli

New regulations and standards are significantly impacting vulnerability management, especially in highly regulated industries like healthcare and defense. These regulations, including HIPAA, NIST, DFARS, and CMMC, are placing increased emphasis on proactive vulnerability identification, risk-based prioritization, and timely remediation. 

Organizations are now required to implement more robust vulnerability management processes, conduct regular assessments, and demonstrate compliance through detailed documentation.  

This regulatory landscape is driving the adoption of advanced technologies like AI and machine learning for more accurate threat detection and automated vulnerability management.  

The focus has shifted from simply identifying vulnerabilities to prioritizing them based on their potential real-world impact and aligning with specific industry compliance requirements. As a result, vulnerability management has become a critical component of overall cybersecurity strategy, directly tied to regulatory compliance, risk mitigation, and maintaining business continuity in these sectors. 


Maor Kuriel, SentinelOne

There are several approaches organizations can take to reduce the noise of false positives and prioritize risk effectively, ensuring that attention and resources are directed toward addressing genuine threats.  

Utilizing frameworks like the Exploit Prediction Scoring System (EPSS) allows for contextual prioritization, focusing remediation efforts on vulnerabilities more likely to be exploited.   
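
As a rough illustration, the sketch below pulls EPSS scores from the public FIRST.org API and surfaces only the CVEs whose predicted exploitation probability exceeds a threshold. It is a minimal sketch, not a full workflow: the CVE list and the 0.1 cutoff are purely illustrative.

```python
import requests

EPSS_API = "https://api.first.org/data/v1/epss"  # public FIRST.org EPSS endpoint

def fetch_epss_scores(cve_ids):
    """Fetch EPSS probabilities for a batch of CVE IDs from the FIRST.org API."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cve_ids)}, timeout=30)
    resp.raise_for_status()
    # Each entry carries the EPSS probability and its percentile as strings.
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}

def prioritize(cve_ids, threshold=0.1):
    """Return CVEs whose exploitation probability exceeds the (illustrative) threshold,
    highest score first."""
    scores = fetch_epss_scores(cve_ids)
    hits = [(cve, score) for cve, score in scores.items() if score >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    findings = ["CVE-2021-44228", "CVE-2019-0708", "CVE-2017-0144"]  # example CVEs
    for cve, score in prioritize(findings):
        print(f"{cve}: EPSS {score:.3f}")
```

In practice the threshold, and whether EPSS is combined with CVSS or asset context, would be tuned to the organization's own risk appetite.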

Another way of reducing noise and enhancing detection accuracy is to optimize the configuration of scanning tools to align with the organization's specific environment, minimizing redundant or irrelevant alerts.   

Additionally, implementing automated validation processes confirms vulnerabilities before escalation, streamlining workflows and decreasing the manual effort required to sift through potential false positives. 


Tal Morgenstern

To manage false positives effectively, organizations should adopt intelligent prioritization through risk-based scoring systems that account for exploitability, asset criticality, and threat intelligence, while incorporating environmental context to evaluate the actual impact of vulnerabilities within their infrastructure.   

Consolidating a unified view of assets and exposures, supported by deduplicated data from all tools in the environment, provides an accurate, up-to-date perspective and helps identify discrepancies in asset or vulnerability information, significantly reducing false positives.   
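
A minimal sketch of that deduplication step, assuming a simple illustrative finding schema (asset, CVE, source, last-seen date) rather than any particular tool's output:

```python
def dedupe_findings(findings):
    """Merge findings from multiple tools into one record per (asset, CVE).

    `findings` is assumed to be a list of dicts with 'asset', 'cve',
    'source', and 'last_seen' keys -- an illustrative schema only.
    """
    merged = {}
    for f in findings:
        # Normalize keys so the same finding reported by two tools collapses to one.
        key = (f["asset"].lower(), f["cve"].upper())
        if key not in merged:
            merged[key] = {**f, "sources": {f["source"]}}
        else:
            record = merged[key]
            record["sources"].add(f["source"])
            # Keep the most recent sighting so the unified view stays current.
            record["last_seen"] = max(record["last_seen"], f["last_seen"])
    return list(merged.values())

# Placeholder records: two scanners reporting the same issue with different casing.
findings = [
    {"asset": "web-01", "cve": "CVE-2024-1234", "source": "scanner-a", "last_seen": "2025-01-10"},
    {"asset": "WEB-01", "cve": "cve-2024-1234", "source": "scanner-b", "last_seen": "2025-01-12"},
]
print(dedupe_findings(findings))  # one merged record attributed to both sources
```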

Additionally, clear communication with remediation owners is essential, as they often hold critical insights about the environment that may not be captured in the data.   

This collaboration enhances prioritization accuracy and ensures false positives are addressed more efficiently. 


DoronP

Instead of relying solely on CVSS scores, enterprises should adopt risk-based vulnerability management (RBVM) to prioritize vulnerabilities based on:

  • The criticality of affected assets
  • Threat intelligence (e.g., active exploits in the wild)
  • Business impact

This will help organizations focus limited resources on remediating the most impactful vulnerabilities (a simple scoring sketch follows below).
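
One way such a composite score might look, as a minimal sketch with placeholder weights and field names that would need tuning to a real environment:

```python
def rbvm_score(vuln, weights=(0.4, 0.4, 0.2)):
    """Illustrative composite risk score over the three factors listed above.

    `vuln` is assumed to carry normalized 0-1 inputs (illustrative names):
      asset_criticality - how important the affected asset is to the business
      exploit_activity  - threat-intel signal, e.g. 1.0 if exploited in the wild
      business_impact   - estimated impact of compromise
    The weights are placeholders, not a recommendation.
    """
    w_crit, w_threat, w_impact = weights
    return round(
        w_crit * vuln["asset_criticality"]
        + w_threat * vuln["exploit_activity"]
        + w_impact * vuln["business_impact"],
        3,
    )

# Hypothetical backlog: remediate by composite score rather than raw CVSS.
backlog = [
    {"id": "vuln-A", "asset_criticality": 0.9, "exploit_activity": 1.0, "business_impact": 0.8},
    {"id": "vuln-B", "asset_criticality": 0.3, "exploit_activity": 0.0, "business_impact": 0.2},
]
for v in sorted(backlog, key=rbvm_score, reverse=True):
    print(v["id"], rbvm_score(v))
```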

For IT systems beyond the security team's core expertise, such as storage and backup systems, IoT devices, or OT solutions, relying on superficial scans can create a false sense of security, leading teams to believe they are not vulnerable to attack. Nothing could be further from the truth.

Threat actors are notorious for obtaining privileged access to user accounts and finding their way into storage and backup systems. From there, they can wreak havoc.

Our research shows that on average, about 20% of storage & backup devices are currently exposed. That means they are wide open to attack from ransomware. 


What are the most common gaps organizations overlook in their security posture, and what can be done to help identify and remediate these vulnerabilities before they are exploited?

Micki Boland

What I see is a lack of cyber situational awareness and visibility:

  1. Enterprises without monitoring and logging enabled, especially in public cloud and hybrid cloud environments. This makes it impossible to identify whether vulnerabilities are actually being targeted by threats.

    We know that in both public and hybrid cloud, misconfigurations, weak or missing encryption, and excessive IAM permissions are the top vulnerabilities exploited in real time (a small example of checking for one such misconfiguration follows after this list).

  2. A weak security posture without logging or AI threat intelligence makes it virtually impossible to protect the organization against exploitation of vulnerabilities and misconfigurations.

    You cannot fix what you cannot see (whether human or automated bot). The old-fashioned approach of capturing logs and dumping them into a SIEM for SOC analysts to pore through terabytes of logging data does not help humans "see" threats or piece together the chain of events in a multi-vector, multi-phase attack.

    This is where real-time AI-based Threat Intelligence really helps.  
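
As one narrow illustration of making a common cloud misconfiguration visible to automation, the sketch below uses boto3 to flag S3 buckets that have no bucket-level public access block, or have one with settings disabled. It covers just one misconfiguration class among those mentioned above and assumes suitable AWS credentials are already configured.

```python
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    """Flag S3 buckets with an absent or incomplete bucket-level public access block."""
    s3 = boto3.client("s3")
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):
                exposed.append(name)  # at least one public-access setting is off
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                exposed.append(name)  # no block configured at all
            else:
                raise
    return exposed

if __name__ == "__main__":
    print("Buckets to review:", buckets_missing_public_access_block())
```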

Raj Samani

All too often, security teams try to identify and fix every vulnerability, an approach that is neither sustainable nor mindful of the business's top-level requirement of continuity.

Focusing on the key issues allows the security team to maximize productivity while properly balancing the organization’s objectives and its risk tolerance.   

Our research has consistently found that the most commonly overlooked gaps are also the most basic:   

41% of incidents Rapid7 MDR observed in 2023 were the result of missing or unenforced multi-factor authentication (MFA) on internet-facing systems, particularly VPNs and virtual desktop infrastructure.   

MFA should be universally implemented, tested, and enforced as a top priority.   


Joe Petrocelli

Metrics should not only measure vulnerability detection but also drive remediation efforts and risk reduction. Key metrics we recommend include:

  • Scan coverage and asset inventory to ensure comprehensive visibility across your environment.
  • Time-based metrics, such as average time to action and mean time to remediation, are crucial for assessing your team's responsiveness and efficiency (a small computation sketch follows after this list).
  • I’d also strongly advocate for risk-based metrics like total risk remediated and risk scores, which help prioritize efforts on the most critical vulnerabilities. 
  • Additionally, I’d encourage organizations to track the rate of vulnerability recurrence and average vulnerability age, as these metrics can uncover systemic issues in patch management processes.
  • The distinction between internal and external exposure is another vital metric, allowing for targeted risk mitigation strategies. 
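
A small sketch of computing two of these time-based metrics, mean time to remediation and average open vulnerability age, from finding records with an assumed opened/closed date schema:

```python
from datetime import date
from statistics import mean

def remediation_metrics(findings, today=None):
    """Compute MTTR and average open vulnerability age from finding records.

    Each record is assumed to carry an 'opened' date and an optional 'closed'
    date; the schema is illustrative, not tied to any particular scanner.
    """
    today = today or date.today()
    closed = [f for f in findings if f.get("closed")]
    still_open = [f for f in findings if not f.get("closed")]
    return {
        "mean_time_to_remediate_days": mean(
            (f["closed"] - f["opened"]).days for f in closed
        ) if closed else None,
        "avg_open_vulnerability_age_days": mean(
            (today - f["opened"]).days for f in still_open
        ) if still_open else None,
        "open_count": len(still_open),
        "closed_count": len(closed),
    }

# Placeholder findings: one remediated in 14 days, one still open.
findings = [
    {"opened": date(2025, 1, 1), "closed": date(2025, 1, 15)},
    {"opened": date(2025, 1, 5), "closed": None},
]
print(remediation_metrics(findings, today=date(2025, 2, 1)))
```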

Maor Kuriel, SentinelOne

Unpatched software remains a significant vulnerability, as outdated applications can carry known, exploitable flaws. Configuration weaknesses, such as improperly configured devices or services, can create entry points for attackers. Additionally, a lack of network segmentation allows threats to move laterally across systems with ease.

Organizations should conduct regular audits and attack simulations to identify and remediate these vulnerabilities before exploitation.  

Leveraging benchmarks like the Center for Internet Security (CIS) controls provides best practices for securing systems. 

Implementing these measures enhances the organization's ability to detect and address security gaps proactively. 


Tal Morgenstern

Unscanned assets pose a critical challenge by introducing unknown risks. 

As environments evolve and vulnerability scanners are updated, gaps may arise, leaving new assets, shadow IT, or misconfigured scanners unmonitored. These gaps create security vulnerabilities and expose organizations to unforeseen risks.  

To address this, proactive monitoring of asset inventory for unscanned devices or applications is essential. 
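
One way to operationalize that monitoring is to diff the asset inventory against the scanner's last-seen dates. The sketch below assumes illustrative data structures (a CMDB-style inventory and a per-asset last-scan date) rather than any specific CMDB or scanner API.

```python
from datetime import date, timedelta

def scan_coverage_gaps(inventory, scanned, max_age_days=30, today=None):
    """Surface inventory assets the scanner has never touched or not seen recently.

    `inventory` maps asset IDs to metadata and `scanned` maps asset IDs to the
    date of their last successful scan; both structures are illustrative.
    """
    today = today or date.today()
    stale_cutoff = today - timedelta(days=max_age_days)

    never_scanned = sorted(set(inventory) - set(scanned))
    stale = sorted(
        asset for asset, last_scan in scanned.items()
        if asset in inventory and last_scan < stale_cutoff
    )
    return {"never_scanned": never_scanned, "stale": stale}

inventory = {"web-01": {}, "db-01": {}, "iot-cam-7": {}}  # e.g. a CMDB export
scanned = {"web-01": date(2025, 1, 20), "db-01": date(2024, 11, 2)}
print(scan_coverage_gaps(inventory, scanned, today=date(2025, 2, 1)))
# -> {'never_scanned': ['iot-cam-7'], 'stale': ['db-01']}
```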

Moreover, assessing remediation efficiency requires a broader approach than simply counting vulnerabilities by severity. This limited perspective fails to account for crucial factors such as ongoing remediation efforts and the root causes of vulnerabilities, which are key to improving security outcomes.


DoronP

The major gaps include:  

  • Unmanaged SaaS applications 
  • Areas that require expertise InfoSec teams typically do not have: OT, ICS, storage arrays, backup appliances, data protection systems, IoT
  • AI 
  • Software Supply Chain 

Traditional vulnerability scanning tools often rely on installing agents on target systems to perform in-depth analysis. However, certain systems like storage arrays and backup appliances typically do not support the installation of third-party agents due to their specialized operating systems and proprietary architectures.   

Each storage and backup system may utilize its own operating system, APIs, and command-line interfaces (CLIs), creating a heterogeneous environment that complicates scanning efforts.    

In addition, many common vulnerability scanning tools do not include storage and backup systems in their support matrices. This absence results in outdated vulnerability databases for these critical systems, leaving organizations unaware of potential security risks. This is especially concerning given the growing number of storage & backup vulnerability exploits.  

I highly recommend complementing existing vulnerability scanners with specialized tools designed specifically for storage and backup platforms, IoT devices, or unmanaged SaaS applications.  

These scanners understand the unique architectures, operating systems, and underlying technologies of such systems, perform authenticated scans, and ensure comprehensive coverage across your network.


Read Part 3!

Part 3 examines the critical security metrics organizations can use to measure success and present to the Board.