Testing a Threat Pattern: Quality is Never an Accident

John Ruskin, one of the great visionaries of the 19th century, said, “Quality is never an accident; it is always the result of intelligent effort.” In our continuing journey through the lifecycle of a threat pattern, we are now at the testing phase. After analyzing requirements, assets, and threats, designing a general and reusable model for the threat pattern, and implementing the model in our security monitoring platform, we cannot assume no mistakes were made. This is why a well-structured and accurate testing process is vital – to validate that the pattern has been implemented correctly and to ensure we can effectively address the attack scenario.

Even in this phase, the typical process of the Software Development Life Cycle (SDLC) can help us head in the right direction. Once implemented, software is usually tested with two distinct but complementary methodologies: a white-box and a black-box approach. Let’s see how this would apply to our newly implemented threat pattern.

White-box testing is intended to verify the threat pattern’s logic and ensure the implementation is consistent with the requirements. If the design phase was approached correctly, the different components making up the pattern should already be well defined and documented. For each of them, we need to prepare a list of specific inputs and expected outputs that exercise each part of the implementation. As with any software, testing progress can be quantified by the percentage of code covered by the tests.
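
As a rough illustration – assuming a hypothetical component exposed as a plain Python rule, with every name below invented for the example rather than taken from any product – the input/expected-output pairs can be captured as ordinary unit tests:

    # Minimal sketch: white-box tests for a hypothetical "failed login burst" component.
    # The rule, field names, and thresholds are assumptions made for illustration only.
    import unittest

    def failed_login_burst(events, threshold=5, window_seconds=60):
        """Fire when `threshold` failed logins occur within `window_seconds`."""
        failures = sorted(e["ts"] for e in events if e["outcome"] == "failure")
        for i in range(len(failures) - threshold + 1):
            if failures[i + threshold - 1] - failures[i] <= window_seconds:
                return True
        return False

    class FailedLoginBurstTests(unittest.TestCase):
        def test_burst_inside_window_triggers(self):
            events = [{"ts": t, "outcome": "failure"} for t in range(0, 50, 10)]  # 5 failures in 40s
            self.assertTrue(failed_login_burst(events))

        def test_slow_failures_do_not_trigger(self):
            events = [{"ts": t, "outcome": "failure"} for t in range(0, 500, 100)]  # spread over 400s
            self.assertFalse(failed_login_burst(events))

    if __name__ == "__main__":
        unittest.main()

Each test corresponds to one row of the inputs/expected-outputs list, so code coverage can be read directly from which parts of the component such tests exercise.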

This process may be time consuming – and time is a precious resource in a Security Operations Center (SOC) – so automating even part of it is key. Luckily, at this stage we should have a good amount of historical data at our disposal, as information has already started flowing into the system. Accurate sample selection is critical: the samples must ensure both good horizontal coverage (i.e. the number of components tested) and good vertical coverage (i.e. the variety of inputs). Mapping each selected sample to the component under test makes it straightforward to validate the logic and gauge the threat pattern’s effectiveness. Furthermore, the same samples can be re-injected into the platform after any change to ensure the output is still consistent with the requirements.
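
One way to automate part of this – sketched below with an invented sample catalogue and an assumed run_pattern hook into the monitoring platform – is to map every historical sample to the component it exercises, replay the whole set after each change, and report which components were covered and which verdicts diverged:

    # Minimal sketch: replaying catalogued historical samples and tracking component coverage.
    # SAMPLES and run_pattern are assumptions for illustration, not a real platform API.
    SAMPLES = [
        {"component": "failed_login_burst",   "input": "samples/burst_01.json", "expected": True},
        {"component": "failed_login_burst",   "input": "samples/slow_01.json",  "expected": False},
        {"component": "privilege_escalation", "input": "samples/sudo_01.json",  "expected": True},
    ]

    def replay(run_pattern, samples=SAMPLES):
        """Re-inject each sample and compare the pattern's verdict with the expected output."""
        covered, mismatches = set(), []
        for sample in samples:
            verdict = run_pattern(sample["component"], sample["input"])
            covered.add(sample["component"])           # horizontal coverage: components exercised
            if verdict != sample["expected"]:
                mismatches.append(sample)
        print(f"Components covered: {sorted(covered)}")
        print(f"Mismatches: {len(mismatches)} of {len(samples)} samples")
        return mismatches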

Conversely, black-box testing examines the functionality of the threat pattern without dissecting its logic. At this stage, it is recommended to start from the attack scenarios originally elaborated during the analysis phase and determine the possible means an attacker would leverage to exploit the depicted situations. Many organizations find it beneficial to establish or engage a red team (playing the role of the attacker), with a blue team defending the kingdom by ensuring the implemented threat patterns behave as expected. At the end of the exercise, the two teams meet to review the results and list all the situations not detected by the current implementation, along with the threat patterns requiring further review.

This leaves us with the question of how to measure a pattern’s effectiveness based on these test results. The simplest way is to count false positives and false negatives. These two parameters alone may be sufficient to understand how effective the threat pattern implementation is – indicated by the false negative rate – and how much “noise” it can generate – indicated by the false positive rate.
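
For a concrete, simplified illustration (the counts below are made up), both rates fall out of a basic confusion-matrix tally over a labelled test set:

    # Minimal sketch: false positive and false negative rates from labelled test results.
    def effectiveness(results):
        """`results` is a list of (alert_fired, attack_present) booleans."""
        fp = sum(1 for fired, attack in results if fired and not attack)
        fn = sum(1 for fired, attack in results if not fired and attack)
        negatives = sum(1 for _, attack in results if not attack)
        positives = sum(1 for _, attack in results if attack)
        return {
            "false_positive_rate": fp / negatives if negatives else 0.0,  # the "noise"
            "false_negative_rate": fn / positives if positives else 0.0,  # the missed attacks
        }

    # Example: 100 benign samples with 8 spurious alerts, 20 attacks with 2 misses.
    results = [(True, False)] * 8 + [(False, False)] * 92 + [(True, True)] * 18 + [(False, True)] * 2
    print(effectiveness(results))  # {'false_positive_rate': 0.08, 'false_negative_rate': 0.1}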

Here’s a very important lesson every SOC should have learned by now: the most dangerous enemy may not be an obscure attacker; rather, it may be the volume of false positives overwhelming analysts and preventing them from responding to the security issues that really matter! It is imperative to reduce the noise-to-signal ratio before it becomes an insurmountable obstacle on our journey to protect the organization.

At the end of the process, a root cause analysis must be conducted on every issue identified, but beware of simply applying changes to the implementation – it would be a pity to get lost at the very end! Always start from the requirements and the results of the analysis before touching the code: did we miss the attack scenario entirely? Is there any data source we did not consider? Was it a mistake in the implementation? Approaching this stage with a simple checklist also helps to ensure a potential change here does not have a negative impact on other threat patterns.

It is important to be conscious that the work performed thus far may become obsolete in a very short time: new TTPs (tactics, techniques, and procedures) and attack scenarios may arise, and the organization’s business approach may evolve. To address this, in the last article of this series we will present the benefits of this structured and focused approach when maintaining and evolving a threat pattern in production.


Driving Business Resiliency Through Operational Risk Management

I recently had the pleasure of presenting with a panel of RSA Archer customers on the topic of “Building Resiliency Across the Value Chain” for a Disaster Recovery Journal webinar.

Two key questions were posed to the 80 attendees. The first question was: “Where is your organization on the business resilience scale?”  The responses were:

  • Recovery only (5%)
  • Mainly recovery with some focus on resiliency (53%)
  • Mainly resiliency with some focus on recovery (18%)
  • Very resiliency-oriented (18%)
  • Other (5%)

The second question was: “How closely do your business continuity/IT disaster recovery/crisis management teams work with or integrate with operational risk teams?” The responses were:

  • Not at all (2%)
  • Sporadic discussions when required (32%)
  • We are working with ORM more and more (28%)
  • BC/DR/CM is well aligned with or a part of ORM (32%)
  • Other (6%)

90% of respondents indicated they are addressing resiliency at some level, and 92% have BC/DR/CM teams working with operational risk management (ORM) teams to some degree. The alignment of responses to these two questions is no coincidence. There is a direct correlation between business resiliency and effective risk management, and more and more organizations are benefiting from it as they continue to mature their operational risk management and business continuity or resiliency programs.

What does GRC maturity look like? The RSA Archer maturity model defines three stages for GRC maturity:

Diagram 1 – RSA Archer Maturity Model

As organizations mature their operational risk management programs, their business resiliency capabilities grow as well, often due to three factors:  

  1. Methodologies – deploying risk assessment and treatment approaches (e.g., ISO 31000) and common business impact analyses (BIA) consistently across the organization
  2. Priorities – consistently applying common methodologies drives more aligned priorities and higher consensus 
  3. Actions – clear priorities drive better understanding, prioritization, and execution

These three factors foster proactivity, consistency, and alignment in both the risk management and resiliency practices and culture of the organization.

Risk management is, by its very nature, a proactive practice, as is business resiliency. The two go hand in hand.

For comments, contact me at Patrick.potter@rsa.com or @pnpotter1017.



The GDPR and your data protection obligations

Attention is increasingly focused on the European Union’s forthcoming “General Data Protection Regulation,” or GDPR. As its May 25, 2018 implementation date draws nearer, organizations are starting to understand the magnitude of change this major regulation will drive.

It is not only EU-based organizations that are subject to the GDPR’s requirements. If your company stores or handles any personally identifiable information about EU residents – things as simple as names and email addresses – then you are obligated to be in compliance, and risk penalties if you’re not.

And those penalties for noncompliance? Let’s just say you wouldn’t want to be one of the organizations feeling the pain of being judged in violation. The GDPR authorizes fines of up to €20 million or 4% of a company’s total worldwide annual turnover, whichever is greater. Those are business-impacting numbers, not to mention the reputational damage suffered if you break this highly visible new law.
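
To make the “whichever is greater” rule concrete, here is a back-of-the-envelope calculation; the revenue figure is invented for the example:

    # Minimal sketch: the upper bound of a GDPR fine under the €20M / 4% rule.
    def max_fine_eur(worldwide_annual_turnover_eur, flat_cap=20_000_000, pct=0.04):
        return max(flat_cap, pct * worldwide_annual_turnover_eur)

    # A company with €2 billion in worldwide annual turnover:
    print(f"€{max_fine_eur(2_000_000_000):,.0f}")  # €80,000,000 – the 4% tier dominates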

You will definitely want to be in compliance, but that will be neither simple nor cheap. The driving principle behind the GDPR is that any data that specifically relates to a person belongs to that person – not to the organization creating, holding, or processing it. So, in effect, you become the custodian of every user’s data, with all of the responsibilities you’d expect of someone holding something valuable of yours.

For organizations, this means gaining explicit permission to hold someone’s personal information; limiting its use to the context in which that permission was granted; letting the data owner review it, correct it, or even export and delete it, any time they want; and making sure it’s kept safe and protected from misuse – by your employees or by third parties.
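
In data-model terms, one hedged way to picture these obligations is a per-subject record that ties stored attributes to the consent under which they were collected and supports the review, correct, export, and delete operations. All names below are illustrative, not a prescribed schema:

    # Minimal sketch of a personal-data record keyed to consent; purely illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime
    import json

    @dataclass
    class PersonalDataRecord:
        subject_id: str
        purpose: str                    # the context in which consent was granted
        consent_given_at: datetime
        attributes: dict = field(default_factory=dict)   # e.g. name, email

        def review(self):
            return dict(self.attributes)

        def correct(self, key, value):
            self.attributes[key] = value

        def export(self):
            return json.dumps({"subject": self.subject_id, "data": self.attributes}, default=str)

        def delete(self):
            self.attributes.clear()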

In practical terms, the GDPR requires a complete rethinking of your data handling processes. This review means locating every place personal data is collected and stored, and documenting the processes that touch it. You will then need to ensure that all future business processes comply with the GDPR’s requirement for privacy by design.
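
One practical artefact of that review is a simple inventory of where personal data lives and which business process touches it. A bare-bones sketch, with illustrative fields only, might look like this:

    # Minimal sketch: entries in a personal-data inventory produced by the review.
    from dataclasses import dataclass

    @dataclass
    class DataInventoryEntry:
        system: str             # e.g. "CRM database"
        data_categories: list   # e.g. ["name", "email"]
        business_process: str   # the process that collects or uses the data
        lawful_basis: str       # e.g. "consent", "contract"
        retention_days: int

    inventory = [
        DataInventoryEntry("CRM database", ["name", "email"], "marketing newsletter", "consent", 730),
        DataInventoryEntry("support ticketing", ["name", "email", "ip_address"], "customer support", "contract", 365),
    ]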

As you can see, these activities will touch virtually every part of the organization, consuming a lot of attention and resources as the May 2018 deadline approaches. There is another, equally critical, component of the GDPR that must also be addressed: the data protection requirement.

It’s notable that data protection – not data privacy – is what the “DP” in GDPR stands for. This is because, no matter how well implemented your processes are for handling personal information, if it’s lost in a breach, nothing else matters. The writers of the GDPR understand this, and framed the requirements accordingly.

The biggest change, in terms of data protection, is a new data breach disclosure requirement. Most companies will be required to appoint a Data Protection Officer (DPO), whose role will be not only to oversee the implementation of data handling processes but also to interface with the EU regulatory regime. One new requirement: in the case of a data breach, the DPO must formally report it to the supervisory authority within 72 hours of discovery, or have a very good explanation why not.
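
Operationally, the clock starts at discovery; the deadline arithmetic itself is trivial, as this small sketch shows:

    # Minimal sketch: the 72-hour notification window, counted from breach discovery.
    from datetime import datetime, timedelta, timezone

    def notification_deadline(discovered_at: datetime) -> datetime:
        return discovered_at + timedelta(hours=72)

    discovered = datetime(2018, 6, 1, 9, 30, tzinfo=timezone.utc)
    print(notification_deadline(discovered))  # 2018-06-04 09:30:00+00:00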

The harshest penalties are reserved for repeat violations, or in instances where there’s inadequate or insufficient protection of user data. Maximum penalties for first offenders are typically half of the GDPR maximum (up to €10 million vs €20 million, or 2% vs. 4% of revenue), but the EU is being clear that failure to use “appropriate technical and organisational measures” will bring the hammer down.

In light of these requirements, and their attendant risks, organizations should use this time to review their strategy and tools for threat detection and response. While it’s always good business to protect against the increasing sophistication and impact of the evolving threat landscape, the GDPR changes the risk equation significantly.

RSA can help you meet your data protection responsibilities under the GDPR. RSA NetWitness® Suite is a set of state-of-the-art threat detection and response tools, while our RSA® Incident Response and RSA® Advanced Cyber Defense Practices deliver world-class planning and implementation services. As you prepare for the GDPR, a world-class data protection process is the foundation.
