Translating Security Leadership into Board Value

CISOs find themselves increasingly engaged directly with their boards and executives because those leaders see the volume and impact of security incidents rising.  In fact, Oxford Economics just reported that serious breaches permanently shave nearly 2% off public company value.  This is on top of the substantial expense ($4 million per breach on average) and turmoil organizations experience when incidents do occur.  Executives and boards are left wondering whether management understands the risk – where it resides, how large it is, and whether it is being adequately addressed.

 

When CISOs get the call from their boards and executives, they are often unable to answer these questions or to converse in the terms the board wants.  CISOs are extraordinarily adept at understanding the security risk that arises from the technologies their organization employs, but translating that technical understanding into business terms can be difficult.  Recasting technical risk as business risk is a paradigm shift for most organizations.  Effective information security programs are becoming “Business Driven Security” programs.

 

RSA just released a Security for Business Innovation Council report on this problem, “What Boards Want to Know and CISOs Need to Say,” which discusses translating security leadership into board value.  A Business Driven Security strategy is core to translating CISO technical expertise into board terminology, but it also helps CISOs see where to implement the technical and organizational measures that protect the organization’s most important information.  That understanding can be conveyed to the board and executive team before millions of dollars are spent on security initiatives and staffing.  It provides the what, where, how, and why of the spend – that it is being spent properly, on the biggest risks, and that procedures exist to monitor whether it has been effective.  In my next blog I will describe how you can use RSA Archer to drive a Business Driven Security strategy.


Six Keys to Successful Identity Assurance – Broader Ecosystem

Earlier in this blog series, we discussed anomaly detection and machine learning, focusing primarily on examples using information you could expect to be available from the system that provides your identity assurance. It’s likely, however, that much more data in your current IT ecosystem can be leveraged for system access decisions. Your threat detection system, Cloud Access Security Brokers (CASB), enterprise mobility management tool, and physical security system all hold data that can feed your identity assurance system and shape your strategy.

When building out your identity assurance strategy, be sure to look at the active system management and security tools and ask yourself a simple question: What information from these existing tools do I wish our identity system knew?

How would this work? Let’s review a few common examples.

Threat Detection (including SIEM, CASB)
Threat detection and identity assurance systems should be a two-way street. Identity assurance systems have log information that should feed threat detection. Failed and successful logins and locked out accounts should be part of the data analyzed to identify potential threats. This gets even more compelling if we reverse the data flow from threat detection to identity system. When the threat detection system receives an alert, the identity system should be able to respond. Let’s look at a couple of situations:

  1. Threat alert for a user. When an alert is raised for a specific user, identity assurance should adjust to either require multi-factor authentication for all access for the user or, if the threat is significant enough, block access for the user until this alert is cleared.
  2. Threat alert for a resource. For alerts on a resource, anyone attempting access to that resource should be required to provide a higher level of assurance to gain access. Again, in extreme cases, possibly all access to a resource should be blocked.

The key to both of these is that the correct alerts are triggering the appropriate additional access security in real time. In addition to real-time threats, many of these tools also have additional risk analytics. When this risk analytics data is available, it should work with the risk analytics of the identity assurance system to raise or lower confidence in a user’s identity, impacting the authentication requirements.
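As an illustrative sketch only (the alert levels, return values, and thresholds here are hypothetical, not drawn from any specific product API), the two alert-driven situations above might reduce to a simple policy function:

```python
# Hypothetical sketch: mapping threat alerts to an access decision.
# Levels: 0 = no alert, 1 = moderate alert, 2 = severe alert.

def access_decision(user_alert_level, resource_alert_level):
    """Return the authentication requirement for one access attempt."""
    level = max(user_alert_level, resource_alert_level)
    if level >= 2:
        return "BLOCK"          # severe threat: deny until the alert clears
    if level == 1:
        return "MFA_REQUIRED"   # moderate threat: step up to multi-factor
    return "ALLOW"              # no active alert: normal policy applies

# A moderate alert on either the user or the resource forces MFA;
# a severe alert on either blocks access outright.
print(access_decision(0, 1))  # MFA_REQUIRED
print(access_decision(2, 0))  # BLOCK
```

The point of the sketch is that the decision is driven in real time by whichever alert is most severe, whether it is attached to the user or to the resource.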

Enterprise Mobility Management (EMM)
Another widely deployed tool in the enterprise is EMM. Devices managed through a corporate EMM have a greater amount of device data available, which can be used to create stronger identity assurance. This device information can provide additional context both for static rules, such as policies allowing easier access for users with corporate-managed devices, and for additional analytics insights fed to the identity risk engine.
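A minimal sketch of such a static rule, assuming hypothetical device attributes (the field names and assurance tiers below are illustrative, not an EMM API):

```python
# Hypothetical sketch: feeding EMM device status into a static access rule.

def required_assurance(device):
    """Pick an assurance tier from illustrative EMM device attributes."""
    # Corporate-managed, policy-compliant devices get lower-friction access;
    # unknown or non-compliant devices must clear a higher bar.
    if device.get("emm_managed") and device.get("compliant"):
        return "password"
    return "password+mfa"

print(required_assurance({"emm_managed": True, "compliant": True}))  # password
print(required_assurance({"emm_managed": False}))                    # password+mfa
```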

Physical Security Systems
Building access can also help us gain confidence that a user is who he or she claims to be. By integrating data from these physical access systems into the identity risk engine, we can extend our insight. If a user just badged into an office in San Francisco, and soon after is attempting a login from London, that’s something that should impact our risk assessment. Conversely, if the user just entered their normal building and, following their normal pattern, accessed their Office 365 account moments later, we can consider this behavior in determining if additional authentication is required.
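The San Francisco/London scenario can be sketched as an “impossible travel” check. The coordinates and the speed threshold below are illustrative assumptions:

```python
# Hypothetical sketch of an "impossible travel" check between a badge
# event and a subsequent login attempt.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km via the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(badge_loc, login_loc, hours_elapsed, max_kmh=900):
    """Flag the login if the implied speed exceeds a plausible airliner."""
    if hours_elapsed <= 0:
        return True
    return distance_km(*badge_loc, *login_loc) / hours_elapsed > max_kmh

# Badge into a San Francisco office, then a login attempt from London:
sf, london = (37.77, -122.42), (51.51, -0.13)
print(is_impossible_travel(sf, london, 2.0))   # True -- raise the risk score
print(is_impossible_travel(sf, london, 12.0))  # False -- plausible travel
```

In a real deployment this signal would raise or lower the identity risk engine’s confidence rather than act as a hard block on its own.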

Putting It All Together
These are not the only systems where additional data about anomalies, normal behavior or other information can help feed behavioral recognition and static policy rules. You should be looking throughout your organization for systems that contain data that may help to improve access security and reduce end user friction. What is important in your identity assurance strategy is not only that you have systems containing these types of access-relevant data, but that there is real-time response within your identity system.

We have spent a lot of time talking about gathering the data required to identify the correct authentication requirement for each situation. In the final blogs of this series we will cover the two remaining key components of a successful identity assurance strategy – and they are all about the end user. Until then, take just 15 minutes to learn how RSA SecurID® Access is enabling access to the modern enterprise with identity assurance in this on-demand webinar.

The post Six Keys to Successful Identity Assurance – Broader Ecosystem appeared first on Speaking of Security – The RSA Blog.



Transforming Security for our Next Generation Systems

Cyberattacks on IT assets are reaching new highs. Just when you think you are caught up, another unforeseen attack vector has opened.  Look at just about any security architecture – it has been implemented slowly over time in piecemeal fashion, leaving a mix of old and new technologies and overlapping parts and pieces, with few of those pieces talking to each other.

CISOs looking across their security architectures can only hope their solutions will withstand the onslaught of a cyber-attack.  A scary proposition indeed.  Recently, a CISO for a large financial services organization shared that his security architecture is composed of over 190 products. Despite this, he still feels vulnerable.

How do you manage 190 security products?  Can you imagine the overlaps and the potential for gaps?

The US National Institute of Standards and Technology published its Cybersecurity Framework, also called the CSF, and has just released a draft of the next version.

The Cybersecurity Framework was developed to “enable organizations to apply the principles and best practices of risk management to improve the security and resilience of critical infrastructure.” The original CSF was released in 2014 and consolidates security research, international standards, and best practices into a comprehensive protection guide.

At a high-level, the framework has five functions, and under these functions are categories and sub-categories.


I’ll leave it to you to read the full details on the NIST Cybersecurity Framework site, but essentially the five functions boil down to:

Identify—understand your assets, risks, governance, and create a cyber-risk management strategy

Protect—protect the assets, train your people, and keep up on maintenance

Detect—create the processes and implement the technologies to detect cyber mischief, and monitor for anomalies

Respond—develop a cyber-incident response plan and continue to improve

Recover—develop a recovery plan, test it, and continue to improve
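As a toy illustration only (the checklist items are paraphrased from the summaries above, not the official CSF category identifiers), the five functions could anchor a simple gap self-assessment:

```python
# Illustrative sketch: the five CSF functions as a self-assessment checklist.
# Items are paraphrased from this post, not official CSF subcategories.

CSF_FUNCTIONS = {
    "Identify": ["asset inventory", "risk assessment", "governance", "risk strategy"],
    "Protect":  ["access control", "awareness training", "maintenance"],
    "Detect":   ["anomaly monitoring", "detection processes"],
    "Respond":  ["response planning", "continuous improvement"],
    "Recover":  ["recovery planning", "plan testing"],
}

def coverage_gaps(implemented):
    """Return, per function, the checklist items not yet implemented."""
    return {fn: [c for c in cats if c not in implemented]
            for fn, cats in CSF_FUNCTIONS.items()
            if any(c not in implemented for c in cats)}

# Example: an organization with only an asset inventory and training in place.
gaps = coverage_gaps({"asset inventory", "awareness training"})
print(gaps["Detect"])  # ['anomaly monitoring', 'detection processes']
```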

The CSF is a great way to organize approaches to cybersecurity – although at the lowest levels of the framework, the reference standards and implementation tiers are enormously complex.  One of the collaborators on the Cybersecurity Framework explained at the February 2017 RSA Conference that the latest CSF points to over 120 security controls in these areas.  Yikes!!! There’s got to be a better way.

We have made 2017 the year of Security Transformation. Now is the time to prioritize your organization’s cybersecurity practices and evolve to combat new threats. We are combining the expertise of RSA, SecureWorks, VMware, and Dell EMC to produce adaptive security products and services to help you lead your security transformation.

We’ll be starting the conversation over the next few months and making major announcements related to Security Transformation, beginning at Dell EMC World, May 8–11 in Las Vegas.  I’ll be leading a session titled “Learn How to Put Security at the Very Core of Your Organization with Secure Infrastructure”.

It’s an exciting time for us at Dell Technologies and we look forward to the year of Security Transformation. I invite you to join us during this exciting time and look forward to seeing and talking with you at Dell EMC World.

The post Transforming Security for our Next Generation Systems appeared first on InFocus Blog | Dell EMC Services.



Navigate Through the RSA Archer Documentation Set

Looking for information on Archer and how to get the most out of it?  The Archer Information Design and Development team (formerly known as Technical Publications) has your back.  I’m Elizabeth Wenzel, and I have the pleasure of managing a talented team of content developers that are working hard to deliver the information that you need to get the most value out of your Archer investment.

 

We are currently working not only to strengthen and deepen the coverage of the existing documentation but also to add new content and manuals to help you on your business-driven security journey with RSA Archer. Of course, having a lot of material to use is both a blessing and a curse – you know that the information is ‘somewhere’, but where?  This is where the new RSA Archer Navigator 2.0 comes in.

 

Use the Navigator on RSA Link to filter Archer assets by your role and expertise with Archer, the area you are focused on (Platform, Use Cases, and so forth), and the product version. Navigator shows you the assets that meet your filter criteria, letting you jump right in and get the right information to complete your task.

 

All of the documentation content, other than technical content (installation, sizing, and Archer Control Panel), is included in the online Documentation system built into the Archer product. We’ve also anticipated that you may need that information in a printable format, so all of the online content is available as PDF guides as well – the same content in two formats: What’s New Guide, Platform Administrator’s Guide, User Guide, RESTful API Guide, and Use Case Guides.  Combine these with technical documents such as the Installation and Configuration Guide, and it adds up to a lot of content at your disposal! I recommend you use the Navigator to home in on just what you are looking for. If we’ve missed something, we’ve provided an easy way for you to tell us, right on the Navigator home page.

 

We hope you agree that Navigator 2.0 is a helpful tool; find your path to success with the RSA Archer Navigator 2.0.

 

Watch the RSA Navigator video to maximize your Navigator experience. 



RSA Charge – Call For Speakers!

So time flies… It seems like yesterday when the RSA customer community gathered in New Orleans to share experiences and learn new tactics and strategies. 2017 marked the 13th year of the RSA Archer user community summit and believe it or not, year number 14 is just around the corner.   Last week, we announced the call for speakers for RSA Charge 2017 and I cannot wait to start seeing the speaker submissions flowing in.  

We have put together a stellar team to construct the learning tracks to optimize your experience. As content chairperson for the RSA Archer portion of RSA Charge, I have the privilege of seeing this process unfold. While this will be my 9th user group conference with RSA and Archer, it is still inspiring to hear you tell the stories of your successes – how you overcame challenges or leveraged an innovative approach to deliver strategic value to your organization.

If you are contemplating submitting a session, know that it is a very rewarding experience. Presenting to your peers can be a bit unnerving, but the satisfaction and return are well worth it. To teach others is to learn about oneself. Thinking through your experiences, applying your newfound knowledge, and acknowledging your successes and lessons learned is as much of a benefit as imparting your wisdom to others.

A few topics come to mind as food for thought if you are looking for ideas:

  • We always welcome stories about how your long term strategies unfolded in your companies. Our Take Command of Your Risk Management Journey track is dedicated to hearing how you built your plans, gathered forces and conquered the difficult path that risk and compliance efforts can sometimes take.  
  • As the market moves toward concepts of Integrated Risk Management, the Inspire Everyone to Own Risk track needs content focused on engaging all lines of defense to manage risk. How your company is blending different risk initiatives – Operational Risk, Resiliency, 3rd Party Risk and Audit – is a topic of keen interest.
  • We can’t forget the Compliance world either. Many of your GRC and risk management efforts were borne out of compliance drivers and our Transforming Compliance track is THE place to tell your tale. One topic that keeps coming up is the impending General Data Protection Regulation (GDPR). Any story of how your organization was better prepared for GDPR or any new regulation based on the RSA Archer implementation is a great learning topic for all participants.
  • And what RSA user group conference is not complete without stories of how IT & security risk is being managed. RSA Archer has a great legacy when it comes to helping IT & security teams manage risk processes. Vulnerability and threat management, security incident processes, IT compliance and general IT risk strategies are top of mind subjects for every organization today and perfect for the Managing Technology Risk in Your Business track.
  • Last but certainly not least are the RSA Archer Technical Tracks. This is where the innovation, creativity and expert chops of RSA Archer administrators come to the forefront.   The topics in these tracks range from inventive workflows to state-of-the-art API integrations and more.

I invite all of you to take a look across your implementation of RSA Archer and pull out those nuggets to share with your peers. RSA Charge is the perfect venue to help others navigate their own challenges. Hope to see and hear you in Dallas!

Check out our webinar on preparing to submit your proposal.



Is the cyberworld doomed to be unsafe forever?

Before seeking an answer, let’s question the question.

I recently returned to the cybersecurity industry and (re)joined the good fight to secure the cyberworld. As the digital era unfolds, it feels good to be part of this mission-driven industry to help create a safe digital future. While a lot has changed, and there have been great advances in technology, does the cyberworld feel any safer today than before?

We are in the fight of our digital lives and the mission is certainly worthwhile.

But is the mission impossible?

The latest edition of The Economist makes a somber case that the cyberworld will forever be “hackable” and that cybersecurity is broken. The premise of this article is:


More software in more things that are more connected + More software written by non-software companies + “Ship code as fast as you can and fix it as late as you can get away with” mentality + Zero economic liability for shipping insecure code = A cyber-world that is doomed to be unsafe forever


That damning verdict can spur a few different reactions.

You could look helplessly as the very users you are trying to protect casually click socially-engineered emails as a spear pierces the shield you tried to put up. You could bemoan the fact that you are totally outnumbered by the bad guys, hanging your head in utter despair.

But that is failure-mode thinking! It’s not about numbers; it’s about strategy.

You could get angry and come out with your technology guns blazing. You could picture yourself crushing the bad guys with clever machine learning, artificial intelligence and data science.

But that is wishful thinking! The bad guys have all the same technology you do.

Or you could take a Zen approach. Before seeking answers, you could question the question. Is our mission to create an “un-hackable” cyberworld, or is it to create a safer world? You would ponder the idea that the world will forever be hackable, but our mission is not to eradicate hacking – it’s to minimize the impact of it, thereby creating a safer world.

Now that we have framed the question properly, let’s seek some answers.

Let’s begrudgingly admit: an “un-hackable world” may be mission impossible. A safer world, though, is not just possible, but quite plausible (inevitable even) if you take the right approach.

I look forward to discussing just such an approach we have developed here at RSA, the first ever pure-play cybersecurity company (yes, we have been at it for 40 years) now part of Dell Technologies – the largest privately controlled technology company. Here is a teaser to the approach: when the amount of work to be done appears overwhelming, you should factor in the business context and prioritize ruthlessly. It’s about applying business context to cybersecurity to protect what matters most and taking command of all risk. We call this Business-Driven Security™ and we launched it at RSA Conference 2017. I will dig into this more in the coming weeks and months.

 

The post Is the cyberworld doomed to be unsafe forever? appeared first on Speaking of Security – The RSA Blog.



Peter Principle: The Destroyer of Great Ideas…and Companies

Wikibon just released their “2017 Big Data Market Forecast.” How rosy that forecast looks depends upon whether you look at Big Data as yet another technology exercise, or if you look at Big Data as a business discipline that organizations can unleash upon competitors and new market opportunities. To quote the research:

“The big data market is rapidly evolving. As we predicted, the focus on infrastructure is giving way to a focus on use cases, applications, and creating sustainable business value with big data capabilities.”

Leading organizations are in the process of transitioning the big data conversation from “what technologies and architectures do we need?” to “how effective is our organization at leveraging data and analytics to power our business models?”

We developed the Big Data Business Model Maturity Index to help our clients answer that question: to 1) understand where they sit today with respect to how effectively they leverage data and analytics to power their business models, and 2) lay out the roadmap for creating sustainable business value with big data capabilities (see Figure 1).

Figure 1: Big Data Business Model Maturity Index

So why do organizations struggle if it’s not a technology or an architecture challenge? Why do they struggle when the path is so clear and the business and financial benefits so compelling?

I believe that organizations fail in creating sustainable business value with big data capabilities because of the Peter Principle.

“Peter Principle”: The Destroyer of Great Ideas

The Peter Principle is a management theory formulated by Laurence J. Peter in 1969. It states that the selection of a candidate for a position is based on the candidate’s performance in their current role, rather than on abilities relevant to the intended role. Thus, employees only stop being promoted once they can no longer perform effectively – that “managers rise to the level of their incompetence.”[1]

There are two key points in this concept that are hindering the widespread adoption of data and analytics to power – or transform – an organization’s business models:

  • “Selection of a candidate for a position is based on the candidate’s performance in their current role, rather than on abilities relevant to the intended role.” Never before have we had an opportunity to create and leverage superior customer, product, operational and market insights to disrupt business models and disintermediate customer relationships…never. Consequently, current business leadership lacks the experience to know what to do to make this happen. Organizations likely need a new generation of management (which we are seeing in the “born digital” companies like Amazon, Google, Uber and Netflix) or a massive un-education/re-education of their current business leadership (like what we are seeing at GE…more to follow on the GE transformation, so keep reading!!) to realize that analytics is a business discipline to drive differentiation and monetization opportunities.
  • “Managers rise to the level of their incompetence” which means that those in power are very reluctant to embrace any new approaches with which they are not already familiar. And we have all met these folks who can’t embrace a new way of thinking because they are so personally or professionally invested in the old way of thinking. Consequently, new ideas and concepts die before they are even given a chance because these folks are threatened by any thinking that did not get them to where they are today.

How do you teach the existing generation of management to “think differently” about how to leverage data and analytics to power their business models? How does one get an organization to open their minds and stop focusing on just “paving the cow path,” but instead focus on data and analytics-driven innovation? Let’s try a little exercise, my guinea pigs!!

Decision Modeling: Predictions Exercise

The Challenge: Can we transform business thinking by changing the verb from “automate” to “predict?” Instead of focusing on automating what we already know, in its place let’s try focusing on “predicting” what is likely to happen and “prescribing” what actions we should take.

“Automate” assumes that the current process is the best process when, in fact, there may be opportunities to leverage new sources of data and new data science techniques to change, re-engineer, or even delete the process. Can we drive a more innovative approach by focusing not on “automation” but on what predictions (in support of key business decisions) we are trying to make and what actions we should prescribe?

Let’s demonstrate the process using the Chipotle key business initiative of “Increase Same Store Sales.” (Note: this decision modeling exercise expands upon Step 8 in the “Thinking Like A Data Scientist” methodology).

  • First, list the use cases. In Table 1, we will start with just one use case: “Increase Store Traffic Via Local Events Marketing.”
  • Second, list the decisions that one would need to address to support the use case. For example, we would need to make a decision about “Which local events to support and with how much funding?”
  • Next, for each decision, brainstorm the predictions that one would need to make to enable the decision. It’s useful to start the predictions statement with the word “Predict.” For example, in support of the “Which local events to support” decision, we would need to “Predict attendance at the local events”.
  • Then, list the potential analytic scores that could be used to support the predictions that we are trying to make. The potential scores were identified in Step 7 in the “Thinking Like A Data Scientist” methodology, but this decision modeling exercise gives us a chance to validate and expand upon those potential analytic scores.
  • Finally, brainstorm the potential variables and metrics that might be better predictors of performance. Step 6 in the “Thinking Like A Data Scientist” methodology identified many of those variables and metrics, but again this decision modeling exercise gives us a chance to validate and expand the potential variables and metrics.

Table 1 shows the results of this process for one use case (Increase Store Traffic Via Local Events Marketing) that supports the “Increase Same Store Sales” business initiative.

Chipotle Business Initiative: Increase Same Store Sales

Use Case: Increase Store Traffic Via Local Events Marketing

Decisions → Predictions

Which local events to support and with how much funding?

  • Predict attendance at local events (sporting events, concerts)
  • Predict composition of attendance at local events (parents, kids, teenagers)

How much staff do we need to support the local events?

  • Predict how many workers are required by hour to staff the store
  • Predict what special skills are needed by hour to staff the store
  • Predict how much overtime might be required

How much additional inventory do we need?

  • Predict how much additional food inventory is required to support the local event
  • Predict how many additional utensils and bowls are required to support local events
  • Predict store waste/shrinkage
  • Predict when we need to replenish store inventory and with what

From what suppliers do we source additional food inventory?

  • Predict suppliers’ excess capacity by food item
  • Predict time-to-delivery for food inventory replenishment
  • Predict (prioritize) which suppliers to engage for additional food procurement
  • Predict quality scores of the new suppliers

Scores/Metrics

Economic Potential Score

  • Local demographics
  • Increase in home values
  • Local economic indicators
  • Local unemployment rate
  • Change in city budget
  • Average income levels
  • Average education levels
  • Number of local IPOs

Local Vitality Score

  • Miles from high school
  • Miles from mall
  • Average mall attendance
  • Miles from business park
  • Number of college students
  • Number of local sporting events
  • Number of local entertainment events

Local Sourcing Potential

  • Number of local suppliers
  • Miles from stores
  • Supplier production capacity
  • Supplier quality
  • Supplier reliability
  • Delivery feasibility

Table 1: Predictions Exercise Worksheet

In the workshop or classroom, we would repeat this process for each use case (e.g., improve promotional effectiveness, improve market basket revenues). This analytics-driven approach can bring more innovative and out-of-the box thinking to the organization.
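One way to make the worksheet concrete in a workshop setting is to capture it as a data structure that can be walked programmatically. The structure below is an illustrative assumption, not part of the methodology itself; the names are taken from the Chipotle example:

```python
# Hypothetical sketch: the predictions-exercise worksheet as a data
# structure (use case -> decisions -> predictions, plus candidate scores).

decision_model = {
    "business_initiative": "Increase Same Store Sales",
    "use_cases": [
        {
            "name": "Increase Store Traffic Via Local Events Marketing",
            "decisions": [
                {
                    "decision": "Which local events to support and with how much funding?",
                    "predictions": [
                        "Predict attendance at local events",
                        "Predict composition of attendance at local events",
                    ],
                },
            ],
            "scores": {
                "Economic Potential Score": ["Local demographics", "Local unemployment rate"],
                "Local Vitality Score": ["Miles from high school", "Average mall attendance"],
            },
        },
    ],
}

# Walking the model surfaces every prediction the data science team must build.
for uc in decision_model["use_cases"]:
    for d in uc["decisions"]:
        for p in d["predictions"]:
            print(f'{uc["name"]}: {p}')
```

Keeping the worksheet in this form lets each classroom group extend it with new use cases and then enumerate the full prediction backlog automatically.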

Summary: The GE Story

A recent article titled “You Can’t Outsource Digital Transformation” discusses what GE is doing to prepare for–if not lead–digital business transformation disruption. To quote the article:

“It’s the threat of a digital competitor who skates past all the traditional barriers to entry: the largest taxi service in the world that owns no cars; or a lodging service without any real estate; or a razor blade purveyor without any manufacturing.”

The author, Aaron Darcy, describes what GE is doing to “think differently” – that is to unlearn and relearn – regarding digital business model disruption. This includes:

  • Transforming their operating model with the creation of GE Digital to help lead their digital business transformation.
  • Creating a partner open software ecosystem that enables collaboration with partners and third-party developers to deliver business and financial value for all participants (Customer, Partner and GE).
  • Transforming (un-education and re-education) management leadership with lean startup principles that emphasize iterative innovation, space to experiment, and a fail-fast mentality.
  • Exploring new or alternative business models by focusing on delivering outcomes and creating sustainable business value with big data capabilities.

Nothing threatens the existence of your business like the Peter Principle. An organization’s unwillingness to un-educate and re-educate will ultimately be its undoing, because, as IDC believes, “By 2018, 33% of all industry leaders will be disrupted by digitally enabled competitors.” Ouch.


[1] https://en.wikipedia.org/wiki/Peter_principle

The post Peter Principle: The Destroyer of Great Ideas…and Companies appeared first on InFocus Blog | Dell EMC Services.



Uncovering the Hidden Connections: Why Data Center Blueprinting

“What happened to the mobile phone charging cord?”

Every few weeks, I find myself scurrying around the house, trying to find the charging cable for my mobile phone. Actually, it’s the cable to my wife’s phone, which she usually keeps plugged in at a handy location in our kitchen. And when I find my phone’s power running low, I like to plug into that handy cable. Sometimes, my wife takes the cable to charge her phone in the car. Of course, that’s usually when I realize my phone’s battery is perilously low.  Then I have to run to the other room to dig out my own cable from my backpack.

Well, you may wonder what this has to do with data centers, or you may wonder why I don’t just break down and buy another charging cable or two. The point I’m trying to make with this trivial little example is that when you share infrastructure across multiple application owners, sometimes in the heat of the immediate need you do things that unwittingly complicate life for others.

And in today’s enterprise data centers, with hundreds of applications, some of which have very complex configurations, and thousands of servers, the number of such shared connections is quite large, and the potential for trouble, especially with rapidly-changing application and infrastructure modernization initiatives, is great.

Hey, I think we’re OK, we’ve got a CMDB!

Some of you seasoned data center pros may say, hold on a minute. Isn’t this why we’ve spent years instituting change control disciplines and configuration management databases (CMDBs)? True, and these processes and databases are a great help, but sometimes the rate of change to applications and their underlying infrastructure is much faster than the rate of change in these traditional toolsets.

OK, you say, our data center housekeeping is a bit behind the curve. But that doesn’t really impact the business so it’s no big deal. Well, that may be true in the short-term. However, our experience is that it does become a big deal when you’re embarking on initiatives like data center consolidation, or modernizing applications to cloud-native platforms such as Pivotal Cloud Foundry, or modernizing your data center infrastructure.

In such situations, migrating an application off only part of the server infrastructure it runs on can be risky. You can break the application, interrupting a critical business process and annoying your application stakeholders. Or you can make your application configurations even more complex, with an unwieldy, hard-to-manage mix of older and newer infrastructure.

Don’t worry folks; we’re trained professionals

So you really do need to baseline the interdependencies between your applications and your infrastructure before you start on such a strategic data center initiative. But you may say, this is a big effort, and things may have changed again by the time we finish. Fortunately, today’s automated toolsets can greatly reduce the time and effort it takes to develop such a data center blueprint that documents and inventories applications as well as the underlying server infrastructure and the connections and dependencies among all of them. And keeping track of things as your migration or modernization program continues can also be done in the same automated fashion, minimizing the risks of application breakage or business interruption.
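To make the idea concrete, the heart of such a blueprint is a two-way dependency map between applications and the servers they touch. The following Python sketch is purely illustrative (it is not the Dell EMC tooling, and the connection-harvesting source is assumed), but it shows how observed app-to-server connections can expose shared hosts that make a partial migration risky:

```python
from collections import defaultdict

def build_blueprint(connections):
    """Aggregate observed (app, server) pairs into a two-way dependency map.
    `connections` is any iterable of (app_name, server_name) tuples, e.g.
    harvested from netstat output or a discovery scan (illustrative only)."""
    app_to_servers = defaultdict(set)
    server_to_apps = defaultdict(set)
    for app, server in connections:
        app_to_servers[app].add(server)
        server_to_apps[server].add(app)
    return app_to_servers, server_to_apps

def migration_risk(app, app_to_servers, server_to_apps):
    """Servers this app shares with other apps: migrating `app` off only
    part of its footprint, or touching these shared hosts, risks breaking
    its neighbors."""
    return {s for s in app_to_servers[app] if len(server_to_apps[s]) > 1}
```

Running `migration_risk` for each application before a migration wave is essentially the automated version of the baselining exercise described above.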

This is why Dell EMC Services advises its clients to conduct such an automated data center blueprinting exercise as a matter of course for such initiatives as data center migration, hybrid cloud deployments, application modernization and data center infrastructure upgrades.

Want to know more? At next month’s Dell EMC World, Ted Streck from the Dell EMC Services consulting arm will be giving a breakout session, including a live demo of these blueprinting tools. The session is entitled “Data Center Blueprinting: The First Step to the Future is Knowing Exactly Where You Are Today.” Ted will give the presentation twice: Monday afternoon from 1:30-2:30 in Lido 3005 and again Wednesday morning from 8:30-9:30 in Marco Polo 703. If you can’t catch Ted at these times, he’ll be giving a shorter version of this presentation at the Dell EMC Services booth in the solutions showcase. Or let me know, and we can arrange a time for you to meet with Ted.

Hope to see you there!

The post Uncovering the Hidden Connections: Why Data Center Blueprinting appeared first on InFocus Blog | Dell EMC Services.


GET TO THE CHOPPAH



A new variant of this tool, previously reported in 2013 by TrendLabs, was submitted to VirusTotal from the Philippines on March 27th, 2017. Its original filename, 2017.exe, was prescient, since the tool can exploit CVE-2017-5638 as well as earlier Apache Struts vulnerabilities.
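For context, CVE-2017-5638 is triggered by smuggling an OGNL expression into the Content-Type header of an HTTP request, where a MIME type belongs. As a hedged illustration (this is not the tool’s code and not the NetWitness parser logic), a crude Python heuristic for that pattern might look like:

```python
import re

# Exploit payloads for CVE-2017-5638 place an OGNL expression such as
# "%{(#_='multipart/form-data')...}" in the Content-Type header.
# This marker is a rough heuristic, not a production signature.
OGNL_MARKER = re.compile(r"%\{.*#.*\}")

def looks_like_struts_ognl(content_type: str) -> bool:
    """Return True if a Content-Type value resembles an OGNL payload."""
    return bool(OGNL_MARKER.search(content_type))
```

A benign value like `multipart/form-data; boundary=x` passes, while an OGNL-laden header is flagged.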

File Details

File Name:  2017.exe
File Size:  107008 bytes
MD5:        3b405c30a7028e05742d0fbf0961e6b2
SHA1:       1d69338543544b31444d0173c08e706d57f148cb
PE Time:    0x58D24651 [Wed Mar 22 09:39:29 2017 UTC]
PEiD Sig:   Microsoft Visual C# / Basic .NET
PEiD Sig:   Microsoft Visual Studio .NET
PEiD Sig:   .NET executable / .NET executable compressor

Sections (3):

Name      Entropy   MD5
.text     5.29      85cb592ad6f0d2a47a2d873db6c587af
.rsrc     4.08      3b438fb713ec89f2430e8100a3a25e04
.reloc    0.1       efd52c048dfc4249799144c25a9a6239

Table 1 Tool Details

The application decompiles cleanly with a tool like ILSpy and contains no real surprises. When executed, the C# app presents a GUI with a static header (vulnerability selection and execution controls) and footer (log output box). The middle section comprises four tabs, shown in Figure 1 below.

Fig1_Tool

Figure 1 Tool Overview

The first tab provides an overview of the vulnerabilities it is configured to exploit, along with handy links to documentation for each one. To use the application, you enter the URL you’d like to target and then select the exploit in a dropdown box. Then you select an HTTP Method and hit the button underneath it. If successful, the information from the targeted application will show up in the log and replace the contents of this first window.

Fig2_VulnServer

Figure 2 Query Vulnerable Server

The second tab includes a dropdown menu of canned commands to run on the target machine; both Windows and Linux shell commands are supported. Alternatively, you may run a batched cmd.txt from the tool’s local directory against the remote target.
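The cmd.txt batching convention could plausibly be approximated as below; this is a hypothetical sketch of reading such a batch file (one command per line), not the tool’s actual implementation:

```python
from pathlib import Path

def load_batch_commands(path="cmd.txt"):
    """Read one command per line from a cmd.txt-style batch file,
    skipping blank lines and '#' comments (assumed conventions)."""
    lines = Path(path).read_text().splitlines()
    return [line.strip() for line in lines
            if line.strip() and not line.lstrip().startswith("#")]
```

Each returned command would then be sent to the target in turn, exactly as the dropdown’s canned commands are.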

Fig3_

Figure 3 Preconfigured Queries

Fig4_ExecutedCommand

Figure 4 Executed Command Output

This behavior is detectable via RSA NetWitness® Endpoint and Packets. The HTML.lua parser for Packets contains code that enables finding this behavior in either the GET or POST HTTP Methods.

Fig5_IOC

Figure 5 IOC Metadata

When you see this alert, pivot into RSA NetWitness Endpoint and search Tracking Data to determine whether the Apache Tomcat process executed the requested command. If so, the server is vulnerable and should be handled according to your Incident Response plan, as the actors likely ran additional commands. You can verify this by hitting Ctrl-F within NetWitness Endpoint and searching for ‘Tomcat’ to filter on those events. The “Create Process” events are where you’ll find the attacker’s command history.

Fig6_NW-Endpoint

Figure 6 RSA NetWitness Endpoint Event

You may also follow up in Packets. The HTTP response will not be HTML; rather, it will be raw output from the command that was run.
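That asymmetry (a web server returning raw command output instead of a page) is itself huntable. As an illustrative heuristic only, not NetWitness logic, a crude check on a captured response body might be:

```python
def response_looks_like_command_output(body: str) -> bool:
    """Crude triage check: successful exploitation returns raw command
    output (e.g. 'uid=0(root)...' or a directory listing) rather than an
    HTML document. Flag bodies that do not open like HTML."""
    stripped = body.lstrip().lower()
    return not (stripped.startswith("<!doctype") or stripped.startswith("<html"))
```

This will over-flag non-HTML content such as JSON APIs, so it is a pivot point for an analyst, not an alert on its own.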

Fig7_NW-packet

Figure 7 NetWitness Packets Command Execution

The third tab (Figure 8) is a webshell installer. By default it is configured to install the JSP version of China Chopper with the default password ‘chopper’, which can be controlled with a customized version of caidao.exe or cknife. Alternatively, you can paste in your own JSP code and install the webshell of your liking. This simple webshell is a good fit, as the application errors out on larger, fuller-featured webshells. Figure 9 displays remote command execution and its output. This is more of a half shell and won’t allow interactive applications such as PowerShell or Mimikatz to execute properly.
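Dropped JSP webshells like this leave recognizable traces in source on disk. As a hedged illustration of hunting for them (these indicators are hypothetical examples, not RSA detection signatures), a crude scan might look like:

```python
import re

# Hypothetical webshell indicators; real scanners use far richer
# signatures and entropy/structure analysis.
SUSPICIOUS = [
    re.compile(r"Runtime\.getRuntime\(\)\.exec", re.I),
    re.compile(r"request\.getParameter\([^)]*\).*exec", re.I | re.S),
    re.compile(r"ProcessBuilder", re.I),
]

def scan_jsp(source: str) -> bool:
    """Return True if JSP source matches any crude webshell indicator."""
    return any(p.search(source) for p in SUSPICIOUS)
```

Sweeping a Tomcat webapps directory with a check like this complements the network-side detection described above.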

Fig8_webshell

Figure 8 Webshell Installation

Fig9_webshell

Figure 9 Simple Webshell Output

The final tab (Figure 10) allows you to add a list of URLs, either manually or via a text file, in order to perform bulk scans. Anyone searching for vulnerable applications can use Google dorking to find and scrape candidate URLs and then bulk-scan them with this tool.

Fig10_bulk-scanning-utility

Figure 10 Bulk Scanning Utility

This simple tool, an evolution of a previously released one, keeps pace with recently disclosed vulnerabilities. If you rely only on signature-based tools to detect and defend your network, you can easily fall prey to zero-day exploits such as CVE-2017-5638. With comprehensive network and endpoint forensics tools that deliver data in near-real time, such as the RSA NetWitness Suite, defenders can proactively search for this behavior and uncover new techniques. RSA recommends proactive security: hunting versus fishing.

The post GET TO THE CHOPPAH appeared first on Speaking of Security – The RSA Blog.

