Symantec ATP Add-on for Splunk Malware & Attack Tagging question

I need a solution


Can you please look at the symantec_malware_attacks event type in Version 1.0.6 of the Symantec ATP Add-on for Splunk and let me know if you think these additional source_event.type_ids should have been considered?

These are all detection events, just like 4099, 4102, etc., and others are IoCs. Search the page for the term "detection", for example, and you will get 10 hits in the table. Here is the Symantec site I used for your reference:

4100: SONAR Detection: Reports when Symantec Online Network for Advanced Response (SONAR) technology detected a new threat.
4109: File IoC Event: Reports when an Incident of Compromise (IoC) event occurred on a file.
4113: Vantage Detection: Reports when Symantec Vantage technology detected malicious activity on an endpoint or Vantage signature-based threats were found in the network system.
4124: Endpoint (IP/URL/Domain) Detection: Reports when a suspicious IP, URL, or domain was detected on an endpoint.
4125: Email Detection: Reports when suspicious email was detected.
4110: Network IoC Event: Reports when an Incident of Compromise (IoC) event occurred on a network.
4112: Blacklist (IP/URL/Domain): Reports when an IP, URL, or Domain was detected that is in a Symantec-provided Blacklist or the ATP Blacklist.

Here is the eventtype:

search = (sourcetype="symantec:atp:*" ("source_event.type_id"=4353 OR "source_event.type_id"=4102 OR "source_event.type_id"=4115 OR "source_event.type_id"=4099 OR "source_event.type_id"=4116 OR "source_event.type_id"=4117 OR "source_event.type_id"=4123)) OR sourcetype="symantec:cloud:email"
#tags = malware attack
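If those detection and IoC events do belong in this eventtype, a revised eventtypes.conf stanza might look like the sketch below. This is only my reading of the type_id list above, not a confirmed fix from Symantec, and the stanza name is assumed to match the eventtype name:

```ini
[symantec_malware_attacks]
# Existing type_ids plus the detection/IoC type_ids listed above
# (4100, 4109, 4110, 4112, 4113, 4124, 4125) - whether these belong
# here is exactly the open question.
search = (sourcetype="symantec:atp:*" ("source_event.type_id"=4353 OR "source_event.type_id"=4102 OR "source_event.type_id"=4115 OR "source_event.type_id"=4099 OR "source_event.type_id"=4116 OR "source_event.type_id"=4117 OR "source_event.type_id"=4123 OR "source_event.type_id"=4100 OR "source_event.type_id"=4109 OR "source_event.type_id"=4110 OR "source_event.type_id"=4112 OR "source_event.type_id"=4113 OR "source_event.type_id"=4124 OR "source_event.type_id"=4125)) OR sourcetype="symantec:cloud:email"
```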

Can you please let me know what you think and whether you believe the tagging was done accurately per Symantec's standards?


Amir Khamis



WSS App for Splunk

I need a solution

Hello everyone,

I'm looking to set up the WSS App for Splunk, but I can't find where to obtain the API key that the app requires (per pp. 29-30 of the Tech Document). Is this something that only our Admin for the service is able to see/access on the portal?

If anyone has experience setting up the App and can offer any advice or tips, I would be very grateful.

Many thanks in advance



Ready Bundle for Splunk Just Got Even Better


Dell EMC has had a strategic partnership with Splunk for the past few years now with the goal of providing our joint customers with optimal infrastructure to deploy and grow their Splunk environments.


Back on March 1, Splunk announced their fiscal year 2018 financial results and blew their numbers out of the water.  They saw a 34 percent year-over-year growth in revenue and added over 1,500 new customers.  Their CEO Doug Merritt commented that “organizations around the world are increasingly turning to Splunk to get strategic business answers from their machine data. Our opportunity is massive.”


Customers who use Splunk quickly realize that having the right infrastructure is paramount to being successful with Splunk. I've spoken with many customers who initially purchased Splunk and deployed it on vacant hardware, only to find that after a few months, as data sources and users are added, the performance needed isn't there.  Last year, Dell EMC came out with our first Ready Solutions for Splunk, and last month we released our updated Ready Bundle for Splunk on Dell PowerEdge 14G.

What is the Ready Bundle for Splunk?

The Ready Bundle for Splunk is a part of the Dell EMC Ready Solutions family. If you have not heard about Ready Solutions before, in short they are workload specific solutions that are tested and validated to help accelerate innovation, reduce deployment risks and lower total cost of ownership. Within the Ready Solutions family we have three Ready Solutions for Splunk: Ready System for Splunk on VxRack, Ready System for Splunk on VxRail and Ready Bundle for Splunk with PowerEdge. Additionally, with all our Ready Solutions, you have the option to add Isilon scale-out NAS for your cold data as a way to tier out your storage for Splunk. The Ready Bundle for Splunk consists of compute, storage, networking and Splunk Enterprise (which Dell EMC can resell) that provides the building blocks for deploying Splunk.

Within the Ready Bundle, there are different options that customers can choose from depending on your specific needs for Splunk.  All of the options were designed based on Splunk’s hardware recommendations.

Here are the different options:

  • Reference Hardware Single Instance Ready Bundle
  • Single Instance Ready Bundle
  • Distributed Deployment Ready Bundle
  • Clustered Deployment Ready Bundle
  • High Performance Clustered Deployment Ready Bundle


What’s new?

Dell EMC PowerEdge 14G servers with Intel® Xeon® Scalable processors provide great new features and capabilities that greatly benefit Splunk deployments.

  • You can now get up to 28 cores per processor in the R640 and R740XD, versus 22 cores per processor in the R630 and R730XD. Each of these models allows for two processors, so you are now able to get up to 56 cores per server versus 44 before.  That's a 27 percent increase in compute capacity per 2U server! What this means for Splunk is all about efficiency.  With more cores per server, you can now use a 2U server and align it to Splunk's high-performance hardware recommendation of 48 cores.
  • If you virtualize, you can now fit some of your utility components on the same server as your search head(s) and cut down on number of physical servers you are using for your deployment.
  • The 14G servers also come with more drive bays than the 13G options. Looking at the R740XD, which is our indexer recommendation, you can get an additional 4x 2.5” drives, which gives you roughly a 10 percent increase in usable storage capacity for Splunk.  A common problem I've seen is that at some point in the Splunk journey, customers find themselves adding indexers purely for capacity, which creates underutilized indexers.  The increased capacity can help reduce the need to scale for capacity reasons alone.  If this is happening to you, I highly recommend you look at Dell EMC's Isilon scale-out NAS as a way of decoupling your hot/warm and cold storage tiers.  This isn't always an option, however, so getting the capacity bump with 14G is a plus.

There are a lot of other great features and upgrades with PowerEdge 14G.  Here is a great blog that talks more about them.

Want to learn more about our data analytics solutions for Splunk and other Big Data workloads? Email us at

Happy Splunking!




VMAX for Splunk 2.0

Hi again! The VMAX for Splunk 2.0 add-on and app have been out for a little while now, so I decided it was time to follow on from the version 1.0 blog and put one together for 2.0.

A number of VMAX for Splunk users have been in contact since the release of the initial offering, and a number of improvements have been made based on their suggestions, so I hope version 2.0 will be as well received as our first outing!

Same as always, I will try to cover everything that goes with getting VMAX for Splunk set up in your environment, but if there is anything you would like more information on, anything that isn't covered, or if you are still in need of some help, get in contact in the comments or via

Downloading Links for VMAX for Splunk 2.0

  • VMAX for Splunk Technology Add-on (TA): Splunkbase
  • VMAX for Splunk Technology Add-on User-guide: DECN
  • VMAX for Splunk App: Splunkbase
  • VMAX for Splunk App User-guide: DECN

About the VMAX for Splunk TA and App

The Splunk Technology Add-on (TA) for Dell EMC VMAX allows a Splunk Enterprise administrator to collect inventory, performance information, and summary information from VMAX storage arrays. You can then directly analyse the data or use it as a contextual data feed to correlate with other operational or security data in Splunk Enterprise.

The Splunk VMAX TA is configured to report events in five-minute intervals, which is the lowest possible granularity for performance metrics reporting. Event metric values represent the value recorded at that point in time on the VMAX; values shown for an event in Splunk at 10:00am represent their respective values at 10:00am on the VMAX.

The Splunk App for Dell EMC VMAX allows a Splunk Enterprise administrator to take the inventory, performance, and summary information gathered from VMAX storage arrays by the VMAX Technology Add-on (TA) and present it in pre-built dashboards, tables, and time charts for in-depth analysis and drill-down to events.

Improvements from Version 1.0

There have been a number of improvements. The new TA is configured to work with Unisphere 8.4, so it features new endpoints and metrics not previously available in version 1.0. In addition, the number-one ask from our customers, the ability to define an instance of Unisphere per VMAX input, has been included, removing the restriction which required an instance of the TA per Unisphere instance. This will greatly benefit customer environments where embedded Unisphere is in operation, or where multiple instances of Unisphere are configured to manage arrays. SSL support has also been added, giving customers the option to encrypt all of their data travelling between Unisphere and their Splunk instances.

The number of metrics ingested from Unisphere has greatly increased; users can now collect metrics at the following levels:

  • Array
  • Storage Resource Pool
  • Storage Group
  • Director (FE/BE/RDF/IM/EDS)
  • Ports
  • Port Groups
  • Hosts
  • Initiators
  • Workload Planner (Compliance/Headroom)
  • VMAX System Alerts

In addition, the TA gives customers the ability to select what metrics they would like to ingest into Splunk instead of collecting all metrics. For example, if a customer wants to only collect Array level metrics, they can turn off all other metric reporting levels.

The front-end app has also had a complete overhaul. Each reporting level mentioned above has its own dedicated dashboard(s), driven by the KPIs defined in Unisphere, allowing all information to be collated in one place, with control over filtering right down to individual resources or reporting on everything at once. To help drive these bulkier searches, the app uses post-process searches to reduce its resource usage footprint: instead of an individual search per dashboard panel, one search can drive up to 12 panels at once, drastically improving report generation time and performance. Drilldowns have been greatly improved too: users can report against all arrays/resources, or drill down by array and dynamically load resources for further drilldowns, giving complete control over the level at which they view their VMAX environment.

Data Collection and Source Types

The VMAX TA provides the index-time and search-time knowledge for inventory, performance metrics, and summary information. By default, all VMAX data is indexed into the default Splunk index; this is the 'main' index unless changed by the admin.

The add-on collects many different kinds of events for VMAX, including performance, inventory, and summary metrics. Depending on the activity of the Hosts, Port Groups & Initiators in your environment, there may be events with no performance metrics collected. This can be confirmed if there is a field present in the event named 'perf_data' with a value of 'false'. To limit the amount of data collected and stored on a VMAX, only active Hosts, Port Groups & Initiators are reported against, so it is intended behaviour to have no performance metrics for those which have been inactive for some time. More on this in the section 'Active vs. Inactive Performance Metrics Gathering'.

The source type used for the Splunk Add-on for VMAX is 'dellemc:vmax:rest'. All events are in key=value pair format. Each event has an assigned 'reporting_level' indicating the level at which the event reports, along with the associated VMAX array ID and, if reporting at lower levels, the object ID, e.g. Storage Group, Director, Host.
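For illustration, a Storage Group-level event might look something like the line below; the exact field names used for the array and object IDs are my own placeholders rather than confirmed TA field names:

```
reporting_level="Storage Group" array_id="000197900123" storage_group_id="SG_App_1" perf_data="true"
```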

Hardware and Software Requirements

To install and configure the VMAX TA & App, you must have Splunk admin privileges. Because this add-on runs on Splunk Enterprise, all of the Splunk Enterprise system requirements apply.

There are no specific hardware or software requirements for the VMAX TA; it points at your existing environment and Unisphere to gather metrics. The VMAX app no longer requires additional packages to be installed in Splunk; it is ready to go once installed in the environment and configured for use.

Single Instance/Distributed Environment Installations

In a distributed deployment, install the Splunk VMAX TA on your search heads and heavy forwarders. The TA does not support universal or light forwarders because it requires Python. The add-on does not need to be installed on indexers, as parsing occurs on the heavy forwarder rather than on the indexers. The app only needs to be installed on the search heads, and requires no additional configuration.

For detailed single-instance/distributed installation instructions, refer to Splunk's "Installing add-ons" documentation, which describes how to install an add-on in the following deployment scenarios:

  • Single-instance Splunk Enterprise
  • Distributed Splunk Enterprise
  • Splunk Cloud
  • Splunk Light

VMAX TA Installation Considerations

The add-on does not require the ability to modify VMAX configuration, so it is highly recommended that you create a read-only user account to provide greater security over access to your storage network.

The VMAX TA works through RESTful communications between Splunk and Unisphere, so it is necessary to have Unisphere set up and running in your environment with your arrays added. I won't go into details about REST here, but if you would like to know more about it, my colleague Paul Martin has put together a great series of blog articles on REST & VMAX to get you started. The first article in that series is 'Getting Started with the REST API'.

Performance of data collection is dependent on many factors, such as VMAX system load, Splunk Enterprise system load, and environmental factors such as network latency. I have written a useful VMAX for Splunk sizer script which is designed to mimic the function of Splunk and provide insight into the recommended reporting intervals for the arrays in your environment; more on the sizer script later in this blog.

Enabling VMAX Performance Metrics Gathering

Before any metrics can be collected from a VMAX, you must ensure that the VMAX is registered to collect performance metrics. This is enabled from within the Unisphere for VMAX Web UI.

To register your VMAX(s) follow these steps:

1. Log in to Unisphere and from the main home screen identify the VMAX you want to add to Splunk

2. In the VMAX’s summary panel, under ‘System Utilization’ click ‘Register this system to collect performance metrics’


3. A new page for ‘System Registrations’ will open where you will see your VMAX listed. Click the VMAX to highlight it and click ‘Register’


4. When the ‘Registration’ dialogue window opens, select the check-box for ‘Root Cause Analysis’ and click OK.


5. If the registration process is successful, you will see a green dot to signify root-cause analysis is enabled.


6. With the registration process complete, leave Unisphere for 8-24 hours to start gathering performance metrics before adding the VMAX to Splunk. Performance metrics collection is not immediate; for more information please refer to the 'Performance Management – Metrics' section of the 'Unisphere 8.4 Online Help' guide available on

Active vs. Inactive Performance Metrics Gathering

To limit the amount of data collected and stored on a VMAX, only active Port Groups, Hosts, and Initiators are reported against for performance metrics. Inactivity is determined by no activity being recorded by performance monitors for a specified amount of time. The VMAX TA ingests a wide range of metrics across each of the reporting levels. To get detailed definitions of each of the performance metrics see the ‘Performance Management – Metrics’ section of the ‘Unisphere 8.4 Online Help’ guide available on

This is not enforced by Splunk but is the behaviour of the VMAX; recording zero values for every Port Group, Host, and Initiator in an environment would very quickly fill databases with useless data.

When the VMAX TA is collecting information on the Port Groups, Hosts, or Initiators in your environment, it first obtains a list of all objects at each reporting level. Using this list, calls are made to Unisphere for performance metrics for each object; if an object is inactive, no performance metrics are returned. This inactivity is reflected in the VMAX events through the key/value pairs below.

{reporting_level}_perf_details: false

{reporting_level}_perf_message: No active {reporting_level} performance data available
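For example, an inactive host would produce an event carrying the following (substituting 'host' for {reporting_level} in the template above):

```
host_perf_details: false
host_perf_message: No active host performance data available
```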

SSL Configuration

One of the biggest asks from version 1.0 of the add-on was to have end-to-end SSL availability in the VMAX/Splunk environment. Thankfully, the root cause of the issue surrounding hostname resolution was identified, and SSL is now available as a configuration option for each VMAX input.

SSL is enabled by default in the VMAX TA when adding inputs. To retrieve the required certificate from Unisphere, follow these steps:

1. Get the CA certificate of the Unisphere server. This pulls the CA cert file and saves it as a .pem file:

# openssl s_client -showcerts -connect {unisphere_host}:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > {unisphere_host}.pem

Where {unisphere_host} is the IP address or hostname of the Unisphere instance.
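Before using the cert, you can optionally sanity-check the downloaded .pem with standard openssl usage; this check is my own suggestion, not a step from the official guide:

```
# openssl x509 -in {unisphere_host}.pem -noout -subject -enddate
```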

2. OPTIONAL: If you want to add the cert to the system certificate bundle so no certificate path is specified in the VMAX data input, copy the .pem file to the system certificate directory as a .crt file:

# cp {unisphere_host}.pem /usr/share/ca-certificates/{unisphere_host}.crt

3. OPTIONAL: Update CA certificate database with the following commands:

# dpkg-reconfigure ca-certificates
# update-ca-certificates

Check that the new {unisphere_host}.crt will activate by selecting 'ask' in the dialog. If it is not enabled for activation, use the down and up arrow keys to select it, and the space key to enable or disable.

4. If steps 2 & 3 are skipped and instead the cert from step 1 will just remain in a local directory, you can specify the location of the .pem cert in the VMAX data input setting ‘SSL Cert Location’. Otherwise, leave ‘SSL Cert Location’ blank and ‘Enable SSL’ enabled to use the cert from the system certificate bundle.

VMAX for Splunk Sizer

An additional script is included with the VMAX TA to help determine the optimum reporting interval for your VMAX data inputs. This sizer is meant to be used with one instance of Unisphere at a time; it is not concerned with performance across multiple instances of Unisphere, as that would fall under the remit of Splunk performance. The sizer helps you set the VMAX TA input intervals so that each input has enough time to complete before the reporting interval is exceeded and metric collection intervals are missed.

Metrics collection run times depend entirely on the environment and the VMAX itself, and on how heavily utilised and loaded with resources it is, so there is no one-size-fits-all option. The script simulates Splunk, gathering summary and performance metrics from an instance of Unisphere and the VMAX(s) of your choosing. These collection runs also run concurrently, as they do in Splunk. When complete, the script outputs how long metric collection took for a given VMAX, and the recommended reporting interval.

To run the VMAX for Splunk sizer script, you will require Python 2.7 and the Python Requests library.

To run the sizer script, follow the steps below:

1. Navigate to the VMAX TA folder containing the sizer script and configuration file:

# cd {splunk_dir}/etc/apps/TA-DellEMC-VMAX/bin/sizer

2. Open the vmax_splunk_sizer_config.ini configuration file for editing and set the following values:


  • The Unisphere IP address or hostname
  • The Unisphere port (default is 8443)
  • The Unisphere username & password
  • The required SSL setup:
    • If you require no SSL, set this to False
    • If you have an SSL cert loaded into the system bundle, set this to True
    • If you have an SSL cert but want to specify the path, set this to the path to the cert
  • Your VMAX numerical IDs; for more than one VMAX, separate the IDs with commas

3. Under [REPORTING_LEVELS], if you want to turn off any specific reporting level, change its value to False
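For reference, a [REPORTING_LEVELS] section might look like the fragment below; the key names here are illustrative, so check the shipped vmax_splunk_sizer_config.ini for the exact names:

```ini
[REPORTING_LEVELS]
array = True
storage_group = True
hosts = False
```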

4. Debug mode is not necessary unless you are diagnosing an issue with VMAX for Splunk support, but if you would like to see all calls output to screen, change this to True

5. With all the environment settings configured, run the VMAX for Splunk sizer script using Python

$ python

6. Once the script has run to completion, details of the metrics collection run will be output to the screen along with recommendations on the optimum reporting interval for each VMAX.


Installation and configuration overview for the Splunk Add-on for VMAX

Once you have Splunk set up and running in your environment, there is very little required to get the VMAX TA set up and collecting information. There are no additional requirements or dependencies, so once you have the VMAX TA downloaded from the Splunkbase website or through the app store from within Splunk you are good to go with set up!

I am going to go through the process of setting up the TA first, adding VMAX arrays as data inputs afterwards, then finally setting up the VMAX app to start viewing meaningful analysis of your environment through Splunk. The installation of both the TA and app follow the same procedure as all others so I won’t bother including screenshots of the process.

1. From your Splunk home screen, click the cog icon beside ‘Apps’ to navigate to the ‘Manage Apps’ section.

2. Within the ‘Manage Apps’ section, click the button ‘Install App from file’.

3. Click ‘Choose File’, select the VMAX Add-on for Splunk, and click ‘Upload’.

4. Once the upload is complete you will be prompted to restart Splunk to complete the installation, click ‘Restart now’. When Splunk restarts, navigate back to the home screen and you will now see a dashboard panel for the VMAX TA. Click on the panel to start adding your VMAX(s) to Splunk.

5. Once opened, you can add VMAX(s) to Splunk by clicking on the ‘Create New Input’ button in the top right of the UI.

6. To add a VMAX to Splunk, you must enter a number of details into Splunk about the instance of Unisphere used, VMAX details, SSL details, and reporting metrics configuration. The table below lists each option, its default value if there is one, and a description of the option. Once all options are set, click ‘Add’ to add the VMAX as a data input to Splunk.







  • Name: The name of the input as it will appear in Splunk.
  • Interval: The metrics collection interval. This should be set in increments of 300s, as this is the reporting interval of performance metrics in Unisphere. For more information on determining the ideal reporting interval for your environment, see the 'VMAX for Splunk Sizer' section above.
  • Index: The index to which data from Unisphere for this VMAX will be written.
  • Unisphere IP Address: Unisphere IP address or hostname.
  • Unisphere Port: Unisphere port.
  • Unisphere Username: Unisphere username.
  • Unisphere Password: Unisphere password.
  • VMAX Numerical ID: The 12-digit numerical VMAX ID.
  • Enable SSL: Enable if you require end-to-end SSL communication between Splunk and Unisphere; uncheck to disable SSL entirely. See the 'SSL Configuration' section above for more information on SSL set-up.
  • SSL Cert Location: If 'Enable SSL' is enabled, this option has two behaviours: (1) if left blank, Splunk will search the system certs bundle for a valid Unisphere cert; (2) if a path is provided, this is the path Splunk will use to access the Unisphere cert independently of the system certs bundle.
  • REST Request Timeout: The amount of time Splunk will wait for a response from Unisphere for any given call before timing out and logging an error. If changing from the default, consider Unisphere load; setting it too low may have a negative impact on metrics collection.
  • Array: Collect array level metrics.
  • Alerts: Collect VMAX system alerts.
  • Collect VMAX only metrics: If enabled, Splunk will collect only those alerts which directly specify the array ID in the alert description (see the 'Known Issues' section for the impact of enabling this option). If disabled, Splunk will gather all system alerts from the instance of Unisphere it is collecting VMAX metrics from, even for those VMAXs which are not added as an input to Splunk.
  • Storage Resource Pool: Collect Storage Resource Pool metrics.
  • Storage Group: Collect Storage Group metrics.
  • Director: Collect Director metrics.
  • Ports: Collect Port metrics.
  • Port Group: Collect Port Group metrics.
  • Hosts: Collect Host metrics.
  • Initiators: Collect Initiator metrics.
  • Workload Planner: Collect Workload Compliance & Headroom metrics.

7. To add another VMAX to the TA, repeat steps 5-6 as many times as necessary.

8. When all VMAX(s) have been added to the TA, you will see them listed within the TA. From here you can enable, disable, or edit the options for a given VMAX after it has been configured.


9. Once a VMAX has been added to the VMAX TA, it starts gathering information immediately. To access that data, use Splunk Search to start looking at VMAX-related events with the SPL query: sourcetype="dellemc:vmax:rest"
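For example, to confirm that events are arriving for each configured reporting level, a simple breakdown search can be run (only the sourcetype and the reporting_level field, both documented above, are assumed here):

```
sourcetype="dellemc:vmax:rest" | stats count by reporting_level
```

If a level you enabled in the data input is missing from the results, that is a good starting point for the troubleshooting section below.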


Troubleshooting the VMAX TA

The VMAX TA has been developed to give the end-user as much detail as possible about the activity of the add-on in their environment. All add-on log events are marked as info, error, or critical depending on the nature of the event. If you are having any issues with the add-on, the logs will give you precise information as to the cause of the problem. These issues could be related, but are not limited, to:

  • Incorrect Unisphere configuration or username/password combination
  • Incorrect SSL setup
  • Incorrect Array ID
  • VMAX is not performance registered
  • Performance metrics timestamp is not up-to-date

The two log files that you can use to diagnose problems with this add-on are:

  • /{splunk_install_dir}/splunk/var/log/splunk/ta_dellemc_vmax_inputs.log
  • /{splunk_install_dir}/splunk/var/log/splunk/splunkd.log

Before the add-on successfully runs for the first time, error logs go to splunkd.log. After the add-on successfully runs, error logs go to ta_dellemc_vmax_inputs.log.

Installation and Configuration Overview for the VMAX App for Splunk

As there are no dependencies required for the installation of the VMAX App for Splunk, set-up is completed from the Splunk Web UI and, if required, the associated VMAX App macro configuration file.

The VMAX App uses Splunk macros to shorten lengthy and frequent search queries. By default, these queries use the Splunk default index; typically this is the 'main' index, but it can be changed. If you store your VMAX event data in a different index from the default, or have distributed your VMAX data across multiple indexes, you will need to configure these macros to use the correct indexes in order for the app dashboards to work as intended.

1. From your Splunk home screen, click the cog icon beside ‘Apps’ to navigate to the ‘Manage Apps’ section.

2. Within the ‘Manage Apps’ section, click the button ‘Install App from file’.

3. Click ‘Choose File’, select the VMAX App for Splunk, and click ‘Upload’.

4. (OPTIONAL) If you have configured the VMAX TA for Splunk to index event data in an index other than the default index you will need to reconfigure the VMAX App macro configuration. Navigate to the installation directory of the VMAX App for Splunk which contains all default configuration files:

# cd {splunk_dir}/etc/apps/App-DellEMC-VMAX/default

Copy macros.conf to the local directory in the App installation directory:

# cp macros.conf {splunk_dir}/etc/apps/App-DellEMC-VMAX/local

Edit the newly copied macros.conf so that each 'index=' key/value pair represents the indexes in use in your environment. Each reporting level ingested by the VMAX TA corresponds to a macro in macros.conf, so you can set different indexes for array-level, host-level, and alert-level metrics, for example.


For example, the default array-level macro:

[vmax_array]
definition = index=main sourcetype=dellemc:vmax:rest reporting_level="Array"

becomes, when using a custom index named 'vmax_index':

[vmax_array]
definition = index=vmax_index sourcetype=dellemc:vmax:rest reporting_level="Array"

Once all the macros have been updated to reflect the indexes in use, save the file and return to Splunk UI.

5. Once the VMAX App is added to Splunk, you will not be prompted to restart Splunk to complete the installation, but it is advisable to restart Splunk before using the App, and also to apply any optional changes made in step 4. Navigate to 'Settings > Server Controls' and click 'Restart now'. When Splunk restarts, navigate back to the home screen and you will see a dashboard panel for the VMAX App.




VMAX App Usage & Navigation

Navigating throughout the VMAX App is done entirely through the menu featured at the top of the screen when you open the VMAX App.


By default, the drop-down boxes in each dashboard will feature all the objects available at that reporting level through the ‘ALL’ option. You can drill down further by array ID and more depending on the dashboard you are within.


The default time range for each dashboard time chart is the last 24 hours. This can be changed under 'Time Frame', which works the same as all other Splunk time range pickers.


It is important to note that not all panels within each dashboard are set to represent data from the last 24 hours; most are set to present the most recent, up-to-date information to the user. As a rule of thumb, only the time charts in the VMAX App dashboards reflect data from the time range specified in the time range picker. All other panels, such as tables, single-number panels, and pie charts, represent the most recent VMAX event data. If there is an issue with data collection and the data is more than 10 minutes old, some panels may display a 'No Results Found' message. If this happens, check your VMAX TA logs to determine whether data is still being collected by the VMAX TA.

Troubleshooting the VMAX App for Splunk

The VMAX App for Splunk has been designed so that little or no user interaction is required to get it running in the environment. The only case in which manual configuration is required is when the VMAX TA uses indexes which differ from the default Splunk index (see the installation & configuration section above for more info).

If you are having issues with the macros in your VMAX App, check the indexes configured for use by each of the VMAX TA data inputs; the indexes specified by each input should match those used by the VMAX App in the macros.conf configuration file. If they do match and everything appears correct, restart Splunk to ensure that the changes have been applied and the settings are read from the VMAX App's local directory.
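One quick way to confirm which macros.conf definition Splunk is actually reading is the btool utility, assuming a standard Splunk installation path:

```
# {splunk_dir}/bin/splunk btool macros list vmax_array --debug
```

The --debug flag prints the file each setting was read from, so you can see whether your local copy is overriding the default.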

As the VMAX App only takes the data ingested by the VMAX TA and presents it in dashboards populated with various panels, there are no other VMAX App settings or areas which could cause issues to appear within the App. If you are facing issues in the VMAX App and the macros are configured correctly, the next place to troubleshoot is the VMAX TA itself.

Known Issues

VMAX Alerts

When requesting VMAX alert information from the Unisphere REST API through the /system/alert endpoints, there are no key/value pairs for the array and object IDs. When the event is processed in the VMAX TA, the alert description is parsed, and if an array or object ID is present it is added before the data is indexed in Splunk. At present most information can be parsed from the description, but this does not always work.

An example of not being able to parse the array/object info from an alert is array metadata usage. The REST API returns no IDs with this alert, so unless the user reverts to Unisphere manually there is no way to know which array the alert belongs to.
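The description-parsing step above can be sketched roughly as follows. This is hypothetical, not the VMAX TA’s actual code, and it assumes the array serial appears in the description as a 12-digit number (e.g. "Array 000197900049: FE director utilisation high"); that format is an assumption.

```python
import re

# Hypothetical sketch of parsing an array ID out of an alert
# description. Assumes the serial is a 12-digit number embedded
# in the text; descriptions without one (e.g. metadata usage
# alerts) yield None, mirroring the known issue described above.
ARRAY_ID_RE = re.compile(r"\b(\d{12})\b")

def parse_array_id(description):
    """Return the array ID found in an alert description, or None."""
    match = ARRAY_ID_RE.search(description)
    return match.group(1) if match else None
```

When parse_array_id returns None and ‘Collect VMAX only metrics’ is enabled, the alert cannot be matched to the data input’s array ID, which is why such alerts are skipped.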

The impact of this in the VMAX TA is that when ‘Collect VMAX only metrics’ is enabled, certain alerts that do not have the array/object info in the alert description will not be ingested into Splunk. This is because this option uses the array ID key/value pair to determine whether the alert relates to the VMAX ID specified in the data input.

It must be noted, however, that the data such an alert describes can still be viewed within the various dashboards of the VMAX App for Splunk. For example, the array metadata usage percentage is featured as a time chart in the VMAX dashboard. This is because, for each of the reporting levels, every possible piece of information pertaining to that level’s objects is retrieved from Unisphere.

If ‘Collect VMAX only metrics’ is left disabled, all system alerts from the instance of Unisphere specified in the VMAX data input are ingested into Splunk. This means that system alerts for arrays that are not added as data inputs, but are present in that instance of Unisphere, are also ingested into Splunk.

REST Response Code 401

Occasionally, when the Unisphere REST API is under a heavy load of REST requests, it may return a 401 response to a request from the VMAX TA. This is temporary and will clear itself, usually immediately.
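A simple way to absorb these transient 401s is to retry the call after a short pause. The wrapper below is a hypothetical sketch, not part of the VMAX TA; `fetch` stands in for whatever callable performs the REST request and returns a (status, body) pair.

```python
import time

def get_with_retry(fetch, retries=3, delay=2):
    """Retry a REST call that intermittently returns 401 under load.

    `fetch` is any callable returning (status_code, body). Transient
    401s from a busy Unisphere instance usually clear on the next
    attempt, so a couple of retries with a short delay suffices.
    """
    status, body = fetch()
    for _ in range(retries - 1):
        if status != 401:
            break
        time.sleep(delay)  # give the busy API a moment to recover
        status, body = fetch()
    return status, body
```

Persistent 401s, by contrast, point at a real credentials problem rather than load, and should not be retried away.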

Contacting Support

For any and all issues or queries, please contact support. Include as much information as possible about the issue, the operating system and Splunk versions, and the associated VMAX TA logs.


Triggering the Symantec DLP with Splunk

I need a solution

Hi everyone!

Does anyone know if there is an add-on or option in Splunk to trigger a third-party DLP (Symantec, IDLP, …)? Or is it also possible to configure Splunk Alert Actions (or custom scripts) to perform a DLP function?

The flow should be like this:
Collect all client actions (changes to files, copying of files, etc.), portable media logs (USB drives, CD-ROMs, etc.), and e-mail server actions; analyse and correlate the collected data based on rules (which we will define); and, when an anomaly is detected, create an alert and trigger the DLP solution to block the session.
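One way to wire up the last step is a Splunk custom alert action: Splunk invokes the action script with --execute and passes the alert payload as JSON on stdin. The sketch below is hypothetical; the DLP endpoint URL, the "block_session" action, and the user/src_ip result fields are all assumptions that would need to match your DLP vendor’s actual API.

```python
import json
import sys
import urllib.request

def build_block_request(payload, dlp_url="https://dlp.example.com/api/block"):
    """Build a block-session request for a hypothetical DLP REST API
    from a Splunk custom alert action payload. The payload's "result"
    dict holds the triggering event's fields."""
    result = payload.get("result", {})
    body = json.dumps({
        "user": result.get("user"),
        "src_ip": result.get("src_ip"),
        "action": "block_session",  # hypothetical DLP API action name
    }).encode("utf-8")
    return urllib.request.Request(
        dlp_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Splunk passes the alert payload as JSON on stdin with --execute.
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        req = build_block_request(json.load(sys.stdin))
        urllib.request.urlopen(req)
```

The correlation rules themselves would live in Splunk as saved searches; only the blocking call needs the custom script.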



Explaining the Time Value of Data

EMC logo

Running Out Of Time….

With 30 seconds remaining in the game, up by 1 point, my team is at the foul line. We go 1 for 1 on the free throws. Up 2, the other team takes the ball down the court and, as the clock expires, drains a 3-pointer.

Game over!

Time expired!

The college basketball playoff season is a great reminder of the importance of playing to the clock. Did your team have more points than the opposing team when time expired?

What about your data? Has time expired on the business value of your data?

basketball heading into hoop

Does Data Expire?

Most organizations are trying to wrap their heads around how to make faster decisions based on their data. One of the fastest-emerging trends is to use that data before competitors even have a chance to react. This is where the concept of data expiration plays a key role: to prevent data from expiring, organizations must realize and execute on the business value of the data faster than the competition.

Technologies Enabling Faster Analytics

The Internet of Things (IoT) enables low-powered sensors with an IP address to send data off to be processed. Imagine a washing machine connected to the internet. While it might be marvelous to manage the washing machine from a smartphone, consider the ability for the manufacturer to monitor the machine’s performance. One day the analytics from the washing machine notice that a $10 part is failing. If the part is replaced quickly, it could save the whole machine from needing to be replaced and prevent a flooded laundry room. Next, the owner is sent an email explaining that the part needs to be replaced, along with a coordinated list of available times. All of this is possible because the washing machine’s diagnostic files were analyzed before the machine failed.

In Charles Duhigg’s ”Smarter Faster Better” he discusses how humans face information blindness when given too many choices. Organizations face the same problem when trying to analyze their data, or at least they did until recently. Machine learning has given organizations the ability to sift through millions and billions of data points to find value in data. Early in my analytics career, I worked on a project to help systems administrators parse proxy log files looking for bad actors. The administrators had the enormous task of sifting through millions to billions of log entries, and most of the day-to-day work only reported on events days or weeks after they occurred. Now, with machine learning tools and software like Splunk, TensorFlow, Caffe, and others, those administrators can be alerted in real time when security threats occur.

Streaming analytics allows for real-time analysis of data as it streams in. In the past, analytical processing relied on batch systems using bounded data sets, and batch processing tended to have long cycles that produced results already out of date once processed. Now, with processing frameworks like Apache Spark, Beam, Flink, and others, unbounded, infinite data sets are processed in real time. Faster processing of the data provides the feedback loop applications need to ensure business value is captured before the data expires.

Capture Your Data Before Time Expires…..Our Story…..

While time is expiring on this year’s NCAA Tournament and your team’s chances of a championship, make sure the time value of your data doesn’t expire. Dell Technologies is strategically placed to help you capitalize on your data before the shot clock runs out. If you would like to know more, please contact us on Twitter at @DellEMCbigdata or email us at




Splunk Integration

I need a solution

Trying to get SEP Cloud events into Splunk Enterprise v6.6.1 running on a Windows Server.

I created the scripted input per the Symantec technote; however, it doesn’t appear to be working.

The following appears in splunkd.log:

ERROR ExecProcessor – Couldn’t start command “”C:\Program””: FormatMessage was unable to decode error (193), (0xc1)

Also, the following error occurs when trying to run the script directly from a command line:

C:\Program Files\Splunk\bin>splunk cmd python
Traceback (most recent call last):
  File "", line 8, in <module>
    import dateutil.parser
ImportError: No module named dateutil.parser

It appears some Python modules are not installed with the version of Python that is included with Splunk.
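Splunk’s bundled Python does not ship with third-party packages like dateutil, so the options are to install dateutil into Splunk’s Python site-packages or to rewrite the script’s timestamp parsing with the standard library. Below is a hedged sketch of the latter; it assumes the API returns ISO 8601 timestamps like "2017-08-01T12:34:56.000Z" (that format is an assumption, so check the actual event data).

```python
from datetime import datetime

# Workaround sketch: replace `dateutil.parser.parse` with stdlib
# parsing so the script runs on Splunk's bundled Python, which has
# no dateutil. Assumes UTC ISO 8601 timestamps with milliseconds.
def parse_timestamp(value):
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%fZ")
```

Unlike dateutil.parser.parse, strptime needs the exact format string, so adjust it if the timestamps differ.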

Has anyone managed to get this working?

Is there additional information available regarding what’s needed to get this working?



ATP3.0-Splunk Event Forwarding

I need a solution


I set up ATP Endpoint 3.0 and Splunk 6.6, but it does not work.

There is no data on the dashboard except for “Open Incidents”.

What should I do to display data in other areas?


I did..

-Splunk Settings

  -install Symantec ATP App for Splunk
  -install Symantec ATP Add-on for Splunk 
  -set up HTTP Event Collector

-ATP Settings

  -Splunk Event Forwarding settings
  -OAuth Clients settings
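Since everything except “Open Incidents” comes in via event forwarding, the HTTP Event Collector is the usual culprit. A quick sanity check is to send a test event to HEC yourself; the sketch below builds such a request. The host name and token are placeholders; HEC listens on port 8088 by default, at /services/collector/event, and expects a “Splunk <token>” Authorization header.

```python
import json
import urllib.request

def hec_request(host, token, event, port=8088):
    """Build a test request for a Splunk HTTP Event Collector endpoint.

    Host and token are placeholders to fill in; HEC's default port is
    8088 and it authenticates via the Authorization header."""
    body = json.dumps({"event": event, "sourcetype": "_json"}).encode("utf-8")
    return urllib.request.Request(
        "https://{}:{}/services/collector/event".format(host, port),
        data=body,
        headers={"Authorization": "Splunk {}".format(token),
                 "Content-Type": "application/json"},
        method="POST",
    )

# To send it (requires a reachable Splunk instance):
# urllib.request.urlopen(hec_request("splunk.example.com",
#                                    "YOUR-HEC-TOKEN", {"test": 1}))
```

If the test event lands in Splunk but ATP data still does not, the next place to check is the Splunk Event Forwarding and OAuth client configuration on the ATP side, and whether the ATP App’s dashboards search the index the HEC token writes to.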