Nonrecoverable I/O error occurred on file ‘%ls’.

Details
Product: SQL Server
Event ID: 3271
Source: MSSQLServer
Version: 10.0
Component: SQLEngine
Symbolic Name: DMPIO_IO_ERROR
Message: A nonrecoverable I/O error occurred on file “%ls:” %ls.
   
Explanation

This is a general error that occurs when the operating system raises an error while performing I/O during a backup or restore operation. In most situations the cause is simply that the backup medium is full.

The error may include additional text from the operating system indicating that the disk is full. When performing a backup or restore operation with third-party backup software, an additional message may appear indicating that the backup failed. The message may look similar to the following text:

“2005-08-02 16:05:16.04 spid55 Error: 18210, Severity: 16, State: 1. 2005-08-02 16:05:16.04 spid55 BackupVirtualDeviceFile::RequestDurableMedia: Flush failure on backup device ‘VDINULL’. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).”

This is an indication that the backup software requested a termination of the backup or restore operation.
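Error-log lines in the format quoted above can be parsed programmatically when triaging backup failures. A minimal sketch, assuming log lines follow the `spid… Error: …, Severity: …, State: …` layout shown in the sample (the function and field names are illustrative, not part of any SQL Server API):

```python
import re

# Pattern for SQL Server error-log lines in the format quoted above.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2})\s+"
    r"spid(?P<spid>\d+)\s+"
    r"Error: (?P<error>\d+), Severity: (?P<severity>\d+), State: (?P<state>\d+)"
)

def parse_error_line(line):
    """Extract spid, error number, severity, and state from an error-log line."""
    m = LOG_PATTERN.search(line)
    if m is None:
        return None
    return {k: int(v) for k, v in m.groupdict().items() if k != "timestamp"}

sample = "2005-08-02 16:05:16.04 spid55 Error: 18210, Severity: 16, State: 1."
print(parse_error_line(sample))
# {'spid': 55, 'error': 18210, 'severity': 16, 'state': 1}
```

A scan like this over the error log makes it easy to collect every error raised around the time of the failed backup, as the User Action below recommends.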

   
User Action

Perform the following tasks as appropriate:

  • Review the underlying system error messages and SQL Server error messages preceding this one to identify the cause of the failure.

  • Ensure that the backup and restore medium has sufficient space.

  • Correct any errors raised by third-party backup and restore software.
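Since a full backup medium is the most common cause, a pre-flight free-space check can avoid a mid-backup failure. A sketch using the Python standard library (the function name and the size estimate are illustrative; the estimated backup size must come from elsewhere, e.g. the database's used-space figures):

```python
import shutil

def has_space_for_backup(target_dir, estimated_backup_bytes):
    """Return True if the backup target directory has at least the
    estimated number of bytes free."""
    free = shutil.disk_usage(target_dir).free
    return free >= estimated_backup_bytes

# Example: check whether the current directory could hold a 1 GiB backup.
print(has_space_for_backup(".", 1 * 1024**3))
```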

   
   
Version: 9.00.1281.60
Symbolic Name: DMPIO_IO_ERROR
Message: A nonrecoverable I/O error occurred on file “%ls:” %ls.
   
Explanation

This is a general error that occurs when the operating system raises an error while performing I/O during a backup or restore operation. In most situations the cause is simply that the backup medium is full.

The error may include additional text from the operating system indicating that the disk is full. When performing a backup or restore operation with third-party backup software, an additional message may appear indicating that the backup failed. The message may look similar to the following text:

“2005-08-02 16:05:16.04 spid55 Error: 18210, Severity: 16, State: 1. 2005-08-02 16:05:16.04 spid55 BackupVirtualDeviceFile::RequestDurableMedia: Flush failure on backup device ‘VDINULL’. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).”

This is an indication that the backup software requested a termination of the backup or restore operation.

   
User Action

Perform the following tasks as appropriate:

  • Review the underlying system error messages and SQL Server error messages preceding this one to identify the cause of the failure.

  • Ensure that the backup and restore medium has sufficient space.

  • Correct any errors raised by third-party backup and restore software.

   
   
Version: 9.0
Component: SQLEngine
Symbolic Name: DMPIO_IO_ERROR
Message: A nonrecoverable I/O error occurred on file “%ls:” %ls.
   
Explanation

This is a general error that occurs when the operating system raises an error while performing I/O during a backup or restore operation. In most situations the cause is simply that the backup medium is full.

The error may include additional text from the operating system indicating that the disk is full. When performing a backup or restore operation with third-party backup software, an additional message may appear indicating that the backup failed. The message may look similar to the following text:

“2005-08-02 16:05:16.04 spid55 Error: 18210, Severity: 16, State: 1. 2005-08-02 16:05:16.04 spid55 BackupVirtualDeviceFile::RequestDurableMedia: Flush failure on backup device ‘VDINULL’. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).”

This is an indication that the backup software requested a termination of the backup or restore operation.

   
User Action

Perform the following tasks as appropriate:

  • Review the underlying system error messages and SQL Server error messages preceding this one to identify the cause of the failure.

  • Ensure that the backup and restore medium has sufficient space.

  • Correct any errors raised by third-party backup and restore software.

   
   
Version: 8.0
Component: SQL Engine
Message: Nonrecoverable I/O error occurred on file ‘%ls’.
   
Explanation
The BACKUP or RESTORE command cannot finish because of an I/O error at the hardware or operating system level.
   
User Action
Follow these steps:

  • Verify the integrity of the media where the backup file resides. If this is a RESTORE operation and the backup file originally resided on another drive or machine, verify the integrity of the media where the backup file originally resided.
  • Check the Event Viewer logs and the SQL Server error log for additional errors that may occur around the same time.

Related:

The log shipping source %s.%s has not backed up for %s minutes.

Details
Product: SQL Server
Event ID: 14420
Source: MSSQLServer
Version: 10.0
Component: SQLEngine
Symbolic Name: SQLErrorNum14420
Message: The log shipping primary database %s.%s has backup threshold of %d minutes and has not performed a backup log operation for %d minutes. Check agent log and logshipping monitor information.
   
Explanation

Log shipping is out of synchronization beyond the backup threshold. The backup threshold is the number of minutes that are allowed to elapse between log-shipping backup jobs before an alert is generated. This message does not necessarily indicate a problem with log shipping. Instead, this message might indicate one of the following problems:

  • The backup job is not running. Possible causes for this include the following: the SQL Server Agent service on the primary server instance is not running, the job is disabled, or the job’s schedule has been changed.

  • The backup job is failing. Possible causes for this include the following: the backup folder path is not valid, the disk is full, or any other reason that the BACKUP statement could fail.

   
User Action

To troubleshoot this message:

  • Make sure that the SQL Server Agent service is running for the primary server instance and that the backup job for this primary database is enabled and is scheduled to run at the appropriate frequency.

  • The backup job on the primary server might be failing. In this case, examine the job history for the backup job to look for the cause.

  • The log shipping backup job, which runs on the primary server instance, might not be able to connect to the monitor server instance to update the log_shipping_monitor_primary table. This could be caused by an authentication problem between the monitor server instance and the primary server instance.

  • The backup alert threshold might have an incorrect value. Ideally, this value is set to at least three times the frequency of the backup job. If you change the frequency of the backup job after log shipping is configured and functional, you must update the value of the backup alert threshold accordingly.

  • When the monitor server instance goes offline and then comes back online, the log_shipping_monitor_primary table is not updated with the current values before the alert message job runs. To update the monitor tables with the latest data for the primary database, run sp_refresh_log_shipping_monitor on the primary server instance.

  • On the primary or monitor server instance, the date or time is incorrect. This may also generate alert messages. Possibly the system date or time was modified on one of them.

    Note:

    Different time zones for the two server instances should not cause a problem.
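The alert condition, and the guideline above that the threshold be at least three times the backup frequency, can be sketched as follows (the function names are my own, not part of SQL Server):

```python
def backup_alert_due(minutes_since_last_backup, backup_threshold_minutes):
    """Mirror of the 14420 check: alert when the time elapsed since the
    last log backup exceeds the backup threshold."""
    return minutes_since_last_backup > backup_threshold_minutes

def recommended_threshold(backup_frequency_minutes):
    """The guideline above: set the backup alert threshold to at least
    three times the frequency of the backup job."""
    return 3 * backup_frequency_minutes

# Backups every 15 minutes -> a threshold of at least 45 minutes.
print(recommended_threshold(15))   # 45
print(backup_alert_due(60, 45))    # True: 60 minutes elapsed, threshold 45
print(backup_alert_due(30, 45))    # False
```

Remember that if the backup job's schedule changes after log shipping is configured, the threshold must be recomputed accordingly.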

   
   
Version: 9.0
Component: SQLEngine
Symbolic Name: SQLErrorNum14420
Message: The log shipping primary database %s.%s has backup threshold of %d minutes and has not performed a backup log operation for %d minutes. Check agent log and logshipping monitor information.
   
Explanation

Log shipping is out of synchronization beyond the backup threshold. The backup threshold is the number of minutes that are allowed to elapse between log-shipping backup jobs before an alert is generated. This message does not necessarily indicate a problem with log shipping. Instead, this message might indicate one of the following problems:

  • The backup job is not running. Possible causes for this include the following: the SQL Server Agent service on the primary server instance is not running, the job is disabled, or the job’s schedule has been changed.

  • The backup job is failing. Possible causes for this include the following: the backup folder path is not valid, the disk is full, or any other reason that the BACKUP statement could fail.

   
User Action

To troubleshoot this message:

  • Make sure that the SQL Server Agent service is running for the primary server instance and that the backup job for this primary database is enabled and is scheduled to run at the appropriate frequency.

  • The backup job on the primary server might be failing. In this case, examine the job history for the backup job to look for the cause.

  • The log shipping backup job, which runs on the primary server instance, might not be able to connect to the monitor server instance to update the log_shipping_monitor_primary table. This could be caused by an authentication problem between the monitor server instance and the primary server instance.

  • The backup alert threshold might have an incorrect value. Ideally, this value is set to at least three times the frequency of the backup job. If you change the frequency of the backup job after log shipping is configured and functional, you must update the value of the backup alert threshold accordingly.

  • When the monitor server instance goes offline and then comes back online, the log_shipping_monitor_primary table is not updated with the current values before the alert message job runs. To update the monitor tables with the latest data for the primary database, run sp_refresh_log_shipping_monitor on the primary server instance.

  • On the primary or monitor server instance, the date or time is incorrect. This may also generate alert messages. Possibly the system date or time was modified on one of them.

    Note:

    Different time zones for the two server instances should not cause a problem.

   
   
Version: 8.0
Component: SQL Engine
Message: The log shipping source %s.%s has not backed up for %s minutes.
   
Explanation
As part of log shipping, alert message 14420 is generated to track backup and restoration activity. The alert message 14420 indicates that the difference between the current time and the time indicated by the last_backup_filename value in the log_shipping_primaries table on the monitor server is greater than the value that is set for the “Backup Alert” threshold.

For more information about this message, see Microsoft Knowledge Base article 329133.

   
User Action
Message 14420 does not necessarily indicate a problem with log shipping. The message indicates that the difference between the time of the last backed up file and the current time on the monitor server is greater than the time that is set as the “Backup Alert” threshold.

There are several possible reasons why the alert message was generated. The following list includes some of these reasons:

  1. The date or time (or both) on the monitor server is different from the date or time on the primary server. It is also possible that the system date or time was modified on the monitor or the primary server. This may also generate alert messages.
  2. When the monitor server is offline and then comes back online, the fields in the log_shipping_primaries table are not updated with the current values before the alert message job runs.
  3. The log shipping Copy job that is run on the primary server might not connect to the monitor server msdb database to update the fields in the log_shipping_primaries table. This may be the result of an authentication problem between the monitor server and the primary server.
  4. You may have set an incorrect value for the “Backup Alert” threshold. Ideally, you should set this value to at least three times the frequency of the backup job. If you change the frequency of the backup job after log shipping is configured and functional, you must update the value of the “Backup Alert” threshold accordingly.
  5. The backup job on the primary server is failing. In this case, check the job history for the backup job to see a reason for the failure.

Related:

The LSN %S_LSN passed to log scan in database ‘%.*ls’ is invalid.

Details
Product: SQL Server
Event ID: 9003
Source: MSSQLServer
Version: 8.0
Component: SQL Engine
Message: The LSN %S_LSN passed to log scan in database ‘%.*ls’ is invalid.
   
Explanation
If you see this message during startup when the SQL Server process tries to recover the database or as a result of an ATTACH statement, the log file for the database is corrupted. If you see the message during a restore process, the backup file is corrupted. If you see this message during a replication process, the replication metadata may be incorrect.
   
User Action
If you see the error during a restore process, check the integrity of the backup file. If possible, create a new backup in a new location and retry the restore with the new backup file.

If you see this error during startup or when you try to attach a database:

HARDWARE FAILURE

Run hardware diagnostics and correct any problems. Also examine the Microsoft Windows NT system and application logs and the SQL Server error log to see if the error occurred as the result of hardware failure. Fix any hardware-related problems.

If you have persistent data inconsistency problems, try to swap out different hardware components to isolate the problem. Check that your system does not have write caching enabled on the disk controller. If you suspect this to be the case, contact your hardware vendor.

Finally, you might find it beneficial to switch to a completely new hardware system, including reformatting the disk drives and reinstalling the operating system.

RESTORE FROM BACKUP

If the problem is not hardware related and a known clean backup is available, restore the database from the backup.

DBCC CHECKDB

If no clean backup is available, execute DBCC CHECKDB without a repair clause to determine the extent of the corruption. DBCC CHECKDB will recommend a repair clause to use. Then, execute DBCC CHECKDB with the appropriate repair clause to repair the corruption.

CAUTION: If you are unsure what effect DBCC CHECKDB with a repair clause has on your data, contact your primary support provider before executing this statement.

If running DBCC CHECKDB with one of the repair clauses does not correct the problem, contact your primary support provider.

Related:

%1: %2 failure on backup device ‘%3’. Operating system error %4.

Details
Product: SQL Server
Event ID: 18210
Source: MSSQLServer
Version: 8.0
Component: SQL Engine
Message: %1: %2 failure on backup device ‘%3’. Operating system error %4.
   
Explanation
This message indicates that an I/O error was reported by the operating system after a file handle was successfully opened. The error occurs when reading from or writing to a device specified as part of a BACKUP or RESTORE command, often when there is not enough disk space available for a write operation. This error may also be seen if third-party software that uses a virtual device to perform SQL Server backups cancels the operation.
   
User Action
The steps to take will depend upon the operating system error received.

  • Verify that the specified path has sufficient disk space for the file.
  • Test to see if the problem is isolated to this particular server, path, or file.
  • Run hardware diagnostics to verify that the media specified in the path is healthy.
  • If the operating system error only returns a number and not any text, you can open a command prompt and execute NET HELPMSG with the operating system error number as the parameter. In many cases, this will return text that can help you to isolate the problem.
  • If you received this error while using third-party backup software, check that application’s logs to see if it canceled the backup operation, and if so, why.
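On Windows, `NET HELPMSG 995` prints the message text for operating system error 995. The same lookup can be sketched in Python with a small table covering the codes relevant to this message; the table below is illustrative and not exhaustive (the text for 995 is quoted in this document; 3 and 112 are the standard path-not-found and disk-full codes):

```python
# Illustrative subset of Windows operating-system error codes commonly
# seen with backup failures; not an exhaustive table.
OS_ERRORS = {
    3:   "The system cannot find the path specified.",
    112: "There is not enough space on the disk.",
    995: "The I/O operation has been aborted because of either a thread "
         "exit or an application request.",
}

def helpmsg(code):
    """Rough stand-in for NET HELPMSG, limited to the codes listed above."""
    return OS_ERRORS.get(
        code, f"Unknown code {code}; run NET HELPMSG {code} on Windows.")

print(helpmsg(995))
```

Error 995, for example, points at the third-party backup software having canceled the operation, as described above.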

Related:

Three-Dimensional Data Protection: Access, Visibility, and Control

New series focuses on Information Protection from Symantec

Knowledge is power. Whether it’s your proprietary data, customer insights, or strategic plans, data is valuable and needs protection. The problem is large. In 2015, half a billion personal records were stolen or lost, according to the Symantec 2016 Internet Security Threat Report Vol. 21 (ISTR). 

What’s behind this risk? Our research shows both internal and external threats. Criminals have found that they can obtain your data by breaking into your systems or by targeting your staff who might be softer targets. If your staff use simple or default passwords, over-share data, or don’t follow security measures (such as removing redundant files from cloud services), they put your data at risk. And malicious insiders, such as disgruntled employees, may try to steal sensitive corporate data to further their career or to sabotage your company.

Data Protection is not just about data loss prevention; it is also about protection and access control. The key questions are: how do we allow open access to everyone while still ensuring that sensitive data is properly controlled? And how do we do this correctly?

Symantec Information Protection

The objective is not to contain data, but instead to put the right visibility, controls, and policies in place to ensure that data is useful and not over-exposed. There is also the people element: encouraging the right behavior is better for both employee trust and security. Consider a member of your team who attaches a document to an email. If they accidentally attach the wrong file in their haste, it can lead to embarrassment at best or a PR disaster at worst. Ideally, you would want to intercept this email before it leaves the organization, but if this isn’t carefully managed you can block emails that you didn’t mean to. A better approach is to empower your staff. A well-timed alert could inform your staff member that the attachment contains sensitive data and give them an opportunity to correct the mistake. This approach allows your staff to make the right decision in what might be complex circumstances, which both plays to their strengths and reinforces a strong security culture.

Symantec Information Protection helps you identify critical data across all your files and emails using automated discovery and context-based classification. With Symantec, risk is reduced by ensuring you limit access to the right people. You limit the risk of data getting into the wrong hands by managing how it’s stored and the protection that surrounds it. You can easily apply policies to control access and usage―in the cloud, on mobile devices, or on the network—and protect and control data by establishing policies that apply across your entire network via a single point.

Symantec VIP, VIP Access Manager, and Data Loss Prevention all work together to create an information protection platform. Symantec Information Protection covers three areas: Access, Visibility, and Control.

“Where are my data risks?”

To protect data, you first need to find it, classify it, and then ensure that it’s properly managed. The challenge here is identifying the highest risks to your data. With data volumes exploding (a five-fold increase in data is predicted between 2015 and 2020), and data formats becoming less structured (photographs of forms or whiteboards), the challenges will only grow. 

Symantec Information Protection helps you discover where your sensitive data is stored across your infrastructure. You’ll be able to monitor and protect sensitive data on mobile devices, on-premises, and in the cloud. And it’s all done through a unified policy framework to define data loss policies and to help you review and remediate incidents.

“Who is accessing my data?”

Passwords are the de facto standard, but bitter experience teaches us that too many users are inundated with them, resulting in weak passwords, passwords being reused, or even passwords written down when they are too hard to remember. A recent study entitled Cyber Security Flaws in Working Practices discovered that 21 percent of workers write down their passwords. In another study, sixty-three percent of confirmed data breaches involved weak, default, or stolen passwords, according to the Verizon 2016 Data Breach Investigations Report. You need to strike the right balance: making it easy for the end user to access systems while ensuring security without relying on written-down notes.

Poor password hygiene makes accounts vulnerable to takeover attacks. These attacks can be eliminated with single sign-on and multi-factor authentication technologies, such as Symantec VIP and VIP Access Manager. The Symantec Managed PKI service also provides simple-to-manage device certificates, enabling secure access from any device, anywhere, to any apps your users need. Symantec increases security because VIP password-less fingerprint authentication makes accessing all approved applications simple, without the user needing to remember multiple passwords for multiple applications. This enables your organization to determine which applications show up as an option for the user based on their role.

With Symantec VIP, VIP Access Manager, and Managed PKI Service, we offer single sign on with rock-solid authentication to protect all your cloud and on-premises apps.

“How do I better protect my data?”

Data breaches have almost become a weekly, if not daily, occurrence. According to the ISTR, the number of publicly disclosed data breaches has risen steadily over the last several years, reaching 318 in 2015. Breaches caused by stolen or lost laptops and USB thumb drives are also real threats organizations face: this type of data breach makes up 45 percent of healthcare industry data breaches, according to the Verizon 2015 Data Breach Investigation Report. And the cost? The Ponemon Institute found that the average consolidated total cost of a data breach grew from $3.8 million to $4 million last year, but of course this is highly variable, with costs escalating significantly depending on the scope, scale, and nature of the breach.

Fortunately, you can take some measures to help protect your organization from data breaches. Symantec offers four broad ways to help.

  • Symantec Endpoint Encryption helps prevent breaches by protecting critical data sent by email, as well as with files shared on network drives and in the cloud.
  • Second, Symantec’s unified policy controls the flow of information everywhere it goes—in the cloud (with Office 365, Box, Gmail and others), on premise, and with mobile applications. We deliver powerful protection without added complexity.
  • Third, Symantec Data Loss Prevention (DLP) integrates with encryption to prevent accidental leaks through user error and secures devices against data loss or theft.  
  • The fourth area is that Symantec ensures you limit access to only trusted users and devices. Symantec VIP, VIP Access Manager, and Managed PKI Service offer rock-solid access control, reducing the risk and consequences of account takeovers.

In upcoming posts of this series, we’ll take a closer look at specific features of Information Protection. 

Related:

Symantec Named a Leader in Data Loss Prevention

We’re excited to announce that Symantec has been recognized as a Leader in The Forrester Wave™: Data Loss Prevention Suites, Q4 2016, and was top-ranked across all three high-level categories: Current Offering, Strategy, and Market Presence.

What’s most striking to us about this year’s report is how dramatically the DLP vendor landscape has changed in the past six years. We believe that Symantec has been able to stay ahead of the pack thanks to the world’s largest team of R&D experts working on the next generation of DLP technology.

Continued innovation in data loss prevention

“Symantec continues to innovate in this space and has strong brand recognition in the DLP market.” — The Forrester Wave™: Data Loss Prevention Suites, Q4 2016

We believe our scores in the report recognize Symantec’s ability to deliver the best DLP solution for security and risk (S&R) pros today. During the past six months alone, we’ve released a number of major innovations designed to eliminate security blind spots, including:

  • Integration of Symantec DLP and CASB to give you complete visibility and control of sensitive data in cloud apps
  • Advanced cloud discovery and monitoring for Box, Gmail and Microsoft Office 365 in DLP 14.5
  • Expanded endpoint control for a wide range of new apps, file types and operating systems in DLP 14.5

The most complete data loss prevention suite                                 

“Symantec provides a comprehensive DLP suite with robust capabilities for intellectual property protection, information management, incident management, and encryption support.” — The Forrester Wave™: Data Loss Prevention Suites, Q4 2016

Symantec Data Loss Prevention earned the highest scores possible across twenty-three criteria, including the three key differentiating criteria that go beyond traditional DLP:

     5 out of 5 in Intellectual Property Protection
     5 out of 5 in Information Management
     5 out of 5 in Endpoint Visibility and Control

Read the full report

To learn more about the changing DLP market and how Symantec scored across all categories, read The Forrester Wave™: Data Loss Prevention Suites, Q4 2016, here.

Related:

Hitachi TrueCopy mirroring with IBM PowerHA SystemMirror

This article explains the steps to configure Hitachi Storage system mirroring and IBM® PowerHA® SystemMirror® to monitor and manage Hitachi disks. In case of a disaster scenario such as fire, earthquake, and so on, the configured PowerHA SystemMirror resource can be moved from the disaster location to some other backup location thereby minimizing the loss of data and business application’s downtime.

Related:

Introducing Data Loss Prevention 14.5

New Capabilities Eliminate Security Blind Spots

As our jobs demand more collaboration with customers, partners and suppliers outside of our organizations, security teams are working harder than ever to secure an exponentially growing number of interconnected devices and applications, and keep sensitive data from slipping through the cracks. 

The DLP team at Symantec is passionate about helping companies protect their most valuable and sensitive information from falling into the wrong hands. The new version of Data Loss Prevention 14.5 adds over twenty new data discovery, monitoring and protection capabilities to eliminate blind spots and give security teams better visibility and control over sensitive data. Read on to learn more!

Minimize Your Cloud Security Risks                                                       

During the past year, we’ve introduced new cloud discovery and monitoring capabilities for Box, Gmail for Work and Microsoft Office 365 Exchange Online. DLP 14.5 expands those capabilities so you can store and share data in the cloud even more securely.

With DLP Cloud Storage, you can track sensitive documents that users are storing and sharing on Box, and identify risky practices such as using shared links that could give open access to unauthorized users. When users violate a policy, you can automatically move exposed files and folders to a protected quarantine folder on Box, leaving behind a marker file in its place to notify users, by leveraging the new File Quarantine feature of DLP Cloud Storage. Not only can you secure unprotected files, but you can also visually tag files to alert users to self-remediate sensitive files and folders.

Along with the release of DLP 14.5, we’ve rolled out an update to our DLP Cloud Service for Email. The DLP Cloud Service for Email is a cloud-powered data detection service that provides powerful email monitoring for Gmail for Work, Microsoft Exchange Online, and now your on-premises Microsoft Exchange Server, and can be easily plugged into your existing DLP Enforce Management Server. It protects your corporate email regardless of whether it’s hosted in a conventional on-premises email application, a public or private cloud email service, or a hybrid mix of on-premises and cloud environments.

Spot PII in Imaged Form Documents

Tax returns, insurance claims, and patient forms are rife with personally identifiable information (PII) that goes undiscovered because forms are often stored as imaged documents that aren’t easily recognized by security tools. With DLP Form Recognition, you can spot sensitive data in images of handwritten and typed forms. Form Recognition is a new type of content detection technology that leverages intelligent image processing to catch and stop confidential data that would otherwise go undetected in scanned or photographed forms.

Control Data in Use Across More Apps, Files and Platforms  

Employees have limited knowledge of the cybersecurity risks they face both inside and outside the enterprise firewall. With the DLP Endpoint Agent, you can keep them safe wherever they work by monitoring and protecting data in use across a wide range of activities such as downloading to removable storage; copying and pasting within documents; and sending over the web. In this release, we’ve added endpoint coverage for new apps, file types and operating systems routinely used by employees to store and share sensitive data:

  • Mac OS 10.11
  • Microsoft Office 2016 file types
  • Microsoft Outlook 2011 email client
  • Box for Office and Box Sync applications
  • Chrome, Firefox and Safari browsers (via HTTP and HTTPS)
  • Cisco Jabber and Skype for Business instant messaging clients
  • Skype instant messaging client                                                              

Guard Dangerous SSL Blind Spots 

With more and more applications encrypting their traffic to protect users from prying eyes, you lose visibility into sensitive content that company insiders are unwittingly leaking or knowingly concealing under the cover of encrypted protocols like SSL. In DLP 14.5, we’ve added new SSL monitoring capabilities for web, email, FTP and IM communications by leveraging integrations between DLP Network Monitor and these leading SSL decryption products: Blue Coat SSL Visibility and Palo Alto Networks Next Generation Firewalls.

Learn More

To learn more about what’s new in the latest version of Data Loss Prevention 14.5, visit go.symantec.com/dlp.

Related:

Extending the Security of Office 365: Symantec Data Loss Prevention

How Symantec can bring higher levels of protection, control and visibility

While your organization has turned to Office 365 for productivity with the cloud, is your data safe and secure? According to an IDC white paper sponsored by Symantec, organizations should focus their efforts on the main areas of authentication and access control, data loss prevention, email security, and advanced threat protection to improve upon Office 365’s integrated security features.

In previous posts of this series, we examined how Symantec Office 365 Protection helps fill in the security gaps that Office 365 misses; and in particular, email and advanced threat protection. In this installment, we’ll have a close look at how Symantec Data Loss Prevention (DLP) can provide an extra layer of security for organizations using Office 365.

The need for Data Loss Prevention

While Microsoft Office 365 has some basic built-in security, enterprises should consider augmenting and extending that security.

Does your organization have a solid data loss prevention solution?

Organizations using the cloud need data loss prevention technology to locate, monitor, and protect their data, so that they know who is doing what, with what data, in real time. Data loss prevention can block certain types of sensitive data from leaving an organization.

While Office 365 has built-in data loss prevention and encryption capabilities, it doesn’t meet enterprises’ advanced compliance requirements or complex intellectual property use cases.

Challenges faced with Office 365’s basic built-in data loss prevention:

  • Limited content detection methods (simple regex, some document fingerprinting, and basic watermarking) can lead to a high number of false positives
  • False positives can increase the burden on IT
  • Incident remediation and workflow options are limited to basic notification and blocking

Overall, these obstacles make it difficult for enterprises to respond effectively to data loss incidents.

How Symantec Data Loss Prevention Extends Office 365

Symantec for Office 365 is designed as a comprehensive security solution that seamlessly integrates with Office 365 for greater protection of your valuable information while detecting and remediating increasingly sophisticated threats.

Symantec delivers enterprise-strength data protection.

Symantec DLP Cloud Service for Email is a new cloud-based service built on Symantec’s market-leading data loss prevention technology. It offers the broadest content detection capabilities, including described content matching (keywords, expressions), data fingerprinting (structured data and unstructured documents), and machine learning (for content such as source code and forms). These advanced detection technologies are coupled with support for over 360 different file types. It offers sophisticated policy management, reporting, and incident remediation workflows.

Use a single unified set of DLP controls for all cloud and on-premises environments.

Unlike Microsoft’s multiple management interfaces and disjointed controls, Symantec’s solution provides robust and unified security controls for heterogeneous environments and hybrid deployment models. This allows you to extend your security infrastructure and policies to Exchange Online and a range of non-Microsoft application and mobile device environments. The Symantec Enforce management platform provides a unified, easy-to-use management console across all DLP channels, including Office 365 Exchange, other cloud apps, and on-premises deployments.

Take advantage of seamless policy-based encryption.

Symantec uses a policy-based approach to encrypt emails based on message attributes or message content in a manner that is totally transparent to the sender. Unlike Office 365, Symantec’s encryption solution does not require encrypted message recipients to register or use a Microsoft account, or use one-time passcodes to access encrypted messages. Symantec Policy Based Encryption also works with all types of mobile devices and does not require apps like the Office Message Encryption Viewer to access encrypted messages.

A 2016 Gartner Magic Quadrant Leader for Data Loss Prevention

Independent research organization Gartner recently named Symantec a Leader in the 2016 Gartner Magic Quadrant for Data Loss Prevention, so you can be confident that you are partnering with a leader in DLP technology.

Symantec helps you transition to the cloud with confidence

Microsoft Office 365 is an excellent platform for enhancing productivity, and while it does include some security measures, you should enhance and extend them with Symantec. To fight advanced threats, you need advanced protection. Symantec Office 365 Protection helps fill the security gaps that Office 365 misses, enhancing the security of Office 365 and, most of all, creating defenses to help protect your organization and your sensitive data.

Looking for more insights?

Visit Symantec Office 365 Protection

Related:

ISTR Insights: Sizing up Data Breaches

A detailed look at data breaches, how attacks happen, and what’s at stake for your organization

Data breaches have almost become a daily occurrence. It may not seem like it on the surface, but according to the 2016 Internet Security Threat Report (ISTR), the number of publicly disclosed data breaches has risen steadily over the last several years to reach 318 in 2015. That’s almost one data breach per day.

However, it often seems that data breaches only make the news when the number of impacted individuals reaches into the millions, or even the tens of millions—what we’ve come to call “mega breaches.” These breaches have a far-reaching impact on the businesses that suffer them. A large company can watch its stock value drop at the same time consumer trust erodes away. And mega breaches were up in 2015, with nine reported during the year.

Yet for all the attention-grabbing headlines, mega breaches are still relatively rare in the greater scheme of things. These types of breaches made up only around three percent of those reported in 2015. The fact is that most data breaches look quite different. So what do these data breaches look like?

Let’s start with a general overview of all data breaches this year. The average number of identities stolen per breach was 1.3 million, but averages tend to get skewed by large numbers, which is exactly what mega breaches are in this case. In contrast, the median, or the mid-point when all the breaches are lined up, has been trending downwards: from 8,350 identities per breach in 2012 to 4,885 in 2015. The median has almost halved in four years, which indicates there are far more small breaches than large ones.
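The skew that mega breaches introduce is easy to see in miniature. The breach sizes below are invented for illustration (they are not ISTR figures), but they show how a single outsized breach drags the mean far above the median:

```python
import statistics

# Hypothetical identities-exposed counts for ten breaches:
# nine small breaches plus one "mega breach". Illustrative only.
breaches = [3_200, 4_100, 4_885, 5_000, 6_500,
            7_800, 9_000, 12_000, 25_000, 78_000_000]

mean = statistics.mean(breaches)      # pulled far upward by the one outlier
median = statistics.median(breaches)  # mid-point, largely unaffected by it

print(f"mean:   {mean:,.0f}")    # millions, dominated by the mega breach
print(f"median: {median:,.0f}")  # a few thousand, like the typical breach
```

Nine of the ten breaches here are well under 25,000 identities, yet the mean is in the millions, which is why the median is the more honest summary of the “typical” breach.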

There’s no question mega breaches have a significant impact on the overall number of identities exposed, and this year’s total was 429 million. However, with an 85 percent increase in the number of breaches not reporting the number of identities exposed, we believe the true figure to be much higher. At the very least, we estimate that half a billion identities were exposed in 2015.

It’s worth noting that this is a conservative estimate; in fact, other organizations have reported much higher numbers for 2015 than Symantec has. However, we hold our count to a fairly strict methodology. For example, if a breach was reported this year but took place during the previous year, we don’t add it to this year’s total. We also only count breaches that have been publicly reported, either by a press release from the breached organization or by a reliable news source. We don’t count records found exclusively on data dump sites or hacker “stolen identity collections” unless the source of the data is clear (these are often duplicates or old caches). That’s not to say some of these incidents aren’t legitimate breaches; we simply aim for accuracy over inclusion. Thus, while we estimate that there were at least half a billion identities exposed in 2015, it’s possible that this number is even higher, based on underreporting in the public sphere.

To get a better understanding of the size of most data breaches, let’s look at what statisticians call a boxplot. This will allow us to discard “outliers,” or unusual cases, in the data and give us further insight into what most data breaches look like, as opposed to all data breaches. (A deep understanding of boxplots isn’t necessary for this discussion.)

[Figure boxplot.jpg: boxplot of identities exposed per data breach]

It turns out that most data breaches contain under 60,000 identities, with three-quarters having fewer than 25,000 identities. Any data breach over 60,000 is actually an outlier—an irregular occurrence that falls outside the norm.
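The boxplot’s outlier rule can be sketched in a few lines: anything above the third quartile plus 1.5 times the interquartile range is flagged as an outlier. The breach sizes below are invented for illustration, not ISTR data:

```python
import statistics

# Hypothetical breach sizes (identities exposed); illustrative only.
sizes = [1_200, 2_500, 4_885, 8_000, 15_000, 24_000, 40_000, 150_000]

q1, _, q3 = statistics.quantiles(sizes, n=4)  # quartile boundaries
iqr = q3 - q1                                  # interquartile range
upper_fence = q3 + 1.5 * iqr                   # standard boxplot cutoff

outliers = [s for s in sizes if s > upper_fence]
print(f"upper fence: {upper_fence:,.0f}")
print(f"outliers: {outliers}")
```

In this made-up sample only the largest breach clears the fence, mirroring the report’s finding that the handful of mega breaches sit far outside the range occupied by the typical breach.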

In terms of the data being exposed, looking at these more common data breaches also paints a slightly different picture. Aside from some shuffling in the order, the types of data stolen are largely the same. The most obvious difference is that medical and insurance information both jump up in the rankings, indicating these breaches are more likely to contain these highly sensitive pieces of personal information.

[Figure information.jpg: types of information exposed in data breaches]

What’s interesting is the overall percentages we see in the following table. It’s concerning that the percentages rise for every entry in our top-ten list. What this means is that these breaches are more likely to contain a larger variety of data about the individuals exposed.

When looking at how these breaches take place, the order of causes changes when comparing all data breaches to the most common. Overall, attackers were responsible for the largest percentage of identities exposed. This remains true for the most common breaches; however, their overall share declines. Theft or loss also climbs to second place, significantly reducing the share of breaches that were the result of accidental disclosure. Insider theft increases as well when looking at most data breaches, in comparison to all data breaches.

[Figure cause.jpg: causes of data breaches]

So why do most data breaches appear so much smaller when compared to mega breaches? It could be that most attackers are going after “soft targets”: smaller organizations that may not have a lot of data, but also may not have strong defenses in place to protect against a data breach. The attackers get in and steal the data, but the cache is about the size you would expect from a small- to medium-sized business. The data set is also richer, with more diverse types of data points.

As for the reasons most data breaches occur, the answers tend to lead to speculation, given the nature of the topic. Naturally those behind such attacks work diligently to mask their identities, which makes painting such a picture challenging. However, there have been rare cases where the motivation has come to light. These cases point to data breach goals rooted in identity theft, blackmail, cyberespionage, and even cyberactivism.

Ultimately a data breach is the end result of a larger security issue. Attackers can get in through a variety of ways, from misconfigured or unpatched servers to socially engineered phishing attacks that include malicious payloads. To avoid becoming the victim of a data breach, businesses should carry out regular security audits and employ defense-in-depth strategies that can detect and prevent intrusion attempts. Employing encryption can prevent attackers from siphoning off sensitive information that is in transit, while data loss prevention (DLP) solutions can prevent the exfiltration of data if an attacker manages to make it into the internal network.

Regardless, every data breach is a serious incident. You can liken a mega breach to a plane crash, with the loss of identities being widespread and at times shocking. Meanwhile, most data breaches are more akin to car crashes: far, far more frequent, and events that also lead to significant losses of identities.

These are just a few of the data breach subjects covered in the Symantec 2016 Internet Security Threat Report. Interested in what industries are at risk or what’s at play in the growing cyber insurance market?

Download the full 2016 Internet Security Threat Report

Related: