Trend Micro Predicts Escalating Cloud and Supply Chain Risk

Cyber risk increases at all layers of the corporate network as we enter a new decade

DALLAS–(BUSINESS WIRE)–Trend Micro Incorporated (TYO: 4704; TSE: 4704), a global leader in cybersecurity solutions, today announced its 2020 predictions report, which states that organizations will face growing risk from their cloud environments and supply chains. The growing popularity of cloud and DevOps environments will continue to drive business agility while exposing organizations, from enterprises to manufacturers, to third-party risk.

Contact:

Erin Johnson
817-522-7911
media_relations@trendmicro.com


Key Development Skills for IT Pros

This post was originally published February 19, 2019 on AbleBlue.

In my former life as a consultant, I was constantly aware of the impact that the pace of cloud technology evolution has on IT professionals. Recently, while moderating a panel at Office and SharePoint Live! 360 in Orlando, I asked the question: “IT Pro vs. Developers, is it time to bury the hatchet?” The panel consisted of four great friends of mine: Eric Shupps and Ben Curry representing the IT pro side of the house, and Rob Windsor and Paul Schaeflein representing the developers.

As I watched, listened, cajoled, and questioned, the general consensus from the developers was that IT still struggles to understand “modern development and development tools.” While IT insists that developers follow standards of practice, the IT pros themselves still use haphazard and sometimes risky procedures, and do not hold their own teams to the higher standards development uses.

The consensus from the attendees was that they supported that contention. We heard story after story from the IT pros in the audience: they are not given the right tools to get better, their organizations are not investing in them and their future, and they are too busy fighting fires to take the time to learn something new.

While I do understand the reality that we are all busy, as a trainer in my previous life I recognize that organizations have to train their people and allow them the time to “level up” their skills. I think it is necessary for the employees’ well-being, their organizations’ longevity, and ultimately the supportability of the systems they manage.

Microsoft makes it easier than ever for IT pros to get up and running with the tooling they need to dip their toes in development waters, while making their jobs easier and providing a better process for the companies they support. IT pros can start by switching from the PowerShell ISE to VSCode with the PowerShell extension. If you are “just writing scripts,” step up and try writing PowerShell modules instead.

Here are four ideas to get you started.

Repeatability

I am always thinking about how repetitive a task is. If I have to do something more than twice, I’ll try to figure out how to be more efficient. For example, if you have ever written the same script over and over, start thinking about creating PowerShell modules for the functions that you use repeatedly. These could be anything from routine maintenance tasks to heroic data restoration tasks. By thinking about the tasks that would have to be executed in an emergency before the emergency happens, your stress level will be much lower when the day comes that you need the module.
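
To make that concrete, here is a minimal sketch of what such a module might look like. The module and function names are hypothetical, invented for illustration:

```powershell
# MaintenanceTools.psm1 -- a hypothetical module collecting routine tasks.
# Place it in a folder named MaintenanceTools under a path listed in
# $env:PSModulePath so Import-Module can find it by name.

function Clear-OldLogFiles {
    <#
    .SYNOPSIS
        Removes log files older than a given number of days.
    .EXAMPLE
        Clear-OldLogFiles -Path 'D:\Logs' -DaysToKeep 30 -WhatIf
    #>
    [CmdletBinding(SupportsShouldProcess)]
    param(
        [Parameter(Mandatory)][string]$Path,
        [int]$DaysToKeep = 30
    )

    $cutoff = (Get-Date).AddDays(-$DaysToKeep)
    Get-ChildItem -Path $Path -Filter '*.log' -File |
        Where-Object LastWriteTime -lt $cutoff |
        Remove-Item    # honors -WhatIf passed to the function
}

Export-ModuleMember -Function Clear-OldLogFiles
```

Once the module is on your module path, Import-Module MaintenanceTools makes the function available in any session, and the -WhatIf switch lets you rehearse the emergency run safely before the emergency.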

Another way to save time is using snippets when writing your code. There is a fantastic community project for Visual Studio Code (VSCode) called Awesome Snippets that will get you started. Some of my customers have created their own snippets that they share as a team.

Further Reading

VSCode PowerShell Extension – Write, Debug, and Test PowerShell in VSCode.
Awesome Snippets – PowerShell Code Snippets

Testability

On a recent project, I was concerned about the number of folks that were going to be messing with the module I was writing. I noticed a few individuals were not providing documentation, examples, or full sentences in the help text. Now this may not seem important, but if they are failing to properly document the code, how much attention are they paying to the code itself?

Back in November I had the pleasure of hearing Thomas Vochten, MVP, speak at SharePoint and Azure Connect in Haarlem, Netherlands. He demonstrated an approach to testing PowerShell using a module called Pester. This module ships with Windows 10, and updated versions are available from the community. There are two reasons I really like this approach.

  1. The process of creating tests helps you ensure your code works as you intended.
  2. The process of creating tests and monitoring code coverage can help ensure that you test the alternate paths of your code.

There is a “tax” associated with creating tests, however: it takes time that does not directly contribute to the delivery of your PowerShell code. I admit, I am not “writing the test first and then writing the code” in true TDD fashion. In fact, the first module I used Pester to test was a module I thought was nearly complete. That is where the PesterHelpers module comes in: it can bootstrap a set of tests by reading your existing module and generating them for you.
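
For a flavor of what this looks like, here is a minimal Pester sketch against the hypothetical module from earlier. It uses the dash syntax of Pester 4 and later, so grab the community update first; the file layout and scenario are invented:

```powershell
# MaintenanceTools.Tests.ps1 -- hypothetical tests; run them with Invoke-Pester.
Import-Module "$PSScriptRoot\MaintenanceTools.psm1" -Force

Describe 'Clear-OldLogFiles' {
    BeforeAll {
        # Arrange: a scratch folder (Pester's $TestDrive) holding one stale
        # log file and one fresh one, then run the function under test.
        $dir = Join-Path $TestDrive 'logs'
        New-Item -ItemType Directory -Path $dir | Out-Null
        $old = New-Item -ItemType File -Path (Join-Path $dir 'old.log')
        $old.LastWriteTime = (Get-Date).AddDays(-90)
        New-Item -ItemType File -Path (Join-Path $dir 'new.log') | Out-Null

        Clear-OldLogFiles -Path $dir -DaysToKeep 30
    }

    It 'removes files older than the cutoff' {
        Test-Path (Join-Path $dir 'old.log') | Should -Be $false
    }

    It 'keeps files newer than the cutoff' {
        Test-Path (Join-Path $dir 'new.log') | Should -Be $true
    }
}
```

Pointing Invoke-Pester’s -CodeCoverage parameter at the .psm1 file then gives you the coverage view that surfaces untested alternate paths.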

Further Reading

Pester – Test and Mock Framework for PowerShell
Pester Helpers – Helper functions for Pester
MVA: Testing PowerShell with Pester – Microsoft Virtual Academy Course on Pester

Version Control

I use version control for nearly everything I write. My documents are in OneDrive, which is not a version control system per se, but it is backed up to the cloud on every save. My blog posts are written in markdown and version controlled in Azure DevOps (formerly VisualStudio.com). All of my PowerShell modules and scripts are in separate projects by customer and/or project and stored in Azure DevOps. Whether you use Azure DevOps, GitHub.com, or any other system you are familiar with, most of these systems use a flavor of Git for version control. I think that every technical person should have a grasp of the basics of Git. You need to be able to fork, checkout, branch, add, commit, and merge with confidence.
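
In practice, that core loop looks something like this from a PowerShell prompt. The repository URL, branch, and file names are placeholders:

```powershell
# Fork on the web UI first if it is someone else's repo, then clone your copy.
git clone https://dev.azure.com/yourorg/yourproject/_git/PowerShellModules
cd PowerShellModules

# Branch so your work stays isolated from master.
git checkout -b fix/log-cleanup

# Stage and commit the change with a message that explains why.
git add MaintenanceTools.psm1
git commit -m "Handle empty log folders without throwing"

# Publish the branch, then merge it back (locally or via a pull request).
git push -u origin fix/log-cleanup
git checkout master
git merge fix/log-cleanup
```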

I am pretty comfortable with this process because I use it every day. Do I NEED to? No. Could I use OneDrive for my module files? Sure. But by forcing myself to implement these practices, I can talk to a developer about Git and the issues they face with a shared understanding. I can transition from working on my Surface on the couch with my dog Ruby, to my home office, to the Mac at my downtown Austin office, with ease and confidence. It makes me comfortable knowing I have all the versions of my edits stored safely somewhere other than one hard drive. Further, as soon as you add a team, it becomes significantly easier to share your work.

Further Reading

Azure DevOps – Project Management, Version Control, Build Management, and Continuous Integration

DevOps

I like to think of DevOps as thinking end to end, or holistically, about the lifecycle of the work. From defining the problem, to writing the first line of code, to delivery, to applying updates, “lifecycle” DevOps thinking will help your work be portable. For example, that cool module you built may someday need to be updated: are you the only person who has the code? Can your team update it confidently, or update it without you? Can they change the code without erasing your farm, locking you out of the tenant, or causing other issues? Do they have to call you on your vacation…just in case?

DevOps is about supporting the enterprise and managing the lifecycle of your code, whether you are a developer, an IT pro, or a combination of the two. By creating testable code, you can configure your version control system to reject check-ins that do not pass the tests. You can configure the build server to automatically deploy your runbooks when they do pass, so that they enter production as efficiently as possible.
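
As a rough sketch of that gate, the build step can be as small as a script that runs the test suite and exits nonzero on failure; most build servers treat a nonzero exit code as a failed run. The paths and names here are hypothetical:

```powershell
# build.ps1 -- run the Pester suite and block the pipeline if anything fails.
$result = Invoke-Pester -Path .\tests -PassThru

if ($result.FailedCount -gt 0) {
    Write-Error "$($result.FailedCount) test(s) failed; rejecting this check-in."
    exit 1    # nonzero exit code: the build server marks the run as failed
}

Write-Host 'All tests passed; the runbooks are clear to deploy.'
exit 0
```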

In other words, I like to enjoy my vacations, and I assume you do, too.

What items are on your shortlist for “The Next Thing” you want to learn to level up for your role?

Infrastructure as Code (IaC): The Next Generation of IT Automation

In a recent analysis of Dell EMC and VMware IT Transformation Workshops, CIOs continue to prioritize initiatives that help accelerate the delivery of software and applications for the business. Based on anonymous customer data, the top emerging priorities for CIOs were continuous deployment (89%) and DevOps (87%).

For many of our Dell EMC customers, efforts to accelerate software delivery velocity and drive cloud application transformation have initially and understandably focused on developers. ‘Top down’ DevOps initiatives have focused on creating continuous delivery (CD) pipelines that eliminate the manual processes, hand-offs and bottlenecks associated with the software delivery lifecycle (SDLC) and underlying value stream. Particular focus has been placed on streamlining and automating source code and build management, integration and testing, as well as overall workflow.

As the DevOps name indicates, infrastructure and IT operations are also a critical, integral part of the story. Automating the provisioning of development, test, and production environments and related infrastructure is critical to increasing overall software release velocity. Just as with application source code, infrastructure configurations can be treated as pure code, which provides the same benefits it does for applications: version control, automated testing, and continuous monitoring. Treating configuration as code and handling changes through CD pipelines helps prevent ‘snowflake’ infrastructure deployments that cannot be reproduced, and ensures that configuration errors never make it to production.
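
The post doesn’t name a tool for this, so purely as an illustration, here is what a version-controlled desired-state definition can look like in PowerShell Desired State Configuration; Terraform, Ansible, and similar tools express the same idea, and the node name and paths below are invented:

```powershell
# WebServer.ps1 -- a hypothetical desired-state definition, kept in version
# control and promoted through the same CD pipeline as application code.
Configuration WebServer {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'web01' {
        # Declare the end state, not the steps to get there.
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        File SiteContent {
            DestinationPath = 'C:\inetpub\wwwroot'
            SourcePath      = '\\build\drops\website'
            Recurse         = $true
            Ensure          = 'Present'
        }
    }
}

# Compiling emits a MOF document that the Local Configuration Manager
# enforces; in ApplyAndAutoCorrect mode the node is pulled back to this
# known-good state whenever it drifts.
WebServer -OutputPath .\output
```

Because the definition is a text file, a bad change shows up as a diff in code review rather than as a mystery in production.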

But while the value of automating provisioning with Infrastructure as Code (IaC) and pipelines is clear, many organizations to date have relied primarily on standalone automation tools and one-off scripting. While this approach is certainly an improvement over manual workflows and processes, IaC provides far more than traditional automation practices. It automates full-stack deployment of infrastructure and apps; it offers source-controlled infrastructure and packages; it applies software development practices to infrastructure build and operate procedures; and it lets infrastructure monitor its own configuration and self-heal to a known-good state or version.

Cloud Native IT Operating Model

To provide the AWS-like experience that developers often seek, IT organizations are finding that IaC is required for private cloud and internal CaaS, PaaS and IaaS services.  Organizations are either launching IaC initiatives that extend and leverage DevOps efforts, or in some cases are even launching pure ‘bottom-up’ IaC initiatives focused on leveraging CD pipelines to define and manage the creation, configuration, and update of infrastructure resources and services. IaC is critical to enabling IT to operate like a public cloud provider, and provide the speed, flexibility and resiliency needed to support Digital Transformation.

One of our recent Dell EMC Consulting customers in the technology services sector wanted to provide their developers a common experience across their multi-cloud environment and deliver “public-cloud responsiveness” using an on-premises converged infrastructure solution. The key desired outcomes from their DevOps / IaC initiative were to minimize inconsistency when building infrastructure components and to improve the efficiency of deploying both cloud and on-premises infrastructure. As with most DevOps / IaC transformation programs, driving culture and behavior change was a key priority. The customer was seeking to cultivate internal knowledge and practical experience with Infrastructure-as-Code and DevOps concepts and tools, and to transform disparate client teams into one that follows Infrastructure-as-Code and DevOps behaviors.

Our Dell EMC Consulting team worked with the customer to use Infrastructure-as-Code and DevOps methodologies to architect and automate the deployment of a high-performance converged infrastructure platform, and to develop a customer fulfillment pipeline for provisioning both cloud and on-premises infrastructure resources, including compute, storage, and networking. Our team also provided coaching and mentoring that enabled the customer to stand up a pipeline-driven cloud platform for IaaS (and eventually PaaS and CaaS).

As a result of their DevOps / IaC engagement with Dell EMC Consulting, the customer was able to:

  • Accelerate the fulfillment of infrastructure to platform teams, regardless of public cloud or on-premises requirements, delivering IaaS using Infrastructure-as-Code and a CD toolchain at the end of sprints.
  • Put a resilient on-premises cloud platform in place for VM and container services.
  • Enable an optimized, automated flow, cutting provisioning time for developers.
  • Transform disparate internal teams into one, built on an Infrastructure-as-Code and DevOps foundation and a pipeline-first discipline.

Critical to the success of this customer, and many of our others, is recognizing the central role that CD pipelines and treating infrastructure configuration as code can play in infrastructure automation.

Summary

We’d love to hear about the challenges you face on your DevOps / IaC transformation journey; see more information on our Dell EMC DevOps and IaC Consulting services.

The post Infrastructure as Code (IaC): The Next Generation of IT Automation appeared first on InFocus Blog | Dell EMC Services.


In 2019, Put a Platform-as-a-Product Strategy in Place

I was going to ritualistically complain about poorly designed software, but lately most of the software I use has been working well. For instance:

  • The KLM Royal Dutch Airlines and American Airlines apps are some of my most-used apps since moving to Europe; they do exactly what I want—quickly—and are frequently updated;

  • The Albert Heijn app and its omni-channel features (grocery delivery, and getting someone else to carry groceries up and down my steep and narrow Amsterdam stairs); and

  • My bank’s app, which started off as, well, kind of weird back in August, has been relentlessly updated this fall and has gotten much better, adding better authentication and spending tracking.

To me, “good” software directly changes how the business operates. As one of my favorite think pieces from 2018, by Brian Sommer, points out: all the digital transformation in the world is useless if you don’t actually change the business.

There’s still plenty of work left. My life insurance company’s software, for example, is little more than a web app that lists PDFs to print and FAX(!) to do even the simplest thing like changing your address. In aggregate, the overall direction we’re headed is good. All you enterprises just need to keep it up: from CMM, to ITIL, to Agile, to DevOps, all The Great Methodologies agree that improvement never ends.

Last year I said that you should clear out all your technical debt so you’re no longer paralyzed by the cost of short-term architectural decisions. This year, you should put a platform-as-a-product strategy in place. Treating your platform as a product means applying the same product management approach that your product teams are using to improve your customer-facing applications.


Beyond boring platforms

Us folks at Pivotal will whiteboard your eyes out on how a cloud platform will speed up your development cycle. You know, the whole cf push thing. Our customers have shown over and over again that with a centralized, standardized platform, developers spend less time—if any—programming infrastructure, and instead solve customer problems that directly improve their business.
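
If you haven’t seen it, the “whole cf push thing” amounts to handing the platform a built artifact and letting it do the rest. A minimal sketch, with a made-up app name and sizing:

```powershell
# Staging, scheduling, routing, and load balancing happen on the platform
# side, not in a ticket queue.
cf push customer-portal -p .\publish -i 3 -m 512M
```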

Standing up a platform isn’t a one-time project that delivers a static service and some SLAs. As with any good software, it’s a never-ending series of small batches that takes in requirements from customers, prioritizes which ones to implement this week, develops and releases the software, and verifies that the customer’s life was improved… trying it all over again if not. That continuous improvement is the product part of platform-as-a-product.

Thankfully, describing “how to product” is well known. If you’ve been improving your software by applying lean design, weekly release cycles, and actually following agile software development best practices like TDD and pair programming, you know the basics. When the product is your platform, you treat the product teams as the customers, always working to discover their problems and engineer pragmatic solutions.


A validated platform theory

Rabobank’s platform journey is a great example. As Vincent Oostindië explained, they needed to replace their highly successful, but now aged, platform. The champ had run their online banking application for many years but couldn’t keep up with new technologies, scale, or the “you build it, you own it” DevOps principles the bank needed.

At Rabobank, as with most organizations, choosing a new platform has traditionally been driven by a committee wielding spreadsheets that list endless features and requirements. Each row lists a capability, feature, or type of “requirement” that operators and developers, the committee assumes, will need. At this point, most enterprises would pick a platform using advanced column-sorting strategies, vendor haruspicy, and disciplined enterprise architecture futurology.

Instead, treating the developers as customers, Rabobank experimented with several different platforms by having developers actually use them for small projects. Following the product approach, they then observed which platforms served the developers best. This working PoC was driven by user validation, proving out which platform worked best. More importantly, it proved that developers liked the platform. “If you guys don’t like it you’ll just go away,” Vincent explains, “and we have a nice platform—or, technically nice platform—but with no users on it, there’s no point.”


The time after toil

This just brings you to the start of the platform-as-a-product story, though. It’s just the first, major release. Applying a product-driven approach means doing this month after month, year after year. If you’re like many large organizations I talk with, you’re just now at the point where you’ve had several initial successes and nailed down platform reliability. At the beginning, your platform engineers probably also spent a considerable amount of time consulting with each product team on the basics of platform use. As the product teams get wiser and more numerous, your platform engineers have probably lost track of what applications are running on the platform. Now, those platform engineers should have more time. Now’s the time to become more product-oriented.

Forrester’s Chris Gardner and Charlie Betz describe what that looks like: “Administrators must move away from treating systems as static monoliths and toward deploying fluid infrastructure the same way developers build apps: developing, encapsulating, testing, and managing infrastructure models in code repositories.” They add in some balletics, “the new priority is understanding how software makes hardware dance to the tune of business.”

Now, listen: of course I’m going to tell you that building your own platform from nothing is a bad idea—I work at Pivotal after all! Treating your platform as a product doesn’t mean you’ll build everything from scratch. From your customer’s perspective—the product team’s perspective—it’s just waste. Instead, your platform engineers only focus on creating differentiated services, just like your product teams.

For your developers, there’s nothing differentiating about how PDFs are generated, databases are run, or queues are managed. Instead of building such things, developers use existing frameworks so that they can focus on more valuable work.

Similarly, your platform team will be wasting their time (and your money) if they’re building a platform from piece-parts, paying for the joyful experience of re-learning and re-coding all the things we’ve learned in past years. Instead, platform engineers will mine for differentiated value by extending and customizing this platform. For example, they might introduce audit automation and self-service, getting audit windows down from 10 months to less than a week like the US Air Force. Or they might accelerate a bank’s global growth by providing a shared banking-as-a-service platform like Scotiabank.

For virtually every organization, time spent building your own platform from scratch is waste. “We also came to the conclusion that as a bank, we shouldn’t be building a platform,” Vincent explained. That work would require a lot of resources without directly adding value for the end user: “It would mean people working on that every day, and, well, that’s not bringing any business value.” Instead of building your own platform, I would not too humbly suggest Pivotal Cloud Foundry®. It’s what Rabobank decided on at the end of that working PoC. This strategic decision isn’t unique to Rabobank at all; numerous other enterprises have come to the same conclusion.


Start with bunnies and cats

Conceptually, the idea is simple. Putting it into place, like all transformations, takes effort and much trial and error. Recently, our customers have been sorting through this dance, to use Gardner and Betz’s flourish. Our Pivotal Cloud Foundry Solutions team has boiled the moves down into a methodology that’s been deployed many times. A couple of months ago Paula Kennedy gave a great overview that’s a good place to start, also distilled into a delightful cartoon with cats and bunnies (thanks, Denise Yu)!


As ever, the end goal is removing waste and toil from your organization’s software lifecycle so that, first, your platform engineers can focus on helping product teams and, second, product teams can focus on writing better software for the people who use it. Hopefully, by this time next year, I won’t have to fill out a PDF just to pay my life insurance premium. That’s all I’m really looking for here. Like any normal person, I don’t want to pay you to configure servers; I just want to update my address, pay my bills, and get on with my life.

Create Customer Value Like FAANG: Continuous Delivery via Spinnaker

If you haven’t heard of FAANG, then you might know them by their household names: Facebook, Apple, Amazon, Netflix and Google. Together, they make up the collection of companies that routinely outpace the S&P 500. What is the secret to the excellent market performance of these companies?

In a September 17, 2018, blog from Forrester titled “Stay Ahead Of Your Customers With Continuous Delivery,” James Staten and Jeffrey Hammond (Vice Presidents and Principal Analysts at Forrester) correlate the success of Amazon, Netflix, and Google, in particular, to their ability to rapidly create customer value through continuous delivery:

“Continuous delivery means being prepared for fast incremental delivery of your conclusions about what new values and innovations will work best for your customers. And you must be prepared to be wrong and respond fast with incremental iterations to improve the alignment based on customer feedback.”

But I’m Not Like Netflix, You Say

Reaching the development and delivery models of a Netflix or Google might seem impossible for many enterprises today. After all, a Netflix doesn’t have the same concerns and level of risk as a bank, right?

But the reality we’ve seen at Pivotal is that large enterprise companies, including those in heavily regulated industries like finance and healthcare, are transforming their application development and delivery leveraging cloud-native approaches. For example, Cerner, an electronic medical records (EMR) company, transformed its infrastructure and application delivery to be more competitive using Pivotal Cloud Foundry (PCF) and Concourse.

Before the introduction of PCF, Cerner had already made strides to modernize its development processes, but it was “still stuck delivering new software to production once or twice a year. New features reached code completion, but then just sat in a staging environment until the next release.” It’s not uncommon for companies to adopt agile development practices but still lack the infrastructure, processes, or tools to overcome the inertia of traditional release practices. Achieving the ability to continuously deliver value to customers is still one of the larger transformation hurdles for enterprises today.

Don’t Let the Value of Your Code Depreciate

Continuous delivery was born as an extension to agile methods. After all, if you can’t get your code changes to your customers fast enough to provide value and get timely feedback, then why be agile in the first place? As one of Pivotal’s customers, CSAA Insurance, stated in a presentation at Cloud Foundry Summit: “Un-deployed code is at best a depreciating asset.”

For every minute that you delay getting your code to production, the value of that asset decreases. And the risk of deploying to production actually increases at the same time. In fact, deploying small, constant changes to production reduces your risk and makes it easier to find issues if something does go wrong.

What does the ideal continuous delivery process look like then? It kicks off with code commit and continuous integration (CI) to get code built and tested. CI is typically a first area of automation that companies tackle on their path to continuous delivery—you could even say it’s a required first step.

At this stage, developers make sure their code is always at a quality level that is safe for release. By leveraging a development environment that is at parity with your production environment, you reduce the variability in application behavior as it progresses through the pipeline.

An automated hand-off from the CI process initiates the continuous delivery workflow. Here, speed must work hand in hand with safety and stability to deliver your applications to a production environment. This workflow may trigger jobs like security and compliance checks, or any other testing, before moving to production, where you’ll leverage automated deployment strategies like blue/green and canary deployments to reduce risk.
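
To make the blue/green idea concrete, here is a minimal sketch using the Cloud Foundry CLI; the post doesn’t prescribe this tooling, the app and domain names are placeholders, and Spinnaker (below) automates the same moves:

```powershell
# Push the new version ("green") alongside the live one, on its own route.
cf push portal-green -p .\publish -n portal-green

# After smoke-testing green, start sending production traffic to it.
cf map-route portal-green example.com -n portal

# Retire "blue" once green looks healthy; rollback is just re-mapping.
cf unmap-route portal-blue example.com -n portal
```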


But that’s not the end for continuous delivery. It continues with the operation of the application in production where you can monitor for issues, remediate security vulnerabilities, and test your apps for resiliency. Basically, you are extending the feedback loops through production, which enables continuous improvement of your application.

Mastering Continuous Delivery with Spinnaker

If you’re ready to tackle the challenges of continuous delivery, why not build on the discoveries of the masters? Netflix invented Spinnaker to help it extend and enhance cloud deployments for its high-performance organization. These days, Spinnaker is a powerful open source, multi-cloud continuous delivery solution that helps teams release software changes with high velocity and confidence. It works with all major cloud platforms and enjoys broad support from the open source community.

The Spinnaker community is vibrant and growing, ensuring that key learnings from these organizations continue to be shared through tool enhancements.

5 Reasons to Use Spinnaker for Cloud Deployments

Spinnaker can become a key part of any modern engineering team by extending your CI processes into automated, opinionated continuous delivery pipelines optimized for the cloud.

Figure: Continuous delivery pipeline where CI hands off an artifact for Spinnaker to deploy, executing integration tests and other custom actions, and providing an up-to-date view of all applications in production.

Here are five reasons that Spinnaker is useful for high-performance teams for cloud deployments:

  1. Optimization for multi-cloud infrastructure. Spinnaker enables you to decouple the release pipeline from target cloud providers so that you can deploy the same application in the same way to multiple clouds and platform instances. This simplifies and standardizes deployments across multiple stages, teams and applications. Spinnaker also leverages the immutable infrastructure of the cloud to deploy in new environments every time an application change happens.

  2. Zero downtime deployments. Spinnaker automates sophisticated deployment strategies out of the box that help to reduce risk at production, like blue/green and canary deployments with easy rollbacks if issues occur. It can also manage progressive deployments (e.g., by time zones).

  3. Application inventory. Spinnaker maintains an inventory of where applications and their instances are deployed across multiple environments, IaaS providers, and runtimes—enabling continued feedback in production. The inventory is built by querying cloud providers and is even available for applications not deployed by Spinnaker. This level of oversight shows the health of applications and allows you to take corrective actions, such as restarting a failing application or rolling it back. The inventory can also be used by other tools, for tasks like finding security vulnerabilities in deployed code or troubleshooting issues in production.

  4. Pipeline compliance and auditability. Spinnaker keeps an audit trail of all changes to applications and infrastructure—basically answering who, what, when and why for developers, operators, and auditors. It also supports role-based access control, providing multiple authentication and authorization options, including OAuth, SAML, LDAP, X.509 certs, GitHub teams, Azure groups or Google Groups. You can even track manual intervention and approval gates.

  5. Extensibility with integrations. Spinnaker supports a broad ecosystem of modern tools, which enables an automated CD pipeline and advanced functions. You can extend your existing CI investments (e.g., Jenkins, Travis CI, Wercker, Concourse) to add advanced deployment strategies. You can send notifications through Slack, HipChat, email, and so on. You can enhance your application monitoring through integrations with Datadog, Prometheus, Stackdriver, etc. Plus, Spinnaker is cloud-provider-agnostic. The upshot is that Spinnaker can work with your current and future tech stack.

While the FAANG group may stand out as masters of continuous delivery to the cloud, it’s a capability that any team in any industry can adapt for their environments.


First, take the infrastructure out of the delivery risk equation with a cloud platform like PCF, making it dead simple for developers to deliver value-add code on stable, secure clouds. Then, a tool like Spinnaker can help teams manage continuous delivery in a safe, scalable way. It’s not about navigating oppressive change management processes on a long path to production. Rather, it’s about leveraging a fully automated, standard, stable delivery pipeline built for the cloud and designed to optimize the cycle time for getting feedback.

Want to learn more and see Spinnaker in action? Check out our webcast: Continuous Delivery to the Cloud: Automate Thru Production with CI + Spinnaker.

