Unable to update or create MCS Machine Catalog in AWS with error “Connection credentials do not have sufficient permission to DeleteTags.”

Please update the IAM policy JSON to include the ec2:DeleteTags permission.

Example policy (abridged):


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DeleteTags",
        ...
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        ...
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::citrix*"
    },
    {
      "Action": [
        ...
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::*:role/*"
    }
  ]
}


DLP API and PowerShell

I need a solution

Hello all,

I am trying to hit the DLP API with the script below:

$Proxy = New-WebServiceProxy -Uri "https://<enforce Server>/ProtectManager/services/v2011/incidents?wsdl" -Credential domain\user
$type = $Proxy.GetType().Namespace
$incidentList = ($type + '.IncidentListRequest')
$incidentListObj = New-Object ($incidentList)
$incidentListObj.savedReportId = '34309'
$incidentListObj.incidentCreationDateLaterThan = '2019-06-15T13:45:30'


When I run that I get the following error:

Cannot convert argument "incidentListRequest", with value: "Microsoft.PowerShell.Commands.NewWebserviceProxy.AutogeneratedTypes.WebServiceProxy13_services_v2011_incidents_wsdl.IncidentListRequest", for "incidentList" to type "Microsoft.PowerShell.Commands.NewWebserviceProxy.AutogeneratedTypes.WebServiceProxy13_services_v2011_incidents_wsdl.IncidentListRequest": "Cannot convert the "Microsoft.PowerShell.Commands.NewWebserviceProxy.AutogeneratedTypes.WebServiceProxy13_services_v2011_incidents_wsdl.IncidentListRequest" value of type "Microsoft.PowerShell.Commands.NewWebserviceProxy.AutogeneratedTypes.WebServiceProxy13_services_v2011_incidents_wsdl.IncidentListRequest" to type

At C:\Users\User\Documents\DARReports\TestAPI2.ps1:12 char:1

+ $Proxy.incidentList($incidentListObj)

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : NotSpecified: (:) [], MethodException

+ FullyQualifiedErrorId : MethodArgumentConversionInvalidCastArgument

I think I am close but not quite. Any tips or help would be greatly appreciated.



Do We Have Containerized Solution For CWP ?

I need a solution

Currently, when we deploy CWP in our AWS environment it spins up EC2 instances for threat scanning. Even when no files arrive in S3 for long stretches, the servers keep running. Is there a way to containerize the solution so that EC2 instances spin up only when files land in S3, scan them for threats, and then shut down?



Has Your Cloud Strategy Turned Dark and Stormy?

The cloud you and your departments choose now will affect data center administration for years to come. Many customers have been shown the sunny side of cloud deployments, attracted by the model of convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. The silver lining is that, in theory, you can deploy instances that scale to almost any degree and turn off what you don’t use when you are not using it. Additionally, … READ MORE


PCF is Known for the Best App Deployment Experience. Now It’s Even Better with Zero Downtime Updates for Pretty Much Everything.


When you want to get your code from laptop to production, there’s nothing better than Pivotal Cloud Foundry. Just cf push your app, and the platform does the rest for you. Moments later, your updates are in prod.

Platform operators have their own glorious workflow. With Pivotal Cloud Foundry (PCF), patches, updates, and upgrades are automated and effortless.

Both sets of capabilities are on full display in PCF 2.4, which is now generally available. Let’s start by looking at how the app deployment process gets better.

PCF now supports native zero-downtime rolling deployments. Cloud Foundry has supported blue-green deployments for a while. But you needed two app versions, and some client-side orchestration to make it work. Now, client-side coordination is no longer required. Instead, the coordination now happens on the other side of the PCF API. It’s not a substitute for a full-featured CD tool, but it’s handy in lots of scenarios. Try it out as a beta in PCF 2.4!

Developers can scale their app in new ways in PCF 2.4. App Automator (in beta) allows you to define scaling rules based on custom app metrics. This new service uses other new capabilities (like the Metric Registrar and the new Log-Cache metrics endpoint) to give you more control over how your app behaves. 

Here’s what’s new for platform teams in PCF 2.4:

  • Dynamic egress policies. Want to make a policy change with Application Security Groups (ASGs)? In the past, you had to restart your app. Now, operators can instead use dynamic egress policies to make policy changes with zero downtime. This feature debuts as a beta, as well; experiment with them to see how they can replace ASGs over time. Say goodbye to those annoying app restarts!

  • Zero-downtime stack updates. In PCF 2.4, the new cflinuxfs3 rootfs (based on Ubuntu 18.04 LTS) is the default stack for new apps. With previous versions, migrating your app to the new stack would have required a restage and restart. Now the platform executes this in an automated, rolling fashion. That means no effort and no downtime.

  • PCF 2.4 continues the zero-downtime OS updates across the whole platform. Dealing with operating system end-of-support dates can be stressful, but with PCF this should be a minor, fully automated change in your environment. More PCF software has updated to Ubuntu 16.04 with this release, ensuring you can keep getting non-disruptive security updates for all software on the platform.

Zero downtime stack updates are a massive boost to productivity. Engineers can just go about their day, while PCF updates the bits under the hood.

But you really start to appreciate the power of zero-downtime updates when a new CVE hits. And what do you know, a Kubernetes Critical CVE was revealed earlier this month. Pivotal tested the fix, and pushed it out shortly after the vulnerability was discovered. Many Pivotal Container Service customers had their Kubernetes clusters patched in production before the news about the CVE became public. We recap this story below because it’s a terrific example of how you should run your most important business systems.

In PCF 2.4, your InfoSec teams will want to know about the additional TLS encryption and a new scanning tool to assist with compliance.

The other great thing about the automation PCF brings to your enterprise is that comprehensive security is baked into the platform. Developers inherit loads of security controls from PCF. For example, CredHub protects your passwords and other secrets, you get multi-tenancy through the permissions model, and you enjoy automated dependency management with buildpacks. (And your platform teams keep PCF in a healthy state using the aforementioned automated process.)

On to the release highlights!

Native Zero Downtime Rolling Deployments: A Powerful Addition to the Blue-Green Approach (beta)

Dynamic Egress Policies Overcome the Limitations of ASGs (beta)

Zero Downtime Stack Updates to cflinuxfs3: We Maintain the Base rootfs in Your Containers So You Don’t Have to

Zero Downtime OS Updates: Just Say No to Managing Operating Systems

PKS Fixes the Kubernetes Critical CVE with Zero Downtime

Compliance Scanner for PCF (beta)

Configure Scheduling and Scaling Events with App Automator (beta)

Apps Manager Adds Global Search, Greater Parity with the cf CLI

Emit Custom App Metrics to PCF with Metric Registrar

Operations Manager – Zip Through Your Day with These New Capabilities

MySQL for PCF 2.5: Now with HA via Galera Clusters

Use TLS to Encrypt Connections to Internal, External MySQL Instances

Connect Your Apps to File Servers with SMB Volume Services

Other Announcements

Native Zero Downtime Rolling Deployments: A Powerful Addition to the Blue-Green Approach

One popular way to reduce risk when rolling out a new version of your app is to use a blue-green deployment. Here’s the scenario:

Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.

At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle.

As you prepare a new version of your software, deployment and the final stage of testing takes place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the router so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
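The cut-over described above can be sketched in a few lines of Python. This is purely illustrative: the environment names and version strings are invented, and in PCF the switch actually happens at the router layer.

```python
class Router:
    """Toy model of a blue-green cut-over: two environments, one live."""

    def __init__(self):
        self.versions = {"blue": "v1", "green": None}  # Blue is live with v1
        self.live = "blue"

    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy_to_idle(self, version):
        # Stage and test the new version in the environment that is not live.
        self.versions[self.idle()] = version

    def cut_over(self):
        # Flip the router: all incoming requests now go to the new environment.
        self.live = self.idle()

    def serve(self):
        return self.versions[self.live]


router = Router()
router.deploy_to_idle("v2")  # Green now holds v2; Blue still serves traffic
router.cut_over()            # Green is live, Blue is idle
print(router.serve())        # prints: v2
```

The key property is that the old environment stays intact after the switch, so rolling back is just another `cut_over()`.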

Cloud Foundry has supported this scenario for quite some time, but you had to have two different versions of an app plus some homegrown scripting or orchestration. Now, PCF 2.4 gives you the option to do a similar process natively in the platform. First, cf push an app (i.e. your-app-name) like normal. Then, when you want the platform to cut-over to the new version, you just type:

cf v3-zdt-push your-app-name

From there, PCF gradually brings instances of the new version online. As they prove to be working and healthy, the older versions are subsequently removed. This option isn’t meant to be used in lieu of a continuous deployment tool. But you’ll find it useful for loads of other scenarios, like performance tuning, code-level security fixes — even chaos engineering!
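Conceptually, that rolling behavior looks something like the sketch below (the function and names are invented for illustration; the real orchestration happens behind the PCF API):

```python
def rolling_update(instances, new_version, is_healthy):
    """Replace instances one slot at a time; halt and keep the old
    instance as soon as a new instance fails its health check."""
    updated = list(instances)
    for i in range(len(updated)):
        updated[i] = new_version          # bring one new instance online
        if not is_healthy(new_version):   # health-check before continuing
            updated[i] = instances[i]     # roll back this slot and stop
            break
    return updated


fleet = ["v1", "v1", "v1"]
print(rolling_update(fleet, "v2", is_healthy=lambda v: True))
# prints: ['v2', 'v2', 'v2']
```

Because only one slot changes at a time, overall capacity never drops to zero during the rollout.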

For a screenshot tutorial, check out Richard Seroter’s thread:

You can also watch Zach Robinson’s demo from Cloud Foundry Summit earlier this year.

Dynamic Egress Policies Overcome the Limitations of ASGs

Long-time Cloud Foundry operators are intimately familiar with Application Security Groups (ASGs). Operators use ASGs to control how applications interact with off-platform services. With ASGs, you create and manage rules that specify the protocols, ports, and IP address ranges where application or task instances send traffic.

ASGs work fine for the most part. But there is one annoying thing about ASGs: you have to restart your apps when you apply a new policy or update an existing one. This results in undesirable downtime. Another constraint with ASGs is that permissions are too coarse. Policies can only apply at the space level, not the application level. So you must grant access to an external service for all the apps in a given space, even if you just need a single app to talk to said service.

In PCF 2.4, there is a better way to govern these traffic flows: Dynamic Egress Policies, a beta capability in PCF 2.4. (This feature was released as part of open source Cloud Foundry v2.19.0.) With Dynamic Egress Policies, you can configure egress policies for CF apps and spaces, just by using the IP address range.
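To make the idea concrete, an egress policy check reduces to matching protocol, port, and destination against a rule set. The sketch below is illustrative only; the policy shape and names are assumptions, not the cf-networking schema:

```python
import ipaddress

# Hypothetical egress rules: protocol, inclusive port range, destination CIDR.
POLICIES = [
    {"protocol": "tcp", "ports": (5432, 5432), "destination": "10.0.8.0/24"},
]


def egress_allowed(protocol, port, dest_ip):
    """Return True if any policy permits traffic to dest_ip on this port."""
    dest = ipaddress.ip_address(dest_ip)
    return any(
        p["protocol"] == protocol
        and p["ports"][0] <= port <= p["ports"][1]
        and dest in ipaddress.ip_network(p["destination"])
        for p in POLICIES
    )


print(egress_allowed("tcp", 5432, "10.0.8.14"))   # prints: True
print(egress_allowed("tcp", 3306, "10.0.8.14"))   # prints: False
```

Because the rules are data rather than baked into the app's environment, they can change without restarting the app, which is the downtime win over ASGs.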

Want to drill into this feature more? Check out this superb technical blog post from Preethi Varambally. You can also review the docs in the cf-networking repo.

Zero Downtime Stack Updates to cflinuxfs3: We Maintain the Base rootfs in Your Containers So You Don’t Have to!

In PCF 2.3, Pivotal added support for the Ubuntu 18.04 stack and cflinuxfs3 for all supported buildpacks. New installs of PCF 2.4 will run the cflinuxfs3 stack and related buildpacks by default.

A few items to note:

  • cflinuxfs2 remains the default stack for PCF 2.2 and PCF 2.3.

  • The default stack can be toggled between cflinuxfs2 and cflinuxfs3. This setting will be inherited upon upgrade.

While the upgrade to cflinuxfs3 is seamless, there may be some impact to your apps. Make sure to test them before cutting over to the new version in production. As always, engage your account team to work through this transition!

Zero Downtime OS Updates: Just Say No to Managing Operating Systems

In PCF 2.3, we bumped the OS version of several tiles to Ubuntu 16.04, the latest release. Since then, a slew of additional services have bumped their stemcells to Ubuntu 16.04, including:

It’s almost 2019. Are you still patching your operating systems manually? Or worse, not patching them at all? Ask any Pivotal customer how liberating it feels to never worry about an OS patch again!

PKS Fixes the Kubernetes Critical CVE with Zero Downtime

I like to say there are three certainties in life: death, taxes, and security updates. The Kubernetes world was jolted with a CVE earlier this month when a privilege escalation vulnerability was discovered. Here’s how ZDNet described the issue:

With a specially crafted network request, any user can establish a connection through the Kubernetes application programming interface (API) server to a backend server. Once established, an attacker can send arbitrary requests over the network connection directly to that backend. Adding insult to injury, these requests are authenticated with the Kubernetes API server’s Transport Layer Security (TLS) credentials.

Yikes. Surprised by this? You shouldn’t be. CVEs exist in every piece of software. Expect more of them to hit. The question is how fast can you respond to them. If it’s more than a day, you need to get serious about modernizing your infrastructure operations.

Here’s the good news: we can help you with that! Our own John Allwright explained how PKS customers instantly and effortlessly applied this patch to their Kubernetes clusters before the news was even widely reported:

For PKS customers with automated upgrade pipelines, updates from PivNet are automatically applied to their PKS instances. In this case, these pipelines detected a new PKS release (v1.2.3), and immediately updated the customer’s PKS environments, with zero downtime.

Without an automated process, CVEs are stressful, white-knuckle moments in enterprise IT. This fun video captures how much easier life can be with automated patching for your most important business systems!

Compliance Scanner for PCF (beta)

More and more organizations are leaving the drudgery of OS management to automated platforms. DISA recently updated its Security Technical Implementation Guides (STIGs) to certify the use of embedded operating systems for DoD work.

We are all for this movement, and want to help accelerate it. So we’re launching the Compliance Scanner for PCF. Here’s how this tool will help auditors and compliance teams at big organizations.

There’s a universe of third-party configuration scanners for determining security and compliance in operating systems. By and large, these tools were for the pre-platform era, when the OS wasn’t embedded. So, the industry needs scanning tools that are purpose-built for platforms. Hence, Compliance Scanner for PCF!

The tile includes the best of both worlds: remastered tests that simultaneously fit the stemcell model, while aligning with industry-recognized guidelines for secure configurations. What’s not to like?

With just a few clicks, your compliance team will get a full report on the compliance posture of the entire platform. That means you get the green light to go to production, without waiting for a prolonged compliance period.

The results from the Compliance Scanner for PCF, coming soon!

The Compliance Scanner for PCF includes:

  • The OpenSCAP scanner that does the actual scanning.

  • Tests written by Pivotal Compliance Innovation in YML.

  • XGen: XCCDF Generator, which translates the YML tests to XCCDF formatted XML, as defined by the SCAP standard.

The scanner is now available on PivNet. Grab the bits, then go read the docs!

This piece of tech should make the auditor’s job much easier. Want to make their job even easier? Make sure you send them this whitepaper: Pivotal Cloud Foundry: The Auditor’s Guide!

Configure Scheduling and Scaling Events with App Automator (beta)

Here’s a neat feature. What if you could schedule scaling events for your apps – or schedule recurring batch jobs with a simple manifest file? That’s App Automator, and it’s a beta in PCF 2.4. So how does it work?

Developers can express when and what workloads should run via “Triggers” and “Actions”. These parameters live in an App Automator manifest that lives within the app code. The manifest would look something like this:

   - type: schedule
     cron: "@every 10m"
     action: run_etl

   - type: cf-task
     app: data-cruncher
     command: "./run_etl.sh"
     memory_in_mb: 2048

App Automator includes predefined Triggers and Actions based on common scenarios:

  • Triggers: schedule/cron, event, and metric

  • Actions: scale, curl, task
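To make the Trigger/Action split concrete, here's a toy dispatcher in Python. This is not App Automator's implementation; the registry, decorator, and task name are all invented for illustration:

```python
# Registry mapping Action names to callables.
ACTIONS = {}


def action(name):
    """Decorator that registers a function as a named Action."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register


@action("run_etl")
def run_etl():
    return "etl task started"


def fire(trigger):
    """A Trigger names the Action to invoke when it fires."""
    return ACTIONS[trigger["action"]]()


print(fire({"type": "schedule", "cron": "@every 10m", "action": "run_etl"}))
# prints: etl task started
```

The manifest shown earlier maps naturally onto this shape: the trigger stanza says when to fire, and the action stanza says what to run.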

Sure, you can manage scaling with PCF App Autoscaler, and you can use Spring Cloud Data Flow for PCF or PCF Scheduler to automate tasks. App Automator’s approach offers two advantages:

  • It’s easier to build pipelines. Scheduling and scaling behavior lives in the App Automator manifest.

  • Simplified operations. No database, no service broker. Just install via CLI, and App Automator is available in the Space.

App Automator will be available in the coming weeks. We’ll update this post with a download link to the CLI plug-in. In the meantime, contact your account team for access!

 Note that App Automator is compatible with PAS 2.2 and above.

Apps Manager Adds Global Search, Greater Parity with the cf CLI

Once you have hundreds – or thousands – of apps running PCF, you’re going to need an easy way to find a specific app or service instance. You may want an easy way to search across your orgs and spaces too.

Now you can, with the new global search capability in Apps Manager! Simply type your search terms in the top search bar, shown below. Hit enter, and the search will quickly return results for that string across all your orgs, spaces, service instances, and apps. 

Apps Manager in PCF 2.4 now features a global search.

 There’s more. Apps Manager is really useful for developers just ramping up on Pivotal Cloud Foundry, especially when compared to the cf CLI. We want Apps Manager to have pragmatic parity with the CLI wherever possible. So, in PCF 2.4, Apps Manager gains additional parity with the CLI for restaging your app and service instance sharing.

Operations Manager: Zip Through Your Day with These New Capabilities

Pivotal wants to help operators efficiently manage the platform and onboard new development teams. We already talked about all the zero-downtime goodness in this release, so let’s dig into new enhancements that unlock efficiency in other areas.

Improved permissions logic eases day-to-day administration.

Ops Manager users with write access can use the UI and API when another user with write access is logged in at the same time. This is the finishing touch on a recent round of new role-based access control features.

Expiring cert warning

PCF now protects several communication pathways with the TLS protocol. TLS, of course, uses certificates to ensure that the “client” and “server” on each side of the transaction are authorized and authenticated to share data. Managing all these certs has gotten easier over the last 18 months, going back to the certificate rotation APIs in PCF 1.12. We’ve further enhanced this workflow in PCF 2.4: Operations Manager now proactively prompts you with a banner when your certs may be expiring soon. Here’s a sample of the UX:

Operations Manager 2.4 warns you about expiring certificates.

 Now, it’s easier to ensure that your certs stay current!
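The banner logic boils down to a simple date comparison, something like the sketch below. This is illustrative only: the 30-day threshold and function name are assumptions, not Ops Manager's actual settings.

```python
from datetime import datetime, timedelta


def expiring_soon(not_after, warn_days=30, now=None):
    """True if a certificate's notAfter date falls within the warning window."""
    now = now or datetime.utcnow()
    return not_after - now <= timedelta(days=warn_days)


# A cert expiring in two weeks should trigger the warning banner.
print(expiring_soon(datetime(2019, 1, 15), now=datetime(2019, 1, 1)))
# prints: True
```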

Bring your own antivirus software for the BOSH Director VM

Need to run antivirus software on the VMs that power PCF? Use PCF’s ClamAV Add-On. Prefer a different option? Now you can bring your own! This option will appeal to operators with specific antivirus requirements.

New tools for IaaS customization – Global CPI extensions

PCF is, of course, a multi-cloud platform. The platform’s underlying cloud provider interface (CPI) handles the subtle differences between IaaS providers for you. The end result is a uniform, consistent operational experience across any private and public cloud. But what if you want to customize how you consume the underlying infrastructure? Now you can, via self-service, with the new Global CPI extensions feature in Ops Manager. Hundreds of different config extensions are at your disposal!

Note that Global CPI extensions are API only. In this way, they are quite similar conceptually to the vm-extensions feature announced earlier this year.

“Advanced Mode” offers streamlined workflows for power users

Ops Manager “locks” certain fields after a successful deployment. If you need to “unlock” some of these fields, you may do so via Advanced Mode. As the name suggests, this feature is recommended for advanced users only. And please use this feature in conjunction with Pivotal Support!

Emit Custom App Metrics to PCF with Metric Registrar

Metric Registrar allows app developers to export custom app metrics as native CF metrics. Developers can use it to easily create custom metrics that better signal app health and performance, using standard client libraries like Micrometer or Prometheus.

There is a new CLI plugin which you can install via:

cf install-plugin -r CF-Community "metric-registrar"

This will allow you to register your app with the Metric Registrar and gives you two options for emitting custom app metrics: via a public metrics endpoint or through structured logs.
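As a rough illustration of the public-metrics-endpoint option, an app might expose a counter in the Prometheus text exposition format. This sketch is an assumption for illustration; the metric name and endpoint wiring are invented, not part of Metric Registrar itself:

```python
# In-process counter state; a real app would use a client library
# such as prometheus_client or Micrometer instead.
counters = {"orders_processed_total": 0}


def record_order():
    counters["orders_processed_total"] += 1


def metrics_payload():
    """Render counters in the Prometheus text exposition format."""
    lines = []
    for name, value in counters.items():
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines)


record_order()
print(metrics_payload())
```

Metric Registrar would then poll such an endpoint and convert what it scrapes into native CF metrics.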

An Overview of Metric Registrar

One other quick note: this capability is off by default. To enable it, simply go to Ops Manager > PAS > Metric Registrar for the settings.

MySQL for PCF 2.5: Now with HA via Galera Clusters

When you think of a cloud-native database, what attributes leap to mind? On-demand provisioning for sure. You want lots of availability provisions, like leader-follower and automated failover. And you want security built-in, with something like TLS.

Our MySQL tile team has been busy shipping these features (and more!) over the last year. Now in MySQL for PCF 2.5, we’ve added another essential feature your enterprise will appreciate: high availability through Galera clustering.

Now, enterprise application developers and operators can enjoy resilience and application availability even in the face of platform and network failures. Jagdish Mirani offers this excellent write-up with all the details.

Use TLS to Encrypt Connections to Internal, External MySQL Instances

The adoption of TLS in PCF continues apace in this release. Here’s what’s new on the TLS front:

  • You can configure PAS 2.4 to use TLS for all components’ connections to the internal PXC MySQL database.

  • Similarly, you can configure PAS to use TLS for all components’ connections to an external MySQL database. Just provide a CA cert!

Two other updates, related to TLS:

  • App developers can use CF SSH when a platform operator enables authenticated container ingress. Previously, SSH was disabled in this scenario. That restriction is now gone!

  • Every PCF 2.4 deployment will now feature improved routing consistency, security, and stability from Gorouters to Linux cells. This feature was opt-in in PCF 2.3; now it’s enabled by default.

Need a reminder why PCF loves TLS? Check out this excellent article by our own Brian McClain.

Connect Your Apps to File Servers with SMB Volume Services

Do you use file servers supporting the CIFS/SMB protocol? Then the SMB volume service is for you! It ships as part of the PAS 2.4 tile.

Developers can use SMB volume services to bind existing SMB shares to their apps. The feature supports the key advantage of the SMB protocol: native password authentication, which means you can control access to file shares without the overhead of configuring an LDAP server.

Other Announcements

Concourse for PCF

This service got a significant security upgrade: user-based authentication. Previously, you had to log in under a specific team without an association to a user. In Concourse 4.2.1+, users can authenticate into teams as specific users. (Users can be added to a team by configuring the team’s whitelist as described in Configuring Team Authentication.) Another cool thing about this feature: it enables the possibility of other security capabilities, like role-based access control. Stay tuned on this front!

PAS for Windows 2.4

Two things of note on the Windows and .NET side of the house:

  • Operators can now control the density of Windows cells. This is a useful feature if you want to trim your infrastructure costs.

  • Pivotal now provides and maintains Windows stemcells for AWS GovCloud. You can now use PAS for Windows to run .NET workloads on this IaaS target.

Built-in MySQL database

If you are using the PAS built-in MySQL-compatible database (rather than an externally managed MySQL), you must complete the transition from MariaDB to Percona in PAS 2.3 or earlier before the PAS 2.4 upgrade. Read more about the straightforward migration procedure here.

Try Pivotal Cloud Foundry for Free

Ready to get started? You can take PCF for a spin on Pivotal Web Services for free. Want to dive into the release a bit more? Check out the links below and read up on the newest capabilities. Then make the move to Pivotal Cloud Foundry!


This blog contains statements relating to Pivotal’s expectations, projections, beliefs and prospects which are “forward-looking statements” within the meaning of the federal securities laws and by their nature are uncertain. Words such as “believe,” “may,” “will,” “estimate,” “continue,” “anticipate,” “intend,” “expect,” “plans,” and similar expressions are intended to identify forward-looking statements. Such forward-looking statements are not guarantees of future performance, and you are cautioned not to place undue reliance on these forward-looking statements. Actual results could differ materially from those projected in the forward-looking statements as a result of many factors, including but not limited to: (i) our limited operating history as an independent company, which makes it difficult to evaluate our prospects; (ii) the substantial losses we have incurred and the risks of not being able to generate sufficient revenue to achieve and sustain profitability; (iii) our future success depending in large part on the growth of our target markets; (iv) our future growth depending largely on Pivotal Cloud Foundry and our platform-related services; (v) our subscription revenue growth rate not being indicative of our future performance or ability to grow; (vi) our business and prospects being harmed if our customers do not renew their subscriptions or expand their use of our platform; (vii) any failure by us to compete effectively; (viii) our long and unpredictable sales cycles that vary seasonally and which can cause significant variation in the number and size of transactions that can close in a particular quarter; (ix) our lack of control of and inability to predict the future course of open-source technologies, including those used in Pivotal Cloud Foundry; and (x) any security or privacy breaches. All information set forth in this release is current as of the date of this release. 
These forward-looking statements are based on current expectations and are subject to uncertainties, risks, assumptions, and changes in condition, significance, value and effect as well as other risks disclosed previously and from time to time in documents filed by us with the U.S. Securities and Exchange Commission (SEC), including our prospectus dated April 19, 2018, and filed pursuant to Rule 424(b) under the U.S. Securities Act of 1933, as amended. Additional information will be made available in our quarterly report on Form 10-Q and other future reports that we may file with the SEC, which could cause actual results to vary from expectations. We disclaim any obligation to, and do not currently intend to, update any such forward-looking statements, whether written or oral, that may be made from time to time except as required by law.

This blog also contains statements which are intended to outline the general direction of certain of Pivotal’s offerings. It is intended for information purposes only and may not be incorporated into any contract.  Any information regarding the pre-release of Pivotal offerings, future updates or other planned modifications is subject to ongoing evaluation by Pivotal and is subject to change. All software releases are on an if and when available basis and are subject to change. This information is provided without warranty or any kind, express or implied, and is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions regarding Pivotal’s offerings. Any purchasing decisions should only be based on features currently available.  The development, release, and timing of any features or functionality described for Pivotal’s offerings in this blog remain at the sole discretion of Pivotal. Pivotal has no obligation to update forward-looking information in this blog.



2019 Software Trends


Happy holidays. At Pivotal, our mission is to transform how the world builds software. Every year we bring together Pivotal experts, customers, and leading technologists to forecast transformative trends for the New Year. We hope these insights will help readers like you stay ahead of the curve. Enjoy!

—Rob Mee, CEO, Pivotal



The Lean, User-Centered Design, and Agile/XP communities will continue to converge. Lines will increasingly blur; practices will be de-dogmatized, in favor of principles and values. The role of the Product Manager as “mini-CEO of the team” will continue to wane, replaced by Balanced Teams. Collective thinking, and thus innovation, will rise accordingly. Psychological Safety exercises, tests, and consulting will balloon in 2019, as agile teams embrace psychological safety as a fundamental prerequisite to success. SAFe will continue to be sought out by enterprises looking for a quick, ready-made agile transformation solution—but they will continue to wrestle with sustainably delivering value to their users.

—Matthew Parker, Head of Engineering, Pivotal Labs


Applied Artificial Intelligence (AI) and Machine Learning (ML)

There is no shortage of hype about artificial intelligence, but we’re still in the early stages of AI deployment in the enterprise. This is especially true when it comes to deep learning, barely 5 years into the current explosion. The AI realist, who should become more vocal in 2019, might ask: “That’s an impressive demo of image recognition, but how does it translate into helping my business?” AI projects will require different implementations, different data (a lot of it), and different skill-sets for which companies will have to aggressively hire. Fortunately, help is coming in the form of better software products and services to make AI more accessible to the enterprise. One example is tooling for model management, where many of the same patterns from building and deploying complex applications and microservices apply. Startups in this area will continue to attract funding in 2019, as well as those applying AI to specific use cases within verticals.

—Frank McQuillan, Director of Product Management on Applied AI & Machine Learning, Pivotal



Blockchain will fall short in financial services but be utilized more in supply chain tracking in 2019. The initial blockchain use cases were heavily focused within financial services and have yet to take off. Currently, there are large profit margins made by intermediaries of financial transactions, i.e. custodians, as well as heavy regulation introduced by Dodd-Frank demanding central exchanges be intermediaries for control. Until the value chain is disrupted completely and/or regulation adjusts its view, the sector will continue to stall in the area of blockchain. Supply chain tracking is becoming the more relevant use of blockchain and will accelerate in 2019, because all parties benefit from the distributed ledger and immutable transactions blockchain provides. Consumers can be assured that what they order is what they receive, and sellers of the final product can trace, down to the individual item, its origination and the path it takes. In the recent lettuce recall, grocery stores could have barred only the specific farms affected by the recall rather than everything at once, thereby avoiding lost sales and revenue.

—Jesse Bean, CIO of the Field Services Group, Pivotal



“… We have been using a centralized model for years, and at MasterCard, we have been following what they call the ‘zero trust model.’ We’ve been doing it for years, and over time, the only way we thought to be most effective was having centralized security controls, and we had been trying to build that into the containers and build it into the network level and into every single component that we can touch. But, over time, it generates a lot of complexities, and it can be hard to figure out where the problem is. And so it’s very difficult. So, now we’re kind of shifting the models more toward the platform-managed instead of container-managed. It’s not really the container; it’s more like the application server at that point, but now we’re trying to push that functionality into the platform.”

—Jenny Zhang, Principal Consultant, Mastercard

This text was taken from an interview Pivotal did with Jenny at SpringOne 2018. Watch it here.



Data continues to be one of the most important assets that an organization has at its disposal, and the ease with which insights around it can be generated will rapidly accelerate in 2019. New application development will really start to involve artificial intelligence and machine learning co-developers, thanks in large part to the continued trend of decoupled microservices, which will finally begin the realization of “smart applications.” This will also mean that the experiment in data lakes will come to a conclusion for many organizations as they switch back to familiar products and platforms that are either open source or managed by an IaaS or PaaS in order to show some actual ROI. All of this points to an aggressive demand for qualified and experienced data engineers and data scientists, who will be critical in this next wave of innovation.

—Jacque Istok, Head of Data, Pivotal



In 2019, organizations will invest more in design leadership, and we will see more Chief Design Officer and VP of Design appointments across industries. This underpins their transformation into customer-centric product organizations and follows a trend of bringing design capabilities in-house. As design practices mature, they need appropriate executive leadership support to continue to grow the breadth and depth of their practice, scale to match the needs of the organization, develop a core design ops capability, and keep hard-to-find talent engaged.

—Martina Hodges-Schell, Head of Product Design, Pivotal Labs on Design



“… We want to enable our developers as much as possible and give them choice of tools where appropriate, but we have to strike that balance between compliance and efficiency. There are lots of tools that the developers are able to choose from on their desktop and within their teams, from team to team, but when it comes to something like a CI/CD pipeline, that remains within a centralized team. But one of the things that we’re doing now is pairing up those development teams with that centralized team when we make changes or enhancements to make sure that we’re getting real-time feedback and making decisions that will benefit the development teams.”

—Kurt Glore, Director, Cloud and Delivery Engineering, Express Scripts

This text was taken from an interview Pivotal did with Kurt at SpringOne 2018. Watch it here.


IT Modernization / Replatforming

In 2019 I expect to see IT organizations modernize larger (like “system of systems”) and more diverse (like Mainframe) workloads to our rapidly evolving set of cloud abstractions. Doing this will require good decision making that balances technical suitability (the “what”), business criteria (the “why”), and organizational (the “how”) factors aligned with a set of strategies (like Re-Host or Re-Factor) that deliver immediate results (like better security posture). These teams will work incrementally and move as many applications into a full production state as possible. By reducing the operational burden of their existing portfolio, organizations will unlock precious dollars to improve customer experience and drive innovation forward.

—Edward Hieatt, Senior Vice President, Customer Success, Pivotal


2019 will be the year of PaaS and CaaS convergence. Enterprise IT’s accelerating appetite for containerization of applications is going to take Kubernetes further up the stack and the blending of capabilities from PaaS environments will commence. I expect many modernization efforts within enterprise IT throughout 2019 will target a Kubernetes runtime, and convergence will be welcome. A single environment that accommodates new cloud-native applications and provides more control over traditional workloads will appeal to many large enterprises.

—Alan Flowers, VP and CTO EMEA, HCL



“…Right now, Kubernetes is cool, teams at T-Mobile are being told to containerize their applications, so for a lot of them, that looks like PKS. We’re trying to make sure that they’re placing themselves there appropriately and not just doing what is the cool thing, but what is the right thing for their applications. If they don’t need to maintain the CI/CD for that Docker image for the next two or three or five years, they’re much better off just trusting the buildpack and letting that take care of the problems for them.”

—Brendan Aye, Cloud Foundry Platform Architect, T-Mobile

This text was taken from an interview Pivotal did with Brendan at SpringOne Platform 2018. Watch it here.


Some would argue that Kubernetes has crossed the chasm, as evidenced by the 8,000 people attending the final KubeCon of the year in Seattle in December. And while the conference saw a marked increase in presentations from users (rather than only from open source contributors and vendors), most of these still came from cloud-native companies—those born in the web era (say, within the last 10 years). Most of the enthusiasm around Kubernetes still comes from the developer—the consumer of Kubernetes. 2019 is the year that Kubernetes will cross the chasm in the traditional enterprise. Early adopters are seeing value from their use of Kubernetes and are beginning to speak publicly about it. And the people who provide infrastructure to the developers asking for Kubernetes are actively looking for ways to offer that service. Of course, security and compliance reign supreme in the enterprise, and Kubernetes has matured to the point where it is possible to meet these requirements; however, the level of complexity remains a challenge. 2019 will see emphasis placed on making Kubernetes accessible to exponentially more consumers—this will truly enable the crossing of the chasm.

—Cornelia Davis, Senior Director of Technology, Pivotal



“… Everyone has a mobile device now. Smartphones are ubiquitous. Having the ability to take a native mobile client and to connect it into the same server that you might use to present your website was absolutely critical. The next logical step at Merrill, as we were thinking about what we should do, was microservices, because that’s really my view of the best way to handle complexity in today’s world. What it means to me is you can split parts of your system into these swim lanes, to make sure that there’s isolation. In the old-fashioned sort of consolidated monolithic systems, if a reporting job sucked up a lot of database CPU, you might impact people that are doing something totally different on the database, and that’s not good. Now, of course, you can try and engineer around it, but if you keep everything separate within a microservice from the get-go, you’re built with that isolation, you’re built with that sort of stability in mind. Not all things need to scale at the same rate, so if you can split things into these segments, into these microservices, you can scale as appropriate and add more resources where necessary. It keeps the code base small, so the smaller things are, the easier they are to understand, and the more likely that you’re going to be able to maintain and support it and grow it over time.”

—Thomas Fredell, Chief Product Officer, Merrill Corporation

This text was taken from an interview Pivotal did with Thomas at SpringOne Platform 2018. Watch it here.


Open Source

Consolidation around open source software will continue into 2019. In late 2018, IBM and VMware announced acquisitions of Red Hat and Heptio, respectively, and Microsoft acquired GitHub, which hosts millions of open source projects. In addition to more acquisitions, more open source projects will consolidate under foundations, continuing a recent trend exemplified by the Ceph Foundation moving under the Linux Foundation. Additional jockeying for position within the open source Kubernetes project is likely as part of this consolidation in 2019. The acquisitions of Red Hat, with its OpenShift Kubernetes platform, and of Kubernetes company Heptio strengthen IBM’s and VMware’s already strong positions in Kubernetes. Expect more of these Kubernetes-related acquisitions in the future.

—Dawn Foster, Open Source Software Strategy Lead, Pivotal



“…We’re completely invested in growing technology at DICK’S Sporting Goods. We touched on it a little bit earlier, but the concept of the balanced team within each of the product teams is so critical, and that’s something that we’re going to continue to focus on. I have a great partnership with the product management side of the organization and the design side of the organization, and leveraging the experience of our teams working in the immersive experience of Pivotal Labs, as well as the teams that we have onsite, we’re really looking to grow the 31 product teams that we have, and the next 30 after that.”

—J.P. White, Director of eCommerce Application Development, DICK’S Sporting Goods

This text was taken from an interview Pivotal did with J.P. at SpringOne Platform 2018. Watch it here.



“…Safety is paramount for us. We want to make sure that we are building products that are doing exactly what they’re supposed to do, no fuss, no fray on that. That also makes us inherently a very, very conservative company, which means we check everything once, twice, three times, four times. Then we have other people that check our things as well… We’re a 102-year-old company, and that worked very well for us for the first 102 years. That is not going to work well for us going forward. We need to fundamentally make some changes, whether it’s changing the way we do software development or the way we actually think about things, in order to be able to come to market a lot faster. We have to inherently change, which also means that we need to move faster without compromising the safety… The idea of test-driven development, the idea of us sort of building in those safety checks along the way, really kind of helps to answer that question of how do we continue to build safe products that we want to have out there? I think there’s always been this idea that we need to be able to move faster, we need to build to provide value to our end customers, but what’s nice is now we have a framework that we can actually use to be able to go forward and have those conversations. That’s been really powerful.”

—Sophia Bright, IT Director, Boeing

This text was taken from a SpringOne Platform presentation. Watch it here.


“So, within West we are starting to see security teams becoming more of a partner to the product teams. We think about it as going from governance to guidance, or being much more a part of that team, so if you start to see the evolution of things like DevSecOps and things like that, that’s really where we start to see it. Our relationship with a security organization is that if we can standardize things within the platform as much as possible, that gives us an opportunity to be a lot more consistent, with the right way also being the easy way.”

—Thomas Squeo, CTO, West Corp.

This text was taken from an interview Pivotal did with Thomas at SpringOne 2018. Watch it here.


“… If you think you’re going to patch weekly or patch daily or repave daily and you’re not going to do it through automation, you are surely going to need thousands of people, which is going to introduce human error…. Stop treating your servers like they’re something that you want to stick around. Treat them like they’re ephemeral and immutable. And then, redeploy often, even when you don’t think you have to or need to. It’s just good practice. Do it as often as possible… We probably have in the neighborhood of 15,000 to 18,000 virtual appliances that are spun up at any given time in all of our PCF environments… When you deploy an application, and that application has a number of instances, and those instances follow a cloud-native practice, they have the ability to be patched, and the platform to be patched, without causing customer impact. We’ve seen this since we do it once a week right now, and we’re going to be going to once a day by the end of next year.”

—Lance Rochelle, Product Manager, Wells Fargo

This text was taken from an interview Pivotal did with Lance at SpringOne 2018. Watch it here.



The terms “serverless” and “FaaS” (Functions-as-a-Service) are often used interchangeably. However, the characteristics of serverless are more generally beneficial, allowing developers to be more productive since they can focus on code that solves business problems rather than “server” concerns such as load-balancing and scaling deployments to zero and back. The specific benefit of a FaaS is that it narrows the developer focus to an even higher level of abstraction. When describing application frameworks, such as Spring, we refer to that as Inversion of Control: as the framework and/or platform takes control of more concerns, the developer has fewer responsibilities and thus more focus. It also means the platform can provide and patch even more of the stack, such as a security vulnerability in an application framework dependency. Five years ago at SpringOne 2013, Paul Maritz described IaaS as the new hardware and PaaS as the new OS. In 2019, we expect more developers to consider FaaS as the new application framework for those use cases where a single-responsibility event-driven function is a good fit, while still benefiting from “serverless” characteristics for their full-stack cloud-native applications, increasing developer productivity at any layer of abstraction.

—Mark Fisher, Senior Staff Engineer, Pivotal
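To make the abstraction shift Mark describes concrete: with a FaaS, the developer supplies only a single-responsibility, event-driven function, while the platform owns the “server” concerns (transport, scaling, invocation). Here is a minimal, illustrative sketch in Python; the `uppercase_handler` function and the `invoke` wrapper are hypothetical stand-ins, not any particular FaaS product’s API:

```python
import json

# The developer writes only this: a single-responsibility,
# event-driven function with no server concerns at all.
def uppercase_handler(event: dict) -> dict:
    return {"result": event["message"].upper()}

# Everything else belongs to the platform, sketched here as a
# trivial wrapper that handles deserializing the incoming event,
# invoking the function, and serializing the reply.
def invoke(handler, raw_event: str) -> str:
    event = json.loads(raw_event)
    return json.dumps(handler(event))

print(invoke(uppercase_handler, '{"message": "hello"}'))  # {"result": "HELLO"}
```

The point of the inversion of control is visible in the split: as the platform absorbs more of the wrapper’s responsibilities (and the stack beneath it), the developer’s surface area shrinks to the handler alone.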


Software Engineer

As we enter 2019, the role of the software engineer continues to rapidly evolve. Today’s software engineers are being asked to do more than ever before as they manage the full application development lifecycle, from end-user interviews and developing MVPs to production deployments and ongoing support. While this has produced more effective, higher-quality software products and increased customer satisfaction, it has also created a tighter labor market for engineers who possess the necessary skills for the evolving software engineer role. Enterprises are now investing massive amounts of time and money to attract and retain employees who can deliver on the promise of being a “full lifecycle” engineer.

—Ryan Johnson, Associate Director, Accenture




Pivotal Moments from 2018


It’s been a BIG year for us at Pivotal, and we wanted to give our community an easy way to revisit the top moments and stories from 2018. Check out the following posts, videos, and comics that we were proud to bring you this year. Comment if you have a favorite that we missed!


Pivotal moments

From our IPO, to launching a program to support humanitarian organizations, to launching Pivotal Function Service, these were some of our biggest announcements in 2018.


Good advice to read at any time

From Nate Schutta’s must-read six-part series to Cornelia Davis’ philosophical exploration on Kubernetes, these articles will get you thinking architecturally.


Customer stories

We’re incredibly proud to be helping some of the biggest companies in the world and we encourage you to read and watch some of these recent stories of transformation.


How to be more Agile

These workflow-focused posts were seriously popular with developers, engineers, and product managers all over.


Open source

We are strongly committed to many open source communities at Pivotal and here are some posts that reveal a bit about how we embrace open source at Pivotal.



Comics

For those who like to learn visually, we have a collection of comics available on our Facebook page, but here are some of our favorites.


Thank you to everyone—Pivots, partners, and customers—who helped make 2018 a wonderful year!




How Fast Are We Going Now: PKS Ecosystem at KubeCon 2018


With a sold-out KubeCon this week, it can feel like everyone is using Kubernetes. K8s is mainstream. The future is now. We’ve crossed the chasm.

But have we? Last week saw the first major Kubernetes vulnerability announcement. That hissing sound you hear is the air being let out of many an inflated expectation. Reality sets in.

Today, most enterprises are still figuring out how to run Kubernetes in production. How will we support thousands of developers asking for k8s clusters? How will we manage k8s across heterogeneous infrastructure and multiple clouds, and avoid snowflake environments? After last week’s critical vulnerability, how will we patch and update k8s versions with no downtime?

Most enterprises want answers to these questions before using Kubernetes broadly. The good news is that Pivotal Container Service (PKS) already has technical solutions baked into the platform. In practice, though, when adopting new infrastructure software, enterprises look for a few more things to be in place before they go live and go big. They need:

  1. Integrations. Infrastructure software and middleware don’t exist in a vacuum. How will this software talk to my other, existing software? What about the other new software I’m going to need to run this system well?
  2. Partners. Consultants and systems integrators are woven into the fabric of how enterprises operate. Sometimes very deeply. How do I know if my partners are prepared with the right skills to help me on this journey?
  3. Peers. Super early adopters, by their very nature, will take the plunge into a new technology on the strength of their convictions. Fast followers and the majority, however, need to hear from their peers. How can I reduce my risk by learning from someone else who’s already been in my situation? What does good look like?

This post will summarize where PKS is with respect to these integrations, partners, and user stories from peers.

PKS Integrations

PKS inherits a vibrant partner ecosystem from PCF, and these partners have been quick to add PKS integrations across a range of capabilities.


Part of operating Kubernetes successfully requires integration into existing infrastructure, such as monitoring systems. AppDynamics, Datadog, Dynatrace, New Relic, VMware Wavefront, and Weave Cloud have all built monitoring for Kubernetes and BOSH managed clusters. Since PKS provides a simple way to deploy and operate enterprise-grade Kubernetes using BOSH, the PKS ecosystem builds on what monitoring vendors have already invested in.


Operating Kubernetes also introduces some new requirements, like container security. PKS helps automate patching Kubernetes itself, but how are the containers themselves secured? This is where Twistlock’s new PCF integration comes in. Twistlock runs on PKS to add runtime defense for every pod, as well as network and app-layer firewalls. AquaSec has also announced the availability of a field beta of AquaSec on PKS.

Packaged Software

Another opportunity for PKS is providing a great way to run other packaged software. Third-party ISVs are increasingly handing over container images, and orchestrating the infrastructure under K8s makes it easier to operate that third-party software.

For example, CloudBees announced availability of CloudBees Core for PCF, which standardizes how teams deploy and manage the Jenkins-based CI/CD software. YugaByte and Crunchy Data are other such examples: YugaByte DB Enterprise for PCF and Crunchy PostgreSQL for Kubernetes simplify how teams deploy and run their scale-out data management software. Redis Labs has also announced that it is working on bringing Redis Enterprise to PKS, and Confluent has announced it is working on making its Apache Kafka-based software run on PKS.

Under the surface of these scattered examples is a groundswell of work underway across ISVs. Watch for more PKS integrations to come on the Pivotal Services Marketplace.

PKS Systems Integrator Partners

Consultants and systems integrators are where the transformation rubber meets the road. Large enterprises have thousands of applications that partners help maintain and develop. Many have parts of their IT operated by partners. Ingesting Kubernetes into this landscape touches a lot of partner work.

First, you need Kubernetes laid down. PKS makes deploying and updating Kubernetes simple and automated, and it runs on any cloud, including vSphere. VMware’s partner ecosystem paved much of the enterprise with vSphere over the last fifteen years. Now, VMware is gearing up to enable partners on PKS with a new PKS competency next year. Partners like Redapt are already leaning into the “Kubernetes on VMware” opportunity with PKS. ITQ has been training teams for months.


ITQ trains consultants on Pivotal Container Service (PKS)


Next, you need to actually migrate or deploy workloads to Kubernetes. Partners like Solstice see PKS as a great solution for the long tail of enterprise workloads. HCL has found that containerizing an application can reduce its footprint by 30-80%.

Pivotal and VMware have been working with a pilot group of partners this year with expansion ahead via VMware’s Partner Network and the Pivotal Ready Partner Program. Ask your key application and infrastructure partners about their plans for PKS.

PKS User Stories

“We’re different” is a common excuse for upholding the status quo. Yet as a technology or methodology accumulates practical examples, this excuse begins to ring hollow. When it comes to running Kubernetes in production, how different can you be?

T-Mobile already operates consumer-facing production workloads on PKS. Using PKS, their team can have Kubernetes up and running with a couple of BOSH commands, instead of months to get OSS Kubernetes set up.

T-Mobile shares experience running Pivotal Container Service (PKS) in production

West Corp sees PKS as important from a security and compliance perspective, making “the right way the easy way.” European telco Orange shared their PKS journey from proof of concept through a production Kubernetes service at VMworld Barcelona. National Commercial Bank of Jamaica, Playtika, and Swisscom all spoke at VMworld events in the last couple of months.

If you are looking to connect with more users running Kubernetes in production with PKS, plan to attend Cloud Foundry Summit in Philadelphia, PA, next April 2-4, as well as SpringOne Platform in Austin, TX, next October 7-10.

Where are YOU?

As you can read here, the ecosystem around PKS is becoming more visible. The stories of how users are adopting PKS to run Kubernetes in production are emerging. The momentum is building. Looking out a year, you can expect many more integrations, partners, and peer stories. So, what should you be doing now?

On a recent podcast interview, Jeff Dickey of Redapt had a useful suggestion: every vSphere user should have PKS running in their lab today. If you don’t, he warned, your competitors likely do, and you are that many more steps behind in running Kubernetes in production. By getting PKS into testing, you are on the path to building competency in Kubernetes. You know your end goal is Kubernetes in production; starting with PKS means beginning your journey with the end in mind. Take a test drive today!
