Why Integration Matters for Creating Productivity Hubs with Exceptional Worker Experiences

Trends in productivity tools, improved connectivity, changing work culture and shifting worker expectations have led many organizations to rethink their technology investments. Most enterprises, even those in highly regulated industries such as financial services and life sciences, have reduced their data center footprint. They’ve moved workloads to the cloud, virtualized desktops, adopted SaaS-based solutions, increased BYOD support and given their workforce the flexibility to work from anywhere. So, what’s missing? A connected worker experience across enterprise applications – one that’s worker obsessed and focused squarely on workforce productivity and sentiment.

Bring Enterprise Applications Together Into a Cohesive Worker Experience

Enterprise applications aren’t going away; they’re at the heart of the business and will remain so. That doesn’t mean they should operate in silos. Organizations run a variety of business applications and IT solutions that all too often fall short on worker experience. Examples of these apps and solutions include:

Typical enterprise apps used by business workers:

  • Payroll apps such as ADP or Paychex
  • HR services powered by Workday, Kronos or Oracle ERP Cloud
  • CRM from Salesforce or Microsoft Dynamics
  • Travel and expense via SAP Concur
  • ITSM delivered through ServiceNow or BMC Remedy
  • Collaboration/productivity apps such as Microsoft Office 365, Slack or Google G Suite

Enterprise platforms used by IT departments:

  • Amazon Web Services or Microsoft Azure hosting servers and custom apps
  • VMware or Hyper-V for virtualization and cloud infrastructure
  • VMware AirWatch or Citrix XenMobile for device management and mobile apps
  • Pivotal Cloud Foundry, Docker, Kubernetes, and others for modernizing legacy apps using microservices architectures

Once enterprise workloads have been containerized and deployed in the cloud, and workers can use any device and work from anywhere, is that the end game? Of course not; the journey never ends. Adopting purpose-built SaaS solutions has created new challenges around business process automation, workflow and integration, resulting in multiple interfaces, security requirements and disconnected worker experiences.

Many corporate intranets and portals either provide a collection of links to other applications or embed content from external sources – neither of which delivers an engaging worker experience. Why? Links take workers to other locations rather than giving them a cohesive experience within a single portal. And because content often lives in multiple repositories managed by different vendors, searching – or even browsing – seamlessly to reach the right information is a challenge. The result is a spaghetti space that is frustrating and time consuming for the workforce. The solution is a unified Digital Workplace that collates information from multiple enterprise applications and lets workers take action without switching context. This is the difference between a productive, engaging experience and one that simply frustrates the workforce.

Adopting purpose-built SaaS solutions has resulted in multiple interfaces, security requirements and disconnected worker experiences.

iPaaS Becomes the Enabler for Delivering More Productive Worker Experiences

Traditional EAI platforms such as WebSphere, TIBCO and BizTalk focused on integration and process automation at the data level; the emphasis was on batch processing, message queuing and transforming data between sending and receiving systems, not on the worker experience. These integrations were often complex, expensive deployments requiring specialized skillsets and infrastructure, and depending on the systems involved, additional adapters and/or scripting were frequently required for full-scale integration.

Integration in a cloud-first world is no longer about batch processing, ETL jobs, EDI or XML formats. Modern enterprise systems support REST APIs and JSON for data interchange. Major vendors have recognized the shift – IBM Integration Bus, Microsoft Integration Services and Azure Logic Apps all aim to address integration needs in the cloud. However, the complexities of deployment and the challenges around worker experience still exist.
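To illustrate how lightweight REST/JSON integration has become, a query against a ServiceNow-style Table API endpoint can be assembled with nothing more than the JDK’s built-in HTTP client. This is a sketch: the instance host is a placeholder, not a real deployment, and sending the request would additionally require authentication.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RestIntegration {
    // Builds a GET request against a ServiceNow-style Table API endpoint.
    // The instance host is a placeholder; a real call also needs credentials.
    static HttpRequest buildIncidentQuery(String instance) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + instance
                        + "/api/now/table/incident?sysparm_limit=5"))
                .header("Accept", "application/json") // JSON, not XML or EDI
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildIncidentQuery("example.service-now.com");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Contrast this with a classic EAI deployment: no adapters, no broker infrastructure, just an HTTP request and a JSON payload.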

While holistic, transaction-focused middleware might still make sense for certain scenarios, the new breed of integration-platform-as-a-service (iPaaS) solutions offers lightweight, plug-and-play, low-code/no-code integration that is quick to deploy and easy to manage. Many provide graphical interfaces for orchestrating process automation, empowering knowledge workers with business acumen to create and manage workflows and automation. Dell Boomi has been a leader in the iPaaS space, and other players such as Informatica, MuleSoft and SnapLogic offer varying degrees of flexibility and niche feature sets. There’s considerable consolidation happening in this area, with major cloud solution providers such as Salesforce acquiring MuleSoft and Google taking over Apigee to bolster their iPaaS offerings.

iPaaS solutions effectively tackle cloud-to-cloud and on-premises integrations and enable drag-and-drop process automation. However, there is still a void in terms of a seamless, integrated worker experience. A combination of a dashboard/portal framework, a search engine and cloud-based collaboration tools – working in conjunction with an iPaaS solution – forms the foundation of a comprehensive digital workplace and addresses the worker experience issue.

Digital Workplace Platforms Bring Together Enterprise Applications and Solutions for a Cohesive Personalized Experience

Platforms such as ServiceNow provide a flexible layout, a navigation scheme, a built-in search engine and widget-based rendering of external content. SharePoint and Office 365 provide all of the above, along with additional collaboration, document management and social features, AI/ML-based suggestions and native integration with the Office suite. Combined with personas and robust worker profiles as key enablers, these platforms can integrate with other enterprise systems – either point-to-point or through an iPaaS platform – to deliver an integrated digital workplace solution.

Productivity Hubs in the Real World

Slack is another notable example, combining collaboration and communication needs into a single chat-based interface. Slack pioneered the use of bot frameworks to enable integration and to submit actions to other applications without leaving Slack channels. For example, workers can schedule a WebEx meeting, book flights, submit expenses in Concur or track project issues in Jira, all within the Slack interface. There’s a bot for everything, and the catalog keeps growing.
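Under the hood, most of these bots boil down to calls against the Slack Web API. As a minimal sketch, a bot that posts an expense notification into a channel needs only the chat.postMessage method; the channel name, message text and token value below are illustrative, and the payload builder is deliberately simplistic.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class SlackNotifier {

    // Builds a Slack Web API chat.postMessage request.
    // NOTE: no JSON escaping is done here; a real bot would use a JSON
    // library. Channel and text values are illustrative only.
    static HttpRequest buildPostMessage(String token, String channel, String text) {
        String payload = String.format(
                "{\"channel\":\"%s\",\"text\":\"%s\"}", channel, text);
        return HttpRequest.newBuilder()
                .uri(URI.create("https://slack.com/api/chat.postMessage"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildPostMessage(
                "xoxb-your-bot-token",   // placeholder bot token
                "#expenses",
                "Expense report EXP-1234 was approved in Concur.");
        // Actually sending requires a valid bot token:
        // HttpClient.newHttpClient().send(request,
        //         HttpResponse.BodyHandlers.ofString());
        System.out.println(request.method() + " " + request.uri());
    }
}
```

The same pattern – a small authenticated REST call triggered from a chat command – is what lets workers act on Concur, Jira or WebEx without ever leaving the channel.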

For organizations already invested in Microsoft technologies, Microsoft Teams offers similar advantages, providing a consistent worker experience through native integration with Exchange, SharePoint Online, OneDrive, Yammer and Office. Workers can use persistent chat to connect with colleagues, schedule meetings, share screens and collaborate on documents – all within a modern interface. With PowerApps and Flow integration, and new features such as support for application pages and rendering of SharePoint web parts, Teams is truly becoming the productivity hub of choice. Bot and app frameworks also enable integration with virtual assistants such as Alexa or Cortana for voice-based command execution, and will likely support IoT integrations soon.

While bots are great for point-to-point integrations and for performing micro actions within Slack or Teams, advanced workflow automation – involving transactions across multiple enterprise applications and decision-tree logic – can be implemented with an iPaaS solution such as Dell Boomi. A conceptual architecture for such a digital workplace implementation brings these industry-leading enterprise solutions together.

Help the Workforce Realize Their Full Potential

Dell EMC Consulting is a thought leader in the Digital Workplace. We’ve helped organizations transform their worker experiences with modern intranets and collaboration tools, integrating with enterprise applications to deliver consumer-grade, personalized hubs and experiences.

We start by engaging with workers and IT stakeholders to:

  • Understand needs and current pain points
  • Identify key personas, journeys and required capabilities
  • Assess the current IT landscape and existing investments
  • Conduct workshops with sponsors and stakeholders to establish a vision
  • Present the technical approach and roadmap to realize the vision
  • Prioritize use cases and formalize program workstreams
  • Design and implement projects to modernize applications and integrate enterprise systems
  • Collaborate with corporate communications on change management to drive adoption of the modern digital workplace

Looking to modernize your workers’ experiences? Comment below to start the conversation, or contact Dell EMC Sales to learn how our Consulting Services can help.

The post Why Integration Matters for Creating Productivity Hubs with Exceptional Worker Experiences appeared first on InFocus Blog | Dell EMC Services.



Trend Micro Research Uncovers Major Flaws in Leading IoT Protocols


Hundreds of thousands of unsecured machine-to-machine deployments put global organizations at risk

DALLAS–(BUSINESS WIRE)–Trend Micro Incorporated (TYO: 4704; TSE: 4704), a global leader in cybersecurity solutions, today warned organizations to revisit their operational technology (OT) security after finding major design flaws and vulnerable implementations related to two popular machine-to-machine (M2M) protocols, Message Queuing Telemetry Transport (MQTT) and Constrained Application Protocol (CoAP).


The benefits of agile integration, Part 1: The fate of the ESB

While many large enterprises successfully use the enterprise service bus (ESB) pattern, the term is often disparaged in the cloud-native space, especially in relation to microservices architecture. It is seen as heavyweight and lacking in agility. What has happened to make the ESB pattern appear so outdated, and what should we use in its place? What would lightweight integration look like?


How to configure EMC vCloud Director Data Protection reporting server with non-default RabbitMQ settings

Article Number: 498617 | Article Version: 3 | Article Type: Break Fix



Avamar Plug-in for vCloud Director 2.0.3, 2.0.4 and 2.0.5

After deploying the vCD Data Protection Extension, the reporting server database does not capture vCD activities; all the tables are defined but empty.

Reviewing the reporting server log (/var/log/vcp/vcpreporting.log) shows two types of errors.

Issue #1: Problems connecting to the RabbitMQ server, with errors like:

ERROR (AmqpReportingSubscriber.java:108) – org.springframework.amqp.AmqpIOException getting connection to rabbitmq, error:java.net.NoRouteToHostException: No route to host

or

error:javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?

or

ERROR (AmqpReportingSubscriber.java:108) – org.springframework.amqp.AmqpConnectException getting connection to rabbitmq, error:java.net.ConnectException: Connection refused

Issue #2: RabbitMQ uses a non-default systemExchange name. The log shows:

ERROR (AmqpReportingSubscriber.java:237) – could not create queueing consumer, returning null consumer: null

Unlike other VCP components (for example, VCP Cells or the VCP UI), the reporting server does NOT talk to VMware vCloud Director, so the RabbitMQ configuration cannot be fetched automatically. If the environment uses non-default settings, extra steps are required.

Both issues are discussed in the VCP install guide.

In the preflight tool section of the guide, see: “Note: Unless you have configured your setup to deploy a Reporting Server, the rabbit.addr and rabbit.port might not exist in your properties files since the other components read this information from vCloud Director, and the RabbitMQ connection will not be validated”

For issue #1 (connection failures because SSL/TLS is not in use):

In the guide, review the “Network connection and port usage summary” table and the footnote for AMQP: “Assuming use of TLS, unencrypted AMQP (not recommended) uses 5672 instead”


Issue #2: RabbitMQ using a non-default systemExchange name

Also review the “RabbitMQ deployment procedure” section of the guide:

vCloud Director publishes notifications on a specific exchange. vCloud Director itself does not create this exchange; it must be created as part of the RabbitMQ setup. The default exchange name is systemExchange.

For issue #1 (RabbitMQ is not configured to accept SSL/TLS connections):

Ensure the RabbitMQ settings are defined in the properties configuration. If SSL is disabled, the port is likely 5672 and needs to be defined explicitly; otherwise the connection defaults to 5671, which is not listening (Connection refused).

For example, in vmdefault.properties:

rabbit.port=5672

The deployvm.sh bash script on the CPSH also needs to be altered

FROM

echo "rabbitmq.url=${RABBIT_ADDR}:${RABBIT_PORT}?ssl=true"

TO

echo "rabbitmq.url=${RABBIT_ADDR}:${RABBIT_PORT}?ssl=false"

For issue #2, define the custom exchange name in the bootstrap.properties file on the reporting server.

1. Edit /etc/vcp/bootstrap.properties and add the custom exchange name (for example, vcdexchange) by adding the line:

vcp-rpt.amqpExchange=vcdexchange

2. Restart the reporting server

service vcprpt restart


DPA GUI hangs and crashes when tries to Delete Smart Groups

Article Number: 499956 | Article Version: 3 | Article Type: Break Fix



Data Protection Advisor 6.3

Unable to delete a smart group: after clicking on the smart group, the application server (GUI) hangs and crashes after a while. While deleting the smart group, Error 12002 is shown, and the GUI hangs for a few minutes and then crashes.

Server log:

2017-05-05 10:32:05,583 ERROR [com.emc.dpa.command.nodes.DPASmartGroup] (Thread-3988 (HornetQ-client-global-threads-296458697)) Failed to generate smart group children, please check configuration: com.emc.apollo.common.exception.ApolloException: The Object Type with the name 'object_name' was not found (exception.name_not_found)

2017-05-04 10:32:25,515 ERROR [com.emc.dpa.command.nodes.DPASmartGroup] (Thread-1855 (HornetQ-client-global-threads-296458697)) Failed to generate smart group children, please check configuration: com.emc.apollo.common.exception.ApolloException: The Object Type with the name 'object_name' was not found (exception.name_not_found)

Listener log shows:

<node version="1" type="ExternalSmartGroup">
  <id>6a277479-c44b-45ae-ad53-dbf5ebcd63b9</id>
  <link rel="self" href="https://server.com:9002/apollo-api/nodes/6a277479-c44b-45ae-ad53-dbf5ebcd63b9"/>
  <name>delete me</name>
  <displayName>delete me</displayName>
  <globalName>Groups:Smart Groups:delete me</globalName>
  <createTime>2017-03-27T11:36:35.0+01:00</createTime>
  <lastSeen>2017-03-27T11:36:39.797+01:00</lastSeen>
  <creatorType>unknown</creatorType>

For some reason, DPA tries to generate the smart group's children when the smart group is deleted (in theory this can happen if it is an on-demand smart group). The smart group fails to run, and the deletion process breaks.

In this case, the workaround below allowed the smart group to be deleted:

  • Go to “Edit Smart Group” window
  • Click on “Select Frequency”
  • Select “Once a day at”
  • Save Smart Group
  • Delete Smart Group

Please contact Dell EMC Technical Support for further details or information.



7021536: Verastream Host Integrator Event Handler Examples: Writing Data to JMS Message Queue

About this Event Handlers Example

This event handler generates an XML document using data from a table procedure, which is then sent to a JMS (SonicMQ) message queue.

The following sample shows output from all procedures in the CCSDemo model. The same event handler could work with all procedures; however, parameters vary according to the procedure.

<?xml version="1.0" encoding="UTF-8"?>
<Transactions>
  <GetAccount Table="Accounts">
    <FilterParameters>
      <AcctNumber>167439459</AcctNumber>
    </FilterParameters>
    <InputParameters/>
  </GetAccount>
  <AccountSearch Table="Accounts">
    <FilterParameters>
      <MiddleInitial>c</MiddleInitial>
      <State>ri</State>
      <LastName>smith</LastName>
    </FilterParameters>
    <InputParameters/>
  </AccountSearch>
  <GetTransactions Table="Transactions">
    <FilterParameters>
      <AcctNumber>167439459</AcctNumber>
    </FilterParameters>
    <InputParameters/>
  </GetTransactions>
</Transactions>

It is undesirable for an exception to interfere with procedure execution (unless this is the intended behavior), so log any XML-related exceptions and continue processing.

Parameters for connecting to the JMS queue can be hard coded or read from the Verastream properties files script.properties (server) or dt_script.properties (Design Tool).

Event Handler Code

This Verastream Event Handler example has 3 steps:

  1. Generate an XML document from procedure filter and data parameters.
  2. Convert the XML document to a string or a file.
  3. Connect to a (SonicMQ) JMS queue and submit the XML string.

The first two steps are functions called from the main handler method. generateXMLdocument() generates an XML document from input parameters, filter and data parameters, appending each key/value parameter pair as an element node to either a FilterParameter or InputParameter parent node. outputDoc2String() serializes the XML document to a string, transactionXML.

public ProcedureRecordSet executeProcedure(ExecuteProcedureEvent event)
        throws ApptrieveException {
    try {
        generateXMLdocument(event);
        outputDoc2String(transactionXML);
    } catch (Exception e) {
        //throw new ApptrieveException(e.getMessage());
    }
    // ...method continues below...

The remaining steps continue within executeProcedure(), prior to executing the actual table procedure. Parameters for connecting to the JMS queue are read from script.properties (server only) or dt_script.properties (Design Tool only):

broker = event.getHandlerProperty("broker");
queueName = event.getHandlerProperty("queue");
uname = event.getHandlerProperty("user");
password = event.getHandlerProperty("password");

Next, a transaction queue is opened, the XML string is sent to the appropriate queue, and the queue is closed.

try {
    queueTransactionData(broker, uname, password);
    sendMessage(queueName, transactionXML);
    closeQueueConnection();
} catch (Exception e) {
    //System.out.println("Caught exception: " + e.getMessage());
}

Finally, the table procedure is executed, returning the recordset.

return event.defaultProcedure();
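The downloadable handler contains the full implementation. As a rough sketch of what the outputDoc2String() helper might look like internally – an assumption on our part, not the actual source – the standard JAXP transformer API can serialize a DOM document to a string:

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlSerializer {
    // Serializes a DOM document to a string, as outputDoc2String() might.
    static String outputDoc2String(Document doc) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny document resembling the sample output above.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("Transactions");
        doc.appendChild(root);
        Element proc = doc.createElement("GetAccount");
        proc.setAttribute("Table", "Accounts");
        root.appendChild(proc);
        System.out.println(outputDoc2String(doc));
    }
}
```

Because serialization happens before the JMS send, an exception here is caught and logged rather than propagated, matching the handler's goal of never blocking procedure execution.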

Downloading the Example

The zipped Java file and ReadMe can be downloaded from the Download Library at logTransactionToQueue.zip.

Installing the Event Handler

To install the event handler, follow these steps.

  1. Copy logTransactionToXML.java to the models\<your model>\scripts\src directory.
  2. In Design Tool, click Events > Rebuild.
  3. Assuming the build is successful, click Model > Tables, select any procedure, and then click Advanced Properties.
  4. Under Event handler, select logTransactionToXML, and then click Properties and note the description.
  5. Save the changes.

To set the classpath, add the following line to script.properties, where <JMS> is the directory that contains the SonicMQ jar files:

vhi.script.classpath=<JMS>\sonic_Client.jar;<JMS>\broker.jar;<JMS>\gnu-regexp-1.0.6.jar;<JMS>\javax.jms.jar;<JMS>\jaxp.jar;<JMS>\xercesImpl.jar;<JMS>\xmlParserAPIs.jar;

For use by the Design Tool, dt_script.properties in <VHI>\etc needs to be edited in a similar manner.

Parameters for connecting to the JMS queue can either be hard-coded or (as here) read from the Verastream properties file: script.properties (server) or dt_script.properties (Design Tool). To use properties, copy the following to the appropriate .properties file.

queue=SampleQ1

user=

broker=<SonicMQ server>:2506

password=

To test the event handler, be sure that the necessary SonicMQ broker is running. The default queues in SonicMQ are SampleQ1, SampleQ2, SampleQ3 and SampleQ4; be sure that the expected queue is available. In the Design Tool, choose "Procedure test..." from the Debug menu, select the desired table and procedure, enter procedure filters or data parameters, and click Execute. Some messages are written to the Debug Console, and a new message is added to the desired queue. The XML data is readable in the body of the message using a browser.



The benefits of lightweight integration, Part 2: Moving to lightweight integration

Microservices principles are revolutionizing the way applications are built by enabling a more decoupled and decentralized approach to implementation, which leads to greater agility, scalability, and resilience. These applications still need to be connected to one another and to existing systems of record. It clearly makes sense to use microservice techniques in the integration space, too. Lightweight integration brings the benefits of cloud-ready containerization to integration architecture, and provides the opportunity to escape from the heavily centralized ESB pattern toward more empowered and autonomous application teams.
