Unified Agent – No Access to Internet in WiFi with Captive Portals

I need a solution

Hello,

we have a major problem: our users cannot use free WiFi in hotels or airport lounges when the network uses a captive portal.

The Unified Agent detects the portal, but the user has no way to reach the captive portal page to get Internet access.
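
For reference, captive-portal detection generally works like the sketch below: probe a well-known plain-HTTP URL and check whether the expected body comes back, or whether the request was hijacked by the portal. The probe URL is Microsoft's public connectivity-check endpoint; everything else here is an illustrative assumption, not the Unified Agent's actual logic.

import urllib.request

PROBE_URL = "http://www.msftconnecttest.com/connecttest.txt"  # expected body: "Microsoft Connect Test"

def behind_captive_portal() -> bool:
    try:
        with urllib.request.urlopen(PROBE_URL, timeout=5) as resp:
            body = resp.read(64).decode("utf-8", errors="replace")
            # A portal typically redirects the probe to its login page or
            # serves its own HTML instead of the expected token.
            return "Microsoft Connect Test" not in body
    except OSError:
        return True  # no connectivity at all also needs the user's attention

print("Captive portal suspected" if behind_captive_portal() else "Internet reachable")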

The only two workarounds are:

1. Disable the Unified Agent

2. Use their own WiFi via tethering

Does anybody have a solution for this?

Thanks for your help.

Kind Regards

Alen 


Unified agent without captive portal

I need a solution

Hi;

Let's say I am working from home. If my Unified Agent connects and provides the credentials I used to log on to my BYOD device, without the use of a captive portal, can I still undergo group-based policy evaluation, provided that I have the Auth Connector?

Or must I use the captive portal in this case to provide the domain name?

Kindly

Wasfi


How to Balance Usability and Data Protection in the BYOD Era

Gil Cattelain

Priority one for corporate IT: protect company data across devices and networks. Unfortunately, this conflicts with your users’ top priority: completing tasks and using those resources as needed. Today, the “bring your own device” (BYOD) movement has increased the number of devices and locations. Current mobile device management (MDM) solutions offer control at the expense …


The post How to Balance Usability and Data Protection in the BYOD Era appeared first on Cool Solutions.


How does SEP determine an infection on the mobile phone, when user is connected to the network? Does it scan mobile phone files and data just like a PC?

I need a solution

We have been notified that some mobile devices connected to our network were infected, and we have asked our users to check their mobile phones. May we know how the detection was done? Were files scanned on their phones, or how else was it determined that the phones were infected? This question arose because some staff are worried that the private data on their phones was scanned. Does the product have to inspect the phone's information, files (documents/photos) and OS to detect an infection? If scanning is performed on private phones, we will need to add a disclaimer for users to acknowledge when connecting to the network.


ZENworks at Micro Focus Universe in Vienna, Austria

Gil Cattelain

Micro Focus transforms enterprises with ZENworks – Unified Endpoint Management solutions that bridge the gap between existing and emerging technologies. Innovate faster and carry less risk, even as you adopt BYOD and upgrade your infrastructure. Micro Focus Universe is invaluable to all roles—enabling attendees to gain the insight and assistance necessary to run a secure …


The post ZENworks at Micro Focus Universe in Vienna, Austria appeared first on Cool Solutions.


How 5G Relates to SDN and NFV Technologies – Part I: Introduction and History

Will 2019 be the year when 5G takes off?

We are just a few weeks away from the Mobile World Congress (MWC), where I have absolutely no doubt that 5G will be everywhere, including at the Dell EMC and VMware booth. But what is 5G exactly? And what does 5G have to do with the NFV and SDN technologies we covered in previous blog posts?

There is a lot of confusion surrounding 5G. For example, if you use AT&T and, like me, live in Dallas, Houston, Oklahoma, Indianapolis, New Orleans or Charlotte, your phone may already show a "5G" indicator in the status bar.

So does this mean that 5G is up and running in the U.S.? Well, yes and no.

5G Availability

There is a live 5G network in the cities I mentioned, but you need a 5G hotspot to access it, even if you have the latest iPhone XS, Samsung Galaxy or LG. None of these devices has a modem or antennas that will work on a 5G network.

So is having “5G” on your phone display misleading?

Not altogether.

It is true that AT&T has been upgrading cell towers across the nation with LTE (Long Term Evolution) Advanced features over the last year, including 256 QAM (Quadrature Amplitude Modulation), 4x4 Multiple-Input Multiple-Output (MIMO), 3-way carrier aggregation, etc. A more accurate display on our phones, therefore, would read "4G LTE-Advanced," not "5G" (those marketing folks, again! 😃).
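
As a rough back-of-the-envelope illustration of why those LTE-Advanced features matter, here is a peak-rate calculation. This is a sketch: the 150 Mbps LTE Cat 4 baseline and the simple multiplicative scaling are textbook approximations, not carrier figures.

# Illustrative peak-rate scaling for the LTE-Advanced features above.
baseline_mbps = 150   # assumed LTE Cat 4 baseline: one 20 MHz carrier, 64-QAM, 2x2 MIMO
qam_gain = 8 / 6      # 256-QAM carries 8 bits per symbol vs 6 for 64-QAM
mimo_gain = 4 / 2     # 4x4 MIMO doubles the spatial streams of 2x2
ca_gain = 3           # 3-way carrier aggregation triples the usable bandwidth

peak = baseline_mbps * qam_gain * mimo_gain * ca_gain
print(f"~{peak:.0f} Mbps peak")  # ~1200 Mbps: "gigabit LTE", impressive, but still not 5G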

To be fair, other companies like T-Mobile did something similar back in the day with 3G-4G, and it misled some customers. Verizon already has 5G working in several test cities (e.g., Sacramento, L.A. and Houston), and like AT&T, the only way to really access it is through a hotspot or a prototype cellphone, since 5G-compatible phones don't exist yet.

Why Do I Think this Will Be the Year of 5G?

All major U.S. operators are working against the clock to have 5G coverage in most metropolitan areas by year's end. In addition, we'll see the launch of the first commercial 5G cellphones in Q3-Q4 2019, which I am sure we will preview at MWC 19 in Barcelona. Both NFV and SDN technologies are also reaching maturity: we can even see a consolidation in the number of SDN controllers and NFVI components, while the number of available VNFs is exploding.

We will see in part two of this blog series how 5G and NFV go together like peanut butter and jelly, when I explain the concept of network slicing: a network technology that enables operators to provide networks on an as-a-service basis, allowing a single physical network to be partitioned into multiple virtual networks serving multiple types of customer services.

A Brief History of Mobile Cellular Communications

Okay, Javier, all of that is great, but can you get into more details on what exactly 5G is and the differences compared with 4G?

5G is the fifth generation of cellular mobile communications. The first generation (1G) of analog telecommunications standards was launched by Japan's NTT in 1979 and introduced around the world during the 1980s (the NMT system among them). Some of us may remember the Motorola DynaTAC 8000x, introduced in 1984.

The second generation (2G) started in 1991 and exploded worldwide by the end of the 1990s. Four years later, manufacturers formed the GSM Association. The third generation (3G) was the first mobile generation focused on data, not just voice and texts, and started at the beginning of 2001. The fourth generation (4G) started in 2007 and became popular worldwide after 2010.

First and Second Phases of 5G

So back to the present and 5G. The first phase of real 5G started in May 2018 with the Release-15 specifications, developed by the 3GPP against the IMT-2020 requirements of the ITU (International Telecommunication Union). The ITU is made up of 193 member states and more than 800 sector members, which explains why it takes a bit of time for them to collectively agree on a standard. The positive? It eliminates the issues we had in the past with the dual competing technologies of GSM and TDMA.

The second phase of 5G, and the latest global standard, is Release-16, due to be completed by April 2020 as a candidate for the IMT-2020 technology. This second standard will increase speed and bandwidth exponentially compared to the previous generation, demanding speeds of up to 20 Gb/s and frequencies of 15 GHz or higher. The Third Generation Partnership Project (3GPP) is going to submit 5G New Radio (NR) as a standard that also allows lower frequencies (600 MHz to 6 GHz, versus the 15 GHz mentioned before). Lower frequencies let telecom companies reuse existing frequency licenses without having to buy additional ones, reuse some of the old hardware, and get better coverage. However, 5G NR software running on 4G hardware is only 20-50% faster than traditional 4G. If the new software is loaded on new Enhanced Mobile Broadband (eMBB) hardware, the speed bump can reach 150% on lower frequencies and 12-20x on frequencies above 6 GHz.
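
To make those percentages concrete, here is a small worked example. It is a sketch: the 100 Mbps real-world 4G baseline is my assumption, and I am reading "up to 150%" as a 150% increase.

baseline = 100  # Mbps, assumed real-world 4G LTE throughput

print(f"5G NR software on 4G hardware: {baseline * 1.2:.0f}-{baseline * 1.5:.0f} Mbps (20-50% faster)")
print(f"eMBB hardware, low band:       up to {baseline * 2.5:.0f} Mbps (+150%)")
print(f"eMBB hardware, above 6 GHz:    {baseline * 12:.0f}-{baseline * 20:.0f} Mbps (12-20x)")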

A Final Comment on Frequencies

When I explained that lower frequencies increase coverage, I was speaking of better penetration: getting the signal from a tower to your cellphone through a wall, a building, etc. It's rare to have an open, unobstructed line of sight to a tower if you live in an urban area, and this is one of the biggest challenges of 5G. The second biggest challenge is the operators' need to balance performance against CAPEX to achieve profitability and sustainability as the cost per GB of data keeps decreasing.
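
The physics behind that trade-off shows up in the free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 92.45, with d in km and f in GHz. The sketch below compares illustrative 5G bands; note it covers free-space loss only, before the wall and building penetration losses discussed above, which also worsen at higher frequencies.

import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    # Free-space path loss in dB (d in km, f in GHz).
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

for f_ghz in (0.6, 3.5, 28.0):  # low band, mid band, mmWave (illustrative)
    print(f"{f_ghz:>5.1f} GHz: {fspl_db(1.0, f_ghz):5.1f} dB at 1 km")

# 28 GHz loses ~33 dB more than 600 MHz over the same distance,
# which is why lower frequencies give better coverage.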

Stay tuned for How 5G Relates to SDN and NFV Technologies – Part II: Architecture.

Sources

nokia.com

visualcapitalists.com

The post How 5G Relates to SDN and NFV Technologies – Part I: Introduction and History appeared first on InFocus Blog | Dell EMC Services.



Unified Endpoint Management: One Tool to Rule Them All?

Just recently a lot of the buzz in the end user computing world has been around moving to unified endpoint management. As with many concepts in IT, unified endpoint management, or UEM for short, is defined more by marketing departments than any rigid scientific or legal method. It is the latest step in a journey that endpoint device management has been on for a while, namely the convergence of client management tools (CMT), mobile device management (MDM) and enterprise mobility management (EMM) toolsets.

The challenge is that the definition of UEM is governed by the participants of the conversation. The definition from Wikipedia (derived from Gartner) is probably the best that I have seen:

“Unified Endpoint Management is a class of software tools that provide a single management interface for mobile, PC and other devices. It is an evolution of, and replacement for, mobile device management (MDM) and enterprise mobility management (EMM) and client management tools.”

The Gartner paper is behind their paywall but VMware has made the entire Magic Quadrant paper available for download for free.

VMware Workspace ONE | Source: Gartner, June 2018

The Unified Endpoint Management definition above shows the convergence of the toolsets used for mobile devices (typically macOS, iOS and Android) with those used for Windows. This reflects both the growing importance of the first category of devices within the workplace and Microsoft's move to include the Open Mobile Alliance Device Management (OMA-DM) protocol in Windows 10.

Furthermore, the UEM toolsets are typically cloud-hosted, although some have on-premises variants for those more cloud-averse organisations. This cloud hosting delivers two key benefits:

  1. There is no infrastructure to design and maintain. The software vendor provides you with a tenant and keeps adding patches and new features to it.
  2. These days, devices work outside of the corporate network more often than inside. A cloud-hosted solution means that devices can be managed wherever they are operated without relying on the users connecting to the mothership via VPN.

Organisations have typically been running multiple tools to address these device communities, but this adds complexity to what is already a complex environment. The goal of UEM is to create one tool to rule all the device communities.

The question that needs to be answered is:

Has Unified Endpoint Management reached its Digital Photography moment yet?

This may seem like an obscure question, but reading this blog by Steven Sinofsky caused me to take stock of my mindset regarding UEM. He used the example of the transition from silver halide film to digital photography. In the blog he described the technical buzz saw that devotees of the incumbent technology use to dismantle the challenger technology based on very specific and clearly defined limitations. He argued that over time, the challenger technology closed that gap. In addition, whole new workflows were invented that changed the face of photography.

I have been guilty of wielding that technical buzz saw regarding the mainstream UEM toolsets, targeting what I perceive are their shortcomings:

  • Inability to deliver a bare-metal build
  • Deployment of Win32 applications
  • Transfer of work from deployment engineers to non-IT staff

However, having read Steven’s post I revisited my thinking, looking at things from the other side of the argument.

Inability to Deliver a Bare-metal Build

UEM tools recognise that every device is shipped from its vendor with a perfectly good operating system including drivers for the subcomponents. We do not need to deploy one before we use the device, we simply need to configure the current one to meet our needs. This is the thinking behind Microsoft’s Autopilot process.

You may be thinking that Windows devices are often shipped with trial versions of software as part of the factory installed image and that you do not want that adding to the support complexity. Therefore, Dell Configuration Services recommends our “Generic Image” option, without any trial software, in conjunction with Microsoft Autopilot registration. This provides control over the version of Windows 10 installed and ensures a known clean base to begin UEM control from.

Those with one hand still on the buzz saw will point out that most vendor support processes will replace a failed hard drive with a new “clean” drive without an Operating System. However, as Sinofsky says, “Most problems are solved by not doing it the old way”.

Three mitigations come to mind:

  1. The move to thinner, lighter devices has driven the proliferation of solid state storage solutions which are less likely to fail.
  2. Organisations can change their internal support processes to include a pool of devices to swap with any failed devices, thereby maintaining user productivity. The failed device is repaired and returned to the swap pool.
  3. Once critical mass is achieved, vendor-support processes may move from a repair to a swap out policy.

Addressing this inability to deliver a bare-metal build is unlikely to be resolved by the software and is therefore one area where a mindset change may be the best route.

Deployment of Win32 Applications

This highlights how the march of technical development erodes the arguments presented by the devotees of the incumbent technology. The mainstream tools at the heart of UEM are typically mobile device management tools which were designed to deliver applications to mobile operating systems (Android and iOS).

The design specification would therefore have provided for delivering relatively small applications (a few hundred megabytes) which are simple in nature and without the need for dependency checking. Delivering Win32 applications to Windows 10 devices requires a more sophisticated capability. This capability is evolving, with the two vendors that Gartner sees as the leaders (Microsoft and VMware) in a race to bring this capability to market.

VMware

VMware was first to market with the ability to deliver Win32 apps. Their capability can deploy msi, exe and zip files and differentiates between applications and their dependencies. Additionally, VMware has released their AirLift connector which connects Configuration Manager (ConfigMgr) to your Workspace ONE tenant and enables you to export the applications from ConfigMgr and import them into Workspace ONE without the need for repackaging.

This approach makes it easy to transfer the content and assignment metadata into Workspace ONE and will help customers who wish to move away from ConfigMgr in the long term. Based on my customer experience, ConfigMgr is the most widely deployed toolset; however, we are increasingly seeing customers with Ivanti LANDesk and IBM BigFix who would like a similar capability to help them move. It is to be hoped that the Workspace ONE engineering team can create a similar capability to assist them.

There is an additional benefit to following the AirLift-enabled route. Once the applications have been moved into Workspace ONE, Dell Configuration Services offers Factory Provisioning services. I have described this in more detail in a previous post entitled, Windows 10 Migration: Best Practices for Making a User's First Impression Great. In summary, this enables our customers to provide us with bundles of applications, including Win32 apps, for loading in our factory, thereby streamlining the deployment of their new devices using Workspace ONE.

Microsoft

Microsoft announced at Ignite in September 2018 that their capability would shortly be made available as a public preview. At the time of writing, this facility is still in public preview and being rolled out to Intune tenants. Applications need to be converted from their current format to the new .intunewin format. This process is enabled by an upload prep tool but seems to involve significant manual data entry.
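
For illustration, here is a minimal sketch of driving Microsoft's Win32 Content Prep Tool (IntuneWinAppUtil.exe) from a script. The tool and its -c/-s/-o/-q flags exist; the example application, the paths and the assumption that the tool is on the PATH are placeholders.

import subprocess

def package_intunewin(source_dir: str, setup_file: str, output_dir: str) -> None:
    # -c source folder, -s setup file, -o output folder, -q quiet mode
    subprocess.run(
        ["IntuneWinAppUtil.exe", "-c", source_dir, "-s", setup_file, "-o", output_dir, "-q"],
        check=True,
    )

# Placeholder app: wrap a hypothetical MSI into a .intunewin package.
package_intunewin(r"C:\apps\7zip", "7z1900-x64.msi", r"C:\intunewin")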

Microsoft may well feel that they have this area covered by ConfigMgr which has been the mainstay of application deployment for many customers for years. Indeed, part of their strategy is to use Intune to automatically deploy the ConfigMgr client. This gets around the limitation that a user with standard permissions would not be able to deploy the ConfigMgr client themselves.

This approach then means that the device is now in a state of dual or co-management where control is achieved using two tools. The working premise is that these tools work in concert and provide a low risk approach to transitioning from ConfigMgr to Intune one workload at a time.

Over time, applications are moving from locally deployed to software as a service or web-based. As this happens, we reduce the reliance on Win32 applications and this problem diminishes.

Transfer of Work from Deployment Engineers to Non-IT Staff

This is the key challenge for me when trying to adopt the change mindset. For years we have been delivering fully built systems to our users to minimise their downtime. In part, I suspect this was because we were catering for less technically trained users than we have today. It was also because, to meet security guidelines, users were given low-privilege accounts, which prevented them from completing these activities even if they were willing to do them.

The introduction of solutions such as Microsoft Autopilot means that users no longer need high-privilege accounts for key tasks. However, devices are delivered to them with few, if any, applications included. As described in the Windows 10 Migration post, deploying applications to the device can take a while. In the past this was done on the build bench and so was hidden from users; it happened in what I call Engineer Time.

If application deployment is done after device receipt by the user, it is now in User Time. This has two implications:

  • The device is not ready for use immediately, potentially preventing a user from working
  • Simply transferring the work from the IT team to the users does not make it more cost effective

Let’s break each of those items apart and examine them.

Device Not Ready for Use Immediately

New Devices

Most devices deployed using an Autopilot method today will be replacement devices, where an existing user is getting a new device. Traditionally, users have been asked to hand over the old device on receipt of the new one. Using the application pre-provisioning methods described above may be sufficient to ensure that the device is fully ready at handover. If some further time is required, briefly delaying the return of the old device will allow users to keep working on it while the new device finalises. This effectively negates the impact of the delay, as the user can log in to their new device and allow processes to complete before relinquishing the legacy device.

Dell is investing heavily in technology and processes which will enable it to move more and more of this pre-provisioning work upstream into our factories. We are engaged with both Microsoft and VMware to look at ways to improve the day-one experience of your colleagues by automating as many of the tasks involved in deployment as possible.

Existing Devices

Where an existing device is being upgraded to Windows 10 from an earlier operating system, there are two approaches that will be used: In-place upgrade or wipe and load. An in-place upgrade simply updates the operating system and migrates the data and applications as is. There is no impact here.

Wipe-and-load upgrades require a bare-metal build process and therefore a toolset such as ConfigMgr. It is now possible to create a task sequence that performs the wipe-and-reload process and then sets the device to use Autopilot when it is handed back to the user, but if the device not being immediately ready for the user is a concern, this would not be the route to take. If performing a bare-metal build, it is more likely that the device will be handed back by an engineer fully ready for the user.

Transferring the Work from IT to the Users

New technology often results in the adoption of new or changed working practices. Before computers became standard issue, firms employed banks of typists to turn a manager’s thoughts or dictated words into formal output. No doubt somebody pointed out that asking a manager to type their own documents was less cost effective than asking a typist to do so. However, time moved on and the user empowerment that came from avoiding the need to dictate the content, saw the widespread adoption of the new way of working.

We are on the cusp of a similar change in end user device deployment. My conversations with IT departments are increasingly focused on user empowerment rather than the IT team owning the tasks. Clearly, there are employees within the organisation who earn significantly more than the deployment engineers, but do they prefer being able to get the task done rather than organising a time to meet with an engineer?

There is no definitive answer here: some will want the job done for them, whilst others just want the job done even if it means doing it themselves. In a way, that sums up where we are with unified endpoint management as well.

Dell's viewpoint is that the best experience comes not from moving work from deployment engineers to users, but from automating the tasks so thoroughly that we remove the need for human intervention entirely. The analogy we often use is withdrawing cash from your bank: you can visit the human teller, who will give you the full in-person service, or you can visit the automated teller machine (ATM), which for most of us is more convenient and a better experience.

Summary

For some organisations, typically those with a highly mobile workforce, the scales are going to be tipped in favour of one of the UEM approaches. Those for whom the pace of workforce transformation is a little slower may be happier with the traditional methods for now, but over time they too will find themselves drawn to UEM.

The point is that there is an opportunity to try something new and see whether it has reached the tipping point for you. Are we at the point where the new tools and processes enable you to do things of higher value, outweighing the things they currently cannot do?

The UEM tools available today are not the whole story. They need to be combined with pre-provisioning and factory services to ensure that work is not simply transferred from one team to another but replaced by automation. This is where the Dell Technologies value comes in. Working with both Microsoft and VMware we are pioneering ways to automate the provisioning processes and drive the most value out of the shift to UEM.

As the focus shifts towards user experience in the ongoing battle to retain key staff, it is likely that organisations will look to deliver more user empowerment through a better understanding of their user environment. Dell EMC has developed a series of tools and techniques described in the free eBook, 4 Tools and Techniques to Create Change and Empower the Workforce with Personalized Experiences, to help you meet the needs of an ever more demanding workforce. Key to this approach is the development of user personas and a detailed knowledge of the user profile. All of this data feeds the UEM tools to make for a better initial experience.

If you are ready to ditch the silver halide film and join the digital workforce transformation, please feel free to contact your Dell EMC Sales Representative to discuss how we can help you.

The post Unified Endpoint Management: One Tool to Rule Them All? appeared first on InFocus Blog | Dell EMC Services.



Citrix Workspace app for Mac and Windows fails with "cannot connect to the server" when connecting externally from the internet

We observed that removing the response-rewrite policies made it possible to log in with LDAP only in Receiver.

However, we needed two-factor auth and thus had to bind the policies.

With the response-rewrite policy bound (the one that sets the header "X-Citrix-AM-GatewayAuthType" = SMS), binding the additional policy that sets "PWDCount=0" made the Receiver fail.

Entrust SMS Passcode support reported back that on NetScaler 12.x, the policy must be replaced with this:

add rewrite policy RWP-RES-REMOVE_2ND_PASSWORD "HTTP.REQ.URL.PATH_AND_QUERY.SET_TEXT_MODE(IGNORECASE).EQ(\"/logon/LogonPoint/index.html\")" RWA-RES-REMOVE_2ND_PASSWORD

and a corresponding action:

add rewrite action RWA-RES-REMOVE_2ND_PASSWORD replace_all "HTTP.RES.BODY(99999)" "\"\r\n\" + \"<style type=\\\"text/css\\\">\r\n\" + \"[for=\\\"passwd1\\\"] { display: none; }\r\n\" + \"#passwd1 { display: none; }\r\n\" + \"</style>\r\n\" + \"\r\n\" + \"</body>\r\n\" + \"</html>\r\n\"" -search "text(\"</body>\n</html>\")"
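
A quick way to confirm the rewrite fired is to fetch the logon page and look for the injected CSS. A minimal sketch; the gateway URL is a placeholder.

import requests

resp = requests.get("https://gateway.example.com/logon/LogonPoint/index.html")
if "#passwd1 { display: none; }" in resp.text:
    print("Rewrite action fired: second password field is hidden")
else:
    print("Injected CSS not found: check the policy bindings")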
