SPF failures when host name starts with mail2.*.messagelabs.com

I need a solution

Hi,

I have implemented SPF as per the Symantec guidance, but I get failures when the sending host is mail2.bemta26.messagelabs.com or any other that starts with mail2 (mail2.*.messagelabs.com). When the host starts with mail1, I have no issues.

SPF record

v=spf1 a:cluster1.uk.messagelabs.com include:spf.messagelabs.com a:cluster1a.uk.messagelabs.com ~all
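
For what it’s worth, here is a quick way to see what each part of the record actually authorizes – a minimal sketch assuming the dnspython package, with example.com as a placeholder for our own sending domain:

```python
# Minimal sketch (assuming the dnspython package); "example.com" is a
# placeholder for our own sending domain. It pulls the SPF TXT record for a
# domain and for the spf.messagelabs.com include, so you can inspect which
# mechanisms actually cover the mail2.* clusters.
import dns.resolver

def get_spf(domain):
    """Return the first TXT record that starts with v=spf1."""
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    raise LookupError(f"no SPF record found for {domain}")

print(get_spf("example.com"))           # our own record
print(get_spf("spf.messagelabs.com"))   # what the include actually authorizes
```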



SEP 12.1.7 on RHEL5 won’t talk to SEPM 14.0.1

I need a solution

My RHEL6 and RHEL7 machines have no problem connecting to the management server running 14 and using the reverse proxy for LiveUpdate.  My RHEL5 machines running SEP 12.1.7, on the other hand, cannot seem to communicate.  I’m running the latest JRE.  Installation succeeds, and the SEP logs after the fact show no errors.  However, the client never shows up in SEPM and the client is stuck in a “Malfunctioning” state – presumably because it cannot download definitions.  How do I go about troubleshooting?  The client I’m testing on is running RHEL 5.11.  It’s a test machine, so it’s a fresh installation.  I don’t have ELS with Red Hat, so other than manually installing the latest Java it has never been patched.

FYI – The LiveUpdate log indicates it’s about to connect to the reverse proxy and download, but despite the lack of an error, the definitions never install.
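
For reference, a minimal reachability sketch to rule out basic network/firewall issues (the hostnames are placeholders for our own SEPM and reverse proxy; 8014 is the default SEPM client communication port, adjust if yours differs). Note the RHEL5 box’s stock Python is too old for this snippet, so run it from another machine on the same network segment:

```python
# Minimal reachability sketch; hostnames are placeholders. A failure here
# points at network/firewall rather than the SEP client itself.
import socket

TARGETS = [
    ("sepm.example.com", 8014),      # SEPM client communication port (default)
    ("luproxy.example.com", 443),    # LiveUpdate reverse proxy (placeholder)
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port}")
    except OSError as exc:
        print(f"FAIL {host}:{port} -> {exc}")
```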



Re: lacp and dynamic pools

That is interesting timing – I think we hit exactly the same issue last week. We have two Dell N-series switches in a stack, with LACP in use: each node has one 10G port on switch A and one 10G port on switch B (which works because the switches are stacked).

The reboot of one switch caused one 10G link to drop on all nodes (the other 10G link stayed up), which triggered an “All links in LACP offline” event and caused all dynamic pool IPs to move away.

And we are on OneFS 8.1.0.2 as well…

Our SR is 10593111


7022594: Replication of a workload into Microsoft Azure fails with “An error was encountered while creating cloud resources. The remote server returned an error: (403) Forbidden.”

This document (7022594) is provided subject to the disclaimer at the end of this document.

Environment

Migrate 12.x

Situation

Replication of a workload into Microsoft Azure fails early in the replication process with the error message:

An error was encountered while creating cloud resources. The remote server returned an error: (403) Forbidden.

Resolution

Verify that the time of the Migrate server is correct. Synchronize time with an NTP server if needed and run the migration again.
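
As a quick way to check the clock skew, the following minimal sketch (assuming the ntplib Python package; pool.ntp.org is only an example time source) reports the local offset. Azure typically rejects signed requests whose timestamps are skewed by more than roughly five minutes:

```python
# Minimal clock-skew check, assuming the ntplib package.
import ntplib

response = ntplib.NTPClient().request("pool.ntp.org", version=3, timeout=5)
print(f"local clock offset: {response.offset:+.2f} seconds")
if abs(response.offset) > 300:  # roughly the tolerance before 403s appear
    print("skew is large enough to cause (403) Forbidden - resync with NTP")
```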

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.


Re: VNX Traps(alert) Properties

The MIB for VNX Block is already installed on both storage processors.

To enable it and set up SNMP responses, make sure the Single Notification for Multiple Events check box is not selected in the Action for events field on the General response tab, then follow these steps:

1. In the systems drop-down list on the menu bar, select a storage system.

2. Select System > Monitoring and Alerts > Notifications for Block > Notification Templates.

3. Right-click the template for which you want to set up SNMP responses and select Properties.

4. Click SNMP.

5. In SNMP Management Host, type the IP address of the third-party enterprise-management application that you want to receive SNMP traps.

6. Click Test to test this response.

7. Click OK to close the Template dialog box.

The VNX MIB, by default, does not contain every trap. This is because EMC allows you to customize the notification method for each event that occurs.
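
If you want to confirm that the trap sent by the Test button actually reaches the management host, a bare UDP listener is enough – a minimal sketch that prints raw datagrams rather than decoding them, which is all you need to verify delivery:

```python
# Bare UDP listener on the standard SNMP trap port (162). Binding a port
# below 1024 requires root, and nothing else may already hold 162.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))
print("listening for SNMP traps on UDP/162 ...")
while True:
    data, addr = sock.recvfrom(4096)
    print(f"trap from {addr[0]}: {len(data)} bytes")
```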

https://support.emc.com/kb/463389 Where can I find the Management Information Base (MIB) file for my CLARiiON or VNX storage system?

https://support.emc.com/media48612_How_to_use_SNMP_on_VNX.mp4?language=en_US

https://support.emc.com/docu41454_Configuring_Events_and_Notifications_on_VNX_for_File.pdf?language=en_US

https://support.emc.com/docu41522_Using-SNMPv3-on-VNX.pdf?language=en_US

glen


Demo Purchase Program: Take Advantage of Huge Hardware Discounts

Up to 86% discount on demo hardware, plus free storage software*—because show is better than tell…



Proposals, stats and configuration outlines are all powerful sales tools, but there are few things more powerful than a live demonstration.



To help you both show and tell, we’ve developed a comprehensive Demo Purchase Program covering Converged and Hyper-converged Infrastructure Systems, Storage and Data Protection—and it features discounts you won’t want to miss.



What’s included in the Demo Purchase Program?



The Dell EMC Demo Purchase Program (Program) allows you to access one Dell EMC storage array type per year for each individual demo location you have—or up to two array types, with replication. Storage software used on a not-for-resale, demonstration-only basis will cost you nothing. The platform operating software is provided at the same discount level as the hardware—and those discount levels extend all the way up to a highly attractive 86% off list price.

ProSupport Mission Critical maintenance covering both hardware and software is available at no charge for a year. And, the entire Program is eligible for earned Marketing Development Fund (MDF) spend.



Program options



Dell EMC Partners can participate in the Program in several ways. All Partners, including Authorized Resellers, can pursue one of these two routes:



Purchase

The hardware pricing depends on the product; for individual discount levels, follow the link below. Software is priced at $0, in line with the ‘Software Only’ option, also below. There is no maintenance charge for 12 months, and hardware can be resold after six months.



Software Only

Software is priced at $0 for 12 months, and there is no maintenance charge for the loaner term.

A third Program option is available: the rental approach, which is open to Distributors, as well as to partners at Titanium, Platinum and Gold levels. Within this route, hardware is priced at 1.2% of list per month for a rental period not to exceed 12 months. Software is priced at $0, in line with the ‘Software Only’ (demonstration purposes only) option above, and there is no maintenance charge during the rental term.



Get Started in the Dell EMC Demo Purchase Program Now!



To find out more about the Dell EMC Demo Purchase Program, including the systems and storage families it includes and the discount levels available, please read this handy FAQ document—or find out more from your designated Partner Account Manager.



* Storage software is available at $0 on a not-for-resale, demonstration-purposes-only basis.


Re: DD Cloud Tier chunk size – what is it?

Hi David,

So in general, most of the traffic going from a DDR to the cloud will be file data – this is stored in ~64 KB ‘compression regions’ (chunks), and this size is not configurable. There will be other types of data also written to the cloud (i.e. metadata), however this is likely to be a much smaller proportion of what is uploaded.

Note, however, that it’s not possible to take a file on the DDR and divide its size by 64 KB to work out how many PUT requests you are likely to see, as all data in the cloud is de-duplicated/compressed.

For example, let’s say you have a 10 MB file on the active tier of your DDR which you are going to migrate to the cloud – you might think you can do 10 MB / 64 KB = 160 PUT requests. Note, however, that this wouldn’t be correct, for the following reasons:

– The data being written to the cloud will be de-duplicated against data already in the cloud. For example, if 95% of the data in your 10 MB file already exists within the cloud unit you are migrating to (as it’s referenced by other files which have already been migrated), the DDR will only need to upload the 5% of unique data (i.e. 512 KB, or 8 PUT requests). Working out how much of a file on the active tier is ‘unique’ when compared with existing data in a cloud unit is very complex, and certainly not something that customers can do themselves (so you cannot gain any insight into how much data a file will upload during migration without actually uploading it).

– The data being written to the cloud will be compressed prior to upload. So again, let’s consider that 95% of the file’s data already exists in the cloud unit, so only 512 KB needs to be physically uploaded. If, however, this is compressed via lz before being uploaded, it might get 2x compression, so now only 256 KB of physical data needs to be uploaded (i.e. 4 PUT requests). Again, compression ratios depend on a number of factors, and it’s pretty much impossible to say how ‘compressible’ some data is without actually compressing it during migration (see the sketch below).
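
To make the arithmetic above concrete, here is a small sketch of the estimate. The unique fraction and compression ratio are illustrative inputs only – as noted, they cannot be measured before the migration actually runs:

```python
# Back-of-the-envelope sketch of the PUT-request estimate described above.
CHUNK = 64 * 1024  # ~64 KB compression region; not configurable

def estimate_puts(file_bytes, unique_fraction, compression):
    """Rough PUT-request count for migrating one file to the cloud tier."""
    physical = file_bytes * unique_fraction / compression
    return max(1, round(physical / CHUNK))

TEN_MB = 10 * 1024 * 1024
print(estimate_puts(TEN_MB, 1.00, 1.0))  # naive estimate: 160 PUTs
print(estimate_puts(TEN_MB, 0.05, 1.0))  # after dedup: 8 PUTs
print(estimate_puts(TEN_MB, 0.05, 2.0))  # after 2x compression: 4 PUTs
```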

Basically, customers don’t get any insight into this process, which can make it hard to estimate exact costings. That being said, DD LTR (long term retention to cloud) has been designed so that it writes to/reads from the cloud as little as possible to minimise costs.

Sorry I don’t have a better answer but I hope this helps to some extent.

Thanks, James


ProxySG- TCP_NC_MISS- cannot access the application

I need a solution

Hi Team,

When we are accessing the lsapl application (https://egs-lsapl-02.singaporeair.com.sg) we are getting an error.

While checking that error, we found the below log entry:

PROXIED “none” – 200  TCP_NC_MISS POST  https://egs-lsapl-02.singaporeair.com.sg 8443/ SMTSERVERweb/post services …….

Please find the attached error screenshot for reference.

We have checked the below KB article, but we are not sure that the issue is related to it (the KB describes a 404 code, but in our case it’s 200):

https://support.symantec.com/en_US/article.TECH242…

The code is defined as below:

TCP_NC_MISS: The object returned from the origin server was non-cacheable.
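
For reference, a minimal sketch we can use to pull the TCP_NC_MISS entries for this host out of an access log export (the file name and the plain substring match are assumptions – adjust to the actual log format and field layout):

```python
# Filter a ProxySG access-log export for TCP_NC_MISS entries for one host,
# so the status codes and URLs around the failure can be compared.
HOST = "egs-lsapl-02.singaporeair.com.sg"

with open("access.log") as log:
    for line in log:
        if "TCP_NC_MISS" in line and HOST in line:
            print(line.rstrip())
```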

Proxy version: 6.2.15.6

Please advise how to proceed further.

Thanks,

Ram.



Error: “Your apps are not available at this time. Please try again” When Receiver Connects Through NetScaler Gateway

Solution 1

To resolve this issue, change the beacon entries in StoreFront: add the NetScaler Gateway addresses to the external beacon.

Reference: https://docs.citrix.com/en-us/storefront/3-11/integrate-with-netscaler-and-netscaler-gateway/configure-beacon.html

External Beacon

If you want to use ICA proxy for both internal and external connections (all clients should only go through NetScaler), then add a fake address in the internal beacon of StoreFront.

Note: The internal beacon should only be resolvable inside the network; if the beacon is resolvable externally, then Citrix Receiver will not be able to add the account.
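
To sanity-check the beacon configuration, a minimal sketch such as the following (the hostnames are placeholders for your own beacon addresses) can be run once from an internal client and once from an external client; the internal beacon should resolve only in the first case:

```python
# Check whether each beacon address resolves from this machine.
import socket

BEACONS = ["internalbeacon.example.local", "gateway.example.com"]

for name in BEACONS:
    try:
        print(f"{name} -> {socket.gethostbyname(name)}")
    except socket.gaierror:
        print(f"{name} -> does not resolve here")
```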

Solution 2

The issue relates to compatibility of Receiver 4.x with Web Interface XenApp Services sites. Receiver 4.x supports services sites, but when connecting through NetScaler, users may experience issues as described in CTX136828 – Error When Using Windows Receiver PNAgent through Access Gateway Enterprise Edition Appliance.

Also note that, per Citrix documentation, NetScaler to a Web Interface XenApp Services site is not supported.


Re: Avamar managed file replication failing with error Replication failed: could not open connection to dest DDR

My issue was related to an Avamar backup image that was written to the DD but was corrupted. So when Avamar instructs the DD to replicate that backup image/file, it errors out with that code.

It is difficult to narrow down the exact backup image that is corrupt, but one approach is this: comb through the logs to find which client’s backups are being replicated at the time of failure, note the name, remove that client from the replication scope, and rerun replication; if it fails again, repeat the procedure and keep a running list of clients that fail until the replication job succeeds. Then create a separate replication job containing just the failed clients, and narrow the date range of the replication scope until you find the backup image that is causing the issue (essentially a bisection over the client list – see the sketch below). Likely, if there are multiple servers with this issue, the failures will trace back to the same date/time. Once you have the Avamar backup images identified, you can delete them from Avamar. I’m not entirely sure whether that removes the image from the DD, so if the backup image is sizable, you may want to engage EMC support to dig into the array and remove the file.
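
A minimal sketch of that bisection idea – here replication_succeeds is a hypothetical stand-in for “run the replication job with only these clients and report whether it completed”, and it assumes a single corrupt image, so with several you would repeat the search after removing each hit:

```python
# Binary-search the client list for the client whose image breaks replication.
def find_failing_client(clients, replication_succeeds):
    candidates = list(clients)
    while len(candidates) > 1:
        mid = len(candidates) // 2
        first_half = candidates[:mid]
        # If the first half replicates cleanly, the bad image is in the rest.
        candidates = candidates[mid:] if replication_succeeds(first_half) else first_half
    return candidates[0]
```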
