Reporter to Splunk

I need a solution

Hi,

Can Symantec Reporter forward to a Splunk server? If so, what is the mechanism? Is it forwarding the actual access log data that a ProxySG has forwarded to it, or

does Reporter forward reporting data (“metadata”) to the Splunk server rather than the actual access logs?

Kindly

Wasfi

Related:

Retain 4.8 is now released!

qmangus

We’re pleased to announce that Retain 4.8 is now available! Our latest release features these enhancements: additional Android file types can now be viewed; if a sender/recipient is not recognized as an internal user for both PIN & SMS messages, a mock email address is used when forwarding (e.g. Phone_numberRetain@test.com); MySQL 8.0 is now supported …

+read more

The post Retain 4.8 is now released! appeared first on Cool Solutions.

Related:

Messages Accepted by MessageLabs but not received by client

I need a solution

I have noticed that customers attempting to email users on the Symantec MessageLabs platform have their emails apparently accepted by eu2.messagelabs.com, but the messages do not show up in the recipients’ inboxes.

Our mail server does not send out spam, and we have constant checking to enable us to catch spam before it is sent.

However, some users have had mail forwarding loops in the past which may be responsible for the spam classification.

It would be nice if the MessageLabs system actually rejected mail it had no intention of delivering to the final user, so that IT teams could investigate non-delivery issues. It has taken me two weeks to figure out where the problem lies, as delivery appears to be “accepted” and normal on our end.

Edit: I am also not getting confirmation that my request for an investigation of the IP has been received by the web page at https://ipremoval.sms.symantec.com/

Any assistance in this matter would be greatly appreciated.

Kind regards,
Jessica

Related:

Working with lots of sub-domains

I do not need a solution (just sharing information)

Has anyone had the requirement to receive mail for multiple sub-domains, and how have you configured this?

On our current platform we can receive anything pointed at us (resolved by the MX lookup) and can route to our Exchange servers based on a match on the destination domain. For example, if our domain were contoso.com, we would have many MX entries under the contoso.com domain in DNS (test, help, users, etc.). An incoming message to matt@test.contoso.com would resolve the MX to our server, and the SMTP route would then match *.contoso.com and forward the messages.
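To make the matching behaviour concrete, here is a minimal Python sketch of the kind of wildcard domain routing described above; the route table and server names are purely hypothetical and not tied to any particular mail platform.

```python
# Minimal sketch of wildcard-based SMTP routing (hypothetical route table and
# server names): the destination domain of each recipient is matched against
# patterns such as "*.contoso.com" to pick the next-hop mail server.
from fnmatch import fnmatch

ROUTES = {
    "*.contoso.com": "exchange.contoso.com",   # any sub-domain of contoso.com
    "contoso.com": "exchange.contoso.com",     # the bare domain itself
}

def next_hop(recipient):
    """Return the mail server for the recipient's domain, or None if no route matches."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    for pattern, server in ROUTES.items():
        if fnmatch(domain, pattern):
            return server
    return None

print(next_hop("matt@test.contoso.com"))   # -> exchange.contoso.com
print(next_hop("someone@example.org"))     # -> None (no matching route)
```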

At this stage it appears that, within Symantec.cloud, I am going to have to set up many domains, one for each MX record, as I cannot find a way to accept messages to the sub-domains.

Related:

NetScaler GSLB is answering queries for vservers that are down


When the GSLB vserver is down, with all the corresponding GSLB services in the down state, the DNS query response can still contain the IP addresses of the down GSLB services. This is by design/expected behavior.

However, you can configure the GSLB virtual server to send an empty down response (enable EDR on GSLB Vserver). When this option is set, a DNS response from a GSLB virtual server that is in a DOWN state does not contain IP address records, and this prevents clients from attempting to connect to GSLB sites that are down.


https://docs.citrix.com/en-us/netscaler/10-1/ns-tmg-wrapper-10-con/netscaler-gslb-gen-wrapper-10-con/ns-gslb-protct-setup-against-fail-con.html

Configuring a GSLB Virtual Server to Respond with an Empty Address Record When DOWN

A DNS response can contain either the IP address of the requested domain or an answer stating that the IP address for the domain is not known by the DNS server, in which case the query is forwarded to another name server. These are the only possible responses to a DNS query.

When a GSLB virtual server is disabled or in a DOWN state, the response to a DNS query for the GSLB domain bound to that virtual server contains the IP addresses of all the services bound to the virtual server. However, you can configure the GSLB virtual server to send an empty down response (EDR) in this case. When this option is set, a DNS response from a GSLB virtual server that is in a DOWN state does not contain IP address records, but the response code is successful. This prevents clients from attempting to connect to GSLB sites that are down.

Note: You must configure this setting for each virtual server to which you want it to apply.

To configure a GSLB virtual server for empty down responses by using the command line interface

At the command prompt, type:

set gslb vserver <name> -EDR (ENABLED | DISABLED)

Example

> set gslb vserver vserver-GSLB-1 -EDR ENABLED
Done

To set a GSLB virtual server for empty down responses by using the configuration utility

  1. Navigate to Traffic Management > GSLB > Virtual Servers.
  2. In the GSLB Virtual Servers pane, select the GSLB virtual server for which you want to configure empty down responses (for example, vserver-GSLB-1).
  3. Click Open.
  4. On the Advanced tab, under When this VServer is “Down,” select the Do not send any service’s IP address in response (EDR) check box.
  5. Click OK.
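Once EDR is enabled, a quick way to check the behaviour from a client is to query the GSLB domain directly against the ADNS service while the vserver is DOWN. The sketch below uses the dnspython library (assumed installed); the domain name and name-server IP are placeholders for your own values.

```python
# Rough verification sketch (dnspython assumed installed; domain and ADNS IP
# below are placeholders). With EDR enabled, a DOWN GSLB vserver should return
# NOERROR with an empty answer section, which dnspython reports as NoAnswer.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.0.2.10"]          # placeholder: NetScaler ADNS service IP

try:
    answer = resolver.resolve("gslb.example.com", "A")
    print("Records returned:", [r.address for r in answer])   # EDR off: IPs of DOWN services
except dns.resolver.NoAnswer:
    print("Empty NOERROR response - EDR behaviour")
except dns.resolver.NXDOMAIN:
    print("NXDOMAIN - the domain is not configured on this name server")
```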

Related:

Integration Symantec DCS – QRadar

I need a solution

Hello everyone,

Is anyone aware of a guide on how to configure syslog forwarding, or any other integration, between DCS and QRadar SIEM?

I can’t find any specific connector and I was wondering if there is any.

Has anyone faced the same issue?

Cheers

Matteo

Related:

Downloaded file using Chrome on Android shows increased file size and is corrupted with APPFirewall enabled

Chrome on Android seems to be making parallel requests to download a single file, using range requests. APPFW drops the “Range” header from the client request while forwarding it to the back-end server, which causes the problem. Below is the detailed explanation.

Say the real file size the client intends to download is 72299385 bytes.

[CLIENT]=====[NS]=====[SERVER]

====================================================================

Working Scenario – APPFW DISABLED or APPFW with only BASIC-CHECKS enabled

====================================================================

1. The first request is a normal GET request for the file; this request is forwarded to the backend.

2. The backend responds with 200 OK along with a content-length of 72299385 and the “Accept-Ranges: bytes” header; this response is forwarded to the client and the download starts.

3. Now the client knows the file size is 72299385 bytes (from the Content-Length header in the response) and also that the server supports range requests (from the presence of Accept-Ranges in the response).

4. While the file is still being downloaded, the client initiates another parallel GET request for the same file, but includes the header “Range: bytes=36149692-”, i.e. bytes from 36149692 to the end of the file; this is half of the Content-Length received in the original response. In short, this is a parallel request to download the 2nd half of the file. This request is forwarded to the backend as is.

5. The backend responds with “206 Partial Content” + “Content-Range: bytes 36149692-72299384/72299385”; this is the back-end response for the requested byte range. This response is forwarded to the client and the download of the 2nd half starts.


===== At this point, two downloads are in progress=====

The first request for the entire file #2

The second request for the 2nd half of the file #5

============================================

6. Once the first download reaches the halfway mark, the client terminates the TCP connection with a FIN and also a RST (just in case).

7. The second request is also closed once its download is done.

At the end of all this, the client has the first half of the file from the 1st request and the second half of the file from the 2nd request, and it reconstructs the actual file.
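Steps 4 and 5 above can be reproduced by hand to see what an unmodified range request looks like. Below is a hedged sketch using the Python requests library (assumed installed; the URL is a placeholder): a path that honours the Range header answers 206 with a Content-Range header, while a path that strips the header falls back to 200 with the full body.

```python
# Illustrative sketch of steps 4-5 (requests library assumed installed; URL is
# a placeholder). A honoured Range header yields 206 Partial Content plus a
# Content-Range header; a stripped header yields 200 OK with the whole file.
import requests

url = "https://example.com/big-file.bin"                       # placeholder
resp = requests.get(url, headers={"Range": "bytes=36149692-"}, stream=True)

print(resp.status_code)                      # 206 if the range was honoured, 200 if not
print(resp.headers.get("Content-Range"))     # e.g. "bytes 36149692-72299384/72299385"
print(resp.headers.get("Content-Length"))
resp.close()
```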

====================================================================

Non-Working Scenario – APPFW with ADVANCED-CHECKS enabled

====================================================================

1. The first request is a normal GET request for the file; this request is forwarded to the backend.

2. The backend responds with 200 OK along with a content-length of 72299385 and the “Accept-Ranges: bytes” header; this response is forwarded to the client and the download starts.

3. Now the client knows the file size is 72299385 bytes (from the Content-Length header in the response) and also that the server supports range requests (from the presence of Accept-Ranges in the response).

4. While the file is still being downloaded, the client initiates another parallel GET request for the same file, but includes the header “Range: bytes=36149692-”, i.e. bytes from 36149692 to the end of the file; this is half of the Content-Length received in the original response. In short, this is a parallel request to download the 2nd half of the file.

APPFW with advanced checks enabled drops the “Range” header from the client request (expected) and forwards the request to the backend.

5. Without the Range header, the backend responds with the entire file: 200 OK along with a content-length of 72299385. This is forwarded back to the client and the download starts.

===== At this point, two downloads are in progress=====

The first request for the entire file #2

The second request also for the entire file #5

============================================

6. Once the first download reaches the halfway mark, the client terminates the TCP connection with a FIN and also a RST (just in case).

7. The second request is also closed once its download is done.

At the end, the client has the first half of the file from the first request and the entire file from the 2nd request. The client then seems to merge these and ends up with a file 1.5 times the original size, which is of course corrupt.
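A simple way to spot this condition on a client is to compare the size of the saved file with the Content-Length the origin reports for a plain GET; a ratio of roughly 1.5 matches the behaviour described above. The sketch below is only illustrative and uses placeholder URL and path values.

```python
# Quick sanity-check sketch (URL and local path are placeholders): compare the
# saved file's size with the Content-Length reported by the origin server.
import os
import requests

url = "https://example.com/big-file.bin"        # placeholder
local_path = "/tmp/big-file.bin"                # placeholder: file saved by the browser

expected = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
actual = os.path.getsize(local_path)
print(f"expected {expected} bytes, got {actual} bytes, ratio {actual / expected:.2f}")
```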

Related:

ProxySG | Please help to recommend Reverse Proxy

I need a solution

Dear All

I would like to configure a reverse proxy, and my customer wants forwarding to the destination host abc.co.th:9704/analytics/.

If the destination includes a port and a path, how can I configure it on the forwarding host? I can configure a specific port, but I cannot find a way to include the path.

Please advise. Thank you for your help.

Best Regards,

Chakuttha R.

Related:

Disaster Recovery options for QRadar

Hi Community!

I’ve read the entire QRadar SIEM High Availability Guide for 7.3.1 and am still struggling to design a disaster recovery solution for our QRadar systems (two 3105 All-in-One appliances). I’ve also read different topics on this subject with very good explanations by JonathanPechtaIBM.

We are looking for a solution that offers **almost no data loss** in case of a failure of Site A. Yes, we have a Site A and a Site B.

There are three DR deployment scenarios according to the HA Guide.
Option 1: Primary QRadar Console and backup console
Option 2: Event and flow forwarding
Option 3: Distributing the same events and flows to the primary and secondary sites.

**Option 1** describes console failover in a scenario where I would have a hot console and a cold standby. In case of failure, I have to manually start the cold console, change its IP, and apply the backup of the failed machine. In this scenario there is NO DATA SYNC; I have to restore data manually. Once my first machine is restored, I must manually copy the delta data back to the primary. Data can be lost during the failover period, so this option is discarded.

**Option 2:** Event and flow forwarding. I have similar deployments on both sites, and both are active. Events and flows have to be forwarded from the first system to the secondary system using:
A) off-site targets (configured under System and License Management)
B) routing rules: there are two modes, Online and Offline, configured under “Forwarding Destinations” and “Routing Rules” (there is a very good explanation here: https://www.ibm.com/developerworks/community/forums/html/topic?id=b8be5e81-d1ed-452b-bf55-7659f78684fb)

Online mode uses best effort, which can cause data loss if there is no communication between sending and listening devices. Therefore, it is discarded.

Offline mode sends the data after it has been written to disk, with a sync delay of more than a minute because Ariel writes data to disk every minute. No data is lost because the offline process uses bookmarks to keep track of the last data sent. This seems to be a good method for fulfilling my requirements.
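To illustrate why the bookmark mechanism avoids data loss, here is a purely conceptual Python sketch; it is not QRadar code, and the file names are hypothetical. A persisted offset marks the last record already forwarded, so after an outage forwarding resumes from that point instead of dropping or duplicating events.

```python
# Conceptual sketch of bookmark-based forwarding (NOT QRadar code; file names
# are hypothetical). The bookmark stores the byte offset of the last event
# already sent, so forwarding can resume exactly where it stopped.
import json
import os

BOOKMARK_FILE = "bookmark.json"   # hypothetical local state file
EVENT_LOG = "events.log"          # hypothetical events already written to disk

# Seed a tiny demo log so the sketch runs standalone.
if not os.path.exists(EVENT_LOG):
    with open(EVENT_LOG, "w") as f:
        f.write("event 1\nevent 2\n")

def load_offset():
    if os.path.exists(BOOKMARK_FILE):
        with open(BOOKMARK_FILE) as f:
            return json.load(f)["offset"]
    return 0

def save_offset(offset):
    with open(BOOKMARK_FILE, "w") as f:
        json.dump({"offset": offset}, f)

def forward_new_events(send):
    """Send every event written since the last bookmark, then advance the bookmark."""
    with open(EVENT_LOG, "rb") as f:
        f.seek(load_offset())
        while True:
            line = f.readline()
            if not line:
                break
            send(line.decode().rstrip("\n"))
        save_offset(f.tell())

forward_new_events(print)   # stand-in for shipping events to the secondary site
```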

**[Question1]**: What is the difference between off-site targets and routing rules using the offline mode? If there is none, why are there two options?

In the guide it also says on page 33 “Periodically, use the content management tool to update content from the primary QRadar to the secondary”.

**[Question2]**: What is meant by “content” in this sentence? Apps, DSMs? In general, content that is not included in the backup file?

It is also mentioned on page 33: “In the case of a failure at site 1, you can use a high-availability (HA) deployment to trigger an automatic failover”.

**[Question3]:** In this case, one should be aware of the latency limitation between the two sites. Moreover, it is then no longer necessary to forward events and flows using one of the two methods mentioned above, right?

**[Question4]**: Using routing rules in “online” mode, it is possible to drop the data and bypass the CRE after it has been forwarded. What are the use cases for that? A system would send events to QRadar and we would like to forward some of them to another system but not store them on QRadar? Or let the CRE test some of them but just store them for logging reasons?

**Option 3**: Distributing the same events and flows to the primary and secondary sites

In this scenario I have a load balancer, or another similar component, which is responsible for sending data to both sites. If Site A fails, Site B is still active. Both components have different IP addresses, and it is not necessary to forward data or to back up and restore anything. This seems to be the most expensive option, because both sites should have similar architectures and there is the additional load balancer.

**[Question5]**: The load balancer represents a single point of failure (SPOF) and should therefore be made redundant? According to picture 3 on page 37, all data is sent to a load balancer on Site 1. What happens if it fails?

I know that if a whole site fails, I have more things to worry about than logging, but I would like to go through all the methods.

To sum up: the method to choose should be Option 2 with offline mode, right?

Thank you in advance

PS: This video (https://www-01.ibm.com/support/docview.wss?uid=swg21997652) provides a wrong definition at 0:27. It says “in **online** mode all data is stored in the database and then forwarded”. This is wrong, right?

Regards,

Bruno

Related:
