WCCP with Multicast IP Addressing

I need a solution

Hi All,

At the moment we are trying to deploy a WCCP transparent proxy using IP multicast addressing, but whenever we try to create a WCCP group there is an error message; please check the picture that I attached.

I want to ask,

- how do we configure IP multicast settings in ProxySG?

I have read the WCCP reference guide, but it does not describe how to configure IP multicast addressing. The existing topology for the WCCP deployment is attached; please advise.
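To make the question concrete, this is roughly what I think the two sides should look like. The Cisco commands are standard IOS WCCP configuration; the ProxySG settings-file directives and the 224.1.1.100 group address are my assumptions that still need to be verified against the WCCP reference for our SGOS release:

! Cisco IOS side: register the web-cache service against a multicast group
ip wccp web-cache group-address 224.1.1.100
!
interface GigabitEthernet0/0
 description client-facing interface
 ip wccp web-cache redirect in
!
interface GigabitEthernet0/1
 description interface towards the ProxySG
 ip wccp web-cache group-listen

ProxySG WCCP settings file (directive names assumed, to be checked against the SGOS WCCP reference):

wccp enable
wccp version 2
service-group 0
home-router 224.1.1.100
multicast-ttl 1
interface 0
end

The intent of the sketch is that, with multicast, the home-router entry carries the group address instead of a unicast router IP, so routers and appliances can join the same group without listing each other explicitly.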

Best Regards

Indra

Related:

Failover issues with SGOS on ESX and Cisco

I need a solution

Hi there,

I’m having an issue with failover on two virtualized Symantec (Blue Coat) proxies on two ESX hosts in two datacenters connected by Cisco switches.

I can see the multicast traffic leaving the proxy and going out over the Cisco switches until the firewall blocks it. The packets should be delivered at Layer 2 to the other switch, so they can reach the other ESX host and the proxy running there.

But on the other host I don’t see any incoming multicast traffic. As a result, both proxies feel responsible for the virtual IP, which causes problems with Skype etc.

Did anyone have such an issue before? On ESX we have already activated promiscuous mode for that VLAN/subnet, but that didn’t change anything.

The hardware proxies in the same network see the multicast traffic coming in from the virtual machines and behave accordingly. Since the virtual proxies don’t receive any multicast traffic, each one always assumes it is the master, because the other one is not sending any updates.

I could understand if there were an issue between the two Cisco switches, so that multicast traffic is not forwarded from one to the other. Another idea is that there is a special setting on the ESX machines I’m not aware of? Any ideas?
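In case it helps, this is the minimal set of checks I am planning; VLAN 10 and vSwitch1 are placeholders for our real names. On the Cisco side the idea is to see whether IGMP snooping is constraining the group and whether a querier/mrouter port exists; on the ESXi side it shows and, if needed, relaxes the vSwitch security policy:

! on each Cisco switch
show ip igmp snooping vlan 10
show ip igmp snooping groups vlan 10
show ip igmp snooping mrouter

# on each ESXi host
esxcli network vswitch standard policy security get --vswitch-name=vSwitch1
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true --allow-forged-transmits=true

If the VLAN has snooping enabled but no querier or mrouter port, the switches may simply not forward the group to each other; enabling ip igmp snooping querier on one of them is a common fix, but whether that applies here is something I still need to verify with the network team.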

Thanks in advance,

Manfred

Related:

Ghost multicast Server to client negotiation

I need a solution

Hi,

we are using a Ghost server to image multiple clients using multicast.

Just a question on bandwidth and communication.

When the image is being transferred, my understanding is that the communication is via UDP.

Does anyone know how the client/server communication for setup works?

Also, what determines the max transfer speed? I know it is the slowest client, but how does this client communicate the speed it is happy with to the server; is this again TCP?

Is there a TCP “management” stream between client and server that checks for variations in speed capabilities?
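If nobody knows offhand, I suppose part of this can be answered with a capture between one client and the GhostCast server during session setup; the interface name and addresses below are placeholders:

# unicast control/handshake traffic between one client and the GhostCast server
tcpdump -nn -i eth0 host 192.168.1.50 and host 192.168.1.10

# the image stream itself, sent from the server to a multicast group
tcpdump -nn -i eth0 ip multicast and src host 192.168.1.10

The image payload should show up as UDP to a 224.x/239.x group, while any per-client control traffic would appear as separate unicast TCP or UDP flows.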

Any help, or a pointer to where this is documented, would be much appreciated.

Related:

DAgent remote agent deploy settings not maintaining

I need a solution

If you select Tools – Remote Agent Installer – Next, you’ll see the Agent Settings Summary:

My settings are set to Connect to the Server Directly, and NOT to discover using Multicast.

No matter what I do, when I deploy the agent and then open the DAgent configuration on the end device, it shows “discover using multicast” as selected instead of “Connect directly to the GSS Server”.

Appreciate any pointers

Related:

Failover: Explicit with 2 armed deployment

I need a solution

Hi,

I need your advice; I created a diagram for better understanding, as attached.

1. The proxies run active/passive.

2. The SL switches run active/active.

For failover, here is the setup:

proxy1: 10.100.10.1, multicast 224.0.0.1
LAN: 192.168.0.1/28
WAN: 192.168.0.17/28

proxy2: 10.100.10.2, multicast 224.0.0.2
LAN: 192.168.0.2/28
WAN: 192.168.0.18/28

VIP and group: 10.100.10.3. I have asked the network team to enable multicast, but I think this only comes into play if an appliance fails.

Question: since it is a two-arm explicit deployment, what happens if:

1. the LAN or WAN port goes down?

2. a port on the SL switch is faulty?

3. how does the failover actually work?

May I know what exactly the Blue Coat failover mechanism is? Is it triggered only when the appliance fails, or by something else as well?

Planning: if it works like the VRRP concept, maybe I just need to create one VIP for the LAN and another for the WAN, and make sure multicast is enabled at the service leaf.

That should work, am I right?
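To make the plan concrete, this is roughly the failover group I have in mind on each proxy. The CLI keywords are my assumptions based on the SGOS failover feature and need to be verified against the admin guide; 224.1.10.3 is only a placeholder group, since 224.0.0.1 and 224.0.0.2 are reserved all-hosts/all-routers addresses and are normally avoided:

#(config) failover
#(config failover) create 10.100.10.3
#(config failover) edit 10.100.10.3
#(config failover 10.100.10.3) multicast-address 224.1.10.3
#(config failover 10.100.10.3) master
#(config failover 10.100.10.3) enable

The same group would be created on proxy2 without the master keyword, so it stays standby until the master stops sending multicast advertisements; a second group would cover the WAN-side VIP in the same way.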

kindly advise 🙂

Related:

How to enable IPv4 Multicast Support on XenServer

By default, XenServer sends multicast traffic to all guest VMs, leading to unnecessary load on host devices by requiring them to process packets they have not solicited. XenServer 7.3 now allows you to enable IGMP (Internet Group Management Protocol) snooping to better support multicast traffic. With IGMP snooping enabled, hosts on a local network no longer receive traffic for multicast groups they have not explicitly joined, which improves multicast performance. This is especially useful for bandwidth-intensive IP multicast applications such as IPTV.

This option is disabled by default; you can enable it on a pool using either XenCenter or the xe CLI:

  • To enable/disable IGMP snooping using XenCenter: from the XenCenter menu, navigate to Pool Properties -> Network Options and choose Enable IGMP snooping (or Disable IGMP snooping to turn the feature off).
  • To enable/disable IGMP snooping using the xe CLI:
    • SSH to the pool master and get the pool uuid from the command xe pool-list, for example:
[root@xrtuk-11-03 ~]# xe pool-list
uuid ( RO)                : 4738ddc1-c801-6a9a-b25d-b3056a7e3aa0
          name-label ( RW): XS7.3Pool
    name-description ( RW):
              master ( RO): ea90159f-82fe-477e-a888-3eb11fbdf279
          default-SR ( RW): <not in database>
  • Enable/disable IGMP snooping for the pool using the command:
xe pool-param-set uuid=<POOL_UUID> igmp-snooping-enabled=<true|false>

For example:

[root@xrtuk-11-03 ~]# xe pool-param-set uuid=4738ddc1-c801-6a9a-b25d-b3056a7e3aa0 igmp-snooping-enabled=true
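To confirm the change took effect, the generic xe param-get call can be used with the same parameter name (it should print true when snooping is enabled):

[root@xrtuk-11-03 ~]# xe pool-param-get uuid=4738ddc1-c801-6a9a-b25d-b3056a7e3aa0 param-name=igmp-snooping-enabled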
Note:
  • IGMP snooping is available only when the network backend uses Open vSwitch.
  • When enabling this feature on a pool, it may also be necessary to enable an IGMP querier on one of the physical switches (see the sketch after these notes); otherwise, multicast in the subnet will fall back to broadcast and may decrease XenServer performance.
  • When enabling this feature on a pool running IGMP v3, VM migration or a network bond failover will result in the IGMP version switching to v2.
  • To enable this feature with a GRE network, users should set up an IGMP querier in the GRE network or forward the IGMP query messages from the physical network into the GRE network; otherwise, multicast traffic in the GRE network will be blocked.
  • Only IPv4 multicast is supported; this feature does not apply to IPv6.
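As an illustration of the querier note above, on many physical switches a snooping querier can be enabled with something like the following (Cisco IOS style, shown purely as an example; the address and whether this is needed at all depend on your network):

! enable an IGMP snooping querier so group memberships keep being refreshed
ip igmp snooping querier
ip igmp snooping querier address 10.0.10.1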

Useful tool

After enabling this feature, you can check the IGMP snooping table in dom0 (Domain Zero) with the following command:

# ovs-appctl mdb/show <bridge>

For example:

[root@NKGXENRT-16 ~]# ovs-appctl mdb/show xapi1
 port  VLAN  GROUP        Age
   14     0  224.0.0.251   21
    8     0  224.0.0.251   16
   14     0  227.0.0.1     15
    1     0  querier       24

Every record contains the port of the switch (OVS), the VLAN ID of the traffic, the multicast group that the port solicited, and the age of the record (in seconds).

If the GROUP column of a record is a multicast group, this means there is a receiver listening on that port, and the IGMP Report message came in on the port of this record. For example, the line “14 0 227.0.0.1 15” means there is a receiver listening on port 14 for multicast group 227.0.0.1. The Open vSwitch bridge will forward traffic for the 227.0.0.1 group only to the ports listening for that group (for example, port 14), rather than flooding it. The “15” under the Age column means this mapping of port 14 to group 227.0.0.1 has existed for 15 seconds. By default, the timeout interval is 300 seconds, which means the record will expire and be removed from the table if no new IGMP Report message is received on the listening port within 300 seconds.
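To produce such a record on purpose (for example, when validating the setup), a guest VM can simply join a test group. One common way, assuming iperf2 is installed in the guests, is:

# inside a guest VM: act as a UDP receiver bound to the multicast group,
# which makes the guest send an IGMP Report and appear in mdb/show
iperf -s -u -B 227.0.0.1 -i 1

# from another VM or host: send a short multicast test stream to that group
iperf -c 227.0.0.1 -u -T 4 -t 10 -i 1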

If the GROUP column of a record is “querier”, that means IGMP Query messages come in on the port of this record. A querier sends IGMP Query messages periodically; these are flooded to ask who is listening to which multicast group. Once a receiver gets an IGMP Query message, it responds with an IGMP Report message. Consequently, the querier keeps sending Query messages periodically so that receivers keep responding with Report messages, and the switches can keep the mappings in the IGMP snooping table from expiring.
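If you need to verify that a querier is actually active on a bridge, a quick capture in dom0 should show the periodic Membership Query packets, assuming tcpdump is available there and the bridge device (xapi1 in the example above) is visible as an interface:

# watch IGMP Queries and Reports on the bridge
tcpdump -nn -i xapi1 igmp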

The VLAN column indicates the VLAN in which a receiver/querier lives. “0” means the native VLAN; if you want to run multicast on a tagged VLAN, there should be records for that VLAN. Take VLAN 1209 as an example:

[root@xrtuk-11-03 ~]# ovs-appctl mdb/show xenbr0
 port  VLAN  GROUP            Age
   40     0  224.0.0.252       53
   45  1209  224.0.0.252       52
   45  1209  224.0.0.251       52
   40     0  239.255.255.250   50
   40     0  224.0.0.251       50
   45  1209  239.255.255.250   47
    1     0  querier           55
    1  1209  querier           52
    1  1210  querier           19

Note:

For the VLAN scenario, you should have a querier record whose VLAN column equals the VLAN ID of the network; otherwise multicast won’t work in that VLAN.

Related:

VxRail 3.5 Installation Question

Hey There!



I’m encountering some strange behavior while trying to install a VxRail appliance in my lab, and was wondering if someone could help me make sense of it.

The appliance is a P470 model, VxRail version 3.5 with 4 nodes (QuantaPlex TS-2U running ESXi 6.0 U2 with a hybrid disk configuration).



Now I’m at the phase where I need to follow the installation’s “step by step” GUI. The only thing is that after accepting the EULA, the discovery screen is not shown, and the “How would you like to configure VxRail?” screen appears right away.



So, what happened to the node discovery screen? Is it normal for this version (3.5), or is something wrong? How do I proceed from here?



I did make sure that the loudmouth service is running on all nodes, and that the settings on the TOR switch are correct (same native VLAN for all ports, IPv4 and IPv6 multicast enabled).

Also, I feel it is important to mention that this system went through a factory reset (using the SolVe procedure and the reset.pyc script) prior to the current installation attempt.
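For completeness, this is how I re-checked discovery from the nodes themselves; the loudmouth service script path is my assumption based on common VxRail guidance, so it should be verified against the SolVe procedure for this release:

# on each ESXi node, check (and if needed restart) the discovery daemon
/etc/init.d/loudmouth status
/etc/init.d/loudmouth restart

# confirm IPv6 is enabled on the vmkernel interfaces used for discovery
esxcli network ip interface ipv6 get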

Thanks in advance!

Related:

Re: Vspex blue deployment

TOR is just a label for the switch or switches connected to VxRail/VSPEX Blue. You can use any switch as long as the network requirements are met:

– IPv6 Multicast enabled on Management VLAN

– IPv4 Multicast enabled on vSAN VLAN

– The ports connected to VxRail allow all VLANs: Management, vSAN, vMotion + VM VLANs.

Connect the “Management PC” to an access port in the Management VLAN. Assign an additional IP from the VxRailManager subnet, for example 192.168.10.199, and then you should be able to access the VxRailManager VM.

If the management VLAN is tagged, you need to configure it on all nodes. Follow the installation procedure.
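As a rough illustration of those requirements, the node-facing and management ports usually look something like this (Cisco IOS style, with placeholder VLAN IDs 10/20/30/40 for Management/vSAN/vMotion/VM traffic; how exactly IPv4/IPv6 multicast is enabled depends on the switch platform, so that part is not shown):

! node-facing ports: trunk carrying all VxRail VLANs
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40

! access port for the Management PC in the Management VLAN
interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 10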
