Currently, when we deploy CWP in our AWS environment, it spins up EC2 instances for threat scanning. Even when no new files land in S3 for long periods, the servers keep running. Is there a way to containerize the solution so that compute only spins up when files land in S3, scans them for threats, and then shuts down?
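One common pattern for this kind of on-demand scanning (a sketch, not CWP-specific guidance): configure an S3 event notification to invoke a Lambda function, and have the Lambda launch a short-lived scan container instead of keeping EC2 instances running. Everything named below — the ECS cluster, task definition, and container/environment names — is hypothetical.

```python
def parse_s3_event(event):
    """Extract (bucket, key) pairs from an S3 event notification payload."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

def handler(event, context):
    """Lambda entry point: launch one short-lived Fargate scan task per object.

    All names below (cluster, task definition, container) are hypothetical.
    """
    import boto3  # imported lazily so parse_s3_event stays testable offline
    ecs = boto3.client("ecs")
    objects = parse_s3_event(event)
    for bucket, key in objects:
        ecs.run_task(
            cluster="scan-cluster",          # hypothetical cluster name
            taskDefinition="cwp-scan-task",  # hypothetical task definition
            launchType="FARGATE",
            # networkConfiguration is required for FARGATE; omitted for brevity
            overrides={"containerOverrides": [{
                "name": "scanner",           # hypothetical container name
                "environment": [
                    {"name": "SCAN_BUCKET", "value": bucket},
                    {"name": "SCAN_KEY", "value": key},
                ],
            }]},
        )
    return {"queued": len(objects)}
```

The scan container would download the object, scan it, and exit, so you pay only for the scan duration rather than for an always-on EC2 fleet.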
AFS appears to use DFS namespace redirection to point you to the individual node in the AFS cluster where your data is actually held. The ELM does not support DFS redirection, so when STATUS_PATH_NOT_COVERED comes back from the initial node we reached, we fail the attempt instead of following the redirection to the correct server. If you happen to connect to the node where your data is, there is no redirection and no error.
Unfortunately, there does not appear to be a workaround except to point the ELM to a specific node in the AFS cluster instead of the main cluster address. This node probably has to be the AFS “leader” node.
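For anyone curious what the missing client behavior looks like, here is a minimal simulation of DFS referral handling. The node map, share path, and helper names are all made up; a real SMB client issues FSCTL_DFS_GET_REFERRALS and retries against the server named in the referral.

```python
# Simulated DFS referral handling; the node map and paths are hypothetical.
STATUS_PATH_NOT_COVERED = 0xC0000257  # NT status meaning "ask for a DFS referral"

AFS_NODES = {
    "node1": {r"\\afs\data"},  # node1 actually holds the share
    "node2": set(),            # node2 only fronts the namespace
}

def try_open(node, path):
    """Simulated SMB open: succeeds only on the node that owns the path."""
    return "OK" if path in AFS_NODES[node] else STATUS_PATH_NOT_COVERED

def open_with_referral(entry_node, path, referred_node):
    """What a DFS-aware client does, and the ELM apparently does not."""
    status = try_open(entry_node, path)
    if status == STATUS_PATH_NOT_COVERED:
        # A real client would issue FSCTL_DFS_GET_REFERRALS here and
        # retry against the target the server returns.
        return try_open(referred_node, path)
    return status
```

This is why pinning the ELM to one specific node works: when the entry node already owns the data, no referral is ever needed.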
We have a NetWorker server on Windows, a DD9500 running DDOS 5.7.x, and RHEL 8.x and 9.x clients that do not cancel sessions on the client side or on the Data Domain when a workflow action timeout occurs on a client due to a stale NFS mount.
The issue involves 350+ RHEL clients all using the same NFS mount from one NFS server. When that NFS server crashed, every client's mount point went stale, the backups hit their timeout, but the sessions were never cancelled on the Data Domain, which then reached its maximum session limit.
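As a client-side stopgap, one way to avoid launching a backup against a stale mount is to probe it with a timeout before the workflow starts: a stat on a stale NFS mount hangs indefinitely, and the timeout turns that hang into a detectable failure. A minimal sketch — the mount path and the idea of wiring it into a pre-backup hook are assumptions, not a NetWorker feature:

```python
import subprocess

def check_mount(path, timeout_s=5):
    """Return 'ok' if stat on the path completes quickly, 'stale' otherwise.

    A stat on a stale NFS mount hangs; the timeout converts that hang
    into a failure we can detect before any save sessions are opened.
    """
    try:
        result = subprocess.run(
            ["stat", path], capture_output=True, timeout=timeout_s
        )
        return "ok" if result.returncode == 0 else "stale"
    except subprocess.TimeoutExpired:
        return "stale"

# Hypothetical usage in a pre-backup hook:
# if check_mount("/mnt/nfs_backup") != "ok": abort the workflow early
```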
The only way to detect the actual session count is netstat -an in SE mode and lsof -i TCP:20149 in BASH mode.
Has anyone else seen this on RHEL clients? We currently have support investigating, but I wanted to get a feel for it in the community.