Sophos Anti-Virus for Linux: Recommendations for On-Access scanning with Nautilus file browser

This article covers recommendations for On-Access scanning with the Nautilus file browser.

Applies to the following Sophos product(s) and version(s)

Sophos Anti-Virus for Linux

Sophos Anti-Virus does not support the filesystem fuse.gvfs-fuse-daemon, which is commonly used by the Nautilus file browser to mount remote shares. GNOME Virtual File System (GVFS) operations do not pass through the kernel as file operations and therefore cannot be intercepted by the on-access scanner.

Sophos recommends pre-mounting remote shares using a different filesystem, for example with the mount command or by configuring /etc/fstab.
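For illustration, a remote CIFS share could be pre-mounted as follows; the server, share, mount point, and credentials file are placeholders, not values from this article:

mount -t cifs //fileserver/share /mnt/share -o credentials=/root/.smbcredentials

or, equivalently, via an /etc/fstab entry:

//fileserver/share /mnt/share cifs credentials=/root/.smbcredentials 0 0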


Related:

7023143: Sentinel 8.2 Appliance in Hyper-V Server 2016 does not start after a reboot.

This document (7023143) is provided subject to the disclaimer at the end of this document.

Environment

Sentinel 8.2

Situation

In Hyper-V Server 2016, the Sentinel appliance does not start when you reboot it and displays the following message:
A start job is running for dev-disk-by..

Resolution

Manually modify the disk UUID.

1) After installing the appliance (first boot), log in as root.
2) Verify that the /dev/disk/by-id entries for the partitions match those listed in the /etc/fstab file.
fstab partition IDs:
====================
CAF-HPV:~ # cat /etc/fstab
/dev/disk/by-id/scsi-14d53465420202020f21b50e22267274c823e145500a372b7-part1 / ext3 acl 1 1
/dev/disk/by-id/scsi-14d53465420202020f21b50e22267274c823e145500a372b7-part2 swap swap defaults 0 0
Actual Partition IDs:
=====================
CAF-HPV:~ # ls -l /dev/disk/by-id/*
lrwxrwxrwx 1 root root 9 Jun 26 22:10 /dev/disk/by-id/ata-Virtual_CD -> ../../sr0
-rw-r--r-- 1 root root 444 Jun 26 22:10 /dev/disk/by-id/scsi-14d53465420202020f21b50e22267274c823e145500a372b7
lrwxrwxrwx 1 root root 9 Jun 26 22:24 /dev/disk/by-id/scsi-360022480f21b50e22267145500a372b7 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 26 22:24 /dev/disk/by-id/scsi-360022480f21b50e22267145500a372b7-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 26 22:24 /dev/disk/by-id/scsi-360022480f21b50e22267145500a372b7-part2 -> ../../sda2
3) Update the following files with the actual partition IDs:
- /etc/fstab
- /etc/default/grub
- /boot/grub2/grub.cfg
Examples are shown below:
Changing partition IDs in fstab file
=====================================
/dev/disk/by-id/scsi-360022480f21b50e22267145500a372b7-part1 / ext3 acl 1 1
/dev/disk/by-id/scsi-360022480f21b50e22267145500a372b7-part2 swap swap defaults 0 0
Changing partition IDs in default grub file
===========================================
GRUB_CMDLINE_LINUX="root=/dev/disk/by-id/scsi-360022480811af9f56ec20d20c3d787db-part1 disk=/dev/disk/by-id/scsi-14d53465420202020811af9f56ec23a4197de0d20c3d787db resume=/dev/disk/by-id/scsi-360022480811af9f56ec20d20c3d787db-part2 nomodeset quiet"
Changing partition IDs in grub.cfg file
========================================
linux /boot/vmlinuz-4.4.131-94.29-default root=UUID=ace9acb3-ac2b-47f0-960d-5b7cd5b51b47 root=/dev/disk/by-id/scsi-360022480811af9f56ec20d20c3d787db-part1 disk=/dev/disk/by-id/scsi-14d53465420202020811af9f56ec23a4197de0d20c3d787db resume=/dev/disk/by-id/scsi-360022480811af9f56ec20d20c3d787db-part2 nomodeset quiet
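Instead of editing /boot/grub2/grub.cfg by hand, the file can also be regenerated from the corrected /etc/default/grub. This is a sketch only, assuming the standard SLES grub2 tooling is present on the appliance:

grub2-mkconfig -o /boot/grub2/grub.cfg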
4) Reboot the VM.
It will detect the SCSI partition IDs correctly and boot the appliance normally.

Cause

This issue occurs because the operating system modifies the disk UUID during installation. Therefore, during reboot it cannot find the disk.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7007308: NFS client cannot perform hundreds of NFS mounts in one batch

This document (7007308) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 2
SUSE Linux Enterprise Server 11 Service Pack 1
SUSE Linux Enterprise Server 10 Service Pack 3
SUSE Linux Enterprise Server 10 Service Pack 4

Situation

A system running SLES 10 or 11 has been set up to attempt hundreds of NFS v3 client mounts at the same time. This could happen via the /etc/fstab file during boot, or by other methods after the machine has booted. However, the system does not complete the task of mounting the directories.

Resolution

NOTE: This Technical Information Document (TID) is somewhat outdated now, as newer kernels and other code in newer SLES 11 support packs have had changes implemented to streamline the NFS client port usage, and make this situation less likely to arise. For example, people on SLES 11 SP4 can typically perform hundreds or thousands of NFS mounts in a large batch, without running into this problem.
However, this TID is being preserved for now for those on older support packs, who may still benefit from it.
NFS is implemented on top of the “sunrpc” specification. There is a special range of client-side ports (665 – 1023) which are available for certain sunrpc operations. Every time an NFS v3 mount is performed, several of these client ports can be used. Any other process on the system can conceivably use these ports as well. If too much NFS-related activity is being performed, all of these ports can be in use, or (when used for TCP) they can be in the process of closing, waiting their normal 60 seconds before being reused by another process. This type of situation is typically referred to as “port exhaustion”. While this port range can be modified, such changes are not recommended because of potential side effects (discussed in item #5 below).
In this scenario, the port exhaustion is happening because too many NFS client ports are being used to talk to an NFS Server’s portmapper daemon and/or mount daemon. There are several approaches to consider that can help resolve this issue:
1. The simplest solution is usually to add some options to each NFS client mount request, in order to reduce port usage. The additional mount options would be:
proto=tcp,mountproto=udp,port=2049
Those can be used in an fstab file, or directly on mount commands as some of the “-o” options. Note that successful usage of these options may depend on having the system up to date. SLES 10 SP3 and above, or SLES 11 SP1 and above, are recommended.
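For illustration, a hypothetical /etc/fstab entry using these options might look like the following (the server name, export path, and mount point are placeholders, not values from this document):

nfsserver:/export/data /mnt/data nfs proto=tcp,mountproto=udp,port=2049 0 0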
To explain them in more detail:
The option “proto=tcp” (for the NFS transport protocol) ensures that the NFS protocol (after a successful mount) will use TCP connections. Adding this setting is not mandatory (it is actually the default) but is mentioned to differentiate it from the second option, “mountproto=udp”.
The option “mountproto=udp” causes the initial mount request itself to use UDP. By using UDP instead of TCP, the port used briefly for this operation can be reused immediately instead of having to wait 60 seconds. This does not affect which transport protocol (TCP or UDP) NFS itself will use. It only affects some brief work done during the initial mount of the NFS share. (In NFS v3, the mount protocol and daemon are separate from the NFS protocol and daemon.)
The option “port=2049” tells the procedure to expect the server’s NFS daemon to be listening on port 2049. Knowing this up front eliminates the usage of an additional TCP connection. That connection would have been to the sunrpc portmapper, which would have been used to confirm where NFS is listening. Usage of port 2049 for the NFS protocol is standard, so there is normally no need to confirm it through portmapper.
If many of the mounts point to the same NFS server, it may also help to allow one connection to an NFS server to be shared for several of the mounts. This is automatic on SLES 11 SP1 and above, but on SLES 10 it is configured with the command:
sysctl -w sunrpc.max_shared=10
and if you want to ensure it is in effect without rebooting:
echo 10 > /proc/sys/sunrpc/max_shared
NOTE: This feature was introduced to SLES in November 2008, so a kernel update may be needed on some older systems. Valid values are 1 to 65535. The default is 1, which means no sharing takes place. A number such as 10 means that 10 mounts can share the same connection. While it could be set very high, 10 or 20 should be sufficient. Going higher than necessary is not recommended, as too many mounts sharing the same connection can cause performance degradation.
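To keep such a setting across reboots, it can typically be added to /etc/sysctl.conf. This is only a sketch, and assumes the sunrpc sysctl entries are already registered when the file is processed at boot (the module may need to be loaded earlier on some systems):

sunrpc.max_shared = 10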
2. Another option is to use automount (autofs) to mount nfs file systems when they are actually needed, rather than trying to have everything mount at the same time. However, even with automount, if an application is launched which suddenly starts accessing hundreds of paths at once, the same problem could come up.
3. Another option would be to switch to NFS v4. This newer version of the NFS specification uses fewer ports, for two reasons:
a. It only connects to one port on the server (instead of 3 to 5 ports, as NFS v3 will)
b. It does a better job of using one connection for multiple activities. NFS v4 can be requested by the Linux NFS client system by specifying mount type “nfs4”. This can be placed in the NFS mount entry in the /etc/fstab file, or specified on a mount command with “-t nfs4”. Note, however, that there are significant differences in how NFS v3 and v4 are implemented and used, so this change is not trivial or transparent.
4. A customized script could also be used to mount the volumes after boot, at a reasonable pace. For example, /etc/init.d/after.local could be created and designed to mount a certain number of nfs shares, then sleep for some time, then mount more, etc.
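As a rough sketch of this approach (the batch size, sleep interval, and the assumption that the NFS entries are marked noauto in /etc/fstab are illustrative choices, not part of the original TID), /etc/init.d/after.local could look something like this:

#!/bin/sh
# Mount the NFS entries from /etc/fstab in batches, pausing between batches
# so that the reserved sunrpc client ports have time to be released.
BATCH=20
COUNT=0
awk '$3 == "nfs" { print $2 }' /etc/fstab | while read MOUNTPOINT; do
    mount "$MOUNTPOINT"
    COUNT=$((COUNT + 1))
    if [ "$COUNT" -ge "$BATCH" ]; then
        sleep 60
        COUNT=0
    fi
done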
5. If none of the above options are suitable or help to the degree necessary, the last resort would be to change the range of ports in use. This is controlled with:
sysctl -w sunrpc.min_resvport=<value>
sysctl -w sunrpc.max_resvport=<value>
and can be checked (and set) on the fly in the proc area:
/proc/sys/sunrpc/min_resvport
/proc/sys/sunrpc/max_resvport
On SUSE Linux, these normally default to 665 and 1023. Either the minimum or the maximum (or both) can be changed, but each change can have consequences:
a. Letting more ports be used for RPC (NFS and other things) increases the risk that a port is already taken when another service wants to acquire it. Competing services may fail to launch. To see what services can potentially use various ports, see the information in /etc/services. Note: That file is a list of possible services and their port usage, not necessarily currently active services.
b. Ports above 1023 are considered “non-privileged” or “insecure” ports. In other words, they can be used by non-root processes, and therefore they are considered less trustworthy. If an NFS Client starts making requests from ports 1024 or above, some NFS Servers may reject those requests. On a Linux NFS Server, you can tell it to accept requests from insecure ports by exporting it with the option, “insecure”.
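For illustration, a hypothetical /etc/exports entry on a Linux NFS server that accepts requests from non-privileged ports might look like this (the path and network are placeholders):

/export/data 192.168.1.0/24(rw,insecure)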

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

Re: Quick Question

First, you don’t really rename an export; you change the directory that it exports. Now, an NFS export on Isilon can contain multiple paths in the same export. So I guess the answer is yes, there will be disruption, but that’s mainly because the clients must:

1. Change their automount / fstab entries

2. Shut down the application writing to the mount.

3. Unmount the current mount.

4. You make your change: first mv /ifs/data/mqs /ifs/data/mgx, then isi nfs export modify <5 or whatever the ID number is> --path=/ifs/data/mgx

5. They re-mount the export

6. They restart the application and ensure everything is working.
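A rough sketch of the client-side steps (the mount points and the server name isilon.example.com are placeholders, not values from this thread):

umount /mnt/mqs
# After the directory and export path have been changed on the cluster,
# update the automount map or /etc/fstab to point at /ifs/data/mgx, then:
mkdir -p /mnt/mgx
mount isilon.example.com:/ifs/data/mgx /mnt/mgx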

All in all, if done properly, you could probably do this with maybe 5 minutes of downtime. You could try to cheat and give the export an NFS export alias, but really I would just keep it simple and do it properly. Your mv command only updates a single node (that of the directory in question), so none of this should take a lot of time.

Just my approach, however, others may have other thoughts.

~Chris

Related:

7022693: OES 2018 Volumes not Mounting Automatically at Boot

This document (7022693) is provided subject to the disclaimer at the end of this document.

Environment

Open Enterprise Server 2018 (OES 2018) Linux

Situation

Changes to /etc/fstab on an OES 2018 server may result in NSS volumes not mounting automatically at boot.

Resolution

This issue was reported to engineering and a one-off fix is currently available through Micro Focus Customer Care (formerly Novell Technical Services). To obtain this one-off fix, please open a support request.

A workaround other than the one-off fix is to edit the /etc/fstab file, find the line that mounts the NSS volume, and remove all options other than noauto and name from that line, as shown below:
Problem entry:
DATA /media/nss/DATA nssvol noauto,rw,name=DATA,norename,ns=Long 0 0
Fixed:
DATA /media/nss/DATA nssvol noauto,name=DATA 0 0
After doing this, reboot the server.
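After the reboot, a quick way to confirm the volume mounted is to check for the mount point from the example above (the volume name DATA is taken from the example and will differ on other servers):

mount | grep /media/nss/DATA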

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

How to Roam Linux User Profile Through Network File System

Configuration overview

The following configurations are required to implement user profile roaming through the NFS mechanism:

  • Configuring NFS Server
  1. Install required NFS packages
  2. Enable/start required services
  3. Configure Firewall
  4. Export shared directories
  • Configuring NFS Client
  1. Install required NFS packages
  2. Mount NFS shares on client
  3. Configure IDMAP

Note that a real example based on the RHEL 7.2 distribution is used to illustrate each configuration step in the following sections. This article also applies to the other supported distributions, such as CentOS, SUSE, and Ubuntu; however, the package and service names mentioned below may differ slightly, and those differences are not covered here.

Configuring NFS Server

  1. Install required NFS packages

Install the nfs-utils and libnfsidmap packages on the NFS server using the following command:

yum install nfs-utils libnfsidmap
  2. Enable/start required services

Enable rpcbind and nfs-server services, using the following commands:

systemctl enable rpcbind
systemctl enable nfs-server

Activate the following four services using the following commands:

systemctl start rpcbind
systemctl start nfs-server
systemctl start rpc-statd
systemctl start nfs-idmapd

Additional details about the services mentioned above:

  • rpcbind — The rpcbind server converts RPC program numbers into universal addresses.
  • nfs-server — It enables the clients to access NFS shares.
  • rpc-statd — NFS file locking. Implements file lock recovery when an NFS server crashes and reboots.
  • nfs-idmapd — It translates user and group IDs into names, and user and group names into IDs.
  3. Set up firewall configuration

We need to configure the firewall on the NFS server to allow clients to access the NFS shares. To do that, run the following commands on the NFS server:

firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --permanent --zone=public --add-service=rpc-bind
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload
  4. Export shared directories

There are two sub steps in this section.

  • Specify shared directory and its attributes in /etc/exports.
  • Export shared directory using command “exportfs -r”

Specify shared directory and its attributes in /etc/exports.

Example:

To share the directory /home on the NFS server with the NFS client 10.150.152.167, we need to add the following line to /etc/exports:

/home 10.150.152.167(rw,sync,no_root_squash)

Note that:

/home — directory name on the NFS server

10.150.152.167 — IP address of the NFS client

rw,sync,no_root_squash — directory attributes

  1. rw – read/write permission to the shared folder
  2. sync – all changes to the filesystem are immediately flushed to disk
  3. no_root_squash – by default, any file request made by user root on the client machine is treated as if made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user “nobody” on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server.

We can find all available options in the man page (man exports).

Export shared directory using command “exportfs -r”

Execute the command “exportfs -r” on the NFS server shell to export the shared directory.

We can also use the command “exportfs -v” to list all shared directories.

More details on exportfs commands:

exportfs -v : Displays a list of shared files and export options on a server

exportfs -a : Exports all directories listed in /etc/exports

exportfs -u : Un-export one or more directories

exportfs -r : Re-export all directories after modifying /etc/exports

Configuring NFS Client

  1. Install required NFS packages

Install the nfs-utils package using the following command.

yum install nfs-utils
  2. Mount NFS shares on the client

There are two different ways to mount the exported directories.

  • Use command “mount” to manually mount the directories.
  • Update /etc/fstab to mount the directories at boot time.

Use the “mount” command to manually mount directories.

Example:

To mount the remote directory /home on 10.150.138.34 to the local mount point /home/GZG6N, the command is as follows:

mount -t nfs -o <options> 10.150.138.34:/home /home/GZG6N

Note that:

10.150.138.34 — IP address of the NFS server

/home — shared directory on the NFS server

/home/GZG6N — local mount point
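To confirm the share is mounted, the mount point from the example can be checked; this is an illustrative check rather than part of the original procedure:

df -h /home/GZG6N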

Update /etc/fstab to mount the directories at boot time.

Examples:

Add a line similar to the following to /etc/fstab.

10.150.138.34:/home /home/GZG6N nfs defaults 0 0

Then execute the command “mount -a” to mount all filesystems listed in fstab.

  3. Configure IDMAP

Update /etc/samba/smb.conf to make sure that each user has a unique UID across all the Linux VDAs. Add the following lines to the [global] section of the smb.conf file:

[Global]
idmap config * : backend = tdb
idmap config <DomainREALM> : backend = rid
idmap config <DomainREALM> : range = 100000-199999
idmap config <DomainREALM> : base_rid = 0
template homedir = /home/<DomainName>/%u
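After editing smb.conf, the configuration syntax can be sanity-checked with the standard Samba tool testparm, assuming the Samba tools are installed on the Linux VDA:

testparm /etc/samba/smb.conf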

With all of the above configuration in place, sessions can be launched normally from the Linux VDA referred to in this article as the NFS client (IP address 10.150.152.167 in the example), while the user directory actually resides on the NFS server (IP address 10.150.138.34 in the example).

Related:

7021107: Filr inaccessible because CIFS based /vashare is no longer accessible

This document (7021107) is provided subject to the disclaimer at the end of this document.

Environment

Micro Focus Filr 3.3

Situation

When Filr is configured for the /vashare disk to reside on a Windows server, it connects to the shared disk via SMBv1. If SMBv1 has been disabled on the Windows server, Filr cannot access the /vashare disk where Filr stores configuration and other data needed for a clustered Filr deployment. When this happens, the ‘mount -a’ command fails to mount the /vashare disk on the Filr application server(s).
Another possible reason for such failures could be that the Windows server requires NTLM Security Support Provider (ntlmssp) only.
End users may experience the following error when trying to access Filr on the Web (via browser):
HTTP Status 500 – Error creating bean with name ‘sPropsUtil’ defined in ServletContext resource [/WEB-INF/context/applicationContext.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.kablink.teaming.util.SPropsUtil]: Constructor threw exception; nested exception is java.lang.IllegalStateException: PropsUtil is a singleton class

The appserver.log from the Filr server will reveal the following errors:

2017-07-06 15:23:21,336 ERROR [localhost-startStop-1] [org.springframework.web.context.ContextLoader] – Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name ‘sessionFactory’ defined in ServletContext resource [/WEB-INF/context/applicationContext.xml]: Invocation of init method failed; nested exception is org.hibernate.cache.CacheException: Unable to initialize MemcachedClient
Caused by: org.hibernate.cache.CacheException: Unable to initialize MemcachedClient
Caused by: java.io.FileNotFoundException: Shared config file ‘/vashare/filr/conf/memcached.properties’ does not exist

Resolution

Possible solutions to this problem are as follows:

  1. Enable SMBv1 on the Windows server.

  2. Add an additional option to the /etc/fstab entry for /vashare so that it uses sec=ntlmssp. The new line in /etc/fstab will look like:

    //<IP_Address_of_host-server>/<shared folder for filr> /vashare cifs credentials=/etc/opt/novell/base/.smbcredentials,rw,nounix,iocharset=utf8,uid=30,gid=8,sec=ntlmssp,file_mode=0777,dir_mode=0777 0 0

  3. Relocate the /vashare share to an OES server so that it is an NFS share rather than a CIFS share (consult the Filr Install Guide on how to set up the shared disk). The /etc/fstab entry for /vashare on the Filr appliance(s) must then be updated so that it no longer uses CIFS for the newly relocated NFS-based /vashare (an illustrative entry is shown below). It is also necessary to reconfigure the Filr appliance(s).
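For illustration only, an NFS-based /vashare entry in /etc/fstab might look like the following; the server name and export path are placeholders, and the actual values depend on how the shared disk is exported from the OES server:

oes-server.example.com:/vashare /vashare nfs rw 0 0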

Cause

The Filr 3.2 appliance is built on SUSE 11.4, which includes version 3.0 of the Linux kernel; this kernel does not support passing a CIFS version to the mount command in order to force it to use CIFS/SMB 2, 2.1, or 3.
Also, by default, the mount does not use sec=ntlmssp, which is required if the hosting server accepts only NTLM Security Support Provider (ntlmssp) authentication.

Status

Reported to Engineering

Additional Information

See also: TID 7015602 – The mount -t cifs command fails to mount an AD share if the AD server requires NTLMv2 with “Extended Security”

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7002571: Identity Manager installation fails when unable to find Java Runtime Environment (JRE)

The IDM engine and Remote Loader (RL) installers never need a pre-installed Java Runtime Environment (JRE) in order to install or run, as these standard components ship with their own JRE.

This error is usually seen when the temporary directory to which files are extracted resides in a mount point using the ‘noexec’ flag. This flag prevents anything from being executed under that mount point and is set by the operating system. Sample output from the mount command showing /tmp mounted ‘noexec’ is shown below:

/dev/sda2 on /tmp type ext3 (rw,noexec,nosuid,nodev)

The easiest way to resolve this is to remount the partition without the noexec flag. The following command, which requires ‘root’ privileges, can be used to remount /tmp with the exec flag:

mount -o remount,exec /tmp

With this change, the IDM installation should no longer complain about its inability to run the virtual machine extracted by the installer.

To set the partition back to normal, either reboot the system, which will apply the settings from /etc/fstab, or run the following command:

mount -o remount /tmp



The following could also explicitly set it back, but it should not be necessary:

mount -o remount,noexec /tmp

Another option would be to remove the ‘noexec’ option from the /etc/fstab entry for the /tmp mount point. This would be a permanent change, so it should be reverted once the installation is complete unless you are sure the change should be permanent.
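For illustration, a hypothetical /tmp entry before and after removing the flag (the device and the remaining options are placeholders and will differ on a real system):

Before: /dev/sda2 /tmp ext3 rw,noexec,nosuid,nodev 1 2
After: /dev/sda2 /tmp ext3 rw,nosuid,nodev 1 2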

Another option may be to export the IATEMPDIR environment variable after setting it to a location that has sufficient space and allows command execution; for example:

export IATEMPDIR='/opt/novell/new-tmp-dir'

#run IDM installer

Related: