Sophos Anti-Virus for Linux: On-Access filesystem support

This article describes the filesystems supported for on-access scanning on Linux platforms.

The following sections are covered: Good filesystems, Unsupported filesystems, and Other filesystems.

Known to apply to the following Sophos product(s) and version(s)

Sophos Anti-Virus for Linux 9 and Sophos Anti-Virus for Linux 10

Good filesystems

The following filesystems are known to work with Sophos Anti-Virus for Linux:

Filesystem Name Talpa Support? Fanotify Support?
btrfs Yes Yes
cifs Yes No
ecryptfs Yes Yes
ext2 Yes Yes
ext3 Yes Yes
ext4 Yes Yes
fuse Yes Yes
fuseblk Yes Yes
iso9660 Yes Yes
jfs Yes Yes
minix Yes Yes
msdos Yes Yes
ncpfs Yes Yes
nfs Yes Yes
nfs4 Yes* No
nssadmin Yes No
oes Yes No
overlayfs Yes Yes
overlay Yes Yes
ramfs Yes Yes
reiserfs Yes Yes
smbfs Yes Yes
tmpfs Yes Yes
udf Yes Yes
vfat Yes Yes
xfs Yes Yes
zfs Yes No

*Note: Talpa does not support locally mounted (non-network) nfs4 filesystems.
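
To compare these tables against a particular host, you can list the unique filesystem types currently mounted there. A minimal sketch using findmnt from util-linux (df -T gives similar information):

findmnt -n -o FSTYPE | sort -u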

Unsupported filesystems

The following filesystems are unsupported. The majority of these are pseudo-filesystems that do not contain regular files and cannot be scanned.

Filesystem Name Talpa Support? Fanotify Support? Notes
aufs No No Pseudo-filesystem
autofs No No Pseudo-filesystem
binfmt_misc No No Pseudo-filesystem
bpf No No Pseudo-filesystem
cgroup No No Pseudo-filesystem
configfs No No Pseudo-filesystem
debugfs No No Pseudo-filesystem
devfs No No Pseudo-filesystem
devpts No No Pseudo-filesystem
devtmpfs No No Pseudo-filesystem
fuse.gvfs-fuse-daemon No No See KBA 118982
fusectl No No Pseudo-filesystem
inotifyfs No No Pseudo-filesystem
mqueue No No Pseudo-filesystem
nfsd No No Pseudo-filesystem
nsspool No No Pseudo-filesystem
proc No No Pseudo-filesystem
romfs No No Pseudo-filesystem
rootfs No No Pseudo-filesystem
rpc_pipefs No No Pseudo-filesystem
securityfs No No Pseudo-filesystem
selinuxfs No No Pseudo-filesystem
squashfs No No
subfs No No Pseudo-filesystem
sysfs No No Pseudo-filesystem
usbdevfs No No Pseudo-filesystem
usbfs No No Pseudo-filesystem

Other filesystems

Behavior with other filesystems will depend on the on-access interception method in use.


Related:

7023297: Low Disk Performance with high IO stalls system

What happens is that, with barriers enabled, the filesystem issues flush requests to all intermediate layers, only for them to be discarded by the SCSI disks; this slows down the system and leads to the observed performance issue.

To alleviate this issue, it is recommended to mount the relevant filesystems with the nobarrier mount option.
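
For example, assuming a hypothetical ext4 filesystem on /dev/sda1 mounted at /data (device and mount point are placeholders, not taken from this article), the option can be applied immediately with a remount:

mount -o remount,nobarrier /data

and persisted with an /etc/fstab entry such as:

/dev/sda1  /data  ext4  defaults,nobarrier  0 2

On ext3/ext4 the spelling barrier=0 is equivalent; note that very recent kernels have removed the nobarrier option for some filesystems.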

Extreme care should be taken to determine whether the device backing this filesystem has a volatile (write) cache. If it does, then setting nobarrier can result in data loss. Never set nobarrier on a filesystem that resides on a device with the write cache enabled!

To identify whether the device has a cache, check dmesg for "cache":

dmesg | grep cache

The result might look like this:

[ 3.685928] sd 0:2:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA

[ 5.140281] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

As can be seen, the device identified as sda reports the write cache as disabled, so a filesystem associated with this device can use nobarrier. The opposite in this example is the device identified as sdb: it reports the write cache as enabled, so no filesystem associated with this device should have barriers removed, in order to prevent data loss.

This can also be checked on the running system with the tool sdparm. On the same system as in the example above, the output of sdparm reads:

belphegore:~ # sdparm --get=WCE=1 /dev/sda

/dev/sda: DELL PERC H730 Mini 4.27

WCE 0

which means the write cache is disabled for sda, so nobarrier is possible.

belphegore:~ # sdparm --get=WCE=1 /dev/sdb

/dev/sdb: IFT DS 1000 Series 555Q

WCE 1

which means the write cache is enabled for sdb, so barriers are necessary.

Related:

How To Enable Thin Provisioning On XenServer

By default, a new SR in XenServer is formatted with LVM, which does not support thin provisioning. So if you add an SR and do not explicitly specify ext3, your disk will default to thick provisioning.

1. You can enable thin provisioning at installation time:

Select Enable thin provisioning (Optimized storage for XenDesktop) in the drive selection screen.

2. If you have already installed XenServer and want to enable thin provisioning on a new drive that you are adding, you need to use the CLI. The only prerequisite is that the drive must be formatted as ext3.

# xe sr-create host-uuid=$host_uuid content-type=user name-label="SR name" shared=false device-config:device=/dev/sdX type=ext
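
The host UUID used as $host_uuid above can be obtained from the CLI, and the provisioning type of the resulting SR can be verified afterwards. A minimal sketch, assuming a single-host pool (otherwise xe host-list returns several UUIDs):

host_uuid=$(xe host-list --minimal)

xe sr-list params=name-label,type,content-type

An SR of type ext is thin provisioned; type lvm indicates thick provisioning.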

Related:

Re: The configuration of this client does not support browsing on Red Hat clients

That's what I thought.

So when you choose the folders in Backup and Restore, you get an error, correct?

I am the same way: NONE of my RH servers qualify for FLR (see below for the limitations).

So I have to back up my Linux VMs as a VM once a week (so I can rebuild if needed),

and daily I do backups as a physical machine with the client installed on the server.

File-level restore limitations

The following limitations apply to file-level restore as described in "Restoring specific folders or files" on page 86:

The following virtual disk configurations are not supported:

• Unformatted disks

• Dynamic disks

• GUID Partition Table (GPT) disks

• Ext4 filesystems

• FAT16 filesystems

• FAT32 filesystems

• Extended partitions (that is, any virtual disk with more than one partition, or when two or more virtual disks are mapped to a single partition)

• Encrypted partitions

• Compressed partitions

Symbolic links cannot be restored or browsed

You cannot restore more than 5,000 folders or files in the same restore operation

The following limitations apply to logical volumes managed by Logical Volume Manager (LVM):

• One Physical Volume (.vmdk) must be mapped to exactly one logical volume

• Only ext2 and ext3 formatting is supported

Related:

Re: Networker client to support btrfs filesystem

In the NetWorker 8.2 Admin Guide we read:

Linux Journaled file system support

Backup and recovery operations are supported on the following Linux journaled file systems:

– ext3

– reiserfs

– jfs

– xfs

No statement about support of BTRFS.

I fear that the NetWorker client cannot handle the btrfs filesystem.

SUSE Linux Enterprise Server (SLES) 12 ships with Btrfs as the installation default for the OS, and Docker can profit a lot from Btrfs-based installations.

The NetWorker client should really provide Btrfs support very soon!

You are welcome to voice your support.

Regards, Tom

Related:

Concerns on corrupted tables

We have a db2dart error like this:

Index inspection phase start. Index obj: 15 In pool: 22
Error: CSUM read error for pool page 7675867, from object ID 15, pool 22,
Error: BPS Tail incorrect CBITS value — (a)
Error: in page 238099, pool page 7675867, of obj 15, in tablespace 22.
Error: CSUM read error for pool page 7675867, from object ID 15, pool 22,
Page contents dumped with CBITS intact.
Error: Page data will be dumped to report

As this is a CSUM error, our best suspicion is filesystem corruption at the OS/storage level. Note that in this case we do have problems logged in the system messages:

Dec 17 15:54:03 hostname kernel: EXT3-fs (dm-179): warning: mounting fs with errors, running e2fsck is recommended
Dec 17 15:54:03 hostname kernel: EXT3-fs (dm-173): warning: mounting fs with errors, running e2fsck is recommended
Dec 17 15:54:03 hostname kernel: EXT3-fs (dm-164): warning: mounting fs with errors, running e2fsck is recommended
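
The messages recommend running e2fsck; as discussed below, this only restores filesystem consistency and does not repair database pages. A minimal sketch, using /dev/dm-179 from the log above as the example device (the filesystem must be unmounted while it is checked):

umount /dev/dm-179

e2fsck -f /dev/dm-179

The -f flag forces a check even if the filesystem is marked clean; add -y only if you accept automatic repairs.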

Taking the above into account, we have remounted the filesystem, as the origin of the error is most likely a problem with the mount. Please note that even though the filesystem was successfully remounted, fsck works at the filesystem level (inodes, etc.) and does not necessarily "repair" corruption; it just makes the filesystem consistent, e.g. it moves corrupted inodes to lost+found. The result is that the files look OK in the sense that sizes match, links are OK, and so on. However, fsck will not fix database-level corruption (and will not report it as a problem). So even though the filesystem is now consistent, the bad page remains in the table, and we have to export the accessible data, recreate the table, and load the data back.
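
The export / recreate / load step mentioned above can be done from the DB2 command line. This is only a sketch with placeholder names (database MYDB, table MYSCHEMA.MYTABLE, and a DDL file create_mytable.ddl that you prepare yourself); it assumes the undamaged rows are still readable:

db2 connect to MYDB    # MYDB is a placeholder database name

db2 "export to mytable.ixf of ixf select * from MYSCHEMA.MYTABLE"

db2 "drop table MYSCHEMA.MYTABLE"

db2 -tf create_mytable.ddl    # recreate the table from your own DDL

db2 "load from mytable.ixf of ixf insert into MYSCHEMA.MYTABLE"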

One of the steps above is dropping the problematic table. The concern is whether bad pages left over in the tablespace containers will be taken up by other tables in the future. If we drop the corrupted table, will the bad pages cause any issues for other tables?

Related: