OneFS: How to recover individual files from snapshots using SyncIQ

Article Number: 514023 Article Version: 3 Article Type: How To



Isilon OneFS, Isilon SyncIQ

This KB article explains how to restore/recover data from snapshots via SyncIQ.

A Snapshot is a copy of the files/folders within the location selected. The contents of each snapshot reflect the state of the file system at the time the snapshot was created. It is easy to navigate through each snapshot as if it were still active. Your directories/folders and files will appear as they were at the time that the snapshot was created. You can easily recover your own files, before snapshot expiration, simply by copying an earlier version from the snapshot to the original directory or to an alternate location.

Note: It is good practice to copy files to a temporary directory rather than overwriting current files/folders. This gives users the option to keep the snapshot copy, the current copy, or both.
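For a single file, a plain copy out of the snapshot directory is often enough and does not require SyncIQ. A minimal illustration; the snapshot name, file name, and temporary directory below are placeholders rather than values from this article:

# mkdir -p /ifs/tmp_restore

# cp -p /ifs/original_folder/.snapshot/snapshot_name/folder1/file.txt /ifs/tmp_restore/

The .snapshot directory is read-only, so copying from it never modifies the snapshot itself.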

Example:

Assume we have the folder /ifs/original_folder, and this folder contains the subfolders:

folder1 folder2 folder3 folder4

Subfolders "folder1", "folder2", and "folder3" were deleted from HEAD, but they still exist in the snapshot with ID 59, as shown below:

# ls /ifs/original_folder

folder4

# isi snapshot snapshots view snapshot

ID: 59
Name: snapshot
Path: /ifs/original_folder
Has Locks: No
Schedule: -
Alias Target ID: -
Alias Target Name: -
Created: 2017-11-15T11:04:58
Expires: -
Size: 10.0k
Shadow Bytes: 0
% Reserve: 0.00%
% Filesystem: 0.00%
State: active

Riptide-1# ls /ifs/original_folder/.snapshot/snapshot

folder1 folder2 folder3 folder4

We will recover subfolder "folder2" only from snapshot 59 to the path /ifs/recoverd_folder, as shown below:

1- Create a policy with source /ifs/original_folder that includes subfolder "folder2" only

# isi sync policies create --name=recover --source-root-path=/ifs/original_folder --source-include-directories=/ifs/original_folder/folder2 --target-host=localhost --target-path=/ifs/recoverd_folder --action=sync
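To confirm the policy was created with the intended source and include directory, it can be viewed before starting the job (OneFS 8.x syntax; an optional check, not part of the original steps):

# isi sync policies view recover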

2- Start a sync job for the policy "recover", but from snapshot ID 59

In OneFS 7.x

# isi_classic sync pol start recover --use_snap 59

In OneFS 8.x

# isi sync jobs start --policy-name=recover --source-snapshot=59

3- Confirm that the sync job finished

# isi sync reports list

Policy Name  Job ID  Start Time           End Time             Action  State
-----------------------------------------------------------------------------
recover      1       2017-11-15T11:38:02  2017-11-15T11:38:07  run     finished
-----------------------------------------------------------------------------
Total: 1

4- Check the recovered folder; it will contain subfolder "folder2" only.

# ls /ifs/recoverd_folder

folder2
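If the recovered data should go back to the original location, it can be copied from the recovery path once verified, and the temporary policy can then be removed. These follow-up commands are a suggested cleanup rather than part of the original procedure:

# cp -Rp /ifs/recoverd_folder/folder2 /ifs/original_folder/

# isi sync policies delete recover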


Isilon OneFS: FSAnalyze fails with “New entry doesn’t match existing entry for key: File exists”

Article Number: 503091 Article Version: 4 Article Type: Break Fix



Isilon OneFS 7.1, Isilon OneFS 7.2, Isilon OneFS 8.0, Isilon OneFS 8.1, Isilon OneFS

The FSAnalyze job can fail with the following alert:

Failed jobs:

Job             Errors  Run Time  End Time        Retries Left
--------------  ------  --------  --------------  ------------
FSAnalyze[651]  4       0:06:43   06/01 22:06:49  0

Progress: Task results: found 1203, added 1203, 4 errors

06/01 22:05:49 Node 2: New entry doesn’t match existing entry for key 10c544d150: File exists
06/01 22:05:49 Node 3: New entry doesn’t match existing entry for key 10c4d74260: File exists
06/01 22:05:49 Node 4: New entry doesn’t match existing entry for key 10c47e4170: File exists
06/01 22:05:49 Node 5: New entry doesn’t match existing entry for key 10be5675b0: File exists

If you try to map any of those LINs to a file, the command reports that the file doesn’t exist.

Cluster-1# isi get -D 1:0c544:d150

isi: 10c544d150: No such file or directory

Cluster-1# isi get -D 10c4d74260

isi: 10c4d74260: No such file or directory

Cluster-1# isi get -D 10c47e4170

isi: 10c47e4170: No such file or directory


This issue is caused by FSAnalyze snapshots.

Current Workaround:

1 – Remove the FSAnalyze snapshots that are on the cluster

Run this command to view the FSAnalyze snapshots:

# isi snapshot snapshots list -v --format=table --sort=path | grep -i fsa

Once you have the name of the snapshot, use it in this command to mark the snapshot for deletion:

# isi snapshot snapshots delete --snapshot=<name of snapshot>

Once they are marked for deletion, start a SnapshotDelete job:

# isi job jobs start snapshotdelete
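To verify that the job ran and the FSAnalyze snapshots are gone, the job list and snapshot list can be re-checked (optional verification, not part of the original article):

# isi job jobs list

# isi snapshot snapshots list | grep -i fsa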

2 – Once the snapshots are removed, turn off FSAnalyze snapshot creation


Follow https://support.emc.com/kb/317479 to disable snapshot creation by FSAnalyze.

3 – Start FSAnalyze again; it should finish without error

# isi job jobs start FSAnalyze

If there are any questions/issues, please contact Isilon Support.


ECS Software: [“oc_map.py” | “s3_list.py”] S3 Bucket / Version List (Complete listing) and Check / Fix metering discrepancies

Article Number: 517138 Article Version: 26 Article Type: Break Fix



ECS Appliance, ECS Appliance Hardware, ECS Appliance Software with Encryption, ECS Appliance Software without Encryption, ECS Software

  • Inability to list a complete bucket (more than 1000 objects) via the S3 API (see the pagination cross-check after Note #3 below).
  • Lack of support for all listing operations.
  • No easy method to check for metering discrepancies.

  • Tedious manual efforts.
  • Lack of automation.

  • Automation through “oc_map”.
  • At the end of the listing, the tool compares the total size and counts from the S3 listing with those from Metering. A small health report is generated.

  1. Download “oc_map” (contains “oc_map.py” and “s3_list.py” under “~/oc_map/suite/”). Download locations are listed under the “Notes” section.
  2. To get usage:
    ======== s3_list.py ========
    #python s3_list.py --help
    ======== oc_map.py ========
    #python oc_map.py --s3_bkt_list_options
  3. Examples / Typical Usage:
    * NOTE: For "s3_list.py" option "--s3_bkt_dump" is not used.======== s3_list.py ========- Run s3 List against one bucket: --> #python s3_list.py --s3_Key <user> --s3_Secret <secret_key> --bucket <bkt>- Run s3 List against all buckets: --> #python s3_list.py -u root -p ChangeMe --s3_all_bkt --chk_versions --new_map --> ** --quiet can be used on all lists / dumps to suppress output on screen, and only output to log file.- To get and fix (FAST reconstruction only) delta between metering and listing results, add options "--ns_mt <namespace>" and "mt_fix". Example: --> #python s3_list.py --s3_Key <user> --s3_Secret <secret_key> --bucket <bkt> --ns_mt <namespace> --mt_fix- Hybrid listing example (LS Dump, then s3 Prefix (Exact Match) Listing based on each entry from LS dump): --> #python s3_list.py --s3_Key <user> --s3_Secret <secret_key> --hybrid --bypass <namespace>:<bucket>======== oc_map.py ========- Run s3 List against one bucket: --> #python oc_map.py --s3_bkt_dump --s3_Key <user> --s3_Secret <secret_key> --bucket <bkt>- Run s3 List against all buckets: --> #python oc_map.py -u root -p ChangeMe --s3_bkt_dump --s3_all_bkt --chk_versions --new_map --> ** --quiet can be used on all lists / dumps to suppress output on screen, and only output to log file.- To get and fix delta between metering and listing results, add options "--ns_mt <namespace>" and "mt_fix". Example: --> #python oc_map.py --s3_bkt_dump --s3_Key <user> --s3_Secret <secret_key> --bucket <bkt> --ns_mt <namespace> --mt_fix- Hybrid listing example (LS Dump, then s3 Prefix (Exact Match) Listing based on each entry from LS dump): --> #python oc_map.py --s3_bkt_dump --s3_Key <user> --s3_Secret <secret_key> --hybrid --bypass <namespace>:<bucket> 
  4. Example output, listing versions. The tool determines when to use “--chk_versions” for versioning-enabled buckets (v3.0.5.2 and up). This can be disabled with “--auto_ver_off”:
    #python s3_list.py --screen_off --screen_off --s3_Key jk_usr --s3_Secret yU4/zNd1M5zwOsjAqZFIfnMVvOSQ5Dr1pIIQUF2A --bucket jk_bkt7 --ns_mt jknss3_list_v1.0.0.0- Initiating s3 Version listing...- Dumping bucket "jk_bkt7" via S3 API to log file "/home/admin/oc_map/suite/07-30-2018/18:25:28_B1_jk_bkt7_s3_dmp.log". Updates will occur every ~1000 objects: [VERSION_ID]: 1524008496759 [IS_LATEST]: true [MTIME]: 2018-04-17T23:41:36Z [SIZE]: 8 [OBJECT]: .bash_history [VERSION_ID]: 1524008462063 [IS_LATEST]: true [MTIME]: 2018-04-17T23:41:02Z [SIZE]: 5 [OBJECT]: .profile [VERSION_ID]: 1532532464214 [IS_LATEST]: true [MTIME]: 2018-07-25T15:27:44Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:27:35_call_back_files/file_0 [VERSION_ID]: 1532532464214 [IS_LATEST]: true [MTIME]: 2018-07-25T15:27:44Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:27:35_call_back_files/file_1 [VERSION_ID]: 1532532998516 [IS_LATEST]: true [MTIME]: 2018-07-25T15:36:38Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:36:30_call_back_files/file_0 [VERSION_ID]: 1532532998516 [IS_LATEST]: true [MTIME]: 2018-07-25T15:36:38Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:36:30_call_back_files/file_1 [VERSION_ID]: 1532534079740 [IS_LATEST]: false [MTIME]: 2018-07-25T15:54:39Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:54:30_call_back_files/file_0 [VERSION_ID]: 1532534079741 [IS_LATEST]: false [MTIME]: 2018-07-25T15:54:39Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:54:30_call_back_files/file_1 [VERSION_ID]: 1532534131913 [IS_LATEST]: false [MTIME]: 2018-07-25T15:55:31Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:55:07_call_back_files/file_0 [VERSION_ID]: 1532534131913 [IS_LATEST]: false [MTIME]: 2018-07-25T15:55:31Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:55:07_call_back_files/file_1 [VERSION_ID]: 1532534227601 [IS_LATEST]: false [MTIME]: 2018-07-25T15:57:07Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:00_call_back_files/file_0 [VERSION_ID]: 1532534227601 [IS_LATEST]: false [MTIME]: 2018-07-25T15:57:07Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:00_call_back_files/file_1 [VERSION_ID]: 1532534274097 [IS_LATEST]: false [MTIME]: 2018-07-25T15:57:54Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:46_call_back_files/file_0 [VERSION_ID]: 1532534274099 [IS_LATEST]: false [MTIME]: 2018-07-25T15:57:54Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:46_call_back_files/file_1 [VERSION_ID]: 1532637686437 [IS_LATEST]: false [MTIME]: 2018-07-26T20:41:26Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/suite/07-26-2018/20:41:19_call_back_files/file_0 [VERSION_ID]: 1532637686437 [IS_LATEST]: false [MTIME]: 2018-07-26T20:41:26Z [SIZE]: 1025 [OBJECT]: /home/admin/oc_map/suite/07-26-2018/20:41:19_call_back_files/file_1 [VERSION_ID]: null [IS_LATEST]: true [MTIME]: 2018-07-12T12:34:21Z [SIZE]: 20 [OBJECT]: 2195noè..txt [VERSION_ID]: 1522787140489 [IS_LATEST]: true [MTIME]: 2018-06-06T05:04:24Z [SIZE]: 15380 [OBJECT]: 67~_res.txt [VERSION_ID]: 1522782901712 [IS_LATEST]: true [MTIME]: 2018-07-17T20:44:23Z [SIZE]: 10583 [OBJECT]: Cass cript.txt [VERSION_ID]: 1523653003933 [IS_LATEST]: true [MTIME]: 2018-04-13T20:56:44Z [SIZE]: 9 [OBJECT]: Osservatore375'1..txt [VERSION_ID]: 1532038616217 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:56Z [SIZE]: 10080 [OBJECT]: diamler_07-10-18.txt [VERSION_ID]: 1532038613408 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:53Z [SIZE]: 10080 [OBJECT]: diamler_07-10-18.txt [VERSION_ID]: 
1532038610279 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:50Z [SIZE]: 10080 [OBJECT]: diamler_07-10-18.txt [VERSION_ID]: 1532038607000 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:47Z [SIZE]: 10080 [OBJECT]: diamler_07-10-18.txt [VERSION_ID]: 1532038603560 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:43Z [SIZE]: 10080 [OBJECT]: diamler_07-10-18.txt [VERSION_ID]: 1532038601934 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:42Z [SIZE]: 13719 [OBJECT]: gm_evidence_07-09-18x.txt [VERSION_ID]: 1532038599381 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:39Z [SIZE]: 13719 [OBJECT]: gm_evidence_07-09-18x.txt [VERSION_ID]: 1532038596971 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:37Z [SIZE]: 13719 [OBJECT]: gm_evidence_07-09-18x.txt [VERSION_ID]: 1532038594170 [IS_LATEST]: false [MTIME]: 2018-07-19T22:16:34Z [SIZE]: 13719 [OBJECT]: gm_evidence_07-09-18x.txt [VERSION_ID]: 1523721924837 [IS_LATEST]: true [MTIME]: 2018-04-14T16:05:24Z [SIZE]: 13 [OBJECT]: jason$?isa&amp;!TSE [VERSION_ID]: 1522449699941 [IS_LATEST]: true [MTIME]: 2018-04-11T18:18:15Z [SIZE]: 21685 [OBJECT]: long_cmd.txt [VERSION_ID]: 1532033751424 [IS_LATEST]: true [MTIME]: 2018-07-19T20:55:52Z [SIZE]: 21688 [OBJECT]: long_cmd.txt2.txt [VERSION_ID]: 1532033735699 [IS_LATEST]: false [MTIME]: 2018-07-19T20:55:36Z [SIZE]: 21688 [OBJECT]: long_cmd.txt2.txt [VERSION_ID]: null [IS_LATEST]: false [MTIME]: 2018-07-10T22:02:08Z [SIZE]: 21688 [OBJECT]: long_cmd.txt2.txt [VERSION_ID]: 1523752430766 [IS_LATEST]: true [MTIME]: 2018-04-15T00:33:50Z [SIZE]: 8 [OBJECT]: ok)Man.txt [VERSION_ID]: 1529439636273 [IS_LATEST]: true [MTIME]: 2018-06-19T20:20:36Z [SIZE]: 7 [OBJECT]: test_ver_0.txt [VERSION_ID]: null [IS_LATEST]: true [MTIME]: 2018-07-10T22:32:10Z [SIZE]: 8 [OBJECT]: test_ver_0.txt2 [VERSION_ID]: 1523650878397 [IS_LATEST]: true [MTIME]: 2018-04-13T20:21:19Z [SIZE]: 9 [OBJECT]: u'café'.txt [VERSION_ID]: 1523636092931 [IS_LATEST]: false [MTIME]: 2018-04-13T16:14:53Z [SIZE]: 9 [OBJECT]: u'café'.txt [VERSION_ID]: 1532534080114 [IS_LATEST]: true [MTIME]: 2018-07-25T15:54:40Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:54:30_call_back_files/file_0 [VERSION_ID]: 1532534080112 [IS_LATEST]: true [MTIME]: 2018-07-25T15:54:40Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:54:30_call_back_files/file_1 [VERSION_ID]: 1532534132219 [IS_LATEST]: true [MTIME]: 2018-07-25T15:55:32Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:55:07_call_back_files/file_0 [VERSION_ID]: 1532534132220 [IS_LATEST]: true [MTIME]: 2018-07-25T15:55:32Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:55:07_call_back_files/file_1 [VERSION_ID]: 1532534227930 [IS_LATEST]: true [MTIME]: 2018-07-25T15:57:07Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:00_call_back_files/file_0 [VERSION_ID]: 1532534227936 [IS_LATEST]: true [MTIME]: 2018-07-25T15:57:07Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:00_call_back_files/file_1 [VERSION_ID]: 1532534274431 [IS_LATEST]: true [MTIME]: 2018-07-25T15:57:54Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:46_call_back_files/file_0 [VERSION_ID]: 1532534274434 [IS_LATEST]: true [MTIME]: 2018-07-25T15:57:54Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/07-25-2018/15:57:46_call_back_files/file_1 [VERSION_ID]: 1532637686556 [IS_LATEST]: true [MTIME]: 2018-07-26T20:41:26Z [SIZE]: DMARKER [OBJECT]: /home/admin/oc_map/suite/07-26-2018/20:41:19_call_back_files/file_0 [VERSION_ID]: 1532637686562 [IS_LATEST]: true [MTIME]: 2018-07-26T20:41:26Z [SIZE]: DMARKER 
[OBJECT]: /home/admin/oc_map/suite/07-26-2018/20:41:19_call_back_files/file_1 [VERSION_ID]: 1531153614103 [IS_LATEST]: false [MTIME]: 2018-07-09T16:26:54Z [SIZE]: DMARKER [OBJECT]: 2195noè..txt [VERSION_ID]: 1523651053631 [IS_LATEST]: false [MTIME]: 2018-04-13T20:24:13Z [SIZE]: DMARKER [OBJECT]: 2195noè..txt [VERSION_ID]: 1532038627596 [IS_LATEST]: true [MTIME]: 2018-07-19T22:17:07Z [SIZE]: DMARKER [OBJECT]: diamler_07-10-18.txt [VERSION_ID]: 1532038627669 [IS_LATEST]: true [MTIME]: 2018-07-19T22:17:07Z [SIZE]: DMARKER [OBJECT]: gm_evidence_07-09-18x.txt [UPDATE 07-30-2018_18:25:37]: Currently 53 objects processed... -- Total Versions: 53 --> Current Versions: 29 --> Non-Current Versions: 24 --> Delete Markers: 14 -- Total Versions Size: 0.00 TB (0.00 GB 0.22 MB, 226.99 KB, 232434.00 B) --> Current versions: 0.00 TB (0.00 GB 0.07 MB, 71.80 KB, 73523.00 B) --> Non-Current versions: 0.00 TB (0.00 GB 0.15 MB, 155.19 KB, 158911.00 B) -- Metering Total Objects: 58 -- Metering Bucket Size: 0.00 TB (0.00 GB 0.23 MB, 231.00 KB) -- [INFO]: Metering and Bucket size are identical (GB) -- [INFO]: Metering Count > Bucket Count. Deviation within expected range (1000) --> DELTA: -5 Objects --> DUR: 388 (ms)- Tool is finished. Will now exit. 
  5. Example output, listing bucket. The tool determines when to use “--chk_versions” for versioning-enabled buckets (v3.0.5.2 and up). This can be disabled with “--auto_ver_off”:
    admin@sandy-blue:~/oc_map> sudo python oc_map.py --s3_bkt_dump --w_to_log --s3_Key jk_usr --s3_Secret yU4/zNd1M5zwOsjAqZFIfnMVvOSQ5Dr1pIIQUF2A --bucket jk_bkt3oc_map_v3.0.5.2- Initiating s3 Bucket listing...- Dumping bucket "jk_bkt3" via S3 API to log file "/home/admin/oc_map/04-12-2018/21:38:53_B1_jk_bkt3_s3_dmp.log". Updates will occur every ~1000 objects: [MTIME]: 2018-04-12T21:37:36Z [SIZE]: 12289 [OBJECT]: ev4.txt [MTIME]: 2018-04-12T21:37:35Z [SIZE]: 48454 [OBJECT]: ev5.txt [MTIME]: 2018-04-12T21:37:34Z [SIZE]: 3306 [OBJECT]: ev6.txt [MTIME]: 2018-04-12T21:37:34Z [SIZE]: 26320 [OBJECT]: ev7.txt [MTIME]: 2018-04-12T21:37:36Z [SIZE]: 9487 [OBJECT]: ev8.txt [UPDATE]: Currently 5 objects processed... -- Total Objects: 5 -- Total Bucket Size: 0.00 TB (0.00 GB 0.10 MB, 97.52 KB, 99856.00 B) -- Metering Total Objects: NA -- Metering Bucket Size: NA --> DUR: 055 (ms)- Tool is finished. Will now exit. 
  6. Options:
    * S3 Bucket List / Dump options:
      -U --s3_bkt_dump     Perform S3 bucket dump (Required ONLY in oc_map.py)
      -q --quiet           Suppress printing
      -J --s3_dmp_log      File path where output of bucket dump will go. ** Default is "s3_dmp_<MM>-<DD>-<YYYY>_<HH>:<MM>:<SS>_.log"
                           --> Timestamp is automatically generated for the default file path
      -w --w_to_log        Output to default log ** Enabled by default
      --no_log             Disable logging
      -K --s3_Key          s3 Key / user
      -L --s3_Secret       s3 Secret Key
      -a --s3_all_bkt      Dump all buckets. Requires username (-u --user) to be root or a system admin with root rights
                           --> NOTE: In ECS 3.1, buckets owned by root cannot generate an s3 secret key. Therefore, these buckets will be skipped
      -m --bucket          Bucket
      -Q --s3_ip           <xx.xxx.xxx.xxx:902x> End point. Default is the local successfully tested Public IP <ip>:9020
      -W --max_keys        Default is 1000 (Recommended). Range 1 - 1000
                           --> ** NOTE: Anything below 30 can cause inaccuracy of statistics, for later use of "--s3_list_restart"
      -A --ns_admin        If the bucket belongs to the namespace admin, this option must be used (--ns_admin <namespace>)
      --chk_versions       Check all versions (can also be used on non-versioning buckets)
      --auto_ver_off       Disables automatic detection of s3 versioning. When disabled, "--chk_versions" should be used for version-enabled buckets
      --etag               Print and/or log etag
      --ns_mt <namespace>  Metering information. A warning will be issued for a large deviation
      --mt_fix             Fixes deltas detected after a complete bucket listing. A prompt will appear if detected
                           --> If needed, --s3_list_restart can be added to the last syntax. A prompt will appear if a delta is seen in the comparison
                           --> Checks for JP Lag first and for exceptions in "metering.log"
                           --> ** Requires EE Approval and Passcode
      --s3_api_vlist       Generates list format "<obj_name>OC::::vId:MAP<version-id>" to be used with S3 API list calls "GET/HEAD/DELETE"
      --ignore_truncate    Ignores the next marker when performing an s3 bucket / version listing
      --update_cnt         <1 - 1000000000>, Default = 100000 ("--s3_bkt_dump" is same as "--max_keys")
      --specific_ns        <namespace>, MUST be used with "--chk_all_bkt". Checks all buckets under a specific namespace
      --mk_bkt_list        Makes a list of buckets, with credential info (s3Key and secret). The list can then be imported with "--import_bkt_list"
                           --> Can also be used with the "--specific_ns" switch.
                           --> The list can also be manually split up (once complete). Lists can then be used on different nodes.
      --import_bkt_list    <file_path_of_list>, Imports the list generated from option "--mk_bkt_list"

    * PREFIX, HYBRID and DELIMITER OPTIONS:
      --delimiter          Delimiter
      --prefix             List with prefix
      --hybrid             Performs an LS Dump, then performs an S3 listing using a prefix against each entry in the LS dump
                           --> Also uses the unique feature "--exact_match" and automatically ignores dirs
                           --> Great WORKAROUND for Bucket Listing Issues. Multi-threaded for maximum performance
      --prefix_list        List of prefixes (not to be used with --prefix)
      --exact_match        Used with --prefix_list to only return exact matches, meaning nothing else starting before or ending after the prefix
                           --> Automatically enabled for --hybrid
      --include_dirs       Includes directories / keys ending with "/"
      --ignore_dirs        Used with --prefix_list. Ignores directories / keys ending with "/"
                           --> Automatically enabled for --hybrid

    * RESTART OPTIONS:
      --s3_marker          <object>, object to start listing with
      --version_id         <version_id>, last version ID. Must be used with option "--chk_versions"
      --s3_list_restart    Restarts the last failed / stopped listing. Restores statistics and automates all other restart options
      -u --user            User name
      -p --password        Password

* NOTE #1: “versioning” mode is enabled for all buckets when using “--ns_mt” (Metering Check). Versioning mode can be used on both buckets…

* NOTE #2: Metering Fix is for FAST Reconstruction only. This is typically for large buckets on code levels lower than 3.2.1 GP 13. For later code levels, FULL Reconstruction should be run, unless otherwise directed by OE.

* NOTE #3: Metering Fix “--mt_fix” (FAST Reconstruction only) will prompt for a passcode just before fixing metering. At this point a JIRA should be opened with results from the tool. This should include the “JP Lag report”, “metering.log exception report”, “listing results”, and “metering comparison report”. EE should then consult with metering Dev, who will decide whether the automated “manual metering fix” is the next best course of action. If approved by EE and/or Dev, send an email to “oc_map_keys” for the passcode.
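For context on the 1000-object limitation mentioned at the top of this article: a single S3 listing call returns at most 1000 keys, so a complete listing has to follow the truncation/continuation markers across repeated requests, which is what the tools above automate. As a rough cross-check outside of the oc_map suite, the standard AWS CLI can walk the same pagination against an ECS S3 endpoint; the endpoint, bucket, and configured credentials below are placeholders and this is not part of the oc_map tooling:

# aws s3api list-objects --endpoint-url http://<node_ip>:9020 --bucket <bkt> --query 'length(Contents)'

# aws s3api list-object-versions --endpoint-url http://<node_ip>:9020 --bucket <bkt> --query 'length(Versions)'

The CLI follows the IsTruncated/NextMarker pagination internally, so the returned counts cover the whole bucket rather than just the first 1000 keys.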


http.method & ICAP

I need a solution

Hello,

I used to configure ICAP RESPMOD like this …

<Cache>

policy.BC_malware_scanner policy.ICAP_Content_Scan_Security

Is there any internal/performance reason to use the following instead …

<Cache>

http.method=GET policy.BC_malware_scanner policy.ICAP_Content_Scan_Security

… to prevent sending CONNECT, POST, and all other non-relevant methods?

Best Regards,

Vincent



Homeless in Vancouver: BC Ministry of Citizens’ Services joins the Linux Foundation


    A funny thing happened on my way to writing about IBM buying the open-source company Red Hat: I noticed that a ministry of the B.C. government is listed as a member of the Linux Foundation.

    According to a B.C. government spokesperson, the Ministry of Citizens’ Services of British Columbia joined the Linux Foundation as an associate member on September 4, 2018, as part of becoming an associate member of the Foundation’s Hyperledger project.

    The Linux Foundation is a nonprofit consortium founded in 2000 to encourage the development and widespread adoption of the open-source Linux operating system. The foundation currently has about 1,000 members, running the gamut from private-sector information technology companies to public-sector government and nongovernmental organizations to open-source software developers—all of them having a direct interest or involvement in Linux.

    Associate membership, explains the Linux Foundation, is open at no cost to government agencies and not-for-profit organizations that “have demonstrated a commitment to building, sustaining and using open source technologies”.

    The B.C. Ministry of Citizens’ Services is the only B.C. government ministry—apparently the only government agency in Canada—to have joined the Linux Foundation.

    It did so in order “to learn from other organizations, collaborate and share information around open source software,” according to the spokesperson, who emailed me in reply to a series of questions on Tuesday (November 6).

    Fast, functional and free—what’s not for a government to like?

    Specifically, the spokesperson explained, the B.C. Ministry of Citizens’ Services is keen to learn and share information about Hyperledger—a Linux Foundation initiative to extract the general-purpose functionality from the open-source Bitcoin blockchain.

    The Bitcoin blockchain is a decentralized, cryptographic ledger designed to keep track of all Bitcoin transactions in the world.

    Being stored not on one central computer but linked across some 6,000 computers on the Internet gives the Bitcoin blockchain great computational power and certain economies of scale, and makes its encrypted, multiply redundant record of transactions nearly indestructible.

    With Hyperledger, the Linux Foundation is hoping to create a decentralized ledger platform with all the benefits but none of the limitations of Bitcoin’s blockchain—adding scalability and capacity for confidential one-to-one transactions—making it suitable to the varied needs of both government and enterprise.

    An open-source blockchain solution like Hyperledger could be a Swiss Army knife for Big Data management—both powerful and inexpensive; suitable for everything from healthcare records, citizen identity and rights management, financials, and census and demographics to stock market exchanges and large-scale longitudinal databases.

    For her part, the B.C. Ministry of Citizens’ Services spokesperson described Hyperledger as “a collaborative effort created to advance blockchain technologies. It is a global collaboration, hosted by The Linux Foundation, including leaders in finance, banking, Internet of Things, supply chains, manufacturing and Technology”—adding:

    “The ministry is actively exploring the potential uses of technology such as block chain and innovative ways that service delivery for people could be improved.”

    “The ministry is responsible for considering ways that new technologies could be used to provide better services to British Columbians, and the Linux Foundation membership assists in this work,” the spokesperson concluded.


    Surface Pro 4 won’t boot after SEE credentials entered

    I need a solution

    After powering on the device and entering SEE credentials, a black screen appears for about 10 seconds, after which the device restarts. In attempts to repair it, we noted that the NVMe storage cannot be detected in WinPE. The next logical step would be to try to physically mount the drive as a slave; however, we will not be opening the device for repair, nor will we be ghosting the drive. Attempts have been made to recover the OS through bootable media, but without access to the drive during this process, the attempt failed. A decision has been made to wipe SEE from the drive in hopes of further device recovery. Is it possible to wipe SEE from the drive without our old administrative support on hand? Furthermore, if it helps, the device currently has SED Ver. 10.3.2 (Build 21424).



    VxRail: Migration of VMs from host fails with “The target host does not support the virtual machine’s current hardware requirements”

    Article Number: 524619 Article Version: 3 Article Type: Break Fix



    VxRail Appliance Family

    When attempting to migrate VMs from a certain host in the cluster, the following compatibility error is encountered in the migration wizard:

    The target host does not support the virtual machine’s current hardware requirements.

    To resolve CPU incompatibilities, use a cluster with Enhanced vMotion Compatibility (EVC) enabled. See KB article 1003212.

    com.vmware.vim.vmfeature.cpuid.ssbd

    com.vmware.vim.vmfeature.cpuid.stibp

    com.vmware.vim.vmfeature.cpuid.ibrs

    com.vmware.vim.vmfeature.cpuid.ibpb

    The CPU feature requirements listed above relate to the Intel Spectre/Meltdown Hypervisor-Assisted Guest Mitigation fixes.

    CPU features can be compared on the affected host and the other hosts via the following:

    > Host-01 (affected):
    $ cat /etc/vmware/config | wc -l
    57
    > Host-02:
    $ cat /etc/vmware/config | wc -l
    53

    Hence, when VMs start on the affected host, they acquire extra CPU feature requirements that won't be met when migrating to other hosts.

    To remove these CPU requirements, refresh the EVC baseline by disabling EVC and then re-enabling it. This will update /etc/vmware/config on all hosts in the cluster.
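    A quick way to spot-check for the extra entries on each host is to grep /etc/vmware/config for the cpuid feature names from the error above (an illustrative check, assuming the extra lines reference those features; it is not part of the original fix):

    $ grep -icE "ssbd|stibp|ibrs|ibpb" /etc/vmware/config

    The affected host should return a non-zero count while the other hosts return 0; after disabling and re-enabling EVC (and power-cycling the affected VMs so they pick up the refreshed baseline), the outputs should match across hosts.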
