Avamar Reporting Using Grafana

I’ve always found Avamar reporting to be a bit challenging. Sure, you have DPA and BRM (which I believe is now discontinued), but what if you just want a simple report or some stats to share with your team?

For a long time, I used simple bash scripts that would dump mccli data and email me reports via a cron job. I hoped the SNMP sub-agent would expose more information, but it surfaced far less than I expected. Recently I went back to the drawing board to see if we could leverage Grafana to display some of this raw data for us. Instead of reinventing the wheel and trying to pull data via the REST API, why not just query the database directly from Grafana?


By adding a PostgreSQL data source to Grafana, you can run any query you like. For example, as you can see below, we can show job status for the last 24 hours.
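As a rough sketch, a panel query for that kind of 24-hour status breakdown might look like the following. The view and column names here (v_activities_2, completed_ts, status_code) are assumptions on my part — verify them against your MCS database (mcdb) schema before using this. $__timeFilter is a standard Grafana macro for PostgreSQL data sources.

```sql
-- Job counts per status for the dashboard's time range (e.g. last 24 hours).
-- NOTE: v_activities_2, completed_ts and status_code are assumed names --
-- check your mcdb schema before relying on them.
SELECT
  status_code AS "status",
  COUNT(*)    AS "jobs"
FROM v_activities_2
WHERE $__timeFilter(completed_ts)
GROUP BY status_code
ORDER BY "jobs" DESC;
```

Point a stat or pie panel at this and you get an at-a-glance success/failure breakdown for the selected time range.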


As queries get more complex, you can use a different panel type to show more detailed information. Grafana’s table panel does a good job of presenting reports in a traditional style.


Someone asked me the other day whether I can display information from multiple Avamar systems. You definitely can, as long as you add another data source. One thing I really want to try is graphing Data Domain metrics alongside the Avamar data. Let’s face it, no one likes to find out their /ddvar partition is full. It would be nice to see it all on one screen.
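For what it’s worth, adding that second system can also be done with Grafana’s standard data source provisioning instead of clicking through the UI. The snippet below is only a sketch: the hostnames are made up, and the mcdb / viewuser / port-5555 values are what I’d expect for the Avamar MCS PostgreSQL instance — double-check them on your own grids.

```yaml
# /etc/grafana/provisioning/datasources/avamar.yaml
apiVersion: 1
datasources:
  - name: Avamar-Site-A                # example name
    type: postgres
    url: avamar-a.example.com:5555     # MCS PostgreSQL port -- verify on your grid
    database: mcdb
    user: viewuser
    jsonData:
      sslmode: disable
  - name: Avamar-Site-B
    type: postgres
    url: avamar-b.example.com:5555
    database: mcdb
    user: viewuser
    jsonData:
      sslmode: disable
```

With both sources defined, each panel just picks whichever Avamar it should query.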


  • No Related Posts

Can Avamar “process” complex passwords for applications?

I am working with a customer, and we are having issues configuring Avamar to work with SharePoint servers.

Customer configuration is one front-end server and one database server.

We have been able to install the Avamar agent and the Sharepoint plugin, and can browse the Windows filesystems.

However, when we try to browse the Sharepoint farm, we get a 10007 error which basically means “no browse for you”.

This occurs whether we browse from the GUI or from command line using the avmossvss command.

The customer has informed me that his Sharepoint password is “complex” in terms of the characters it contains. However, there is no mention of restricted password characters that I have been able to find in the Avamar Sharepoint implementation document.

All Avamar should be doing is passing the characters through to SharePoint – but can it do this for more complex passwords? I seem to recall, from a while back, that even application passwords had to be kept simple for Avamar to process them, but I don’t know if that is still the case.

Anyone have current info on this? All comments/feedback appreciated – thanks.



What’s the Difference Between a Data Lake, Data Warehouse and Database?

There are so many buzzwords these days regarding data management. Data lakes, data warehouses, and databases – what are they? In this article, we’ll walk through them and cover the definitions, the key differences, and what we see for the future.

Start building your own data lake with a free trial

Data Lake Definition

If you want full, in-depth information, you can read our article, “What’s a Data Lake?” But in short: a data lake is a place to store your structured and unstructured data, as well as a method for organizing large volumes of highly diverse data from diverse sources.

The data lake tends to ingest data very quickly and prepare it later, on the fly, as people access it.

Data Warehouse Definition

A data warehouse collects data from various sources, whether internal or external, and optimizes the data for retrieval for business purposes. The data is primarily structured, often from relational databases, but it can be unstructured too.

Primarily, the data warehouse is designed to gather business insights and allows businesses to integrate their data, manage it, and analyze it at many levels.

Database Definition

Essentially, a database is an organized collection of data. Databases are classified by the way they store this data. Early databases were flat and limited to simple rows and columns. Today, the popular databases are:

  • Relational databases, which store their data in tables
  • Object-oriented databases, which store their data in object classes and subclasses

Data Mart, Data Swamp and Other Terms

And, of course, there are other terms such as data mart and data swamp, which we’ll cover very quickly so you can sound like a data expert.

Enterprise Data Warehouse (EDW): This is a data warehouse that serves the entire enterprise.

Data Mart: A data mart is used by individual departments or groups and is intentionally limited in scope: it is built around what its users need right now, rather than around all the data that already exists.

Data Swamp: When your data lake gets messy and is unmanageable, it becomes a data swamp.

The Differences Between Data Lakes, Data Warehouses, and Databases

Data lakes, data warehouses and databases are all designed to store data. So why are there different ways to store data, and what’s significant about them? In this section, we’ll cover the significant differences, with each definition building on the last.

The Database

Databases came about first, rising in the 1960s, with the relational database becoming popular in the 1980s.

Databases are really set up to monitor and update real-time structured data, and they usually have only the most recent data available.

The Data Warehouse

The data warehouse, by contrast, is a model to support the flow of data from operational systems to decision-support systems. What this means, essentially, is that businesses were finding that their data was coming in from multiple places, and they needed a different place to analyze it all. Hence the growth of the data warehouse.

For example, let’s say you have a rewards card with a grocery chain. The database might hold your most recent purchases, with a goal to analyze current shopper trends. The data warehouse might hold a record of all of the items you’ve ever bought and it would be optimized so that data scientists could more easily analyze all of that data.
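To make the contrast concrete, here is a toy sketch using Python’s built-in sqlite3: one table plays the operational database (only the most recent purchases), the other plays the warehouse (full history, queried in aggregate). The table and column names are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Operational database: only recent activity, optimized for quick lookups.
cur.execute("CREATE TABLE recent_purchases (member_id INTEGER, item TEXT, price REAL)")
cur.executemany(
    "INSERT INTO recent_purchases VALUES (?, ?, ?)",
    [(1, "milk", 2.50), (1, "bread", 3.00), (2, "eggs", 4.25)],
)

# Warehouse-style table: the full purchase history, optimized for analysis.
cur.execute("CREATE TABLE purchase_history (member_id INTEGER, year INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO purchase_history VALUES (?, ?, ?)",
    [(1, 2016, 1200.0), (1, 2017, 1450.0), (2, 2017, 800.0)],
)

# OLTP-style query: what did member 1 just buy?
recent = cur.execute(
    "SELECT item FROM recent_purchases WHERE member_id = 1"
).fetchall()

# Analytical query: lifetime spend per member, across all history.
lifetime = cur.execute(
    "SELECT member_id, SUM(total) FROM purchase_history GROUP BY member_id ORDER BY member_id"
).fetchall()

print(recent)    # [('milk',), ('bread',)]
print(lifetime)  # [(1, 2650.0), (2, 800.0)]
```

The first query answers “what is happening right now”; the second scans all of history at once — exactly the split between the database and the warehouse described above.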

The Data Lake

Now let’s throw the data lake into the mix. Because it’s the newest, we’ll talk about this one in more depth. The data lake really started to rise in the 2000s as a way to store unstructured data more cost-effectively. The key phrase here is cost-effective.

Although databases and data warehouses can handle unstructured data, they don’t do so in the most efficient manner. With so much data out there, it can get expensive to store all of your data in a database or a data warehouse.

In addition, there’s the time-and-effort constraint. Data that goes into databases and data warehouses needs to be cleansed and prepared before it gets stored. And with today’s unstructured data, that can be a long and arduous process when you’re not even completely sure that the data is going to be used.

That’s why data lakes have risen to the forefront. The data lake is primarily designed to handle unstructured data in the most cost-effective manner possible. As a reminder, unstructured data can be anything from text to social media data to machine data such as log files and sensor data from IoT devices.

Data Lake Example

Going back to the grocery example we used with the data warehouse, you might consider adding a data lake into the mix when you want a way to store your big data. Think about the social sentiment you’re collecting, or advertising results. Anything that is unstructured but still valuable can be stored in a data lake and used alongside both your data warehouse and your database.

Note 1: Having a data lake doesn’t mean you can load your data willy-nilly; that’s what leads to a data swamp. But it does make the process easier, and new technologies such as data catalogs will steadily make it easier to find and use the data in your data lake.

Note 2: If you want more information on the ideal data lake architecture, you can read the full article we wrote on the topic. It describes why you want your data lake built on object storage and Apache Spark, versus Hadoop.

What’s the Future of Data Lakes, Data Warehouses, and Databases?

Will one of these technologies rise to overtake the others? No, we don’t think so.

Here’s what we see. As the value and amount of unstructured data rises, the data lake will become increasingly popular. But there will always be an essential place for databases and data warehouses.

You’ll probably continue to keep your structured data in the database or data warehouse. But increasingly, companies are moving their unstructured data to data lakes in the cloud, where it’s more cost-effective to store and easier to move when it’s needed. This workflow, which uses the database, data warehouse, and data lake in different ways, works, and works well. We’ll continue to see more of it for the foreseeable future.

If you’re interested in the data lake and want to try to build one yourself, we’re offering a free data lake trial with a step-by-step tutorial. Get started today.



Warning: “Grace license is issued for the site” on NetScaler SD-WAN 10.0

NetScaler SD-WAN 10.0 includes a new centralized licensing feature to accommodate large-scale deployments. New sites and license changes must obtain their licenses through the MCN, which hands licenses down to client nodes through their RCNs over the control data path. If for any reason (e.g. the real license expires, connectivity is broken, a user removes the license, or the system boots for the first time with no license) a client is unable to obtain a license from the MCN, it is issued a Grace License. This provides a grace period of 45 days.

The Grace License bandwidth will be the license rate selected in site configuration.



ATP3.0-Splunk Event Forwarding

I need a solution


I set up ATP Endpoint 3.0 and Splunk 6.6, but it does not work.

There is no data on the dashboard except for “Open Incidents”.

What should I do to display data in other areas?


What I have done so far:

- Splunk settings

  - Installed the Symantec ATP App for Splunk
  - Installed the Symantec ATP Add-on for Splunk
  - Set up the HTTP Event Collector

- ATP settings

  - Configured Splunk Event Forwarding
  - Configured OAuth Clients
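When the dashboard stays empty, one basic thing worth verifying is that events can actually reach Splunk’s HTTP Event Collector. The sketch below only builds an HEC request in Python; the /services/collector/event path and the "Authorization: Splunk <token>" header are standard HEC conventions, while the hostname and token are placeholders for your environment.

```python
import json
import urllib.request

def build_hec_request(host, token, event, port=8088):
    """Build (but do not send) an HTTP Event Collector request.

    host and token are placeholders -- substitute your own collector values.
    """
    url = f"https://{host}:{port}/services/collector/event"
    body = json.dumps({"event": event, "sourcetype": "_json"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Splunk {token}"},
        method="POST",
    )

req = build_hec_request(
    "splunk.example.com",
    "00000000-0000-0000-0000-000000000000",
    {"message": "hec connectivity test"},
)
print(req.full_url)
```

Sending the request with urllib.request.urlopen(req) against a live collector should return {"text":"Success","code":0} when the token and SSL settings are correct; any other response points at the HEC configuration rather than the ATP side.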




Embedding a Customized QR Code in the Invite User Email

This document explains the procedure to add a customized QR code within the Invite User email template that is sent to users as a part of the ZENworks mobile enrollment process. By including a customized QR code, you can share a public or private app with users. Users only need to scan the QR code to download the app on their mobile devices.


The post Embedding a Customized QR Code in the Invite User Email appeared first on Cool Solutions. Abhinandan Narayan



7018684: JoinProxy role shows in server lists on agent although it’s not defined in Location or Network Environment

This document (7018684) is provided subject to the disclaimer at the end of this document.


Novell ZENworks Configuration Management 11.4

Novell ZENworks Configuration Management 2017


Managed devices show JoinProxy servers in their server lists (in zenicon or in the output of zac zc -l) even though the JoinProxy is not defined in any Location or Network Environment server list.


This is fixed in version ZENworks 2017 Update 1 (17.1.0) – see TID 7020155 “ZENworks Configuration Management 2017 Update 1 – update information and list of fixes” which can be found at https://www.novell.com/support/search.do?usemicrosite=true&searchString=7020155

For prior versions, there is a workaround: add all primary and/or satellite servers to each Location or Network Environment explicitly, and select the option to exclude the closest server default rule. Do NOT leave the configuration server list empty, as this will cause agents to lose communication with the zone.

For 11.4.3 contact Micro Focus Customer Care ZENworks Technical Support for an official Field Test File with fix.


In ZCC / Configuration / Infrastructure / Closest Server Default Rule, using the “Verify Server List” option causes the JoinProxy role to be added to the closest server list, and it cannot be removed.

After 2017 Update 1, “Verify Server List” no longer adds JoinProxy servers to the default Closest Server Rule, and removes any that are already there.


This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.



another day – another CIA lemon party gestapo tar budget cyberwarfare company opens office here …

  • https://www.vz.lt/pramone/2017/12/22/lietuvoje-gimsta-kalnakasyba-prie-kauno-statys-anhidrido-kasykla

    Notice the “businessman” doesn’t look nowhere near like a european? Looks like alibaba from iran, taj-mahals, taliban, etc.. Strong “skunk works” look. I tried looking a bit into history of this one man company, since it’s more than 10 years old. The shitbag lives in a very nice house. Big land lot near the lake that is also surrounded by firewall empty land lots under other fake names. All names are fake. Don’t forget that according to local law you can’t even use metal detector as hobby and anything you might find in the ground belongs to the government. Eg you can be arrested simply for walking around with metal detector and finding old tin fork.

    So in this sense you have to keep in mind the story about hell located underground and inhabitants there like devil, satan, Lucifer, etc as another superb product of mighty pig pens of brexit empire. Sure they are balanced together with NASA, cloud technology, house of lords, etc.. At the same time there is no smoke without fire, but you shouldn’t believe even your eyes in times of augmented reality. Try analyzing show such as “carbonaro effect” (maybe it’s a hint about carbon based life forms) as some sort of show and tell about how psychiatric condition can be inflicted on any target. Similar show and tell was movie “after.life”.

    Same story with evangelical christians, rapture, raising of the dead, second coming of the Christ, Antichrist, etc.. Obviously they were preparing for this effect in advance. Now you have to understand something about “second comings” by using wisdom about second time entered rivers as they right away turn to numbered flights of cuckoos nest.

    Read more in thread…


7022670: Error 404 “Host header validation failed…” after updating to Filr 3.3.1

This document (7022670) is provided subject to the disclaimer at the end of this document.


Filr 3 Appliance


After updating Filr 3 to 3.3.1, the web client shows:

ERROR: 404 “Host header validation failed…”


Please contact Micro Focus Customer Care to resolve this issue, referencing this TID.


Filr 3.3.1 introduces an increased level of security around host headers.

This behavior is mainly observed in environments with multiple DNS domains, where the Filr system is accessed using a different DNS domain name than the one configured for the Filr appliance.

It is also observed when Filr is accessed via a reverse proxy or load balancer that is configured with another DNS domain.

For instance:

Filr is running in a DMZ, where all hosts are configured with dms.da.com.

Internal users access a load balancer using the int.da.com domain name.

Users accessing Filr from the internet use a load balancer in the ext.da.com domain.

This causes Filr to be accessed using different DNS domains.
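To make the mechanism concrete, here is a small illustrative sketch of host-header validation. The allow-list logic and hostnames (built from the dms.da.com / int.da.com / ext.da.com example above) are mine, not Filr’s actual implementation:

```python
# Illustrative host-header validation: requests whose Host header is not in
# the trusted set are rejected, much like the 404 described above.
TRUSTED_HOSTS = {"filr.dms.da.com"}  # only the appliance's configured DNS name

def check_host(host_header: str) -> int:
    """Return an HTTP-like status code for a given Host header."""
    return 200 if host_header in TRUSTED_HOSTS else 404

# Direct access with the configured name succeeds:
print(check_host("filr.dms.da.com"))   # 200
# Access through a load balancer in another DNS domain fails:
print(check_host("filr.int.da.com"))   # 404
# Trusting the additional domains (what Support reconfigures) fixes it:
TRUSTED_HOSTS.update({"filr.int.da.com", "filr.ext.da.com"})
print(check_host("filr.int.da.com"))   # 200
```

In other words, the fix is not changing DNS but teaching the appliance which additional host names to trust.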

Additional Information

Micro Focus Customer Support can assist in reconfiguring the Filr appliance so it can be accessed using different trusted DNS domains.


    This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

