SQL 2014 – AlwaysOn Cluster Backup Problem

Dear All,

I need your assistance – I have exhausted all the troubleshooting options and still haven’t been able to get things running.

The problem, basically, is the inability to back up the databases in a cluster. When I run the client, it only backs up databases that are not protected by the cluster…

Environment:

Windows OS : Windows Server 2012 R2

SQL : SQL Server 2014

NetWorker: 9.0.1 build 614

Windows Cluster name: alwayson.ad.xpto.com

Named Instance: LST-REC

Database name: CPI

I assume that there may be a configuration problem, but I wasn’t able to find it until now.

What I did was manually create three clients: one for the cluster name (LST-REC.ad.xpto.com) and two for the physical servers (svwa-bdc-cpd01.ad.xpto.com and svwa-bdc-cpd02.ad.xpto.com).

The configuration for the three clients is the same – the only differences are that the clients for the physical servers are not in any group, and that I start the backup only on the client for the cluster name.

This is the information from the log:

43708:(pid 31960):Start time: Mon Sep 11 11:45:16 2017

153897:(pid 31960):Computer name: SVWA-BDC-CPD01 User name: administrator

Check the detailed logs at ‘C:\Program Files\EMC NetWorker\nsr\applogs\nsrsqlsv.log’.

38571:(pid 31960): Bad switch: -o

38562:(pid 31960): Unsupported savegrp option -o will be ignored

107092:(pid 31960):Version information for C:\Program Files\EMC NetWorker\nsr\bin\nsrsqlsv.exe: Original file name: nsrsqlsv.exe Version: 9.0.1.614 Comments: Supporting SQL 2005, SQL 2008, SQL 2008 R2, SQL 2012 and SQL 2014

104810:(pid 31960):Detected application flag NSR_CONSISTENCY_CHECKS set to ALL in NW resources for client lst-rec.

97546:(pid 31960):Back up of Cpi.QA will be excluded as it is part of Availability group. It can be backed up using SQL federated workflow.

154717:(pid 31960):NSR_SERVER : sfli-bk-cpd01.ad.xpto.com

129292:(pid 31960): Successfully established Client direct save session for save-set ID ‘1471572332’ (svwa-bdc-cpd01.ad.xpto.com:MSSQL$REC:master) with Data Domain volume ‘sfdibkcpd01adxptocom.001’.

53085:(pid 31960):Backing up of master succeeded.

129292:(pid 31960): Successfully established Client direct save session for save-set ID ‘1454795122’ (svwa-bdc-cpd01.ad.xpto.com:MSSQL$REC:model) with Data Domain volume ‘sfdibkcpd01adxptocom.001’.

53085:(pid 31960):Backing up of model succeeded.

129292:(pid 31960): Successfully established Client direct save session for save-set ID ‘1438017909’ (svwa-bdc-cpd01.ad.xpto.com:MSSQL$REC:msdb) with Data Domain volume ‘sfdibkcpd01adxptocom.001’.

53085:(pid 31960):Backing up of msdb succeeded.

Successfully completed databases

==============================

master

model

msdb

lst-rec: MSSQL$REC: level=full, 33 MB 00:00:14 3 file(s)

completed savetime=1505126737


Difference between VNX RAID groups and thin pools

I would like to know in which cases RAID groups and thin pools are each used.

VNX provides the following ways to create volumes:

- RAID group

…composed of a single RAID set

- Thin pool

…composed of multiple RAID sets

My understanding is that, using the Data Mover, a LUN created from one RAID group can be striped with LUNs created from other RAID groups so that they appear as a single volume pool.

Given that, how does a thin pool differ from a volume pool built from multiple RAID-group LUNs, and in which cases is each used?


DCTM 7.2 Data Dictionary issue

Hi,

I always do this with two separate files:

For the type:

LOCALE = en

CODEPAGE = ISO-8859-1

########################################################

########################################################

TYPE = <type_name>

LABEL_TEXT = <label_type_name>

For the attributes:

LOCALE = en

CODEPAGE = ISO-8859-1

########################################################

########################################################

TYPE = <type_name>

ATTRIBUTE = <attribute_name>

LABEL_TEXT = <label_attribute_name>

Then I use the following commands:

./dmbasic -f dd_populate.ebs -e LoadDataDictionary -- <docbase> <Login> <Password> attribute_type_name_en.txt

./dmbasic -f dd_populate.ebs -e LoadDataDictionary -- <docbase> <Login> <Password> type_name_en.txt

Repeat the operation for each locale you need.
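For example, filled in with a purely hypothetical type name (acme_invoice) and attribute (invoice_number), the two files would look like this.

Type file (type_name_en.txt):

```
LOCALE = en
CODEPAGE = ISO-8859-1
########################################################
########################################################
TYPE = acme_invoice
LABEL_TEXT = Invoice
```

Attribute file (attribute_type_name_en.txt):

```
LOCALE = en
CODEPAGE = ISO-8859-1
########################################################
########################################################
TYPE = acme_invoice
ATTRIBUTE = invoice_number
LABEL_TEXT = Invoice Number
```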


Transformation Play 1: The Performance Zone

In today’s uncertain political and economic climate, the ability to maintain business-as-usual should not be taken for granted. After all, an organization’s ‘status-quo’ business model is typically the source of more than 90 percent of its revenues and 100 percent of its profits.

It’s this “performance engine” that allows the ship to explore uncharted, potentially fruitful waters and steer its way safely back on course if things get choppy – what Geoffrey A. Moore calls the ‘Performance Zone’ in his book ‘Zone to Win’.


But striking the balance between keeping the engine room running to make the annual number and scaling a new, riskier, high-potential market is notoriously tough. Even more so when, like so many of the organizations I speak to, they are desperately trying to play catch-up to defend themselves against more agile, digitally centric disruptors in their markets.

How to focus your defensive course 

To these organizations, I say both can be achieved, but it is essential to focus your defensive course. Specifically, manage any self-disruption carefully – you can’t cannibalize your core revenue too quickly, but if you never cannibalize it, one of your competitors eventually will. You also need to make the most of the ecosystem of partners that adds value to your established offering; by entrenching your partner network with you, they can help amplify your efforts. Critically, focus your R&D efforts on neutralization, not differentiation – you already have the power of a huge customer base behind you. You just need to develop an initial offering that’s strong enough to retain them, so that each of your follow-on offerings gets better and better at providing a compelling customer experience.

The key is getting to market quickly; take any innovative assets you are working on in the “incubation” zone and put them into service straight away and focus on integrating them into your current offer.

Using data and analytics to steer the ship more accurately

Ultimately, the engine room relies on data and analytics. It’s all about the thoughtful prediction and management of the numbers to drive gradual, predictable and stable growth. You need to know what your numbers are doing in order to respond more quickly to any disruptive changes in your marketplace.

To use a maritime metaphor: if an iceberg damages the ship, you need to be able to determine why and how it happened, when to replace any damaged parts, and whether to change course (and if so, in which direction).

The average company can only steady itself from this kind of disruption once a year. This makes it all the more important to be able to couple internal and external data with advanced analytics in order to forecast the unseen as accurately as possible and prescribe corrective actions. It’s crucial to identify, validate, understand and either correct or nip underperforming assets in the bud as quickly as possible.

Critically, it’s about anticipating these icebergs BEFORE they emerge – and converting the enterprise into a predictive enterprise instead of a reactive enterprise. The new wave of insight isn’t going to come from looking in the rear-view mirror but will come from predicting disruption and change by looking ahead.

To put this into context, let’s consider a bank.

The bank may well assume, on the basis of historical insight and traditional hierarchy, that its most valuable customers are the largest depositors. These customers may get VIP access to exclusive services, and may get preferential rates and discounts. However, deep analysis of customer behaviors might identify three much smaller groups of depositors who, due to their banking and credit card behaviors, are more profitable for the bank’s bottom line. It’s only through deep analysis of vast amounts of customer and operational data that you can identify and quantify these situations.

And so big data in this context can drive the organization’s “performance zone” by not only helping with customer segmentation and allowing for more effective targeting of products, servicing and pricing, but it can also help mitigate risk and further produce margin by providing agile, real-time insight into potential hazards.

For example, this deep analysis might flag customers who are at risk of default, but it could also predict likely fraud activity across the payments network. Historically, fraud-prevention measures were delivered in broad sweeps. Oh, that credit card isn’t usually in Taiwan, let’s block a transaction from Starbucks on Zhong Xiao Fu Xing Road. Now, however, they can be much more heuristic, learning from each transaction and developing personalized analytic or behavioral profiles that are not only better at picking up genuinely fraudulent transactions but also better at avoiding false positives.
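As a toy illustration of that difference – my own sketch, not any bank’s or vendor’s actual system – a per-card behavioral profile scores each transaction against that card’s own history instead of applying a blanket rule:

```python
# Toy per-card behavioral profile. Instead of a global rule
# ("card never seen in Taiwan -> block"), each card learns what
# is normal FOR THAT CARD and scores deviations from it.

from collections import Counter

class CardProfile:
    def __init__(self):
        self.amounts = []          # history of transaction amounts
        self.countries = Counter() # history of transaction countries

    def update(self, amount, country):
        # Learn from every observed transaction.
        self.amounts.append(amount)
        self.countries[country] += 1

    def risk_score(self, amount, country):
        score = 0.0
        if self.amounts:
            mean = sum(self.amounts) / len(self.amounts)
            if amount > 3 * mean:          # unusually large for this card
                score += 0.5
        total = sum(self.countries.values())
        if total and self.countries[country] / total < 0.05:
            score += 0.5                   # rare country for this card
        return score

profile = CardProfile()
for amt in [4.50, 5.20, 6.10, 4.80]:       # routine small purchases at home
    profile.update(amt, "US")

print(profile.risk_score(5.00, "TW"))      # new country, normal amount -> 0.5
print(profile.risk_score(500.0, "US"))     # home country, huge amount -> 0.5
```

A real system would use far richer features (merchant category, time of day, transaction velocity) and a learned model, but the shape is the same: the profile is updated per card, so what counts as anomalous is personal rather than global.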

Unlocking true customer segmentation and targeting

Everyone wants to better understand their customers and those customers’ behavioral tendencies. Historically they’ve had a one-dimensional view based on structured (RDBMS) customer databases. Now they have the potential to also capture, integrate and analyze semi-structured and unstructured data: e.g. when a customer calls in with a complaint about something they experienced, you can cross-reference that with their Twitter gripes about it and their location data, and provide a much more personalized response. For example, if a false-positive fraud-prevention block does happen on a legitimate credit card purchase, you could provide a 5% cashback payment on the transaction by way of apology, in real time.

This helps minimize customer churn by providing a real-time incentive for loyalty, addressing challenging situations as they emerge. That drives huge savings compared to the cost of recruiting new customers and further boosts margins in the performance zone.

Enabling the performance zone

The key to delivering this capability involves circumventing the everyday pain points of infrastructure management. Organizations need to undergo a data modernization process, migrating information off legacy platforms and removing siloed access to that data for analytics systems. In addition, the platform needs to be elastic in nature – not only allowing for scaling, but providing a 360-degree view of the customer and that individual customer’s behavioral tendencies and preferences at any given point in time. The keyword here is ‘predictive’: it’s not enough simply to monitor the past better; we need to predict the future with real-time analytics to give meaningful guidance to the business.

This is where the data lake enters the equation. By providing a holistic repository for structured and unstructured data that supports both deep and real-time analytics, and which scales performance with capacity, you’re able to build the kinds of intelligent applications that provide and exploit this insight.

Of course, the process of migrating your data into a data lake isn’t necessarily trivial; applications will need to be migrated, and you’ll need to assess where and what data you want to ingest to deliver value to your analytics applications. This is where a consultancy process to map your transformation data may be key, providing a route from where you are to the nirvana of the performance zone.

When you combine the three – scale-out data lake capacity, high levels of compute capacity and a plan for mining the data for business value – the performance zone will be within your grasp.

 

The post Transformation Play 1: The Performance Zone appeared first on InFocus Blog | Dell EMC Services.



Re: About Isilon user mapping

Isilon user mapping generates a single access token from a user’s IDs across all directory services, extending it with that user’s account information from each directory service.

The access token is then compared against the file’s permission data to decide access, so for a file whose access is controlled by an ACL, access is denied unless the ACL allows it, even if the NFS permissions are set to allow full access. (In other words, the ACL settings take precedence.)

Apologies if I have misunderstood your question – please let me know.

References:

https://japan.emc.com/collateral/white-papers/h12417-identities-access-tokens-isilon-onefs-user-mapping-service-wp.pdf (from page 6)

http://japan.emc.com/collateral/software/white-papers/h10920-wp-onefs-multiprotocol.pdf (page 9)
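To make that precedence concrete, here is a minimal Python model – my own illustration, not OneFS code – of the decision: when a file carries an ACL, the ACL is evaluated against the merged access token, and permissive NFS mode bits do not override it.

```python
# Simplified model of multiprotocol access checks: a merged access
# token (identities from all directory services) is evaluated against
# the file's ACL when one exists; POSIX mode bits apply only when the
# file has no ACL.

from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class AccessToken:
    # OneFS merges a user's identities from every directory service
    # (AD SID, LDAP/NIS UID, local account) into one token.
    identities: Set[str] = field(default_factory=set)

@dataclass
class FileEntry:
    mode_bits: int                   # POSIX permissions, e.g. 0o777
    acl_allowed: Optional[Set[str]]  # identities the ACL grants; None = no ACL

def can_read(token: AccessToken, f: FileEntry) -> bool:
    if f.acl_allowed is not None:
        # The file has a real ACL: the ACL is authoritative, however
        # permissive the mode bits happen to look over NFS.
        return bool(token.identities & f.acl_allowed)
    # No ACL: fall back to the POSIX mode bits (world-read check here).
    return bool(f.mode_bits & 0o004)

token = AccessToken(identities={"AD\\alice", "UID:1001"})
acl_denies = FileEntry(mode_bits=0o777, acl_allowed={"AD\\bob"})
no_acl = FileEntry(mode_bits=0o777, acl_allowed=None)

print(can_read(token, acl_denies))  # False: the ACL wins over 777 mode bits
print(can_read(token, no_acl))      # True: mode bits apply when there is no ACL
```

The identity strings and field names here are made up for illustration; the point is only the order of evaluation, which matches the behavior described above.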


How to delete BarCode using Multi module in Captiva 7.6

Hi,

I am using EMC Captiva 7.6. Can anyone tell me how to delete a barcode page using the Multi module in Captiva Designer? The Module Reference contains outdated material and isn’t helping me with Captiva Designer.

Option Explicit

Private Sub Split_Finish(ByVal pRoot As IASLib.IAS_RECORD_7)

    Dim p1 As IAS_RECORD_1                 ' Declare level 1 node

    Set p1 = pRoot.Tree.L1Child(0)         ' Get the first level 1 node in the batch

    While Not p1 Is Nothing                ' Loop through all level 1 nodes

        If p1.Tree.NumChildren(0) = 0 Then ' Delete the node if level 1 is empty...

            p1.DeleteEmpty.Ready = IAMULTI_DELETE

        Else                               ' ...else set the Multi trigger to ready

            p1.DeleteEmpty.Ready = IAMULTI_READY

        End If

        Set p1 = p1.Tree.NextInLevel       ' Go to next level 1 node

    Wend

End Sub

Regards

Shivendra
