Failed to migrate

I need a solution

Dear all,

Thanks in advance for your help.

We are trying to upgrade from Symantec DLP 15.1 MP1 to DLP 15.5 MP2.

First we tried to upgrade to 15.5, and we had problems.

Environment Description:
• OS: Red Hat Linux 7.6 (all servers)
• Enforce, Oracle, Network Monitor, and Endpoint are separate machines.

We followed the 15.1 Install Guide chapter on securing communication between the Enforce Server and the database:

"About securing communications between the Enforce Server and the database" (p. 63, Symantec_DLP_15.1_Install_Guide_Lin)

We ran the URT and all evaluated items passed verification, though with a total of 6 warnings.

We then tried to run the Migration Utility, but it failed, which blocks the upgrade.

URT

End  : Sequence Validation – elapsed 2.19s – PASSED

Start: Oracle System Parameter Validation – 2019-11-08 19:25:41
    Parameter Name                 Current Value        Recommended Value
    ------------------------------ -------------------- --------------------
    memory_target                  0                    3072
    pga_aggregate_target           1073741824           0
    sessions                       1524                 1500
    sga_max_size                   3221225472           0
    sga_target                     3221225472           0
    sort_area_size                 65536                0
End  : Oracle System Parameter Validation – elapsed .02s – WARNING (6 warnings)

Start: Data Validation – 2019-11-08 19:25:42
End  : Data Validation – elapsed 0s – PASSED

Data Objects Check Summary: There are total of 6 warnings and 0 errors.
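
For reference, a minimal sketch of how the current values of the parameters flagged above could be confirmed directly from the database over JDBC; the connection URL and credentials are placeholders, and the same query can simply be run in SQL*Plus as a DBA user instead.

    // Hedged sketch: confirm the current values of the parameters flagged by the
    // URT. The JDBC URL and credentials are placeholders; the ojdbc driver from
    // the Oracle client must be on the classpath.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OracleParameterCheck {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@//oracle-db.example.com:1521/protect"; // placeholder
            try (Connection conn = DriverManager.getConnection(url, "system", "changeit");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT name, display_value FROM v$parameter WHERE name IN "
                         + "('memory_target','pga_aggregate_target','sessions',"
                         + "'sga_max_size','sga_target','sort_area_size')")) {
                while (rs.next()) {
                    // Print each flagged parameter and its current value
                    System.out.printf("%-22s %s%n",
                            rs.getString("name"), rs.getString("display_value"));
                }
            }
        }
    }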

MigrationUtility

SEVERE: Failed to create migration action context
com.symantec.dlp.migrationcommon.MigrationException: Failed to create database connection
        at com.symantec.dlp.enforceservermigrationutility.EnforceMigrationContextCreator.createMigrationActionContext(EnforceMigrationContextCreator.java:60)
        at com.symantec.dlp.migrationcommon.MigrationUtility.runMigrationUtility(MigrationUtility.java:108)
        at com.symantec.dlp.migrationcommon.MigrationUtility.runMigrationUtility(MigrationUtility.java:70)
        at com.symantec.dlp.enforceservermigrationutility.EnforceServerMigrationUtility.runMigration(EnforceServerMigrationUtility.java:17)
Caused by: java.sql.SQLRecoverableException: IO Error: General SSLEngine problem, connect lapse 90 ms., Authentication lapse 0 ms.
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:794)
        at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:39)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:691)
        at com.symantec.databasemigration.OracleConnectionProvider.getConnection(OracleConnectionProvider.java:22)
        at com.symantec.dlp.enforceservermigrationutility.EnforceMigrationContextCreator.createMigrationActionContext(EnforceMigrationContextCreator.java:56)
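
For reference, a minimal standalone JDBC check (not part of the original logs; the host, port, service name, truststore path, and credentials below are placeholders, and the ojdbc driver from the Enforce installation must be on the classpath) can help confirm whether the TCPS/SSL handshake itself fails outside the Migration Utility.

    // Minimal connectivity sketch: attempts the same kind of TCPS logon the
    // Migration Utility performs. All connection details are placeholders;
    // adjust them to your environment before running.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class TcpsConnectionCheck {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@(DESCRIPTION="
                    + "(ADDRESS=(PROTOCOL=TCPS)(HOST=oracle-db.example.com)(PORT=2484))"
                    + "(CONNECT_DATA=(SERVICE_NAME=protect)))";

            Properties props = new Properties();
            props.setProperty("user", "protect");
            props.setProperty("password", "changeit");
            // Standard Oracle thin-driver SSL properties: point the driver at the
            // same truststore material the Enforce server is configured with.
            props.setProperty("javax.net.ssl.trustStore", "/opt/oracle/wallet/truststore.jks");
            props.setProperty("javax.net.ssl.trustStoreType", "JKS");
            props.setProperty("javax.net.ssl.trustStorePassword", "changeit");

            try (Connection conn = DriverManager.getConnection(url, props)) {
                System.out.println("TCPS connection OK: "
                        + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }

If this check fails with the same "General SSLEngine problem", the issue is in the certificate/truststore setup rather than in the Migration Utility itself.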

Thanks.


Related:

How USC’s Person Data Integration Project Went Enterprise-Wide

Like other institutions, USC has many different legacy and modern systems that keep operations running on a daily basis. In some cases, we have the same kind of data in different systems; in other cases, we have different data in different systems. The majority of our student data is in the Student Information System, which is over 30 years old, whereas the majority of our staff data is now in cloud software with APIs. At the same time, the majority of our financial data is in another system, and the faculty research data is in yet another system. You get the idea.

Naturally, all of these different systems make it hard to combine the data to make better decisions. The many different ways to extract data from these systems, along with different time intervals, make it easy to make mistakes each time a report needs to be created. Even though many groups on campus need to extract and combine the same kind of data, each group has to do the extraction work individually, which wastes time and resources across the entire organization.

There is no single system that can accommodate all of the different functions USC needs to operate; USC simply does too much: academics, athletics, healthcare, research, construction, HR, and so on. Because of this, the solution is to systematically integrate the data from the different systems once, so every department on campus can access the same data while the different systems continue to operate.

The Person Entity Project

At a high level, the Person Entity (PE) Project is essentially a database of everything applicable to a Person. This includes every kind of person – pre-applicant, applicant, admit, student, alum, donor, faculty, staff, etc. It aims to be a centralized, high-integrity database, encompassing data from multiple systems, that can supply data to every department at USC. Such complete, high-quality data encompassing every Trojan’s entire academic and professional life at USC can then motivate powerful decision-making in recruiting, admissions, financial aid, advancement, advising, and many other domains.

As one of the original members of the team that started the PE as a skunkworks project, I have seen some of the challenges of transforming a small project into an enterprise-backed service.

Where It All Began

The project came about shortly after Dr. Douglas Shook became the USC Registrar. He wanted the data at USC to be more easily accessible to the academic units on campus as well as to the university for its business processes. The existing business processes for getting and using data were filled with manual text-file FTPs, Excel files emailed back and forth, nightly data dumps, and so on; delayed, incomplete, costly data was the norm.

The odds were against us because there had been numerous past attempts to transition out of our custom-coded legacy student information system that died mid-project. Though our project was not the same as those failed ones, it was similar enough that if we could move the data into a relational database and make it more accessible to other systems, the eventual transition would be much easier. There was not a lot of documentation on the student information system, and there were 30 years' worth of custom code and fields in it. We were also using a message broker for the first time to connect to a legacy system like ours.

Working on this project was rewarding and worthwhile because we were able not only to overcome technical hurdles, but also to get some early wins for the project, such as providing data for a widely used applicant portal and delivering updated data faster than the existing method. We steadily grew our list of data consumers, and now our service provides data for the entire University, both schools and administrative units.

If you are currently working on a small experimental project in a large organization, here are some key drivers to keep in mind as you embark on your journey.

1. Create your own success criteria.

Good success criteria for a small project are the small proofs of concept that can be completed for the people on campus who are interested in your services or have not had all of their needs met by one of the enterprise services. The initial ROI will not be as high as for other projects, so don't measure things by the monetary amount. For us, the success criteria were things like successfully moving data from one system to the other and reducing the number of hours a process took to reach the same result. We also kept a timeline of the "firsts" the project had: the first successful push of data, first database tables created, first triggers, first procedures, first production database, first integration with another system, first data client, and so on. We could show the sponsors that we were making progress, and we had a growing list of people who were excited about the problems we could solve for them. By keeping the lines of communication open about our progress, we were given the time and funding we needed to keep working on the project.

2. Minimize spending to maximize your budget.

Don't be afraid to ask others in your organization for help. In my organization, we piggybacked on other internal organizations' licenses to lock in lower renewal prices rather than signing on as a new customer. Another way to cut costs is to use student workers or interns for research and development work. Student workers bring a different set of skills along with some challenges, but they have been instrumental in our project. In our case, the project started with only one full-time member and three to four student workers; I was one of them. We did the initial data modeling, some requirements gathering, and the coding as well. Though our work was far from perfect and we needed some guidance from full-time employees, the team was able to gain traction on the project by accomplishing tasks with the students' help at a very low cost.

I would also suggest doing a cost-benefit analysis for software purchases. For example, we decided to purchase a professional data modeling tool even though we could have continued using Visio, because the money and time we would spend fiddling with Visio would exceed the license fee in the long run. Lastly, find others willing to partially sponsor the project with equipment; for instance, you can ask other groups for a slice of their virtual machine capacity while you are just getting started with development.

3. Work in bite-sized chunks.

It's easy to be overwhelmed if your project scope is large. That's why it's extremely useful to do proofs of concept and pilot projects. For us, the project was so large that if we had tried to plan everything out at the very beginning, we would have been too overwhelmed to even start. The goal of the project was to migrate all of the data over to the Oracle relational database. We needed to divide the system into different data domains and start with one or two; we chose to do Admissions and Person first. This was a little unconventional, I think, but we deployed some tables and created some procedures to populate them just so we could get started, before the model was completely finished. As a small project it is usually okay to be wrong, because you can just start over!

4. Balance selling/promoting the project with working on the project.

The project sponsor is an important member of the team on this point. If there is no interest in your project it will die, but if there is too much interest and too many expectations, you will fall short of them and people will lose faith in the project. Both selling and building are important, but too much of one can be fatal. Our sponsor pitched the project as a solution to the problems USC was trying to solve, showed how much faster and easier things would be, and got people excited about using it and lining up to be the next customer. We started partnering with other groups on campus and doing proofs of concept together.

5. Base your design choices on mass adoption and impact.

When working on the project, you need to think about what the best design choice is once the project is in production. This is easier said than done because the future is unknown. Essentially, don't knowingly make design choices that will cripple your project once it is in production. Think of the potential benefits to the organization if what you are working on is implemented at scale. It is a balancing act between getting things out and getting things perfect; find the balance your organization is comfortable with. I think it is different for each feature, and in our case, for each data field. In other words, prioritize the features so you can pay special attention to the high-priority ones.

6. Document everything.

Documentation speeds up onboarding of new members and helps you remember the rationale behind the decisions you made at the time. This will save you a lot of time down the road. Now that we are in year five of the project, we sometimes come back to old code and tables and wonder why we designed them the way we did; this is not ideal, and we should have done a better job of documentation. This is less of an issue with the newer portions of the project, but in the beginning we did not document much, and that has slowed us down recently. Members of the team could leave during a critical stage of the project, and without documentation it is very hard to progress at the same rate. Also, at least version your documentation, so even if you don't make a habit of updating it after things change in production, you will at least know when it was last updated.

7. Embrace change.

There's always a lot of change to expect when working on a small project. Decisions about the future of the project can be made at almost any time, especially since very few services are being provided in the beginning. Those decisions could be made without you in the room as well. Funding and resources can be reallocated, which can severely impact your project. We were lucky in that none of this happened to us, but I think that is because we were able to overcome the major technical roadblocks early on.

Now that we are an enterprise-backed service, things have definitely changed, and though I am proud that the project has progressed so far, I am a little nostalgic for the time when we were hitting milestones what felt like every week. Definitely enjoy the time when the project is just starting and small, because it will never get smaller after it gains momentum.

The Person Entity project team now has around three full-time members, including one focused strictly on data quality. The project is now part of a larger data team with a portfolio of data systems to manage, such as Tableau and Cognos. We provide some or all of the data and dashboards for schools and administrative units across campus, such as Financial Aid, the Registrar, and Admissions. We now have integrations with cVent, Campaign!, the mandatory education modules, and myUSC, with plenty more on the way. We are still far from completing our task of extracting all the data from our legacy student information system, and we will steadily continue to work on it, as well as on moving our entire infrastructure to the cloud.

Guest author Stanley Su is currently a data architect on the Enterprise Data and Analytics team at USC. Stan was one of the original members of the Person Entity project, an enterprise data layer with integrated data from multiple systems at USC, and is the current lead on the project. During the fall semester, he also TAs a database class at the Marshall School of Business. Stan is interested in using technology to increase business efficiency and reduce repetitive tasks at work.

Related:

Data Protection is More Than an Insurance Policy – It’s an AI Must-Have

Data is king.

In a world reshaped by digital transformation, data has become an integral piece of the decision-making process within an organization. All business models today are built around data, enabling leaders to make big decisions to increase revenue, decrease cost, and reduce risk. However, too often organizations view the protection of that data as overhead or an insurance policy, rarely ever looking at the data they've actually stored. That stored data holds a lot of value that, if accessed, could be a game changer when it comes to business insights and intelligence.


Organizations need to start thinking about data management by considering the following questions:

  1. How accessible is your data?
  2. How likely is it that your data will survive multiple disasters?
  3. What meta-data do you need?
  4. How important is the privacy and security of your data?

And of course, there are industry trends that will shape the way you think about data management and protection.

With Artificial Intelligence and Deep Learning, risk and responsibility lead to big outcomes.

Over 70 percent of enterprise companies are expected to leverage AI by the year 2021. Innovation in specialized hardware is accelerating deep learning capabilities in the data center, and machine learning algorithms can now derive business insights from historical data that drive growth.

Building AI models and instantiating them is a vital investment – which of course, comes with risk.  That data is incredibly valuable.  The reality is that at some point your valuable data may be lost, damaged, corrupted, or compromised – the equivalent of taking 500 of the smartest people at your company and having them disappear.

Data protection, and the accessibility of that data, has become more important than ever for the future of the data center and everywhere data lives, including the cloud and the edge. This is more than just an insurance policy; it is critical to ensuring your most valuable data assets are there when you need them to drive growth and a competitive market position.

Making your data protection intelligent is complicated.

Data protection today is a very complex task. There are a lot of manual configurations to consider, a need for specialized administrators, and an intense process for accessing historical data.

Fast forward to the future: we would like a fully autonomous system that isn't just storing data arbitrarily, but using compute capacity to gain insights and develop a model that becomes the brain of your business. It would organize the data so that algorithms, results, trends, and trained models could extract the most value from it. Think about what this could do!

When you consider data protection – protecting and trusting where your data lands – it's not just about making sure the data is resilient. It's about making sure that all the effort you expended to develop an AI learning system over an extended period of time – the code and the data – lands somewhere it is actually protected. The system must be intelligent enough to know where to store the data and what metadata is needed to reproduce it, and be able to predict possible data loss events and prepare for them in advance. And intelligence drives simplicity.

New storage technologies give data protection a boost.

Most protection data is kept on spinning disk. As new storage media become cheaper, high-capacity QLC flash devices continue to grow in capacity without increasing in price. Systems arriving on the market can store secondary data (i.e., backup data) on much faster storage devices while still retaining the price and efficiency of a data protection system.

Future protection storage will be integrated into an AI system, and thus insights from past data will be available.

Being able to gain insights directly from the secondary protection system will also reduce some of the load and capacity pressure on the primary storage arrays, allowing wider adoption of new storage technologies like storage class memory (SCM) for primary storage.

It's an AI, new-application, multi-cloud, and IoT must-have!

As data becomes critical to the organization, a simplified, coherent data protection plan is required for new application development mechanisms too. Not only that, but more and more enterprises are moving to hybrid-cloud and multi-cloud strategies, meaning data resides not only on premises or in a single cloud, but across multiple different clouds. Over 20 percent of enterprises believe that they will use more than five separate clouds in the future. This makes management and protection of the data an even more complex task, but it also opens opportunities. In addition, IoT devices that generate huge amounts of data pose a challenge for proper data management and protection strategies. I plan to explore all of this and more in future posts.

As we can see, the value of business data paired with the reality of artificial intelligence has created a new world order. Our research shows that data protection initiatives were among the most common initiatives undertaken by companies looking to transform and modernize their IT. The outcome? A robust, current environment adapted to keep up with the data generated by artificially intelligent infrastructure. Data protection is not merely an insurance policy; it's a must-have for making the big decisions and gaining the insights needed to stay competitive in this digital landscape.




Related:

Data Capital: Separating the Winners and Losers of Tomorrow



It's undeniable: digital transformation is reshaping the world around us. Analyst firm IDC predicts that by 2021, at least 50 percent of global GDP will be digitized, with growth in every industry driven by digitally enhanced offerings, operations, and relationships.1 Less obvious, however, is the fact that data is driving this change. In fact, the ability to put data to use is what will separate the winners and losers of tomorrow.


Data is rapidly becoming any organization's most valuable asset and will be a strong indicator of future success. IDC also predicts that by 2020 investors will use platform, data value, and customer engagement metrics as valuation factors for all enterprises.1

Dell EMC discovered examples of valuations heavily influenced by data during a recent data valuation project with Dr. Jim Short of the San Diego Supercomputer Center.  Notable examples include:

  • The most valuable asset in Caesar's Palace's bankruptcy filing was its Total Rewards customer loyalty database, valued at one billion dollars.
  • Ninety percent of the value from LinkedIn’s $1.5B acquisition of Lynda.com was attributed to their data.
  • Tesco internally valued their Dunnhumby data asset, which contained the shopping habits of some 770 million shoppers, at over a billion dollars.

View the entire report – MIT’s Sloan Management Review

It’s time to change the way you look at your data

The data platform shouldn’t be an afterthought or ancillary decision that you cede to a technology provider; it must be a centerpiece in your decision making to achieve success.

Your focus must be on maximizing your Data Capital.

We define Data Capital as:

Wealth in the form of value derived from organizational data. Used to power digital experiences and/or unlock business insights. A source of competitive differentiation across all industries.

As we’ve seen, this can be the difference between success and failure in this digital age.

However, making use of data isn’t easy

Organizations that were not born in this new digital world have to augment or adapt existing business processes and technologies to leverage their data. They must also account for the massive amounts of unstructured data new digital experiences will create and consume. It is estimated that Unstructured Data will account for 80 percent of all data.

When looking at how to unlock your Data Capital, there are three challenges to overcome:

  1. Data often goes unused. Analysts estimate only 0.5 percent of data is ever put to use. Data is often siloed and strewn across an organization's footprint, limiting its usefulness. Without data consolidation, gathering and exposing it to the right users and applications requires substantial effort.
  2. Keeping up with data growth can stifle innovation. As data grows, management overhead, infrastructure costs, and the inability to rapidly add capacity can be problematic. It is vital to optimize infrastructure and processes so you can continue innovating while keeping costs in check.
  3. Properly aligning data with organizational goals. As my colleague Bill Schmarzo is fond of saying, “You don’t need a big data strategy, you need a business strategy that utilizes big data.” While it’s not always apparent, it is critical to ask the right questions and be thoughtful about the role data will play in your success.

Crossing the Data Divide

No matter the industry, data will become increasingly important in differentiating among competitors. Ultimately, successful organizations will need to overcome these hurdles to put their data capital to use, while those who do not take the necessary steps to integrate their data will find it difficult to succeed in this digital world.

In the coming weeks, we will discuss ways in which you can set your strategy to maximize your data capital, and provide real-world examples of companies that have already begun this journey.


1 IDC FutureScape: Worldwide IT Industry 2018 Predictions, Oct 2017, Doc #US43171317




Related:

[Data Geeks Podcast]: Turning Big Data into Actionable Insights – Our Challenges and Successes



Last week I introduced a new podcast series, A Conversation with Two Data Geeks, to tell the story of how we are digitally transforming Dell EMC customer support. Today I want to share the second installment in this three-part series. As with any endeavor where you’re breaking new ground, there are bound to be challenges and successes along the way. We’ve had our share of both! 

Challenges

As we worked across teams in our company to break down data walls in order to gain access to all the data sources needed to build a comprehensive data lake, we faced our first challenges. Initially there was some resistance to sharing and moving data. It was also critical that we find a platform on which to centralize this data and enable “analytics at the speed of thought,” as Michael Shepherd likes to say.

Next was a shift in mindset across the organization from thinking of big data as strictly IT-enabled to thinking of it as IT- and business-enabled. Giving all members of an organization, not just IT, access to discover and share data-driven insights allows a company to make smarter decisions across its operations and start to transform. This co-creation model is a newer way of thinking that brings greater value to an organization.

Successes

At the core of all our efforts is the goal to improve the customer support experience. This is why Dr. Rex Martin shared his team’s vision for the data visualization-based technology — MyService360 — with some of our customers at the outset to get their feedback. Many were enthusiastic about the idea. These conversations helped his team prioritize the feature sets they would deliver in the initial release, focusing on the features customers wanted most and found most impactful. When MyService360 was released, those customers were thrilled to see the speed at which this technology had been created and were interested to learn how they could utilize it to manage their data centers.

Another milestone came when Michael Shepherd and his colleagues were able to leverage part-supplier data such as motherboard testing results, dispatch data, call logs, and repair data. Being able to combine pre-purchase data with post-purchase data provides even more value to customers and speaks to how the technology culminates in a full 360-degree view of the product lifecycle.

Data-driven Insights

So, how do we know when we’ve been successful? There’s no better proof than hearing directly from customers that our intelligent technologies help them run their businesses better. Here’s a quote from a MyService360 user about how its big data insights impact their business:

MyService360 has definitely helped us to be more proactive because we can see everything that’s going on in our environment. It is really easy to use, very intuitive.

– Roberto Hurtado, Enterprise Storage Engineer, CHRISTUS Health

The “everything” Roberto is referencing includes insights such as:

  • Health and Risk Scoring – displays proactive and predictive system health indicators to identify areas that may be at risk, giving customers time to prioritize actions and react accordingly.
  • Code Levels – analyzes the percentage of the global install base that is up to code, with the ability to drill down to determine which specific systems are due for code upgrades.
  • Actionable Service Insights – allows customers to review IT service activities across their enterprise and dive into specific sites to understand what needs attention and the action required.
  • Connectivity Status – displays the percentage of the install base that's remotely connected to Dell EMC so customers can take action to get remaining systems connected.
  • Incident Management – taps into proactive data to identify analytical trends on service incidents.

In addition to positive customer feedback, MyService360 and SupportAssist, the technologies that provide insights and improve the experience for our customers, have collectively won two TSIA (Technology Services Industry Association) STAR awards, including Innovation in the Transformation of Support Services and Innovation in Leveraging Technology for Service Excellence.

These insights are also helping our own Dell EMC IT team deliver exceptional service to internal customers. The team currently uses SupportAssist to monitor over 90,000 Dell EMC employee PCs. Proactive and predictive alerts are generated by SupportAssist and automatically entered in the team’s help desk application for easy incident tracking, global visibility and fast resolution.

Making It Real

Big data intelligence can feel complex and intangible. It's hard to truly appreciate its impact until you hear stories of how it solved real problems. On this week's podcast, Michael Shepherd discusses how our big data intelligence solved customer issues. A geo-services company with over 550 Dell EMC PowerEdge servers across locations in England and Texas noticed its systems were running slowly. The customer thought this was due to a memory issue in a single location. By leveraging the power of big data, the Dell EMC Services team was able to find the root cause of the issue and discovered it was much larger than the customer initially perceived, impacting multiple locations. Within three days the issue was not just accurately identified, but fully resolved.

A second customer, based in India, was preparing to deploy over 12,000 PowerEdge servers. The servers had been built in groups over time, resulting in different BIOS versions. The customer needed visibility into each server's BIOS so they could bring all the servers to the same version. Using our big data intelligence, we were able to get this information in an hour. If done manually, the customer estimated it could have taken a full-time resource up to four months, as he or she would have had to turn on each system one by one. This is why connectivity and the ability of our data lake to pull data from sources throughout the product lifecycle are so important.

These are just a few examples that illustrate the type of clarity big data technologies provide when approaching problems that could have significant impact on a customer’s environment and business. Now, I invite you to join us for our next podcast installment.

A Conversation with Two Data Geeks | Part 2 of 3: Turning Big Data into Actionable Insights – Our Challenges and Successes

Like many companies, we have had to tear down data walls and ensure data quality as part of our digital transformation. Hear how we overcame those challenges in order to build award-winning, intelligent technologies – SupportAssist and MyService360 – for our customers.

What challenges and successes has your team faced when using big data intelligence in your digital transformation journey? Please share your experiences below. I look forward to posting the final installment in our series next week.

In case you missed it:

Did you miss part 1 of this podcast series? Listen now: [Data Geeks Podcast]: How Big Data Intelligence and Connectivity Enhance Your Support Experiences.


Speakers:

Rex Martin, Ph.D. | Director, Advanced Proactive Services

Dr. Martin holds 8 U.S. patents in cognitive computing and artificial intelligence (AI). His current focus is leveraging Deep Learning methods to enable a competitively differentiated customer service experience. In 2016, his team won the Technology Services Industry Association award for Innovation in the Transformation of Support Services for MyService360. He has also received Presidential awards for innovation from EMC, Oracle and Sun Microsystems.


Michael Shepherd | AI Research Technologist

Michael holds U.S. patents in both hardware and software and is a Technical Evangelist who provides vision through transformational AI data science. With experience in supply chain, manufacturing and services, he enjoys demonstrating real scenarios with the SupportAssist Intelligence Engine showing how predictive and proactive AI platforms running at the “speed of thought” are feasible in every industry.




Related:

Information Analyzer thin client workspace is empty

The customer installed GOV RUP 7, and since then the workspace overview is not visible and the data analysis information is not displaying correctly; for example, the dashboard results and table status are incorrect. Data sets also show a data quality status of "in queue", and operations like "run quality analysis", "Remove", and "Publish" are disabled.

Related:

How do I set up field data validation or a RAM that ensures the value input for a pay rate field cannot be less than our minimum hourly rate?

I am trying to set up some type of field data validation or RAM that will ensure the value input for a pay rate field cannot be less than our minimum hourly rate. I have tried the Regex Validation Tool and creating a RAM, and neither of those gives me the options I need for this data validation. Am I missing something, or is there another way to accomplish what I need?
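
As a general illustration only (independent of the Regex Validation Tool or RAM features in any specific product), a minimum-rate check is a numeric comparison rather than a pattern match, which is why a regex alone is a poor fit. A minimal sketch, with the minimum rate as a placeholder value:

    // Generic illustration: validate that a pay-rate input is numeric and not
    // below a minimum hourly rate. MINIMUM_HOURLY_RATE is a placeholder.
    import java.math.BigDecimal;

    public class PayRateValidator {

        private static final BigDecimal MINIMUM_HOURLY_RATE = new BigDecimal("15.00"); // placeholder

        /** Returns an error message, or null if the input is acceptable. */
        public static String validate(String input) {
            BigDecimal rate;
            try {
                rate = new BigDecimal(input.trim());
            } catch (NumberFormatException e) {
                return "Pay rate must be a number.";
            }
            if (rate.compareTo(MINIMUM_HOURLY_RATE) < 0) {
                return "Pay rate cannot be less than the minimum hourly rate of "
                        + MINIMUM_HOURLY_RATE + ".";
            }
            return null; // valid
        }

        public static void main(String[] args) {
            System.out.println(validate("12.50")); // below minimum -> error message
            System.out.println(validate("18.00")); // acceptable -> null
        }
    }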
