Data Catalog: Enabling Self-Service Analytics

A Chinese proverb says, “The best time to plant a tree was 20 years ago; the second-best time is now.”  Let’s assume you’re already up and running with a big data Hadoop platform for advanced analytics use cases.  Perhaps you’ve ingested multi-structured data from disparate sources and are delivering proof-of-concept products.  To organize the data associated with those products, make it easily searchable and findable, and rapidly provision it to your end users, a Data Catalog is necessary.  As with the tree, the best time to implement a Data Catalog (DC) is during the early planning stages; the second-best time is today.

There are various use cases that illustrate why a DC is necessary:

  • Fill in the gaps: You’re deep in the midst of a new failure analysis and find that you’re missing 60% of maintenance start dates due to an error in the archive job; what other data sets might help fill in that missing data?
  • Explore what’s possible: You’re on the hunt for data keys that will let you hop-scotch from application to application; with a multi-step cross-reference, can you finally unlock those measurement logs from a one-time sensor study?
  • Test out hypotheses: Your team talks in anecdotes and examples; can you prove that there IS a seasonal correlation between new customers and off-season items?
  • Streamline or rationalize: You’re starting an application rationalization and want to trace data lineage from the system of record; how many different versions of “the truth” are there?
  • Learn from the traffic: You’re responsible for enterprise data governance, so you want the metadata about the DC; who is looking for what data, and how can you better meet their needs?
  • Find fresher data: The team’s monitoring report runs off of quarterly inventory losses that are allocated monthly to different organizations; can you track down the raw, weekly data so that the team isn’t surprised at month end?

These use cases boil down to three considerations: what data do I have (and therefore don’t have) in the lake, how can I provision data effectively to enable self-service analytics, and how do I classify data so that it is most useful?

What’s in the Lake?

A DC should provide functionality and a user experience similar to those of a brick-and-mortar superstore.  Imagine your consumers needing to find proppant levels for the past six months for an unconventional well.  Like the signposts hanging from the ceiling of your local Costco, the catalog should lead them to the right aisle; for example, Upstream → Production → Unconventional → Region → Well → Proppant → Time-Frame.  Spending some time brainstorming structure and multiple paths to discovery will benefit end users and keep them coming back to the service.
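
To make the aisle metaphor concrete, here is a minimal Python sketch of faceted catalog search, assuming a simple in-memory catalog; the entry name, facet keys, and search helper are illustrative assumptions, not features of any particular catalog product.

    from dataclasses import dataclass, field

    @dataclass
    class CatalogEntry:
        """One data set in the catalog, described by facet -> value tags."""
        name: str
        facets: dict = field(default_factory=dict)

    entries = [
        CatalogEntry(
            name="proppant_volumes_monthly",  # hypothetical data set
            facets={
                "domain": "Upstream",
                "function": "Production",
                "well_type": "Unconventional",
                "region": "Permian",
                "measure": "Proppant",
                "grain": "Monthly",
            },
        ),
    ]

    def search(catalog, **criteria):
        """Return entries whose facets match every supplied criterion."""
        return [
            e for e in catalog
            if all(e.facets.get(k) == v for k, v in criteria.items())
        ]

    # Walk the aisles: domain -> function -> measure
    for hit in search(entries, domain="Upstream", function="Production",
                      measure="Proppant"):
        print(hit.name, hit.facets["grain"])

Each facet corresponds to one of the signposts above, so consumers can reach the same data set by more than one path to discovery.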

Provisioning Best Practices

Once those users have found the right data, how do you get it into their hands?  First, a good relationship with your data source stewards is important; they need to feel secure enough to approve data consumption quickly across many requests, they need line of sight on lineage to track derived data through transformations, and they should help tag the data coming from their respective system(s).

Second, there should be a quick turnaround between request and provisioning; otherwise, end users’ ability to leverage data for business decisions is limited.  As such, the DC should have inherent processes for automating provisioning when and where possible.  DevOps processes and culture can go a long way toward meeting the organization’s need for rapid provisioning.  Change managers are also essential for training those stewards on the tools.
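
As a rough sketch of what automated provisioning could look like, the following Python example auto-approves requests against a steward-maintained pre-approved list and routes everything else for manual review; the request fields, the PRE_APPROVED set, and grant_read_access are hypothetical stand-ins for your platform’s real authorization API (for example, policy updates in a tool like Apache Ranger).

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("provisioning")

    @dataclass
    class AccessRequest:
        requester: str
        dataset: str
        purpose: str

    # Data sets the steward has pre-approved for self-service read access
    # (hypothetical; in practice this list lives with the steward's team).
    PRE_APPROVED = {"proppant_volumes_monthly", "well_header"}

    def grant_read_access(requester: str, dataset: str) -> None:
        """Stand-in for a call to the platform's authorization API."""
        log.info("Granted %s read access to %s", requester, dataset)

    def provision(request: AccessRequest) -> str:
        """Auto-approve pre-cleared data sets; route the rest to a steward."""
        if request.dataset in PRE_APPROVED:
            grant_read_access(request.requester, request.dataset)
            return "auto-approved"
        log.info("Routing request for %s to the data steward", request.dataset)
        return "pending steward review"

    print(provision(AccessRequest("a.analyst", "proppant_volumes_monthly",
                                  "decline-curve study")))

The design point is that the steward defines the policy once, up front, rather than reviewing every individual request.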

Classification

Upon ingestion into the lake, metadata needs to be gathered and the data should be tagged – ideally by a representative (a data custodian) with significant business knowledge who can differentiate and assign tags effectively.  As shown in Figure 1, not all data is created equal, and varying levels of rigor can be applied to tagging, based on the data’s intended use.

[Figure 1: Levels of tagging rigor by intended use]
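
One way to encode those levels of rigor is as tiers of required metadata that the custodian must supply before a data set is published; the tier names and tag fields in this Python sketch are assumptions for illustration, not a standard.

    # Hypothetical tiers: rigor scales with the data's intended use.
    REQUIRED_TAGS = {
        "exploratory": ["source_system", "ingest_date"],
        "operational": ["source_system", "ingest_date", "owner",
                        "refresh_cadence"],
        "regulatory":  ["source_system", "ingest_date", "owner",
                        "refresh_cadence", "retention_policy", "sensitivity"],
    }

    def missing_tags(tags: dict, intended_use: str) -> list:
        """Return the required tags the custodian still needs to supply."""
        required = REQUIRED_TAGS.get(intended_use, REQUIRED_TAGS["exploratory"])
        return [t for t in required if t not in tags]

    tags = {"source_system": "SAP PM", "ingest_date": "2018-07-01",
            "owner": "maintenance_team"}
    print(missing_tags(tags, "regulatory"))
    # -> ['refresh_cadence', 'retention_policy', 'sensitivity']

A catalog can enforce checks like this at ingestion, so exploratory data still lands in the lake quickly while higher-stakes data carries the tags it needs.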

If you’re up and running with your Big Data engine, perhaps you’re comfortable procuring data piecemeal for pilots and the like.  That can work during inception and the early stages, but eventually you will have new ideas coming down the pike and to-be product owners approaching you to understand what’s already in the lake and what they’ll need to source.  Being able to provide that information, and to provision and classify data effectively, will buy credibility and can foster data gravity (the idea that the more data a lake holds, the more data it attracts), which can be a key differentiator in the Enterprise Hub game.

By Tommy Ogden, Senior Manager at Enaxis Consulting
