Bill Schmarzo

Driving AI Revolution with Pre-built Analytic Modules

What is the Intelligence Revolution’s equivalent of the ¼” bolt?

I asked this question in the blog “How History Can Prepare Us for Upcoming AI Revolution?” when trying to understand what history can teach us about technology-induced revolutions. One of the key capabilities of both the Industrial and Information Revolutions was the transition from labor-intensive, hand-crafted solutions to mass-manufactured ones. In the Information Revolution, it was the creation of standardized database management systems, middleware and operating systems. In the Industrial Revolution, it was the creation of standardized parts – like the ¼” bolt – that could be assembled rather than hand-crafted into solutions. So, what is the ¼” bolt equivalent for the AI Revolution? I think the answer is analytic engines, or modules!

Analytic Modules are pre-built engines – think Lego blocks – that can be assembled to create specific business and operational applications. These Analytic Modules would have the following characteristics:

  • pre-defined data input definitions and a data dictionary (so the module knows what type of data it is ingesting, regardless of the source system).
  • pre-defined data integration and transformation algorithms to cleanse, align and normalize the data.
  • pre-defined data enrichment algorithms to create the higher-order metrics (e.g., reach, frequency, recency, indices, scores) required by the analytic model.
  • algorithmic models (built using advanced analytics such as predictive analytics, machine learning or deep learning) that take the transformed, enriched data and generate the desired outputs.
  • a layer of abstraction (perhaps using the Predictive Model Markup Language, or PMML[1]) above the predictive analytics, machine learning and deep learning frameworks that lets application developers use their preferred or company-mandated standards.
  • an orchestration capability to “call” the most appropriate machine learning or deep learning framework for the type of problem being addressed. See Keras, a high-level neural networks API, written in Python and capable of running on top of popular machine learning frameworks such as TensorFlow, CNTK or Theano.
  • pre-defined outputs (APIs) that feed the analytic results to downstream operational systems (e.g., operational dashboards, manufacturing, procurement, marketing, sales, support, services, finance).

Analytic Modules produce pre-defined analytic results or outcomes, while providing a layer of abstraction that enables the orchestration and optimization of the underlying machine learning and deep learning frameworks.
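To make those characteristics concrete, here is a minimal sketch in Python of what an Analytic Module interface could look like. The class, field and function names are my own illustrative assumptions, not an API from this article; the `transform`, `enrich` and `model` callables stand in for the pre-defined algorithms described above.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class AnalyticModule:
    """A pre-built analytic engine: pre-defined inputs, transforms and outputs."""
    name: str
    input_schema: Dict[str, type]        # pre-defined data input definitions / dictionary
    transform: Callable[[dict], dict]    # cleanse, align and normalize the data
    enrich: Callable[[dict], dict]       # higher-order metrics (scores, indices)
    model: Callable[[dict], dict]        # the algorithmic model itself

    def validate(self, record: dict) -> None:
        # Reject data that does not match the pre-defined data dictionary.
        for field_name, field_type in self.input_schema.items():
            if not isinstance(record.get(field_name), field_type):
                raise ValueError(f"field {field_name!r} missing or not {field_type.__name__}")

    def run(self, record: dict) -> dict:
        # Fixed pipeline: validate -> transform -> enrich -> model -> output.
        self.validate(record)
        return self.model(self.enrich(self.transform(record)))

# Assembling a toy "vibration anomaly" module from the pieces (hypothetical example):
vibration = AnalyticModule(
    name="vibration-anomaly",
    input_schema={"vibration_mm_s": float},
    transform=lambda r: {**r, "vibration_mm_s": max(0.0, r["vibration_mm_s"])},
    enrich=lambda r: {**r, "score": r["vibration_mm_s"] / 10.0},
    model=lambda r: {"anomaly": r["score"] > 0.8},
)
```

Because the pipeline and schemas are fixed up front, a module built this way can be dropped into different applications without the caller knowing which framework sits behind `model` – which is the point of the abstraction layer.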

Monetizing IoT with Analytic Modules

The BCG Insights report titled “Winning in IoT: It’s All About the Business Processes” highlighted the top 10 IoT use cases that will drive IoT spending including predictive maintenance, self-optimized production, automated inventory management, fleet management and distributed generation and storage (see Figure 1).

Figure 1:  Top 10 IoT Use Cases That Will Drive IoT Market Growth

But these IoT applications will be more than just reports and dashboards that monitor what is happening. They’ll be “intelligent” – learning with every interaction to predict what’s likely to happen and prescribe corrective action to prevent costly, undesirable and/or dangerous situations – and the foundation for an organization’s self-monitoring, self-diagnosing, self-correcting and self-learning IoT environment.

While this is a very attractive list of IoT applications to target, treating any of these use cases as a single application is a huge mistake. It’s like the return of the big-bang IT projects of the ERP, MRP and CRM days, where tens of millions of dollars are spent in hopes that, 2 to 3 years later, something of value materializes.

Instead, these “intelligent” IoT applications will be comprised of analytic modules integrated to address the key business and operational decisions at hand. For example, think of predictive maintenance as an assembly of analytic modules addressing decisions such as:

  • predicting at-risk component failures.
  • optimizing resource scheduling and staffing.
  • matching technicians and inventory to the maintenance and repair work to be done.
  • ensuring tool and repair equipment availability.
  • optimizing first-time-fix rates.
  • optimizing parts and MRO inventory.
  • predicting component fixability.
  • optimizing the logistics of parts, tools and technicians.
  • leveraging cohort analysis to improve service and repair predictability.
  • leveraging event association analysis to determine how weather, economic and special events impact device and machine maintenance and repair needs.

As I covered in the blog “The Future Is Intelligent Apps,” the only way to create intelligent applications is to have a methodical approach that starts the predictive maintenance hypothesis development process with the identification, validation, valuing and prioritizing of the decisions (or use cases) that comprise these intelligent applications (see Figure 2).

Figure 2:  Thinking Like A Data Scientist

As you take your business and operational stakeholders through the “Thinking Like A Data Scientist” process to uncover those decisions, it only makes sense to create Analytic Modules that address the specific advanced analytic and operational data requirements supporting those decisions. Consequently, these analytic modules, if constructed using modern DevOps methodologies and capabilities, can be linked together like Lego pieces to create these intelligent IoT applications.

IoT Analytic Modules

One example of an IoT Analytic Module is anomaly detection: the identification of items, events or observations that do not conform to an expected pattern or to the other items in a dataset (see Figure 3).

Figure 3:  Anomaly Detection Example

A substantial change in normal behavior may indicate the presence of intended or unintended attacks, faults or defects. A number of different machine learning techniques can be used to help flag and assess the severity of detected anomalies, including:

  • k-Nearest Neighbor (k-NN): in pattern recognition, the k-nearest neighbors algorithm is a non-parametric method used for classification and regression.
  • Neural Networks: a series of algorithms that identify underlying relationships in the data by using layers of interconnected nodes. Neural networks have the ability to adapt to changing input so the model produces the best possible result without the need to redesign the output criteria.
  • Decision Trees: a decision support tool that uses a tree-like graph to model decisions and their possible consequences, including chance event outcomes, resource costs, and utility.
  • Support Vector Machine: in machine learning, support vector machines are supervised learning models that analyze data used for classification and regression analysis.
  • Self-organizing map:  a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a two-dimensional map of the training data to aid in dimensionality reduction.
  • k-means clustering: k-means clustering partitions n observations into k clusters in which each observation belongs to the cluster with the nearest mean.
  • Fuzzy C-means: a form of fuzzy clustering in which each data point can belong to more than one cluster.
  • Expectation-Maximization (EM): an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models.
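As a concrete illustration of the first technique in the list, here is a small, dependency-free Python sketch of k-NN-based anomaly scoring: each point is scored by its mean distance to its k nearest neighbours, so points that do not conform to the rest of the dataset receive the highest scores. The function name and toy data are my own, not drawn from the cited overview paper.

```python
def knn_anomaly_scores(points, k=3):
    """Score each point by its mean Euclidean distance to its k nearest neighbours.

    Points far from all of their neighbours (high score) do not conform to the
    pattern of the rest of the dataset -- the working definition of an anomaly.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    scores = []
    for i, p in enumerate(points):
        # Distances to every other point, sorted so the k nearest come first.
        neighbours = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(neighbours[:k]) / k)
    return scores

# A tight cluster of sensor readings plus one far-away reading:
# the outlier at index 4 scores highest.
readings = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (0.1, 0.1), (5.0, 5.0)]
scores = knn_anomaly_scores(readings, k=3)
```

In practice the scores would be thresholded (or ranked) to decide which readings to flag, and a library implementation would replace the O(n²) distance loop with an indexed neighbour search.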

One technique in particular that is gaining traction for anomaly detection is adaptive resonance theory (ART), which has been used extensively to detect network intrusions. In network management, anomalies are often malicious intrusion attempts that represent a serious threat to security.

Real-world Anomaly Detection Case Study

Power-generating facilities and industrial plants need to maximize operational efficiency by optimizing operating conditions as fuel and raw material lots change, equipment ages, and so on. In a joint effort with Universiti Teknologi PETRONAS, Hitachi has developed high-efficiency operational support technologies for industrial plants that use advanced anomaly detection to detect equipment and operating anomalies.

Conventional anomaly diagnosis technologies, which are based on initial conditions, flag even fluctuations within normal operating ranges as “anomalous,” so they have been difficult to apply where the appropriate conditions can change on a daily basis.

Hitachi’s newly developed technology, however, employs a sequential-learning data classification technique: adaptive resonance theory (ART). Because ART can teach a system the “normal” conditions that correspond to a wide range of operating states, anomalies can be detected accurately (see Figure 4).

Figure 4: Anomaly Detection Using Adaptive Resonance Theory (ART)
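To show the mechanics behind this approach, here is a minimal Fuzzy ART sketch in Python. This is a simplified illustration of the ART family, not Hitachi’s implementation: inputs are assumed to be scaled to [0, 1], and for brevity the search checks only the single best-matching category rather than running ART’s full reset-and-retry loop.

```python
class FuzzyART:
    """Minimal Fuzzy ART sketch for anomaly detection.

    rho   -- vigilance: higher values mean tighter "normal" categories
    alpha -- choice parameter (small constant)
    beta  -- learning rate (1.0 = fast learning)
    """
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.weights = []                      # one weight vector per category

    @staticmethod
    def _code(x):
        # Complement coding I = (x, 1 - x) keeps |I| constant,
        # which prevents category proliferation.
        return list(x) + [1.0 - v for v in x]

    def _best(self, I):
        # Category with the highest choice value T_j = |I ^ w_j| / (alpha + |w_j|).
        # (Simplified: full ART resets and retries the next-best category.)
        best, best_T = None, -1.0
        for j, w in enumerate(self.weights):
            m = sum(min(a, b) for a, b in zip(I, w))
            T = m / (self.alpha + sum(w))
            if T > best_T:
                best, best_T = j, T
        return best

    def train(self, x):
        # Sequential learning: fold the input into a matching category,
        # or commit a new one -- no retraining from scratch.
        I = self._code(x)
        j = self._best(I)
        if j is not None:
            m = [min(a, b) for a, b in zip(I, self.weights[j])]
            if sum(m) / sum(I) >= self.rho:    # vigilance test passed
                self.weights[j] = [self.beta * a + (1 - self.beta) * b
                                   for a, b in zip(m, self.weights[j])]
                return j
        self.weights.append(I)                 # commit a new "normal" category
        return len(self.weights) - 1

    def is_anomaly(self, x):
        # An input whose best category fails the vigilance test does not
        # match any learned "normal" operating state.
        I = self._code(x)
        j = self._best(I)
        if j is None:
            return True
        m = sum(min(a, b) for a, b in zip(I, self.weights[j]))
        return m / sum(I) < self.rho
```

The sequential learning is what makes ART attractive here: as plant conditions drift, new “normal” categories can be committed on the fly, so readings like a stuck valve or drifting sensor still fail the vigilance test while routine fluctuations do not.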

The system was verified on a pilot plant for distillation towers, a key piece of equipment in crude oil refining. Even when the composition of the raw materials changed, anomalies such as malfunctioning flow-adjustment valves and sensor drift were detected.

For more details on this Hitachi case study, please check out the “High-efficiency Operational Support Technologies for Industrial Plants” paper from Hitachi’s Research and Development Group.

Summary

Analytic Modules not only simplify the development of intelligent IoT applications, but also provide a way to monetize one’s analytics capabilities by reusing the same modules across a multitude of IoT use cases or applications. For example, an anomaly detection module could be used across a number of different IoT use cases, as depicted in Figure 5.

Figure 5:  Monetizing an Anomaly Detection Analytic Module across several IoT use cases

Any improvement in the effectiveness of that particular analytic module immediately drives economic value to all the other use cases the module supports. When this happens, organizations are not only deriving and driving the economic value of their IoT data, they are deriving and driving the economic value of their IoT analytics.

As we found in our University of San Francisco research project on the economic value of data, we are only now beginning to understand how to monetize our digital assets through re-use, or as Adam Smith would say, “value in use” versus “value in exchange.”

By Bill Schmarzo

Sources:

Machine Learning Techniques for Anomaly Detection: An Overview – https://pdfs.semanticscholar.org/0278/bbaf1db5df036f02393679d485260b1daeb7.pdf

Anomaly detection using adaptive resonance theory – https://open.bu.edu/handle/2144/12205

[1] PMML (Predictive Model Markup Language) is an XML-based language that enables the definition and sharing of predictive models between applications. A predictive model is a statistical model that is designed to predict the likelihood of target occurrences given established variables or factors.

Bill Schmarzo

CTO, IoT and Analytics at Hitachi Vantara (aka “Dean of Big Data”)

Bill Schmarzo is the author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”. He has written white papers, is an avid blogger and is a frequent speaker on the use of big data and data science to power an organization’s key business initiatives. He is an Executive Fellow at the University of San Francisco School of Management (SOM), where he teaches the “Big Data MBA” course. Bill recently completed a research paper on “Determining The Economic Value of Data”, and Onalytica ranked him the #4 Big Data influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.
