Category Archives: Big Data

Mission Digital Transformation: Is Your Infrastructure Ready?

Mission Digital Transformation

Most enterprises today are developing or executing a digital strategy to transform their businesses. But what is digital transformation? In general, it’s the adoption of technology to deliver new products and experiences through digital channels, either to complement or, in some cases, replace physical interactions. Changing user expectations, new modes of engagement, and the need to improve speed and responsiveness are the main factors driving companies to update outdated processes and develop new applications. For the first time, even large enterprises are shifting the focus of technology from the back office to core elements of their brands in order to compete and keep pace with the market.

Who’s Ready to Fully Embrace Digital?

Keeping pace with the evolving digital marketplace requires not only increased innovation, but also updated systems, tools, and teams. In order to deliver on the promise of digital transformation, organizations must also modernize their infrastructure to support the increased speed, scale, and change that comes with it.

A recent survey on digital transformation readiness by SignalFx uncovered that companies of all sizes are investing in at least the fundamental stages of readiness: 79% are implementing or have already implemented a plan to optimize infrastructure to enable digital transformation.

And while 95% of IT Ops and DevOps practitioners indicated that their individual roles impact the success of their company’s digital transformation initiatives, the strategy for modernizing infrastructure ahead of digital transformation isn’t owned exclusively by IT or development management: half the time it is overseen by C-suite executives (CEO, CIO, CTO), indicating that operational preparedness may be understood as essential to success at the highest level of the business.

The infographic below outlines additional key findings.

Cashless Society Part 3 – Digital Wallets and More…

Digital Wallets and More…

To finish off our Cashless Society series, I want to look at the Fintech giants that are leading the digital money revolution. Whilst services like Apple Pay and Google Wallet have become more widely available, they haven’t quite taken off yet. They seem to offer the transition to the digital economy that we are told is all but inevitable, but they haven’t caught on in the way that, say, contactless cards have.

Jordan McKee, an analyst at 451 Research, commented that “Mobile wallets haven’t yet proven they are measurably better than incumbent payment mechanisms, which generally work quite well”. Avivah Litan, an analyst at Gartner, put the lack of uptake of digital wallets down to the ease of current systems:

“It’s incredibly easy to swipe or dip a credit or debit card at a payment terminal and U.S. consumers are used to this mature payment application where they know they are well protected from financial loss… It will take a lot of persuasion and financial incentives to get consumers to change their payment habits.”

Apple Pay

Apple Pay is built around contactless payment technology. It pulls your credit cards, debit cards, and other sensitive payment data from the Wallet app, enabling you to use an iPhone or Apple Watch like a contactless card at store checkouts.

Apple Pay is growing fast as well, with some experts commenting that it could well be Apple’s saviour. Users of Apple Pay completed more transactions in September 2016 than they did in the entire year of 2015, and on top of that, transaction volume was up 500% in the fourth quarter compared with the same quarter in 2015. Someone in Kensington, England, even used the service to pay for a 1964 Aston Martin DB5 worth over $1 million.

This growth can be partially attributed to the expansion of the service beyond the US and the UK to include Switzerland, Canada, Australia, China, France, Hong Kong, Singapore, Japan and Russia, with Spain soon to follow. Apple has also expanded the payment service to the web, enabling it to be used on mobile phones and desktop computers through Safari, as well as within apps like Uber or Starbucks. According to CEO Tim Cook, hundreds of thousands of websites are now Apple Pay ready.

Google Wallet/Android Pay

Android Pay has been developed by Google to power NFC (Near Field Communication) payments with phones, tablets, and watches, as a rival to Apple Pay. At the moment it is available only in the US, UK, Singapore, Hong Kong and Australia – lagging behind Apple on the availability of the service – though it is rumoured to be launching in Canada in the near future! Google has also benefitted from the expansion of MasterPass to cover Google Wallet transactions online, expanding the service’s coverage and viability as an alternative to Apple Pay.

Android Pay is available to use, in the countries where it operates, nearly everywhere that Apple Pay is (though you might not see branding in quite the same way), and it has a major bonus in that you can collect rewards for purchases, unlike with Apple Pay.

These digital wallets operate under varied circumstances, but the premise and underlying goals remain similar. Yet, despite their adoption by major providers, there are still alternatives that are being implemented by retailers and businesses.

Retailer Alternatives

Aside from all the fanfare around mega-investments in smartphone NFC from Apple, Samsung and Google, Starbucks, Dunkin’ Donuts and Walmart Pay let customers pay using a QR code displayed on a smartphone – a much more cost-effective alternative. Starbucks customers spent an estimated $3 billion using the Starbucks app, though the success of apps of this nature can be partially attributed to the customer loyalty that the apps build with vouchers and offers for users.

Nitesh Patel, an analyst at Strategy Analytics, suggested that this could be the main reason for their success over digital wallets: “so far, mobile wallets, particularly NFC, have yet to integrate payments with loyalty in a compelling way… You need a single tap to redeem or accumulate points and coupons”. Ultimately, the frills of the service are what will sell it to the general public, and digital wallets just don’t have those frills yet (especially Apple Pay, though it makes up for it somewhat in its widespread adoption).
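
Mechanically, the QR model is simple, which is part of its cost advantage: the app just renders a short-lived payment token as an image for the register’s scanner. Here is a minimal Python sketch of that pattern using the third-party qrcode package; the token format and customer ID are invented for illustration, as real apps use server-issued, expiring tokens tied to a stored payment method.

```python
# Sketch of the QR-code payment pattern used by retailer apps: encode a
# short-lived payment token as a QR code for the register to scan.
# The token format below is hypothetical.
import secrets

import qrcode  # pip install qrcode[pil]

def make_payment_qr(customer_id):
    token = f"{customer_id}:{secrets.token_hex(8)}"  # stand-in token
    return qrcode.make(token)

img = make_payment_qr("customer-42")
img.save("pay.png")  # shown full-screen to the point-of-sale scanner
```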

Ultimately, we are still very early in the transition to a cashless society. The technology is all but there, but the infrastructure and cultural acceptance haven’t quite caught up. It isn’t yet clear whether the digital wallet market will remain open and competitive, or whether it will become an Android vs Apple battle. We shall simply have to wait and see who establishes themselves as the frontrunner.

By Josh Hamilton

Part 2 – Connected Vehicles: Paving the Way for IoT on Wheels

Connected Vehicles: IoT on Wheels

As vehicles become the hottest “thing” in IoT, the automotive, heavy equipment, and machinery industries face some of the most significant opportunities in decades.

I’ve previously explored some connected car use cases and the opportunities and challenges that need to be considered when developing a monetization strategy.

Specifically in Part 1, I covered the four main business offering categories: 1) Transportation as a Service; 2) Post-Sale/Lease Secondary Services; 3) Road Use Measurement Services; and 4) Secondary Data Stream Monetization.

Here, in Part 2, I offer further guidelines, best practices, and guardrails to maximize the commercial success of the major players in this technology marketplace.

Who are the main players?

So who are they? Many systems and vendors belong in the back-office stack and are required to make a connected car initiative successful. The following, however, are the lead players already on the front line, offering monetized connected car services of all types to a growing market.

  • Original Equipment Manufacturers (OEMs) – The actual makers of vehicles themselves stand front and center in the connected car world, arguably better positioned than anyone to realize secondary monetization potential from a captive audience with a high propensity for brand loyalty. Examples include household names: General Motors, Audi, BMW, Subaru, Caterpillar, Komatsu, and John Deere.
  • Third-Party Device Manufacturers – These devices often connect to vehicle systems using the existing capabilities of On-Board Diagnostic (OBD) ports, which have been mandated since 1996 (a minimal example of reading the OBD port follows this list). Along with the connectivity offered by WiFi, cellular networks and Bluetooth, this group includes entities offering everything from aftermarket telematics devices to personalization and safety systems like Verizon Hum, and mobile phones themselves. Ubiquitous mobility providers Google and Apple loom heavily within this broad group of players as they determine exactly how to stake their ground.
  • Third-Party Service Providers – Included here is any service provider that is largely agnostic to a car or the device’s manufacturer. Insurance and maintenance providers are perhaps the most well-known members of this group. However, the broadest definition of this category also includes any service not directly offered by OEMs or third-party devices themselves: the popular Waze mapping app and the fleet management service offered by companies like WEX are two examples.
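
To make the OBD route concrete, here is a hedged Python sketch of querying vehicle speed through an ELM327-style OBD-II adapter over a serial link. The serial port name and baud rate are assumptions about a typical USB adapter; mode 01 / PID 0D is the standard OBD-II “vehicle speed” parameter.

```python
# Hedged sketch: read vehicle speed from an ELM327-compatible OBD-II
# adapter. Port name and baud rate are assumptions for a typical USB
# dongle; mode 01 / PID 0D is the standard "vehicle speed" parameter.
import serial  # pip install pyserial

def read_speed_kmh(port="/dev/ttyUSB0", baud=38400):
    with serial.Serial(port, baud, timeout=1) as adapter:
        adapter.write(b"010D\r")        # request mode 01, PID 0D
        raw = adapter.read_until(b">")  # ELM327 ends replies with '>'
        # A typical reply: "41 0D 3C" (mode+0x40, PID, one data byte)
        parts = raw.decode(errors="ignore").split()
        if "41" in parts and "0D" in parts:
            return int(parts[parts.index("0D") + 1], 16)  # km/h
        return None

if __name__ == "__main__":
    print(read_speed_kmh())
```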

Where are the opportunities and challenges?

Below are some high-level challenges and opportunities, with some real-life examples:

OEMs Monetizing Transportation as a Service

  • Opportunities – Viable opportunities differ depending on the exact OEM and the exact TaaS model. For example, luxury brands such as Audi, BMW, and Bentley are best positioned to offer services which benefit from brand affinity and prestige. OEMs with a broader install base and lower ASPs are better positioned to benefit from group and peer-to-peer sharing models (e.g. the “airport sharing” component of Ford’s new Ford Pass offering).
  • Challenges – Existing system infrastructure for selling/leasing cars was designed for no more than two names on a title or lease and must be dramatically enhanced or replaced. Managing vehicle inventory for subscription or on-demand access requires fleet management strategies and systems that must be tightly tailored to individual marketplaces in terms of geography, demographics, and days/times. Effective and comprehensive fleet management direct to consumers, perfected by entities like Zipcar, remains elusive to OEMs entering the game, and will require a significant amount of trial-and-error — potentially resulting in the disintermediation of entrenched dealer networks.

Aftermarket Device Manufacturers Monetizing Secondary On-Board Services

  • Opportunities – Most cars on the road now (and for a few years to come) do not benefit from OEM-provided connected services, but are being driven by consumers who will demand them nonetheless in what can be called a ‘retrofit’ model. Due to both the long lifespan of vehicles and the extensive amount of time it takes any OEM to go from concept to production, the aftermarket is inherently positioned to be far more agile and much faster to market. Aftermarket devices are far more natively bound to drivers than to vehicles, so the services can move from one car to another as desired.
  • Challenges – OEMs are natively positioned to deliver these offerings at the point of initial sale/lease, even to the point of treating them as ‘loss leaders’ and effectively giving them away in order to incent the vehicle purchase itself, or making money solely on the accompanying service subscription that enables their ‘embedded’ devices. Aftermarket devices, by contrast, require stickiness to encourage an additional purchase and/or relationship on the part of the customer, above and beyond what they are already paying for the vehicle.

Third-Party Service Providers Monetizing Road Use Measurement Services

  • Opportunities – Insurance companies are already leveraging the available data streams from embedded OBD systems to provide usage-based insurance to infrequent drivers (potentially opening up a new market segment), or to reward safe drivers with additional discounts on their premiums. Government entities, suffering from declining fuel tax revenues due to more efficient (and non-fuel-consuming electric) vehicles, are looking to compensate with direct taxation models based on actual public road use, using data streams from embedded or ‘add-on’ telematics devices (e.g. the trial ‘OReGO’ program underway in Oregon). Unlike many other on-board services which require an ‘always connected’ or ‘almost always connected’ state in order to work properly, most Road Use Measurement Services work with just intermittent connectivity by using a data ‘store-and-forward’ model, thus removing the cost of and dependency on cellular or satellite network providers (a minimal sketch of this model follows this list).
  • Challenges – Effective market penetration will likely require ‘device agnosticism’, which will in turn require sophisticated data stream management platforms capable of organizing and transforming data from myriad sources and formats. Third-party device manufacturers may opt to offer their own (i.e. proprietary) accompanying secondary services, which may supplant the possibility of a vibrant ‘open’ market in which device-agnostic service providers can play.
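
Here is a minimal Python sketch of the store-and-forward idea, assuming nothing beyond the standard library: road-use records accumulate in a local SQLite buffer whether or not the vehicle is connected, and an opportunistic flush marks them sent once an upload succeeds. The upload function is a hypothetical stand-in for a real HTTP call to a tax authority or insurer.

```python
# Store-and-forward sketch: buffer road-use records locally, upload
# opportunistically. Standard library only; upload() below is a
# hypothetical stand-in for a real HTTP POST.
import json
import sqlite3
import time

db = sqlite3.connect("roaduse.db")
db.execute("CREATE TABLE IF NOT EXISTS trips"
           " (ts REAL, km REAL, sent INTEGER DEFAULT 0)")

def record_trip(km):
    # Runs continuously on the telematics unit, connected or not.
    db.execute("INSERT INTO trips (ts, km) VALUES (?, ?)", (time.time(), km))
    db.commit()

def flush(upload):
    # Called whenever connectivity appears; resends anything unsent.
    for rowid, ts, km in db.execute(
            "SELECT rowid, ts, km FROM trips WHERE sent = 0").fetchall():
        if upload(json.dumps({"ts": ts, "km": km})):  # True on success
            db.execute("UPDATE trips SET sent = 1 WHERE rowid = ?", (rowid,))
    db.commit()

record_trip(12.4)
flush(upload=lambda payload: True)  # pretend the network is up
```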

OEMs Monetizing Secondary Data Streams

  • Opportunities – The sensors on vehicles can produce data streams that the OEM may choose not to share with third parties. This data may be used to provide ongoing revenue streams to dealers. On a granular level, this data can be used to build driver profiles, which in turn can drive individualized direct marketing efforts for available add-on services, or the suggested purchase of a more suitable next vehicle when the time comes. On an aggregate level, this data can be sold to third parties for subsequent commercial or informational purposes.
  • Challenges – OEMs tend to possess neither the mindset nor the infrastructure required to think and act like purveyors of data, as they are historically purveyors of steel. Consumers are also ever more wary of implicit data collection, from both a privacy and a security standpoint.

Regardless of the kind of business or the offerings, all the players are working to create new revenue streams from connected services and monetization alternatives that are tied to the customer, rather than from traditional purchasing and leasing streams that are tied to the product. The winners will be the ones that provide the best customer experience.

The road to success has its share of potholes. The biggest ones have little to do with the technology itself and everything to do with constructing a viable business model (along with processes and supporting technologies) in which monetization and recurring revenue streams are profitable.

While we aren’t sure where the next big buck will come from – inside the vehicle or outside of it – we are clear that the IoT is igniting new revenue tracks for the automotive industry. The themes and scenarios offered here, which are nearly universally applicable across all potential connected car go-to-market efforts, can help players in the category on their journey towards IoT nirvana.

By Tom Dibble

Microsoft Releases Beta of Microsoft Cognitive Toolkit For Deep Learning Advances

Microsoft Cognitive Toolkit

Microsoft has released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and NVIDIA® GPUs.

The toolkit, previously known as CNTK, was initially developed by computer scientists at Microsoft who wanted a tool to do their own research more quickly and effectively. It quickly moved beyond speech and morphed into an offering that customers, including a leading international appliance maker and Microsoft’s flagship product groups, depend on for a wide variety of deep learning tasks.

“We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.

The latest version of the toolkit, which is available on GitHub via an open source license, includes new functionality that lets developers use the Python or C++ programming languages in working with the toolkit. With the new version, researchers also can do a type of artificial intelligence work called reinforcement learning.
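
The announcement doesn’t include code, but to give a flavour of the new Python bindings, here is a minimal sketch of training a tiny classifier with the CNTK 2.x Python API; the network shape and toy data are invented for illustration, so treat it as a sketch rather than official sample code.

```python
# Minimal CNTK 2.x Python sketch: train a one-layer binary classifier
# on toy data. Shapes and data are invented for illustration.
import numpy as np
import cntk as C

x = C.input_variable(2)
y = C.input_variable(1)

model = C.layers.Dense(1, activation=C.sigmoid)(x)
loss = C.binary_cross_entropy(model, y)
trainer = C.Trainer(model, (loss, loss), [C.sgd(model.parameters, lr=0.1)])

for _ in range(500):
    data = np.random.rand(32, 2).astype(np.float32)
    labels = (data.sum(axis=1, keepdims=True) > 1.0).astype(np.float32)
    trainer.train_minibatch({x: data, y: labels})

print("final minibatch loss:", trainer.previous_minibatch_loss_average)
```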

Finally, the toolkit is able to deliver better performance than previous versions. It’s also faster than other toolkits, especially when working on big datasets across multiple machines. That kind of large-scale deployment is necessary to do the type of deep learning across multiple GPUs that is needed to develop consumer products and professional offerings…

Read Full Article: Microsoft

A President’s Trove of Data

Then vs Now

According to popular opinion, today’s information age affords teens more information than some world leaders had access to 20 years ago. C+R Research has put this hypothesis through its paces, comparing access to information across areas such as private data, classified information, genetics, public opinion, and more, and finds that in many ways the average smartphone user does, in fact, have access to a lot more information than those with the highest clearance had two decades ago. However, the accuracy and quality of the data available don’t necessarily compare.

Critical Information vs. the Non-Essentials

C+R Research finds that just about any 13-year-old with a smartphone in 2016 would beat President Bill Clinton’s 1996 intelligence and access in areas such as traffic data, music, trivia, opinion, and even genetics. But then, the president of the United States might not have time to listen to the 30 million songs immediately accessible via Spotify, nor would Air Force One likely be constrained by the same traffic limitations as the rest of us. Of course, political campaign teams of 20 years ago would drool over the polling possibilities that Twitter offers today.

On the other hand, President Clinton would have had better access to classified information, data from satellites, and research journals, as well as access to private data – though there are rules governing such access, some very important people tend to be ‘incorporated’ into those regulations. Happily, or unhappily depending on how much privacy you desire, tracking of family members via the Secret Service in 1996 was about as proficient as the smartphone apps we use today to monitor friends and family.

In the end, the 13-year-old wins 7 to 5 in the most general terms, but it’s important to recognize that the broad scope of information available today doesn’t necessarily mean accurate or significant information, two traits President Clinton could be sure of.

By Jennifer Klostermann

Blockchain and the IoT

IoT Blockchain

Blockchain, also known as Distributed Ledger Technology (DLT), is the innovative technology behind Bitcoin. The impact of Bitcoin has been tremendous and, as with any revolutionary technology, it was treated initially with awe and apprehension. Since its open source release back in 2009, Bitcoin has become a transformative force in the global payments system, establishing itself without the aid or support of the traditional financial infrastructure. While initial usage saw huge success in black markets, Bitcoin defied the odds, and the blockchain technology behind it spawned other cryptocurrencies, exchanges, commercial ventures, alliances, consortiums, investments, and uptake by governments, merchants, and financial services worldwide.
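
The core mechanism is worth seeing once. Here is a toy Python sketch (not Bitcoin’s actual implementation, which adds proof-of-work, signatures, and peer-to-peer consensus on top) showing the property that makes a distributed ledger trustworthy: each block commits to its predecessor by hash, so tampering with history is immediately detectable.

```python
# Toy hash-chained ledger: each block commits to its predecessor, so
# altering any historical record invalidates every later block. Real
# blockchains add proof-of-work, signatures, and consensus on top.
import hashlib
import json

def make_block(prev_hash, payload):
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    for earlier, later in zip(chain, chain[1:]):
        intact = make_block(earlier["prev"], earlier["payload"])["hash"]
        if earlier["hash"] != intact or later["prev"] != earlier["hash"]:
            return False
    return True

chain = [make_block("0" * 64, "alice pays bob 5")]
chain.append(make_block(chain[-1]["hash"], "bob pays carol 2"))
print(verify(chain))                        # True
chain[0]["payload"] = "alice pays bob 500"  # tamper with history
print(verify(chain))                        # False
```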

On August 12, the World Economic Forum (WEF) published a report on the future of the financial infrastructure, and in particular on the transformative role that blockchain technology is set to play. Notably, it analyzes the technology’s impact on the financial services industry and how it can provide more transparency and security. Potential use cases are examined, including deposits and lending, insurance, capital raising, investment management, and market provisioning. The report also looks at the current challenges to a widespread implementation of blockchain, many of which will require international legal frameworks, harmonized regulatory environments, and global standardization efforts.

DLT is already having a serious impact on the financial services industry. The WEF report states that 80% of banks will initiate a DLT project by next year, and more than $1.4 billion has already been invested in the technology in the past three years. More than that, governments and law firms are seriously staking their claim in advancing the technology. Law firm Steptoe & Johnson LLP recently announced the expansion of its Blockchain Team into a multidisciplinary practice involving FinTech, financial services, regulatory, and law enforcement knowledge. The firm is also one of the founders of the Blockchain Alliance, a coalition of blockchain companies and law enforcement and regulatory agencies, alongside the U.S. Chamber of Digital Commerce and Coin Center. This expansion is an endorsement of the potential of DLT, within and eventually beyond financial services.

(Blockchain landscape, 2016. Image source: Startupmanagement.org)

The possible applications of blockchain are already being explored in numerous new sectors: energy, transportation, intellectual property, regulation and compliance, international trade, law enforcement, and government affairs, among many others. Ethereum is one blockchain endeavor that features smart contract functionality. The distributed computing platform provides a decentralized virtual machine to execute peer-to-peer contracts using the Ether cryptocurrency. The Ether Hack Camp is launching a four-week hackathon in November 2016 for DLT using Ether. Currently, the Camp is asking developers to propose ideas to the public; registered fans will vote on them, and those selected will be able to take part in the hackathon. The ideas can already be seen online and are vast and varied, ranging from academic publishing without journals to music licensing reform, decentralized ISPs, voting on the blockchain, alternative dispute resolution, and a rural land register. The idea winning first place in November will collect $50,000 USD.

IBM is one of the most dynamic forerunners currently pushing DLT for the IoT. The firm just announced it is investing $200 million in blockchain technology to drive forward its Watson IoT efforts. It is opening a new office in Germany, which will serve as headquarters for new blockchain initiatives. The investment is part of the $3 billion that IBM pledged to develop Watson’s cognitive computing for the IoT. The goal of the new investment is to enable companies to share IoT data in a private blockchain. A commercial implementation is already underway with Finnish company Kouvola Innovation, which wants to integrate its capabilities into the IBM Blockchain and link devices for tracking, monitoring, and reporting on shipping container status and location, optimizing packing, and transfer of shipments.

IBM is working hard to align its IoT, AI and blockchain technologies through Watson. The new headquarters in Germany will be home to Cognitive IoT Collaboratories for researchers, developers and engineers.

Many of IBM’s current projects are developed leveraging the open source Hyperledger Project fabric, from a consortium founded by the Linux Foundation in which IBM is a significant contributor, alongside Cisco and Intel. IBM pushed its involvement even further with the June launch of its New York-based Bluemix Garage, the idea being to give developers and researchers the opportunity to use IBM Cloud APIs and blockchain technologies to drive cognitive, IoT, unstructured data, and social media technology innovation. Just one month after that launch, IBM announced a cloud service for companies running blockchain technology. The cloud service is underpinned by IBM’s LinuxONE technology, which is specifically designed to meet the security requirements of critical sectors such as financial services, healthcare, and government.

The potential for DLT is certainly broad and rather long-term, but the engagement by the financial services industry is a testament to that potential. While FinTech remains the big focus for blockchain technologies, its success will drive the use of DLT in other areas. The promise of blockchain is to deliver accountability and transparency, although this could be disrupted significantly if announcements such as the one made by Accenture on an ‘editable’ blockchain become a reality. While banks may welcome the feature, it would be a serious blow not only to the integrity but also to the security of blockchain technology.

By Michela Menting

Great Cloud Platforms Need to Win the Hearts and Minds of Developers First

Great Cloud Platforms 

Adoption of cloud computing services is growing exponentially all around the world. Companies are realizing that so much of the hard, expensive work that they used to have to do internally can now be outsourced to cloud providers, allowing the companies to focus on what it is that they do best. That’s the reason why tech research firm Gartner projects that over the next five years, the shift to the cloud is looking to be a US$1-trillion market.

Everything from running payrolls, to marketing, logistics, data analysis and much, much more is moving to the cloud, and one of the most successful uses of the cloud is the concept of Platform-as-a-Service (PaaS, as it is known). What this does is enable customers to develop, run and manage their own applications without having to invest heavily in the infrastructure required in order to develop and launch a web application.

The key to creating a great platform is winning the hearts and minds of web developers so that they choose to build on it. SAP, the world’s largest enterprise cloud company with over 320,000 customers and over 110 million cloud users in 190 countries, is using its extensive experience and knowledge in the business space to offer the SAP HANA Cloud Platform, a remarkable service for companies of all sizes. This platform is already being used extensively by developers who are creating apps for their customers or their various organizations and employees.

The SAP HANA Cloud Platform enables developers to build business applications in the cloud quickly and easily.

Three features of this platform stand out:

  1. its ability to extend your cloud and on-premise applications to develop customized hybrid solutions,
  2. the awesome feature allowing you to integrate applications seamlessly and securely to synchronize data and processes across cloud, on-premise and third-party applications, as well as
  3. the core feature which allows you to build new enterprise-ready applications rapidly with an open standards platform that brings out the best in developers.

The Director of Group Software at the Danone Group, Ralf Steinbach, says that “with SAP HANA Cloud Platforms, we can quickly develop beautiful, user-friendly applications that are opening new opportunities to connect our customers directly to our back-end systems.”

Cloud services are a rapidly expanding market, and research indicates there are over 150 PaaS offerings to choose from. Too often companies simply choose the PaaS of a cloud-service provider that they’re already working with, without exploring the offerings in-depth and with a long-term focus.

According to John Rymer of Forrester Research, there are three types of developers who make use of PaaS offerings to build apps:

  1. Coders, who want the ability to do it all themselves,
  2. DevOps developers who want the ability to do some coding if they need to but can also plug into some level of abstraction, and
  3. RapidDevs who don’t want to code at all but just to configure a task to the capabilities of the platform.

For each of these types of developers, the SAP HANA Cloud Platform can deliver thanks to its flexibility, requiring fewer skills and coming in at a lower cost. That flexibility extends to the choices customers are offered between a private, managed cloud, a public pay-as-you-go model, or even public cloud infrastructure-as-a-service or platform-as-a-service.

In order for a platform to survive and thrive, it requires developers to regard it as the best choice for what they have to do on a daily basis: easily and quickly deploy applications that leverage a proven in-memory platform for next generation applications and analytics supported by a world-class technical team at every step of the way.

A great way to get started with SAP HANA Cloud Platform is with the user-based packages. Priced per user, they offer the flexibility to choose the package that best fits your needs. You can get started for as little as $25 / user / month, and scale as you go, adding more users or upgrading to add more resources when you need them.

For a limited time, you can get 30% off SAP HANA Cloud Platform user-based packages on the SAP Store by using the promo code HCP30.
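
Using only the figures quoted above ($25 per user per month, 30% off with the promo code), a quick back-of-envelope calculation looks like this; the prices are the article’s, the function is just arithmetic.

```python
# Back-of-envelope cost for the user-based packages, using the prices
# quoted in this article: $25/user/month, 30% promotional discount.
def monthly_cost(users, per_user=25.0, discount=0.30):
    return users * per_user * (1 - discount)

print(monthly_cost(10))  # 175.0 USD/month for a 10-user team
```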

Sponsored spotlight series by SAP

By Jeremy Daniel

The Intelligent Industrial Revolution

AI Revolution

Prefatory Note: Over the past six weeks, we took NVIDIA’s developer conference on a world tour. The GPU Technology Conference (GTC) was started in 2009 to foster a new approach to high performance computing using massively parallel processing GPUs. GTC has become the epicenter of GPU deep learning — the new computing model that sparked the big bang of modern AI. It’s no secret that AI is spreading like wildfire. The number of GPU deep learning developers has leapt 25 times in just two years. Some 1,500 AI startups have cropped up. This explosive growth has fueled demand for GTCs all over the world. So far, we’ve held events in Beijing, Taipei, Amsterdam, Tokyo, Seoul and Melbourne. Washington is set for this week and Mumbai next month. I kicked off four of the GTCs. Here’s a summary of what I talked about, what I learned and what I see in the near future as AI, the next wave in computing, revolutionizes one industry after another.

A New Era of Computing

Intelligent machines powered by AI computers that can learn, reason and interact with people are no longer science fiction. Today, a self-driving car powered by AI can meander through a country road at night and find its way. An AI-powered robot can learn motor skills through trial and error. This is truly an extraordinary time. In my three decades in the computer industry, none has held more potential, or been more fun. The era of AI has begun.

Our industry drives large-scale industrial and societal change. As computing evolves, new companies form, new products are built, our lives change. Looking back at the past couple of waves of computing, each was underpinned by a revolutionary computing model, a new architecture that expanded both the capabilities and reach of computing.

In 1995, the PC-Internet era was sparked by the convergence of low-cost microprocessors (CPUs), a standard operating system (Windows 95), and a new portal to a world of information (Yahoo!). The PC-Internet era brought the power of computing to about a billion people and realized Microsoft’s vision to put “a computer on every desk and in every home.” A decade later, the iPhone put “an Internet communications device” in our pockets. Coupled with the launch of Amazon’s AWS, the Mobile-Cloud era was born. A world of apps entered our daily lives and some 3 billion people enjoyed the freedom that mobile computing afforded.

Today, we stand at the beginning of the next era, the AI computing era, ignited by a new computing model, GPU deep learning. This new model — where deep neural networks are trained to recognize patterns from massive amounts of data — has proven to be “unreasonably” effective at solving some of the most complex problems in computer science. In this era, software writes itself and machines learn. Soon, hundreds of billions of devices will be infused with intelligence. AI will revolutionize every industry.

GPU Deep Learning “Big Bang”

Why now? As I wrote in an earlier post (“Accelerating AI with GPUs: A New Computing Model”), 2012 was a landmark year for AI. Alex Krizhevsky of the University of Toronto created a deep neural network that automatically learned to recognize images from 1 million examples. With just several days of training on two NVIDIA GTX 580 GPUs, “AlexNet” won that year’s ImageNet competition, beating all the human expert algorithms that had been honed for decades. That same year, recognizing that the larger the network, or the bigger the brain, the more it can learn, Stanford’s Andrew Ng and NVIDIA Research teamed up to develop a method for training networks using large-scale GPU-computing systems.

The world took notice. AI researchers everywhere turned to GPU deep learning. Baidu, Google, Facebook and Microsoft were the first companies to adopt it for pattern recognition. By 2015, they started to achieve “superhuman” results — a computer can now recognize images better than we can. In the area of speech recognition, Microsoft Research used GPU deep learning to achieve a historic milestone by reaching “human parity” in conversational speech.

Image recognition and speech recognition — GPU deep learning has provided the foundation for machines to learn, perceive, reason and solve problems. The GPU started out as the engine for simulating human imagination, conjuring up the amazing virtual worlds of video games and Hollywood films. Now, NVIDIA’s GPU runs deep learning algorithms, simulating human intelligence, and acts as the brain of computers, robots and self-driving cars that can perceive and understand the world. Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture. Two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for deep learning, and NVIDIA is increasingly known as “the AI computing company.”

An End-to-End Platform for a New Computing Model

As a new computing model, GPU deep learning is changing how software is developed and how it runs. In the past, software engineers crafted programs and meticulously coded algorithms. Now, algorithms learn from tons of real-world examples — software writes itself. Programming is about coding instruction. Deep learning is about creating and training neural networks. The network can then be deployed in a data center to infer, predict and classify from new data presented to it. Networks can also be deployed into intelligent devices like cameras, cars and robots to understand the world. With new experiences, new data is collected to further train and refine the neural network. Learnings from billions of devices make all the devices on the network more intelligent. Neural networks will reap the benefits of both the exponential advance of GPU processing and large network effects — that is, they will get smarter at a pace way faster than Moore’s Law.

Whereas the old computing model is “instruction processing” intensive, this new computing model requires massive “data processing.” To advance every aspect of AI, we’re building an end-to-end AI computing platform — one architecture that spans training, inference and the billions of intelligent devices that are coming our way.

Let’s start with training. Our new Pascal GPU is a $2 billion investment and the work of several thousand engineers over three years. It is the first GPU optimized for deep learning. Pascal can train networks that are 65 times larger, or train them 65 times faster, than the Kepler GPU that Alex Krizhevsky used in his paper. A single computer of eight Pascal GPUs connected by NVIDIA NVLink, the highest throughput interconnect ever created, can train a network faster than 250 traditional servers.

Soon, the tens of billions of internet queries made each day will require AI, which means that each query will require billions more math operations. The total load on cloud services will be enormous to ensure real-time responsiveness. For faster data center inference performance, we announced the Tesla P40 and P4 GPUs. P40 accelerates data center inference throughput by 40 times. P4 requires only 50 watts and is designed to accelerate 1U OCP servers, typical of hyperscale data centers. Software is a vital part of NVIDIA’s deep learning platform. For training, we have CUDA and cuDNN. For inferencing, we announced TensorRT, an optimizing inferencing engine. TensorRT improves performance without compromising accuracy by fusing operations within a layer and across layers, pruning low-contribution weights, reducing precision to FP16 or INT8, and many other techniques.
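
TensorRT itself is proprietary, but the reduced-precision idea it exploits is easy to demonstrate. Below is a generic numpy sketch of symmetric INT8 quantization (my illustration, not NVIDIA’s calibration algorithm): weights are mapped onto 8-bit integers plus one scale factor, shrinking memory and arithmetic cost at a small, measurable loss of precision.

```python
# Generic symmetric INT8 quantization sketch (an illustration of the
# precision-reduction idea behind TensorRT's INT8 mode, not NVIDIA's
# actual calibration algorithm).
import numpy as np

def quantize_int8(weights):
    scale = np.abs(weights).max() / 127.0  # map observed range onto int8
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"max reconstruction error: {err:.4f}")  # small relative to weights
```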

Someday, billions of intelligent devices will take advantage of deep learning to perform seemingly intelligent tasks. Drones will autonomously navigate through a warehouse, find an item and pick it up. Portable medical instruments will use AI to diagnose blood samples onsite. Intelligent cameras will learn to alert us only to the circumstances that we care about. We created an energy-efficient AI supercomputer, Jetson TX1, for such intelligent IoT devices. A credit card-sized module, Jetson TX1 can reach 1 TeraFLOP FP16 performance using just 10 watts. It’s the same architecture as our most powerful GPUs and can run all the same software.

In short, we offer an end-to-end AI computing platform — from GPU to deep learning software and algorithms, from training systems to in-car AI computers, from cloud to data center to PC to robots. NVIDIA’s AI computing platform is everywhere.

AI Computing for Every Industry

Our end-to-end platform is the first step to ensuring that every industry can tap into AI. The global ecosystem for NVIDIA GPU deep learning has scaled out rapidly. Breakthrough results triggered a race to adopt AI for consumer internet services — search, recognition, recommendations, translation and more. Cloud service providers, from Alibaba and Amazon to IBM and Microsoft, make the NVIDIA GPU deep learning platform available to companies large and small. The world’s largest enterprise technology companies have configured servers based on NVIDIA GPUs. We were pleased to highlight strategic announcements along our GTC tour to address major industries:

AI Transportation: At $10 trillion, transportation is a massive industry that AI can transform. Autonomous vehicles can reduce accidents, improve the productivity of trucking and taxi services, and enable new mobility services. We announced that both Baidu and TomTom selected NVIDIA DRIVE PX 2 for self-driving cars. With each, we’re building an open “cloud-to-car” platform that includes an HD map, AI algorithms and an AI supercomputer.

Driving is a learned behavior that we do as second nature, yet one that is impossible to program a computer to perform. Autonomous driving requires every aspect of AI — perception of the surroundings, reasoning to determine the conditions of the environment, planning the best course of action, and continuously learning to improve our understanding of the vast and diverse world. The wide spectrum of autonomous driving requires an open, scalable architecture — from highway hands-free cruising, to autonomous drive-to-destination, to fully autonomous shuttles with no drivers.

NVIDIA DRIVE PX 2 is a scalable architecture that can span the entire range of AI for autonomous driving. At GTC, we announced DRIVE PX 2 AutoCruise designed for highway autonomous driving with continuous localization and mapping. We also released DriveWorks Alpha 1, our OS for self-driving cars that covers every aspect of autonomous driving — detection, localization, planning and action.

We bring all of our capabilities together into our own self-driving car, NVIDIA BB8.

NVIDIA is focused on innovation at the intersection of visual processing, AI and high performance computing — a unique combination at the heart of intelligent and autonomous machines. For the first time, we have AI algorithms that will make self-driving cars and autonomous robots possible. But they require a real-time, cost-effective computing platform.

At GTC, we introduced Xavier, the most ambitious single-chip computer we have ever undertaken — the world’s first AI supercomputer chip. Xavier packs 7 billion transistors — more complex than the most advanced server-class CPU. Miraculously, Xavier has the equivalent horsepower of DRIVE PX 2, launched at CES earlier this year — 20 trillion operations per second of deep learning performance — at just 20 watts. As Forbes noted, we doubled down on self-driving cars with Xavier.

  • AI Enterprise: IBM, which sees a $2 trillion opportunity in cognitive computing, announced a new POWER8 and NVIDIA Tesla P100 server designed to bring AI to the enterprise. On the software side, SAP announced that it has received two of the first NVIDIA DGX-1 supercomputers and is actively building machine learning enterprise solutions for its 320,000 customers in 190 countries.
  • AI City: There will be 1 billion cameras in the world in 2020. Hikvision, the world leader in surveillance systems, is using AI to help make our cities safer. It uses DGX-1 for network training and has built a breakthrough server, called “Blade,” based on 16 Jetson TX1 processors. Blade requires 1/20 the space and 1/10 the power of the 21 CPU-based servers of equivalent performance.
  • AI Factory: There are 2 billion industrial robots worldwide. Japan is the epicenter of robotics innovation. At GTC, we announced that FANUC, the Japan-based industrial robotics giant, will build the factory of the future on the NVIDIA AI platform, from end to end. Its deep neural network will be trained with NVIDIA GPUs, GPU-powered FANUC Fog units will drive a group of robots and allow them to learn together, and each robot will have an embedded GPU to perform real-time AI. MIT Tech Review wrote about it in its story “Japanese Robotics Giant Gives Its Arms Some Brains.”
  • The Next Phase of Every Industry: GPU deep learning is inspiring a new wave of startups — 1,500+ around the world — in healthcare, fintech, automotive, consumer web applications and more. Drive.ai, which was recently licensed to test its vehicles on California roads, is tackling the challenge of self-driving cars by applying deep learning to the full driving stack. Preferred Networks, the Japan-based developer of the Chainer framework, is developing deep learning solutions for IoT. Benevolent.ai, based in London and one of the first recipients of DGX-1, is using deep learning for drug discovery to tackle diseases like Parkinson’s, Alzheimer’s and rare cancers. According to CB Insights, funding for AI startups hit over $1 billion in the second quarter, an all-time high.

The explosion of startups is yet another indicator of AI’s sweep across industries. As Fortune recently wrote, deep learning will “transform corporate America.”  

AI for Everyone

AI can solve problems that seemed well beyond our reach just a few years back. From real-world data, computers can learn to recognize patterns too complex, too massive or too subtle for hand-crafted software or even humans. With GPU deep learning, this computing model is now practical and can be applied to solve challenges in the world’s largest industries. Self-driving cars will transform the $10 trillion transportation industry. In healthcare, doctors will use AI to detect disease at the earliest possible moment, to understand the human genome to tackle cancer, or to learn from the massive volume of medical data and research to recommend the best treatments. And AI will usher in the 4th industrial revolution — after steam, mass production and automation — intelligent robotics will drive a new wave of productivity improvements and enable mass consumer customization. AI will touch everyone. The era of AI is here.

Syndicated article courtesy of Nvidia

By Jen-Hsun Huang

Jen-Hsun Huang founded NVIDIA in 1993 and has served since its inception as president, chief executive officer and a member of the board of directors.

NVIDIA invented the GPU in 1999 and, from its roots as a PC graphics company, has gone on to become the world leader in AI computing.
