Category Archives: Technology

Digital Twin And The End Of The Dreaded Product Recall

The Digital Twin 

How smart factories and connected assets in the emerging Industrial IoT era, along with the automation of machine learning and advances in artificial intelligence, can dramatically change the manufacturing process and put an end to the dreaded product recall.

In recent news, Samsung Electronics Co. initiated a global recall of 2.5 million of its Galaxy Note 7 smartphones after finding that the batteries of some of the phones exploded while charging. The recall is expected to cost the company close to $1 billion.

This is not a one-off incident.

Product recalls have plagued the manufacturing world for decades, from the food and drug industries to automotive, causing huge losses and risk to human life. In 1982, Johnson & Johnson recalled 31 million bottles of Tylenol, with a retail value of $100 million, after seven people died in the Chicago area. In 2000, Ford recalled 20 million Firestone tires, losing around $3 billion, after 174 people died in road accidents caused by faulty tires. In 2009, Toyota recalled 10 million vehicles over numerous issues, including gas pedal problems and faulty airbags, resulting in a $2 billion loss from repair expenses and lost sales, in addition to its stock price dropping more than 20%, or $35 billion.

Most manufacturers have very stringent quality control processes for their products before they are shipped. So how and why do these faulty products make it to market, posing serious risks to life and to business?

Koh Dong-jin, president of Samsung’s mobile business, said that the cause of the battery issue in the Galaxy Note 7 was “a tiny problem in the manufacturing process and so it was very difficult to find out“. This is true for most of the recalls that happen: it is not possible to manually detect these seemingly “tiny” problems early enough, before they result in catastrophic outcomes.

But this won’t be the case in the future.

The manufacturing world has seen 4 transformative revolutions:

  • The 1st Industrial Revolution brought in mechanization, powered by water and steam.
  • The 2nd Industrial Revolution saw the advent of the assembly line, powered by gas and electricity.
  • The 3rd Industrial Revolution introduced robotic automation, powered by computing networks.
  • The 4th Industrial Revolution has taken it to a completely different level, with smart, connected assets powered by machine learning and artificial intelligence.

It is this 4th Industrial Revolution, which we are just embarking on, that has the potential to transform the face of the manufacturing world and create new economic value, to the tune of tens of trillions of dollars globally, from cost savings and new revenue generation. But why is this the most transformative of all the revolutions? Because it is this revolution that has transformed lifeless mechanical machines into digital life-forms, with the birth of the Digital Twin.


Digital Twin refers to the computerized companion (or model) of a physical asset, which uses multiple internet-connected sensors on the asset to represent its near real-time status, working condition, position, and other key metrics that help us understand the health and functioning of the asset at a granular level. This lets us understand assets and asset health the way we understand humans and human health, with the ability to perform diagnosis and prognosis like never before.
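To make the idea concrete, here is a minimal sketch in Python (with hypothetical field and sensor names, not any particular vendor's data model) of what a digital twin boils down to: a continuously updated, per-asset record of sensor readings that can be checked against the asset's normal operating range.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class DigitalTwin:
    """Minimal sketch: the latest sensor readings for one physical asset."""
    asset_id: str
    readings: Dict[str, float] = field(default_factory=dict)
    updated_at: Optional[datetime] = None

    def ingest(self, sensor: str, value: float) -> None:
        """Record a new sensor reading (in a real system this arrives over IoT telemetry)."""
        self.readings[sensor] = value
        self.updated_at = datetime.now(timezone.utc)

    def is_healthy(self, limits: Dict[str, tuple]) -> bool:
        """Naive health check: every monitored reading stays inside its (min, max) operating range."""
        return all(lo <= self.readings.get(name, lo) <= hi for name, (lo, hi) in limits.items())

# One twin per asset coming off the assembly line.
twin = DigitalTwin(asset_id="battery-cell-000123")
twin.ingest("battery_temp_c", 41.2)
print(twin.is_healthy({"battery_temp_c": (0.0, 60.0)}))   # True
```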

How can this solve the recall problem?

Sensor-enabling the assembly line and creating Digital Twins of all the individual assets and workflows provides timely insight into the tiniest of issues that can otherwise easily be missed in a manual inspection process. Causes can be detected and potential product quality issues predicted right on the assembly line, as early as possible, so that manufacturers can take proactive action to resolve them before they snowball. This can not only prevent recalls but also reduce scrap on the assembly line, taking operational efficiency to unprecedented heights.

What is the deterrent? Why has this problem not been solved by most organizations that have smart-enabled their factories?

The traditional approach to data science and machine learning doesn’t scale for this problem. Traditionally, predictive models are created by taking a sample of data from a sample of assets, and these models are then generalized to predict issues on all assets. While this can detect common, known issues, which would otherwise be caught in the quality control process itself, it fails to detect the rare events that cause massive recalls. Rare events have failure patterns that don’t commonly occur across the assets or the assembly line. Highly sensitive generalized models can be built to detect any and all deviations, but they generate a large number of false positive alerts, which cause a different series of problems altogether.

The only way to ensure accurate models that flag only true issues is to model each asset and workflow channel individually, learn its respective normal operating conditions, and detect its respective deviations. This is what makes the challenge beyond human scale: when there are hundreds, thousands, or millions of assets and components, it is impossible to keep building and updating models for each one manually. It requires automating the predictive modeling and machine learning process itself, because putting human data scientists in the loop doesn’t scale.
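As a rough sketch of that per-asset approach, the Python below assumes sensor data arrives in a pandas DataFrame with an asset_id column and numeric sensor columns (both hypothetical names), and fits one anomaly detector per asset so that each asset's own normal behavior defines what counts as a deviation:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

def fit_per_asset_models(df: pd.DataFrame, sensor_cols: list) -> dict:
    """Train one anomaly detector per asset so each model learns that asset's normal behavior."""
    models = {}
    for asset_id, readings in df.groupby("asset_id"):
        model = IsolationForest(contamination=0.01, random_state=0)
        model.fit(readings[sensor_cols])
        models[asset_id] = model
    return models

def flag_deviations(models: dict, df: pd.DataFrame, sensor_cols: list) -> pd.DataFrame:
    """Score new readings with the matching asset's model; -1 marks a deviation from that asset's norm."""
    flagged = []
    for asset_id, readings in df.groupby("asset_id"):
        preds = models[asset_id].predict(readings[sensor_cols])
        flagged.append(readings[preds == -1])
    return pd.concat(flagged) if flagged else df.iloc[0:0]
```

The hard part, as described above, is not this per-asset code but keeping hundreds of thousands of such models trained and up to date without a human in the loop.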

But aren’t there standard approaches or scripts to automate predictive modeling?

Yes, there are. However, plain-vanilla automation of the modeling process, which simply runs all permutations of algorithms and hyper-parameters, again doesn’t work. The number of assets (and therefore of individual models), the frequency at which models must be updated to capture new real-world events, the volume of data, and the wide variety of sensor attributes all create prohibitive computational complexity (think millions or billions of permutations), even with infinite infrastructure to process them. The only solution is Cognitive Automation: an intelligent process that mimics how a human data scientist leverages prior experience to run fewer experiments and reach an optimal ensemble of models as quickly as possible. In short, this is about teaching machines to do machine learning and data science, like an A.I. Data Scientist.
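A back-of-the-envelope illustration of why brute force fails and how "prior experience" prunes the search (the numbers and the warm-start shortcut below are illustrative assumptions, not DataRPM's actual method):

```python
from typing import Optional

# Rough size of an exhaustive search: model families x hyper-parameter settings x assets x refreshes.
algorithms = 10          # candidate model families per asset
grid_points = 500        # hyper-parameter combinations per family
assets = 100_000         # individually modeled assets and components
refits_per_year = 52     # weekly model refresh to capture new real-world events

exhaustive_fits = algorithms * grid_points * assets * refits_per_year
print(f"{exhaustive_fits:,} model fits per year")   # 26,000,000,000 fits, clearly prohibitive

# One naive form of "prior experience": start from the configuration that worked on a similar
# asset and explore only a small neighborhood around it, instead of the full grid.
def experiments_needed(best_config_from_similar_asset: Optional[dict], neighborhood: int = 5) -> int:
    full_grid = algorithms * grid_points
    return neighborhood if best_config_from_similar_asset else full_grid

print(experiments_needed({"algo": "isolation_forest", "n_estimators": 200}))  # 5 instead of 5,000
```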

This is the technology required to give the Digital Twin a true life-form that delivers the end business value – in this case, preventing recalls.

Does it sound like sci-fi?

It isn’t, and it is already happening thanks to advances in machine learning and artificial intelligence. Companies like Google are using algorithms to create self-driving cars and beat world champions at complex games. At the same time, we at DataRPM are using algorithms to teach machines to do data analysis and to detect asset failures and quality issues on the assembly line. This dramatically improves operational efficiency and prevents product recalls.

The future, where the dreaded product recalls will be a thing of the past, is almost here!

By Ruban Phukan, Co-Founder and Chief Product & Analytics Officer, DataRPM

The History of Containers and Rise of Docker

Containers 101

Docker started out as a means of creating single-application containers, but it has since grown into a widely used development tool and runtime environment. It has been downloaded around two billion times, and RedMonk has said that “we have never seen a technology become ubiquitous so quickly.” The Docker registry stores container images and provides a central point of access for sharing containers: users can push images into the registry, or pull images from it and deploy directly. Despite its widespread growth and acceptance, Docker retains its free, open-source roots and hosts a free public registry from which anyone can obtain official Docker images. Below is an infographic, discovered via Twistlock, which gives a really nice overview of container technologies.
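For readers who want to see the pull/push workflow the registry enables, here is a small sketch using Docker's Python SDK (the docker package); the repository name is a placeholder, and pushing assumes you have already logged in to a registry:

```python
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Pull an official image from Docker's public registry.
image = client.images.pull("alpine", tag="latest")

# Run a throwaway container from that image.
output = client.containers.run("alpine:latest", "echo hello from a container", remove=True)
print(output.decode().strip())

# Share your own image: tag it for your repository and push it to the registry.
image.tag("example-org/my-app", tag="1.0")            # "example-org/my-app" is a placeholder repo
client.images.push("example-org/my-app", tag="1.0")   # assumes you are logged in to the registry
```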


By Jonquil McDaniel

Write Once, Run Anywhere: The IoT Machine Learning Shift From Proprietary Technology To Data

The IoT Machine Learning Shift

While early artificial intelligence (AI) programs were one-trick ponies, typically able to excel at only one task, today it’s about becoming a jack of all trades. Or at least, that’s the intention. The goal is to write one program that can solve multi-variant problems without needing to be rewritten when conditions change—write once, run anywhere. Digital heavyweights—notably Amazon, Google, IBM, and Microsoft—are now open sourcing their machine learning (ML) libraries in pursuit of that goal, as competitive pressure shifts the focus of differentiation from proprietary technology to proprietary data.

Machine learning is the study of algorithms that learn from examples and experience, rather than relying on hard-coded rules that do not always adapt well to real-world environments. ABI Research forecasts that ML-based IoT analytics revenues will grow from $2 billion in 2016 to more than $19 billion in 2021, with more than 90% of 2021 revenue attributable to more advanced analytics phases. Yet while ML is an intuitive and organic approach to what was once a very rudimentary and primal way of analyzing data, it is worth noting that the ML/AI model creation process itself can be very complex.


The techniques used to develop machine learning algorithms fall under two umbrellas:

  • How they learn: based on the type of input data provided to the algorithm (supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning)

  • How they work: based on the type of operation, task, or problem performed on I/O data (classification, regression, clustering, anomaly detection, and recommendation engines)

Once the basic principles are established, a classifier can be trained to automate the creation of rules for a model. The challenge lies in learning and implementing the complex algorithms required to build these ML models, which can be costly, difficult, and time-consuming.
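To make "training a classifier" concrete, here is a deliberately tiny supervised-learning sketch using scikit-learn; the sensor features and failure labels are made up for illustration:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labeled examples: each row is [temperature_c, vibration_mm_s], labeled 1 if the device later failed.
X = [[40, 1.2], [42, 1.0], [85, 7.5], [90, 8.1], [38, 0.9], [88, 6.9]]
y = [0, 0, 1, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Supervised learning: the classifier infers its own decision rules from the examples,
# rather than a developer hand-coding thresholds.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("prediction for a hot, vibrating device:", clf.predict([[87, 7.2]])[0])
```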

Engaging the open-source community brings an order-of-magnitude boost to the development and integration of machine learning technologies without the need to expose proprietary data, a trend that Amazon, Google, IBM, and Microsoft swiftly pioneered.

At more than $1 trillion, these four companies have a combined market cap that dwarfs the annual gross domestic product of more than 90% of countries in the world. Each also open sourced its own deep learning library in the past 12 to 18 months: Amazon’s Deep Scalable Sparse Tensor Network Engine (DSSTNE; pronounced “destiny”), Google’s TensorFlow, IBM’s SystemML, and Microsoft’s Computational Network Toolkit (CNTK). And others are quickly following suit, including Baidu, Facebook, and OpenAI.

But this is just the beginning. To take the most advanced ML models used in IoT to the next level (artificial intelligence), modeling and neural network toolsets (e.g., syntactic parsers) must improve. Open sourcing such toolsets is again a viable option, and Google is taking the lead by open sourcing its neural network framework SyntaxNet, driving the next evolution in IoT from advanced analytics to smart, autonomous machines.

But should others continue to jump on this bandwagon and shift away from proprietary technology toward proprietary data? Not all companies own the kind of data that Google collects through Android or Search, or that IBM picked up with its acquisition of The Weather Company’s B2B, mobile, and cloud-based web properties. Fortunately, a proprietary data strategy is not the only path to competitive advantage in data and analytics. As more devices get connected, technology will play an increasingly important role in balancing insight generation from previously untapped datasets with the capacity to derive value from the highly variable, high-volume data that comes with these new endpoints—at cloud scale, with zero manual tuning.



Collaborative economics is an important component in the analytics product and service strategies of these four leading digital companies, all of which are seeking to build a greater presence in IoT and, more broadly, in the convergence of the digital and the physical. But “collaboration” should be placed in context: once one company open-sourced its ML libraries, the others were forced to release theirs as well. Millions of developers are far more powerful than a few thousand in-house employees. Open sourcing also offers these companies tremendous benefits, because they can use the new tools to enhance their own operations. For example, Baidu’s Paddle ML software is used in 30 different online and offline Baidu businesses, ranging from health to financial services.

And there are other areas for these companies to invest resources that go beyond the analytics toolsets. Identity management services, data exchange services, and data chain of custody are three key areas that will be critical to the growth of IoT and the digital/physical convergence. Pursuing ownership of, or proprietary access to, important data has its appeal. But the new opportunities in the IoT landscape will rely on great technology and the scale these companies possess, for a connected world that will reach hundreds of billions of endpoints in the decades to come.

By Ryan Martin and Dan Shey

Ryan Martin, Senior Analyst at ABI Research, covers new and emerging mobile technologies, including wearable tech, connected cars, big data analytics, and the Internet of Things (IoT) / Internet of Everything (IoE). 

Ryan holds degrees in economics and political science, with an additional concentration in global studies, from the University of Vermont and an M.B.A. from the University of New Hampshire.

Why Do Television Companies Need A Digital Transformation

Cloud TV

Over just a few years, the world of television production, distribution, and consumption has changed dramatically. In the past, with only a few channels to choose from, viewers watched news and entertainment television at specific times of the day or night. They were also limited in where and how they could watch: options included staying home, going to a friend’s house, or perhaps going to a restaurant or bar to watch a special game, show, news story, or event. The TV industry has now largely completed the move from standard definition to high definition, and the discussion has turned to the 4K and 8K video standards. But before all of that can happen, analog broadcasting needs to be transformed digitally. That means the TV industry unavoidably needs a disruptive transformation of its ICT platform to cope with the new processes of acquisition, production, distribution, and consumption.


Fast-forward to today, and you have a very different scenario. Thanks to the rise of the Internet – and, in particular, mobile technology – people have nearly limitless options for their news and entertainment sources. Not only that, but they can choose to get their news and other media on TV or on a variety of smart devices, including phones, tablets, smart watches, and more.

Improved Business Value From New Information and Communication Technologies (ICT)

The world has changed, and continues to change, at a rapid pace. This change has introduced a number of challenges to businesses in the television industry. Making the digital media transformation can do a number of things to resolve these challenges and improve your business and viewership.

With leading new ICT, you can see significant business value and improved marketing and production strategies. For example, making this transformation can vastly improve your television station’s information production and service capabilities. It can also smooth the processes involved in improving broadcasting coverage and performance.

With these improvements, your station will have faster response times when handling time-sensitive broadcasts. This delivers to your audience the up-to-the-minute coverage and updates they want across different TV and media devices and platforms.

Improved Social Value with New ICT

A television station that refuses to change and evolve with viewers’ continuously evolving needs and wants will find itself falling behind competitors. However, a TV station that understands the necessity of making the digital media transformation will have significantly improved social value with its audiences.


Television stations that embrace new technology, digital media, storage, cloud computing and sharing will see massive improvements in social value. Consider that this transformation enables your station to produce timely and accurate reports faster, giving your audience the freshest information and entertainment.

By bringing news and entertainment media to your audience when, where and how they want and need it, you can enrich their lives and promote a culture of information sharing that will also serve to improve your ratings and business. With technologies like cloud-based high-definition video production and cloud-based storage and sharing architectures, you can eliminate many of the challenges and pain points associated with reporting news and bringing TV entertainment to a large audience.

Why Do Television, Media, and Entertainment Companies Need a Digital Transformation?

Consider the basic steps that a TV news station must take to get the news to their audience:

  • Acquisition
  • Production
  • Distribution
  • Consumption

For television stations that have not yet embraced a digital media transformation, these steps do not just represent the process of delivering news media to the public. They also represent a series of pain points that can halt progress and delay deadlines. These include:

  • Traditional AV matrices use numerous cables, are limited by short transmission distance for HD signals and require complicated maintenance, slowing down 4K video evolution.
  • Delays when attempting to transmit large video files from remote locations back to the television station.
  • Delays when reporters edit videos, because office and production networks in TV stations are separated from each other, requiring reporters to move back and forth between the production zone and the office zone in their building to do research.
  • Delays due to the time it takes to transmit a finished program (between six and twenty-four minutes, depending on the length and whether or not it is a high-definition video) to the audience.
  • 4K video production places much higher demands on bandwidth and frame rates.

These challenges all occur in traditional structures and architectures for media handling, but they quickly dissolve when a TV station makes the digital transformation and begins using a cloud-based architecture with new ICT.

Keeping Up With Viewer Demand via Ultra High Definition (UHD) Omnimedia

Increasingly, viewers demand more individualized experiences, including interactive programming, rich media, and UHD video, and they want them across all applicable devices. Delivering UHD omnimedia is only possible through new ICT, as older IT infrastructures simply cannot scale to the levels necessary to keep up with viewer demand.

Fortunately, through cloud-based architectures and faster sharing, networks and stations can not only keep up with consumer demand but actually surpass it. For example, when using the 4K format, your station can provide viewers with the highest resolution available (4096 x 2160 pixels), and your video will be easily scalable across platforms for the most convenient viewing possible.

Furthermore, by becoming an omnimedia center, your station can enjoy the benefits of converged communications. Essentially, this means that you will be creating information and/or entertainment that can be used in multiple different ways for television, social media, news sites, etc., giving you more coverage and exposure than ever before.

What Is Required to Make the Transformation to Digital Media?

Cloud computing and embracing 4K for video formatting are both essential to digital media transformation, but they are not all that is necessary. Aside from these two elements, television stations can take advantage of advances in technology in a number of ways to improve their marketing and production strategies through the use of new ICTs.

For example, thin clients and cloud computing can enable video editing anywhere, anytime, increasing efficiency. To reduce latency between thin clients and the cloud, new ICT architectures combine enhanced display protocols, virtual machines, and GPU virtualization to enable smooth, audio/video-synchronized editing of 8-track HD video, or even 6-track 4K video editing on thin clients, via the industry’s only IP storage system.

As mentioned earlier, through cloud computing, it is no longer necessary to physically transport video from a news site to the station. Likewise, it is no longer necessary to do all production work and research in separate areas. Thanks to cloud storage and sharing, these pain points can easily be eliminated, as sharing and sending information becomes much simpler and faster.

An all-IP based video injection process is a must if TV stations want to lower network complexity and simplify system maintenance. There are two ways to approach this:

  1. IP cabling can replace traditional SDI cabling. Each IP cable transmits one channel of 4K video signal, whereas SDI requires four cables to transmit the same video. Using IP cables can thus reduce the number of necessary cables by up to 92%, improve O&M efficiency by 60%, and simplify system interworking and interaction.
  2. With the help of mobile broadband, WAN-accelerated networks, and smartphones or tablets, journalists in the field can shorten the video submission process by 90%. Most importantly, cloud computing allows journalists to edit video anywhere, anytime. With fast transcoding resources in the cloud, real-time video reporting is now possible.

Another major factor in any digital media transformation is big data and data analytics. By collecting and analyzing information about your station’s viewers, you can create more personalized viewing experiences. Netflix is perhaps the best-known example of this: it has built algorithms that use previous viewing behavior to predict whether a viewer will enjoy a certain film or show, and which titles to recommend to each viewer.
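As a toy illustration of that kind of behavior-based recommendation (not Netflix’s actual algorithm), the sketch below recommends an unwatched show by finding the viewer whose past ratings are most similar; the ratings matrix is invented:

```python
import numpy as np

shows = ["Drama A", "Comedy B", "Documentary C", "Thriller D"]
# Rows are viewers, columns are shows; 0 means "not watched yet".
ratings = np.array([
    [5, 1, 4, 0],   # viewer 0
    [4, 2, 5, 1],   # viewer 1
    [1, 5, 1, 4],   # viewer 2
])

def recommend(target: int) -> str:
    """Recommend an unwatched show using the most similar viewer's ratings (cosine similarity)."""
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[target])
    sims = ratings @ ratings[target] / norms
    sims[target] = -1                          # don't compare the viewer with themselves
    neighbor = int(np.argmax(sims))
    unwatched = np.where(ratings[target] == 0)[0]
    best = unwatched[np.argmax(ratings[neighbor, unwatched])]
    return shows[best]

print(recommend(0))   # viewer 1 is most similar to viewer 0, so "Thriller D" is suggested
```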


Through these and other information and communication technologies, such as the Internet of Things (IoT), software-defined networking (SDN), and improved mobile broadband, television stations can bring faster, more accurate, and more convenient news and entertainment to their customers and viewers.

Who Is Leading the Way in the Transformation?

In my opinion, the company that achieves agile innovation across cloud-pipe-device collaboration will lead the way in this transformation. One such company, China’s Huawei, is now trying to create an ecosystem for global channel and solution partners across the news and media entertainment industry, and it provides an open ICT platform that encourages media industry developers to keep innovating. With strong development in cloud-based architectures, SDN, mobile broadband, and IoT, developers and partners can create comprehensive solutions that empower media stations of all kinds to move into the future.

What do you think of the digital media transformation in the Television Industry?

(Originally published September 7th, 2016)

By Ronald van Loon

The Importance of Cloud Backups: Guarding Your Data Against Hackers

The Importance of Cloud Backups

Cloud platforms have become a necessary part of modern business, with the benefits far outweighing the risks. However, the risks are real and account for billions of dollars in losses across the globe each year. If you’ve been hacked, you’re not alone. Here are some companies that have fallen victim to data breaches in the past several years and survived:

  • Premera Blue Cross had 11 million members’ names, birthdates, email addresses, addresses, telephone numbers, Social Security numbers, member identification numbers, bank account information, and medical claims information compromised in 2014.
  • Sony had 47,000 current and former employees’ names, addresses, Social Security numbers, internal emails, and other personal information exposed in 2014.
  • Staples had 1.16 million customers’ credit card numbers stolen from 115 stores.
  • Home Depot had 56 million customers’ credit card numbers stolen.
  • LinkedIn had passwords for nearly 6.5 million user accounts taken by Russian cybercriminals in 2012, which resurfaced in May 2016.
  • Worse yet, in 2013, the U.S. Army, Department of Energy, and Department of Health and Human Services had their databases breached, exposing personal information on at least 104,000 employees, contractors, family members, and others associated with the Department of Energy, along with information on almost 2,000 bank accounts.

At the risk of sounding like misery loves company: you are clearly not alone, and anyone is susceptible to a breach in data security. In fact, a survey conducted by security consultancy CyberEdge found that 71% of those surveyed fell victim to a cyber attack in 2014, and 52% believed they would be hit by a successful cyber attack in the near future. Another online backup survey found that 56% of SMBs store mission-critical data both on-premises and in the cloud, indicating that extra precautions are common practice.


While malware comes in different destructive forms, ransomware is one of the newest ploys used to handicap the systems of cyber victims. “’Ransomware’ is malicious software that allows a hacker to access an individual or company’s computers, encrypt sensitive data and then demand some form of payment to decrypt it. Doing so essentially lets hackers hold user data or a system hostage,” reports Security Magazine.

Cybercriminals could also seek to exploit weak or ignored corporate security policies established to protect cloud services. Home to an increasing amount of confidential business information, such services, if exploited, could compromise organizational business strategy, company portfolio strategies, next-generation innovations, financials, acquisition and divestiture plans, employee data, and other data.

So how do you back up your cloud data so that you can recover it without falling victim to ransomware? Many cloud platforms’ BDR (backup and disaster recovery) services allow you to schedule backup frequency and replicate data off-site to reduce the possibility of losing valuable information to a ransomware attack, natural disaster, or other cause that prevents you from accessing your cloud. For example, if you use Microsoft Azure, it offers Azure Site Recovery, which can replicate workloads to the company’s Premium Storage tier of cloud storage, which uses SSDs packed with flash chips to speed up cloud applications and associated storage operations. “If you are running I/O [input/output] intensive enterprise workloads on-premises, we recommend that you replicate these workloads to Premium Storage,” wrote Poornima Natarajan, a Microsoft Cloud and Enterprise program manager, in a blog post. “At the time of a failover of your on-premises environment to Azure, workloads replicating to Premium Storage will come up on Azure virtual machines running on high-speed solid state drives (SSDs) and will continue to achieve high levels of performance, both in terms of throughput and latency.”
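Conceptually, the scheduling and off-site replication that BDR services automate looks something like the sketch below (plain Python with hypothetical paths, not Azure’s API); the key point is that each backup run lands in its own off-site copy, which a later ransomware infection cannot encrypt in place:

```python
import shutil
import time
from datetime import datetime
from pathlib import Path

PRIMARY = Path("/data/cloud-volume")        # hypothetical mount of the primary cloud volume
OFFSITE = Path("/mnt/offsite-replica")      # hypothetical off-site / second-region target
BACKUP_EVERY_HOURS = 6                      # the "backup frequency" in the BDR schedule

def replicate_offsite() -> Path:
    """Copy the primary volume into a new timestamped folder off-site,
    keeping each run separate so older copies stay untouched."""
    stamp = datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    target = OFFSITE / f"backup-{stamp}"
    shutil.copytree(PRIMARY, target)
    return target

if __name__ == "__main__":
    while True:
        print("replicated to", replicate_offsite())
        time.sleep(BACKUP_EVERY_HOURS * 3600)
```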

While it may be a hassle to backup your cloud data, not doing so could be detrimental to your business. From natural disasters to hackers to human error, taking the time to secure your cloud-based data may cost you a few minutes but could end up being priceless.

By Alex Miller

The Future of Cognitive Computing

Cognitive Computing

Recently popularized by IBM’s highly intelligent Watson supercomputer, which competed on the hit game show Jeopardy, cognitive computing refers to machines that are capable of learning concepts and patterns through advanced language processing algorithms. A system that involves incredibly advanced artificial intelligence, cognitive computing is one facet of computer science that isn’t for the faint of heart.

Consumer Uses for Cognitive Computing

Although much of the hype around cognitive computing is centered on big business and big data processing, there are a number of consumer applications. Whereas business leaders might use the technology to increase their bottom line, streamline daily operations and achieve greater profitability, consumers can take advantage of cognitive computing to ease some of the burdens of everyday life.

In fact, many consumers are using some form of cognitive computing without realizing it. Smartphone apps, in-store kiosks, and e-Commerce use cognitive computing to offer users and customers greater accessibility, increased support and cost comparison. According to Deloitte, more than half of all mobile users currently use their devices while shopping in order to browse prices and download coupons.

How Cognitive Computing Is Changing the Workplace

While cognitive computing has yet to reach its full potential, there are nearly infinite possibilities for its future implementation. According to some sources, cognitive computing can bolster the recordkeeping and documentation process within the healthcare sector by collating patient history, recommending the appropriate diagnostic tools and even suggesting relevant articles or whitepapers. Some analysts predict that approximately 30 percent of all healthcare IT systems will use cognitive computing by the year 2018.

Our ability to manage ad-hoc projects can also benefit from cognitive computing. By utilizing a system like IBM’s Watson as a personal, AI-driven secretary, project managers can obtain accurate information, monitor timelines and deliverables or even participate in the overall project planning and budgeting phases. People who actively use a project portfolio management strategy can use the technology to achieve greater resource allocation, track multiple projects and collate data from various sources.

People in the insurance industry also stand to benefit from cognitive computing and advanced AI. According to research by experts with IBM, cognitive computing systems can bolster human-computer engagement, strengthen information discovery and make important business decisions. Additional benefits include improvements in risk management, cost analysis and customer service.

Companies use cognitive computing in a myriad of other ways, too. Some use the technology as a means of supporting internal troubleshooting and third-party software, while others use it to collect, store and analyze financial data on behalf of individual clientele.

Receiving Brand Name Support

Cognitive computing is receiving support from some of the top names in the IT world. Apart from IBM and its Watson supercomputer, brands such as Microsoft, Cisco, Google, and Spark have thrown their respective hats into the ring. Moreover, each adds something different to the concept of cognitive computing.

For example, Microsoft offers various software development kits and utilities in order to support the programming and implementation of advanced artificial intelligence in modern software. Conversely, Cisco’s Cognitive Threat Analytics suite is meant to identify and resolve cyber-threats as soon as possible.

The Longevity of Cognitive Computing

Despite the fact that it’s still a relatively new concept, there’s no denying cognitive computing’s impact on our daily lives. As more companies pledge resources to the development of the technology and as more consumers embrace cognitive computing in their personal lives, we’ll only see the technology improve even further. Indeed, cognitive computing is definitely here to stay.

By Kayla Matthews

Is Automation The Future Of Radiology?

Future of Radiology

For those of you who don’t already know, radiology is a subset of medicine that specializes in the diagnosis and treatment of diseases, illnesses, and injuries based on imaging techniques. X-rays, MRIs, CT scans, ultrasounds, and PET scans all fall under the umbrella of radiology. Even within this medical niche you will find doctors who are highly specialized in treating certain parts of the body. Once you go down this rabbit hole, you will be shocked to see how deep it can go.

But how close are we to having the entirety of radiology automated by all-knowing robots that can do the job equally well, if not better than, our well-trained doctors? The idea of automation in medicine is nothing new, as our exponential progress in technology has raised the valid concern that robots are the future of medicine. We are already in the process of designing nano-sized robots to solve certain medical problems. We invest millions of dollars in building the best equipment that doctors and healthcare workers can get their hands on. What stops us from taking it one step further and having robots perform our jobs without us having to lift a finger?


Take IBM, for example. Its radiology software Avicenna is already showcasing the future of automation in action. It was specifically programmed to make diagnoses and suggest treatments based on a patient’s medical images and the data in their record. Early demos show that its accuracy is on par with independent diagnoses made by trained radiologists. As more data is fed to this software, in the form of millions of anonymized patient records, it will gradually graduate from demo testing and become a seriously useful tool in hospitals all around the world.

Another recent case study of machine-guided radiology in action is Enlitic, a deep-learning system engineered for medical image recognition. In a test that pitted it against three expert human radiologists in analyzing CT scans of patients’ lungs, “Enlitic’s system was 50% better at classifying malignant tumours and had a false-negative rate (where a cancer is missed) of zero, compared with 7% for the humans”. If this is the kind of result we are seeing from a startup, just imagine the implications when this technology is fully developed and integrated with the IT systems of healthcare facilities worldwide.

Many people are divided on the implications of automating radiology-guided diagnosis and treatment. The common argument against automation is that it will put a lot of radiologists out of work: several decades of intense study and hard work will go down the drain because a machine will be able to do their job with greater accuracy and success. Thanks to advances in artificial intelligence and deep learning, this possibility can no longer be disregarded. We are already seeing several jobs in the transportation and manufacturing industries being lost to robots and well-programmed machines.


On the other hand, those in favour of automation argue that radiology robots are going to help radiologists do their jobs rather than take them away. Indeed, we still have a long way to go when it comes to rigorously testing and optimizing the ability of intelligent programs to accurately diagnose complex medical problems. One might even argue that radiology software will act as a checking system against which independent human diagnoses can be compared with a machine-produced result. In the end, problems would be found far sooner and fixed far faster. It could even lead to reduced patient wait times!

We are fortunate that medical automation is still in its early developmental stages. There is still time for us to debate the pros and cons of automation in radiology. No matter the outcome, it is blatantly clear that jobs need to be at the forefront of this discussion. Either we provide hard-working radiologists with a new career path, or we find a way for automation to work alongside their work rather than against it.

What are your thoughts about automation in radiology? Are you for it or against it? Leave your thoughts in the comments below!

By Tom Zakharov

Tom is a Master’s student at McGill University, currently specializing in the field of Experimental Medicine. After graduating from the University of Ottawa as a Summa Cum Laude undergraduate, he is currently investigating novel indicators of chemotherapy toxicity in stage IV lung cancer patients. Tom also has 4+ years of scientific research in academia, government, and the pharmaceutical industry. Tom’s first co-authored paper investigated a novel analytical chemistry method for detecting hydrazine in nuclear power plants at parts-per-billion (ppb) concentrations, which can be viewed here.

3 Groundbreaking Wearables In The Travel Space

3 Groundbreaking Wearables

The advent of wearable technologies had many expecting a utopia free of 20th-century pains such as paper maps, customer loyalty cards, lost luggage, and sluggish airport security.

Unfortunately, technological limitations have prevented wearables from revolutionizing the world. A number of devices struggle with voice recognition: Travel technology company Sabre found that about 16 percent of voice commands were ineffective with Google Glass during tests at an airport. To top it off, GPS in smartwatches and smartphones sometimes misses the mark, and battery life in most wearables is dismal.

If wearable use were as prolific as smartphone use, the potential applications while traveling might be nearly limitless. For now, initial excitement over wearables has not translated to long-term use. While a 2016 survey by PricewaterhouseCoopers found about 49 percent of respondents owned at least one wearable, the same study found that daily use of those devices decreased over time.

With data networks everywhere upgrading to 5G, connectivity woes might soon be a thing of the past. Rising interest in virtual reality and augmented reality technologies and Internet of Things applications is fueling curiosity in the devices, and advances in batteries and charging capabilities have the technology poised to break into the mainstream.

Wearables on the Rise

Nearly every tech company has a full line of wearables, and even fashion juggernauts such as Under Armour are moving toward connected clothing. Three main types of wearables are reaching mainstream success, and their applications will revolutionize the way we travel.


1. Smartwatches.

Smartwatches most often connect to mobile phones, although Samsung’s Gear S2 and several pending releases also support separate data plans. This wristwear has a screen and an operating system that makes it ideal for notifications. Activity-tracking bands often sport similar features.

Smartwatches can be used to pay for meals, book hotels or cars, and check the status of flights. Most major airlines already have an Apple Watch app that allows travelers to board by scanning their wrists rather than tickets. Hotel chains are investigating ways to use smartwatches for room keys. And vibrational GPS while traversing an unfamiliar city is invaluable.

2. Smart glasses.

The high-tech eyewear connects to your phone, and headsets such as Samsung’s Gear VR and Mattel’s View-Master VR use smartphone cameras to deliver AR. Several generations of consumers are being introduced to untethered AR experiences, while Google negotiates with retailers and manufacturers to embed its Glass technology into eyewear across the globe.

Travelers will soon be able to use AR to provide interactive maps, travel guides, notifications, and flight updates while they interact with the real world. The technology is still in its infancy, but the smart glass industry will change travel when it reaches full maturity.

3. Wearable cameras.

Wearable cameras are often mentioned in relation to police officers, but tourists could also benefit from this technology. With the small cameras now readily available for a modest price, travelers can get in on the action to document their adventures in innovative ways and share them with friends and family.

In fact, the action camera market already has moved to spherical cameras, with Kodak’s Pixpro SP360 4K camera offering the most compact solution. Using two GoPro-sized SP360s, anyone can capture immersive, 360-degree views of exotic locales from around the world. With social networks pushing for more visual content, capturing and sharing vacation photos will only become easier.

Signs of a Wearable Revolution

The Passenger IT Trends Survey found 77 percent of respondents were comfortable with airport staff using wearable technology to help them. That same year, World Travel Market named wearable tech one of its top trends.

The benefits are clear for travelers: Wearable tech can replace sagging fanny packs and wallets bulging with paperwork. Rather than carrying around credit cards, tickets, receipts, and identification documents, travelers can store and access everything from a watch to glasses to eventually even their own solar- and motion-powered clothing. The technology can help simplify the entire customs process for both passengers and agents.

The devices also should help travel agents respond to increased demand for personalized services. By using the technology to customize holiday packages and enhance communications with clients, wearables could be a boon for the travel industry as a whole.

Smartphones and tablets have fully saturated the market, and interest in technology such as AR and gesture commands is reaching a fever pitch. These technologies are converging for both consumers and enterprises out in the wild as people untether from their desktops and make data-driven decisions on the go.

While the shift likely will have wide-reaching effects throughout society, the travel industry in particular is in line for momentous changes.

By Tony Tie

Tony is a numbers-obsessed marketer, life hacker, and public speaker who has helped various Fortune 500 companies grow their online presence.

Located in Toronto, he is currently the senior search marketer at Expedia Canada, the leading travel booking platform for flights, hotels, car rentals, cruises, and local activities.
