
Using Private Cloud Architecture For Multi-Tier Applications

Cloud Architecture

These days, multi-tier applications are the norm. From SharePoint’s front-end/back-end configuration to LAMP-based websites using multiple servers to handle different functions, a multitude of apps require public- and private-facing components to work in tandem. Placing these apps on entirely public-facing platforms and networks simplifies the process, but at the cost of security vulnerabilities. Locating everything on back-end networks causes headaches for the end users who have to access the systems over VPN and other private links.

Many strategies have been implemented to address this issue across traditional datacenter infrastructures. Independent physical networks with a “DMZ” for public-facing components, along with complex router and firewall configurations, have all done the job, although they add multiple layers of complexity and require highly specialized knowledge and skill sets to pull off.

Virtualization has made management much easier, but virtual administrators are still required to create and manage each aspect of the configuration – from start to finish. Using a private cloud configuration can make the process much simpler, and it helps segment control while still enabling application administrators to get their jobs done.

Multi-tenancy in the Private Cloud

Private cloud architecture allows for multi-tenancy, which in turn allows for separation of the networking, back-end and front-end tiers. Cloud administrators can define logical relationships between components and enable the app admins to manage their applications without worrying about how they will connect to each other.

One example is a web-based application with a MySQL back-end data platform. In a traditional datacenter, the app administrators would request connectivity to either isolate the back-end database, or to isolate everything and allow only minimal web traffic to cross the threshold. That requires network administrators to spend hours working with the app team to create and test firewall and other networking rules that grant the access the application needs without opening any security holes that could be exploited.

Applying private cloud methodology changes the game dramatically.

The cloud administrator can create two individual virtual networks. Within each network, traffic flows freely, entirely removing the need to manually create networking links between components in the same virtual network. In addition, a set of security groups can be established that only allow specified traffic to route between the back-end data network and the front-end web server network – specifically the ports and protocols used to transfer MySQL data and requests. Security groups utilize per-tenant access control list (ACL) rules, which allow each virtual network to independently define what traffic it will and will not accept and route.
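To make this concrete, here is a minimal sketch of that two-network, two-security-group layout, assuming an OpenStack-style private cloud managed through the openstacksdk library; the cloud name and tier names are hypothetical placeholders:

```python
# Minimal sketch: two tiers, each with its own security group.
# Assumes an OpenStack-style private cloud; "my-private-cloud" and the
# tier names are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="my-private-cloud")

web_sg = conn.network.create_security_group(name="web-tier")
db_sg = conn.network.create_security_group(name="db-tier")

# Front end: accept web traffic (HTTP/HTTPS) from anywhere.
for port in (80, 443):
    conn.network.create_security_group_rule(
        security_group_id=web_sg.id, direction="ingress", protocol="tcp",
        port_range_min=port, port_range_max=port,
        remote_ip_prefix="0.0.0.0/0")

# Back end: accept MySQL traffic (port 3306) only when it originates
# from members of the web tier's security group -- nothing else from
# the outside world can reach the data network.
conn.network.create_security_group_rule(
    security_group_id=db_sg.id, direction="ingress", protocol="tcp",
    port_range_min=3306, port_range_max=3306,
    remote_group_id=web_sg.id)
```

The remote_group_id rule captures the origin-based restriction discussed next: the database tier matches traffic by the sender’s group membership rather than by any fixed IP address.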

Private cloud networking

Due to the nature of private cloud networking, it becomes much easier not only to ensure that approved data flows between the front-end and back-end networks, but also to ensure that traffic flows only if it originates from the application networks themselves. This allows required information to flow freely while blocking anyone outside the network from trying to enter through those same ports.

In the front-end virtual network, all web traffic ports are opened so that users can access the web servers. Both the front-end and back-end networks can be configured to reject every other protocol and port, and to allow routing from the outside world only to the front-end servers and nowhere else. This has the dual effect of enabling the web servers to do their jobs while refusing access to other administrators or anyone else in the datacenter, minimizing faults due to human error or malicious intent.

Once the application and database servers are installed and configured by the application administrators, the solution is complete. MySQL data flows between the back-end and front-end networks, but no traffic from other sources reaches the data network. Web traffic from the outside world flows into and out of the front-end network, but it cannot “leapfrog” into the back-end network because external routes are not permitted to any other server in the configuration. As each tenant is handled separately and governed by individual security groups, app administrators from other groups cannot interfere with the web application. Nor can they introduce security vulnerabilities by opening unnecessary ports across the board simply because their own apps need them.

Streamlined Administration

Finally, the entire process becomes easier when each tenant has access to self-service, relying on the cloud administrator only for configuration of the tenancy as a whole and for provisioning of the virtual networks. Servers, applications, security groups and other configurations can now be handled by the app administrator without impacting other projects, even when they reside on the same equipment. Troubleshooting can be accomplished via the cloud platform, which makes tracking down problems much easier. Of course, the cloud administrator could still manage the entire platform – but they no longer have to.

Using a private cloud model allows for greater flexibility, better security, and easier management. While it is possible to accomplish this with a traditional physical and virtual configuration, adding the self-service and highly configurable tools of a private cloud is a great way to take control, and make your systems work the way you want, instead of the other way around.

By Ariel Maislos, CEO, Stratoscale

Ariel brings more than twenty years of technology innovation and entrepreneurship to Stratoscale. After a ten-year career with the IDF, where he was responsible for managing a section of the Technology R&D Department, Ariel founded Passave, now the world leader in FTTH technology. Passave was established in 2001 and acquired in 2006 by PMC-Sierra (PMCS), where Ariel served as VP of Strategy. In 2006 Ariel founded Pudding Media, an early pioneer in speech recognition technology, and Anobit, the leading provider of SSD technology, acquired by Apple (AAPL) in 2012. At Apple, he served as a Senior Director in charge of Flash Storage until he left the company to found Stratoscale. Ariel is a graduate of the prestigious IDF training program Talpiot, and holds a BSc in Physics, Mathematics and Computer Science (Cum Laude) from the Hebrew University of Jerusalem and an MBA from Tel Aviv University. He holds numerous patents in networking, signal processing, storage and flash memory technologies.

Infographic: 9 Things To Know About Business Intelligence (BI) Software

Business Intelligence (BI) Software 

How does your company track its data? It’s a valuable resource; so much so that it’s known as Business Intelligence, or BI. But using it and integrating it into your daily processes can be significantly difficult. That’s why there’s software to help.

But when it comes to software, there are lots of options, and it’s hard to weigh all the pros and cons. First, you need to understand what makes up BI software and how it works: it focuses on gathering all that information and enabling you to create reports for analysis.

It may not seem as though BI software is worth it, but it can do a lot for your workflow. You might find decisions easier to make, or your operations more efficient. You might also be able to grow your business by spotting trends and opportunities.

No matter what software you decide on, make sure it has some essential elements, including dashboards and reports. This infographic, discovered via Salesforce, can walk you through the often complicated BI software decision.


Moving Your Email To The Cloud? Beware Of Unintentional Data Spoliation!

Cloud Email Migration

In today’s litigious society, preserving your company’s data is a must if you (and your legal team) want to avoid hefty fines for data spoliation.

But what about when you move to the cloud?

Of course, you’ve probably thought of this already. You’ll have a migration strategy in place and you’ll carefully select migration vendors to help you get there. All that remains is following the plan, right?


If everything goes well, your valuable data will be migrated without a hitch and will be at your fingertips in case legal or compliance issues arise.

But what if it doesn’t? What if, when you go to pull up company records, your data is corrupted or not there at all?

That’s when the headache starts. Your legal team starts knocking on your door, compliance officers look to point their fingers at one of your team, and you face a hefty fine for data spoliation.


As much as I’d like to say I’m painting a dystopian picture that has very little foothold in reality… that’s just not the case.

Just ask UBS Warburg, West and Philip Morris.

Now, I’m not an expert on all things data migration, but I do know a thing or two about the backbone of corporate communications: email.

Seemingly innocuous, email is the kind of thing many don’t tend to think much about. You send, receive, agonize over and trash thousands of emails a year. But it’s just email, right? How difficult can it be to migrate it to a new cloud service?

The truth is, it’s pretty darn difficult.

Sure, there are lots of email migration vendors out there, waving their hands in the air and jumping up and down saying: “Pick me! Pick me!” (full disclosure: I work for Fookes Software and our product Aid4Mail is right there jumping and waving with the best of them). But how many of them can you really trust?

The fact is, converting email accurately is incredibly complex and we’ve seen very few vendors actually do it well.

Here are just some of the issues we’ve seen come up with poorly converted emails, all of which could result in data spoliation sanctions (a simple spot-check sketch follows the list):

  • Entire folders of emails skipped because of a few special characters in the folder name
  • Inability to render special characters or character-based languages (like Chinese, Arabic and Hebrew)
  • Lost attachments
  • Alteration of the SMTP header, which means:
    • Loss of original sent, received and stored dates
    • Loss of email addresses
    • Loss of status information (read, unread, etc.)
  • Emails being skipped because they’re too large

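As a starting point for that kind of testing, here is a minimal spot-check sketch in Python, assuming the migrated messages can be exported as .eml files; the folder path and garble threshold are hypothetical, and a real validation pass would also compare each message against its source-mailbox original:

```python
# Minimal sketch: scan migrated .eml files for the defects listed above.
# "migrated-mailbox" and the threshold of 10 replacement characters are
# hypothetical; tune both for a real migration test.
import email
from email.utils import parsedate_to_datetime
from pathlib import Path

def check_message(path: Path) -> list[str]:
    problems = []
    msg = email.message_from_bytes(path.read_bytes())
    # A missing or unparseable Date header suggests the original
    # sent date was lost or altered during conversion.
    try:
        parsedate_to_datetime(msg["Date"])
    except (TypeError, ValueError):
        problems.append("missing or mangled Date header")
    if msg["From"] is None or msg["To"] is None:
        problems.append("missing sender or recipient address")
    for part in msg.walk():
        payload = part.get_payload(decode=True)
        if payload and part.get_content_maintype() == "text":
            text = payload.decode(part.get_content_charset() or "utf-8",
                                  errors="replace")
            # Runs of U+FFFD replacement characters are the classic sign
            # of a botched character-set conversion (the "question marks").
            if text.count("\ufffd") > 10:
                problems.append("garbled text in body part")
    return problems

for eml in Path("migrated-mailbox").rglob("*.eml"):
    for problem in check_message(eml):
        print(f"{eml}: {problem}")
```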
So to put this into context, let’s say you produce consumer electronics. You’ve just launched a flagship device after a few years of planning and development. Everyone’s thrilled!

BUT there’s an issue with the battery. It overheats and catches fire in certain circumstances.

A few of your customers’ houses burnt down, a couple of cars caught on fire and around 20 people suffered from burns.

One of the burn victims has leaked some inside information, and now they want to sue you for negligence. The plaintiff is saying you knew about the battery issue and chose not to do anything about it so as not to jeopardize the launch.

You’re not worried; you can prove that no one knew anything about it. So, you go back into your archives to pull up all the relevant communication between the project team and your Chinese battery supplier.

That’s when you see that the emails between the China-based purchasing manager and the battery supplier are nothing but question marks and blank spaces.

The sent dates show as later than the received dates, and all the emails look like they’re unread.

This, my friends, is data spoliation. Sure, it’s not your fault, but you’ll still get sanctioned.

So, what’s the moral of this cautionary tale? Test, test and test again before you choose ANY migration application to move your email to the cloud.

Taking the time now to thoroughly test the application in all scenarios before you commit will pay off in the long run.


By Katie Cullen Montgomerie

Katie is the marketing and communications manager for Fookes Software, the developers of email migration software Aid4Mail.

Problem In Customer Support – What Mobile App Developers Can Learn From AmEx

Mobile App Developers

Peanut butter and jelly, salt and pepper, marketing and… customer support?

We don’t tend to consider customer support as a complement to marketing, but when an organization succeeds in retaining customers and securing customer loyalty, it likely owes that success to the two segments of the business working together. Marketing, with all its bells and whistles, gets the glory for attracting new customers, but customer service – good customer service – is the secret sauce that keeps customers coming back, and bringing their friends and family with them.

If you’re reading this, you’re probably a marketer or involved with marketing in some respect, so it’s likely you already know about the concept of “customer segmentation” – the process of separating a customer base into groups based on specific demographics or company engagement. However, customer segmentation can also provide a big payoff through customer support. Segmentation enables customer support reps to deliver differentiated experiences to users, allowing organizations to adjust their approach and service level based on:

  • Customer lifetime value (actual or potential)
  • Recent purchase/transaction
  • App engagement and usage history
  • Customer support team size
  • Fluctuating ticket volumes

Segmenting customers by these factors enables companies to define a distinct service-level agreement per segment and optimize resource allocation accordingly. For example, a company may have a 48-hour wait time for a lower-tier customer, but a two-hour response time for a V.I.P. Additionally, being able to access customer segmentation data in real time enables businesses to route their users’ tickets appropriately. A lower-tier customer may get routed to a general hotline, while V.I.P.s get a dedicated concierge.
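As a rough sketch of how segment-driven SLAs and routing might look in code (the tiers, thresholds, and queue names are hypothetical stand-ins for a real segmentation model):

```python
# Minimal sketch: map a customer's segment to an SLA and a support queue.
# Thresholds, tier names, and queues are illustrative only.
from dataclasses import dataclass

@dataclass
class Customer:
    lifetime_value: float    # actual or potential
    days_since_purchase: int

def segment(c: Customer) -> str:
    if c.lifetime_value > 10_000 or c.days_since_purchase <= 7:
        return "vip"
    return "standard"

# Per-segment response-time SLA (hours) and ticket routing target.
SLA = {"vip": (2, "concierge-queue"), "standard": (48, "general-hotline")}

def route_ticket(c: Customer) -> tuple[int, str]:
    return SLA[segment(c)]

print(route_ticket(Customer(lifetime_value=25_000, days_since_purchase=90)))
# -> (2, 'concierge-queue'): the two-hour V.I.P. path described above
```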


Most businesses deliver customer support based on the problem, routing customers to the right team or agent to address a given issue. But businesses that have taken a tiered customer-group approach, focusing on the customer’s segment type, have seen happier customers and longer customer retention. Take American Express, for example. The financial institution aligned its help desk workflows with an eye for people, not problems. Over the last few years, the organization has created tiers of customers with a series of customer support services available to each. Its V.I.P. members expect V.I.P. service, whether they’re being notified of potentially fraudulent activity or have forgotten the password to their online account. By focusing on the customer first and the transaction second, American Express is able to deliver differentiated, higher-touch customer service to its highest-value customers. The results speak for themselves: since shifting to a customer-centric model, the organization has tripled customer satisfaction, increased cardmember spend by 10 percent, and decreased cardmember attrition by 400 percent.

When considering how to improve your customer retention and overall customer satisfaction, a natural first step is to change the model for customer support to be customer-centric rather than problem-centric.

Here are some other things to consider:

  • Who are your most valuable customers?
  • Are you treating them differently than the rest of the pack?
  • Once you secure a customer, what’s the retention plan? Do you have one?

By aligning these elements with your overall marketing and customer support strategies, you’ll be well on your way to ensuring customer retention and loyalty.

By Barry Coleman, CTO, UserCare

Barry Coleman is CTO at UserCare, an in-app customer service solution that uses Big Data to help companies grow lifetime value by blending real-time support with relationship management. Prior to UserCare, Coleman served as CTO and vice president of support and customer optimization products at ATG, which was acquired by Oracle for $1 billion. Coleman is the author of several patents and patent applications in the areas of online customer support, including cross-channel data passing, dynamic customer invitation, and customer privacy. He holds a B.A. in Artificial Intelligence from the University of Sussex.

Infographic: 12 Interesting Big Data Careers To Explore

Big Data Careers

A career in Big Data isn’t just a dream job anymore, nor is the associated terminology just another buzzword. Big Data is now operational in almost every business vertical, where it informs strategic decisions through a variety of applications and continues to create value for businesses across the board.

Everyone wants a piece of Big Data, and demand for jobs in the sector continues to outstrip supply. For those looking to carve out a career in this field, a course in Big Data and Analytics can provide a ladder to climb quickly.

This infographic by Simplilearn (below) takes you through 12 interesting career options in Big Data, opening the door for those seeking a career in this vertical.


Digital Twin And The End Of The Dreaded Product Recall

The Digital Twin 

How smart factories and connected assets in the emerging Industrial IoT era, along with the automation of machine learning and advances in artificial intelligence, can dramatically change the manufacturing process and put an end to dreaded product recalls.

In recent news, Samsung Electronics Co. initiated a global recall of 2.5 million of its Galaxy Note 7 smartphones after finding that the batteries of some phones exploded while charging. The recall is expected to cost the company close to $1 billion.

This is not a one-off incident.

Product recalls have plagued the manufacturing world for decades, from the food and drug industries to automotive, causing huge losses and risk to human life. In 1982, Johnson & Johnson recalled 31 million bottles of Tylenol, with a retail value of $100 million, after seven people died in the Chicago area. In 2000, Ford recalled 20 million Firestone tires, losing around $3 billion, after 174 people died in road accidents caused by faulty tires. In 2009, Toyota issued a recall of 10 million vehicles due to numerous issues, including gas pedal defects and faulty airbags, resulting in a $2 billion loss from repair expenses and lost sales, in addition to its stock price dropping more than 20%, or $35 billion.

Most manufacturers have very stringent quality control processes for their products before they are shipped. So how and why do these faulty products, which pose serious risks to life and to the business, still make it to market?

Koh Dong-jin, president of Samsung’s mobile business, said that the cause of the battery issue in the Samsung Galaxy Note 7 was “a tiny problem in the manufacturing process and so it was very difficult to find out”. This is true for most recalls: it is simply not possible to manually detect these seemingly “tiny” problems early enough, before they result in catastrophic outcomes.

But this won’t be the case in the future.

The manufacturing world has seen four transformative revolutions:

  • The 1st Industrial Revolution brought in mechanization powered by water and steam.
  • The 2nd Industrial Revolution saw the advent of the assembly line powered by gas and electricity.
  • The 3rd Industrial Revolution introduced robotic automation powered by computing networks.
  • The 4th Industrial Revolution has taken it to a completely different level, with smart, connected assets powered by machine learning and artificial intelligence.

It is this 4th Industrial Revolution, the one we are just embarking on, that has the potential to transform the face of the manufacturing world and create new economic value to the tune of tens of trillions of dollars globally, from cost savings and new revenue generation. But why is this the most transformative of all the revolutions? Because it is this revolution that transforms lifeless mechanical machines into digital life-forms, with the birth of the Digital Twin.


A Digital Twin is a computerized companion (or model) of a physical asset that uses multiple internet-connected sensors on the asset to represent its near real-time status, working condition, position, and other key metrics, helping us understand the health and functioning of the asset at a granular level. This lets us understand assets and asset health the way we understand humans and human health, with the ability to perform diagnosis and prognosis like never before.
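As a rough illustration of the concept, a digital twin can be thought of as a per-asset record continuously refreshed from sensor readings; this minimal Python sketch uses invented field names rather than any vendor’s schema:

```python
# Minimal sketch: a digital twin as a per-asset state record that is
# updated from sensor messages. All field names are illustrative.
from dataclasses import dataclass, field
import time

@dataclass
class DigitalTwin:
    asset_id: str
    metrics: dict = field(default_factory=dict)  # e.g. temperature, vibration
    last_update: float = 0.0

    def ingest(self, reading: dict) -> None:
        """Apply one sensor message, keeping the twin near real time."""
        self.metrics.update(reading)
        self.last_update = time.time()

    def is_stale(self, max_age_s: float = 60.0) -> bool:
        # A stale twin means the physical asset has stopped reporting.
        return time.time() - self.last_update > max_age_s

twin = DigitalTwin(asset_id="press-42")
twin.ingest({"temperature_c": 71.4, "vibration_mm_s": 2.9})
```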

How can this solve the recall problem?

Sensor-enabling the assembly line and creating a Digital Twin of every individual asset and workflow provides timely insight into the tiniest issues that can easily be missed in a manual inspection process. Causes can be detected and potential product quality issues predicted right on the assembly line, as early as possible, so that manufacturers can take proactive action to resolve them before they snowball. This not only prevents recalls but also reduces scrap on the assembly line, taking operational efficiency to unprecedented heights.

So what is the deterrent? Why hasn’t this problem been solved by most organizations that have smart-enabled their factories?

The traditional approach to data science and machine learning doesn’t scale for this problem. Traditionally, predictive models are created by taking a sample of data from a sample of assets, and these models are then generalized to predict issues across all assets. While this can detect common, known issues (which would otherwise be caught in the quality control process itself), it fails to detect the rare events that cause massive recalls. Rare events have failure patterns that don’t commonly occur in the assets or the assembly line. Highly sensitive generalized models can be built to detect any and all deviations, but they would generate many false positive alerts, which cause a different series of problems altogether.

The only way to get accurate models that detect only true issues is to model each asset and workflow channel individually, understand its respective normal operating conditions, and detect its respective deviations. This is what makes the challenge beyond human scale. When there are hundreds, thousands or millions of assets and components, it is impossible to manually generate and update models for each one of them. It requires automation of the predictive modeling and machine learning process itself, as putting human data scientists in the loop doesn’t scale.
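To illustrate the per-asset approach (as opposed to one generalized model), here is a minimal sketch using scikit-learn’s IsolationForest as a stand-in for whatever modeling technique an automated system would actually select and tune:

```python
# Minimal sketch: one anomaly model per asset, each learning that asset's
# own normal operating envelope. IsolationForest is a stand-in choice.
import numpy as np
from sklearn.ensemble import IsolationForest

def train_per_asset_models(readings_by_asset):
    """readings_by_asset: dict of asset_id -> (n_samples, n_features) array."""
    models = {}
    for asset_id, X in readings_by_asset.items():
        model = IsolationForest(contamination=0.01, random_state=0)
        model.fit(X)  # learn this asset's individual baseline
        models[asset_id] = model
    return models

def flag_deviations(models, latest_by_asset):
    # predict() returns -1 for anomalies and 1 for inliers.
    return {asset_id: models[asset_id].predict(
                np.asarray(x).reshape(1, -1))[0] == -1
            for asset_id, x in latest_by_asset.items()}
```

The point of the sketch is structural: every asset carries its own model, so the number of models (and retraining runs) grows with the fleet, which is exactly why manual data science does not scale here.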

But aren’t there standard approaches or scripts to automate predictive modeling?

Yes, there are. However, plain-vanilla automation of the modeling process, which simply runs all permutations of algorithms and hyperparameters, doesn’t work either. The number of assets (and thus the number of individual models), the frequency at which models must be updated to capture new real-world events, the volume of data and the wide variety of sensor attributes all create prohibitive computational complexity (think millions or billions of permutations), even with infinite infrastructure to process them. The only solution is Cognitive Automation: an intelligent process that mimics how a human data scientist leverages prior experience to run fewer experiments and reach an optimal ensemble of models in the fastest possible way. In short, this is about teaching machines to do machine learning and data science, like an A.I. Data Scientist.

This is the technology required to give the Digital Twin a true life-form that delivers the end business value – in this case, preventing recalls.

Does it sound like sci-fi?

It isn’t, and it is already happening thanks to advances in machine learning and artificial intelligence. Companies like Google are using algorithms to create self-driving cars and beat world champions at complex games. At the same time, we at DataRPM are using algorithms to teach machines to do data analysis and to detect asset failures and quality issues on the assembly line. This dramatically improves operational efficiency and prevents product recalls.

The future, where the dreaded product recalls will be a thing of the past, is almost here!

By Ruban Phukan, Co-Founder and Chief Product & Analytics Officer, DataRPM

Write Once, Run Anywhere: The IoT Machine Learning Shift From Proprietary Technology To Data

The IoT Machine Learning Shift

While early artificial intelligence (AI) programs were one-trick ponies, typically able to excel at only one task, today it’s about becoming a jack of all trades. Or at least, that’s the intention. The goal is to write one program that can solve multi-variant problems without needing to be rewritten when conditions change: write once, run anywhere. Digital heavyweights – notably Amazon, Google, IBM, and Microsoft – are now open sourcing their machine learning (ML) libraries in pursuit of that goal, as competitive pressures shift the focus for differentiation from proprietary technologies to proprietary data.

Machine learning is the study of algorithms that learn from examples and experience, rather than relying on hard-coded rules that do not always adapt well to real-world environments. ABI Research forecasts that ML-based IoT analytics revenues will grow from $2 billion in 2016 to more than $19 billion in 2021, with more than 90% of 2021 revenue attributed to more advanced analytics phases. Yet while ML is an intuitive and organic approach to what was once a very rudimentary and primal way of analyzing data, it is worth noting that the ML/AI model creation process itself can be very complex.


The techniques used to develop machine learning algorithms fall under two umbrellas:

  • How they learn: based on the type of input data provided to the algorithm (supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning)

  • How they work: based on the type of operation, task, or problem performed on I/O data (classification, regression, clustering, anomaly detection, and recommendation engines)

Once the basic principles are established, a classifier can be trained to automate the creation of rules for a model. The challenge lies in learning and implementing the complex algorithms required to build these ML models, which can be costly, difficult, and time-consuming.
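For instance, here is the smallest possible version of that idea in Python, using a scikit-learn decision tree on invented sensor readings; the features and labels are purely illustrative:

```python
# Minimal sketch: a classifier learns its own rules from labeled examples
# instead of relying on hand-coded ones. Data is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# features: [temperature, vibration]; label: 1 = faulty, 0 = healthy
X = [[70, 0.2], [72, 0.3], [95, 0.9], [98, 1.1]]
y = [0, 0, 1, 1]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[96, 1.0]]))  # -> [1]: the learned rule flags the reading
```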

Engaging the open-source community brings an order-of-magnitude boost to the development and integration of machine learning technologies without the need to expose proprietary data, a trend that Amazon, Google, IBM, and Microsoft swiftly pioneered.

At more than $1 trillion, these four companies have a combined market cap that dwarfs the annual gross domestic product of more than 90% of countries in the world. Each also open sourced its own deep learning library in the past 12 to 18 months: Amazon’s Deep Scalable Sparse Tensor Network Engine (DSSTNE; pronounced “destiny”), Google’s TensorFlow, IBM’s SystemML, and Microsoft’s Computational Network Toolkit (CNTK). And others are quickly following suit, including Baidu, Facebook, and OpenAI.

But this is just the beginning. To take the most advanced ML models used in IoT to the next level (artificial intelligence), modeling and neural network toolsets (e.g., syntactic parsers) must improve. Open sourcing such toolsets is again a viable option, and Google is taking the lead by open sourcing its neural network framework SyntaxNet, driving the next evolution in IoT from advanced analytics to smart, autonomous machines.

But should others continue to jump on this bandwagon and attempt to shift away from proprietary technology and toward proprietary data? Not all companies own the kind of data that Google collects through Android or Search, or that IBM picked up with its acquisition of The Weather Company’s B2B, mobile, and cloud-based web properties. Fortunately, a proprietary data strategy is not a panacea for competitive advantage in data and analytics. As more devices get connected, technology will play an increasingly important role in balancing insight generation from previously untapped datasets with the capacity to derive value from the highly variable, high-volume data that comes with these new endpoints – at cloud scale, with zero manual tuning.



Collaborative economics is an important component in the analytics product and service strategies of these four leading digital companies, all seeking to build a greater presence in IoT and, more broadly, the convergence of the digital and the physical. But “collaboration” should be placed in context. Once one company open-sourced its ML libraries, other companies were forced to release theirs as well. Millions of developers are far more powerful than a few thousand in-house employees. Open sourcing also offers these companies tremendous benefits because they can use the new tools to enhance their own operations. For example, Baidu’s Paddle ML software is being used in 30 different online and offline Baidu businesses, ranging from health to financial services.

And there are other areas beyond the analytics toolsets where these companies can invest resources. Identity management services, data exchange services and data chain of custody are three key areas that will be critical to the growth of IoT and the digital/physical convergence. Pursuing ownership of, or proprietary access to, important data has its appeal. But the new opportunities in the IoT landscape will rely on great technology, and on the scale these companies possess, for a connected world that will reach hundreds of billions of endpoints in the decades to come.

By Ryan Martin and Dan Shey

Ryan Martin, Senior Analyst at ABI Research, covers new and emerging mobile technologies, including wearable tech, connected cars, big data analytics, and the Internet of Things (IoT) / Internet of Everything (IoE). 

Ryan holds degrees in economics and political science, with an additional concentration in global studies, from the University of Vermont and an M.B.A. from the University of New Hampshire.

Is Automation The Future Of Radiology?

Future of Radiology

For those of you who don’t already know, radiology is a branch of medicine that specializes in the diagnosis and treatment of diseases, illnesses and injuries using imaging techniques. X-rays, MRIs, CT scans, ultrasounds and PET scans all fall under the umbrella of radiology. Even within this medical niche you will find doctors who are highly specialized in treating certain parts of the body. Once you go down this rabbit hole, you will be shocked to see how deep it can go.

But how close are we to having the entirety of radiology automated by all-knowing robots that can do the job as well as, if not better than, our well-trained doctors? The idea of automation in medicine is nothing new, as our exponential progress in technology has raised the valid concern that robots are the future of medicine. We are already in the process of designing nano-sized robots to solve certain medical problems. We invest millions of dollars in building the best equipment that doctors and healthcare workers can get their hands on. What stops us from taking it one step further and having robots perform these jobs without our having to lift a finger?


Take IBM, for example. Its radiology software Avicenna is already showcasing the future of automation in action. It was specifically programmed to make diagnoses and suggest treatments based on a patient’s medical images and the data within their record. Early demos show that its accuracy is on par with independent diagnoses made by trained radiologists. As the software is fed more data in the form of millions of anonymized patient records, it will gradually move beyond demo testing and become a seriously useful tool in hospitals all around the world.

Another recent case study of robot-guided radiology in action is Enlitic, a deep-learning system engineered for medical image recognition. In a test that pitted its analysis of CT scans of patients’ lungs against three expert human radiologists, “Enlitic’s system was 50% better at classifying malignant tumours and had a false-negative rate (where a cancer is missed) of zero, compared with 7% for the humans”. If this is the kind of result we are seeing from a startup, just imagine the implications when this technology is fully developed and integrated with the IT systems of healthcare facilities worldwide.

Many people are divided on the implications of automating radiology-guided diagnosis and treatment. The common argument against automation is that it will put a lot of radiologists out of work. Several decades of intense study and hard work would be thrown down the drain because a machine can do the job with greater accuracy and success. Thanks to advances in artificial intelligence and deep learning, this possibility cannot be disregarded any longer. We are already seeing several jobs in the transportation and manufacturing industries being lost to robots and well-programmed machines.


On the other hand, those in favour of automation argue that radiology robots are going to help radiologists do their jobs rather than take them away. Indeed, we still have a long way to go when it comes to rigorously testing and optimizing the ability of intelligent programs to accurately diagnose complex medical problems. One might go so far as to argue that radiology software will act as a checking system against which we can compare independent diagnoses. In the end, problems would be found far sooner and fixed far faster. It could even lead to reduced patient wait times!

We are fortunate that medical automation is still in its early developmental stages. There is still time for us to debate the pros and cons of automation in radiology. No matter the outcome, it is blatantly clear that jobs need to be at the forefront of this discussion. Either we provide hard-working radiologists with a new career path, or we find a way for automation to work alongside them instead of against them.

What are your thoughts about automation in radiology? Are you for it or against it? Leave your thoughts in the comments below!

By Tom Zakharov

Tom is a Master’s student at McGill University, currently specializing in the field of Experimental Medicine. After graduating from the University of Ottawa as a Summa Cum Laude undergraduate, he is currently investigating novel indicators of chemotherapy toxicity in stage IV lung cancer patients. Tom also has 4+ years of scientific research experience in academia, government, and the pharmaceutical industry. Tom’s first co-authored paper investigated a novel analytical chemistry method for detecting hydrazine in nuclear power plants at parts-per-billion (ppb) concentrations, which can be viewed here.
