Category Archives: Technology

Using Private Cloud Architecture For Multi-Tier Applications

Cloud Architecture

These days, multi-tier applications are the norm. From SharePoint’s front-end/back-end configuration to LAMP-based websites using multiple servers to handle different functions, a multitude of apps require public- and private-facing components to work in tandem. Placing these apps entirely on public-facing platforms and networks simplifies the process, but at the cost of security vulnerabilities. Locating everything on back-end networks creates headaches for the end users who must access the systems over VPN and other private links.

Many strategies have been implemented to address this issue in traditional datacenter infrastructures. Independent physical networks with a “DMZ” for public-facing components, along with complex router and firewall configurations, have all done the job, although they add multiple layers of complexity and require highly specialized knowledge and skill sets to implement.

Virtualization has made management much easier, but virtual administrators are still required to create and manage each aspect of the configuration – from start to finish. Using a private cloud configuration can make the process much simpler, and it helps segment control while still enabling application administrators to get their jobs done.

Multi-tenancy in the Private Cloud

Private cloud architecture allows for multi-tenancy, which in turn allows for separation of the networking, back-end and front-end tiers. Cloud administrators can define logical relationships between components and enable the app admins to manage their applications without worrying about how they will connect to each other.

One example is a web-based application using a MySQL back-end data platform. In a traditional datacenter platform, the app administrators would request connectivity that either isolates the back-end database or isolates everything and allows only minimal web traffic to cross the threshold. This requires network administrators to spend hours working with the app team to create and test firewall and other networking rules that provide the access the app team needs without opening any security holes that could be exploited.

Applying private cloud methodology changes the game dramatically.

Two individual virtual networks can be created by the cloud administrator. Within each network, traffic flows freely, entirely removing the need to manually create networking links between components in the same virtual network. In addition, a set of security groups can be established that only allows specified traffic to route between the back-end data network and the front-end web server network – specifically the ports and protocols used for the transfer of MySQL data and requests. Security groups use per-tenant access control list (ACL) rules, which allow each virtual network to independently define what traffic it will and will not accept and route.
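As a concrete illustration of what such a configuration might look like, here is a minimal sketch using the OpenStack SDK. The article does not name a specific private cloud platform, so the platform choice, the “private-cloud” profile and the resource names are assumptions made purely for illustration.

```python
# Minimal sketch, assuming an OpenStack-based private cloud (the article
# does not name a platform). Names and the "private-cloud" profile are
# illustrative placeholders.
import openstack

conn = openstack.connect(cloud="private-cloud")

# Two virtual networks: one for the web tier, one for the data tier.
front_net = conn.network.create_network(name="frontend-web")
back_net = conn.network.create_network(name="backend-data")

# One security group per tier.
web_sg = conn.network.create_security_group(name="frontend-web-sg")
db_sg = conn.network.create_security_group(name="backend-data-sg")

# Allow only MySQL (TCP 3306) into the data tier, and only when the
# traffic originates from members of the web tier's security group.
conn.network.create_security_group_rule(
    security_group_id=db_sg.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=3306,
    port_range_max=3306,
    remote_group_id=web_sg.id,
)
```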

Private cloud networking

Due to the nature of private cloud networking, it becomes much easier not only to ensure that approved data flows between the front- and back-end networks, but also to ensure that traffic flows only if it originates from the application networks themselves. This allows the free flow of required information but blocks anyone outside the network from trying to enter through those same ports.

In the front-end virtual network, all web traffic ports are opened so that users can access those web servers. The back-end network, by contrast, can be configured to reject every other protocol and port, while routing from the outside world is allowed only to the front-end servers and nowhere else. This has the dual effect of enabling the web servers to do their jobs while preventing other administrators, or anyone else in the datacenter, from gaining access, minimizing faults due to human error or malicious intent.
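Continuing the same illustrative sketch, the front-end tier’s public exposure can be limited to the standard web ports, with everything else dropped by the security group’s default-deny ingress behavior; again, the names are assumptions carried over from the previous sketch.

```python
# Continuation of the earlier illustrative sketch (assumes the
# "frontend-web-sg" group created there already exists). Open only
# HTTP/HTTPS to the world on the front-end tier; everything else is
# dropped by the default-deny ingress behavior of security groups.
import openstack

conn = openstack.connect(cloud="private-cloud")
web_sg = conn.network.find_security_group("frontend-web-sg")

for port in (80, 443):
    conn.network.create_security_group_rule(
        security_group_id=web_sg.id,
        direction="ingress",
        ethertype="IPv4",
        protocol="tcp",
        port_range_min=port,
        port_range_max=port,
        remote_ip_prefix="0.0.0.0/0",   # reachable from the outside world
    )
```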

Once application and database servers are installed and configured by the application administrators, the solution is complete. MySQL data flows between the back-end and front-end networks, but no traffic from other sources reaches the data network. Web traffic from the outside world flows into and out of the front-end network, but it cannot “leapfrog” into the back-end network because external routes are not permitted to any other server in the configuration. Because each tenant is handled separately and governed by individual security groups, app administrators from other groups cannot interfere with the web application. Nor can they introduce security vulnerabilities by opening unnecessary ports across the board simply because their own apps need them.

Streamlined Administration

Finally, the entire process becomes easier when each tenant has access to self-service, only relying on the cloud administrator for configuration of the tenancy as a whole and for the provisioning of the virtual networks. The servers, applications, security groups and other configurations can now be performed by the app administrator, and will not impact other projects, even when they reside on the same equipment. Troubleshooting can be accomplished via the cloud platform, which makes tracking down problems much easier. Of course, the cloud administrator could manage the entire platform, but they no longer have to.

Using a private cloud model allows for greater flexibility, better security, and easier management. While it is possible to accomplish this with a traditional physical and virtual configuration, adding the self-service and highly configurable tools of a private cloud is a great way to take control, and make your systems work the way you want, instead of the other way around.

By Ariel Maislos, CEO, Stratoscale

Ariel brings more than twenty years of technology innovation and entrepreneurship to Stratoscale. After a ten-year career with the IDF, where he was responsible for managing a section of the Technology R&D Department, Ariel founded Passave, now the world leader in FTTH technology. Passave was established in 2001, and acquired in 2006 by PMC-Sierra (PMCS), where Ariel served as VP of Strategy. In 2006 Ariel founded Pudding Media, an early pioneer in speech recognition technology, and Anobit, the leading provider of SSD technology acquired by Apple (AAPL) in 2012. At Apple, he served as a Senior Director in charge of Flash Storage, until he left the company to found Stratoscale. Ariel is a graduate of the prestigious IDF training program Talpiot, and holds a BSc from the Hebrew University of Jerusalem in Physics, Mathematics and Computer Science (Cum Laude) and an MBA from Tel Aviv University. He holds numerous patents in networking, signal processing, storage and flash memory technologies.

Battle of the Clouds: Multi-Instance vs. Multi-Tenant

Multi-Instance vs. Multi-Tenant

The cloud is part of everything we do. It’s always there backing up our data, pictures, and videos. To many, the cloud is considered to be a newer technology. However, cloud services actually got their start in the late 90s, when large companies used them as a way to centralize computing, storage, and networking. Back then, the architecture was built on database systems originally designed for tracking customer service requests and running financial systems. For many years, companies like Oracle, IBM, EMC and Cisco thrived in this centralized ecosystem as they scaled their hardware to accommodate customer growth.

Unfortunately, what is good for large enterprises does not typically translate to a positive experience for customers. While the cloud providers have the advantage of building and maintaining a centralized system, the customers must share the same software and infrastructure. This is known as a multi-tenant architecture, a legacy design that nearly all clouds still operate on today.


Here are three major drawbacks of the multi-tenant model for customers:

  • Commingled data – In a multi-tenant environment, the customer relies on the cloud provider to logically isolate their data from everyone else’s. Essentially, customers’ and their competitors’ data could be commingled in a single database. While you cannot see another company’s data, the data is still not physically separate and relies on software for separation and isolation. This has major implications for government, healthcare and financial regulations, not to mention a security breach that could expose your data along with everyone else’s.
  • Excessive maintenance and downtime – Multi-tenant architectures rely on large and complex databases that require hardware and software maintenance on a regular basis, resulting in availability issues for customers. While some departments, such as sales or marketing, can tolerate downtime in the off hours, applications that are used across the entire enterprise need to be operational nearly 100 percent of the time. Ideally, enterprise applications should not experience more than 26 seconds of downtime a month on average – roughly the “five nines” (99.999 percent) availability threshold. They simply cannot suffer the excessive maintenance downtime of a multi-tenant architecture.
  • All are impacted – In a multi-tenant cloud, any action that affects the shared database – an outage, an upgrade, an availability issue – affects everyone who shares that tenancy. When software or hardware issues are found on a multi-tenant database, they cause an outage for all customers, and the same goes for upgrades. The main issue arises when this model is applied to enterprise-wide business services. Entire organizations cannot tolerate this shared approach for applications that are critical to their success. Instead, they require upgrades done on their own schedule for planning purposes, and they need software and hardware issues to be isolated and resolved quickly.

With its lack of data isolation and its availability issues, multi-tenancy is a legacy cloud computing architecture that will not stand the test of time. To embrace and lead today’s technological innovations, companies need to look at an advanced cloud architecture called multi-instance. A multi-instance architecture provides each customer with their own unique database. Rather than using a large centralized database, instances are deployed on a per-customer basis, allowing the multi-instance cloud to scale horizontally as customers are added.
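To make the contrast concrete, here is a minimal, purely illustrative sketch of the two data-access patterns. The schema, file paths and routing map are hypothetical and do not describe any particular vendor’s cloud.

```python
# Illustrative sketch contrasting the two architectures' data access.
# The schema, paths and routing map below are hypothetical.
import sqlite3

# Multi-tenant: every customer's rows live in one shared database and are
# separated only logically, by a tenant_id column enforced in software.
def fetch_orders_multi_tenant(shared_db_path: str, tenant_id: str):
    conn = sqlite3.connect(shared_db_path)
    return conn.execute(
        "SELECT * FROM orders WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

# Multi-instance: each customer gets its own database (its own instance),
# so isolation is physical and maintenance or upgrades can be performed
# one customer at a time.
TENANT_DATABASES = {                       # hypothetical routing map
    "acme": "/data/instances/acme/orders.db",
    "globex": "/data/instances/globex/orders.db",
}

def fetch_orders_multi_instance(tenant_id: str):
    conn = sqlite3.connect(TENANT_DATABASES[tenant_id])
    return conn.execute("SELECT * FROM orders").fetchall()
```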

With this architecture and deployment model come many benefits, including data isolation, advanced high availability, and customer-driven upgrade schedules.

Here’s a closer look at each of these areas:

  • True data isolation – In a multi-instance architecture, each customer has its own unique database, ensuring its data is not shared with other customers. Because the architecture is not built on a large centralized database and instances are deployed on a per-customer basis, hardware and software maintenance is easier to perform and issues can be resolved on a customer-by-customer basis.
  • Advanced high availability – Ensuring high availability of data and achieving true redundancy is no longer possible through legacy disaster recovery tactics; multiple sites tested infrequently and used only in the most dire of times are simply not enough. In a multi-instance cloud, true redundancy is achieved by replicating the application logic and database for each customer instance between two paired, yet geographically separate, data centers. Each redundant data center is fully operational and active, resulting in near real-time replication of customer instances and databases. By coupling a multi-instance cloud with automation technology, customer instances can be quickly moved between the data centers, resulting in high availability of data.
  • Customer-driven upgrades – As described above, the multi-instance architecture allows cloud service providers to perform actions on individual customer instances, and this includes upgrades. A multi-instance cloud allows each instance to be upgraded on a schedule that fits compliance requirements and the needs of individual customers.

When it comes down to it, the multi-instance architecture clearly has significant advantages over the antiquated multi-tenant clouds. With its data isolation and a fully replicated environment that provides high availability and scheduled upgrades, the multi-instance architecture puts customers in control of their cloud.

By Allan Leinwand

Infographic: 9 Things To Know About Business Intelligence (BI) Software

Business Intelligence (BI) Software 

How does your company track its data? It’s a valuable resource, so much so that it’s known as Business Intelligence, or BI. But using it and integrating it into your daily processes can be significantly difficult. That’s why there’s software to help.

But when it comes to software, there are lots of options, and it’s hard to weigh all the pros and cons. First, you must understand what makes up BI software and how it works. BI software focuses on gathering all that information and enabling you to create reports for analysis.

It may not seem as though BI software is worth it, but it can do a lot for your workflow. You might find decisions easier to make, or your operations more efficient. You also might be able to build your business by figuring out both trends and opportunities.

No matter what software you decide on, make sure it has some essential elements, including dashboards and reports. This infographic, discovered via Salesforce, can walk you through the often complicated BI software decision.


Cukes and the Cloud

The Cloud, through bringing vast processing power to bear inexpensively, is enabling artificial intelligence. But, don’t think Skynet and the Terminator. Think cucumbers!

Artificial Intelligence (A.I.) conjures up images of vast, cool intellects bent on our destruction, or at best ignoring us the way we ignore ants. Reality is a lot different and much more prosaic – A.I. recommends products, movies and shows you might like on Amazon or Netflix by learning from your past preferences. Now you can do it yourself, as one farmer in Japan did: he used it to sort his cucumber harvest.


Makoto Koike, inspired by seeing Google’s AlphaGo beat the world’s best Go player, decided to try using Google’s open source TensorFlow offering to address a much less exalted, but nonetheless difficult, challenge: sorting the cucumber harvest from his parents’ farm.

Now these are not just any cucumbers. They are thorny cucumbers where straightness, vivid color and a large number of prickles command premium prices. Each farmer has his own classification and Makoto’s father had spent a lifetime perfecting his crop and customer base for his finest offerings. The challenge was to sort them quickly during the harvest so the best and freshest could be sent to buyers as rapidly as possible.

This sorting was previously a “human only” task that required much experience and training – ruling out supplementing the harvest with part-time temporary labor. The result was that Makoto’s poor mother would spend eight hours a day tediously sorting them by hand.

Makoto tied together a video inspection system and mechanical sorting machines with his DIY software based on Google’s TensorFlow, and it works! If you want a deep dive on the technology, check out the details here. Essentially, the machine is trained to recognize a set of images that represent the different classifications of quality. The challenge is that using just a standard local computer required keeping the images at a relatively low resolution. The result is 75% accuracy in the actual sorting, and even achieving that required three days of training the computer on the 7,000 images.
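For readers curious what the core of such a system looks like, here is a minimal image-classification sketch in the spirit of what the article describes, using TensorFlow’s Keras API. The class count, image size and directory layout are assumptions made for illustration; they are not details of Makoto’s actual setup.

```python
# Minimal sketch of a cucumber-grade image classifier with tf.keras.
# NUM_CLASSES, IMG_SIZE and the directory layout are assumptions for
# illustration, not details of Makoto's actual system.
import tensorflow as tf

NUM_CLASSES = 9        # assumed number of quality grades
IMG_SIZE = (80, 80)    # kept small, as the article notes a local PC forced

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cucumbers/train", image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Training a model like this on thousands of labeled images is exactly the workload the next paragraph describes offloading to a managed cloud training service.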

Expanding to a server farm (no pun intended) large enough to raise that accuracy to 95% would be cost prohibitive and only needed during harvest. But Makoto is excited because Google offers Cloud Machine Learning (Cloud ML), a low-cost cloud platform for training and prediction that dedicates hundreds of cloud servers to training a network with TensorFlow. With Cloud ML, Google handles building a large-scale cluster for distributed training, and you just pay for what you use, making it easier for developers to try out deep learning without making a significant capital investment.

If you can do this with sorting cucumbers, imagine what might be possible as cloud power continues to increase inexpensively and the tools get easier to use. The personal assistant on your phone will really become your personal assistant, not the clunky beast it is today. In your professional life it will be your right-hand minion, taking over the tedious aspects of your job. Given what Makoto achieved, perhaps you should try your hand at it. Who knows what you might come up with?

By John Pientka

(Originally published Sept 22nd, 2016. You can periodically read John’s syndicated articles here on CloudTweaks. Contact us for more information on these programs)

Ransomware’s Great Lessons


The vision is chilling. It’s another busy day. An employee arrives and logs on to the network, only to be confronted by a locked screen displaying a simple message: “Your files have been captured and encrypted. To release them, you must pay.”

Ransomware has grown recently to become one of the primary threats to companies, governments and institutions worldwide. The physical nightmare of inaccessible files pairs up with the more human nightmare of deciding whether to pay the extortionists or tough it out.

Security experts are used to seeing attacks of all types, and it comes as no surprise that ransomware attacks are becoming more frequent and more sophisticated.


(See full (ISC)2 Infographic)

Security Experts Take Note

Chris Sellards, a Certified Cloud Security Professional (CCSP) working in the southwestern U.S. as a senior security architect, points out that cyber threats change by the day and that ransomware is becoming the biggest risk of 2016. Companies might start out with adequate provisions against infiltration, but as they grow, their defenses sometimes do not grow with them. He points to the example of a corporate merger or acquisition. As two companies become one, the focus may be on the day-to-day challenges of the transition. But in the background, the data that the new company now owns may be of significantly higher value than it was before. This can set the company up as a larger potential target, possibly even disproportionate to its new size.

The irony of ransomware as a security threat is that its impact can be significantly reduced through adequate backup and storage protocols. As Michael Lyman, a Boston-area CCSP, states, when companies are diligent about disaster recovery, they can turn ransomware from a crisis into merely a nuisance. He says that organizations must pay attention to their disaster recovery plans. It’s a classic case of the ounce of prevention being worth more than the pound of cure. However, he points out that such diligence is not happening as frequently as it should.

As an independent consultant, Michael has been called into companies either to implement a plan or to help fix the problem once it has happened. He points out that with many young companies still in their first years of aggressive growth, the obligation to stop and make sure that all the strategic safeguards are in place is often pushed aside. “These companies,” he says, “tend to accept the risk and focus instead on performance.” He is usually called in only after the Board of Directors has asked management for a detailed risk assessment for the second time.

Neutralizing The Danger

Adequate disaster preparations and redundancy can neutralize the danger of having unique files held hostage. It is vital that companies practice a philosophy of “untrust,” meaning that everything on the inside must remain locked up. It is not enough to simply have a strong wall around the company and its data; it must be assumed that the bad people will find their way in somehow, which means all the data on the inside must be adequately and constantly encrypted.
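To illustrate the “encrypt everything on the inside” point, here is a minimal sketch of encrypting a file at rest with the Python cryptography library’s Fernet interface. The file names are placeholders, and a real deployment would also need key management, rotation and encrypted backups.

```python
# Minimal sketch of encrypting a file at rest with a symmetric key,
# using the "cryptography" package's Fernet interface. File paths are
# placeholders; real systems also need key management and rotation.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this in a key management system
cipher = Fernet(key)

with open("customer_records.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, with access to the key, the data can be recovered:
with open("customer_records.csv.enc", "rb") as f:
    plaintext = cipher.decrypt(f.read())
```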


It is essential to also bear in mind that ransomware damage does not exist solely inside the organization. There will also be costs and damage to the company-client relationship. At the worst is the specter of leaked confidential files – the data that clients entrusted to a company – and the recrimination and litigation that will follow. But even when a ransom event is resolved, meaning files are retained and no data is stolen, there is still the damage to a company’s reputation when the questions start to fly: “How could this have happened?” and “How do we know it won’t happen again?”

As cloud and IoT technologies continue to connect with each other, businesses and business leaders must understand that they own their risk. It is appropriate for security experts to focus on the fear factor, especially when conversing with members of the executive team, for whom the cost of adequate security often flies in the face of profitability. Eugene Grant, a CCSP based in Ontario, Canada, suggests that the best way to convey the significance of a proactive security plan is to back up your presentation with facts: facts that support a quantitative risk assessment rather than a solely qualitative one. In other words, bring it down to cost versus benefit.

No company is small enough to be invisible to the black hats, and none is immune. It is up to the security specialists to convey that message.

For more on the CCSP certification from (ISC)2, please visit their website. Sponsored by (ISC)2.

By Steve Prentice

Micro-segmentation – Protecting Advanced Threats Within The Perimeter


Changing with the times is frequently overlooked when it comes to data center security. The technology powering today’s networks has become increasingly dynamic, but most data center admins still employ archaic security measures to protect their network. These traditional security methods just don’t stand a chance against today’s sophisticated attacks.

That hasn’t stopped organizations from diving head-first into cloud-based technologies. More and more businesses are migrating workloads and application data to virtualized environments at an alarming pace. While the appetite for increased network agility drives massive changes to infrastructure, the tools and techniques used to protect the data center also need to adapt and evolve.

Recent efforts to upgrade these massive security systems are still falling short. Since data centers by design house huge amounts of sensitive data, there shouldn’t be any shortcuts when implementing security to protect all that data. Yet the focus remains on providing protection only at the perimeter to keep threats outside. Implementing strictly perimeter-centric security, such as a Content Delivery Network (CDN), leaves the inside of the data center, where the actual data resides, vulnerable.


(Infographic Source: Internap)

Cybercriminals understand this all too well. They are constantly utilizing advanced threats and techniques to breach external protections and move further inside the data center. Without strong internal security protections, hackers have visibility to all traffic and the ability to steal data or disrupt business processes before they are even detected.

Security Bottleneck

At the same time, businesses face additional challenges as traffic behavior and patterns shift. There are greater numbers of applications within the data center, and these applications are all integrated with each other. The increasing number of applications has caused the amount of east-west traffic – traffic moving laterally among applications and virtual machines – within the data center to grow drastically as well.

As more data is contained within the data center and never crosses the north-south perimeter defenses, security controls are blind to this traffic – making lateral threat movement possible. With the rising number of applications, hackers have a broader choice of targets. Compounding this challenge is the fact that traditional processes for managing security are manually intensive and very slow. Applications are now being created and evolving far more quickly than static security controls can keep pace with.

To address these challenges, a new security approach is needed, one that brings security inside the data center to protect against advanced threats: micro-segmentation.



Micro-segmentation works by grouping resources within the data center and applying specific security policies to the communication between those groups. The data center is essentially divided into smaller, protected sections (segments) with logical boundaries, which increases the ability to discover and contain intrusions. However, despite the separation, application data still needs to cross micro-segments in order to communicate with other applications, hosts or virtual machines. This means lateral movement remains possible, which is why it is vital for threat prevention to inspect traffic crossing the micro-segments.

For example, a web-based application may utilize the SQL protocol for interacting with database servers and storage devices. The application’s web services are all logically grouped together in the same micro-segment, and rules are applied to prevent these services from having direct contact with other services. However, SQL may be used across multiple applications, providing a handy exploit route for advanced malware that can be inserted into the web service for the purpose of spreading itself laterally throughout the data center.
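As a purely illustrative sketch, micro-segmentation policy can be thought of as data: groups of workloads plus an explicit list of which flows between groups are allowed (and inspected). The segment names, hosts and ports below are hypothetical and do not represent any vendor’s API.

```python
# Hypothetical model of micro-segmentation policy as data. Segment names,
# ports, and the checking logic are illustrative only, not a vendor API.
from typing import Optional

SEGMENTS = {
    "web-tier": {"web-01", "web-02"},
    "db-tier": {"mysql-01"},
}

# Allowed inter-segment flows: (source segment, destination segment,
# protocol, port). Anything not listed is denied; listed flows would still
# be steered through threat-prevention inspection before delivery.
ALLOWED_FLOWS = {
    ("web-tier", "db-tier", "tcp", 3306),
}

def segment_of(host: str) -> Optional[str]:
    return next((seg for seg, hosts in SEGMENTS.items() if host in hosts), None)

def flow_allowed(src: str, dst: str, proto: str, port: int) -> bool:
    src_seg, dst_seg = segment_of(src), segment_of(dst)
    if src_seg is None or dst_seg is None:
        return False                 # unknown workloads are denied
    if src_seg == dst_seg:
        return True                  # intra-segment traffic flows freely
    return (src_seg, dst_seg, proto, port) in ALLOWED_FLOWS

# A web server querying MySQL is allowed; the reverse path is not.
assert flow_allowed("web-01", "mysql-01", "tcp", 3306)
assert not flow_allowed("mysql-01", "web-01", "tcp", 22)
```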

Micro-segmentation with advanced threat prevention is emerging as the new way to improve data center security. It provides the ability to insert threat prevention security – Firewall, Intrusion Prevention System (IPS), AntiVirus, Anti-Bot, Sandboxing technology and more – to inspect traffic moving into and out of any micro-segment and prevent the lateral spread of threats. However, this presents security challenges due to the dynamic nature of virtual networks, namely the ability to rapidly adapt the infrastructure to accommodate bursts and lulls in traffic patterns or the rapid provisioning of new applications.

To give data center security the agility to cope with rapid changes, security in a software-defined data center needs to learn the role, scale, and location of each application. This allows the correct security policies to be enforced, eliminating the need for manual processes. What’s more, dynamic changes to the infrastructure are automatically recognized and absorbed into security policies, keeping security tuned to the actual environment in real time.

What’s more, by sharing context between security and the software-defined infrastructure, the network then becomes better able to adapt to and mitigate any risks. As an example, if an infected VM is identified by an advanced threat prevention security solution protecting a micro-segment, the VM can automatically be re-classified as being infected. Re-classifying the VM can then trigger a predefined remediation workflow to quarantine and clean the infected VM.

Once the threat has been eliminated, the infrastructure can then re-classify the VM back to its “cleaned” status and remove the quarantine, allowing the VM to return to service. Firewall rules can be automatically adjusted and the entire event logged – including what remediation steps were taken and when the issue was resolved – without having to invoke manual intervention or losing visibility and control.
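The tag-and-quarantine loop described above can be pictured as a small event-driven workflow. The sketch below is purely illustrative: the event handlers, tags and remediation hooks are assumptions, not the API of any particular security gateway or orchestrator.

```python
# Illustrative tag-and-quarantine workflow. Event handlers, tags and the
# remediation hooks are hypothetical, not a real security product's API.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    tags: set = field(default_factory=set)

def on_threat_detected(vm: VirtualMachine) -> None:
    """Called by the (hypothetical) gateway when a VM is seen attacking."""
    vm.tags.add("infected")
    quarantine(vm)            # isolate: only remediation traffic allowed

def on_remediation_complete(vm: VirtualMachine) -> None:
    vm.tags.discard("infected")
    vm.tags.add("cleaned")
    release(vm)               # restore normal security policy and service

def quarantine(vm: VirtualMachine) -> None:
    print(f"{vm.name}: moved to quarantine segment, event logged")

def release(vm: VirtualMachine) -> None:
    print(f"{vm.name}: returned to service, firewall rules re-applied")

web01 = VirtualMachine("web-01")
on_threat_detected(web01)        # gateway spots a lateral-movement attempt
on_remediation_complete(web01)   # workflow finishes cleaning the VM
```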

Strong perimeter security is still an important element of an effective defense-in-depth strategy, but perimeter security alone offers minimal protection for virtualized assets within the data center. It is difficult to protect data and assets that aren’t known or seen. With micro-segmentation, advanced security and threat prevention services can be deployed wherever they are needed in the virtualized data center environment.

By Yoav Shay Daniely

Part 1 – Connected Vehicles: Paving The Way For IoT On Wheels

Connected Vehicles

From cars to combines, the IoT market potential of connected vehicles is so expansive that it will even eclipse that of the mobile phone. Connected personal vehicles will be the final link in a fully connected IoT ecosystem. This is an incredibly important moment to capitalize on given how much time people spend in cars. When mobility services and autonomous cars begin to take hold, it will become even more critical as people will be behind a screen instead of behind the wheel.

Industrial equipment and mobility solutions may prove to be even larger and more lucrative than the consumer side. McKinsey & Company says that by 2025, IoT will have an economic impact of up to $11 trillion a year, and that 70 percent of the revenue IoT creates will be generated from B2B businesses. In the connected vehicle world, industrial applications will likely follow this trend.


(Infographic source: Spireon)

Technologies like GPS, telematics services, on-board computers, specialized sensors, internet connectivity, and cloud-based data stream management are all fairly mature. The challenge businesses are facing now is how all this technology can be interconnected and used to monetize IoT services in connected vehicles.

To get started, here is a framework for understanding the connected vehicle space—from consumer to industrial offerings. I’ve outlined challenges and opportunities associated with the various business models and lessons learned from other markets to offer guidelines, best practices, and guardrails to maximize the chances for commercial success.

What are the offerings?

The number of individual connected car services that are available or coming to the marketplace is large, but we can place them in four basic categories:

  • Transportation as a Service – Any alternative to traditional sale or lease of a vehicle that requires some amount of connectivity in order to work. Examples: peer-to-peer car sharing, multi-entity (group) leasing, and fleet subscriptions.
  • Post-Sale/Lease Secondary Services – Services offered to vehicle owners/lessees after initial vehicle acquisition. Examples: entertainment delivery, driver experience personalization, roadside assistance, mapping and geo-fencing, human-assisted services like on-demand concierge parking, and intelligent preventive maintenance subscriptions.
  • Road Use Measurement Services – Services directly based upon telematics-sourced data streams. Examples: usage-based insurance, road-use-based taxation, and commercial fleet tracking and management (see the sketch after this list).
  • Secondary Data Stream Monetization – The analysis of individual driving habits and patterns. Examples: personalized discounted insurance promotions, or providing data to third parties.
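As a toy illustration of the road-use measurement category, the sketch below turns a stream of telematics records into a simple usage-based insurance pricing factor. The record fields, weights and thresholds are invented for illustration; real programs rely on far richer actuarial models.

```python
# Toy usage-based insurance sketch. The record fields, weights, and
# thresholds are invented for illustration, not an actual rating model.
from dataclasses import dataclass

@dataclass
class TelematicsRecord:
    miles: float            # distance driven in the period
    hard_brakes: int        # count of hard-braking events
    night_miles: float      # miles driven late at night
    speeding_seconds: int   # time spent above the posted limit

def usage_based_premium_factor(records: list) -> float:
    """Return a multiplier applied to a base premium (1.0 = no adjustment)."""
    miles = sum(r.miles for r in records) or 1.0
    brakes_per_100mi = 100 * sum(r.hard_brakes for r in records) / miles
    night_share = sum(r.night_miles for r in records) / miles
    speeding_share = sum(r.speeding_seconds for r in records) / (miles * 60)

    factor = 1.0
    factor += 0.02 * brakes_per_100mi         # harsher braking raises the factor
    factor += 0.10 * night_share              # more night driving raises it
    factor += 0.50 * min(speeding_share, 0.2)
    factor -= 0.05 if miles < 500 else 0      # low-mileage discount
    return round(max(factor, 0.7), 2)         # cap the total discount

week = [TelematicsRecord(miles=120, hard_brakes=2, night_miles=10, speeding_seconds=90)]
print(usage_based_premium_factor(week))
```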

What should you do next?

Although these use cases appear to be vastly different from one another, there are some common themes and guidance which can be gleaned from them.

Here are next steps for any connected car industry player:

  • Get your Data House in Order – The connected car depends on data streams produced by sensors and devices, but knowing everything about a person’s driving can be dangerous in malicious hands, so stringently secured systems and policies must be put into place. Some data is vehicle-specific and other data is individual-specific, and different data will need to go to different places, so companies need to deploy sophisticated data management systems that can handle an ever-shifting ‘many-to-many-to-many’ landscape of identity management. Savvy organizations will put systems in place at the outset in order to ‘future-proof’ themselves and lay the groundwork for market agility and consumer safety.
  • Embrace Your Data – If you have access to data, even when it has no apparent direct influence on how you choose to charge for your offering, retain it anyway. Insights are always available via analysis of the consumption patterns of users, but only if you keep the data in an organized, centralized, accessible place. Knowing how users consume a service allows you to stay nimble in a market that is guaranteed to constantly shift and change.
  • The Money Isn’t in the ‘Thing’ – In the IoT world in general, many businesses realize too late that the true monetization opportunity lies NOT with the physical ‘thing’ itself, but rather with the virtual perpetual service that it unlocks. Companies like Jawbone, GoPro, and FitBit all learned this the hard way. Recurring, perpetual services provide an ongoing linkage to a customer that cannot be achieved via any one-and-done sales model, and build an annuity for the enterprise that keeps on giving and doesn’t require constant reinvestment in costly customer acquisition.
  • Learn to ‘Count the Beans’ Differently – Connected vehicle services, like any IoT-based services, lend themselves readily to recurring revenue business models, which are measured by fundamentally different KPIs than the one-time-sale model. Profit margins in recurring revenue models are often not fully realized at the point of sale, but through a ‘long tail’ of annuity-style profit once margin has been reached. The lessons learned by OEMs years ago when many also became direct lenders are the best analog for recurring revenue success available to them. OEMs should consider housing their connected car strategies within their lending arms for this reason.
  • Take Engineers Out of the Equation Where You Can – Agility and speed to market are of paramount importance in the connected car world. Organizations that show a willingness and ability to get to market quickly, even if imperfectly, and to aggressively pursue multiple simultaneous monetization strategies knowing that not all will succeed—these will be the organizations that ‘win’ in the connected car space.
  • Prepare Back-Office Systems – Unfortunately, especially for mature enterprises like OEMs that have developed dependencies on legacy back-office systems, existing infrastructure was not built to rapidly support new offering models like recurring revenue. More modern (usually cloud-based) back-office systems offer not only dramatically shorter implementation timelines than legacy or home-grown systems, but also put the power of innovation and change in the hands of business users rather than engineers.

The Clearest True North—Connected Vehicles

The obsession with the cult of cars is understandable, but the truth is that there are many segments of “IoT on Wheels” that are moving quickly towards full-on connectedness.

Like many subsets of the IoT (e.g. wearables, connected home), consumer applications for connected personal vehicles get all the hype, but the business applications are really demonstrating the most traction.

Whether B2B or B2C, there is no magic formula for exactly how to make money in the connected vehicle space. However, we are sure to see fascinating changes in the transportation landscape in the coming years. There is hardly anything as captivating, and potentially profitable, as the emergence of IoT on wheels.

Stay tuned for Part 2 of the series next month…

By Tom Dibble

Security: Avoiding A Hatton Garden-Style Data Center Heist

Data Center Protection

In April 2015, one of the world’s biggest jewelry heists occurred at the Hatton Garden Safe Deposit Company in London. Posing as workmen, the criminals entered the building through a lift shaft and cut through a 50cm-thick concrete wall with an industrial power drill. Once inside, the criminals had free and unlimited access to the company’s secure vault for over 48 hours during the Easter weekend, breaking into one safety deposit box after another to steal an estimated $100m worth of jewelry.

So why weren’t the criminals caught? How did they have free rein over all of the safety deposit boxes? It turns out that the security systems only monitored the perimeter, not the inside of the vault. Although the burglars initially triggered an alarm to which the police responded, no physical signs of burglary were found outside the company’s vault, so the perpetrators were able to continue their robbery uninterrupted. In other words, the theft was made possible by simply breaching the vault’s perimeter – once the gang was inside, they could move around undetected and undisturbed.


(Image Source: Wikipedia)

Most businesses do not store gold, diamonds or jewelry. Instead, their most precious asset is data, and it is stored not in reinforced vaults but in data centers. Yet in many cases, both vaults and data centers are secured against breaches in similar ways: organizations focus on reinforcing the perimeter and pay far less attention to internal security.

If attackers are able to breach the external protection, they can often move inside the data center from one application to the next, stealing data and disrupting business processes for some time before they are detected – just like the criminal gang inside the Hatton Garden vault were able to move freely and undetected. In some recent data center breaches, the hackers had access to applications and data for months, due to lack of visibility and internal security measures.

Security Challenges in Virtualized Environments

This situation is made worse as enterprises move from physical data center networks to virtualized networks – to accelerate configuring and deploying applications, reduce hardware costs and reduce management time. In this new data center environment, all of the infrastructure elements – networking, storage, compute and security – are virtualized and delivered as a service. This fundamental change means that the traditional approach of securing the network’s perimeter is no longer sufficient to address the dynamic virtualized environment.

The main security challenges are:

Traffic behavior shifts – Historically, the majority of traffic was ‘north-south’ traffic, which crosses the data center perimeter and is managed by traditional perimeter security controls. Now, intra-data center ‘east-west’ traffic has drastically increased, as the number of applications has multiplied and those applications need to interconnect and share data in order to function. With the number of applications growing, hackers have a wider choice of targets: they can focus on a single low-priority application and then use it to start moving laterally inside the data center, undetected. Perimeter security is no longer enough.

Manual configuration and policy changes – In these newly dynamic data centers, traditional, manual processes for managing security are too slow, taking too much of the IT team’s time – which means security can be a bottleneck, slowing the delivery of new applications. Manual processes are also prone to human errors which can introduce vulnerabilities. Therefore, automating security management is essential to enable automated application provisioning and to fully support data center agility.

Until recently, delivering advanced threat prevention and security technologies within the data center would involve managing a large number of separate VLANs and keeping complicated network diagrams and configuration constantly up-to-date using manual processes. In short, an unrealistically difficult and expensive management task for most organizations.

Micro-segmentation: armed guards inside the vault

But what if we could place the equivalent of a security guard on every safety deposit box in the vault so that even if an attacker breaches the perimeter, there is protection for every valuable asset inside? As data centers become increasingly software-defined with all functions managed virtually, this can be accomplished by using micro-segmentation in the software-defined data center (SDDC).

Micro-segmentation works by coloring and grouping resources within the data center and applying specific, dynamic security policies to the communication between those groups. Traffic within the data center is then directed to virtual security gateways, where it is deeply inspected at the content level using advanced threat prevention techniques to stop attackers attempting to move laterally from one application to another using exploits and reconnaissance techniques.

Whenever a virtual machine or server is detected executing an attack using the above techniques, it can be tagged as infected and immediately quarantined automatically by the ‘security guard’ in the data center: the security gateway. This way, a system breach does not compromise the entire infrastructure.

As applications are added and evolve over time, it is imperative for the security policy to apply instantly and adapt automatically to the dynamic changes. Through integration with cloud management and orchestration tools, security in the software-defined data center learns about the role of each application, how it scales and where it is located. As a result, the right policy is enforced, enabling applications inside the data center to communicate securely with each other. For example, when servers are added or an IP address changes, the object is already provisioned and inherits the relevant security policies, removing the need for a manual process.
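One way to picture that automatic inheritance is as a lookup from an application’s role to its security policy, applied whenever a workload is provisioned. The roles, ports and functions below are hypothetical; they only illustrate the idea of policy following the workload rather than its IP address.

```python
# Hypothetical illustration of policy inheritance at provisioning time:
# the security policy follows the workload's role, not its IP address.
ROLE_POLICIES = {
    "web": {"ingress": [("tcp", 443)], "egress": [("tcp", 3306)]},
    "db":  {"ingress": [("tcp", 3306)], "egress": []},
}

def provision_server(name: str, role: str, ip_address: str) -> dict:
    """Register a new server; it inherits the policy defined for its role."""
    policy = ROLE_POLICIES[role]
    print(f"{name} ({ip_address}) joins role '{role}' with policy {policy}")
    return {"name": name, "role": role, "ip": ip_address, "policy": policy}

# Scaling out the web tier, or changing an IP, needs no manual rule edits:
provision_server("web-03", "web", "10.0.1.23")
provision_server("web-04", "web", "10.0.1.24")
```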

Just as virtualization has driven the development of scalable, flexible, easily managed data centers, it is also driving the next generation of data center security. Using SDDC micro-segmentation delivered via an integrated, virtualized security platform, advanced security and threat prevention services can be dynamically deployed wherever they are needed in the software-defined data center environment. This puts armed security guards inside the organization’s vault, protecting each safety deposit box and the valuable assets it holds – helping to stop data centers from falling victim to a Hatton Garden-style breach.

By Yoav Shay Daniely
