Most companies today boost competitiveness and relevance by staying as nimble as possible. Amazon pushes new code countless times a day, as do Google, Facebook, Uber, and many others. Forward-looking software development leaders understand that to deliver innovation to customers they must effectively manage entire SaaS application lifecycles across a diverse range of infrastructures, a process that begins with identifying and eliminating bugs as early as possible so that teams can focus on adding end-user value.
Testing is a crucial part of an application’s lifecycle, but it’s inherently challenging to ensure that tests done in development will mirror what happens in production. A recent survey from ClusterHQ found that 60% of developer team members spend up to half their day debugging errors instead of developing new features, showing that debugging is a huge resource drain for DevOps teams.
Why are bugs in production so commonplace?
A deeper look at the challenges around application testing showed that recreating production environments was cited as the leading cause of bugs appearing in production. This challenge was followed closely by interdependence on external systems, which makes integration testing cumbersome and leads into the third most cited challenge: testing against unrealistic data. At present, data is difficult to move between all the places it is needed, including test infrastructure. As a result, unrealistic mock data sets are often used to test applications. However, these unrealistic data sets cannot prepare applications for all real-world variables, and thus cause serious, expensive, and time-consuming issues down the line.
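To make the mock-data point concrete, here is a minimal, hypothetical Python sketch (the function and data sets are invented for illustration, not drawn from the survey): a naive parser passes every test against a tidy mock data set, then fails on the messier values that real production records tend to contain.

```python
# Hypothetical example: a name parser tested only against sanitized mock data.

def split_full_name(full_name):
    """Naively split a full name into (first, last)."""
    first, last = full_name.split(" ")
    return first, last

# Mock test data: clean, two-part names. The test suite passes.
mock_names = ["Ada Lovelace", "Alan Turing"]
for name in mock_names:
    assert len(split_full_name(name)) == 2

# Production-like data: middle names, single names, stray whitespace.
production_names = ["Ada King Lovelace", "Prince", "  Alan Turing "]
failures = []
for name in production_names:
    try:
        split_full_name(name)
    except ValueError:
        failures.append(name)

# Every realistic record breaks the parser that mock data "validated".
print(failures)
```

The point is not this particular parser but the pattern: a suite built on sanitized data gives a green light that production data immediately revokes.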
The infographic below outlines additional key findings.
By Glenn Blake