4 Most Common Defects Found During Performance Testing

There may not be many certainties in life, but in the world of business it’s always true that in order to succeed you must be able to scale. As a business grows and attracts more clients, customers, or users, the company must be able to handle the added attention and patronage. For a brick-and-mortar business, this seems like an easy fix – simply hire more people to work in the store. But when technology is at play, whether as a product or an application solution, the answer comes in the form of performance testing. A top-tier team can spot issues in software by increasing the load and usage to put the code under more stress. While every application is different, there are still some common errors to check for when engaging in a performance test. Here are four of the most common defects found during the testing process.

Misconfigured Network Load Balancers

As more and more individuals use a piece of software, those uses register as server requests. So, as common sense would suggest, the next step is to add additional servers to handle this increase in traffic. Much like a busy highway or city street, where hundreds or even thousands of drivers are trying to get through at once, the best way to keep everyone in their lane and turning at the right exit is to put someone there to direct the masses. When software faces this kind of challenge, a common solution is a load balancer, which acts like a traffic cop for inbound server requests. However, under the increased pressure of a performance test, a common defect emerges: the load balancer sends all the traffic to one or two servers while the rest sit idle. This can overload the server or servers actually in use.
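The defect is easy to picture in miniature. The sketch below, with hypothetical server names and a simulated request stream, contrasts a healthy round-robin balancer with a misconfigured "sticky" one that routes every request to the same server:

```python
from collections import Counter
from itertools import cycle

SERVERS = ["app-1", "app-2", "app-3", "app-4"]  # hypothetical server pool

def misconfigured_balancer(requests):
    """A broken 'sticky' balancer: every request resolves to the same
    server, so the other three sit idle while app-1 is overloaded."""
    return Counter("app-1" for _ in requests)

def round_robin_balancer(requests):
    """A healthy round-robin balancer spreads requests evenly."""
    pool = cycle(SERVERS)
    return Counter(next(pool) for _ in requests)

requests = range(10_000)  # simulated inbound server requests
print(misconfigured_balancer(requests))  # all 10,000 land on app-1
print(round_robin_balancer(requests))    # 2,500 per server
```

Under a light load both configurations appear to work, which is exactly why the problem only surfaces when a performance test pushes the request count high enough to saturate the one overworked server.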

Poorly Designed Database Queries

Each time someone uses a website or database, the application submits a query for a specific task. Oftentimes we find that the site or system was set up haphazardly, meaning that each request returns responses pulled from a large data set. Initially, this might not be a huge issue, especially if the user base is fairly small or only performs simple tasks. But as more features are added and the number of visitors increases, so does the data set. For example, someone might click a “Find Nearby ATM” button on a fledgling banking company’s site. As more and more physical locations are added, the site has to sort through huge amounts of data to provide results. This can cause long waits or timeouts during the query process, a defect found primarily during a performance test.

Memory Leaks

These defects can be difficult to spot, even under testing, as they can only be found after a system is left alone and allowed to run for hours or even days under increased and sustained user loads. As a piece of software runs, certain tasks or threads become irrelevant and the bits of memory they occupied become vacant. Normally this available memory should be released so that a new process can utilize it. But in a leaking system, memory is claimed and never correctly released, so available system memory steadily degrades over time under increased load.
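A minimal sketch of the pattern, using a hypothetical per-request session cache: every request claims memory, but no code path ever releases it, so held memory grows in lockstep with the sustained load:

```python
import tracemalloc

class SessionCache:
    """Hypothetical cache that stores per-request state but never evicts it
    -- the classic leak pattern that soak tests uncover."""

    def __init__(self):
        self._entries = {}

    def handle_request(self, request_id):
        # Memory is claimed for every request...
        self._entries[request_id] = bytearray(1024)
        # ...but nothing ever deletes the entry, so it is never released.

tracemalloc.start()
cache = SessionCache()
for i in range(50_000):          # a sustained run of simulated requests
    cache.handle_request(i)

current, peak = tracemalloc.get_traced_memory()
print(f"~{current / 1_000_000:.0f} MB still held after the load run")
```

At a few requests the leak is invisible, which is why it slips past functional tests; only a long soak run under sustained load makes the steady climb in held memory obvious.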

Real World Scenarios

Software design shouldn’t occur in a techie bubble. These solutions and products need to be designed for real world situations that test their capabilities, particularly in high-pressure situations. Unfortunately, performance tests will often reveal defects that are inherent to the application’s design, primarily because it was constructed without much thought for these types of extenuating circumstances.

  • The system is unable to perform well under short bursts of heavy load. For example, let’s suppose a company runs an ad on TV, which receives positive feedback and perhaps even goes viral. In the hours or days following, a large number of new users access the website and it immediately crashes.
  • There’s no plan in place for data growth. A system may perform very well in the first 3 months, leading to a sense of complacency among the IT team. However, as the backend databases continue to grow with more data from customer orders, uploaded content, or new memberships, things really start to slow down.
  • The software team designed and tested “locally” and did not think “globally” for performance. This is when things work really well for the developer, where the software runs on servers in their own office or in a data center across town. But when customers start popping up on the opposite coast, or logging into the site from Canada or Japan, they may see a painfully slow loading website.
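The first scenario, a short burst of heavy load, can be sketched with a toy queueing model (the numbers are invented): as long as arrivals stay below the system's capacity the queue drains each second, but once a burst exceeds capacity, a backlog builds and every new request waits longer than the one before:

```python
def burst_wait_times(arrivals_per_sec, capacity_per_sec, seconds):
    """Toy model: requests that exceed capacity pile up in a backlog,
    and a new arrival must wait for the whole backlog to drain."""
    backlog = 0
    waits = []
    for _ in range(seconds):
        backlog += arrivals_per_sec
        served = min(backlog, capacity_per_sec)
        backlog -= served
        waits.append(backlog / capacity_per_sec)  # seconds a new request waits
    return waits

print(burst_wait_times(100, 120, 5))  # normal load: waits stay at zero
print(burst_wait_times(500, 120, 5))  # viral-ad burst: waits climb every second
```

The point of the model is that the failure is nonlinear: the system looks perfectly healthy right up to capacity, then degrades compounding-ly past it, which is why burst scenarios have to be tested deliberately rather than inferred from normal-load results.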

A company’s growth may be caused by a multitude of factors; users may be flocking to the product or service due to marketing, design, or just plain old hustle by the team. But a business’s success – sustained success – comes from an ability to adequately manage and handle that growth. Performance tests provide a safe space to identify the scaling defects that could hold it back. When it comes to finding these issues, there’s no one better at the job than iLAB. To find out what performance testing could do for your business, contact us today.