Imagine one company that has just purchased out-of-the-box HR software that integrates with the other apps it already uses, and another company that built equivalent software from the ground up. Which one do you presume is taking the greater risk? Would it surprise you to learn that, from a quality assurance perspective, the buyer of ready-made software may face greater risk than the company that built its own?
People in IT assume that if software comes out of a box, it works correctly, no matter how it is configured. A custom code change, on the other hand, is usually subjected to some form of quality assurance, however small the change. The truth is, information technology runs itself in circles around terminology while the practical facts get ignored. From a quality assurance perspective, there is no meaningful difference between custom and configured software.
Some out-of-the-box software is customized, meaning the code itself is altered to make the software fit the user's requirements. Far more often, though, out-of-the-box software is simply configured: every customer installs the same package and then spends a thousand hours deciding on settings and preferences. Those customers assume they don't need to test the software for quality or functionality because all they are doing is changing a setting or taking advantage of an existing integration. But just because the software comes with the ability to connect to Google Drive, or to be accessed from mobile, doesn't mean that capability is working the way you think it does. Change is change. Whether it's a code change or a configuration change, a change to software means quality assurance testing is needed. Functional testing, performance testing, and testing under a high volume of users should be conducted on either kind of product.
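To make the point concrete, even a single configuration change deserves an automated check, just as a code change would get one. The sketch below is a minimal, hypothetical example (the `hr_config` dictionary, the integration names, and the `VERIFIED_INTEGRATIONS` list are all invented for illustration): it flags any integration that a configuration enables but that the deployment has never actually verified end to end, rather than trusting the vendor's feature list.

```python
# Minimal sketch: treat a configuration change like a code change by
# checking it automatically. All names here are hypothetical examples.

# Integrations this particular deployment has smoke-tested end to end.
VERIFIED_INTEGRATIONS = {"google_drive", "mobile_access"}

def validate_config(config):
    """Return a list of problems found in a configuration dict.

    An empty list means the configuration passed this basic check;
    it does NOT mean the software needs no further QA.
    """
    problems = []
    for integration, enabled in config.get("integrations", {}).items():
        if enabled and integration not in VERIFIED_INTEGRATIONS:
            problems.append(
                f"integration '{integration}' is enabled but has never "
                "been verified in this environment"
            )
    return problems

# A configuration that quietly enables an unverified integration.
hr_config = {
    "integrations": {
        "google_drive": True,
        "payroll_export": True,  # never tested here, so it gets flagged
    }
}

for issue in validate_config(hr_config):
    print(issue)
```

A check like this takes minutes to write and runs on every configuration change, which is exactly the discipline that code changes already receive.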
Suppose Apple offered to sell you a MacBook at a 90% discount, on the sole condition that you call them every week and report all the errors you experienced over the last seven days. Would you accept the machine confident it was going to work well? Framed in terms of hardware, this sounds ludicrous, yet it's where many SaaS subscribers end up on the software side. These subscribers trust implicitly that the software will work, because they haven't done anything besides change some configurations. They assume the provider did the quality assurance, and perhaps the provider did, but an operating system update or a change to a third-party program can still create a new issue. These are the kinds of things that go unnoticed until they cause a bug or are exploited by a criminal. Failing to conduct quality assurance on configured software ultimately shifts the burden onto the end user, in the form of system failures, security risks, and lost time and money.
A configuration change can have as much impact as custom development. In fact, much of the work we do at iLAB is fixing configurations, because people don't take quality assurance and user acceptance testing of configured software seriously until it's too late. The time and expense of preliminary quality assurance is far less than the cost of never fully examining the unique implications of your software configurations and any changes to them. To achieve quality, the first step is to test for it. Call on iLAB and get a quality leader on your side.