How engineering quality and customer trust are interlinked
The quality of your engineering team’s work directly impacts customer satisfaction and loyalty, as high-quality products build trust. So what are the important aspects to track when holding yourself accountable to your users?
We approach evaluating quality with a two-step process:
Define quality for our specific application: Software quality is a web of different characteristics related to a product’s design and execution, so pinpointing and prioritizing certain aspects is crucial for bringing clarity to a team during development.
Measure quality thoroughly and consistently: There are countless ways to measure the various aspects of software quality, but the software’s primary goal – providing users with a tool that fulfills its core value proposition – should be kept in mind when deciding how to measure quality.
Functional and Non-Functional Quality
There are two main buckets from which we can begin to define quality – functional and non-functional:
Functional Requirements are about having a product that will provide users value when it works as intended: if everything is working as it should, what value does the software provide? What are its features and functions?
Non-functional Requirements relate to how the software works: how often does everything work as it should, and what kinds of things cause it to fail?
Example: If you’re building a SaaS application, you need to make sure that your app can facilitate the tasks your customers need to complete. Functional characteristics cover how well your app’s functions provide value when they are working correctly. Non-functional requirements cover how well those functions get to shine through – do bugs or a confusing interface prevent users from getting the most out of them?
The International Organization for Standardization’s ISO/IEC 25010 software quality model gives a hierarchy of eight characteristics of software quality. The first, Functional Suitability, is the only one of these characteristics that describes a functional requirement.
Functional Characteristics
Functional Suitability describes how well a product’s functions match users’ needs. Does it accurately do all the things a user needs it to do? This can be further broken down into three parts:
Functional completeness - the degree to which the set of functions covers all the specified tasks and user objectives.
Functional correctness - the degree to which a product or system provides the correct results with the needed degree of precision.
Functional appropriateness - the degree to which the functions facilitate the accomplishment of specific tasks and objectives.
Non-Functional Characteristics
Seven characteristics correspond to non-functional requirements. These describe what kinds of things can limit a software product’s ability to do what it is intended to do for its users.
Performance Efficiency - the “hardware resources needed to perform the different functions of the software system,” whether they are processing speed, storage capacity, or data communication capability.
Compatibility - does the software play well with others? Does its performance suffer or does it cause the performance of other software to suffer when they are in the same environment and using the same resources?
Usability - how easy to use and intuitive is the product for its users? Do its parts do what people assume they do? How accessible is the product for all its users? Does it respond well to common user errors?
Reliability - the “risk of software failure and the stability of a program when exposed to unexpected conditions”.
Security - how safe is data and information in the hands of the software product? Does information only get put in front of the right eyes, and does the product ensure that things stay that way?
Maintainability - how easily can a product be updated, and how well does it hold up as it is updated over time? How often do new updates result in bugs or other problems?
Portability - how easily can the product be transferred to different usage environments? Does it work using different hardware, operating systems, etc?
Non-Functional Requirements Allow Functional Requirements to Matter
Functional requirements depend on the unique role of a product, but different software products can suffer from similar non-functional problems related to performance issues, poor usability, or compatibility problems. Thus, third-party tools (like our own PlayerZero) almost exclusively focus on the non-functional side. A third-party tool can’t tell you what your product should do to be helpful to your users, but it can help you identify, fix, or avoid problems with how well your product works as intended.
Non-functional requirements support and enable functional requirements. You can have a great idea for a useful software product (successfully passing functional requirements), but if your software is full of bugs, responds slowly, and only works on Windows 10, it might as well be designed to infuriate your users – it fails non-functional requirements. Non-functional requirements allow functional requirements to matter.
What makes up functional suitability?
Functional Suitability relies on non-functional requirements, and the non-functional requirements influence each other in a complicated web of ways. Performance Efficiency might enable Compatibility, which enables Portability. A product that isn’t Reliable might not be Secure simply because its bugs negate attempts to be careful with data, and it might not be Maintainable because its spaghetti code breaks when it is updated. The important thing to keep in mind is that you have to get the fundamentals right before other aspects of your product can become important.
Measuring Software Product Quality
To ensure that your product is moving in the right direction, you have to be able to consistently measure the above characteristics of software quality with accuracy.
So how do you measure them? The ISO/IEC 25010 standard gives suggestions for how to measure each characteristic, but keep in mind that each measurement method has tradeoffs, and a good way of measuring a characteristic for one project might not work for another – what makes a developer tool usable might be completely different from what makes an eCommerce app usable, for instance. Let’s take a closer look.
Functional Suitability
As the most abstract characteristic, Functional Suitability is particularly resistant to simple, one-size-fits-all measurements, since you need to figure out whether your idea is valuable and distinctive enough to matter to your market. You’ll likely have to rely on qualitative feedback on how well your product helps users achieve their goals and get results that are precise, accurate, and useful. Even so, we recommend a structured, multi-step approach to measuring functional suitability (laid out below).
Functional suitability is an important aspect of software product quality, as it captures the product's ability to perform its intended functions completely and correctly. If your product is an eCommerce app, its functional suitability is the degree to which it covers all the different use cases your customers might need. Does it let users pay how they want? How well-designed is its search engine? Measuring functional suitability involves testing your product’s key functions under a variety of conditions and scenarios. Here are some steps to follow:
Identify the key functions of the product - before testing for functional suitability, it’s important to identify the key functions of your product. These are the functions that the product is designed to perform, and they should be clearly defined and documented in your product roadmap & manifesto.
Develop test cases - once the key functions of your product have been established, the next step is to develop test cases that simulate different scenarios and conditions. These test cases should cover all of the key functions that you’ve listed out for your product and be designed to exercise them under different conditions (a minimal sketch of such a test case follows these steps).
Execute test cases - once you’ve developed thorough test cases on paper, they should be executed to measure the functional suitability of your product. During testing, any errors or failures should be documented, and the severity of each issue should be assessed based on the impact the defect or failure would have on an end user - with extra emphasis placed on errors that would fully stop a user from getting value out of your product.
Record metrics - metrics should be recorded during testing to measure the functional suitability of the product. These metrics can include the number of defects found, the severity of each defect, and the time it takes to fix each defect.
Analyze results - after testing is complete, the results should be analyzed to identify any patterns or trends. This analysis can help to identify areas of the product that are particularly prone to failure, guide future development efforts, and give you vital context on which critical flows in your product need the most monitoring (learn more about how we automate the flow monitoring process).
Address issues - any issues identified during testing should be addressed as quickly as possible. This may involve having your developers make changes to your code, improving & standardizing testing procedures, or developing new features to improve product stability.
By following these steps, you can measure the functional suitability of your software product objectively and ensure that you’re consistently meeting user needs and expectations. Measuring functional suitability is an ongoing process that requires continuous testing (it certainly doesn’t end once your product is out in the wild), analysis, and improvement to ensure that the product remains stable and reliable over time.
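To make the “develop and execute test cases” steps concrete, here’s a minimal sketch of automated tests for one key function, written with pytest. The checkout function, its parameters, and the expected totals are hypothetical stand-ins for your own product’s key functions – the point is that each test maps back to functional completeness, correctness, or appropriateness.

```python
# A minimal, hypothetical sketch of functional tests for a key eCommerce flow.
# `checkout`, its arguments, and the expected values are placeholders; the
# structure - one test per aspect of functional suitability - is what matters.
import pytest


def checkout(cart_items: dict[str, float], payment_method: str) -> dict:
    """Toy stand-in for a real checkout function."""
    if payment_method not in {"card", "paypal", "apple_pay"}:
        raise ValueError(f"unsupported payment method: {payment_method}")
    return {"status": "confirmed", "total": round(sum(cart_items.values()), 2)}


# Functional correctness: the right result with the needed precision.
def test_checkout_total_is_correct():
    result = checkout({"mug": 12.50, "poster": 7.25}, payment_method="card")
    assert result["status"] == "confirmed"
    assert result["total"] == 19.75


# Functional completeness: every payment method the spec promises is covered.
@pytest.mark.parametrize("method", ["card", "paypal", "apple_pay"])
def test_checkout_supports_all_specified_payment_methods(method):
    assert checkout({"mug": 12.50}, payment_method=method)["status"] == "confirmed"


# Functional appropriateness: unsupported input fails loudly, not silently.
def test_checkout_rejects_unknown_payment_method():
    with pytest.raises(ValueError):
        checkout({"mug": 12.50}, payment_method="carrier_pigeon")
```

Recording which of these tests fail, and how badly each failure would hurt an end user, produces exactly the defect counts and severity metrics described above.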
Performance Efficiency
Performance Efficiency can be measured by load testing, stress testing, or measuring response times to estimate how well a product will perform under realistic conditions. Essentially, you need to know whether your product will hold up when it is under a lot of use – for example, can your eCommerce app handle holiday traffic without slowing down or breaking?
Load & Stress Testing
To carry out a load/stress test, you should first identify your key objectives and set up a test environment to simulate high loads and typical user behavior. Then, you should define the test scenarios, configure your load/stress testing tool (examples include Apache JMeter and LoadRunner for load testing), and execute the test while collecting performance data. Finally, you should analyze the results and address any performance issues identified during testing to improve your product's ability to handle high loads and maintain efficient performance under stress.
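If you don’t yet have a dedicated load-testing tool configured, even a small script can give you a first read on response times under concurrent load. This sketch uses Python’s standard library plus the widely used requests package; the target URL, request count, and concurrency level are assumptions to replace with your own scenario, and it should only ever be pointed at a test environment.

```python
# A minimal load-test sketch: fire concurrent GET requests at a test
# environment and summarize response times. TARGET_URL, TOTAL_REQUESTS, and
# CONCURRENCY are assumptions - swap in your own test scenario.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://staging.example.com/health"  # hypothetical test endpoint
TOTAL_REQUESTS = 200
CONCURRENCY = 20


def timed_request(_: int) -> float:
    """Return the elapsed time of one request, in seconds."""
    start = time.perf_counter()
    response = requests.get(TARGET_URL, timeout=10)
    response.raise_for_status()
    return time.perf_counter() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(TOTAL_REQUESTS)))

    print(f"requests: {len(latencies)}")
    print(f"median:   {statistics.median(latencies):.3f}s")
    print(f"p95:      {latencies[int(len(latencies) * 0.95)]:.3f}s")
    print(f"max:      {latencies[-1]:.3f}s")
```

Dedicated tools like JMeter or LoadRunner layer ramp-up profiles, richer scenarios, and reporting on top of this same basic idea.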
To prevent permanent damage to your product experience under heavy loads, it's essential to identify the breaking point through stress testing and find solutions to avoid such conditions. Consider the impact of a shopping website going down during a Christmas sale - the potential loss could be significant.
Compatibility
Measuring compatibility first requires thinking of what other products your product could be sharing an environment and resources with. You can then measure how efficiently your product can perform its functions while sharing an environment and resources with other products, how much it detrimentally impacts the performance of these other products, and how well products can share information with each other and utilize the information they receive from each other.
Alternatively, compatibility testing can refer to the process of testing your product's ability to function correctly and efficiently across multiple platforms, devices, and configurations. To perform compatibility testing, you should first identify the target platforms and configurations that your users will be most likely to use, then execute test cases that cover all the relevant features and functionalities. Finally, you should analyze the results and address any compatibility issues identified during testing to ensure that your product works seamlessly across all intended environments.
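One common way to put this into practice is to run the same checks across a matrix of target environments. The sketch below shows the shape of that approach with pytest parametrization; the environment list and the run_smoke_test helper are hypothetical placeholders for however you actually drive each platform (Selenium, Playwright, a device farm, and so on).

```python
# A sketch of matrix-style compatibility testing with pytest: the same checks
# run once per target environment. `run_smoke_test` is a hypothetical helper
# standing in for whatever drives a browser or device in your stack.
import pytest

# Assumed target matrix - derive yours from what your users actually run.
TARGET_ENVIRONMENTS = [
    {"os": "Windows 11", "browser": "chrome"},
    {"os": "macOS 14", "browser": "safari"},
    {"os": "Android 14", "browser": "chrome"},
    {"os": "iOS 17", "browser": "safari"},
]


def run_smoke_test(os_name: str, browser: str) -> dict:
    """Placeholder: launch the app in the given environment and exercise key flows."""
    # In a real suite this would call Selenium/Playwright or a device-farm API.
    return {"login": True, "search": True, "checkout": True}


@pytest.mark.parametrize(
    "env", TARGET_ENVIRONMENTS, ids=lambda e: f"{e['os']}-{e['browser']}"
)
def test_key_flows_work_in_environment(env):
    results = run_smoke_test(env["os"], env["browser"])
    failing = [flow for flow, passed in results.items() if not passed]
    assert not failing, f"flows failing on {env['os']}/{env['browser']}: {failing}"
```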
Usability
Your product is usable if it’s usable for your particular users. Measures like completion rate and satisfaction level can help you determine how usable your product is, but on their own they don’t distinguish between problems with usability itself and problems with other characteristics like Performance Efficiency or Reliability that might affect usability.
Focus groups can give a more detailed picture of how users understand and try to navigate your software, and tools like FullStory can let you look into how users navigate an application. You can learn whether users are having problems finding your checkout button or using your search options. This will help you more easily prioritize the features and portions of your product that need the most love in upcoming product development cycles and set an effective cadence for your engineering team.
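Completion rate in particular is straightforward to compute once you have flow-level events from an analytics or session-replay tool. The sketch below assumes a simple export of (user, event) pairs; the event names and the checkout flow are made up for illustration.

```python
# A sketch of computing completion rate for one key flow from raw events.
# The event shape and names are hypothetical - adapt them to whatever your
# analytics or session-replay tool exports.
from collections import defaultdict

FLOW_START = "checkout_started"
FLOW_END = "order_confirmed"

events = [  # toy export: (user_id, event_name)
    ("u1", "checkout_started"), ("u1", "order_confirmed"),
    ("u2", "checkout_started"),
    ("u3", "checkout_started"), ("u3", "order_confirmed"),
]


def completion_rate(events: list[tuple[str, str]]) -> float:
    """Share of users who started the flow and also finished it."""
    seen = defaultdict(set)
    for user_id, name in events:
        seen[user_id].add(name)
    started = {u for u, names in seen.items() if FLOW_START in names}
    completed = {u for u in started if FLOW_END in seen[u]}
    return len(completed) / len(started) if started else 0.0


print(f"checkout completion rate: {completion_rate(events):.0%}")  # 67%
```

A low completion rate tells you where to look; session replays and focus groups tell you why.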
Monitoring usability over key flows is one of our specialties here at PlayerZero. To learn more about how we automate the flow monitoring process, click here.
Reliability
It’s crucial to know how reliable your product is for your users. Load testing shows how it will hold up in realistic high-stress scenarios, and the number of high-priority bugs that this testing surfaces measures how often it breaks in ways that directly harm the user experience. If you’re finding a high-priority bug every few days, your product likely isn’t reliable.
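If you log when high-priority bugs are found, the average gap between them is a rough reliability signal you can track over releases. The dates below are invented for illustration; in practice you’d export them from your issue tracker.

```python
# A rough reliability signal: mean days between high-priority bugs.
# The dates are invented - in practice, pull them from your issue tracker.
from datetime import date

high_priority_bug_dates = sorted([
    date(2024, 3, 1), date(2024, 3, 4), date(2024, 3, 6),
    date(2024, 3, 11), date(2024, 3, 13),
])

gaps = [
    (later - earlier).days
    for earlier, later in zip(high_priority_bug_dates, high_priority_bug_dates[1:])
]
print(f"mean days between high-priority bugs: {sum(gaps) / len(gaps):.1f}")  # 3.0
# A gap of only a few days suggests the product isn't reliable enough yet.
```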
Security
The Security of your product is about how well it protects information and data, so measurements of security involve measuring the risks of your product failing to keep information and data safe in any one of a variety of ways. These measurements can include the proportion of data inputs that are validated, how often security defects are critical, the proportion of security defects that are discovered by your own team instead of your users, or the time your product’s security defects take to fix.
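Two of those measurements – the share of security defects your own team finds before users do, and how long they take to fix – are easy to compute from defect records. The field names and values below are hypothetical placeholders for whatever your tracker exports.

```python
# A sketch of two security metrics from defect records: internal discovery
# rate and mean time to fix. Field names and values are hypothetical.
from datetime import date

security_defects = [
    {"found_by": "internal", "opened": date(2024, 1, 3), "fixed": date(2024, 1, 5)},
    {"found_by": "internal", "opened": date(2024, 2, 10), "fixed": date(2024, 2, 11)},
    {"found_by": "user", "opened": date(2024, 3, 1), "fixed": date(2024, 3, 8)},
]

internal_rate = sum(
    1 for d in security_defects if d["found_by"] == "internal"
) / len(security_defects)
mean_days_to_fix = sum(
    (d["fixed"] - d["opened"]).days for d in security_defects
) / len(security_defects)

print(f"found internally: {internal_rate:.0%}")     # 67%
print(f"mean days to fix: {mean_days_to_fix:.1f}")  # 3.3
```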
Maintainability
One cheap proxy for maintainability is counting lines of code – all else being equal, a smaller codebase is easier to maintain. Lines of code require little extra work to measure, but they don’t necessarily capture the quality and consistency of the code.
You can also measure maintainability using changes in maintenance cost, frequency of new bugs per feature touch, knowledge transfer time (the time team members take to understand new features well enough to fix issues), and the increase in unplanned work when new features are added. Consider whether these measures are worth the extra effort; the value you get from a measurement should exceed the cost of taking the measurement, and, crucially, the way you measure maintainability shouldn’t incentivize poor practices.
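As a concrete example of the cheap-proxy end of that spectrum, a few lines of Python can count lines of code per top-level package so you can watch the trend over time. The source root and the *.py glob are assumptions to adjust for your own repository and language.

```python
# A quick lines-of-code count per top-level package - a cheap, imperfect
# maintainability proxy that is mainly useful as a trend over time.
# SRC_ROOT and the "*.py" glob are assumptions; adjust for your repo.
from collections import Counter
from pathlib import Path

SRC_ROOT = Path("src")  # hypothetical source root

loc_by_package = Counter()
for path in SRC_ROOT.rglob("*.py"):
    package = path.relative_to(SRC_ROOT).parts[0]
    with path.open(encoding="utf-8", errors="ignore") as f:
        loc_by_package[package] += sum(1 for line in f if line.strip())

for package, loc in loc_by_package.most_common():
    print(f"{package:20} {loc:>8} lines")
```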
Portability
The way in which your product needs to be portable will depend entirely on how your users need to be able to access it. Your product needs to work and work well on whatever devices and setups your users are using to access it. Can it be installed where it needs to be installed? How much time does porting it to a new device take?
Do your app’s features work as intended on whatever devices your target customers use?
Software Quality Metrics in Perspective
Characteristics of software quality are valuable to the extent that they facilitate good user experiences and help users achieve their goals. How you measure characteristics of software quality should capture how they provide value to your users. For instance, how does Portability affect your users? You should measure it – or save resources by not measuring it – with its effect on your particular users as your guiding principle.
Keep in mind what can undermine your product’s value, as flaws in one characteristic of software quality can easily bleed into other aspects, and succeeding at the fundamentals can enable your good idea to shine through.
Are you tired of dealing with software defects and frustrated users? PlayerZero can help! Our hybrid monitoring/analytics approach to surfacing the most costly incidents in your product can identify and address issues before they impact your users, improving the quality and reliability of your software product and making your day-to-day a lot less stressful.
PlayerZero is AI for defect resolution and support engineering.