Background: Why V&V is Critical in Medical Device Development
Technology is moving at lightning speed, and the medical device industry is evolving right along with it. Years ago, software in a medical device may have been the exception rather than the rule, but in today's fast-paced digital world, software is front and center. The reliability and functionality of that software are paramount to ensuring medical devices work as intended so that they are safe and effective for patients. This is where software verification and validation (V&V) testing comes into play.
Verification and validation (V&V) are two essential components of the software testing life cycle that work together to help support and demonstrate the quality of your software product. While these two terms sound very similar, they serve distinct purposes in the development process. Software verification focuses on checking that the software correctly implements the specified requirements… it ensures that you're building the product right. Software validation, on the other hand, helps ensure that the final product meets the user's needs and expectations, essentially confirming that you've built the right product.
If you are not sure where to start, see some tips and tricks below to develop your V&V plan and protocol!
Prepare for the Verification and Validation Plan and Protocol
Verification and Validation are foundational elements within any design control process, and manufacturers should review their Quality Management System (QMS) to understand where V&V activities and deliverables should be incorporated for their device.
One of the most critical activities that needs to be completed before manufacturers can get started on a V&V plan is the development of requirements for the product. To develop a robust V&V Plan and Protocol, manufacturers need to have both the user needs and the product requirements well defined. Development of these should begin early in the design control process, and they will likely be updated throughout development.
Risk management activities should also be ongoing for the product. Risk management should be incorporated into the testing strategy to enable a risk-based testing approach and help ensure appropriate coverage of high-risk areas.
So if these activities haven’t begun, now is the time to get started so the V&V strategy can follow!
FOUNDATIONAL ELEMENTS: Requirements
Requirements are a foundational element of any V&V testing strategy, and it's about more than just “having” requirements. Well-written requirements help pave the way for efficient and effective V&V testing.
User Needs generally come first. These are essentially the goals, tasks, and problems that end users want to address or solve with a product. They are generally user-centric and high-level, focusing on desired outcomes. User needs are often abstract in that they don't define the implementation, and they generally provide context for how the product will be used. They are the foundation of what ultimately needs to be tested in Validation testing.
Product requirements flow from the product's user needs. Essentially, they translate user needs into actionable specifications that developers can implement. Unlike user needs, product requirements are detailed and specific and define the functionalities and constraints for the product. Product requirements are also technical and measurable, and a well-written product specification may even include acceptance criteria, or at least have sufficient detail that acceptance criteria are easily developed for Verification testing. As product requirements are developed, each user need should be traced to one or more product requirements.
Starting to trace user needs to product requirements early in the design process will be critical in helping ensure nothing gets missed in the design process and this tracing is foundational to the V&V strategy.
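As a minimal sketch of what that tracing makes possible, the check below flags user needs that no product requirement traces back to. The IDs and the dictionary structure are illustrative assumptions, not a prescribed format; adapt them to however your QMS captures requirements.

```python
# Illustrative user-need-to-product-requirement trace check.
# IDs below are hypothetical examples, not a required numbering scheme.

user_needs = ["UN-001", "UN-002", "UN-003"]

# Each product requirement records which user need(s) it traces back to.
product_requirements = {
    "PR-101": ["UN-001"],
    "PR-102": ["UN-001", "UN-002"],
}

def untraced_user_needs(user_needs, product_requirements):
    """Return user needs with no product requirement tracing to them."""
    traced = {un for uns in product_requirements.values() for un in uns}
    return [un for un in user_needs if un not in traced]

# UN-003 has no product requirement yet, so it gets flagged for follow-up.
print(untraced_user_needs(user_needs, product_requirements))
```

Running a check like this whenever requirements change helps catch gaps before they surface late in V&V execution.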
V&V PLAN DESIGN: Define the types of testing to be performed
With the foundations of the product requirements in mind, different types of testing that are appropriate for your product can be evaluated. It is likely that to ensure the quality and functionality of your product, multiple types of testing will be required.
Here are some different types of tests and guidelines on when to use them:
- Unit tests
- Purpose: To verify that each unit of the software performs as expected in isolation.
- When: Throughout development, after individual components or modules are coded.
- Integration tests
- Purpose: Verify components work together seamlessly including that data is passed correctly and no unexpected behavior arises.
- When: After unit testing, once individual units are combined.
- Regression tests
- Purpose: To ensure that new changes haven't introduced new bugs or broken existing functionality (and can be automated to run frequently!).
- When: Throughout the software lifecycle, after any code modifications, bug fixes, or new feature additions.
- System tests
- Purpose: Ensure the system meets its intended purpose and behaves as specified.
- When: Throughout development as the system starts to come together and before deployment/release.
- Functional tests
- Purpose: Verify that the software functions according to the specified requirements.
- When: Throughout development; can be incorporated into unit, integration, and system testing.
- Error handling tests
- Purpose: Ensure software manages and responds to errors, exceptions, or unexpected situations in a stable, secure, and (ideally) user-friendly manner.
- When: Throughout development; can be incorporated into unit, integration, and system testing.
- Operating system validation
- Purpose: Verify the software runs correctly and interacts seamlessly with the target operating system(s).
- When: During the deployment phase, before release.
- Off-the-shelf software validation
- Purpose: To validate that the off-the-shelf software meets the specific needs and integrates properly with the custom software.
- When: Before or when integrating third-party or commercial off-the-shelf (COTS) software.
- Performance testing
- Purpose: Identify performance bottlenecks, measure response times under various loads, and ensure the system meets performance requirements such as system responsiveness, speed, and scalability under load.
- When: Throughout development, following system testing, and before deployment.
- Interoperability tests
- Purpose: Verify if products can exchange data and use each other's functionality without compatibility issues.
- When: Generally after system testing, often in parallel with or as part of acceptance testing before deployment/release.
- Security testing
- Purpose: Identify and mitigate security weaknesses to protect the confidentiality, integrity, and availability of the system and data.
- When: Throughout development and before deployment/release.
- User acceptance testing
- Purpose: Ensure the software is usable, meets user needs, and fulfills the intended user experience.
- When: Performed by representative users before deployment/release.
- End-to-end testing
- Purpose: Verify the integrated components function as a whole, data flows correctly across all components, and the user experience is smooth.
- When: Throughout development and before deployment/release.
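To make the first item on the list concrete, here is what a unit test can look like in practice. The function, its name, and the flow-rate limits are hypothetical assumptions invented for illustration; the pattern of exercising one unit in isolation with both valid and invalid inputs is the point.

```python
# Illustrative unit test for a hypothetical input-validation function in a
# medical device application. The 0.1-1000 mL/hr range is an assumption.

def is_valid_flow_rate(ml_per_hr):
    """Accept flow rates only within the device's safe operating range."""
    return 0.1 <= ml_per_hr <= 1000.0

# Unit tests exercise the function in isolation, covering both valid
# (positive) and invalid (negative) inputs.
def test_flow_rate_within_range():
    assert is_valid_flow_rate(50.0)

def test_flow_rate_out_of_range():
    assert not is_valid_flow_rate(0.0)     # below the safe range
    assert not is_valid_flow_rate(1500.0)  # above the safe range

if __name__ == "__main__":
    test_flow_rate_within_range()
    test_flow_rate_out_of_range()
    print("all unit tests passed")
```

Tests written in this style can run automatically after every code change, which is exactly what makes them useful for the regression testing described above.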
V&V PLAN DESIGN: Define the plan
With all the requirements in hand, some of the risk management activities completed, and various tests in mind, it's time to start mapping out the V&V strategy.
- Define verification tests: Starting with the list of product requirements, outline the type of testing that will be performed for each requirement. Each product requirement will have one or more tests, and a single requirement may have multiple test types performed to demonstrate that the product meets it. For example, one product specification may be tested with both unit and integration testing (as well as regression testing when changes are made). While outlining the different tests across product requirements, it's important to review the risk management activities for places where testing may have been identified as a risk control measure.
- Define validation tests: Once the verification tests are defined, it's time to start evaluating the types of tests required for validation. This is where the trace of user needs to product requirements comes in handy. As the list of User Needs is reviewed to outline the appropriate method to demonstrate the product meets each user need, there may be some verification tests that can also be used to demonstrate the user need is met (along with the product requirement). Again, it's important to review related risk management documents to ensure that validation tests are in place anywhere validation may have been identified as a risk control measure.
Based upon the various tests that will be required to verify and validate the device, an approach for protocols or documentation can be created based on the product's quality system and the complexity of the product. For example, an overall V&V plan may be developed for the product with additional individual protocols for verification and validation. And verification testing may be broken into multiple protocols… one that captures security testing, another for integration tests, and a third for off-the-shelf software validation. There is no one-size-fits-all approach, and manufacturers should pull together cross-functional representatives from Quality, Regulatory, Clinical, R&D, and Marketing to align on the best approach for the product and organization.
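The two steps above amount to a coverage check: every product requirement should map to at least one test type, and any requirement flagged as a risk control must not be left untested. A minimal sketch of that check follows; the IDs, test-type names, and risk-control flags are illustrative assumptions.

```python
# Sketch of a verification-coverage check. Requirement IDs, test types,
# and the risk-control set are hypothetical examples.

requirement_tests = {
    "PR-101": ["unit", "integration"],
    "PR-102": ["unit", "integration", "regression"],
    "PR-103": [],  # planned, but no test type assigned yet
}

# Requirements identified as risk control measures during risk management.
risk_control_requirements = {"PR-102", "PR-103"}

# Any requirement with no tests is a coverage gap...
uncovered = sorted(r for r, tests in requirement_tests.items() if not tests)

# ...and an untested risk control is the highest-priority gap of all.
uncovered_risk_controls = sorted(
    r for r in risk_control_requirements if not requirement_tests.get(r)
)

print("uncovered:", uncovered)
print("uncovered risk controls:", uncovered_risk_controls)
```

Reviewing this kind of mapping with the cross-functional team makes it easy to spot gaps before the protocols are finalized.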
V&V PLAN DESIGN: Define the test cases
Independent of whether the tests will all be captured in one protocol or ten, test cases need to be created for each of the tests identified to be performed. Test cases should be clear, concise, and reproducible. They should be easy to understand and execute. Essentially test cases should be written simply and with enough detail that someone brand new coming in to execute testing would obtain the same results as those who designed the test case.
In order to create reproducible test cases, include details such as the following:
- Preconditions: Document the conditions that need to be met before the test steps can start (e.g., the user is logged in to the system, or a record exists in the database).
- Test steps: The individual instructions that guide a tester through the testing process for a particular functionality or scenario. Each test step details a specific action required to be performed in order to complete the test. Test steps should be clear, concise, listed in the order required to be executed, and enable reproducibility of the test.
- Requirements Tested: To help with traceability, each test case should be tied to one or more requirements.
- Expected results and acceptance criteria: With the testing approach defined, the expected results and clear acceptance criteria need to be documented.
As test cases are designed to evaluate requirements, both positive and negative test cases should be created. Positive test cases are those that test valid inputs and the system is expected to behave correctly. Negative test cases are those that test invalid or unexpected inputs to ensure the system handles errors gracefully.
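One way to keep test cases consistent across a protocol is to capture them as structured records containing the fields listed above. The sketch below is one possible shape, not a required format; the field names and the sample case are assumptions for illustration.

```python
# Illustrative structured test-case record covering the fields discussed
# above: preconditions, steps, traced requirements, and expected results.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    preconditions: list       # conditions met before the steps begin
    steps: list               # ordered, concise tester instructions
    requirements_tested: list # requirement IDs, for traceability
    expected_result: str      # ties to the acceptance criteria
    negative: bool = False    # True when exercising invalid input

# A hypothetical negative test case: invalid input, graceful handling.
tc = TestCase(
    case_id="TC-042",
    preconditions=["User is logged in to the system"],
    steps=["Enter a flow rate of 1500 mL/hr", "Press Start"],
    requirements_tested=["PR-102"],
    expected_result="Device rejects the value and displays an error message",
    negative=True,
)
print(tc.case_id, "negative case:", tc.negative)
```

Keeping cases in a uniform structure like this also makes it straightforward to generate a traceability matrix from the `requirements_tested` field later.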
Document the V&V Plan and Protocol
Hopefully documentation has been happening all along the way, but before the V&V Plan and Protocol can be finalized, the following should be formally documented and released within your QMS:
- User Needs
- Product Requirements
- Preliminary Risk analysis documents
Then it's time to bring the information from those documents, along with the testing strategy outlined above, together into a formal V&V Plan and Protocol. This should be a standalone, controlled document in alignment with Quality System regulations.
Once the V&V Plan and Protocol is documented, and depending on the device type and available precedent, FDA encourages manufacturers to leverage the Q-Submission process to obtain FDA feedback prior to submitting a premarket submission.
V&V EXECUTION: What to know before you start testing
Once the team gets the protocol signed off, the hard work of execution begins! Before jumping into testing, it's important to ensure all testers are aligned on expectations for documenting test case results. While a binary result of “pass” or “fail” is important, that information alone is insufficient.
A well-documented result will include details on what was actually observed. The observation can be quantitative or qualitative, and it's helpful if the detailed observations tie back to the expected results for each step. Where quantitative results are expected, the documented observations should include the numerical values observed. And where something must be calculated, such as error, the equation used to calculate it should also be provided.
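A quick sketch of what that looks like for a quantitative result: record the observed value, the equation used, the computed error, and the pass/fail determination together. The values and the 5% acceptance criterion here are hypothetical assumptions.

```python
# Illustrative documentation of one quantitative test result. All numbers
# are hypothetical; the acceptance limit would come from the protocol.

expected = 100.0   # expected reading from the test case
observed = 103.0   # value actually observed during execution

# Record the equation alongside the result for reproducibility:
#   percent_error = |observed - expected| / expected * 100
percent_error = abs(observed - expected) / expected * 100

acceptance_limit = 5.0  # acceptance criterion from the protocol (assumed)
result = "pass" if percent_error <= acceptance_limit else "fail"

print(f"observed={observed}, percent_error={percent_error:.1f}%, result={result}")
```

With the observed value, equation, and criterion all captured, a reviewer who was not present at execution can independently reproduce the calculation and confirm the determination.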
Results should also be supported by screenshots/recordings, exports, and other attachments that help objectively demonstrate the documented results of the testing.
Ultimately, the detailed observations documented as the “actual results,” along with the attachments, should allow someone else (who was not involved!) to follow the rationale for the final determination of “pass” or “fail”.
Save Time with Our V&V Plan, Protocol, and Report Templates
Creating a V&V plan from scratch can feel overwhelming, but it doesn't have to be. We've designed comprehensive V&V plan, protocol, and report templates (in Word format) to help you get started. The templates include:
- Step-by-step instructions.
- Alignment with industry best practices.
- Guidance aligned with FDA and EU MDR requirements.
- Conformity to IEC 62304.
Ready to streamline your V&V process? Purchase the Software V&V Templates here or Contact Cosm for additional support.
By following these steps and leveraging our resources, you’ll save time and ensure your V&V efforts align with FDA and industry standards. Let us help you build safer, more effective medical devices.
Disclaimer - https://www.cosmhq.com/disclaimer