
Test Method Development

Simon Föger

Updated: Dec 11, 2024

Test methods don't fall from the sky – we have to develop them.


We often talk about test method validation, but we rarely address test method development – how do you create a suitable test method for your medical devices? In this article, we explore the key considerations when developing a test method.




We all use many test methods throughout the development and manufacturing process, from incoming inspection to outgoing/final inspection. We use standard or compendial methods defined in ASTM, ISO, or similar standards when possible. Even if we use standard methods like ASTM F88/F88M, we must validate them as required per ISO 11607-1, Section 4.4.3.


I talk a lot about test method validation, but I rarely talk about test method development.


In the case of a test method already described in a standard, the test method development part is somewhat obsolete – well, not entirely.


While the test method no longer needs to be developed, it must be implemented fully and correctly, as defined in the relevant standard(s) (e.g., ASTM F88/F88M, EN 868-5, etc.).


I recommend objective evidence (aka a document) that shows that the test method was fully and correctly implemented. We call ours a "Work Instruction Assessment Form", which verifies whether the standard was correctly and fully implemented. Get your free copy of a "Work Instruction Assessment Form" template in our MedTech Free Resources section (you just need to register with your e-mail address to get free access).


Why is it helpful to have a "Work Instruction Assessment Form"?


For example, the standard test method for seal strength, ASTM F88/F88M, defines that the first and last 10% shall be cut off and the average of the remaining middle 80% shall be reported. This is only a tiny section of the standard's requirements, but one can make plenty of mistakes here: the first and/or last 10% are not cut off, the peak values rather than the average of the remaining 80% are reported, and many more. Imagine you dissected the requirements of ASTM F88/F88M, requirement by requirement, and showed evidence of how you implemented each one; would you agree that it becomes far less likely that you deviate from the standard?
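As a toy illustration of that reporting rule (a sketch only, not a substitute for reading the standard), the middle-80% average could be computed like this, where `forces` is a hypothetical list of force readings sampled evenly along the seal travel:

```python
def f88_average(forces):
    """Illustrative sketch of the ASTM F88/F88M reporting rule:
    discard the first and last 10% of the readings, then report
    the average of the remaining middle 80%."""
    n = len(forces)
    cut = n // 10                 # number of points in each 10% tail
    middle = forces[cut:n - cut]  # remaining middle 80%
    return sum(middle) / len(middle)

# Ten evenly spaced readings; the first and last are dropped.
print(f88_average([5, 1, 2, 3, 4, 5, 6, 7, 8, 9]))  # -> 4.5
```

Note how reporting `max(middle)` instead of the average – one of the mistakes mentioned above – would give a very different number for the same trace.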


What if there is no standard test method available?


Well, that is not the end of the world – but we must give it more thought. Standards like the ASTM D8282 (Standard Practice for Laboratory Test Method Validation and Method Development) provide a path through this process.


  1. Define a test method development team


Having multiple perspectives on the test method is essential, as there are different stakeholders throughout the life and use of a test method. We want to make sure we have people on the team who understand the requirements from a product, manufacturing, quality, regulatory, and other perspectives as applicable.


  2. Purpose and Scope of the Test Method


21 CFR 820.72(a) states very clearly: “Each manufacturer shall ensure that all inspection, measuring, and test equipment, including mechanical, automated, or electronic inspection and test equipment, is suitable for its intended purposes and is capable of producing valid results”. The definition of the purpose is essential, as this is what we are going to validate against. The purpose is very often either “to measure the true value” in the case of variable (measured) tests or “to find defective parts” in the case of attribute (pass/fail) tests. The scope definition might include a brief overview of the materials or phenomenon being tested.


  3. Apparatus and Materials


Think about what you want to test. What are the specifications for the test method (e.g., measure the true dimensions of an injection-molded part made from polycarbonate with a nominal value of 10.0 mm and a tolerance of ±0.1 mm)? What is the required resolution of the measuring equipment? What is the equipment’s tolerance? What are the calibration requirements? What are the properties of the material under test? How do you hold the parts in place during testing? How do you prepare the specimen before testing? And so on.


  4. Test Specimen Preparation and/or Calibration


Sometimes, specimens need to be prepared in a certain way to be able to execute the test. Again, a well-known example of such a preparation is seal strength. According to ASTM F88/F88M, specimens must be cut in a certain way and width. Test specimen preparation is also a source of variation in measurement results; thus, try to keep variation at that step as low as reasonably possible. ASTM F88/F88M mentions a specimen cutter that maintains a certain width across operators and trials. When we perform precision (= repeatability and reproducibility) studies during the test method validation, preparation must be carried out by each operator. The same is true for potential calibration steps that need to be performed, e.g., ASTM F88/F88M requires such a step. Be as detailed as possible: where, what, how, etc.


  5. Test Procedure


Provide clear, step-by-step instructions detailing every aspect of conducting the test. This section should be structured to minimize variability and ensure that different operators achieve consistent results. Key components include Setup and Execution.


Test Method Development – Setup

Outline all necessary preparations before the test begins. This might include, among others:


  • Testing Equipment Setup: Describe the equipment and tools required, and ensure all items are qualified and calibrated. Include any setup requirements for specialized instruments.


  • Software Configuration: If any software is involved, specify the version and settings that must be used. If applicable, include instructions for setting up data acquisition systems.


  • Parameters and Controls: Detail the test parameters, such as speed, temperature, humidity, pressure, or other relevant (environmental) conditions that could affect results. Clearly state any tolerance allowed.


  • Fixture Requirements: Describe any fixtures or mounts necessary to secure the specimen, including their dimensions and materials, to avoid shifting or vibration during the test.


  • Lighting Conditions: Specify lighting intensity, color temperature, direction, etc., to standardize visibility, especially for visual inspections or imaging. This is critical for identifying potential defects and ensuring clear observation. See standards like EN 13018 or similar.


  • Distance and Positioning: Define the distance between equipment (e.g., camera or measurement tool) and the test specimen and the exact positioning requirements to maintain accuracy.


  • Operator Qualifications: Note any specific skills or certifications operators need to perform this test correctly. This may include visual acuity testing as required per EN 13018.


Test Method Development – Execution


Provide a detailed sequence of actions from sample preparation to final documentation. This section should prevent ambiguity and ensure each step is reproducible. Key steps include, among others:


  • Sample Preparation: Describe how to prepare the test specimen, including cleaning, conditioning, labeling, and other preparatory steps. Specify storage or handling requirements to prevent contamination or degradation.


  • Calibration: Include a step for verifying equipment calibration immediately before testing. Document calibration data to ensure traceability.


  • Cleaning and Environmental Control: Outline any necessary cleaning procedures for the equipment and testing environment. This is essential to maintaining consistent conditions and avoiding contamination.


  • Documentation Protocol: Detail the type of data to be recorded at each test stage, including observations, measurements, and any anomalies. A format or template should be recommended to ensure data consistency.


  • Specimen Handling and Movement: Specify how to handle, position, and move or rotate the specimen during the test to ensure standardized exposure to the measurement system. Include these instructions if any manipulation enhances defect visibility (such as rotating under specific lighting).


  • Use of Dye or other Enhancers: If dyes or other agents are used to enhance the visibility of defects, specify the exact type, concentration, and application method. Include any precautions to avoid false positives or specimen contamination.


  • Data Recording and Analysis: Provide guidelines for data collection, such as taking multiple measurements or readings to ensure accuracy and repeatability. Describe any calculations or statistical analyses required immediately after data collection.
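As a minimal sketch of that last point, repeated readings can be summarized immediately after collection with Python's standard library (the readings below are hypothetical):

```python
import statistics

# Hypothetical repeated readings (mm) of one specimen
readings = [10.02, 9.98, 10.01]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)  # sample standard deviation
print(f"mean = {mean:.3f} mm, stdev = {spread:.3f} mm")
# -> mean = 10.003 mm, stdev = 0.021 mm
```

Recording the spread alongside the mean at each test stage makes drift or operator effects visible long before a formal validation study.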


Safety Considerations


List recommendations and precautions for maintaining safety during testing:


  • Personal Protective Equipment (PPE): Outline PPE requirements (e.g., gloves, goggles) specific to the materials or chemicals used in the test.


  • Hazardous Material Handling: If applicable, include instructions for handling, storing, and disposing of hazardous substances.


  • Electrical and Mechanical Safety: Address any potential electrical or mechanical hazards from the equipment and specify protocols for safe operation.


  • Emergency Procedures: Specify actions to take in case of an accident, such as first aid measures and emergency shutdown steps for equipment.


  • Environmental Health and Safety (EHS) Compliance: Ensure all testing adheres to relevant EHS standards and local regulations, such as ventilation requirements for tests involving fumes or dust.


Developing your test method will not be a straight line


The more complex the test method, the higher the likelihood of encountering setbacks. Expecting and preparing for these setbacks is not just practical – it’s essential. Running small-scale trials, pilot tests, or preliminary tests can help you identify weaknesses, refine procedures, and address feasibility issues before committing resources to test method validation and full-scale implementation.


Iteration is important in test method development because it helps address the challenges of getting equipment, samples, operators, and environmental conditions to work together smoothly. Things often don’t go perfectly on the first try.


By testing and refining, you can spot unexpected issues, like how temperature or vibrations might affect results. It also allows you to adjust the process to make it more reliable and consistent, ensuring the method works as intended. Increasing the number of iterations gives you a better understanding of where variability comes from and how to reduce it. To keep the number of iterations as small as possible, consider the following potential causes of variation:

Potential Causes for Variation

Man (People)

1. Insufficient training on assembly procedures

2. Low experience level of operators

3. Fatigue affecting focus

4. Inconsistent shift performance

5. Attention to detail

6. High workload/Overload

7. Motivation issues

8. Physical/cognitive limitations impacting performance

Machine (Equipment)

9. Frequent equipment breakdowns

10. Incorrect calibration of machinery

11. Outdated or inappropriate tools for precise assembly

12. Precision variability across units

13. Inadequate resolution

14. High wear rate affecting precision

15. Limited calibration ranges

16. Low sensitivity to minor defects

Method (Process)

17. Unclear or outdated assembly instructions

18. Poorly defined quality control steps

19. Lack of standard operating procedures (SOPs)

20. Inefficient workflow causing delays

21. Insufficient preparation of materials/tools

22. Improper fixation methods

23. Difficult access to parts

24. High assembly speed reducing accuracy

Material (Raw Materials)

25. Variation in component quality from suppliers

26. Deformable materials affecting assembly

27. Specification variability

28. Poor storage conditions impacting material quality

29. Inconsistent material condition upon receipt

30. Variability in raw material consistency

31. Material prone to deformation

32. Incorrect material type for assembly

Measurement (Inspection)

33. Uncalibrated inspection tools

34. Inconsistent inspection techniques

35. Lack of verification in measurement

36. Inadequate testing procedures

37. Detection limits of tools

38. Low precision in measurements

39. Accuracy issues in defect identification

Mother Nature (Environment)

40. Humidity and temperature inconsistencies

41. Poor lighting conditions for visual inspections

42. Dust contamination

43. High noise levels distracting operators

44. Equipment vibrations

45. Uncontrolled temperature variations

46. Inconsistent lighting affecting defect visibility

Frequent mistakes in test method development


I want to summarize a few frequently made mistakes I have made and/or seen others make, including some context for better understanding.


1. Vague Purpose or Scope Definition


Failing to clearly define the test method’s purpose and scope leads to ambiguity about what the method is intended to measure or validate. An example could be defining the test purpose as “to measure strength” without specifying which type (e.g., tensile strength, peel strength) or under what conditions.


2. Incorrect or Incomplete Reference to Standards


We also touched on that issue. Make sure that you fully and correctly implement a standard (if applicable), even if a standard only covers some parts of the test method. Again, an example would be using ASTM F88/F88M for seal strength testing but neglecting to cut off the first and last 10% of the seal area as required.


3. Insufficient Operator Training


Assuming that operators instinctively know how to perform the test or providing insufficient training can lead to significant errors. For example, during a Gage R&R study on a micrometer gage (the tool that looks like a hook), we discovered a problem. Micrometer gages are designed to be operated by turning a specific torque-limited area, ensuring consistent clamping force during measurements. However, one operator consistently reported smaller measurement values. Upon investigation, we found that this operator was unaware of the proper operating method and had been closing the gage by turning the non-torque-limited area. This resulted in excessive clamping force and inaccurate, smaller measurements.


4. Overlooking Environmental Factors


Overlooking environmental conditions, such as temperature, humidity, or lighting, can lead to unreliable test results. For instance, a test conducted in an uncontrolled environment showed significant variability due to temperature fluctuations affecting the material properties being measured. This inconsistency skewed the results, making them unusable for validation purposes. To prevent such issues, it’s essential to define and control environmental conditions as part of the test setup. Referencing standards like ISO 554 (Standard atmospheres for conditioning and/or testing — Specifications) can help specify and maintain consistent conditions, ensuring accurate and reproducible results.


 

5. Insufficient Sample Preparation


Inconsistent preparation of test specimens can introduce significant variability in the results. For example, cutting seal-strength specimens by hand instead of using a qualified sample cutter can result in uneven widths, which affects the accuracy and reliability of the test. To avoid this, it’s crucial to standardize specimen preparation procedures. Provide detailed instructions and use calibrated tools specifically designed for the task. This ensures consistency across samples, reducing variability and improving the validity of the test results.


6. Ignoring Equipment Calibration and Maintenance


Using equipment that is not calibrated can lead to inaccurate or unreliable results. For example, a micrometer used for dimensional checks may be out of calibration, resulting in incorrect measurements being reported. Over time, a lack of regular maintenance can cause equipment to drift from its specified tolerance range, further compounding errors. Without routine calibration, this drift may go unnoticed, leading to persistent inaccuracies in test results. To prevent these issues, establish a robust calibration and maintenance schedule for all testing equipment. Document each calibration activity to ensure traceability and provide evidence of compliance, helping to maintain accuracy and reliability over time. For more insights, check out our blog post on calibration errors to learn how to avoid these pitfalls and ensure your equipment performs as expected. 


7. Lack of Repeatability and Reproducibility (R&R) Studies


Failing to verify test method consistency across operators, equipment, and days can lead to unreliable results. For example, during a Gage R&R study on a micrometer gauge, one operator’s incorrect use of the non-torque-limited area caused excessive clamping force and smaller measurements. Conducting Gage R&R studies early, such as during trial runs, helps identify variability sources like operator technique or setup issues. This allows for targeted training and procedural improvements to ensure consistent and reliable results.
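A per-operator summary of the raw data often exposes such an offset before the formal statistics do. Here is a minimal sketch, using hypothetical repeat measurements of the same specimen:

```python
import statistics

# Hypothetical repeat measurements (mm) of the same specimen per operator
measurements = {
    "operator_A": [10.01, 10.02, 10.00],
    "operator_B": [10.00, 10.01, 10.02],
    "operator_C": [9.90, 9.91, 9.89],  # consistently low -> investigate technique
}

grand_mean = statistics.mean(v for vals in measurements.values() for v in vals)
for op, vals in measurements.items():
    bias = statistics.mean(vals) - grand_mean
    print(f"{op}: mean = {statistics.mean(vals):.3f} mm, bias = {bias:+.3f} mm")
```

A consistent negative bias for one operator, as in `operator_C` here, is exactly the pattern the micrometer story above would produce.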


8. Ignoring Measurement System Limitations


Selecting equipment with insufficient resolution or tolerance is a common issue in test method development, often leading to inaccurate results. For example, using a balance with ±1g accuracy to measure a component requiring ±0.1g precision fails to meet the necessary measurement criteria. Insufficient resolution is one of the most frequent problems I encounter and can easily undermine the validity of the test method. To avoid this, equipment capabilities must always be evaluated against the required measurement tolerances during the development phase, ensuring the chosen tools meet or exceed the necessary precision standards.
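One way to catch this early is a simple adequacy check against the tolerance band. The 10:1 resolution-to-tolerance ratio used below is a common rule of thumb (an assumption here, not a requirement stated above):

```python
def resolution_adequate(tolerance_band, resolution, ratio=10):
    """Rule-of-thumb check (common 10:1 heuristic): the measurement
    resolution should be no more than 1/ratio of the total tolerance band."""
    return resolution <= tolerance_band / ratio

# Component toleranced at +/-0.1 g -> total band of 0.2 g
print(resolution_adequate(0.2, 1.0))   # balance reading in 1 g steps -> False
print(resolution_adequate(0.2, 0.01))  # balance reading in 0.01 g steps -> True
```

Running this check for every instrument during development is far cheaper than discovering inadequate resolution during the validation study.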


9. Rushing Without Preliminary Trials

10. Neglecting Statistical Analysis

11. Overlooking Safety Considerations

12. Lack of Ongoing Review and Updates

13. Poor Documentation Practices

14. ...


This list can be expanded, but I hope this gives you a good impression of what you should focus on.


In my experience, while we think thoroughly about our manufacturing processes and their validation, we often spend too little time on our test methods. However, test methods are our eyes on the process – how do you want to judge a manufacturing process if the method you are evaluating it with is “blind”?

 

What has been your experience with test method development? Feel free to share your thoughts in the comment section.


If you have any questions about test method development, we're happy to support you. Send us an e-mail to office@sifo-medical.com and we'll get back to you asap.


Author: Simon Föger








 

Further helpful links & resources:

SIFo Medical YouTube Channel: Short, valuable videos on Quality Management

MedTech Free Resources: Get free access to checklists & templates

TMV Guide: Your practical guide to perform test method validation (incl. templates & videos)


 
