In today's rapidly evolving digital landscape, software quality assurance (QA) has become an indispensable component of the software development lifecycle. As organizations strive to deliver reliable, secure, and user-friendly applications, the role of QA has expanded from mere bug detection to a comprehensive quality management approach that spans the entire development process.
What is QA?
QA, which stands for Quality Assurance, describes the various processes and activities that occur during product development. In other words, it refers to the methods and procedures used to uphold quality standards. QA encompasses a systematic approach to ensuring that software products meet specified requirements and customer expectations before they reach end users.
It's important to note that some people equate QA testing with software testing, but software testing is just a part of QA. In terms of the scope of work, it can be understood as QA > software testing. QA represents a broader quality management framework, while software testing focuses specifically on executing tests to identify defects.
QA is sometimes confused with QC (Quality Control). The two differ in technique and immediate objective, but their ultimate goal is the same: to ensure product quality, identify potential issues, and enable a successful launch. The main difference lies in the timing: QA is process-oriented and preventive, running throughout development, while QC is product-oriented and detective, inspecting the finished product for defects.
No matter how testing technology iterates, certain core principles remain the guiding direction of testing work. Combined with the latest technology trends, they can be summarized into the following six fundamental principles:
Testing should not wait until development is complete; it should begin during the requirements stage, with testers participating in requirements reviews to identify ambiguities or omissions in advance. The core value of early testing at the requirements stage is to reduce the cost of defect repair: industry data show that a defect introduced at the requirements stage but discovered only after release can cost more than 100 times as much to fix as one caught at the requirements stage.
Key Implementation Points:
The possible inputs and scenario combinations of software are unlimited, so exhaustive testing cannot be achieved. Testing must therefore be prioritized through risk assessment, focusing on high-risk modules such as core transaction flows and high-frequency functions.
Implementation Method: Prioritize through a risk matrix: rate each test object on two dimensions, impact and probability of occurrence, and classify it into one of three risk levels: high, medium, or low.
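As a concrete illustration, the risk-matrix classification above can be sketched in a few lines of Python; the scoring thresholds and module ratings here are illustrative assumptions, not a standard.

```python
# Risk-matrix sketch: classify test targets by impact (1-3) and
# probability of occurrence (1-3). Thresholds are illustrative.
def risk_level(impact: int, probability: int) -> str:
    score = impact * probability
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

modules = {
    "checkout": (3, 3),       # core transaction flow
    "search": (2, 2),         # high-frequency feature
    "profile_theme": (1, 1),  # cosmetic setting
}
levels = {name: risk_level(i, p) for name, (i, p) in modules.items()}
print(levels)
```

High-risk targets then receive the densest test coverage, while low-risk ones may be covered by smoke tests alone.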
80% of defects are often concentrated in 20% of modules, and this classic Pareto observation still holds true in modern software. When multiple defects are found in a particular module during testing, the testing intensity on that module should be increased.
Implementation Approach:
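The defect-clustering principle can be operationalized with a quick script that finds which modules account for the bulk of recorded defects; the defect log below is illustrative.

```python
from collections import Counter

# Find the modules that account for ~80% of recorded defects
# (the 80/20 rule); the defect log below is illustrative data.
defects = ["payment"] * 8 + ["auth"] * 5 + ["search"] * 2 + ["ui"] * 1

def hotspots(defect_modules, share=0.8):
    counts = Counter(defect_modules).most_common()
    total = len(defect_modules)
    picked, covered = [], 0
    for module, n in counts:
        picked.append(module)
        covered += n
        if covered / total >= share:
            break
    return picked

print(hotspots(defects))  # the modules deserving extra testing intensity
```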
Testers should be independent from developers and maintain an objective perspective. Avoid relying on developers to verify their own work alone; instead, adopt a model of developer self-testing plus independent verification by the testing team.
Enhanced Suggestion: For core business systems, introduce an "independent testing team" (not directly affiliated with the development team) to further ensure objectivity.
All test cases should be traceable to specific requirement items to ensure full coverage of the requirements. Establish a two-way traceability chain of requirements, test cases, and defects.
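A minimal sketch of such a traceability chain, using hypothetical requirement, test-case, and defect IDs:

```python
# Two-way traceability: requirements -> test cases -> defects.
# All IDs are hypothetical.
requirements = {"REQ-1": "login", "REQ-2": "checkout", "REQ-3": "export"}
test_cases = {"TC-1": "REQ-1", "TC-2": "REQ-2", "TC-3": "REQ-2"}
defects = {"BUG-7": "TC-2"}

# forward: which requirements lack any test case?
covered = set(test_cases.values())
uncovered = [req for req in requirements if req not in covered]
print("uncovered requirements:", uncovered)

# backward: trace a defect to the requirement it affects
affected_req = test_cases[defects["BUG-7"]]
print("BUG-7 traces back to", affected_req)
```

In practice this mapping lives in a test-management tool, but the forward and backward queries are the same.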
AI can efficiently handle repetitive testing work (such as test case generation and regression testing), but logical verification of complex scenarios, user experience evaluation, and similar tasks still require human involvement.
Division of Labor Principle:
A quality model is a standard system for quantitatively evaluating software quality. ISO 25010 is a classic model commonly used in the industry, dividing software quality into two dimensions: "product quality" (8 characteristics) and "use quality" (4 characteristics).
The 8 Core Characteristics of Product Quality:
Quality assurance methodologies describe the actions taken by teams to plan, design, monitor, and optimize the QA process for an organization. QA, software testing, and development methods usually fall into the following categories:
Agile testing is organized around "sprints": short, iterative development sequences. In general, agile testing is carried out by a small team whose members address the testing needs of each phase of a sprint, including planning, analysis, and test execution.
Key Features:
Waterfall is another popular method, designed to proceed strictly step by step. Its main stages revolve around documenting the project plan up front to define each step, since later steps cannot be planned until the tasks defined earlier are completed.
Main Drawback: Its strict, sequential rules make quick adjustments impossible.
This is an incremental model of software testing where development and testing processes run in parallel. Once a specific development portion is implemented, the testing team immediately starts testing the developed product component.
The incremental testing process follows multiple iterations, each containing some value related to functionality and product features. In most cases, the incremental approach includes three stages:
It gives the testing team great flexibility and makes testing and subsequent rework smoother.
The spiral method is often considered part of the incremental approach, consisting of cycles that follow one another. These cycles include planning, risk analysis, engineering, and evaluation. The next cycle begins at the end of the previous one, allowing the testing team to quickly gain quality feedback.
Extreme Programming (XP) builds on pair programming: two developers collaborate closely, one writing the code while the other reviews it in real time. XP treats a stage as complete only when its code has been tested, which helps the pair produce high-quality code through continuous close examination.
The core logic of the four progressive test levels is "from small to large, from inside to outside, from local to whole." By gradually expanding the test scope, defects are filtered out layer by layer to ensure the quality of the final delivered product.
Core Definition: Unit testing is the testing of the smallest testable unit (such as function, method, class) in the software.
Core Goal: Discover code-level defects as early as possible, ensure the independence and correctness of each unit.
Test Object: Single function, method, class
Test Timing and Responsible Person: During the development stage, immediately after the code is written; led by developers
Commonly Used Methods and Tools:
AI-Assisted Unit Testing: AI tools can automatically generate unit test cases. For example, Amazon CodeWhisperer and GitHub Copilot can automatically generate test cases including normal scenarios, boundary values, and abnormal scenarios based on code logic.
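As a small worked example of unit testing that covers normal, boundary, and abnormal scenarios, here is a sketch using Python's `unittest`; the `apply_discount` function is a hypothetical unit under test.

```python
import unittest

# Hypothetical unit under test: a simple discount calculator.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_scenario(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_abnormal_scenario(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

suite = unittest.TestLoader().loadTestsFromTestCase(TestApplyDiscount)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", outcome.wasSuccessful())
```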
Core Definition: Integration testing combines multiple modules that have passed unit testing and verifies that the interface interactions between them work correctly.
Core Goal: Discover defects in the interface between modules (such as parameter transfer errors, incompatible data formats, abnormal interface call timing).
Test Object: Interfaces between modules (such as API calls between microservices, front-end and back-end interfaces)
Test Timing and Responsible Person: After unit testing is completed and before system testing; can be led by developers or testers
Commonly Used Methods and Tools:
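A common integration-level technique is verifying a module's interface contract against a stub that stands in for its dependency. A hedged sketch, with a hypothetical order module and inventory stub:

```python
# Verify an order module's interface contract against a stub that
# replaces the real inventory service (both are hypothetical).
def create_order(item_id, qty, inventory_service):
    stock = inventory_service(item_id)
    if stock < qty:
        return {"status": "rejected", "reason": "insufficient stock"}
    return {"status": "accepted", "item_id": item_id, "qty": qty}

def stub_inventory(item_id):
    # stands in for the real service during integration testing
    return {"A1": 5, "B2": 0}.get(item_id, 0)

print(create_order("A1", 3, stub_inventory)["status"])  # accepted
print(create_order("B2", 1, stub_inventory)["status"])  # rejected
```

The same pattern catches parameter-passing errors and incompatible data formats at the module boundary before system testing begins.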
Core Definition: System testing takes the entire software system as the test object and verifies whether the overall functions, performance, compatibility, security, etc. meet the requirements specifications.
Core Goal: Comprehensively verify the "overall usability" of the system, discover system-level defects.
Test Object: The entire software system (including front-end, back-end, database, and third-party dependent services)
Test Timing and Responsible Person: After integration testing is completed and before acceptance testing; led by testers
Commonly Used Methods and Tools:
Core Definition: Acceptance testing is a test led by the user or product owner after the system test is passed to verify whether the software meets the user's actual business needs.
Core Goal: Confirm whether the software "meets the real needs of users" rather than just conforming to the requirements document.
Test Object: The entire software system (focusing on the user's core business processes)
Test Timing and Responsible Person: After system testing is completed and before the product is launched; led by users and product owners
Types:
Core Goal: Verify whether the software function meets the requirements and whether it can correctly complete the established business process.
Testing Method: Mainly black box testing, focusing on covering normal scenarios, abnormal scenarios, and boundary scenarios.
Applicable Levels: Unit testing, integration testing, system testing, acceptance testing
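For instance, boundary-scenario coverage for a hypothetical rule accepting ages 18 through 65 might look like this:

```python
# Boundary-value coverage for a hypothetical rule: ages 18-65 are valid.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# just below, on, and just above each boundary
cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in cases.items():
    assert is_valid_age(age) == expected, f"boundary case failed at {age}"
print("all boundary cases pass")
```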
Core Goal: Evaluate the performance of the software under different loads and discover performance bottlenecks.
Types:
Tools: JMeter, LoadRunner, Gatling
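The idea behind such load tools can be sketched in miniature: fire concurrent requests at a target and report a percentile latency. The `handle_request` function below merely simulates a service under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Toy load test: run 100 concurrent "requests" and report p95 latency.
# handle_request simulates a service under test with fixed work time.
def handle_request(_):
    start = time.perf_counter()
    time.sleep(0.01)  # simulated processing
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(handle_request, range(100)))

p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"p95 latency: {p95 * 1000:.1f} ms")
```

Real tools add ramp-up schedules, distributed load generation, and richer reporting, but the core measurement is the same.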
Core Goal: Discover security vulnerabilities in the software and ensure the security of user data and systems.
Key Coverage: Identity authentication vulnerabilities, authorization vulnerabilities, data encryption vulnerabilities, interface security vulnerabilities
Tools: OWASP ZAP, Nessus, Burp Suite
Core Goal: Ensure the normal operation of the software in different hardware, software, and network environments.
Types:
Core Goal: Evaluate the user experience of the software to ensure that users can quickly understand, learn and use it.
Focus: Simplicity of operation steps, rationality of interface layout, clarity of error prompts
Methods: User research, eye tracking, usability testing
Test cases are specific scenarios or conditions that are designed to test the functionality, performance, usability, and security of a software application. A test case typically includes the following elements:
High-Level Test Case:
Low-Level Test Case:
Test Scenario:
Test Case:
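The elements listed above can be captured in a simple structure; the field names and values here are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

# Illustrative test-case structure; field names are not a standard schema.
@dataclass
class TestCaseRecord:
    case_id: str
    title: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected_result: str = ""
    priority: str = "medium"

tc = TestCaseRecord(
    case_id="TC-101",
    title="Login with valid credentials",
    preconditions=["a registered user account exists"],
    steps=["open the login page", "enter valid credentials", "submit"],
    expected_result="the user lands on the dashboard",
    priority="high",
)
print(tc.case_id, "-", tc.title)
```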
It's critical to understand what the QA system is meant to accomplish and the kinds of queries it should be able to address. This will direct the system's development and testing.
Make a broad range of test cases that are typical of the kinds of inquiries the QA system will run into. These test scenarios ought to include a variety of question types and levels of difficulty.
Create thorough test cases and test scenarios that cover many facets of the application or system being tested based on the test plan. These test cases are intended to verify various elements of usability, performance, security, and functionality.
Carry out the test cases and document the findings, keeping track of any flaws or problems found during the testing procedure. To achieve thorough coverage, it is suggested to use a variety of testing approaches, including user acceptance testing (UAT), integration testing, regression testing, and black-box testing.
Log, organize, and monitor bugs discovered during testing using a powerful defect-tracking system. This facilitates efficient communication and teamwork with the development team to identify and fix the found flaws.
If the system's performance falls short of the expected standards, pinpoint the problem areas and take action to fix them. This could entail improving the model, adding more training data, or changing the architecture of the system.
Regularly monitoring the QA system's performance and making adjustments as necessary is crucial for ensuring that it keeps performing well. This may entail regularly repeating the testing and improvement procedure.
It is highly recommended to keep the lines of communication with the development team, product owners, and stakeholders open and productive. This makes sure that everyone agrees with the testing goals, the development of the process, and any difficulties encountered.
Assemble feedback from users and stakeholders to comprehend their perspectives and acquire information for future enhancements. To assess the efficiency of the QA process, track and examine pertinent QA metrics like defect density, test coverage, and test execution progress.
Try to keep up with the most recent business trends, cutting-edge technological developments, and top QA procedures. This enables the testers to constantly pick up new approaches or tools that can improve the testing process, adapt, and use them.
User Acceptance Testing (UAT), also known as acceptance testing, is the final stage of the software testing process. UAT plays a major, even critical, role, as it validates whether the business requirements are met before the actual product release. In UAT, business users exercise the developed software to verify that it works as expected according to the documented specifications.
1. UAT Sign-off: This important KPI shows whether the system has passed UAT and is ready for production release. It symbolizes the formal approval and support of the target audience, key stakeholders, or corporate representatives.
2. Test Cycle Time: This KPI measures the length of time required to complete a UAT cycle, including test planning, execution, defect resolution, and retesting.
3. Defect Resolution Time: This KPI tracks how quickly bugs found during UAT are fixed and retested. It aids in assessing how quickly the development and testing teams respond to and resolve problems.
4. User Satisfaction: A subjective KPI, user satisfaction gauges how satisfied end users are with the system being tested. Surveys, feedback forms, or user interviews can be used to measure it.
5. Test Case Execution Rate: The rate at which test cases are carried out during UAT is gauged by this KPI. It aids in assessing the effectiveness of the testing process.
6. Defect Density: This KPI calculates the number of flaws or problems found during UAT and divides it by the volume or complexity of the system under test.
7. Test Coverage: The amount of the system or application that has undergone UAT testing is measured by test coverage.
8. Requirements Coverage: The proportion of user requirements that have undergone testing and validation during UAT is gauged by this KPI.
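Defect density, as defined above, reduces to a simple calculation; the figures below are illustrative, using thousands of lines of code (KLOC) as the size measure.

```python
# Defect density: defects found during UAT divided by system size
# (here KLOC); the figures are illustrative.
def defect_density(defects_found: int, kloc: float) -> float:
    return round(defects_found / kloc, 2)

print(defect_density(12, 8.0))  # 1.5 defects per KLOC
```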
A sanity check, or sanity testing, is a quick type of software testing performed by testers to confirm that a new build of the software works at a basic level. This quick pass prevents the developer and QA team from wasting time and resources on more rigorous testing of builds that aren't ready yet.
Sanity tests are usually run on relatively stable builds whose full functionality has not yet been verified. For example, after making small changes to a build, testers can run sanity tests to confirm those changes work correctly before proceeding to full regression testing.
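A sanity checklist can be as simple as a handful of smoke assertions run against a build before committing to full regression; the checks and `build` dictionary below are hypothetical stand-ins for real probes.

```python
# Minimal sanity checklist: quick smoke assertions against a build.
# The build dict and checks are hypothetical stand-ins for real probes.
build = {"version": "2.4.1", "services_up": True, "login_works": True}

SANITY_CHECKS = [
    ("build has a version", lambda b: bool(b.get("version"))),
    ("core services respond", lambda b: b.get("services_up", False)),
    ("login flow works", lambda b: b.get("login_works", False)),
]

failures = [name for name, check in SANITY_CHECKS if not check(build)]
print("sanity passed" if not failures else f"sanity failed: {failures}")
```

Only builds that pass this gate move on to the full regression suite.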
Functional requirements detail the user-facing behaviors and functionalities that software systems must provide to fulfill users' needs and business purposes.
Characteristics:
Examples:
Performance requirements are parameters that describe the minimum acceptable level of performance and the desired characteristics the system should demonstrate in terms of speed, response time, scalability, and resource usage.
Characteristics:
Examples:
End-to-end (E2E) testing is a method in which we test complete scenarios through the system to ensure that every step of a workflow behaves as specified. Rather than examining components in isolation, E2E testing validates the whole flow, from the entry point of a user journey to its final outcome. It can be done by simulating real users, or by using automated tools that exercise all aspects of an application.
1. User Functions:
2. Conditions:
3. Test Cases:
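Putting these pieces together, one complete user journey can be simulated end to end; the step functions below are hypothetical stand-ins for real UI or API drivers.

```python
# One end-to-end user journey driven through hypothetical step
# functions; each returns True when its hand-off succeeded.
state = {}

def browse(s):
    s["cart"] = ["book"]
    return True

def checkout(s):
    if "cart" not in s:
        return False  # hand-off from the previous step failed
    s["order_id"] = "ORD-1"
    return True

def confirm(s):
    return s.get("order_id") == "ORD-1"

results = [step(state) for step in (browse, checkout, confirm)]
print("journey complete:", all(results))
```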
System Integration Testing (SIT) is the process of testing an application or system as an integrated whole to ensure that it meets security requirements, meets performance goals, and has been built according to the system specifications. SIT gives the team a realistic environment in which to exercise the integrated application before release.
1. Top-down Approach: Test the highest-level modules first, using stubs in place of the lower-level modules they depend on, and integrate downward layer by layer.
2. Bottom-up Approach: Test the lowest-level modules first, using drivers to simulate the higher-level modules that call them, and integrate upward.
3. Sandwich Approach: Combines the two approaches above, working toward a middle target layer from both the top and the bottom at once, like the layers of a sandwich.
4. Big Bang Approach: Integration happens only once all the application modules are complete; the fully integrated system is then tested as a whole to check whether it works properly.
Beta testing is a type of acceptance testing that takes place after the completion of functional and system testing, and before the product release. It is the final stage of technical testing.
Beta testing is always conducted after Alpha testing is complete but before the product is released to the market. The product should be at least 90%-95% complete: stable on the target platforms and nearly or fully feature-complete.
Preparation Checklists:
| UAT (User Acceptance Testing) | QA (Quality Assurance) |
|---|---|
| Focused on testing the software from the end user's perspective | Focused on ensuring the overall quality of the development process |
| Involves end users testing the application's functionality and usability | Involves auditing and verifying processes, artifacts, and adherence to standards |
| Performed by end users who may not have technical knowledge | Performed by dedicated QA professionals with expertise in testing methodologies |
| Aims to validate that the application meets business requirements | Aims to identify and resolve process deviations and ensure compliance |
| Typically occurs towards the end of the development lifecycle | An ongoing process throughout the development lifecycle |
| Helps ensure the application is ready for production use | Helps establish and maintain quality standards throughout development |
| Focuses on real-world scenarios and user workflows | Focuses on the entire development process |
| The final testing phase before deployment | An ongoing effort to improve and maintain quality |
Functional testing plays a critical role in ensuring the overall quality of software. Each facet of functional testing adds value to the entire development process.
1. Accuracy of the Product: Functional testing helps teams ensure the accuracy of an application's behavior. Users have numerous expectations of an application, and functional testing enables testers to verify that these are fully met.
2. Uncovering Functional Deficiencies: Functional testing assists testers in comparing the core deliverables of the application with the actual results, thereby identifying any functional flaws.
3. Guaranteeing Smooth Operation: Functional testing serves the purpose of confirming that code modifications have not altered the existing functionality or unintentionally introduced bugs into the system.
4. Ensuring Seamless Operation Across Platforms: Automated functional testing is a valuable tool in ensuring that an application operates smoothly across diverse technology platforms and devices.
5. Ensuring End Users' Requirements and Satisfaction: By implementing functional testing in the early stages of SDLC, development teams can ensure that consumer expectations are effectively managed and the product will fully satisfy the requirements of end users.
With the development of AI Agent, cloud native, DevOps and other technologies, software testing is undergoing a transformation from "human-led" to "human-machine collaboration."
1. Qualitative Changes in Testing Efficiency: AI-driven testing tools can generate test cases from natural language and maintain self-healing automated UI scripts, significantly lowering the barrier to entry for testing.
2. Expansion of Testing Scope: The distributed architecture of cloud-native environments, the black-box logic of AI applications, and the real-time requirements of in-vehicle software all pose new challenges to testing.
3. Upgrading of Testing Roles: Traditional "functional testers" are transforming into "quality architects" and need to have more comprehensive technical capabilities.
Core Changes:
Technical Support:
Define quantitative quality indicators based on ISO 25010 and business requirements. For example: requirements coverage of test cases, defect density, automated regression pass rate, and crash-free session rate, each with an explicit target threshold.
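Such indicators can feed a simple quality gate; the metric names and thresholds below are illustrative assumptions, not ISO 25010 prescriptions.

```python
# Quality gate over quantitative indicators; names and thresholds
# are illustrative assumptions, not ISO 25010 prescriptions.
thresholds = {"test_coverage": 0.80, "defect_density": 0.5, "crash_free_rate": 0.99}
measured = {"test_coverage": 0.84, "defect_density": 0.3, "crash_free_rate": 0.995}

def quality_gate(measured, thresholds):
    failed = []
    for metric, limit in thresholds.items():
        value = measured[metric]
        # defect_density must stay below its limit; the rest must meet theirs
        ok = value <= limit if metric == "defect_density" else value >= limit
        if not ok:
            failed.append(metric)
    return failed

print("gate failures:", quality_gate(measured, thresholds))
```

A release candidate passes only when the failure list is empty.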
Regularly analyze quality weak points based on assessment results, and drive the team to optimize the entire process, from requirements and design through development and testing.
Select appropriate intelligent testing tools to realize automatic collection, analysis and visualization of indicator data. Automate repetitive tasks while maintaining human oversight for complex scenarios.
Keep the lines of communication with the development team, product owners, and stakeholders open and productive. Ensure everyone agrees with the testing goals and any difficulties encountered.
Continuously train team members on the latest testing methodologies, tools, and best practices. Encourage knowledge sharing within the QA community.
Track and examine pertinent QA metrics like defect density, test coverage, and test execution progress. Use data-driven insights to make informed decisions.
Choosing the right QA methodology is crucial for achieving optimal product quality and optimization. Each methodology has its own strengths and weaknesses, and the choice depends on the specific requirements and context of your project.
The core of software testing is "full-process coverage" and "multi-dimensional verification": test levels ensure layer-by-layer quality filtering from code to users, and test types ensure that all quality requirements, such as function, performance, and security, are covered.
In the era of intelligence, the core principles of testing are the "methodology" of testing work; traditional quality models such as ISO 25010 are the "basic framework" of quality assessment; intelligent quality assessment technology is an "upgrade tool" to deal with emerging software forms. Mastering the combined application of the three is one of the core abilities of testers in the intelligent era.
By understanding different QA methodologies and models, you can make informed decisions and implement effective QA strategies to optimize your product's quality. Remember to constantly encourage a team environment where everyone, not just you, is responsible for quality. Keep the QA community expanding so that each QA Engineer can benefit from one another's support.
The future of QA lies in the balance between automation and human expertise, between preventive measures and corrective actions, and between established best practices and innovative approaches. By embracing this holistic view of quality assurance, organizations can deliver software products that not only meet technical specifications but also exceed user expectations in today's competitive digital landscape.
This comprehensive guide covers all fundamental aspects of Software Quality Assurance based on industry best practices and standards. By implementing these principles and methodologies, organizations can establish robust QA processes that ensure high-quality software delivery.