Presentation on
Software Engineering Quality
Assurance and Testing
Software-Quality-Testing
• The economic importance of Software
-The functioning of machines and equipment depends largely on
software
-We cannot imagine large systems in telecommunications,
finance (banking), or traffic control (airlines) without software
• Software Quality
-More and more, the quality of software has become the
determining factor for the success of technical or
commercial systems and products.
• Testing for quality improvement
-Testing ensures the improvement of the quality of software
products as well as of the software development
process itself
What is Software??
• Computer software, or just software, is a collection of computer
programs and related data that tells a computer what to do and
how to do it (to perform a specific job).
• Definition Software (as per IEEE 610):
Computer programs, procedures and possibly associated
documentation and data pertaining to the operation of a
computer system.
• Types of Software:
-System Software
-Application Software
* ATM Machine
What is Software Quality??
• Software Quality (as per ISO/IEC 9126):
The totality of functionality and features of a
software product that contribute to its ability to
satisfy stated or implied needs.
• Software Quality (as per IEEE Std 610):
The degree to which a component, system or
process meets specified requirements and/or
user/customer needs and expectations.
IEEE: Institute of Electrical and Electronics Engineers
IEC: International Electrotechnical Commission
ISO: International Organization for Standardization
What is Testing??
• Testing is the activity that measures the:
• Quality,
• Performance,
• Strengths,
• Capability and
• Reliability
of (someone or something), before putting it
into widespread use or practice.
Software Testing ??
• Software testing is the activity that measures the:
• Quality,
• Performance,
• Strengths,
• Capability and
• Reliability
of a software product before putting it into widespread use.
Software Testing
• Software Testing is the process of executing a
program or system with the intent of finding
errors.
• Or, it involves any activity aimed at evaluating
an attribute or capability of a program or
system and determining that it meets its
required results.
Failure Example 01
• Flight Ariane 5
(Most Expensive Computer Bugs in History)
On June 4, 1996, the Ariane 5 rocket tore itself apart 37
seconds after launch because of a malfunction in the control
software, making the fault one of the most expensive computer
bugs in history.
-mission-critical issue
Failure Example 02
• Lethal X-Rays
Therac-25 was a radiation therapy machine produced by
Atomic Energy of Canada Limited. In the mid-1980s, several
patients died because of massive overdoses of radiation,
which happened because of software bugs.
-safety-critical issue
Causes of failures:
-Human error:
coding, DB design, system configuration…
Causes: lack of knowledge, time pressure, complexity…
-Environmental conditions:
change of environment…
Causes: radiation, electromagnetic fields, pollution,
sun spots, power failure…
Cost & Defects
[Figure: relative cost of error correction rising across the phases
Specification, Design, Coding, Test, Acceptance]
Costs of defects
-The cost of fixing a defect increases with the time
it remains in the system.
-Detecting errors at an early stage allows for
error correction at reduced cost.
Software Quality:
according to ISO/IEC 9126
software quality consists of:
-Functionality
-Reliability
-Usability
-Efficiency
-Maintainability
-Portability
Types of QA:
Constructive QA
activities to prevent defects,
e.g. through appropriate methods of software
engineering
Analytical QA
activities for finding defects,
e.g. through testing
Constructive QA
Technical:
Methods
Tools
Languages
Templates
IDE
Organizational:
Guidelines
Standards
Checklists
Process rules and regulations
Legal requirements
Figure of Constructive QA
Organizational: Guidelines, Standards, Checklists, Process rules and regulations, Legal requirements
Technical: Methods, Tools, Languages, Lists/templates, IDE
Analytical QA
Motto: Defects should be detected as early as
possible in the process through Testing
Static
Examination without executing the program
Dynamic
Includes executing the program
White box
Black box
Experience-based techniques
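The difference between the white-box coverage measures named above can be sketched with a small example. The function below is invented for illustration; it shows the classic case where one test reaches 100% statement coverage but only 50% branch coverage.

```python
# A classic white-box illustration: one test executes every statement
# yet exercises only one of the two branches.

def absolute(n: int) -> int:
    if n < 0:
        n = -n      # executed by absolute(-3)
    return n

# absolute(-3) runs every statement (statement coverage: 100%)
# but never takes the path where the condition is False
# (branch coverage: 50%). Adding absolute(3) covers the missing branch.
print(absolute(-3), absolute(3))  # 3 3
```

Branch coverage is therefore the stronger criterion: a test suite that achieves full branch coverage also achieves full statement coverage, but not vice versa.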
Figure of Analytical QA
Static:
Reviews/walkthroughs
Control flow analysis
Data flow analysis
Compiler metrics/analysis
Dynamic:
White box:
Statement coverage
Branch coverage
Condition coverage
Path coverage
Black box:
Equivalence partitioning
Boundary value analysis
State transition testing
Decision tables
Use-case-based testing
Experience-based techniques
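Two of the black-box techniques listed above, equivalence partitioning and boundary value analysis, can be sketched briefly. The function and its 18-to-65 validity rule are assumptions made for illustration, not taken from the slides.

```python
# Hypothetical test object: accepts ages from 18 to 65 inclusive.

def is_eligible(age: int) -> bool:
    """Return True if age is within the valid range 18..65."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below range (invalid)": 10,
    "within range (valid)": 40,
    "above range (invalid)": 80,
}
for name, age in partitions.items():
    print(name, age, is_eligible(age))

# Boundary value analysis: values at and immediately around each boundary.
boundaries = [17, 18, 19, 64, 65, 66]
results = [is_eligible(a) for a in boundaries]
print(results)  # [False, True, True, True, True, False]
```

Partitioning keeps the number of test cases small; the boundary values catch the off-by-one comparison errors that cluster at range limits.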
Software Quality:
according to ISO/IEC 9126
software quality consists of:
-Functionality (functional quality)
-Reliability
-Usability
-Efficiency
-Maintainability
-Portability
(Reliability through Portability: non-functional quality)
Functional Q-attributes:
Functional means correctness & completeness
Correctness: the functionality meets the required
attributes / capabilities
Completeness: the functionality meets all (functional)
requirements
According to ISO/IEC 9126 Functionality includes:
-Suitability
-Accuracy
-Compliance
-Interoperability
-Security
Non-Functional Q-attributes:
-Reliability
maturity, fault tolerance, recovery after failure
-Usability
learnability, understandability, attractiveness
-Efficiency
minimal use of resources
-Maintainability
verifiability, changeability
-Portability
transferability, ease of installation…
Non-Functional Q-attributes
Reliability
- maturity, fault tolerance, recovery after failure
- characteristic: under given conditions, a software system will keep its
capabilities/functionality over a period of time
- reliability = quality/time
Usability
- learnability, understandability, attractiveness
- characteristics: easy to learn, compliance with guidelines, intuitive handling
Efficiency
- System behavior: functionality and time behavior
- Characteristics: the system requires a minimal use of resources
(e.g. CPU-time) for executing the given task
Maintainability
- Verifiability, stability, analyzability, changeability
- Characteristics: amount of effort needed to introduce changes in system
components
Portability
- Replaceability, compliance, installability
- Ability to transfer the software to a new environment
(software, hardware, organization)
- Characteristics: easy to install and uninstall, parameters
How much Testing is Enough ?
-Exit criteria
Not finding (any more) defects is not an
appropriate criterion for stopping testing activities.
-Risk Based Testing
-Time and Budget
Test case description according to IEEE 829:
- Distinct identification: Id or key in order to link, for example, an error
report to the test case where it appeared
- Preconditions: situation previous to test execution or characteristics of
the test object before conducting the test case
- Input values: description of the input data on the test object
- Expected result: output data that the test object is expected to
produce
- Post conditions: Characteristics of the test object after test execution,
description of its situation after the test
- Dependencies: order of execution of test cases, reason for
dependencies
- Requirements: Characteristics of the test object that the test case
will examine
- How to execute the test and check results (optional)
- Priority (optional)
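The IEEE 829 fields listed above can be captured as a simple record. The structure follows the list; all concrete values (test id, login scenario, data) are invented for illustration.

```python
# Sketch of one test case record with the IEEE 829 fields listed above.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str                   # distinct identification
    preconditions: list            # state of the test object before the test
    input_values: dict             # input data for the test object
    expected_result: object        # output the test object should produce
    postconditions: list           # state of the test object after the test
    dependencies: list = field(default_factory=list)  # execution order
    requirement: str = ""          # characteristic under examination
    priority: int = 3              # optional

tc = TestCase(
    test_id="TC-LOGIN-001",
    preconditions=["user account 'alice' exists", "user is logged out"],
    input_values={"username": "alice", "password": "secret"},
    expected_result="login succeeds and dashboard is shown",
    postconditions=["user session is active"],
    requirement="REQ-AUTH-01",
    priority=1,
)
print(tc.test_id, tc.priority)
```

The distinct `test_id` is what lets an error report be linked back to the test case where the failure appeared.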
Testing and Debugging?
Testing and re-testing are test activities
Testing shows system failures
Re-testing proves that the defect has been corrected
Debugging and correcting defects are developer
activities
Through debugging, developers can reproduce failures,
investigate the state of programs and find the corresponding
defect in order to correct it.
[Figure: Test → Debugging → Correct defects → Re-test]
Error, defect, failure
- Error(IEEE 610):
a human action that produces an incorrect result,
e.g. a programming error
- Defect:
a flaw in a component or system that can cause the component or
system to fail to perform its required function,
e.g. an incorrect statement or data definition.
- Failure:
the physical or functional manifestation of a defect. A defect, if
encountered during execution, may cause a failure.
- Deviation of the component or system from its expected delivery,
service or result. (After Fenton)
Defects cause failure
Defects and Failure
A human being can make an error (mistake), which
produces a defect (fault, bug) in the program code, or in a
document. If a defect in code is executed, the system may
fail to do what it should do (or do something it shouldn’t),
causing a failure.
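The error → defect → failure chain can be made concrete with a tiny sketch. The buggy function below is invented: a (simulated) programmer's error leaves a defect in the code, and executing that defect with certain inputs produces a failure.

```python
# Minimal illustration of the error -> defect -> failure chain.

def max_of(a, b):
    # DEFECT: the comparison should be 'a >= b'; it was written as
    # 'a <= b' by (simulated) human error.
    return a if a <= b else b

# For some inputs the defect stays hidden ...
print(max_of(2, 2))   # 2 - looks correct by coincidence

# ... and for others it manifests as a failure:
print(max_of(5, 3))   # 3 - failure: the expected result is 5
```

This also shows why a defect and a failure are different things: the defect exists in the code permanently, while the failure only appears for inputs that execute the defective path.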
Debugging vs. Testing
Debugging and testing are different. Dynamic testing can
show failures that are caused by defects.
Debugging is the development activity that finds, analyses
and removes the cause of the failure.
Seven Principles of Testing:
Principles
A number of testing principles have
been suggested over the past 40 years
and offer general guidelines common for
all testing.
Principle 1 – Testing shows presence of defects
Testing can prove the presence of defects, but cannot prove the
absence of defects. Testing reduces the probability of
undiscovered defects remaining in the software but, even if no
defects are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and
preconditions) is not feasible. Instead of exhaustive
testing, risk analysis, time & cost and priorities
should be used to focus testing efforts.
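A back-of-the-envelope calculation shows why exhaustive testing is infeasible even for tiny interfaces. The figures below are simple arithmetic chosen for illustration, not taken from the slides.

```python
# Why exhaustive testing is impossible: a function taking just two
# 32-bit integers already has 2**64 input combinations.

combinations = 2 ** 32 * 2 ** 32      # two 32-bit inputs
tests_per_second = 1_000_000_000      # optimistic: 10**9 tests per second
seconds_per_year = 60 * 60 * 24 * 365

years = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations} combinations ~ {years:.0f} years of testing")
```

Even at a billion tests per second, the run takes centuries, which is why risk analysis and priorities must focus the testing effort instead.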
Principle 3 – Early testing
To find defects early, testing activities shall be
started as early as possible in the software or
system development life cycle, and shall be focused
on defined objectives.
Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the
expected and later observed defect density of
modules. A small number of modules usually
contains most of the defects discovered during
prerelease testing, or is responsible for most of the
operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again,
eventually the same set of test cases will no longer
find any new defects. To overcome this “pesticide
paradox”, test cases need to be regularly reviewed
and revised, and new and different tests need to be
written to exercise different parts of the software or
system to find potentially more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different
contexts. For example, safety-critical
software is tested differently from an e-
commerce site.
Principle 7 – Absence-of-errors fallacy
The absence of errors does not by itself prove quality.
Finding and fixing defects does not help if
the system built is unusable and does not
fulfill the users’ needs and expectations.
Depending on the approach chosen, testing will take place at different
points within the development process
- Testing is a process itself
- The testing process is determined by the following phases
- Test planning
- Test analysis and test design
- Test implementation and test execution
- Evaluating Exit Criteria and reporting
- Test closure activities
- Test Controlling (at all phases)
Test phases may overlap
Testing as a process within the SW development process
Testing Process
Test Controlling (at all phases):
Test Planning → Test Analysis and Test Design →
Test Implementation and Test Execution →
Evaluating Exit Criteria and Reporting →
Test Closure Activities
- Testing is more than test
execution!
- Includes overlapping and
backtracking
- Each phase of the testing
process takes place concurrently
with the corresponding phase of the
software development process
Test Planning: main tasks
- Determining the scope and risk
- Identifying the objectives of testing
and exit criteria
- Determining the approach: test techniques,
test coverage, testing teams
- Implementing the testing method/test strategy,
planning the time span for the activities that follow
- Acquiring and scheduling test resources:
people, test environment, test budget
Test Analysis and Design: main tasks /1
- Reviewing the test basis (requirements, system architecture, design, interfaces).
*Analyze system architecture and system design,
including interfaces among test objects
- Identifying specific test conditions and required
test data.
*Evaluate the availability of test data and/or the
feasibility of generating test data.
- Designing the tests/test cases.
*Create and prioritize logical test cases
(test cases without specific values for test data)
- Selecting test tools
Test Implementation & Execution
– Developing and prioritizing test cases
• creating test data, writing test procedures
• creating test sequences
– Creating test automation scripts, if necessary
– Configuring the test environment
– Executing tests (manually or automatically)
• follow the test sequences stated in the test
plan (test suites, order of test cases)
– Recording and analyzing test results
– Re-testing (after defect correction)
– Regression testing
• ensure that changes (after installing a new release, or fixing errors) did not
introduce new defects or uncover other ones.
• Evaluating Exit Criteria: main tasks
– Assessing test execution against the
defined objectives (e.g. test exit criteria)
– Evaluating test logs (summary of test
activities, test results, communicated
exit criteria)
– Providing information to allow the decision
whether more tests should take place
Test control
Test control is an ongoing activity
influencing test planning. The test plan
may be modified according to the information
acquired from test controlling.
- The status of the test process is determined
by comparing the progress achieved against
the plan. Necessary activities will be
started accordingly.
- Measure and analyze results
- The test progress, test coverage and the
exit criteria are monitored and documented
- Start corrective measures
- Prepare and make decisions
• Test Closure Activities: main tasks
– Collecting data from completed test activities
to consolidate experience, facts
and numbers
– Closing incident reports or raising
change requests for any remaining open points
– Documenting the acceptance of the system
– Finalizing and archiving the testware, the test
environment and the test infrastructure for
later reuse, and handing over to operations
– Analyzing “lessons learned” for future projects
• Test suite/test sequence
– a set of several test cases for a component or system, where the postcondition of one test is used as
the precondition for the next one
• Test procedure specification (test scenario)
– a document specifying a sequence of actions for the execution of a test. Also known as a test script
or manual test script. (After IEEE 829)
• Test execution
– The process of running a test, producing actual results.
• Test log (test protocol, test report)
– A chronological record of relevant details about the execution of tests:
when the test was done, what result was produced.
• Regression tests:
– testing of a previously tested program following modification, to ensure that defects have not
been introduced or uncovered in unchanged areas of the software as a result of the changes
made. It is performed when the software or its environment is changed.
• Confirmation testing, retest:
– repeating a test after a defect has been fixed, in order to confirm that the original defect has been
successfully removed
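Retest and regression test can be sketched side by side. The discount function, its values, and the defect history are invented for illustration: after a (hypothetical) fix to the function, the retest repeats the case that exposed the original defect, while the regression suite re-runs previously passing cases to confirm the fix broke nothing.

```python
# Retest vs. regression test, with an invented test object.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

# Retest: the case that originally exposed the (now fixed) defect.
assert apply_discount(100.0, 10) == 90.0

# Regression suite: cases that passed before the fix, re-run to ensure
# the change did not introduce or uncover defects elsewhere.
regression_cases = [
    ((50.0, 0), 50.0),
    ((80.0, 25), 60.0),
    ((200.0, 15), 170.0),
]
for (price, pct), expected in regression_cases:
    assert apply_discount(price, pct) == expected

print("retest and regression suite passed")
```

Keeping the regression cases in data form, as above, makes them cheap to re-run after every change or new release.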
• Roles and Responsibilities
Perception: testing is a destructive activity.
Wrong!
Testing is a constructive activity as well:
it aims at eliminating defects from a product!
Developer role | Tester role
Implements requirements | Plans testing activities
Develops structures | Designs test cases
Designs and programs the software | Is concerned only with finding defects
Creating a product is his success | Finding an error made by a developer is his success
Developers are constructive! | Testers are destructive!
Personal attributes of a good tester /1
Curious, perceptive, attentive to detail
– To comprehend the practical scenarios of the customer
– To be able to analyze the structure of the test
– To discover details where failures might show
Skeptical, with a critical eye
– Test objects contain defects: you just have to find them
– Do not believe everything you are told by the developers
– One must not be frightened by the fact that serious defects may often
be found which will have an impact on the course of the project.
Personal attributes of a good tester /2
Good communication skills
– To bring bad news to the developers
– To overcome frustrated states of mind
– Both technical issues and issues of the practical use of the system must be
understood and communicated
– Positive communication can help to avoid or to ease difficult situations.
– To quickly establish a working relationship with the developers
Experience
– Personal factors influence error occurrence
– Experience helps in identifying where errors might accumulate
Differences: to design, to develop, to test
– Testing requires a different mindset from designing and developing
new computer systems
• Common goal: to provide good software
• Designer's mission: help the customer to supply the right requirements
• Developer's mission: convert the requirements into functions
• Tester's mission: examine the correct implementation of the
customer's requirements
– In principle, one person can be given all three roles to work on.
• Differences in goals and role models must be taken into account
• This is difficult but possible
• Other solutions (independent testers) are often easier and produce
better results
Independent testing
– The separation of testing responsibilities supports the independent
evaluation of test results.
– The diagram below shows the degree of independence as a bar chart.
• Types of test organization /1
Developer tests
-The developer will never examine his “creation” without bias
(emotional attachment)
• He, however, knows the test object better than anybody else
• Extra costs result from orienting other persons on the test
object
-Human beings tend to overlook their own faults.
• The developer runs the risk of not recognizing even self-evident
defects.
-Errors made because of misinterpretation of the requirements will
remain undetected.
• Setting up test teams where developers test each other's products
helps to avoid, or at least lessen, this shortcoming.
Types of test organization /2
• Teams of developers
– Developers speak the same language
– Costs for orientation in the test object are kept moderate,
especially when the teams exchange test objects.
– Danger of generating conflicts among development teams
• One developer who looks for and finds a defect will not be the
other developer's best friend
– Mingling development and test activities
• Frequent switching between ways of thinking
• Makes it difficult to control the project budget
Types of test organization /3
• Test teams
– Creating test teams covering different project
areas enhances the quality of testing.
– It is important that test teams of different areas in
the project work independently
Types of test organization /4
• Outsourcing tests
– The separation of testing activities and development activities offers the best
independence between test object and tester.
– Outsourced test activities are performed by persons having relatively little
knowledge about the test object and the project background
• The learning curve brings high costs; therefore unbiased external experts should be involved at
the early stages of the project
– External experts have a high level of testing know-how:
• An appropriate test design is ensured
• Optimal methods and tools are found
• Designing test cases automatically
– Computer-aided generation of test cases, e.g. based on the formal
specification documents, is also independent
Difficulties /1
• Unable to understand each other
– Developers should have basic knowledge of testing
– Testers should have basic knowledge of software development
• Especially in stressful situations, discovering errors that someone has made
often leads to conflicts.
– The way defects are documented and described will
decide how the situation develops.
– Persons should not be criticized; the defects must be stated factually
– Defect descriptions should help the developers find the error
– Common objectives must always be the main issue.
Difficulties /2
• Communication between testers and developers missing or
insufficient. This can make it impossible to work together.
– Testers are seen as “only messengers of bad news”
– Improvement: try to see yourself in the other person's role. Did my
message come through? Did the answer reach me?
• A solid test requires an appropriate distance to the test object
– An independent and non-biased position is acquired through distance
from the development
– However, too large a distance between the test object and the
development team will lead to more effort and time for testing.
Summary
• People make mistakes; every implementation has defects.
• Human nature makes it difficult to face one's own
defects (error blindness)
• When developers and testers meet, two different worlds meet.
– Developing is constructive: something is created that was not there before
– Testing seems destructive at first glance: defects will be found
– Together, development and testing are constructive in their objective to ensure
software with the fewest defects possible.
• Independent testing enhances the quality of testing:
instead of developers, use tester teams and teams with external personnel
for testing.

More Related Content

PPTX
PDF
What is Integration Testing? | Edureka
PDF
Software testing methods, levels and types
PPTX
Stlc ppt
PPT
Testing concepts ppt
PPTX
Software testing life cycle
PDF
Test cases
PPTX
Performance testing
What is Integration Testing? | Edureka
Software testing methods, levels and types
Stlc ppt
Testing concepts ppt
Software testing life cycle
Test cases
Performance testing

What's hot (20)

PDF
Types of Software Testing | Edureka
PPTX
Regression testing
PPTX
SDLC vs STLC
PPTX
Types of testing
PDF
Software Testing Techniques: An Overview
PPTX
System testing
PPSX
Principles of Software testing
PPTX
Manual testing
PPTX
Introduction to software testing
PDF
Black Box Testing
PPTX
Software testing
PPTX
Levels Of Testing.pptx
PPTX
functional testing
PDF
Regression Testing - An Overview
PPTX
scenario testing in software testing
PDF
Software Testing Life Cycle (STLC) | Software Testing Tutorial | Edureka
PPTX
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...
PPTX
Writing Test Cases in Agile
PPTX
Software Testing or Quality Assurance
PPTX
Types of Software Testing | Edureka
Regression testing
SDLC vs STLC
Types of testing
Software Testing Techniques: An Overview
System testing
Principles of Software testing
Manual testing
Introduction to software testing
Black Box Testing
Software testing
Levels Of Testing.pptx
functional testing
Regression Testing - An Overview
scenario testing in software testing
Software Testing Life Cycle (STLC) | Software Testing Tutorial | Edureka
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...
Writing Test Cases in Agile
Software Testing or Quality Assurance
Ad

Viewers also liked (20)

PPTX
Software quality assurance
PDF
Intro to Software Engineering - Software Quality Assurance
PPT
Software quality assurance
PPT
Fault tolerance
PPTX
Fault tolerance
PPTX
Fault tolerance techniques for real time operating system
PPT
DFD level-0 to 1
PPSX
DISE - Software Testing and Quality Management
PPT
Fault Tolerance System
PDF
01 software test engineering (manual testing)
PPTX
Software Quality Assurance
PPT
Software Fault Tolerance
PPTX
Quality Assurance in Software Ind.
PDF
Fault tolerance
PPT
Manual testing ppt
PPT
Types of Software Testing
PPTX
Software Testing Basics
PPTX
Quality by Design : Quality Target Product Profile & Critical Quality Attrib...
PPT
Introduction To Software Quality Assurance
PPT
Software Testing Fundamentals
Software quality assurance
Intro to Software Engineering - Software Quality Assurance
Software quality assurance
Fault tolerance
Fault tolerance
Fault tolerance techniques for real time operating system
DFD level-0 to 1
DISE - Software Testing and Quality Management
Fault Tolerance System
01 software test engineering (manual testing)
Software Quality Assurance
Software Fault Tolerance
Quality Assurance in Software Ind.
Fault tolerance
Manual testing ppt
Types of Software Testing
Software Testing Basics
Quality by Design : Quality Target Product Profile & Critical Quality Attrib...
Introduction To Software Quality Assurance
Software Testing Fundamentals
Ad

Similar to Software engineering quality assurance and testing (20)

PPTX
IT8076 – Software Testing Intro
PPTX
Software testing ppt
PPTX
Object Oriented Testing(OOT) presentation slides
PPTX
Object oriented testing
PDF
Objectorientedtesting 160320132146
PPTX
System testing
PPTX
Software testing career
PPT
Software Testing Life Cycle
DOC
Testing
PPT
ISTQBCH foundation level chapter 01 fundamentals of testing
PPTX
Software testing career 20180929 update
PPT
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
DOCX
Istqb v.1.2
PPTX
SQAT - Ch.01 - Basics of Software Quality Assurance.pptx
DOCX
Testing in Software Engineering.docx
PPT
Testing and Mocking Object - The Art of Mocking.
PDF
Testing Interview Questions.pdf
PPT
NG_TEST_Presentation_0510
IT8076 – Software Testing Intro
Software testing ppt
Object Oriented Testing(OOT) presentation slides
Object oriented testing
Objectorientedtesting 160320132146
System testing
Software testing career
Software Testing Life Cycle
Testing
ISTQBCH foundation level chapter 01 fundamentals of testing
Software testing career 20180929 update
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
Istqb v.1.2
SQAT - Ch.01 - Basics of Software Quality Assurance.pptx
Testing in Software Engineering.docx
Testing and Mocking Object - The Art of Mocking.
Testing Interview Questions.pdf
NG_TEST_Presentation_0510

More from Bipul Roy Bpl (7)

PPT
Specification and complexity - algorithm
PPT
Sequential circuit-Digital Electronics
PPTX
Test design techniques
PPTX
Garment management system
PPT
Regular expressions-Theory of computation
PPT
Finite automata
PPT
Theory of computing
Specification and complexity - algorithm
Sequential circuit-Digital Electronics
Test design techniques
Garment management system
Regular expressions-Theory of computation
Finite automata
Theory of computing

Recently uploaded (20)

PDF
Module 1 - Introduction to Generative AI.pdf
PDF
OpenColorIO Virtual Town Hall - August 2025
PPTX
Empowering Asian Contributions: The Rise of Regional User Groups in Open Sour...
PPTX
ESDS_SAP Application Cloud Offerings.pptx
PDF
WhatsApp Chatbots The Key to Scalable Customer Support.pdf
PDF
How to Write Automated Test Scripts Using Selenium.pdf
PDF
How to Set Realistic Project Milestones and Deadlines
PDF
OpenEXR Virtual Town Hall - August 2025
PPTX
MCP empowers AI Agents from Zero to Production
PPTX
oracle_ebs_12.2_project_cutoveroutage.pptx
PDF
Top 10 Project Management Software for Small Teams in 2025.pdf
PPTX
Independent Consultants’ Biggest Challenges in ERP Projects – and How Apagen ...
PPTX
Beige and Black Minimalist Project Deck Presentation (1).pptx
PDF
Streamlining Project Management in Microsoft Project, Planner, and Teams with...
PDF
Ragic Data Security Overview: Certifications, Compliance, and Network Safegua...
PDF
10 Mistakes Agile Project Managers Still Make
PDF
SBOM Document Quality Guide - OpenChain SBOM Study Group
PDF
DOWNLOAD—IOBit Uninstaller Pro Crack Download Free
PDF
Coding with GPT-5- What’s New in GPT 5 That Benefits Developers.pdf
PPTX
UNIT II: Software design, software .pptx
Module 1 - Introduction to Generative AI.pdf
OpenColorIO Virtual Town Hall - August 2025
Empowering Asian Contributions: The Rise of Regional User Groups in Open Sour...
ESDS_SAP Application Cloud Offerings.pptx
WhatsApp Chatbots The Key to Scalable Customer Support.pdf
How to Write Automated Test Scripts Using Selenium.pdf
How to Set Realistic Project Milestones and Deadlines
OpenEXR Virtual Town Hall - August 2025
MCP empowers AI Agents from Zero to Production
oracle_ebs_12.2_project_cutoveroutage.pptx
Top 10 Project Management Software for Small Teams in 2025.pdf
Independent Consultants’ Biggest Challenges in ERP Projects – and How Apagen ...
Beige and Black Minimalist Project Deck Presentation (1).pptx
Streamlining Project Management in Microsoft Project, Planner, and Teams with...
Ragic Data Security Overview: Certifications, Compliance, and Network Safegua...
10 Mistakes Agile Project Managers Still Make
SBOM Document Quality Guide - OpenChain SBOM Study Group
DOWNLOAD—IOBit Uninstaller Pro Crack Download Free
Coding with GPT-5- What’s New in GPT 5 That Benefits Developers.pdf
UNIT II: Software design, software .pptx

Software engineering quality assurance and testing

  • 1. Presentation on Software Engineering Quality Assurance and Testing
  • 2. Software-Quality-Testing • The economic importance of Software -The function of machine and equipment depends largely on software -We can not imagine large systems in telecommunication, finance(Bank), or traffic control (Airlines) without software • Software Quality -More and more, the quality of software has become the determining factor for the success of technical or commercial systems and products. • Testing for quality improvement -Testing insure the improvement of the quality of software products as well as the quality of the software development process itself
  • 3. What is Software?? • Computer software, or just software, is a collection of computer programs and related data that provides the instructions to a computer what to do and how to do (for perform a specific job). • Definition Software (as per IEEE 610): Computer programs, procedures and possibly associated documentation and data pertaining to the operation of a computer system. • Types of Software: -System Software -Application Software * ATM Machine
  • 4. What is Software Quality?? • Software Quality (as per ISO/ IEC 9126): The totality of functionality and features of a software product that contribute to its ability to satisfy stated or implied needs. • Software Quality (as IEEE Std 610): The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. Institute of Electrical and Electronics Engineers International Electro technical Commission International Organization for Standardization
  • 5. What is Testing?? • Which measures the: • Quality, • Performance, • Strengths, • Capability and • Reliability of (someone or something), before putting it into widespread use or practice.
  • 6. Software Testing ?? • Which measures the: • Quality, • Performance, • Strengths, • Capability and • Reliability of a software before putting it into widespread use.
  • 7. Software Testing • Software Testing is the process of executing a program or system with the intent of finding errors. • Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.
  • 8. Failure Example 01 • Flight Ariane 5 (Most Expensive Computer Bugs in History) On June 4, 1996, the rocket Ariane 5 tore itself apart 37 seconds after launch because of a malfunction in the control software making the fault most expensive computer bugs in history. -mission critical issue
• 9. Failure Example 02 • Lethal X-rays The Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited in the 1980s. Several patients died because of massive overdoses of radiation, and this happened because of software bugs. - safety-critical issue
• 10. Causes of failures: - Human error: coding, DB design, system configuration… Causes: lack of knowledge, time pressure, complexity… - Environmental conditions: change of environment… Causes: radiation, electromagnetic fields, pollution, sun spots, power failure…
• 11. Cost & Defects [Figure: relative cost of error correction rising across the phases Specification, Design, Coding, Test, Acceptance] - The cost of fixing defects increases with the time they remain in the system. - Detecting errors at an early stage allows for error correction at reduced cost.
  • 12. Software Quality: according to ISO/IEC 9126 software quality consists of: -Functionality -Reliability -Usability -Efficiency -Maintainability -Portability
• 13. Types of QA: - Constructive QA: activities to prevent defects, e.g. through appropriate methods of software engineering - Analytical QA: activities for finding defects, e.g. through testing
• 15. Constructive QA - Organizational: guidelines, standards, checklists, process rules and regulations, legal requirements - Technical: methods, tools, languages, lists/templates, IDEs
• 16. Analytical QA Motto: defects should be detected as early as possible in the process through testing. - Static testing: examination without executing the program - Dynamic testing: includes executing the program; comprises white-box, black-box and experience-based techniques
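The white-box coverage criteria named in the following slides (statement and branch coverage) can be sketched with a tiny invented function; the example and its test values are not from the slides.

```python
# White-box coverage sketch (hypothetical example): one if-statement.

def classify(x):
    """Label a number as negative or non-negative."""
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# Statement coverage: every statement is executed at least once.
# The single test classify(-1) already executes all four statements.
assert classify(-1) == "negative"

# Branch coverage additionally requires the *false* outcome of the
# if-condition, i.e. a second test where x < 0 does not hold.
assert classify(5) == "non-negative"
```

With only the first test, statement coverage is 100% but branch coverage is not, since the empty false-branch of the `if` was never taken; this is why branch coverage is the stronger criterion.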
• 17. Figure of Analytical QA - Static: reviews/walkthroughs, control flow analysis, data flow analysis, compiler, metrics/analysis - Dynamic, white box: statement coverage, branch coverage, condition coverage, path coverage - Dynamic, black box: equivalence partitioning, boundary value analysis, state transition testing, decision tables, use-case-based testing - Dynamic: experience-based techniques
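Two of the black-box techniques listed here, equivalence partitioning and boundary value analysis, can be illustrated with a small sketch; the `accept_age` function and its valid range 18..65 are invented for the example.

```python
# Black-box test design sketch (hypothetical example):
# accept_age() is assumed to accept ages 18..65 inclusive.

def accept_age(age):
    """Return True if the applicant's age is in the valid range 18..65."""
    return 18 <= age <= 65

# Equivalence partitioning: pick one representative value per partition,
# since all values in a partition are expected to behave the same way.
assert accept_age(10) is False   # partition: below range (invalid)
assert accept_age(30) is True    # partition: in range (valid)
assert accept_age(70) is False   # partition: above range (invalid)

# Boundary value analysis: test exactly at and next to each boundary,
# where off-by-one defects typically hide.
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert accept_age(age) is expected
```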
• 18. Software Quality: according to ISO/IEC 9126, software quality consists of: - Functionality (functional quality) - Reliability, Usability, Efficiency, Maintainability, Portability (non-functional quality)
• 19. Functional Q-attributes: Functional quality means correctness & completeness. Correctness: the functionality meets the required attributes/capabilities. Completeness: the functionality meets all (functional) requirements. According to ISO/IEC 9126, functionality includes: - Suitability - Accuracy - Compliance - Interoperability - Security
• 20. Non-Functional Q-attributes: - Reliability: maturity, fault tolerance, recovery after failure - Usability: learnability, understandability, attractiveness - Efficiency: minimal use of resources - Maintainability: verifiability, changeability - Portability: transferability, ease of installation…
• 21. Non-Functional Q-attributes Reliability - maturity, fault tolerance, recovery after failure - Characteristic: under given conditions, a software system will keep its capabilities/functionality over a period of time - reliability = quality/time Usability - learnability, understandability, attractiveness - Characteristics: easy to learn, compliance with guidelines, intuitive handling
• 22. Efficiency - System behavior: functionality and time behavior - Characteristics: the system requires a minimal use of resources (e.g. CPU time) for executing the given task Maintainability - verifiability, stability, analyzability, changeability - Characteristics: amount of effort needed to introduce changes in system components Portability - replaceability, compliance, installability - Ability to transfer the software to a new environment (software, hardware, organization) - Characteristics: easy to install and uninstall, parameterization
• 23. How much Testing is Enough? - Exit criteria: not finding (any more) defects is not an appropriate criterion for stopping testing activities - Risk-based testing - Time and budget
• 24. Test case description according to IEEE 829: - Distinct identification: ID or key in order to link, for example, an error report to the test case where it appeared - Preconditions: situation previous to test execution, or characteristics of the test object before conducting the test case - Input values: description of the input data for the test object - Expected result: output data that the test object is expected to produce - Postconditions: characteristics of the test object after test execution, description of its situation after the test - Dependencies: order of execution of test cases, reason for dependencies - Requirements: characteristics of the test object that the test case will examine - How to execute the test and check results (optional) - Priority (optional)
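The fields above can be captured in a simple record; the following sketch uses a plain dictionary, and all field values (IDs, requirement names, credentials) are invented for illustration, not mandated by IEEE 829.

```python
# Illustrative IEEE 829-style test case record (all values are invented).
test_case = {
    "id": "TC-042",
    "preconditions": "User account 'demo' exists and is logged out.",
    "input_values": {"username": "demo", "password": "secret"},
    "expected_result": "Login succeeds; dashboard page is shown.",
    "postconditions": "User 'demo' is logged in; a session exists.",
    "dependencies": ["TC-001"],      # must run after account creation
    "requirements": ["REQ-LOGIN-01"],
    "priority": "high",              # optional field
}

def is_complete(tc):
    """Check that the core descriptive fields are all present."""
    mandatory = {"id", "preconditions", "input_values",
                 "expected_result", "postconditions"}
    return mandatory <= tc.keys()

assert is_complete(test_case)
```

Keeping a distinct `id` per case is what makes the traceability mentioned above possible: an error report can reference `TC-042`, and `TC-042` in turn references the requirement it examines.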
• 26. Testing and Debugging • Test and re-test are test activities. Testing shows system failures; re-testing proves that the defect has been corrected. • Debugging and correcting defects are developer activities. Through debugging, developers can reproduce failures, investigate the state of the program and find the corresponding defect in order to correct it. [Cycle: Test → Debugging → Correcting defects → Re-test]
• 27. Error, defect, failure - Error (IEEE 610): a human action that produces an incorrect result, e.g. a programming error - Defect: a flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition - Failure: the physical or functional manifestation of a defect. A defect, if encountered during execution, may cause a failure; a deviation of the component or system from its expected delivery, service or result. (After Fenton) Defects cause failures.
  • 28. Defects and Failure A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Debugging vs. Testing Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyses and removes the cause of the failure.
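The error → defect → failure chain described above can be made concrete with a minimal sketch; the `average` function and its wrong divisor are invented for the example.

```python
# Error -> defect -> failure, in miniature (hypothetical example).

def average(values):
    """Intended to return the arithmetic mean of a non-empty list."""
    # Defect: the programmer's error (mistake) left a wrong divisor
    # in the code: len(values) + 1 instead of len(values).
    return sum(values) / (len(values) + 1)

# The defect exists in the code whether or not it ever runs. The
# *failure* only appears when the defective statement is executed
# and the observed result deviates from the expected one.
actual = average([2, 4, 6])   # executes the defect: 12 / 4 = 3.0
expected = 4.0                # correct mean: 12 / 3
assert actual != expected     # failure observed: 3.0 instead of 4.0
```

This also illustrates the later point that dynamic testing can only reveal defects whose statements are actually executed by some test case.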
• 29. Seven Principles of Testing: Principles A number of testing principles have been suggested over the past 40 years and offer general guidelines common to all testing. Principle 1 – Testing shows presence of defects Testing can prove the presence of defects, but cannot prove the absence of defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
  • 30. Principle 2 – Exhaustive testing is impossible Testing everything (all combinations of inputs and preconditions) is not feasible. Instead of exhaustive testing, risk analysis, time & cost and priorities should be used to focus testing efforts. Principle 3 – Early testing To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.
  • 31. Principle 4 – Defect clustering Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during prerelease testing, or is responsible for most of the operational failures. Principle 5 – Pesticide paradox If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.
• 32. Principle 6 – Testing is context dependent Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site. Principle 7 – Absence-of-errors fallacy The absence of found errors does not prove quality. Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.
  • 33. Depending on the approach chosen, testing will take place at different points within the development process - Testing is a process itself - The testing process is determined by the following phases - Test planning - Test analysis and test design - Test implementation and test execution - Evaluating Exit Criteria and reporting - Test closure activities - Test Controlling (at all phases) Test phases may overlap Testing as a process within the SW development process
• 34. Testing Process [Diagram: Test Planning → Test Analysis and Test Design → Test Implementation and Test Execution → Evaluating Exit Criteria and Reporting → Test Closure Activities, with Test Control spanning all phases]
• 35. - Testing is more than test execution! - Includes overlapping and backtracking - Each phase of the testing process takes place concurrently with the corresponding phase of the software development process
• 36. Test Planning – main tasks - Determining the scope and risks - Identifying the objectives of testing and the exit criteria - Determining the approach: test techniques, test coverage, testing teams - Implementing the test method/test strategy; planning the time span for the activities that follow - Acquiring and scheduling test resources: people, test environment, test budget
• 37. Test Analysis and Design – main tasks /1 - Reviewing the test basis (requirements, system architecture, design, interfaces); analyzing the system architecture and system design, including interfaces among test objects - Identifying specific test conditions and required test data; evaluating the availability of test data and/or the feasibility of generating test data - Designing the tests/test cases; creating and prioritizing logical test cases (test cases without specific values for test data) - Selecting test tools
• 38. Test Implementation & Execution – developing and prioritizing test cases • creating test data, writing test procedures • creating test sequences – creating test automation scripts, if necessary – configuring the test environment – executing tests (manually or automatically) • following the test sequence stated in the test plan (test suites, order of test cases) – recording and analyzing test results – re-testing (after defect correction) – regression testing • ensuring that changes (after installing a new release, or after error fixing) did not introduce new defects or uncover others
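The difference between re-testing (confirmation testing) and regression testing from the list above can be sketched with plain assertions; the `discount` function, its earlier "divide by 10" bug and the specific test values are invented for the example.

```python
# Re-test vs. regression test, in miniature (hypothetical example).

def discount(price, percent):
    """Fixed version: an earlier release wrongly divided by 10 here."""
    return price - price * percent / 100

def test_confirmation():
    # Re-test: repeat exactly the test that exposed the original defect,
    # confirming the fix (10% off 200 must be 180).
    assert discount(200, 10) == 180

def test_regression():
    # Regression tests: re-run previously passing cases to make sure the
    # fix did not break behaviour in unchanged areas.
    assert discount(100, 0) == 100
    assert discount(100, 100) == 0
    assert discount(50, 50) == 25

test_confirmation()
test_regression()
```

In practice both kinds of test are candidates for the automation scripts mentioned above, since they are repeated on every new release.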
• 39. • Evaluating Exit Criteria – main tasks – Assessing test execution against the defined objectives (e.g. test exit criteria) – Evaluating test logs (summarizing test activities and test results, communicating exit criteria) – Providing the information needed to decide whether more testing should take place
• 40. Test Control Test control is an ongoing activity influencing test planning. The test plan may be modified according to the information acquired from test controlling. - The status of the test process is determined by comparing the progress achieved against the plan; necessary activities are started accordingly - Measuring and analyzing results - Monitoring and documenting test progress, test coverage and the exit criteria - Starting corrective measures - Preparing and making decisions
• 41. • Test Closure Activities – main tasks – Collecting data from completed test activities to consolidate experience, facts and numbers – Closing incident reports or raising change requests for any remaining open points – Documenting the acceptance of the system – Finalizing and archiving testware, the test environment and the test infrastructure for later reuse; handing over to operations – Analyzing “lessons learned” for future projects
• 42. • Test suite/test sequence – a set of several test cases for a component or system, where the postcondition of one test is used as the precondition for the next one • Test procedure specification (test scenario) – a document specifying a sequence of actions for the execution of a test. Also known as a test script or manual test script. (After IEEE 829) • Test execution – the process of running a test, producing actual results. • Test log (test protocol, test report) – a chronological record of relevant details about the execution of tests: when the test was done, what result was produced. • Regression testing – testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed. • Confirmation testing (re-test) – repeating a test after a defect has been fixed, in order to confirm that the original defect has been successfully removed
• 43. • Roles and Responsibilities Perception: developers are constructive, testers are destructive. Wrong! Testing is a constructive activity as well; it aims at eliminating defects from a product. Developer role vs. tester role: - Developer: implements requirements / Tester: plans testing activities - Developer: develops structures / Tester: designs test cases - Developer: designs and programs the software / Tester: is concerned only with finding defects - Developer: creating a product is his success / Tester: finding an error made by a developer is his success
• 44. Personal attributes of a good tester /1 Curious, perceptive, attentive to detail – To comprehend the practical scenarios of the customer – To be able to analyze the structure of the test object – To discover details where failures might show Skepticism and a critical eye – Test objects contain defects – you just have to find them – Do not believe everything you are told by the developers – One must not be frightened by the fact that serious defects may often be found which will have an impact on the course of the project.
• 45. Personal attributes of a good tester /2 Good communication skills – To bring bad news to the developers – To overcome frustrated states of mind – Both technical issues as well as issues of the practical use of the system must be understood and communicated – Positive communication can help to avoid or to ease difficult situations – To quickly establish a working relationship with the developers Experience – Personal factors influence error occurrence – Experience helps in identifying where errors might accumulate
• 46. Differences: to design – to develop – to test – Testing requires a different mindset from designing and developing new computer systems • Common goal: to provide good software • Designer’s mission: help the customer to specify the right requirements • Developer’s mission: convert the requirements into functions • Tester’s mission: examine the correct implementation of the customer’s requirements – In principle, one person can be given all three roles • Differences in goals and role models must be taken into account • This is difficult but possible • Other solutions (independent testers) are often easier and produce better results
• 47. Independent testing – The separation of testing responsibilities supports the independent evaluation of test results. – The diagram below shows the degree of independence as a bar chart.
• 48. • Types of test organization /1 Developer tests - The developer will never examine his ”creation” without bias (emotional attachment) • He does, however, know the test object better than anybody else • Extra costs result from orienting other persons on the test object - Human beings tend to overlook their own faults • The developer runs the risk of not recognizing even self-evident defects - Errors made because of misinterpretation of the requirements will remain undetected • Setting up test teams in which developers test each other’s products helps to avoid, or at least lessen, this shortcoming.
• 49. Types of test organization /2 • Teams of developers – Developers speak the same language – Costs for orientation in the test object are kept moderate, especially when the teams exchange test objects – Danger of generating conflicts among development teams • One developer who looks for and finds a defect will not be the other developer’s best friend – Mingling development and test activities • Frequent switching between ways of thinking • Makes it difficult to control the project budget
• 50. Types of test organization /3 • Test teams – Creating test teams covering different project areas enhances the quality of testing – It is important that test teams of different areas in the project work independently
• 51. Types of test organization /4 • Outsourcing tests – The separation of testing activities and development activities offers the best independence between test object and tester – Outsourced test activities are performed by persons having relatively little knowledge about the test object and the project background • The learning curve brings high costs; therefore unbiased external experts should be involved at the early stages of the project – External experts have a high level of testing know-how: • An appropriate test design is ensured • Optimal methods and tools are found • Designing test cases automatically – computer-aided generation of test cases, e.g. based on formal specification documents, is also independent
• 52. Difficulties /1 • Unable to understand each other – Developers should have basic knowledge of testing – Testers should have basic knowledge of software development • Especially in stressful situations, discovering errors that someone has made often leads to conflicts – The way defects are documented and described will decide how the situation develops – Persons should not be criticized; the defects must be stated factually – Defect descriptions should help the developers find the error – Common objectives must always be the main issue
• 53. Difficulties /2 • Communication between testers and developers is missing or insufficient. This can make it impossible to work together. – The tester is seen as “only the messenger of bad news” – Improvement: try to see yourself in the other person’s role. Did my message come through? Did the answer reach me? • A solid test requires an appropriate distance to the test object – An independent and unbiased position is acquired through distance from the development – However, too large a distance between the test object and the development team will lead to more effort and time for testing.
• 54. Summary • People make mistakes; every implementation has defects. • Human nature makes it difficult to face one’s own defects (error blindness). • Developer and tester: two different worlds meet each other. – Developing is constructive – something is created that was not there before – Testing seems destructive at first glance – defects will be found – Together, development and testing are constructive in their objective to ensure software with the fewest defects possible. • Independent testing enhances the quality of testing: instead of developers, use tester teams and teams with external personnel for testing.