Unit-4
Software Testing
Testing is the process of exercising a program with the specific intent of
finding errors prior to delivery to the end user.
Introduction
• A strategy for software testing integrates the design of software test
cases into a well-planned series of steps that result in successful
development of the software
• The strategy provides a road map that describes the steps to be taken,
when, and how much effort, time, and resources will be required
• The strategy incorporates test planning, test case design, test execution,
and test result collection and evaluation
• The strategy provides guidance for the practitioner and a set of
milestones for the manager
• Because of time pressures, progress must be measurable and problems
must surface as early as possible
Strategic approach to software testing
• Generic characteristics of strategic software testing:
– To perform effective testing, a software team should conduct
effective formal technical reviews. By doing this, many errors will
be eliminated before testing starts.
– Testing begins at the component level and works "outward"
toward the integration of the entire computer-based system.
– Different testing techniques are appropriate at different points in
time.
– Testing is conducted by the developer of the software and (for
large projects) an independent test group.
– Testing and debugging are different activities, but debugging
must be accommodated in any testing strategy.
Verification and Validation
• Software testing is part of a broader group of activities called
verification and validation (V&V) that are involved in software quality
assurance(SQA).
• Verification (Are the algorithms coded correctly?)
– The set of activities that ensure that software correctly implements
a specific function or algorithm
• Validation (Does it meet user requirements?)
– The set of activities that ensure that the software that has been
built is traceable to customer requirements
Boehm [Boe81] states this another way:
– Verification: "Are we building the product right?"
– Validation: "Are we building the right product?"
• V&V encompasses a wide array of SQA activities that include
– Formal technical reviews,
– quality and configuration audits,
– performance monitoring,
– simulation,
– feasibility study,
– documentation review,
– database review,
– algorithm analysis,
– development testing,
– qualification testing, and installation testing
• Testing does provide the last bastion from which quality can
be assessed and, more pragmatically, errors can be
uncovered.
• Quality is not measured only by the number of errors found; it is
also built in through the application of methods, process models,
tools, formal technical reviews, and similar practices, and is then
confirmed during testing.
Who Tests the Software?
• Developer
– Understands the system, but will test "gently", and is driven by "delivery"
• Independent tester
– Must learn about the system, but will attempt to break it, and is driven by quality
Organizing for Software Testing
• Testing should aim at "breaking" the software
• Common misconceptions
– The developer of software should do no testing at all
– The software should be given to a secret team of testers who
will test it unmercifully
– The testers get involved with the project only when the testing
steps are about to begin
• Reality: Independent test group(ITG)
– Removes the inherent problems associated with letting the
builder test the software that has been built
– Removes the conflict of interest that may otherwise be present
– Works closely with the software developer during analysis and
design to ensure that thorough testing occurs
Software Testing Strategy for
conventional software architecture
Levels of Testing for Conventional
Software
A software process and a strategy for software testing may also be
viewed in the context of the spiral.
• Unit testing
– begins at the vortex of the spiral and concentrates on each
component/function of the software as implemented in the
source code
• Integration testing
– Focuses on the design and construction of the software
architecture
• Validation testing
– Requirements are validated against the constructed software
• System testing
– The software and other system elements are tested as a
whole
• From a procedural point of view, testing is a series of four
steps that are implemented sequentially.
Testing Strategy applied to
Conventional Software
• Initially, tests focus on each component individually, ensuring that it
functions properly as a unit.
• Unit testing
– makes heavy use of white-box testing
– Exercises specific paths in a component's control structure to
ensure complete coverage and maximum error detection
– Components are then assembled and integrated
• Integration testing
– addresses the issues associated with the dual problems of
verification and program construction.
– Focuses on inputs and outputs, and how well the components fit
together and work together
– Black-box test case design techniques are the most prevalent
during integration.
Testing Strategy applied to Conventional
Software
• Validation testing
– Provides final assurance that the software meets all
functional, behavioral, and performance requirements
– Black-box testing techniques are used exclusively during
validation.
• System testing
– Verifies that all system elements (software, hardware,
people, databases) mesh properly and that overall system
function and performance is achieved
Testing Strategy applied to Object-Oriented Software
• Must broaden testing to include the detection of errors in analysis and
design models
• Unit testing loses some of its meaning and integration testing
changes significantly
• Uses the same philosophy as conventional software testing, but a
different approach
• Test "in the small" and then work out to testing "in the large"
– Testing in the small involves class attributes and operations; the main
focus is on communication and collaboration within the class
– Testing in the large involves a series of regression tests to uncover
errors due to communication and collaboration among classes
• Finally, the system as a whole is tested to detect errors in fulfilling
requirements
Criteria for Completion of Testing
When is Testing Complete?
• There is no definitive way to declare that "we are done with
testing".
• Every time a user executes the software, the program is being
tested
• Sadly, testing usually stops when a project is running out of time,
money, or both
• One approach is to divide the test results into various severity levels
– Then consider testing to be complete when certain levels of errors no
longer occur or have been repaired or eliminated
Test strategies for conventional software
Unit Testing
• Focuses verification effort on the smallest unit of software design –
component or module.
• Using the component-level design description as a guide
– important control paths are tested to uncover errors within the
boundary of the module.
• Concentrates on the internal processing logic and data structures
• Is simplified when a module is designed with high cohesion
– Reduces the number of test cases
– Allows errors to be more easily predicted and uncovered
• Concentrates on critical modules and those with high cyclomatic
complexity when testing resources are limited
• Unit test is white-box oriented, and the step can be conducted in
parallel for multiple components.
• Unit test consists of
– Unit Test Considerations
– Unit Test Procedures
Unit Test Considerations
Targets for Unit Test Cases
• Module interface
– Ensure that information flows properly into and out of the
module
• Local data structures
– Ensure that data stored temporarily maintains its integrity
during all steps in an algorithm execution
• Boundary conditions
– Ensure that the module operates properly at boundary
values established to limit or restrict processing
• Independent paths (basis paths)
– Paths are exercised to ensure that all statements in a
module have been executed at least once
• Error handling paths
– Ensure that the algorithms respond correctly to specific
error conditions
• Test cases should be designed to uncover errors due to
– Computations,
– Incorrect comparisons, or
– Improper control flow
• Basis path and loop testing are effective techniques for
uncovering a broad array of path errors.
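Such computation, comparison, and control-flow errors are typically hunted with unit test cases that couple each input to an expected result. A minimal sketch in Python (the `discount` function and its values are hypothetical, not from the slides):

```python
import unittest

def discount(price, rate):
    """Component under test (hypothetical): apply a percentage discount."""
    if not 0 <= rate <= 100:
        raise ValueError("rate must be between 0 and 100")
    return price - price * rate / 100

class DiscountUnitTest(unittest.TestCase):
    def test_typical_computation(self):
        # Exercises the arithmetic path with a representative value.
        self.assertEqual(discount(200, 10), 180)

    def test_comparison_boundaries(self):
        # Exercises both sides of the 0..100 comparison.
        self.assertEqual(discount(200, 0), 200)
        self.assertEqual(discount(200, 100), 0)

    def test_error_handling_path(self):
        # Exercises the error-handling control-flow path.
        with self.assertRaises(ValueError):
            discount(200, 101)

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method here targets one of the error categories listed above; a basis-path-driven suite would add cases until every independent path is covered.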
Errors commonly found during unit testing
• More common errors in computation are
– misunderstood or incorrect arithmetic precedence
– mixed mode operations,
– incorrect initialization,
– precision inaccuracy,
– incorrect symbolic representation of an expression.
• Comparison and control flow are closely coupled to one another
– Comparison of different data types,
– Incorrect logical operators or precedence,
– Incorrect comparison of variables
– Improper or nonexistent loop termination,
– Failure to exit when divergent iteration is encountered
– improperly modified loop variables.
• Potential errors that should be tested when error handling is
evaluated are
– Error description is unintelligible.
– Error noted does not correspond to error encountered.
– Error condition causes system intervention prior to error
handling.
– Exception-condition processing is incorrect.
– Error description does not provide enough information to
assist in the location of the cause of the error.
• Software often fails at its boundaries. That is, errors often
occur when the nth element of an n-dimensional array is
processed or when the maximum or minimum allowable value
is encountered.
• So a boundary value analysis (BVA) test should always be among
the last tasks of unit testing.
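A minimal boundary value analysis sketch (the `clamp` function and the 0..100 limits are hypothetical): test cases sit at, and immediately on either side of, each established limit.

```python
def clamp(value, low, high):
    """Hypothetical module: restrict a value to the range [low, high]."""
    return max(low, min(value, high))

LOW, HIGH = 0, 100
# Boundary value analysis: probe at and just around each limit,
# where errors most often hide.
cases = {
    LOW - 1: 0,     # just below the minimum
    LOW: 0,         # exactly at the minimum
    LOW + 1: 1,     # just above the minimum
    HIGH - 1: 99,   # just below the maximum
    HIGH: 100,      # exactly at the maximum
    HIGH + 1: 100,  # just above the maximum
}
for given, expected in cases.items():
    assert clamp(given, LOW, HIGH) == expected
```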
Unit Test Procedures
• Unit test design can be performed before coding begins or after
source code has been generated.
• A review of design information provides guidance for
establishing test cases. Each test case should be
coupled with a set of expected results.
• Because a component is not a stand-alone program,
driver and/or stub software must be developed for each
unit test.
Drivers and Stubs for
Unit Testing
• Driver
– A simple main program that accepts test case data, passes
such data to the component being tested, and prints the
returned results
• Stubs
– Serve to replace modules that are subordinate to (called by)
the component to be tested
– A stub uses the subordinate module’s exact interface, may do
minimal data manipulation, provides verification of entry, and
returns control to the module undergoing testing
• Drivers and stubs both represent overhead
– That is, both are software that must be written but that is not
delivered with the final software product.
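A sketch of both pieces of scaffolding in Python (the component, the stub, and the values are all hypothetical): the stub stands in for a subordinate module, and the driver feeds test case data to the component and prints the results.

```python
# Stub: replaces a subordinate module (here, an imagined price lookup)
# using the same interface; it verifies entry, does minimal data
# manipulation, and returns control to the caller.
def fetch_price_stub(item_id):
    print(f"stub entered: item_id={item_id}")  # verification of entry
    return 9.99  # canned value, no real lookup

# Component under test, wired to call the stub instead of the real module.
def order_total(item_id, quantity, fetch_price=fetch_price_stub):
    return fetch_price(item_id) * quantity

# Driver: a simple main program that accepts test case data, passes it
# to the component being tested, and prints the returned results.
if __name__ == "__main__":
    for item_id, qty in [("A1", 2), ("B7", 0)]:
        print(item_id, qty, "->", order_total(item_id, qty))
```

Both pieces are overhead in exactly the sense the slide describes: they are written only to make the unit test possible and are not delivered with the product.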
Unit Test Environment
• Because drivers and stubs require development effort of their
own, complete testing can in some cases be postponed until the
integration test step
• Unit testing is simplified when a component with high
cohesion is designed.
• When only one function is addressed by a component,
the number of test cases is reduced and errors can be
more easily predicted and uncovered.
Integration testing
• Integration testing is a systematic technique for
constructing the program structure
– while at the same time conducting tests to uncover
errors associated with interfacing.
• The objective is to take unit tested components and build a
program structure that has been dictated by design.
• Two Approaches
– Non-incremental Integration Testing
– Incremental Integration Testing
Integration testing [contd.]
• Non-incremental integration
– Commonly called the “Big Bang” approach.
– All components are combined in advance
– The entire program is tested as a whole
– Chaos results
– Many seemingly unrelated errors are encountered
– Correction is difficult because isolation of causes is complicated
– Once one set of errors is corrected, more errors appear, and testing seems to enter
an endless loop
• Incremental integration
– exact opposite of the big bang approach.
– The program is constructed and tested in small increments, where errors are easier
to isolate and correct
– Interfaces are more likely to be tested completely
– A systematic test approach is applied
– Three kinds
• Top-down integration
• Bottom-up integration
• Sandwich integration
Top-down Integration
• Top-down integration testing is an incremental approach to construction of
program structure.
• Modules are integrated by moving downward through the control hierarchy,
beginning with the main module
• Subordinate modules are incorporated in either a depth-first or breadth-first
fashion
– DF: All modules on a major control path are integrated
– BF: All modules directly subordinate at each level are integrated
• Advantages
– This approach verifies major control or decision points early in the test
process
• Disadvantages
– Stubs need to be created to substitute for modules that have not been built
or tested yet; this code is later discarded
– Because stubs are used to replace lower level modules, no significant data
flow can occur until much later in the integration/testing process
Top down integration
• Depth-first integration would integrate all components on a major
control path of the structure.
• For example, selecting the left hand path,
– Components M1, M2 , M5 would be integrated first.
– Next, M8 or M6 would be integrated
– The central and right hand control paths are built.
• Breadth-first integration incorporates all components directly
subordinate at each level, moving across the structure horizontally.
• The steps would be:
– components M2, M3, and M4 would be integrated first
– the next control level (M5, M6, and so on) follows.
Top-down Integration process five steps:
1. The main control module is used as a test driver and stubs are
substituted for all components directly subordinate to the main
control module.
2. Depending on the integration approach selected (i.e., depth or
breadth first), subordinate stubs are replaced one at a time with
actual components.
3. Tests are conducted as each component is integrated
4. On completion of each set of tests, another stub is replaced with
the real component.
5. Regression testing may be conducted to ensure that new errors
have not been introduced.
The process continues from step 2 until the entire program structure is
built.
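The five steps can be sketched as follows (a hypothetical two-level hierarchy; the module names are invented): the main control module is exercised with stubs, which are then replaced one at a time, retesting after each replacement.

```python
# Hypothetical hierarchy: main module m1 calls subordinates m2 and m3.
def m2_stub():
    return "m2-stub"

def m3_stub():
    return "m3-stub"

def m2_real():
    return "m2-real"

def m1(m2=m2_stub, m3=m3_stub):
    """Main control module: the focus of the top-down test."""
    return [m2(), m3()]

# Step 1: all directly subordinate components are stubs.
assert m1() == ["m2-stub", "m3-stub"]

# Steps 2-4: replace one stub at a time with the actual component,
# conducting tests (and regression tests) after each replacement.
assert m1(m2=m2_real) == ["m2-real", "m3-stub"]
```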
Problems in top-down integration
• Logistical problems can arise
• The most common problem occurs when processing at low levels
in the hierarchy is required to adequately test upper levels.
• No significant data can flow upward in the program structure,
because stubs replace low-level modules at the beginning of
top-down testing. In this case, the tester has three choices:
– Delay many tests until stubs are replaced with actual
modules
– develop stubs that perform limited functions that simulate
the actual module
– integrate the software from the bottom of the hierarchy
upward
Bottom-up Integration
• Integration and testing starts with the most atomic modules (i.e.,
components at the lowest levels in the program structure) in the control
hierarchy
• Advantages
– This approach verifies low-level data processing early in the testing
process
– Need for stubs is eliminated
• Disadvantages
– Driver modules need to be built to test the lower-level modules; this
code is later discarded or expanded into a full-featured version
– Drivers inherently do not contain the complete algorithms that will
eventually use the services of the lower-level modules; consequently,
testing may be incomplete or more testing may be needed later when
the upper level modules are available
Bottom up integration process steps
• Low-level components are combined into clusters
(sometimes called builds) that perform a specific
software subfunction.
• A driver (a control program for testing) is written to
coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined moving
upward in the program structure.
Bottom up integration
Example
• Components are combined to form clusters 1, 2, and 3.
Each of the clusters is tested using a driver.
• Components in clusters 1 and 2 are subordinate to Ma.
• Drivers D1 and D2 are removed and the clusters are
interfaced directly to Ma. Similarly, driver D3 for cluster 3
is removed prior to integration with module Mb.
• Both Ma and Mb will ultimately be integrated with
component Mc, and so forth.
Sandwich Integration
• Consists of a combination of both top-down and bottom-up integration
• Occurs both at the highest level modules and also at the lowest level
modules
• Proceeds using functional groups of modules, with each group
completed before the next
– High and low-level modules are grouped based on the control and
data processing they provide for a specific program feature
– Integration within the group progresses in alternating steps
between the high and low level modules of the group
– When integration for a certain functional group is complete,
integration and testing moves onto the next group
• Requires a disciplined approach so that integration doesn’t tend
towards the “big bang” scenario
Regression Testing
• Each time a new module is added as part of integration testing
– New data flow paths are established
– New I/O may occur
– New control logic is invoked
• These changes may cause problems with functions that previously worked
flawlessly.
• Regression testing re-executes a small subset of tests that have already been
conducted
– Ensures that changes have not propagated unintended side effects
– Helps to ensure that changes do not introduce unintended behavior or additional
errors
– May be done manually or through the use of automated capture/playback tools
• Regression test suite contains three different classes of test cases
– A representative sample of tests that will exercise all software functions
– Additional tests that focus on software functions that are likely to be affected by
the change
– Tests that focus on the actual software components that have been changed
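A toy sketch of such a suite (the functions and expected values are hypothetical): the three classes of test cases are kept as separate groups and re-executed after every change.

```python
def add(a, b):   # unchanged component
    return a + b

def mul(a, b):   # component modified by the latest change
    return a * b

# Class 1: representative sample exercising all software functions.
representative_sample = [(add, (2, 3), 5), (mul, (2, 3), 6)]
# Class 2: tests focusing on functions likely affected by the change.
affected_by_change = [(mul, (0, 5), 0)]
# Class 3: tests focusing on the changed component itself.
changed_components = [(mul, (-2, 3), -6)]

def run(suite):
    for fn, args, expected in suite:
        assert fn(*args) == expected, f"regression in {fn.__name__}"

# Re-execute the whole regression suite after the change.
for suite in (representative_sample, affected_by_change, changed_components):
    run(suite)
```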
Smoke Testing
• Smoke testing is an integration testing approach that is commonly used when “shrink wrapped”
software products are being developed.
• Taken from the world of hardware
– Power is applied and a technician checks for sparks, smoke, or other dramatic signs of
fundamental failure
• Designed as a pacing mechanism for time-critical projects
– Allows the software team to assess its project on a frequent basis
• Smoke testing includes the following activities:
– The software is compiled and linked into a build
• A build includes all data files, libraries, reusable modules, and engineered components
that are required to implement one or more product functions.
– A series of breadth tests is designed to expose errors that will keep the build from properly
performing its function
• The goal is to uncover “show stopper” errors that have the highest likelihood of throwing
the software project behind schedule
– The build is integrated with other builds and the entire product is smoke tested daily
• Daily testing gives managers and practitioners a realistic assessment of the progress of
the integration testing
– After a smoke test is completed, detailed test scripts are executed
• The integration approach may be top down or bottom up.
Benefits of Smoke Testing
• Integration risk is minimized.
– Because smoke tests are conducted daily, incompatibilities and
other show-stopper errors are uncovered early
• The quality of the end product is improved.
– Smoke testing is likely to uncover functional errors as well as
architectural and component-level design defects, so better
product quality results.
• Error diagnosis and correction are simplified.
– Smoke testing will probably uncover errors in the newest
components that were integrated
• Progress is easier to assess.
– Frequent tests give both managers and practitioners a realistic
assessment of integration testing progress.
Validation Testing
• Validation testing follows integration testing
• The distinction between conventional and object-oriented software disappears
• Focuses on user-visible actions and user-recognizable output from the system
• Demonstrates conformity with requirements
• Designed to ensure that
– All functional requirements are satisfied
– All behavioral characteristics are achieved
– All performance requirements are attained
– Documentation is correct
– Usability and other requirements are met (e.g., transportability, compatibility, error recovery,
maintainability)
• After each validation test, one of two possible conditions exists:
– The function or performance characteristic conforms to specification and is accepted, or
– A deviation from specification is uncovered and a deficiency list is created
• A configuration review or audit ensures that all elements of the software configuration
have been properly developed, cataloged, and have the necessary detail for entering
the support phase of the software life cycle
Alpha and Beta Testing
• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that
cannot be controlled by the developer
– The end-user records all problems that are encountered and reports
these to the developers at regular intervals
• After beta testing is complete, software engineers make software
modifications and prepare for release of the software product to the
entire customer base
System Testing
• System testing is actually a series of different tests
whose primary purpose is to fully exercise the computer-
based system.
• Although each test has a different purpose, all work to
verify that system elements have been properly
integrated and perform allocated functions.
• Types of system tests are:
– Recovery Testing
– Security Testing
– Stress Testing
– Performance Testing
Different Types
• Recovery testing
– Tests for recovery from system faults
– Forces the software to fail in a variety of ways and
verifies that recovery is properly performed
– If recovery is automatic (performed by the system
itself); reinitialization, checkpointing mechanisms, data
recovery, and restart are evaluated for correctness.
– If recovery requires human intervention, the mean-time-to-repair
(MTTR) is evaluated to determine whether it is within acceptable
limits.
• Security testing
– Verifies that protection mechanisms built into a system
will, in fact, protect it from improper access
Different Types
• Stress testing
– Executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume
– A variation of stress testing is a technique called
sensitivity testing
• Performance testing
– Tests the run-time performance of software within the
context of an integrated system
– Often coupled with stress testing and usually requires
both hardware and software instrumentation
– Can uncover situations that lead to degradation and
possible system failure
THE ART OF DEBUGGING
• Debugging is the process that results in the removal of
the error.
• Although debugging can and should be an orderly
process, it is still very much an art rather than a science.
• Debugging is not testing but always occurs as a
consequence of testing.
Debugging Process
• The debugging process begins with the execution of a
test case.
• Results are examined, and a lack of correspondence
between expected and actual performance is
encountered (caused by an underlying error).
• Debugging process attempts to match symptom with
cause, thereby leading to error correction.
• The debugging process always has one of two
outcomes:
– The cause will be found and corrected,
– The cause will not be found.
• The person performing debugging may suspect a cause,
design a test case to help validate that suspicion, and work
toward error correction in an iterative fashion.
Why is debugging so difficult?
1. The symptom may disappear (temporarily) when another error is corrected.
2. The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
3. The symptom may be caused by human error that is not easily traced (e.g., wrong
input, a wrongly configured system).
4. The symptom may be a result of timing problems rather than processing
problems (e.g., a result that takes too long to display).
5. It may be difficult to accurately reproduce input conditions (e.g., a real-time
application in which input ordering is indeterminate).
6. The symptom may be intermittent (e.g., a connection that is irregular or broken). This is
particularly common in embedded systems that couple hardware and software.
7. The symptom may be due to causes that are distributed across a number of tasks
running on different processors.
Debugging Approaches or strategies
• Debugging has one overriding objective: to find and correct the
cause of a software error.
• Three categories for debugging approaches
– Brute force
– Backtracking
– Cause elimination
Brute Force:
• probably the most common and least efficient method for
isolating the cause of a software error.
• Apply brute force debugging methods when all else fails.
• Using a "let the computer find the error" philosophy, memory
dumps are taken, run-time traces are invoked, and the
program is loaded with WRITE or PRINT statements
• More often than not, it leads to wasted effort and time.
Backtracking:
• common debugging approach that can be used successfully in small
programs.
• Beginning at the site where a symptom has been uncovered, the
source code is traced backward (manually) until the site of the
cause is found.
Cause elimination
• Involves the use of induction or deduction and introduces the
concept of binary partitioning
– Induction (specific to general): reason from the evidence of
particular failing cases toward a general hypothesis about the cause
– Deduction (general to specific): start from a list of all possible
causes and use test data to eliminate them until one remains
• A list of all possible causes is developed and tests are conducted to
eliminate each.
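Binary partitioning can be sketched as repeatedly halving the list of suspects (the failing-record scenario here is invented, purely illustrative): each test eliminates half of the remaining candidate causes.

```python
def reproduces_failure(records):
    # Hypothetical oracle: the failure appears whenever record 13 is present.
    return 13 in records

def isolate_cause(records):
    """Halve the candidate causes, keeping the half that still fails."""
    while len(records) > 1:
        mid = len(records) // 2
        left, right = records[:mid], records[mid:]
        records = left if reproduces_failure(left) else right
    return records[0]

# Each iteration eliminates half of the suspects, so isolating one
# cause among 20 candidates takes about log2(20) ~ 5 tests.
assert isolate_cause(list(range(20))) == 13
```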
Correcting the error
• The correction of a bug can introduce other errors and
therefore do more harm than good.
Questions that every software engineer should ask before
making the "correction" that removes the cause of a bug:
• Is the cause of the bug reproduced in another part of the
program? (i.e., the faulty logic may follow a pattern that recurs
elsewhere)
• What "next bug" might be introduced by the fix I am about
to make? (i.e., the fix may touch logic, structure, or design)
• What could we have done to prevent this bug in the first
place? (i.e., the same kind of bug may have appeared before, so
the process that allowed it should be reviewed)

More Related Content

PPT
Unit iv-testing-pune-university-sres-coe
PPT
Unit 4 chapter 22 - testing strategies.ppt
PPT
Chapter 13 software testing strategies
DOCX
Softwaretestingstrategies
PPT
Fundamentals of Software Engineering
PDF
Software Testing.pdf
PPTX
Software testing strategies And its types
PPTX
Testing strategies part -1
Unit iv-testing-pune-university-sres-coe
Unit 4 chapter 22 - testing strategies.ppt
Chapter 13 software testing strategies
Softwaretestingstrategies
Fundamentals of Software Engineering
Software Testing.pdf
Software testing strategies And its types
Testing strategies part -1

Similar to SOFTWARE ENGINEERING unit4-1 CLASS notes in pptx 2nd year (20)

PPTX
S.E Unit 6colorcolorcolorcolorcolorcolor.pptx
PPT
testing strategies and tactics
PPT
Software testing-and-analysis
PPTX
SENG202-v-and-v-modeling_121810.pptx
PPTX
Software Testing Strategies
PPT
Software Engineering (Testing Overview)
PDF
Module V - Software Testing Strategies.pdf
PDF
Softwarequalityassurance with Abu ul hassan Sahadvi
PPTX
Software quality assurance
PPTX
Software Testing Strategies ,Validation Testing and System Testing.
PPTX
Software testing lecture software engineering
PPTX
unit-2_20-july-2018 (1).pptx
PPT
Software testing
PPT
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
PPTX
Software testing introduction
DOC
software testing strategies
PDF
6. oose testing
PDF
Objectorientedtesting 160320132146
PPT
Software testing part
PPTX
Object oriented testing
S.E Unit 6colorcolorcolorcolorcolorcolor.pptx
testing strategies and tactics
Software testing-and-analysis
SENG202-v-and-v-modeling_121810.pptx
Software Testing Strategies
Software Engineering (Testing Overview)
Module V - Software Testing Strategies.pdf
Softwarequalityassurance with Abu ul hassan Sahadvi
Software quality assurance
Software Testing Strategies ,Validation Testing and System Testing.
Software testing lecture software engineering
unit-2_20-july-2018 (1).pptx
Software testing
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
Software testing introduction
software testing strategies
6. oose testing
Objectorientedtesting 160320132146
Software testing part
Object oriented testing
Ad

Recently uploaded (20)

PPTX
sinteringn kjfnvkjdfvkdfnoeneornvoirjoinsonosjf).pptx
PPTX
quantum theory on the next future in.pptx
PDF
Project_Mgmt_Institute_- Marc Marc Marc.pdf
PDF
August 2025 Top Read Articles in - Bioscience & Engineering Recent Research T...
PDF
B461227.pdf American Journal of Multidisciplinary Research and Review
PDF
LAST 3 MONTH VOCABULARY MAGAZINE 2025 . (1).pdf
PPTX
Downstream processing_in Module1_25.pptx
PPTX
MODULE 3 SUSTAINABLE DEVELOPMENT GOALSPPT.pptx
PPTX
Embedded Systems Microcontrollers and Microprocessors.pptx
PDF
Thesis of the Fruit Harvesting Robot .pdf
PDF
Snapchat product teardown product management
PDF
Recent Trends in Network Security - 2025
PDF
August 2025 Top read articles in International Journal of Database Managemen...
PDF
Design and Implementation of Low-Cost Electric Vehicles (EVs) Supercharger: A...
PPTX
PPT-HEART-DISEASE[1].pptx presentationss
PDF
BBC NW_Tech Facilities_30 Odd Yrs Ago [J].pdf
PPTX
Electric vehicle very important for detailed information.pptx
PPT
Module_1_Lecture_1_Introduction_To_Automation_In_Production_Systems2023.ppt
PPTX
Unit I - Mechatronics.pptx presentation
PDF
Human CELLS and structure in Anatomy and human physiology
sinteringn kjfnvkjdfvkdfnoeneornvoirjoinsonosjf).pptx
quantum theory on the next future in.pptx
Project_Mgmt_Institute_- Marc Marc Marc.pdf
August 2025 Top Read Articles in - Bioscience & Engineering Recent Research T...
B461227.pdf American Journal of Multidisciplinary Research and Review
LAST 3 MONTH VOCABULARY MAGAZINE 2025 . (1).pdf
Downstream processing_in Module1_25.pptx
MODULE 3 SUSTAINABLE DEVELOPMENT GOALSPPT.pptx
Embedded Systems Microcontrollers and Microprocessors.pptx
Thesis of the Fruit Harvesting Robot .pdf
Snapchat product teardown product management
Recent Trends in Network Security - 2025
August 2025 Top read articles in International Journal of Database Managemen...
Design and Implementation of Low-Cost Electric Vehicles (EVs) Supercharger: A...
PPT-HEART-DISEASE[1].pptx presentationss
BBC NW_Tech Facilities_30 Odd Yrs Ago [J].pdf
Electric vehicle very important for detailed information.pptx
Module_1_Lecture_1_Introduction_To_Automation_In_Production_Systems2023.ppt
Unit I - Mechatronics.pptx presentation
Human CELLS and structure in Anatomy and human physiology
Ad

SOFTWARE ENGINEERING unit4-1 CLASS notes in pptx 2nd year

  • 2. 2 Software Testing Testing is the process of exercising Testing is the process of exercising a a program with the specific intent of program with the specific intent of finding errors prior to delivery to the finding errors prior to delivery to the end user. end user.
  • 3. 3 Introduction • A strategy for software testing integrates the design of software test cases into a well-planned series of steps that result in successful development of the software • The strategy provides a road map that describes the steps to be taken, when, and how much effort, time, and resources will be required • The strategy incorporates test planning, test case design, test execution, and test result collection and evaluation • The strategy provides guidance for the practitioner and a set of milestones for the manager • Because of time pressures, progress must be measurable and problems must surface as early as possible
  • 4. Strategic approach to software testing • Generic characteristics of strategic software testing: – To perform effective testing, a software team should conduct effective formal technical reviews. By doing this, many errors will be eliminated before testing starts. – Testing begins at the component level and works "outward" toward the integration of the entire computer-based system. – Different testing techniques are appropriate at different points in time. – Testing is conducted by the developer of the software and (for large projects) an independent test group. – Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
  • 5. Verification and Validation • Software testing is part of a broader group of activities called verification and validation (V&V) that are involved in software quality assurance (SQA). • Verification (Are the algorithms coded correctly?) – The set of activities that ensure that software correctly implements a specific function or algorithm • Validation (Does it meet user requirements?) – The set of activities that ensure that the software that has been built is traceable to customer requirements • Boehm [Boe81] states this another way: – Verification: "Are we building the product right?" – Validation: "Are we building the right product?"
  • 6. • V&V encompasses a wide array of SQA activities that include – formal technical reviews, – quality and configuration audits, – performance monitoring, – simulation, – feasibility study, – documentation review, – database review, – algorithm analysis, – development testing, – qualification testing, and – installation testing • Testing provides the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered. • Quality is not measured by the number of errors alone; the methods, process model, tools, and formal technical reviews applied during development also build quality, which testing then confirms.
  • 7. Who Tests the Software? • The developer understands the system but will test "gently" and is driven by "delivery" • The independent tester must learn about the system, but will attempt to break it, and is driven by quality
  • 8. Organizing for Software Testing • Testing should aim at "breaking" the software • Common misconceptions – The developer of the software should do no testing at all – The software should be given to a secret team of testers who will test it unmercifully – The testers get involved with the project only when the testing steps are about to begin • Reality: independent test group (ITG) – Removes the inherent problems associated with letting the builder test the software that has been built – Removes the conflict of interest that may otherwise be present – Works closely with the software developer during analysis and design to ensure that thorough testing occurs
  • 9. Software Testing Strategy for conventional software architecture
  • 10. Levels of Testing for Conventional Software • A software process and strategy for software testing may also be viewed in the context of the spiral. • Unit testing – begins at the vortex of the spiral and concentrates on each component/function of the software as implemented in the source code • Integration testing – Focuses on the design and construction of the software architecture • Validation testing – Requirements are validated against the constructed software • System testing – The software and other system elements are tested as a whole
  • 11. • Viewed from a procedural point of view, testing within the software process is a series of four steps that are implemented sequentially.
  • 12. Testing Strategy applied to Conventional Software • Initially, tests focus on each component individually, ensuring that it functions properly as a unit. • Unit testing – makes heavy use of white-box testing – Exercises specific paths in a component's control structure to ensure complete coverage and maximum error detection – Components are then assembled and integrated • Integration testing – addresses the issues associated with the dual problems of verification and program construction. – Focuses on inputs and outputs, and how well the components fit together and work together – Black-box test case design techniques are the most prevalent during integration.
  • 13. Testing Strategy applied to Conventional Software • Validation testing – Provides final assurance that the software meets all functional, behavioral, and performance requirements – Black-box testing techniques are used exclusively during validation. • System testing – Verifies that all system elements (software, hardware, people, databases) mesh properly and that overall system function and performance is achieved
  • 14. Testing Strategy applied to Object-Oriented Software • Must broaden testing to include detection of errors in analysis and design models • Unit testing loses some of its meaning and integration testing changes significantly • Uses the same philosophy as conventional software testing, but a different approach • Test "in the small" and then work out to testing "in the large" – Testing in the small involves class attributes and operations; the main focus is on communication and collaboration within the class – Testing in the large involves a series of regression tests to uncover errors due to communication and collaboration among classes • Finally, the system as a whole is tested to detect errors in fulfilling requirements
  • 15. Criteria for Completion of Testing • When is testing complete? – There is no definitive answer; we can never state that "we are done with testing". • Every time a user executes the software, the program is being tested • Sadly, testing usually stops when a project is running out of time, money, or both • One approach is to divide the test results into various severity levels – Then consider testing to be complete when certain levels of errors no longer occur or have been repaired or eliminated
  • 16. Test strategies for conventional software Unit Testing • Focuses verification effort on the smallest unit of software design – component or module. • Using the component-level design description as a guide – important control paths are tested to uncover errors within the boundary of the module. • Concentrates on the internal processing logic and data structures • Is simplified when a module is designed with high cohesion – Reduces the number of test cases – Allows errors to be more easily predicted and uncovered • Concentrates on critical modules and those with high cyclomatic complexity when testing resources are limited • Unit test is white-box oriented, and the step can be conducted in parallel for multiple components. • Unit test consists of – Unit Test Considerations – Unit Test Procedures
  • 18. Targets for Unit Test Cases [Contd.] • Module interface – Ensure that information flows properly into and out of the module • Local data structures – Ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution • Boundary conditions – Ensure that the module operates properly at boundary values established to limit or restrict processing • Independent paths (basis paths) – Paths are exercised to ensure that all statements in a module have been executed at least once • Error handling paths – Ensure that the algorithms respond correctly to specific error conditions
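The boundary-condition and independent-path targets above can be sketched with a small unit test. The `clamp` function and its values are hypothetical, chosen only so that each control path and each boundary value is exercised at least once:

```python
import unittest

# Hypothetical component under test; written so each control path
# and boundary value can be exercised at least once.
def clamp(value, low, high):
    """Restrict value to the inclusive range [low, high]."""
    if value < low:
        return low       # path 1: below the lower bound
    if value > high:
        return high      # path 2: above the upper bound
    return value         # path 3: pass-through

class ClampBoundaryTests(unittest.TestCase):
    def test_below_lower_bound(self):
        self.assertEqual(clamp(-1, 0, 10), 0)

    def test_exactly_on_bounds(self):
        # Boundary values themselves: low and high must pass through
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_just_outside_bounds(self):
        self.assertEqual(clamp(-0.001, 0, 10), 0)
        self.assertEqual(clamp(10.001, 0, 10), 10)

    def test_interior_value(self):
        self.assertEqual(clamp(5, 0, 10), 5)
```

Running `python -m unittest` against this module executes all four cases; together they cover every independent path and both boundaries of `clamp`.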
  • 19. • Test cases should be designed to uncover errors due to – Computations, – Incorrect comparisons, or – Improper control flow • Basis path and loop testing are effective techniques for uncovering a broad array of path errors.
  • 20. Errors commonly found during unit testing • Common errors in computation are – misunderstood or incorrect arithmetic precedence, – mixed-mode operations, – incorrect initialization, – precision inaccuracy, – incorrect symbolic representation of an expression. • Comparison and control flow are closely coupled to one another; common errors include – comparison of different data types, – incorrect logical operators or precedence, – incorrect comparison of variables, – improper or nonexistent loop termination, – failure to exit when divergent iteration is encountered, – improperly modified loop variables.
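Two of the computation errors listed above, incorrect arithmetic precedence and a mixed-mode/integer-division mistake, can be reproduced in a few lines. The function names and values are illustrative only:

```python
# Both "buggy" variants reproduce errors from the list above on purpose.

def average_buggy(total, count):
    # Precedence error: '/' binds tighter than the intended grouping,
    # so this computes total + (1 / count), not (total + 1) / count.
    return total + 1 / count

def average_fixed(total, count):
    return (total + 1) / count

def percent_buggy(part, whole):
    # Mixed-mode / integer-division error: '//' truncates,
    # yielding 0 whenever part < whole.
    return part // whole * 100

def percent_fixed(part, whole):
    return part / whole * 100

print(average_buggy(9, 2), average_fixed(9, 2))   # 9.5 vs 5.0
print(percent_buggy(1, 4), percent_fixed(1, 4))   # 0 vs 25.0
```

A unit test comparing each pair against hand-computed expected results is exactly the kind of test case the slide calls for.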
  • 21. • Potential errors that should be tested when error handling is evaluated are – Error description is unintelligible. – Error noted does not correspond to error encountered. – Error condition causes system intervention prior to error handling. – Exception-condition processing is incorrect. – Error description does not provide enough information to assist in locating the cause of the error. • Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed or when the maximum or minimum allowable value is encountered. • Boundary value analysis (BVA) should therefore always be one of the last tasks of unit testing.
  • 22. Unit Test Procedures • The design of unit tests can be performed before coding begins or after source code has been generated. • A review of design information provides guidance for establishing test cases. Each test case should be coupled with a set of expected results. • Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.
  • 23. Drivers and Stubs for Unit Testing • Driver – A simple main program that accepts test case data, passes such data to the component being tested, and prints the returned results • Stubs – Serve to replace modules that are subordinate to (called by) the component to be tested – A stub uses the module's exact interface, may do minimal data manipulation, provides verification of entry, and returns control to the module undergoing testing • Drivers and stubs both represent overhead – That is, both are software that must be written but that is not delivered with the final software product.
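A minimal sketch of a driver and a stub, assuming a hypothetical component `compute_discount` whose subordinate tier-lookup module has not been built yet:

```python
# Component under test: computes a discounted total. It depends on a
# subordinate tier-lookup module that does not exist yet, so the
# lookup is passed in and replaced by a stub. All names are hypothetical.
def compute_discount(order_total, customer_id, tier_lookup):
    tier = tier_lookup(customer_id)                    # subordinate call
    rate = {"gold": 0.10, "silver": 0.05}.get(tier, 0.0)
    return round(order_total * (1 - rate), 2)

# Stub: uses the subordinate module's exact interface, verifies entry,
# does minimal data manipulation, and returns control.
def stub_tier_lookup(customer_id):
    assert isinstance(customer_id, int), "stub: unexpected input type"
    return "gold" if customer_id < 100 else "silver"

# Driver: a simple main program that accepts test-case data, passes it
# to the component being tested, and prints the returned results.
def driver():
    for order_total, customer_id in [(100.0, 7), (100.0, 250)]:
        print(customer_id,
              compute_discount(order_total, customer_id, stub_tier_lookup))

driver()
```

Both `driver` and `stub_tier_lookup` are pure overhead in the slide's sense: they exist only to exercise `compute_discount` and are not delivered with the final product.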
  • 24. Unit Test Procedures: the unit test environment (figure).
  • 25. • If drivers and stubs cannot be kept simple, complete testing can be postponed until the integration test step. • Unit testing is simplified when a component with high cohesion is designed. • When only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
  • 26. Integration testing • Integration testing is a systematic technique for constructing the program structure – while at the same time conducting tests to uncover errors associated with interfacing. • The objective is to take unit tested components and build a program structure that has been dictated by design. • Two Approaches – Non-incremental Integration Testing – Incremental Integration Testing
  • 27. Integration testing [contd.] • Non-incremental integration – Commonly called the "Big Bang" approach. – All components are combined in advance – The entire program is tested as a whole – Chaos results: many seemingly unrelated errors are encountered – Correction is difficult because isolation of causes is complicated – Once a set of errors is corrected, more errors occur, and testing appears to enter an endless loop • Incremental integration – The exact opposite of the big bang approach. – The program is constructed and tested in small increments, where errors are easier to isolate and correct – Interfaces are more likely to be tested completely – A systematic test approach is applied – Three kinds: • Top-down integration • Bottom-up integration • Sandwich integration
  • 28. Top-down Integration • Top-down integration testing is an incremental approach to construction of program structure. • Modules are integrated by moving downward through the control hierarchy, beginning with the main module • Subordinate modules are incorporated in either a depth-first or breadth-first fashion – DF: All modules on a major control path are integrated – BF: All modules directly subordinate at each level are integrated • Advantages – This approach verifies major control or decision points early in the test process • Disadvantages – Stubs need to be created to substitute for modules that have not been built or tested yet; this code is later discarded – Because stubs are used to replace lower level modules, no significant data flow can occur until much later in the integration/testing process
  • 30. • Depth-first integration would integrate all components on a major control path of the structure. • For example, selecting the left-hand path, – components M1, M2, M5 would be integrated first. – Next, M8 or M6 would be integrated. – Then the central and right-hand control paths are built. • Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. • The steps would be: – components M2, M3, and M4 would be integrated first – the next control level, M5, M6, and so on, follows.
  • 31. Top-down integration proceeds in five steps: 1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module. 2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components. 3. Tests are conducted as each component is integrated. 4. On completion of each set of tests, another stub is replaced with the real component. 5. Regression testing may be conducted to ensure that new errors have not been introduced. The process continues from step 2 until the entire program structure is built.
  • 32. Problems that occur in top-down integration • Logistical problems can arise. • The most common problem occurs when processing at low levels in the hierarchy is required to adequately test upper levels. • No significant data can flow upward in the program structure, because stubs replace low-level modules at the beginning of top-down testing. In this case, the tester has three choices: – Delay many tests until stubs are replaced with actual modules – Develop stubs that perform limited functions that simulate the actual module – Integrate the software from the bottom of the hierarchy upward
  • 33. Bottom-up Integration • Integration and testing starts with the most atomic modules (i.e., components at the lowest levels in the program structure) in the control hierarchy • Advantages – This approach verifies low-level data processing early in the testing process – Need for stubs is eliminated • Disadvantages – Driver modules need to be built to test the lower-level modules; this code is later discarded or expanded into a full-featured version – Drivers inherently do not contain the complete algorithms that will eventually use the services of the lower-level modules; consequently, testing may be incomplete or more testing may be needed later when the upper level modules are available
  • 34. Bottom-up integration process steps • Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction. • A driver (a control program for testing) is written to coordinate test case input and output. • The cluster is tested. • Drivers are removed and clusters are combined moving upward in the program structure.
  • 36. Example • Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver. • Components in clusters 1 and 2 are subordinate to Ma. • Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. • Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
  • 37. Sandwich Integration • Consists of a combination of both top-down and bottom-up integration • Occurs both at the highest level modules and also at the lowest level modules • Proceeds using functional groups of modules, with each group completed before the next – High and low-level modules are grouped based on the control and data processing they provide for a specific program feature – Integration within the group progresses in alternating steps between the high and low level modules of the group – When integration for a certain functional group is complete, integration and testing moves onto the next group • Requires a disciplined approach so that integration doesn’t tend towards the “big bang” scenario
  • 38. Regression Testing • Each time a new module is added as part of integration testing – New data flow paths are established – New I/O may occur – New control logic is invoked • These changes may cause problems with functions that previously worked flawlessly. • Regression testing re-executes a small subset of tests that have already been conducted – Ensures that changes have not propagated unintended side effects – Helps to ensure that changes do not introduce unintended behavior or additional errors – May be done manually or through the use of automated capture/playback tools • A regression test suite contains three different classes of test cases – A representative sample of tests that will exercise all software functions – Additional tests that focus on software functions that are likely to be affected by the change – Tests that focus on the actual software components that have been changed
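The three classes of regression test cases can be sketched with `unittest`. The `parse_amount` function stands in for a component that was just changed during integration; all names and values are illustrative:

```python
import unittest

# Hypothetical component that was just changed during integration.
def parse_amount(text):
    return round(float(text.replace(",", "")), 2)

class RepresentativeSample(unittest.TestCase):
    """Class 1: a representative sample exercising overall function."""
    def test_plain_number(self):
        self.assertEqual(parse_amount("12.5"), 12.5)

class LikelyAffected(unittest.TestCase):
    """Class 2: functions likely to be affected by the change."""
    def test_thousands_separator(self):
        self.assertEqual(parse_amount("1,250.75"), 1250.75)

class ChangedComponent(unittest.TestCase):
    """Class 3: targets the component that was actually changed."""
    def test_rounding(self):
        self.assertEqual(parse_amount("3.14159"), 3.14)

def regression_suite():
    # Assemble one suite from all three classes so the whole subset
    # can be re-executed after every integration step.
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for cls in (RepresentativeSample, LikelyAffected, ChangedComponent):
        suite.addTests(loader.loadTestsFromTestCase(cls))
    return suite

result = unittest.TextTestRunner(verbosity=0).run(regression_suite())
```

In practice the suite would be kept small enough to run after every new module is integrated, which is the point of regression testing.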
  • 39. Smoke Testing • Smoke testing is an integration testing approach that is commonly used when "shrink wrapped" software products are being developed. • Taken from the world of hardware – Power is applied and a technician checks for sparks, smoke, or other dramatic signs of fundamental failure • Designed as a pacing mechanism for time-critical projects – Allows the software team to assess its project on a frequent basis • Includes the following activities – The software is compiled and linked into a build • A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions. – A series of breadth tests is designed to expose errors that will keep the build from properly performing its function • The goal is to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule – The build is integrated with other builds and the entire product is smoke tested daily • Daily testing gives managers and practitioners a realistic assessment of the progress of the integration testing – After a smoke test is completed, detailed test scripts are executed • The integration approach may be top down or bottom up.
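A daily smoke script can be as simple as running a short list of breadth checks against the current build and stopping on the first show-stopper. The checks below are placeholders for a real project's own build and test commands:

```python
import subprocess
import sys

# Breadth checks for the daily build; each is a stand-in for a real
# "does this core function work at all?" test.
CRITICAL_CHECKS = [
    ("interpreter runs", [sys.executable, "-c", "print('ok')"]),
    ("core module imports", [sys.executable, "-c", "import json"]),
]

def smoke():
    """Run each breadth check; stop and report on the first show-stopper."""
    for name, cmd in CRITICAL_CHECKS:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode != 0:
            print(f"SHOW-STOPPER: {name}\n{proc.stderr}")
            return False
        print(f"pass: {name}")
    return True

build_ok = smoke()
```

Running this once per day against the integrated build gives the frequent pacing signal the slide describes; detailed test scripts follow only when the smoke test passes.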
  • 40. Benefits of Smoke Testing • Integration risk is minimized. – Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early • The quality of the end product is improved. – Smoke testing is likely to uncover both functional errors and architectural and component-level design defects; better product quality results. • Error diagnosis and correction are simplified. – Smoke testing will probably uncover errors in the newest components that were integrated • Progress is easier to assess. – Frequent tests give both managers and practitioners a realistic assessment of integration testing progress.
  • 41. Validation Testing • Validation testing follows integration testing • The distinction between conventional and object-oriented software disappears • Focuses on user-visible actions and user-recognizable output from the system • Demonstrates conformity with requirements • Designed to ensure that – All functional requirements are satisfied – All behavioral characteristics are achieved – All performance requirements are attained – Documentation is correct – Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability) • After each validation test – The function or performance characteristic conforms to specification and is accepted – A deviation from specification is uncovered and a deficiency list is created • A configuration review or audit ensures that all elements of the software configuration have been properly developed, cataloged, and have the necessary detail for entering the support phase of the software life cycle
  • 42. Alpha and Beta Testing • Alpha testing – Conducted at the developer’s site by end users – Software is used in a natural setting with developers watching intently – Testing is conducted in a controlled environment • Beta testing – Conducted at end-user sites – Developer is generally not present – It serves as a live application of the software in an environment that cannot be controlled by the developer – The end-user records all problems that are encountered and reports these to the developers at regular intervals • After beta testing is complete, software engineers make software modifications and prepare for release of the software product to the entire customer base
  • 43. System Testing • System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. • Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions. • Types of system tests are: – Recovery Testing – Security Testing – Stress Testing – Performance Testing
  • 44. Different Types • Recovery testing – Tests for recovery from system faults – Forces the software to fail in a variety of ways and verifies that recovery is properly performed – If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. – If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits. • Security testing – Verifies that protection mechanisms built into a system will, in fact, protect it from improper access
  • 45. Different Types • Stress testing – Executes a system in a manner that demands resources in abnormal quantity, frequency, or volume – A variation of stress testing is a technique called sensitivity testing • Performance testing – Tests the run-time performance of software within the context of an integrated system – Often coupled with stress testing and usually requires both hardware and software instrumentation – Can uncover situations that lead to degradation and possible system failure
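A stress test in miniature: drive a component at an abnormal volume and check that it degrades gracefully instead of corrupting its state. The bounded queue, its capacity, and the volume are hypothetical:

```python
# Hypothetical bounded queue under stress; capacity and volume are
# illustrative only.
class MessageQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue full")   # graceful degradation
        self.items.append(item)

def stress(queue, volume):
    """Drive the component at abnormal volume; count the outcomes."""
    accepted = rejected = 0
    for i in range(volume):
        try:
            queue.push(i)
            accepted += 1
        except OverflowError:
            rejected += 1
    return accepted, rejected

# Offer five times the capacity: the test passes if the queue rejects
# the overflow cleanly instead of corrupting its state.
print(stress(MessageQueue(capacity=1000), volume=5000))
```

A sensitivity-testing variant would instead sweep `volume` across a range of values to find the point where behavior starts to degrade.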
  • 46. THE ART OF DEBUGGING • Debugging is the process that results in the removal of the error. • Although debugging can and should be an orderly process, it remains very much an art rather than a science. • Debugging is not testing but always occurs as a consequence of testing.
  • 48. Debugging Process • The debugging process begins with the execution of a test case. • Results are examined and a lack of correspondence between expected and actual performance is encountered; this symptom points to an underlying cause, the error. • The debugging process attempts to match symptom with cause, thereby leading to error correction. • One of two outcomes always results from the debugging process: – The cause will be found and corrected, or – The cause will not be found. • The person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
  • 49. Why is debugging so difficult? 1. The symptom may disappear (temporarily) when another error is corrected. 2. The symptom may actually be caused by non-errors (e.g., round-off inaccuracies). 3. The symptom may be caused by human error that is not easily traced (e.g., wrong input, a wrongly configured system). 4. The symptom may be a result of timing problems rather than processing problems (e.g., taking a long time to display a result). 5. It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate). 6. The symptom may be intermittent (e.g., an irregular or broken connection). This is particularly common in embedded systems that couple hardware and software. 7. The symptom may be due to causes that are distributed across a number of tasks running on different processors.
  • 50. Debugging Approaches or strategies • Debugging has one overriding objective: to find and correct the cause of a software error. • Three categories for debugging approaches – Brute force – Backtracking – Cause elimination Brute Force: • probably the most common and least efficient method for isolating the cause of a software error. • Apply brute force debugging methods when all else fails. • Using a "let the computer find the error" philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with WRITE or PRINT statements • It more frequently leads to wasted effort and time.
  • 51. Backtracking: • A common debugging approach that can be used successfully in small programs. • Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found. Cause elimination: • Involves the use of induction or deduction and introduces the concept of binary partitioning – Induction (specific to general): prove that a specific starting value is true; then prove the general case is true – Deduction (general to specific): show that a specific conclusion follows from a set of general premises • A list of all possible causes is developed and tests are conducted to eliminate each.
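Binary partitioning in miniature: repeatedly re-test half of the remaining suspects until the failing input is isolated. The failure predicate here is a stand-in for actually running the program and observing the symptom:

```python
# Stand-in for "run the program on this batch and observe the symptom";
# here the (hypothetical) defect is triggered by any negative input.
def triggers_failure(batch):
    return any(x < 0 for x in batch)

def isolate_cause(inputs):
    """Halve the suspect list until a single failing input remains."""
    assert triggers_failure(inputs), "symptom must reproduce on the full set"
    while len(inputs) > 1:
        mid = len(inputs) // 2
        left, right = inputs[:mid], inputs[mid:]
        # Keep whichever half still reproduces the symptom.
        inputs = left if triggers_failure(left) else right
    return inputs[0]

print(isolate_cause([3, 7, 1, -4, 9, 2]))   # -> -4
```

Each iteration eliminates half of the possible causes, so even a large suspect list is narrowed in a logarithmic number of tests; `git bisect` applies the same idea to a history of commits.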
  • 52. Correcting the error • The correction of a bug can introduce other errors and therefore do more harm than good. Questions that every software engineer should ask before making the "correction" that removes the cause of a bug: • Is the cause of the bug reproduced in another part of the program? (the same logical pattern may occur elsewhere) • What "next bug" might be introduced by the fix I am about to make? (examine the logic, structure, and design around the fix) • What could we have done to prevent this bug in the first place? (the same kind of bug may have been introduced earlier, so the developer can review those steps)