Natalia Juristo
Universidad Politécnica de Madrid
Testers should apply a technique strictly as
prescribed but, in practice, conformance to the
prescribed strategy varies
 Does the tester contribute to testing technique
effectiveness?
• If so, how much? In which way?
Two techniques studied
• Equivalence Partitioning
• Branch Testing



Approach
• Empirical

[Diagram: the technique’s sensitivity to faults yields its theoretical
effectiveness; the tester adds a contribution; together they produce
the observed effectiveness]
 Theoretical effectiveness
How likely a tester strictly applying a technique’s prescribed
strategy is to generate a test case that exercises a certain fault
 Observed effectiveness
Empirical study where the techniques are applied by
master’s students
 Tester contribution
Difference between theoretical and observed effectiveness
 Nature of tester contribution
Qualitative empirical study in which we ask subjects to
explain why they do or do not detect each seeded fault


 Techniques’ sensitivity
• Testing techniques are not equally sensitive to all faults
• Quantifying theoretical effectiveness is possible for extreme cases (a
theoretical effectiveness of 0% or 100%) but harder or even
impossible for middle cases

 Faults for the study
• Extreme cases highlight tester contribution, as deviations of observed
from theoretical effectiveness are clearer
• The seeded faults are extreme cases with completely different
behaviour for the two techniques
 We use faults with 100% effectiveness for one technique and 0% or much less
than 100% for the other
 We have also seeded a few medium-effectiveness faults to study the role
played by the chance factor

Later I provide an in-depth explanation of why fault
detection sensitivity differs from one technique to another
1. Studied Techniques
2. Theoretical Effectiveness
3. Observed Effectiveness
4. Nature of Tester Contribution
5. Findings
• Equivalence Partitioning
• Branch Testing
1. Identify equivalence classes
• Take each input condition and partition it into two groups
 Valid class: valid inputs to the program
 Invalid class: erroneous input values
• If the program does not handle elements in an equivalence
class identically
 the equivalence class is split into smaller equivalence classes
2. Define test cases
• Use equivalence classes to identify test cases
• Test cases that cover as many of the valid equivalence classes
as possible are written until all valid equivalence classes have
been covered by test cases
• Test cases that cover one, and only one, of the uncovered
invalid equivalence classes are written until all invalid
equivalence classes have been covered by test cases
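As a minimal sketch of step 2 (all names are illustrative, not from the study), the equivalence classes for the operator input condition of the calculator example used later in this deck can be turned mechanically into concrete "operand operator operand" test-case inputs:

```c
#include <stdio.h>
#include <string.h>

typedef enum { VALID, INVALID } class_kind;

typedef struct {
    class_kind kind;
    const char *op;   /* representative datum chosen from the class */
} eq_class;

/* Four valid classes (one per admissible operator) and one invalid
   class representing every other operator. */
static const eq_class operator_classes[] = {
    { VALID, "+" }, { VALID, "-" }, { VALID, "*" }, { VALID, "/" },
    { INVALID, "%" },
};

/* Each class contributes one "operand operator operand" test case. */
static int build_test_cases(char out[][16], int max) {
    int total = (int)(sizeof operator_classes / sizeof operator_classes[0]);
    int n = 0;
    for (int i = 0; i < total && n < max; i++, n++)
        snprintf(out[n], sizeof out[n], "2 %s 3", operator_classes[i].op);
    return n;
}

/* Helpers used to check the derivation below. */
static int n_test_cases(void) {
    char buf[8][16];
    return build_test_cases(buf, 8);
}

static int third_case_is_multiplication(void) {
    char buf[8][16];
    build_test_cases(buf, 8);
    return strcmp(buf[2], "2 * 3") == 0;
}
```

Here one test datum is picked per class, giving five test cases: one per valid operator plus one for a rejected operator.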
 White-box testing is concerned with the
extent to which test cases exercise the
program logic
 Branch coverage
• Enough test cases must be written to assure that
every branch alternative is exercised at least once
 Test case design strategy
1. An initial test case is generated that corresponds
to the simplest entry/exit path
2. New test cases are then generated slightly
differing from previous paths
3. As the test cases are generated a table showing
the coverage status of each decision is built
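The bookkeeping in step 3 can be sketched as follows (a hypothetical instrumentation, not the tooling used in the study): each decision gets a row recording whether its false and true outcomes have been exercised, and coverage is complete when every row is full.

```c
#include <stdbool.h>

#define NDECISIONS 2

/* covered[d][0]: false outcome of decision d seen;
   covered[d][1]: true outcome seen. */
static bool covered[NDECISIONS][2];

static bool track(int decision, bool outcome) {
    covered[decision][outcome ? 1 : 0] = true;
    return outcome;
}

/* Program under test, instrumented: decision 0 checks the argument
   count, decision 1 checks for division by zero. */
static const char *classify(int nArgs, double divisor) {
    if (track(0, nArgs >= 4)) return "too many arguments";
    if (track(1, divisor == 0.0)) return "division by zero";
    return "ok";
}

/* Branch coverage is reached when both outcomes of every decision
   have been marked in the table. */
static bool full_branch_coverage(void) {
    for (int d = 0; d < NDECISIONS; d++)
        if (!covered[d][0] || !covered[d][1]) return false;
    return true;
}

/* Run a small suite and check the table fills up as test cases run. */
static int run_suite_and_check(void) {
    int before = full_branch_coverage();
    classify(5, 1.0);           /* decision 0 true */
    classify(3, 0.0);           /* decision 0 false, decision 1 true */
    int mid = full_branch_coverage();
    classify(3, 1.0);           /* decision 1 false: table complete */
    return !before && !mid && full_branch_coverage();
}
```

Note that the first test case never reaches decision 1: early exits are why several test cases are needed per decision.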
• 100% Cases
• 0% Cases
• Fortune Cases
 Example of a calculator
• Input should contain two operands and one
operator in the “operand operator operand”
format
 Otherwise an error message is displayed
• The admissible operators are +, -, *, and /
 Otherwise an error message is displayed
• The operands can be a blank (which would be
interpreted as a 0) or any other real number
 Otherwise an error message is displayed
 A technique’s prescribed strategy is sure
to generate a test case that exercises the
fault
 The likelihood of the fault being
exercised (not detected!) is 100%
 Let us look at one 100% case for each
technique

 Equivalence Partitioning
• Strategy
 For a set of input values testers must identify one valid class
for each value and one invalid class representing the other
values
• Example
 Four valid (+, -, *, /) and one invalid equivalence classes must
be generated for the calculator operator input condition
 This generates one test case to test each operator plus
another which tests an operator not accepted by the program
• Fault
 Suppose that a programmer forgets to implement
multiplication
• Likelihood
 Testers strictly conforming to equivalence partitioning’s
prescribed strategy are 100% sure to generate a test case that
exercises such a fault
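A minimal sketch of this 100% case (hypothetical code, not the study's programs): the faulty calculator core lacks the '*' branch, and the EP suite, which by construction contains one test case per valid operator, necessarily includes a multiplication input that exposes it.

```c
/* Faulty calculator core: the programmer forgot to implement
   multiplication, so '*' falls through to the error path. */
static int calc(double a, char op, double b, double *res) {
    switch (op) {
        case '+': *res = a + b; break;
        case '-': *res = a - b; break;
        /* case '*' omitted: the seeded fault */
        case '/': if (b == 0.0) return -1; *res = a / b; break;
        default:  return -1;   /* invalid operator */
    }
    return 0;
}

/* EP yields one test case per valid-operator class; running them all
   necessarily includes a '*' test case, which exercises the fault. */
static int ep_operator_suite_detects_fault(void) {
    const char ops[] = { '+', '-', '*', '/' };
    double r;
    for (int i = 0; i < 4; i++)
        if (calc(2.0, ops[i], 3.0, &r) != 0)
            return 1;   /* a valid operator was rejected: fault exposed */
    return 0;
}
```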


 Branch Testing
• Strategy
 For each decision, one test case is generated to output a false
value and another to output a true value
• Example
 For the decision to detect if it is a division by zero (if
SecondOperand =0.0)
 One test case is generated for a value other than 0 (false decision)
 Another test case for a value equal to 0 (true decision)
• Fault
 Suppose that the line is incorrectly coded as (if SecondOperand
<> 0.0)
• Likelihood
 Testers strictly conforming to branch testing’s prescribed
strategy are sure to generate a test case that exercises such a
fault
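Sketched in C (hypothetical code, not the study's programs): with the comparison inverted, *both* of the branch-testing test cases misbehave, so either of them exercises the fault.

```c
/* Seeded fault: the division-by-zero guard has its comparison
   inverted (!= where == was intended). */
static int divide(double a, double b, double *res) {
    if (b != 0.0) {              /* faulty: should be b == 0.0 */
        return -1;               /* reports "division by zero" */
    }
    *res = a / b;
    return 0;
}

/* The two test cases branch testing prescribes for this decision;
   returns how many of them expose the fault. */
static int bt_suite_exercises_fault(void) {
    double r;
    int fails = 0;
    /* Test case 1: b != 0 -> expected result 3.0, but the faulty
       guard fires and the division is refused. */
    if (divide(6.0, 2.0, &r) != 0 || r != 3.0) fails++;
    /* Test case 2: b == 0 -> expected "division by zero", but it is
       not reported. */
    if (divide(6.0, 0.0, &r) != -1) fails++;
    return fails;
}
```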
 A technique’s prescribed strategy is
unable to generate a test case that
exercises a fault
 The likelihood of the fault being
exercised is 0%
 Let us look at one 0% case for each
technique

 Equivalence Partitioning
• Strategy
 A test case can contain at most one invalid equivalence class
• Example
 To check that the calculator does not accept operands that are not
numbers, testers generate one invalid class for the first operand and
another for the second operand
 This generates one test case to check the first operand and another
to check the second
 Neither of these test cases checks what happens if the format of both
operands is incorrect
• Fault
 Suppose that the line of code for checking that both operands are
numbers incorrectly expresses an XOR instead of an OR condition
• Likelihood
 Testers strictly conforming to equivalence partitioning’s prescribed
strategy are unable to generate a test case to exercise this fault
 Theoretical effectiveness for this fault is 0%
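A sketch of the XOR fault (hypothetical code, assuming a `strtod`-based validity check): each EP test case makes exactly one operand invalid, and on those inputs XOR and OR agree, so the fault stays invisible; only the both-invalid input, which the strategy never generates, separates them.

```c
#include <stdbool.h>
#include <stdlib.h>

static bool is_number(const char *s) {
    char *end;
    (void)strtod(s, &end);
    return *end == '\0';   /* whole string consumed -> numeric */
}

static bool operands_rejected(const char *opA, const char *opB) {
    /* faulty: should be !is_number(opA) || !is_number(opB) */
    return !is_number(opA) ^ !is_number(opB);
}

/* EP test cases: at most one invalid class per test case.
   Returns 1 if either of them exposes the fault. */
static int ep_invalid_suite_detects_fault(void) {
    int detected = 0;
    if (!operands_rejected("x", "2")) detected = 1;  /* behaves correctly */
    if (!operands_rejected("2", "x")) detected = 1;  /* behaves correctly */
    return detected;
}

/* The input EP never generates: both operands invalid. */
static int both_invalid_slips_through(void) {
    return !operands_rejected("x", "y");   /* XOR of two trues is false */
}
```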
 Branch Testing
• Strategy
 Generates test cases based exclusively on source code
• Inability
 Generate test cases for omitted code
• Fault
 A programmer forgets to implement division
• Likelihood
 Testers strictly conforming to branch testing’s prescribed
strategy will not generate a test case to exercise this fault
 Theoretical effectiveness for such a fault is 0%
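A sketch of why omitted code is invisible to branch testing (hypothetical code, not the study's programs): the faulty calculator simply has no '/' branch, so a suite that covers every branch that *does* exist reaches 100% coverage without ever feeding a division input.

```c
/* Seeded fault: division was never implemented, so there is no '/'
   branch for branch testing to target. */
static int calc_no_div(double a, char op, double b, double *res) {
    switch (op) {
        case '+': *res = a + b; break;
        case '-': *res = a - b; break;
        case '*': *res = a * b; break;
        default:  return -1;   /* '/' lands here like any bad operator */
    }
    return 0;
}

/* A branch-testing suite derived from this code: one test case per
   case label plus one for the default. No '/' among them. */
static int bt_suite_covers_all_branches(void) {
    double r;
    const char ops[] = { '+', '-', '*', '%' };
    int covered = 0;
    for (int i = 0; i < 4; i++) {
        calc_no_div(1.0, ops[i], 2.0, &r);
        covered++;   /* each input exercises a distinct branch */
    }
    return covered == 4;
}

/* The behaviour the suite never observes: division is rejected. */
static int division_wrongly_rejected(void) {
    double r;
    return calc_no_div(6.0, '/', 2.0, &r) == -1;
}
```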
 A technique’s prescribed strategy may or
may not generate a test case that exercises
the fault
• Only some of the available values within the range
established by the technique’s prescribed strategy
are capable of exercising the fault
 Unfortunate choice of test data
• The tester does not choose one of them
 Fortunate choice of test data
• The tester chooses one of them

 Equivalence Partitioning
• Strategy
 Generate test cases from the specification and not from the
source code
• Limitation
 Generate equivalence classes (and consequently test cases)
for code functionality that is not listed in the specification
• Fault
 A programmer implements an unspecified operation
• Likelihood
 Testers strictly conforming to equivalence partitioning’s
prescribed strategy are unable to generate a test case to
exercise this fault…
 Unless the specific case chosen for the invalid class is exactly
the unspecified operation
 Which likelihood?
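A sketch of this fortune case (hypothetical code, not the study's programs): the program quietly implements an unspecified '%' operation, so EP's single invalid-operator test case exercises the fault only if the tester happens to pick '%' as the invalid representative.

```c
/* Calculator core with an unspecified extra operation '%'. */
static int calc_extra(double a, char op, double b, double *res) {
    switch (op) {
        case '+': *res = a + b; break;
        case '-': *res = a - b; break;
        case '*': *res = a * b; break;
        case '/': if (b == 0.0) return -1; *res = a / b; break;
        case '%': *res = (double)((long)a % (long)b); break; /* unspecified */
        default:  return -1;
    }
    return 0;
}

/* The invalid-operator class should always be rejected; returns 1 if
   the tester's chosen representative exposes the unspecified code. */
static int invalid_op_test_detects(char chosen_invalid_op) {
    double r;
    return calc_extra(5.0, chosen_invalid_op, 2.0, &r) == 0;
}
```

Whether the fault is exercised thus depends entirely on the test datum chosen for the invalid class.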


 Branch Testing
• Strategy
 Testers are free to choose the test data to cover a decision
• Example
 Code line which checks the number of inputs (If Number-of-Arguments
>= 4)
 Multiple values can be used to output a false decision for this line
• Fault
 The line incorrectly reads (If Number-of-Arguments > 4)
• Likelihood
 To output a false decision the value of nArgs must be less than or
equal to 4
 Only if the value is 4 will the test case exercise the fault
 Testers strictly conforming to branch testing’s prescribed strategy
can choose other values of nArgs to output a false decision
 25%, if we consider that the possible test values are 1, 2, 3, 4 for a
false decision
[Table: classification of the seeded faults F1–F6 of each program
(cmdline, nametbl, ntree) by the reason a technique can fail to
exercise them. BT rows: unimplemented specification; test data used to
achieve coverage. EP rows: combination of invalid equivalence classes;
chosen combination of valid equivalence classes; test data used to
combine classes; implementation of unspecified functionality. An X
marks the categories that apply to each fault.]
• Empirical Study Description
• Results
• Differences on Sign
• Differences on Size
3

programs
 6 seeded fault/program
• 3 maxEP-minBT
• 3 minEP-maxBT

 Technique effectiveness measurement
• percentage of subjects that generate a test case

which exercises a particular fault

 20-40

master students applying the
techniques
 Replicated 4 times
[Table: observed effectiveness of EP and BT per program (cmdline,
nametbl, ntree) and fault (F1–F6), measured as the percentage of
subjects that generate a test case exercising the fault. F1–F3 are
Max EP–Min BT faults and F4–F6 Max BT–Min EP faults; values range
from 0% to 100%.]
[Table: sign and size of the difference between observed and
theoretical effectiveness of EP and BT per program (cmdline, nametbl,
ntree) and fault (F1–F6): ↑ increase, ↓ decrease, = no change, with
magnitudes between 6% and 88%.]
 Tester contribution tends to reduce a
technique’s theoretical effectiveness
• Effectiveness falls in 44.5% of cases compared
with 30.5% in which it increases
 Tester contribution differs between techniques
• Decrease in effectiveness
 greater for EP than for BT
• Increase in effectiveness
 smaller for EP than for BT
 Four sizes
• Small (difference of 0%-25%), medium (difference of
26%-50%), large (difference of 51%-75%) and very
large (difference of 76%-100%)
 Testers contribute little to technique
effectiveness
• The difference is small in 66.7% of cases
 Testers contribute less to equivalence
partitioning than to branch testing
• EP has more small differences than BT
• EP has fewer large/very large differences than BT
• Empirical Study Description
• Types of Differences
• Equivalence Partitioning Case
• Branch Testing Case
 Individual work
• Subjects get
 Their results
 List of seeded faults
• Subjects analyse
 whether or not they have generated a test case for a fault
 why
 Discussion in group
• How subjects generate test cases for faults that a
technique is not able to exercise
• Why they fail to generate test cases for faults that a
technique is able to exercise

 Poor technique application
• Testers make mistakes when applying the
techniques
 Technique extension
• Subjects round out the techniques with
additional knowledge of programming or
testing
 By chance
• Unfortunate choice of test data
• Fortunate choice of test data


MISTAKE 1. One valid class must be identified for
each correct input value and one invalid class
representing incorrect values
• Subjects create one single valid equivalence class for all input
values
• If one of the input values causes a failure, subjects will find it
hard to generate a test case that exercises the fault, as they do
not generate specific test cases for each value
• Example
 A tester generates a single valid equivalence class for the operator
input condition containing all four valid operators
 Subjects appear to mistakenly assume that all the
equivalence classes behave equally in the code
 They aim to save time while getting the same effectiveness

MISTAKE 2. Generate several equivalence classes for some, but
not all, of the input values

MISTAKE 3. Fail to build equivalence classes for part of the
specification

MISTAKE 4. Misinterpret the specification and generate
equivalence classes that do not exactly state the meaning of the
specification

MISTAKE 5. Do not build enough test cases to cover all the
generated equivalence classes

MISTAKE 6. Choose test case input data that do not correspond to
the combination of equivalence classes
• Subjects are careless and overlook important details of the context of the
test case that they really want to execute
• They may mistake some concepts for others, misleading them into thinking
that they are testing particular situations that they are not really testing
 IMPROVEMENT 1. Adding an extra
equivalence class combining several
invalid equivalence classes
• In the calculator example, this happens if a tester
generates a new class in which neither operand
is a number


MISTAKE 1. Fail to achieve the required coverage
criterion because they intentionally reduce the number of
test cases
• They do this to save time and reduce workload
• They think that similar portions of code will behave the same way
• In the calculator example, this happens if test cases are
generated for only some of the operators (the switch/case
sentence is not completely covered)

MISTAKE 2. Despite having designed a test case to
cover a particular decision, the test data chosen by the
subject do not follow the expected execution path
• In the calculator example, this happens if the tester specifies
”+3 instead of +3 as test data
• This is usually due to a misunderstanding of the code








IMPROVEMENT 1. Generate additional test cases
to cover common sources of programming errors
IMPROVEMENT 2. Generate additional test cases
for parts of the code that they do not understand
IMPROVEMENT 3. Extend the required coverage
using condition coverage rather than decision
coverage
IMPROVEMENT 4. Subjects discover faults
directly as they read the code
• Tester Contribution
• Practical Recommendations
• No Doubts
• Generalization Warnings
 Testers do not strictly conform to the technique’s strategy
 Their contribution to effectiveness is small
 They contribute less to the effectiveness of EP than of BT
 In most cases the contribution degrades the technique
• Misunderstandings of the techniques
• Oversights or mistakes
• Unfamiliarity with the programming language (for branch testing)
 In fewer cases it rounds out the technique
• How they complement it depends on the technique
 Testers contribute more often to reducing the
effectiveness of EP than of BT
• There are more cases of misunderstandings for EP
 Testers contribute more often to increasing the
effectiveness of BT than of EP
 There are more cases of improvements for BT
 The contribution to BT is a consequence of code reading
 The approach taken does not inflate tester
contribution
 It is a scenario where tester contribution would
be expected to be negligible
• Subjects are graded on technique application
 Conformance to the technique’s strategy should be high
 but it is not
• Programs are simple
 The application of the techniques to these programs
should cause no problems
 but it does
• Subjects are inexperienced
 They should make little or no contribution
 but they do
 Testers tend to apply techniques poorly
• Exploiting people’s diversity makes testing more
effective
• Different testers make different mistakes
• Two testers applying the same technique on the same
code will find more defects than one tester taking twice
as long to apply a technique
 Testers usually complement white-box
testing techniques with intuitive code
review
• Train testers in code review techniques, since it will
improve these intuitive accessory activities


 We used junior testers; experienced testers
• might contribute more, as they have more software
development and testing experience
• Conformance to the techniques’ prescribed strategy could
differ from students’
 Better or worse?

 We used 3 programs; for larger programs
• testers’ conformance to the technique’s prescribed strategy
would be expected to be worse

 No dynamic analyser was used to apply branch
testing
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define NBUF 81
int main ()
{
  char strOpA[NBUF], strOp[NBUF], strOpB[NBUF];
  double fA, fB, fRes;
  char *pa, *pb;
  int nArgs;
  while (!feof(stdin)) {
    nArgs = nParseInput (strOpA, strOp, strOpB);  /* input parser defined elsewhere */
    if (nArgs==0)
      { printf ("Too few arguments\n"); continue; }
    if (nArgs>=4)
      { printf ("Too many arguments\n"); continue; }
    fA = strtod (strOpA, &pa);
    fB = strtod (strOpB, &pb);
    if ((*pa!='\0') || (*pb!='\0'))
      { printf ("Invalid operands\n"); continue; }
    if (strlen(strOp)!=1)
      { printf ("Invalid operator\n"); continue;
    } else switch (*strOp) {
      case '+': { fRes = fA + fB; break; }
      case '-': { fRes = fA - fB; break; }
      case '*': { fRes = fA * fB; break; }
      case '/': {
        if (fB == 0.0)
          { printf ("Division by zero\n"); continue;
        } else { fRes = fA / fB; }
        break;
      }
      default: {
        printf ("Invalid operator\n"); continue;
      }
    }
    printf ("%lf %s %lf = %lf\n", fA, strOp, fB, fRes);
  }
}


 Equivalence Partitioning
• Strategy
 Testers are free to choose the specific test data from an equivalence class
• Inability
 Detection of some faults depends on the specific test data chosen to cover an
equivalence class
• Example
 To combine the valid equivalence classes number+number to get the
addition, any two numbers could be tested
• Fault
 The program mistakenly processes not real but natural numbers
• Likelihood
 The fault will only be detected if numbers with at least one digit (different
from 0) after the decimal point are used in the test case
 Testers strictly conforming to the technique’s prescribed strategy would not
be wrong to choose the addition of two natural numbers as a test case to
cover the equivalence class for number+number
 Less than 100% (50% or 90%)
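This fortune case can be sketched as follows (hypothetical code, not the study's programs): addition truncates its operands to whole numbers, so a test datum exercises the fault only when at least one operand has a non-zero fractional part.

```c
/* Seeded fault: addition truncates its operands to whole numbers
   instead of processing reals. */
static double add_faulty(double a, double b) {
    return (double)((long)a + (long)b);   /* faulty: should be a + b */
}

/* A pair of test data exercises the fault iff the faulty and intended
   additions disagree, i.e. some fractional part is lost. */
static int addition_exercises_fault(double a, double b) {
    return add_faulty(a, b) != a + b;
}
```

Both (2.0, 3.0) and (2.5, 3.0) are legitimate choices for the number+number class, but only the second one exposes the fault.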

Tester contribution to Testing Effectiveness. An Empirical Research

  • 2. Testers should apply technique strictly as prescribed but, in practice, conformance to the prescribed strategy varies  Does tester contribute to testing technique effectiveness?  • If so, how much? In which way?  Two techniques studied • Equivalence Partitioning • Branch Testing  Approach • Empirical
  • 4.  Theoretical effectiveness How likely a tester applying strictly a technique’s prescribed strategy is to generate a test case that exercises certain fault  Observed effectiveness Empirical study where the techniques are applied by master’s students  Tester contribution Difference between theoretical and observed effectiveness  Nature of tester contribution Qualitative empirical study in which we ask subjects to explain why they do or do not detect each seeded fault
  • 5.  Techniques sensitivity • Testing techniques are not equally sensitive to all faults • Quantify theoretical effectiveness is possible for extreme cases (a theoretical effectiveness of 0% or 100%) but harder or even impossible for middle cases  Faults for the study • Extreme cases highlight tester contribution as deviations of observed from theoretical effectiveness are clearer • The seeded faults are extreme cases with completely differently behaviour for the two techniques  We use faults with 100% effectiveness for one technique and 0% or much less than 100% for the other  We have also seeded a few medium effectiveness faults to study the role played by the chance factor  Later I provide an in-depth explanation of why fault detection sensitivity differs from one technique to another
  • 6. 1. Studied Techniques 2. Theoretical Effectiveness 3. Observed Effectiveness 4. Nature of Tester Contribution 5. Findings
  • 8. 1. Identify equivalence classes • Take each input condition and partition it into two groups  Valid class: valid inputs to the program  Invalid class: erroneous input values • If the program does not handle elements in an equivalence class identically  the equivalence class is split into smaller equivalence classes 2. Define test cases • Use equivalence classes to identify test cases • Test cases that cover as many of the valid equivalence classes as possible are written until all valid equivalence classes have been covered by test cases • Test cases that cover one, and only one, of the uncovered invalid equivalence classes are written until all invalid equivalence classes have been covered by test cases
  • 9.  White-box testing is concerned with the extent to which test cases exercise the program logic  Branch coverage • Enough test cases must be written to assure that every branch alternative is exercised at least once  Test case design strategy 1. An initial test case is generated that corresponds to the simplest entry/exit path 2. New test cases are then generated slightly differing from previous paths 3. As the test cases are generated a table showing the coverage status of each decision is built
  • 11.  Example of a calculator • Input should contain two operands and one operator in the “operand operator operand” format  Otherwise an error message is displayed • The admissible operators are +, -, *, and /  Otherwise an error message is displayed • The operands can be a blank (which would be interpreted as a 0) or any other real number  Otherwise an error message is displayed
  • 12. A technique’s prescribed strategy is sure to generate a test case that exercises the fault  The likelihood of the fault being exercised (not detected!) is 100%  Let us look at one 100% case for each technique
  • 13.  Equivalance Partitioning • Strategy  For a set of input values testers must identify one valid class for each value and one invalid class representing the other values • Example  Four valid (+, -, *, /) and one invalid equivalence classes must be generated for the calculator operator input condition  This generates one test case to test each operator plus another which tests an operator not accepted by the program • Fault  Suppose that a programmer forgets to implement the multiplication • Likelihood  Testers strictly conforming to equivalence partitioning’s prescribed strategy will generate 100% sure a test case that exercises such a fault
  • 14.  Brach Testing • Strategy  For each decision, one test case is generated to output a false value and another to output a true value • Example  For the decision to detect if it is a division by zero (if SecondOperand =0.0)  One test case is generated for value other than 0 (false decision)  Another test case for a value equal to 0 (true decision) • Fault  Suppose that the line is incorrectly coded as (if SecondOperand <> 0.0) • Likelihood  Testers strictly conforming to branch testing’s prescribed strategy will for sure generate a test case that exercises such a fault
  • 15. A technique’s prescribed strategy is unable to generate a test case that exercises a fault  The likelihood of the fault being exercised is 0%  Let us look at one 0% case for each technique
  • 16.  Equivalance Partitioning • Strategy  A test case can contain at most one invalid equivalence class • Example  To check that the calculator does not accept operands that are not numbers, testers generate one invalid class for the first operand and another for the second operand  This generates one test case to check the first operand and another to check the second  Neither of these test cases checks what happens if the format of both operands is incorrect • Fault  Suppose that the line of code for checking that both operands are numbers incorrectly expresses an XOR instead of an OR condition • Likelihood  Testers strictly conforming to equivalence partitioning’s prescribed strategy are unable to generate a test case to exercise this fault  Theoretical effectiveness for this fault is 0%
  • 17.  Branch Testing • Strategy  Generates test cases based exclusively on the source code • Inability  Cannot generate test cases for omitted code • Fault  A programmer forgets to implement division • Likelihood  Testers strictly conforming to branch testing’s prescribed strategy will not generate a test case that exercises this fault  Theoretical effectiveness for such a fault is 0%
  • 18.  A technique’s prescribed strategy may or may not generate a test case that exercises the fault • Only some of the available values within the range established by the technique’s prescribed strategy are capable of exercising the fault • Unfortunate choice of test data  The tester does not choose one of them • Fortunate choice of test data  The tester chooses one of them
  • 19.  Equivalence Partitioning • Strategy  Generate test cases from the specification, not from the source code • Limitation  Cannot generate equivalence classes (and consequently test cases) for code functionality that is not listed in the specification • Fault  A programmer implements an unspecified operation • Likelihood  Testers strictly conforming to equivalence partitioning’s prescribed strategy are unable to generate a test case that exercises this fault…  Unless the specific case chosen for the invalid class happens to be exactly the unspecified operation  With what likelihood?
  • 20.  Branch Testing • Strategy  Testers are free to choose the test data to cover a decision • Example  Code line which checks the number of inputs (If Number-of-Arguments >= 4)  Multiple values can be used to output a false decision for this line • Fault  The line incorrectly reads (If Number-of-Arguments > 4) • Likelihood  To output a false decision, the value of nArgs must be less than or equal to 4  Only if the value is 4 will the test case exercise the fault  Testers strictly conforming to branch testing’s prescribed strategy can choose other values of nArgs to output a false decision  25%, if we consider that the possible test values are 1, 2, 3, 4 for a false decision
  • 21. [Table: sources of tester deviation per program (cmdline, nametbl, ntree) and seeded fault (F1–F6). BT rows: unimplemented specification; test data used to achieve coverage. EP rows: combination of invalid equivalence classes; chosen combination of valid equivalence classes; test data used to combine classes; implementation of unspecified functionality. The individual cell marks are not recoverable from this transcript.]
  • 23. 3 programs  6 seeded faults per program • 3 maxEP-minBT • 3 minEP-maxBT  Technique effectiveness measurement • Percentage of subjects that generate a test case which exercises a particular fault  20-40 master’s students applying the techniques  Replicated 4 times
  • 24. [Table: observed effectiveness (percentage of subjects exercising each fault) per program (cmdline, nametbl, ntree) and technique (EP, BT), for faults F1–F6; F1–F3 are the Max EP–Min BT faults and F4–F6 the Max BT–Min EP faults. Values range from 0% to 100%; the cell alignment is not recoverable from this transcript.]
  • 25. [Table: difference between observed and theoretical effectiveness per program (cmdline, nametbl, ntree) and technique (EP, BT), for faults F1–F6 (↑ increase, ↓ decrease, = no change); F1–F3 are the Max EP–Min BT faults and F4–F6 the Max BT–Min EP faults. The cell alignment is not recoverable from this transcript.]
  • 26.  Tester contribution tends to reduce the technique’s theoretical effectiveness • Effectiveness falls in 44.5% of cases, compared with 30.5% in which it increases  Tester contribution differs between techniques • Decrease in effectiveness  Greater for EP than for BT • Increase in effectiveness  Smaller for EP than for BT
  • 27.  Four sizes • Small (difference of 0%-25%), medium (difference of 26%-50%), large (difference of 51%-75%) and very large (difference of 76%-100%)  Testers contribute little to technique effectiveness • The difference is small in 66.7% of cases  Testers contribute less to equivalence partitioning than to branch testing • EP has more small differences than BT • EP has fewer large/very large differences than BT
  • 28. • Empirical Study Description • Types of Differences • Equivalence Partitioning Case • Branch Testing Case
  • 29.  Individual work • Subjects get  Their results  The list of seeded faults • Subjects analyse  Whether or not they generated a test case for each fault, and why  Discussion in group • How subjects generate test cases for faults that the technique is not able to exercise • Why they fail to generate test cases for faults that the technique is able to exercise
  • 30.  Poor technique application • Testers make mistakes when applying the techniques  Technique extension • Subjects round out the techniques with additional knowledge of programming or testing  By chance • Unfortunate choice of test data • Fortunate choice of test data
  • 31.  MISTAKE 1. One valid class must be identified for each correct input value and one invalid class representing incorrect values • Subjects create a single valid equivalence class for all input values • If one of the input values causes a failure, subjects will find it hard to generate a test case that exercises the fault, as they do not generate specific test cases for each value • Example  A tester generates a single valid equivalence class for the operator input condition containing all four valid operators  Subjects appear to mistakenly assume that all the equivalence classes behave identically in the code  They aim to save time while getting the same effectiveness
  • 32.  MISTAKE 2. Generate several equivalence classes for some, but not all, of the input values  MISTAKE 3. Fail to build equivalence classes for part of the specification  MISTAKE 4. Misinterpret the specification and generate equivalence classes that do not exactly reflect its meaning  MISTAKE 5. Do not build enough test cases to cover all the generated equivalence classes  MISTAKE 6. Choose test case input data that do not correspond to the combination of equivalence classes • Subjects are careless and overlook important details of the context of the test case that they really want to execute • They may mistake some concepts for others, misleading them into thinking that they are testing particular situations that they are not really testing
  • 33.  IMPROVEMENT 1. Adding an extra equivalence class combining several invalid equivalence classes • In the calculator example, this happens if a tester generates a new class in which neither operand is a number
  • 34.  MISTAKE 1. Fail to achieve the required coverage criterion because they intentionally reduce the number of test cases • They do this to save time and reduce workload • They think that similar portions of code will behave the same way • In the calculator example, this happens if test cases are generated for only some of the operators (the switch/case statement is not completely covered)  MISTAKE 2. Despite having designed a test case to cover a particular decision, the test data chosen by the subject do not follow the expected execution path • In the calculator example, this happens if the tester specifies “ ”+3 instead of +3 as test data • This is usually due to a misunderstanding of the code
  • 35.     IMPROVEMENT 1. Generate additional test cases to cover common sources of programming errors IMPROVEMENT 2. Generate additional test cases for parts of the code that they do not understand IMPROVEMENT 3. Extend the required coverage using condition coverage rather than decision coverage IMPROVEMENT 4. Subjects discover faults directly as they read the code
  • 37.  Conformance to the technique’s strategy is not strict  Testers’ contribution to effectiveness is small  Testers contribute less to the effectiveness of EP than of BT  Most cases of contribution degrade the technique • Misunderstandings of the techniques • Oversights or mistakes • Unfamiliarity with the programming language (for branch testing)  Fewer cases round out the technique • How testers complement it depends on the technique
  • 38.  Testers contribute more often to reducing the effectiveness of EP than of BT • There are more cases of misunderstandings for EP  Testers contribute more often to increasing the effectiveness of BT than of EP  There are more cases of improvements for BT  The contribution to BT is a consequence of code reading
  • 39.  The approach taken does not inflate tester contribution • It is a scenario where tester contribution would be expected to be negligible • Subjects are graded on technique application  Conformance to the technique’s strategy should be high  but it is not • Programs are simple  Applying the techniques to these programs should cause no problems  but it does • Subjects are inexperienced  They should make little or no contribution  but they do
  • 40.  Testers tend to apply techniques poorly • Exploiting people’s diversity makes testing more effective • Different testers make different mistakes • Two testers applying the same technique to the same code will find more defects than one tester taking twice as long to apply the technique  Testers usually complement white-box testing techniques with intuitive code review • Train testers in code review techniques, since this will improve these intuitive accessory activities
  • 41.  We used junior testers; experienced testers • Might contribute more, as they have more software development and testing experience • Their conformance to the techniques’ prescribed strategy could differ from the students’  Better or worse?  We used 3 programs; for larger programs • Testers’ conformance to the technique’s prescribed strategy would be expected to be worse  No dynamic analyser was used to apply branch testing
  • 43.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define NBUF 81

int main () {
    char strOpA[NBUF], strOp[NBUF], strOpB[NBUF];
    double fA, fB, fRes;
    char *pa, *pb;
    int nArgs;
    while (!feof(stdin)) {
        nArgs = nParseInput (strOpA, strOp, strOpB);  /* input parser, defined elsewhere */
        if (nArgs == 0) { printf ("Too few arguments\n"); continue; }
        if (nArgs >= 4) { printf ("Too many arguments\n"); continue; }
        fA = strtod (strOpA, &pa);
        fB = strtod (strOpB, &pb);
        if ((*pa != '\0') || (*pb != '\0')) { printf ("Invalid operands\n"); continue; }
        if (strlen(strOp) != 1) { printf ("Invalid operator\n"); continue; }
        switch (*strOp) {
            case '+': fRes = fA + fB; break;
            case '-': fRes = fA - fB; break;
            case '*': fRes = fA * fB; break;
            case '/':
                if (fB == 0.0) { printf ("Division by zero\n"); continue; }
                fRes = fA / fB;
                break;
            default: printf ("Invalid operator\n"); continue;
        }
        printf ("%lf %s %lf = %lf\n", fA, strOp, fB, fRes);
    }
    return 0;
}
  • 44.  Equivalence Partitioning • Strategy  Testers are free to choose the specific test data from an equivalence class • Inability  Detection of some faults depends on the specific test data chosen to cover an equivalence class • Example  To combine the valid equivalence classes number+number to get the addition, any two numbers could be tested • Fault  The program mistakenly processes natural numbers instead of real numbers • Likelihood  The fault will only be detected if numbers with at least one nonzero digit after the decimal point are used in the test case  Testers strictly conforming to the technique’s prescribed strategy would not be violating it by choosing the addition of two natural numbers as the test case covering the equivalence class for number+number  Less than 100% (e.g. 50% or 90%)