Bad Metrics and What You Can Do About It
Paul Holland
Test Consultant and Teacher at Testing Thoughts
My Background
• Independent S/W Testing consultant since Apr 2012
• 16+ years testing telecommunications equipment and reworking test methodologies at Alcatel-Lucent
• 10+ years as a test manager
• Presenter at STAREast and CAST
• Keynote at KWSQA conference in 2012
• Facilitator at 25+ peer conferences and workshops
• Teacher of S/W testing for the past 5 years
• Teacher of Rapid Software Testing
– through Satisfice (James Bach): www.satisfice.com
• Military Helicopter pilot – Canadian Sea Kings
April, 2013

©2013 Testing Thoughts
Attributions
• Over the past 10 years I have spoken with many people regarding metrics. I cannot directly attribute any specific aspect of this talk to any individual, but all of these people (and more) have influenced my opinions and thoughts on metrics:
– Cem Kaner, James Bach, Michael Bolton, Ross Collard, Doug Hoffman, Scott Barber, John Hazel, Eric Proegler, Dan Downing, Greg McNelly, Ben Yaroch
Definitions of METRIC
(from http://www.merriam-webster.com, April 2012)
• 1 plural : a part of prosody that deals with metrical structure
• 2 : a standard of measurement <no metric exists that can be applied directly to happiness — Scientific Monthly>
• 3 : a mathematical function that associates a real nonnegative number analogous to distance with each pair of elements in a set such that the number is zero only if the two elements are identical, the number is the same regardless of the order in which the two elements are taken, and the number associated with one pair of elements plus that associated with one member of the pair and a third element is equal to or greater than the number associated with the other member of the pair and the third element
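Definition 3 reads densely. As an illustrative sketch (not part of the talk), the three axioms it describes can be checked mechanically on a finite set:

```python
from itertools import product

def is_metric(points, d):
    """Check the three metric axioms from definition 3 on a finite set."""
    for x, y in product(points, repeat=2):
        if d(x, y) < 0:                      # must be a nonnegative number
            return False
        if (d(x, y) == 0) != (x == y):       # zero only if the elements are identical
            return False
        if d(x, y) != d(y, x):               # same regardless of the order of the pair
            return False
    for x, y, z in product(points, repeat=3):
        if d(x, z) > d(x, y) + d(y, z):      # triangle inequality
            return False
    return True

# Absolute difference is a metric on numbers; a constant function is not.
print(is_metric([0, 1, 5], lambda a, b: abs(a - b)))  # True
print(is_metric([0, 1, 5], lambda a, b: 1))           # False
```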
Sample Metrics
• Number of Test Cases Planned (per release or feature)
• Number of Test Cases Executed vs. Plan
• Number of Bugs Found per Tester
• Number of Bugs Found per Feature
• Number of Bugs Found in the Field
• Number of Open Bugs
• Lab Equipment Usage
Sample Metrics
• Hours between crashes in the Field
• Percentage Behind Plan
• Percentage of Automated Test Cases
• Percentage of Tests Passed vs. Failed (pass rate)
• Number of Test Steps
• Code Coverage / Path Coverage
• Requirements Coverage
Goodhart's Law
• In 1975, Charles Goodhart, a former advisor to the Bank of England and Emeritus Professor at the London School of Economics, stated:

Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.

Goodhart, C.A.E. (1975a) 'Problems of Monetary Management: The UK Experience' in Papers in Monetary Economics, Volume I, Reserve Bank of Australia, 1975

Goodhart's Law
• Professor Marilyn Strathern FBA has re-stated Goodhart's Law more succinctly and more generally:

"When a measure becomes a target, it ceases to be a good measure."

Elements of Bad Metrics
• Measure and/or compare elements that are inconsistent in size or composition
– Impossible to use effectively for comparison
– How many containers do you need for your possessions?
– Test Cases and Test Steps
• Vary greatly in the time required and in complexity
– Bugs
• Can differ in severity and likelihood, i.e., in risk
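The point that bugs differ in risk can be made with simple arithmetic. A hypothetical sketch (the severity weights and bug lists are invented for illustration): two testers with identical raw bug counts can represent very different risk reduction.

```python
# Hypothetical severity weights; a raw count treats every bug as equal.
WEIGHTS = {"critical": 10, "major": 3, "minor": 1}

def raw_count(bugs):
    """The 'bugs per tester' style metric: every bug counts as 1."""
    return len(bugs)

def risk_weighted(bugs):
    """Weight each bug by its (invented) severity."""
    return sum(WEIGHTS[severity] for severity in bugs)

tester_a = ["critical", "critical", "minor"]   # 3 bugs found
tester_b = ["minor", "minor", "minor"]         # 3 bugs found

print(raw_count(tester_a), raw_count(tester_b))          # counts look equal: 3 3
print(risk_weighted(tester_a), risk_weighted(tester_b))  # risk does not: 21 3
```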
Elements of Bad Metrics
• Create competition between individuals and/or teams
– They typically do not result in friendly competition
– Inhibit sharing of information and teamwork
– Especially damaging if compensation is affected
– Number of xxxx per tester
– Number of xxxx per feature
Elements of Bad Metrics
• Easy to ―game‖ or circumvent the desired
intention
– Easy to be improved by undesirable behaviour

– Pass rate (percentage): Execute more simple
tests that will pass or break up a long test case
into many smaller ones
– Number of bugs raised: Raising two similar bug
reports instead of combining them
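The pass-rate gaming described above can be shown with simple arithmetic; a minimal sketch (the numbers are invented for illustration):

```python
def pass_rate(results):
    """Fraction of tests marked pass; results is a list of booleans."""
    return sum(results) / len(results)

# One long test case fails because a single step fails: 0% pass rate.
long_test = [False]
print(f"{pass_rate(long_test):.0%}")    # 0%

# Split the same work into 20 small tests; only the one bad step fails.
# Nothing new was learned about the product, but the number looks great.
split_tests = [True] * 19 + [False]
print(f"{pass_rate(split_tests):.0%}")  # 95%
```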
Elements of Bad Metrics
• Contain misleading information or give a false sense of completeness
– Summarizing a large amount of information into one or two numbers out of context
– Coverage (Code, Path)
• Misleading: based on touching the code only once
– Pass rate and number of test cases
Impact of Using Bad Metrics
– Promotes bad behaviour:
• Testers may create more, smaller test cases instead of test cases that make sense
• Execution of ineffective testing to meet requirements
• Artificially inflating the numbers instead of doing what makes sense
• Creation of tools that mask inefficiencies (e.g., lab equipment usage)
• Time wasted improving the "numbers" instead of improving the testing
Impact of Using Bad Metrics
• Gives Executives a false sense of test coverage
– All they see is numbers out of context
– The larger the numbers, the better the testing
– The difficulty of good testing is hidden by large "fake" numbers

• Dangerous messages to Executives
– Our pass rate is at 96%, so our product is in good shape
– Code coverage is at 100% - our code is completely tested
– Feature specification coverage is at 100% - Ship it!!!

• What could possibly go wrong?
Sample Metrics
• Number of Test Cases Planned (per release or feature)
• Number of Test Cases Executed vs. Plan
• Number of Bugs Found per Tester
• Number of Bugs Found per Feature
• Number of Bugs Found in the Field – A list of Bugs
• Number of Open Bugs – A list of Open Bugs
• Lab Equipment Usage
Sample Metrics
• Hours between crashes in the Field
• Percentage Behind Plan – depends on whether the plan is flexible
• Percentage of Automated Test Cases
• Percentage of Tests Passed vs. Failed (pass rate)
• Number of Test Steps
• Code Coverage / Path Coverage – depends on usage
• Requirements Coverage – depends on usage
So … Now what?
• "I have to stop counting everything. I feel naked and exposed."
• Track expected effort instead of tracking test cases, using:
– Whiteboard
– Excel spreadsheet

Whiteboard
• Used for planning and tracking of test execution
• Suitable for use in waterfall or agile (as long as you have control over your own team's process)
• Use colours to track:
– Features, or
– Main Areas, or
– Test styles (performance, robustness, system)
Whiteboard
• Divide the board into four areas:
– Work to be done
– Work in Progress
– Cancelled or Work not being done
– Completed work
• Red stickies indicate issues (not just bugs)
• Create a sticky note for each half day of work (or mark the number of half days expected on the sticky note)
• Prioritize stickies daily (or at least twice a week)
• Finish "on time" with the low-priority work incomplete
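The board mechanics above can be sketched in code, assuming a simple dict-of-lists model (the sticky titles and efforts are examples, not prescriptions):

```python
# The four board areas; each sticky carries a title and its
# expected effort in half days of work.
board = {
    "to_do":       [{"title": "INP vs. REIN", "effort": 6}],
    "in_progress": [{"title": "POTS interference", "effort": 2}],
    "cancelled":   [],
    "done":        [{"title": "INP vs. SHINE", "effort": 6}],
}

def move(board, title, src, dst):
    """Move the named sticky from one area of the board to another."""
    sticky = next(s for s in board[src] if s["title"] == title)
    board[src].remove(sticky)
    board[dst].append(sticky)

def remaining_effort(board):
    """Half days still ahead: everything not yet done and not cancelled."""
    return sum(s["effort"] for area in ("to_do", "in_progress")
               for s in board[area])

# Cancelling work reduces remaining effort without faking a "pass".
move(board, "POTS interference", "in_progress", "cancelled")
print(remaining_effort(board))  # 6
```

Note that the tracked quantity is effort, not a count of test cases, so splitting or merging stickies does not change the total.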
Sticky Notes
• All of these items are optional – add your own elements. Use what makes sense for your situation:
– Charter Title (or Test Case Title)
– Estimated Effort
– Feature area
– Tester name
– Date complete
– Effort (# of sessions or half days of work)
• Initially estimated -> replaced with actual

Actual Sample Sticky
[Photo of a sticky note labelled with Charter Title, Tester, Area, and Effort]
Whiteboard Example
[Photo of an example whiteboard]
Reporting
• An Excel Spreadsheet with:
– List of Charters
– Area
– Estimated Effort
– Expended Effort
– Remaining Effort
– Tester(s)
– Start Date
– Completed Date
– Issues
– Comments
• Does NOT include pass/fail percentage or number of test cases
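The spreadsheet columns lend themselves to a simple per-feature rollup, which is the same information the progress chart later plots. A minimal sketch (the tuple layout is my assumption; the three rows are drawn from the sample report):

```python
from collections import defaultdict

# (charter, area, estimated, expended, remaining) -- effort in half days.
rows = [
    ("ARQ Verification under different RA Modes", "ARQ", 2, 2, 0),
    ("Expected throughput testing",               "ARQ", 5, 5, 0),
    ("Attainable Throughput",                     "ARQ", 1, 7, 5),
]

def per_feature(rows):
    """Planned, expended, and total expected effort per area."""
    totals = defaultdict(lambda: {"planned": 0, "expended": 0, "expected": 0})
    for _, area, est, exp, rem in rows:
        t = totals[area]
        t["planned"] += est
        t["expended"] += exp
        t["expected"] += exp + rem  # total expected = spent so far + remaining
    return dict(totals)

print(per_feature(rows))
# {'ARQ': {'planned': 8, 'expended': 14, 'expected': 19}}
```

Because the "total expected" figure is re-estimated from actuals, the report shows overruns (here, 19 vs. the 8 originally planned) instead of hiding them behind a pass rate.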
Sample Report

Charter | Area | Estimated Effort | Expended Effort | Remaining Effort | Tester | Date Started | Date Completed | Issues Found | Comments
Investigation for high QLN spikes on EVLT | H/W Performance | 0 | 20 | 0 | acode | | 01/14/2012 | ALU01617032 | Lots of investigation. Problem was on 2-3 out of 48 ports, which just happened to be 2 of the 6 ports I tested.
ARQ Verification under different RA Modes | ARQ | 2 | 2 | 0 | ncowan | 12/14/2011 | 12/15/2011 | |
POTS interference | ARQ | 2 | 0 | 0 | --- | 01/08/2012 | 01/08/2012 | | Decided not to test as the H/W team already tested this functionality and time was tight.
Expected throughput testing | ARQ | 5 | 5 | 0 | acode | 01/10/2012 | 01/14/2012 | | To translate the files properly, had to install Python solution from Antwerp. Some overhead to begin testing (installation, config test) but was fairly quick to execute afterwards.
INP vs. SHINE | ARQ | 6 | 6 | 0 | ncowan | 12/01/2011 | 12/04/2011 | |
INP vs. REIN | ARQ | 6 | 4 | 0 | | 01/06/2012 | 01/10/2012 | |
INP vs. REIN + SHINE | ARQ | 12 | 12 | 0 | ncowan | 12/05/2011 | 12/05/2011 | |
Traffic delay and jitter from RTX | ARQ | 2 | 2 | 0 | jbright | 01/05/2012 | 01/08/2012 | |
Attainable Throughput | ARQ | 1 | 7 | 5 | jbright | 12/10/2011 | | | Took longer because was not behaving as expected and I had to make sure I was testing correctly. My expectations were wrong based on virtual noise not being exact.
Sample Report

["Awesome Product" Test Progress as of 02/01/2012: bar chart of Original Planned Effort, Expended Effort, and Total Expected Effort, in person half days (0 to 90), per feature: ARQ, SRA, Vectoring, Regression, H/W Performance]
