Module V
Topics
• Feature-Driven Development: Introduction, incremental
software development, Regaining Control, motivation behind
FDD, planning an iterative project, architecture centric, FDD
and XP.
• Test Driven Development: Unit Tests, Integration Tests, End-to-End Tests, Customer Tests.
• Release Management: Version Control, Continuous
Integration
Incremental Software Development
• An incremental software development process is one that does not try to
complete the whole design task in one go.
• This is in contrast to the more traditional waterfall model of software
development.
• When applying some form of iterative approach, the intention is that
each iteration adds something to the evolving system.
• Some iterations may lead to a release of the software system, while
others may not
Incremental Software Development
Each iteration
1. Determines what will be done during the iteration.
2. Designs and implements the required functionality.
3. Tests the new functionality.
4. (Optionally) Creates a new release.
5. Reviews what has been done before moving to the next iteration
Incremental Software Development
• The figure depicts the spiral nature of this approach to software development: each iteration around the spiral is a mini software development project.
• The end result is that you incrementally produce the system being designed.
• While you do this, you explicitly identify the risks to your design/system upfront and deal with them early on.
• Iterations within XP projects tend to be between 1 and 3 weeks, with new releases at the end of most iterations.
• In other kinds of project an iterative approach may still be relevant, but the iterations may be measured in months rather than weeks.
Feature-Driven Development
• FDD stands for Feature-Driven Development. It is an agile, iterative and incremental model that focuses on progressing the features of the developing software.
• The main motive of feature-driven development is to deliver working, up-to-date software to the client in a timely manner.
• In FDD, reporting and progress tracking are necessary at all levels.
FDD Lifecycle
• Build overall model
• Build feature list
• Plan by feature
• Design by feature
• Build by feature
Characteristics of FDD
• Short iterations: the FDD lifecycle works in simple, short iterations to finish work on time efficiently, and it maintains a good pace even for large projects.
• Customer focused: each feature is inspected by the client before it is pushed to the main build.
• Structured and feature focused: the initial activities of the lifecycle build the domain model and the feature list at the beginning of the timeline.
• Frequent releases: feature-driven development provides continuous releases of features, sustaining the continuous success of the project.
Advantages of FDD
• Reporting at all levels leads to easier progress tracking.
• FDD provides continuous success for larger teams and projects.
• Risk is reduced because the whole model and design are built in smaller segments.
• FDD provides greater accuracy in project cost estimation due to feature segmentation.
Disadvantages of FDD
• This agile practice is not well suited to smaller projects.
• There is a high dependency on lead programmers, designers and mentors.
• The lack of documentation can create issues later on.
Regaining Control :The Motivation Behind FDD
• How can such a project be managed in an agile manner? This is where feature-driven development comes in.
• To regain control of an iterative project, the guidelines are:
• The process should be feature-centric. This means that the units of requirements (e.g., use cases, user stories) should be unified with the units of planning (e.g., tasks).
• Project planning should be based around timeboxes (rather than phases) so that the length of each iteration is known.
• The project plan should be adaptive, that is, responsive to the changing risks and benefits of the system and the business environment.
Feature-Centric Development
• The term feature-centric is used to refer to development processes that
combine the expression of requirements with the units of activity for
planning purposes.
• A feature in such a process can be viewed as a unit of “plannable
functionality.”
• Feature-driven development (FDD) uses features in this way.
• Features are closely related to use cases and to the realisation of use cases
in the standard Unified Process.
• A feature is a schedulable piece of functionality, something that delivers
value to the user.
• The emphasis is on schedulable. That is, a feature is derived from a planning perspective rather than from the user perspective or indeed the requirements perspective.
To aid in planning, features go further: each must also be associated with
• a priority (so that they can be ordered),
• a cost (so that they can be accounted for),
• resources (so that they can be scheduled).
• Costs and resources can be determined by examining the number of person-days needed to accomplish the feature.
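The idea of a feature as a plannable unit with a priority, cost and resources can be sketched as a small data structure. This is an illustrative model only (the class, field names and sample features are invented, not part of any FDD tool):

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class Feature:
    """A schedulable piece of functionality (illustrative sketch)."""
    name: str
    priority: Priority        # so features can be ordered
    cost_person_days: float   # so they can be accounted for
    engineers: int = 1        # resources, so they can be scheduled

# Order a feature list for planning: highest priority first, then cheapest.
backlog = [
    Feature("Export report as PDF", Priority.LOW, 5.0),
    Feature("User login", Priority.HIGH, 3.0),
    Feature("Search orders", Priority.HIGH, 8.0),
]
plan = sorted(backlog, key=lambda f: (f.priority.value, f.cost_person_days))
print([f.name for f in plan])  # "User login" comes first
```

Sorting on a (priority, cost) tuple is one simple way to turn the prioritised feature list into an ordered plan.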
PLANNING AN ITERATIVE PROJECT
Before any project embarks on an iterative development process, there are a number of steps that should be followed.
These steps are:
• Identify and prioritise features (the feature list should be continually revised throughout the
project).
• Roughly identify iterations and allocate features.
• Timebox iterations/calculate costs.
• For each iteration
◦ Plan the iteration (plans should be continually revised during the lifetime of the project).
◦ Identify tasks required to implement features.
◦ Allocate tasks to resources (that is, allocate tasks to project members).
◦ Implement iteration.
The key here is that iterations are based on “timeboxes” so that their length is known and can be
managed.
Iterations are also based on tasks constructed around features so that they can be responsive to
user feedback and to changing business requirements.
Iterations, Timeboxes and Releases
• At the start of the project, the project team along with various project stakeholders
create a prioritised feature list. Note that this cannot be done without the collaboration
of those project stakeholders who can state what the priorities of the features should be.
• Providing priorities such as High, Medium, and Low is sufficient at this stage. Costs are related to implementation time, typically using a three-point estimation approach.
• This requires a best-case estimate, an average estimate and a worst-case estimate to be given. The overall cost is derived from these three estimates.
• Finally, the number of software engineers involved with the feature is also estimated.
• The process involves determining the expected number of iterations, duration, and
features, requiring the involvement of business representatives with the necessary
knowledge and authority.
From this we emerge with an outline plan for what will be done when, and for the points at which we will complete the various iterations of the end system.
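The three-point estimate mentioned above is often combined into a single figure using a PERT-style weighted average. The 1-4-1 weighting below is one common convention, not something mandated by FDD:

```python
def three_point_estimate(best: float, average: float, worst: float) -> float:
    """PERT-style weighted average of a three-point estimate (person-days)."""
    return (best + 4 * average + worst) / 6

# A feature estimated at best 2, average 4, worst 12 person-days:
cost = three_point_estimate(2, 4, 12)
print(round(cost, 1))  # (2 + 16 + 12) / 6 = 5.0
```

Weighting the average estimate most heavily keeps one optimistic or pessimistic outlier from dominating the feature's cost.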
Overall structure of an FDD Project
The key steps in any iteration are:
1. Iteration initiation meeting. The iteration's length, features, and resources should be revised and confirmed, with all stakeholders involved in the meeting so that the whole project team is in agreement.
2. Plan features for iteration. Having agreed the features to be addressed, a detailed plan
should be produced mapping features to work packages and work packages to tasks. The
tasks in turn should be allocated to actual resources, etc. This plan must be accepted by
the key stakeholders (including the clients).
3. Analyse the requirements associated with the features. The process may involve revising
a use case document, designing new GUI displays, and determining user interaction
sequence, while also identifying and agreeing on acceptance criteria for this iteration.
4. Analyse impact on the architecture. The architecture serves as the foundation for the
iterative process, and iterating on it is crucial to assess the impact of new features and
identify significant entities.
5. (Optionally) Revise architecture as required. The process involves reevaluating and
modifying the application architecture to accommodate the necessary features,
potentially involving core feature design and analysis to assess their architectural impact.
6. Write a new acceptance test plan and specification. A new acceptance test plan and specification should be written for this iteration, addressing the specific features within the work package.
7. Implement the features. The features are implemented through tasks, monitored as usual, and each feature should have its associated unit tests passing before it is considered complete.
8. Once the features are implemented, the new system should be tested (this includes the
generation of a test report). This includes unit tests and acceptance tests.
9. All tests should pass before the iteration is allowed to proceed. If any tests fail, then the
release cannot be deployed, and the problems must be corrected.
10. The new application should then be deployed to the client who should then perform any
agreed user acceptance tests. This may lead to the revision of the deployed system, if and
when deficiencies are identified.
11. A post-iteration meeting should review the progress made during the iteration; it should consider any issues that arose and re-prioritise any features that were not addressed. Again, this should involve all project stakeholders.
12. At this point, a decision should also be made regarding the validity of the next iteration and
whether any further iterations are required.
13. One outstanding issue for this iterative approach is what comprises the acceptance tests at
the end of an iteration.
Why Have an Architecture?
Architecture is a critical element of the object-oriented design process.
• understand the system. An architecture is a blueprint or model for large, complex software
systems, abstracting implementation details while positioning elements for functional
requirements.
• organise development. An architecture organises the "plumbers" and "electricians" of a project: it separates their concerns so that each group can focus on its own issues, while identifying where their work interconnects and ensuring those connection points are well documented and clearly specified.
• promote reuse. Code cannot be reused until it is identified as reusable. Repeated design and implementation can make it easier to produce reusable code. Class-level reuse is common in many systems, but an architecture can help identify critical systems and subsystems early on; common subsystems can then be made reusable at the subsystem level.
• promote continued development. A system commonly evolves over time, with new requirements and functionality added or modified. The original architecture helps control this evolution, both within and between releases. A good architecture requires minimal change itself but is instrumental to future releases, providing a structure into which new additions or modifications fit. It minimises the chance that the design is misinterpreted or ignored.
FDD AND XP
• Focusing on iterative project planning and FDD, agile methods like Agile Modelling
and eXtreme Programming can be used for modeling and implementing solutions.
• Feature-driven development provides a way of controlling the iterative and
incremental nature of agile projects. It does not really have anything to say about
how you implement those features.
• Features can be implemented in a variety of different ways using a variety of
techniques. However, taking an agile approach means that applying the techniques
can work extremely well
• Within a development model in which FDD is used to plan the details of iterations
and in which features are treated as the tasks to be performed, then applying Agile
Modelling and XP practices can result in a workflow resembling that presented in
Figure 5.
• We are using Agile Modelling to allow any modelling activities to take place and XP practices to implement the required behaviour.
• There is an explicit analysis step that involves some design and/or modelling work in order to determine how each feature should be implemented or broken down into tasks.
Agile Implementations
Test-Driven Development
[Figure: the TDD cycle — feature specifications drive test case design; source code is written and run against the tests; unsuccessful runs lead to bug fixing and code changes, successful runs to refactoring.]
Test-Driven Development
• Developed software often goes through a repeated maintenance process due to poor quality and an inability to satisfy customer needs.
• System functionality is decomposed into several small features.
• Test cases are designed before coding.
• Unit tests are written first for the feature specification and then the
small source code is written according to the specification.
• Source code is run against the test case.
• It is quite possible that the small piece of code written does not meet the requirements, so it will fail the test.
• After a failure, we modify the code written previously to meet the requirements and run the test again.
• If the code passes the test case, the code is considered correct. The same process is repeated for the next piece of the requirements specification.
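The cycle above can be sketched with Python's standard `unittest` module: the tests are written first from the feature specification, then the smallest piece of code that satisfies them is written and the tests are run. The discount function here is a made-up example, not from the source:

```python
import unittest

# Step 1: tests written first, from the feature specification
# ("orders over 100 get a 10% discount").
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount_over_100(self):
        self.assertEqual(apply_discount(200), 180)

    def test_no_discount_at_or_below_100(self):
        self.assertEqual(apply_discount(100), 100)

# Step 2: the smallest code that satisfies the specification.
def apply_discount(amount):
    return amount * 0.9 if amount > 100 else amount

# Step 3: run the tests; on failure, modify the code and run again.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests passed:", result.wasSuccessful())
```

If `apply_discount` were wrong (the "red" state), the run would report a failure; fixing the code and re-running moves it to "green".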
Levels of Testing
• Testing is a defect detection technique that is performed at various levels. Testing begins once a module is fully constructed.
• Although software engineers test source code after it is written, that alone is not enough to satisfy customers' needs and expectations.
• Software is developed through a series of activities, i.e., capturing customer needs, specification, design, and coding.
• Each of these activities has a different aim. Therefore, testing is performed at the various levels of the development phases to serve each of those purposes.
Unit test
• A unit is a program unit, module, component, procedure or subroutine of a system developed by the programmer.
• The aim of unit testing is to find bugs by isolating an individual module using test stubs and test drivers and by executing test cases on it.
• Unit testing is performed to detect both structural and functional errors in the module.
• Therefore, test cases for unit testing are designed using both white-box and black-box testing strategies.
• Most module errors are captured through white-box testing.
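Isolating a module with a stub and a driver, as described above, can be sketched like this. The module under test depends on a price lookup that would normally hit a database; the stub returns canned answers instead (all names are invented for illustration):

```python
# Unit under test: it depends on whatever lookup function it is given,
# so the real database module can be swapped for a stub.
def total_order_price(order_ids, price_lookup):
    return sum(price_lookup(oid) for oid in order_ids)

# Test stub: returns canned answers instead of querying a database.
def stub_price_lookup(order_id):
    canned = {"A1": 10.0, "B2": 2.5}
    return canned[order_id]

# Test driver: executes the unit against the stub and checks the result.
total = total_order_price(["A1", "B2", "A1"], stub_price_lookup)
assert total == 22.5
print("unit test passed:", total)
```

Because the unit takes its dependency as a parameter, the same code runs unchanged against the stub during unit testing and against the real module later.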
Unit test environment
Integration Testing
• Integration testing is another level of testing, which is performed after
unit testing of modules.
• It is carried out keeping in view the decomposition of the system into subsystems.
• The main goal of integration testing is to find interface errors between
modules.
• There are various approaches in which the modules are combined
together for integration testing.
• Big-bang approach
• Top-down approach
• Bottom-up approach
• Sandwich approach
Big-bang approach
• Big-bang is a simple and straightforward form of integration testing.
• In this approach, all the modules are first tested individually and then combined together and tested as a single system.
• This approach works well when there is a small number of modules in a system.
• When all modules are integrated to form the whole system at once, chaos may ensue: if a defect is found, it is difficult to identify where it occurred.
• Therefore, the big-bang approach is generally avoided for large and complex systems.
Top-down approach
• Top-down integration testing begins with the main module and moves downwards, integrating and testing its lower-level modules.
• Then the next lower level of modules is integrated and tested.
• This incremental integration and testing continues until all modules down to the concrete level are integrated and tested.
• The top-down integration testing approach is as follows:
• main system -> subsystems -> modules at the concrete level.
• In this approach, the testing of a module may be delayed if its lower-level modules are not yet available; test stubs must then stand in for them.
• Writing test stubs that simulate the behaviour of the actual modules can be a complicated and time-consuming task.
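The role of a stub in top-down integration can be sketched as follows: the main module is integrated and tested first with a stub standing in for a lower-level subsystem, and the stub is later replaced by the real module. The tax example and all names are illustrative:

```python
# Lower-level subsystem not yet integrated: a stub stands in for it.
def stub_tax_subsystem(amount):
    return 0.0  # canned answer so the main module can be exercised

# Real subsystem, integrated later in the top-down process.
def real_tax_subsystem(amount):
    return amount * 0.2

# Main (top-level) module, written against whichever subsystem it is given.
def invoice_total(amount, tax_subsystem):
    return amount + tax_subsystem(amount)

# First integration step: test the main module against the stub...
assert invoice_total(100.0, stub_tax_subsystem) == 100.0
# ...then swap in the real lower-level module and test again.
assert invoice_total(100.0, real_tax_subsystem) == 120.0
print("top-down integration steps passed")
```

The swap from stub to real module is exactly the step that happens at each level as top-down integration moves downwards.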
[Figure 9.14: Top-down integration — main module M at the top, subsystems S1, S2 and S3 below it, with their concrete-level modules (M1.1, M1.2, M2.1, M3.1, M3.2, M3.3) at the bottom.]
Bottom-up approach
• As the name implies, the bottom-up approach begins with the individual testing of the bottom-level modules in the software hierarchy.
• The lower-level modules are then merged, function-wise, to form subsystems, and the subsystems are integrated to test the main module, covering all modules of the system.
• The approach of bottom-up integration is as follows:
concrete-level modules -> subsystems -> main module.
• The bottom-up approach works in the opposite direction to top-down integration.
Sandwich approach
• Sandwich testing combines the top-down and bottom-up integration approaches.
• During sandwich testing, the top-down part requires the lower-level modules to be available, while the bottom-up part requires the upper-level modules.
• Thus, testing a module requires both its top- and bottom-level modules.
• It is often the preferred approach in testing because modules are tested as and when they become available for testing.
End To End Testing
• End-to-end testing is a Software testing methodology to test an application
flow from start to end.
• The purpose of this testing is to simulate the real user scenario and validate
the system under test and its components for integration and data integrity.
• It is performed from start to finish under real-world scenarios like
communication of the application with hardware, network, database, and
other applications.
• The main reason for carrying out this testing is to determine various
dependencies of an application as well as ensure that accurate information is
communicated between various system components. It is usually performed
after the completion of functional and system testing of any application.
End To End Testing example of Gmail:
End to End Verification of a Gmail account will include the following steps:
1.Launching a Gmail login page through URL.
2.Logging into Gmail account by using valid credentials.
3.Accessing Inbox. Opening Read and Unread emails.
4.Composing a new email, replying to or forwarding an email.
5.Opening Sent Items and checking emails.
6.Checking emails in the Spam folder.
7.Logging out of the Gmail application by clicking 'Logout'.
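In practice the steps above would be driven through a browser-automation tool; as a language-level sketch, the same start-to-finish flow can be exercised against a toy in-memory mail session. Everything here is simulated — this is not a real Gmail or webmail API:

```python
class FakeMailSession:
    """Toy stand-in for a webmail account, for illustrating an E2E flow."""
    def __init__(self):
        self.logged_in = False
        self.inbox = ["Welcome!"]
        self.sent, self.spam = [], ["You won a prize"]

    def login(self, user, password):
        self.logged_in = (password == "secret")  # simulated credential check
        return self.logged_in

    def compose(self, to, body):
        assert self.logged_in, "must be logged in to compose"
        self.sent.append((to, body))

    def logout(self):
        self.logged_in = False

# End-to-end scenario mirroring the steps listed above:
s = FakeMailSession()
assert s.login("user@example.com", "secret")  # 1-2: launch page and log in
assert s.inbox                                 # 3: access the inbox
s.compose("a@example.com", "hello")            # 4: compose a new email
assert s.sent and s.spam                       # 5-6: check Sent and Spam
s.logout()                                     # 7: log out
assert not s.logged_in
print("end-to-end scenario passed")
```

The point of the sketch is the shape of E2E testing: one test walks the whole user journey from login to logout, touching every component on the way, rather than checking one unit in isolation.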
Customer test/Acceptance Testing
User involvement is important during acceptance testing, since the software is developed for the end-users.
Acceptance testing is performed at two levels, i.e.,
Alpha testing
Beta testing.
Alpha testing is a pilot test in which customers are involved in exercising test cases.
o In alpha testing, the customer conducts tests in the development environment. The users perform the alpha test and try to pinpoint any problems in the system.
o The alpha test is conducted in a controlled environment.
o After alpha testing, the system is ready to be transported to the customer site for deployment.
Beta testing
Beta testing is performed by a limited number of friendly customers and end-users.
Beta testing is conducted at the customer site, where the software is to be deployed and used by the end-users.
o The developer may or may not be present during beta testing.
o The end-users operate the system under testing mode and note down any problem
observed during system operation.
o The defects noted by the end-users are corrected by the developer.
o If there are any major changes required, then these changes are sent to the
configuration management team.
o The configuration management team decides whether to approve or disapprove the
changes for modification in the system
Shadow testing
• Shadow testing is conducted in case of maintenance or
reengineering type of projects.
• In this testing, the new system and the legacy system are run
side-by-side and their results are compared.
• Any unusual results noted by end-users are reported to the developers so that they can take corrective action to remove the problems.
Benchmark testing
• In a benchmark test, the client prepares test cases to test the system's performance. The benchmark test is conducted either by end-users or by testers.
• Before performing benchmark test, tester or end-users must be
familiar with the functional and nonfunctional requirements of the
system.
• Benchmark testing helps to assess product’s performance against
other products in a number of areas including functionality,
durability, quality, etc.
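A minimal performance benchmark of the kind described can be written with the standard library's `timeit` module, comparing two ways of meeting the same functional requirement. The string-joining task is just an illustration:

```python
import timeit

# Two implementations of the same functional requirement.
def join_with_concat(parts):
    out = ""
    for p in parts:
        out += p
    return out

def join_builtin(parts):
    return "".join(parts)

parts = ["x"] * 10_000
assert join_with_concat(parts) == join_builtin(parts)  # same functionality

# Benchmark: total wall-clock time over repeated runs of each version.
t_concat = timeit.timeit(lambda: join_with_concat(parts), number=100)
t_join = timeit.timeit(lambda: join_builtin(parts), number=100)
print(f"concat: {t_concat:.4f}s  join: {t_join:.4f}s")
```

Checking functional equivalence before timing matters: a benchmark comparison is only meaningful if both candidates actually satisfy the same requirement.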
Version Control
• A version control system handles the frequent and rapid changes introduced into the software and allows the software to be rolled back when necessary.
• Version control, also known as source control, is the practice of tracking and managing
changes to software code. Version control systems are software tools that help software
teams manage changes to source code over time.
• As development environments have accelerated, version control systems help software
teams work faster and smarter.
• They are especially useful for DevOps teams since they help them to reduce development
time and increase successful deployments.
• Version control software keeps track of every modification to the code in a special kind of
database.
• If a mistake is made, developers can turn back the clock and compare earlier versions of
the code to help fix the mistake while minimizing disruption to all team members.
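The "special kind of database" a VCS keeps can be sketched in a few lines: store a snapshot of the content under a hash each time it changes, so any earlier version can be recovered and compared. This is a toy model of the idea, not how Git actually stores objects:

```python
import hashlib

history = []  # ordered list of (version_hash, content) snapshots

def commit(content: str) -> str:
    """Record a snapshot of the file and return its version id."""
    h = hashlib.sha1(content.encode()).hexdigest()[:8]
    history.append((h, content))
    return h

def checkout(version: str) -> str:
    """Turn back the clock: recover the content of an earlier version."""
    return next(c for h, c in history if h == version)

v1 = commit("print('hello')\n")
v2 = commit("print('hello world')\n")  # a later, possibly buggy, change
# Compare the current code against the earlier version to fix a mistake:
assert checkout(v1) == "print('hello')\n"
print("recovered version", v1)
```

Real systems add authors, dates, commit messages, branching and merging on top, but the core mechanism — every modification recorded as a retrievable snapshot — is the same.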
Advantages of version control systems
• Using version control software is a best practice for high performing software and DevOps
teams.
• Version control also helps developers move faster and allows software teams to preserve
efficiency and agility as the team scales to include more developers.
• Version Control Systems (VCS) have seen great improvements over the past few decades and
some are better than others.
• VCS are sometimes known as SCM (Source Code Management) tools or RCS (Revision
Control System).
• One of the most popular VCS tools in use today is called Git. Git is a Distributed VCS, a
category known as DVCS
The main benefits you should expect from version control are:
• A complete long-term change history of every file. This means every change
made by many individuals over the years. Changes include the creation and
deletion of files as well as edits to their contents.
• Different VCS tools differ on how well they handle renaming and moving of
files. This history should also include the author, date and written notes on
the purpose of each change.
• Having the complete history enables going back to previous versions to help
in root cause analysis for bugs and it is crucial when needing to fix problems
in older versions of software. If the software is being actively worked on,
almost everything can be considered an "older version" of the software
Advantages of version control systems
• Traceability: being able to trace each change made to the software and connect it to project management and bug tracking software such as Jira, and being able to annotate each change with a message describing its purpose and intent, helps with root cause analysis and other forensics.
• Having the annotated history of the code at your fingertips when you are
reading the code, trying to understand what it is doing and why it is so
designed can enable developers to make correct and harmonious changes
that are in accord with the intended long-term design of the system.
• This can be especially important for working effectively with legacy code
and is crucial in enabling developers to estimate future work with any accuracy.
Continuous Integration
• Continuous integration. New code is integrated and the system rebuilt every time a
task is completed (which may be many times a day).
• The aim when implementing "continuous integration" is not to integrate every 5 minutes, but between one and several times per day.
• The aim is to avoid the problems encountered with big bang integrations. Big bang
integrations happen when a period of time (typically days or weeks rather than
hours) has elapsed. In many situations, the act of integrating all the code can take
days in itself.
• In one project, the integration took a week just to get to the point that all the code
compiled (it had yet to be tested!). One developer, in particular, seemed to have gone
off on their own causing chaos.
• Big bang integrations slow development projects down and can help to create a
culture of blame.
Continuous Integration
The reason for regular integration (every few hours) is to help you find out:
1. Have you broken anything?
2. Has anyone broken anything you have done with their changes?
The key to continuous integration is that pair programmers should work in
small steps and that these small steps can be integrated. Remember the way
in which pair programmers should work:
1. Write a test.
2. Write the code stubs.
3. Make sure everything compiles so far.
4. Run the test – it should fail. That’s okay.
5. Implement stubs.
6. Make sure the test is passed before continuing.
7. Make sure all tests can pass before continuing further.
8. Integrate the now working code into the current build.
9. Return to step 1 until complete.
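The rule in steps 6–8 — all tests must pass before the working code is integrated into the current build — can be sketched as a small gate function. The build list, change names and test are illustrative, not a real CI tool's API:

```python
import unittest

def add(a, b):  # the small step just implemented by the pair
    return a + b

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

def integrate_if_green(build: list, change: str) -> bool:
    """Integrate the change into the current build only if all tests pass."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    if result.wasSuccessful():
        build.append(change)  # step 8: integrate the now-working code
        return True
    return False              # something is broken: fix it first

build = ["feature-login"]
merged = integrate_if_green(build, "feature-add")
print("integrated:", merged, "build:", build)
```

Real CI servers do the same thing at project scale: every pushed change triggers the full test suite, and only a green run is allowed into the shared build.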

More Related Content

PPT
what-is-devops.ppt
PPTX
Rup
PPT
Rational unified process lecture-5
PPT
Unified process
PDF
Management of time uncertainty in agile
PPTX
Lect6 life cycle phases
PDF
Software vjhghjjkhjkkkghhjhEngineering.pdf
PPSX
SDLC Methodologies
what-is-devops.ppt
Rup
Rational unified process lecture-5
Unified process
Management of time uncertainty in agile
Lect6 life cycle phases
Software vjhghjjkhjkkkghhjhEngineering.pdf
SDLC Methodologies

Similar to Agile Implementations (20)

PDF
Agile mODEL
PPTX
Process model in software engineering note ppt complete
PPSX
SDLC Methodologies
PPSX
SDLC Method Training Course
PPSX
Software Development Life Cycle – SDLC
PPSX
SDLC
PPSX
Software Development
PPTX
ecse ppt.pptx
PPTX
ecse ppt.pptx
PPTX
Phases in Agile Development- 9.pptx
PPTX
Software Engineering And Project Management Basics
PPTX
Object Oriented Software engineering.pptx
PDF
Software Process Models
PPTX
Software process models shaukat wasi
DOC
Chapter 1,2,3,4 notes
PPTX
Applying both of waterfall and iterative development
PPSX
Step by Step Guide to Learn SDLC
PPTX
Software Engineering Unit 1 PowerPoint presentation For AKTU University
PPTX
4_59247024118127714222222222222222255.pptx
DOCX
process models- software engineering
Agile mODEL
Process model in software engineering note ppt complete
SDLC Methodologies
SDLC Method Training Course
Software Development Life Cycle – SDLC
SDLC
Software Development
ecse ppt.pptx
ecse ppt.pptx
Phases in Agile Development- 9.pptx
Software Engineering And Project Management Basics
Object Oriented Software engineering.pptx
Software Process Models
Software process models shaukat wasi
Chapter 1,2,3,4 notes
Applying both of waterfall and iterative development
Step by Step Guide to Learn SDLC
Software Engineering Unit 1 PowerPoint presentation For AKTU University
4_59247024118127714222222222222222255.pptx
process models- software engineering
Ad

More from TSANKARARAO (7)

PPT
rosario-math-foundations statistics foundation
PPT
Text mining turban_dss9e_ch07 to learn about
PPT
4_22865_IS465_2019_1__2_1_08ClassBasic.ppt
PPT
NeuralNetworksbasics for Deeplearning
PPTX
Module 2 - Part2.pptx
PDF
module3part-1-bigdata-230301002404-3db4f2a4 (1).pdf
PDF
Computer org Architecture module2 ppt.pdf
rosario-math-foundations statistics foundation
Text mining turban_dss9e_ch07 to learn about
4_22865_IS465_2019_1__2_1_08ClassBasic.ppt
NeuralNetworksbasics for Deeplearning
Module 2 - Part2.pptx
module3part-1-bigdata-230301002404-3db4f2a4 (1).pdf
Computer org Architecture module2 ppt.pdf
Ad

Recently uploaded (20)

PDF
Abrasive, erosive and cavitation wear.pdf
PPTX
Software Engineering and software moduleing
PPTX
Sorting and Hashing in Data Structures with Algorithms, Techniques, Implement...
PDF
Human-AI Collaboration: Balancing Agentic AI and Autonomy in Hybrid Systems
PPTX
ASME PCC-02 TRAINING -DESKTOP-NLE5HNP.pptx
PPTX
Graph Data Structures with Types, Traversals, Connectivity, and Real-Life App...
PPTX
CyberSecurity Mobile and Wireless Devices
PPTX
AUTOMOTIVE ENGINE MANAGEMENT (MECHATRONICS).pptx
PPTX
Fundamentals of safety and accident prevention -final (1).pptx
PPTX
Management Information system : MIS-e-Business Systems.pptx
PDF
SMART SIGNAL TIMING FOR URBAN INTERSECTIONS USING REAL-TIME VEHICLE DETECTI...
PPTX
Fundamentals of Mechanical Engineering.pptx
PPTX
Module 8- Technological and Communication Skills.pptx
PDF
August -2025_Top10 Read_Articles_ijait.pdf
PDF
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
PDF
August 2025 - Top 10 Read Articles in Network Security & Its Applications
PDF
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
PPT
INTRODUCTION -Data Warehousing and Mining-M.Tech- VTU.ppt
PDF
Exploratory_Data_Analysis_Fundamentals.pdf
PDF
Soil Improvement Techniques Note - Rabbi
Abrasive, erosive and cavitation wear.pdf
Software Engineering and software moduleing
Sorting and Hashing in Data Structures with Algorithms, Techniques, Implement...
Human-AI Collaboration: Balancing Agentic AI and Autonomy in Hybrid Systems
ASME PCC-02 TRAINING -DESKTOP-NLE5HNP.pptx
Graph Data Structures with Types, Traversals, Connectivity, and Real-Life App...
CyberSecurity Mobile and Wireless Devices
AUTOMOTIVE ENGINE MANAGEMENT (MECHATRONICS).pptx
Fundamentals of safety and accident prevention -final (1).pptx
Management Information system : MIS-e-Business Systems.pptx
SMART SIGNAL TIMING FOR URBAN INTERSECTIONS USING REAL-TIME VEHICLE DETECTI...
Fundamentals of Mechanical Engineering.pptx
Module 8- Technological and Communication Skills.pptx
August -2025_Top10 Read_Articles_ijait.pdf
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
August 2025 - Top 10 Read Articles in Network Security & Its Applications
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
INTRODUCTION -Data Warehousing and Mining-M.Tech- VTU.ppt
Exploratory_Data_Analysis_Fundamentals.pdf
Soil Improvement Techniques Note - Rabbi

Agile Implementations

  • 2. Topics • Feature-Driven Development: Introduction, incremental software development, Regaining Control, motivation behind FDD, planning an iterative project, architecture centric, FDD and XP. • Test Driven Development: Unit Tests, Integration Tests, End- to-End Tests, Customer Tests. • Release Management: Version Control, Continuous Integration
  • 3. Incremental Software Development • An incremental software development process is one that does not try to complete the whole design task in one go. • This is in contrast to the more traditional waterfall model of software development. • When applying some form of iterative approach, the intention is that each iteration adds something to the evolving system. • Some iterations may lead to a release of the software system, while others may not
  • 4. Incremental Software Development Each iteration 1. Determines what will be done during the iteration. 2. Designs and implements the required functionality. 3. Tests the new functionality. 4. (Optionally) Creates a new release. 5. Reviews what has been done before moving to the next iteration
  • 6. Incremental Software Development • Figure depicts the spiral nature of this approach to software development. here each iteration around the spiral is a mini-software development project. • The end result is that you incrementally produce the system being designed. • While you do this, you explicitly identify the risks to your design/system upfront and deal with them early on • The iterations within XP projects tend to be between 1 and 3 weeks, with new releases at the end of most iterations • An iterative approach may still be relevant, but the iterations may be in terms of months rather than weeks
  • 7. Feature-Driven Development • FDD stands for Feature-Driven Development. It is an agile iterative and incremental model that focuses on progressing the features of the developing software. • The main motive of feature-driven development is to provide timely updated and working software to the client. • In FDD, reporting and progress tracking is necessary at all levels.
FDD Lifecycle
• Build overall model
• Build feature list
• Plan by feature
• Design by feature
• Build by feature
Characteristics of FDD
• Short iterations: the FDD lifecycle works in simple, short iterations to finish work efficiently and on time, and maintains a good pace on large projects.
• Customer focused: in this agile practice, each feature is inspected by the client before being pushed to the main build.
• Structured and feature focused: the initial lifecycle activities build the domain model and the feature list at the beginning of the timeline.
• Frequent releases: feature-driven development delivers continuous releases of features, sustaining the continued success of the project.
Advantages of FDD
• Reporting at all levels makes progress tracking easier.
• FDD provides continued success for larger teams and projects.
• Risk is reduced because the whole model and design are built in smaller segments.
• Feature segmentation gives greater accuracy in project cost estimation.
Disadvantages of FDD
• This agile practice is not well suited to smaller projects.
• There is a high dependency on lead programmers, designers and mentors.
• The lack of documentation can create issues later.
Regaining Control: The Motivation Behind FDD
• How can such a project be managed in an agile manner? This is where feature-driven development comes in.
• To regain control of an iterative project, the guidelines are:
• The process should be feature-centric. This means that the units of requirements (e.g., use cases, user stories) should be unified with the units of planning (e.g., tasks).
• Project planning should be based around timeboxes (rather than phases) so that the length of each iteration is known.
• The project plan should be adaptive, that is, responsive to the changing risks and benefits of the system and the business environment.
Feature-Centric Development
• The term feature-centric refers to development processes that combine the expression of requirements with the units of activity used for planning.
• A feature in such a process can be viewed as a unit of "plannable functionality." Feature-driven development (FDD) uses features in this way.
• Features are closely related to use cases and to the realisation of use cases in the standard Unified Process.
• A feature is a schedulable piece of functionality, something that delivers value to the user.
• The emphasis is on schedulable: a feature is derived from a planning perspective rather than from the user perspective or, indeed, the requirements perspective.
To aid in planning, features go further; they must also be associated with:
• a priority (so that they can be ordered),
• a cost (so that they can be accounted for),
• resources (so that they can be scheduled).
Costs and resources can be determined by examining the number of person-days taken to accomplish the feature.
PLANNING AN ITERATIVE PROJECT
Before any project embarks on an iterative development process, there are a number of steps that should be followed:
• Identify and prioritise features (the feature list should be continually revised throughout the project).
• Roughly identify iterations and allocate features.
• Timebox iterations and calculate costs.
• For each iteration:
◦ Plan the iteration (plans should be continually revised during the lifetime of the project).
◦ Identify the tasks required to implement the features.
◦ Allocate tasks to resources (that is, allocate tasks to project members).
◦ Implement the iteration.
The key here is that iterations are based on "timeboxes" so that their length is known and can be managed. Iterations are also based on tasks constructed around features so that they can be responsive to user feedback and to changing business requirements.
Iterations, Timeboxes and Releases
• At the start of the project, the project team, along with the various project stakeholders, creates a prioritised feature list. This cannot be done without the collaboration of those stakeholders who can state what the priorities of the features should be.
• Priorities such as High, Medium and Low are sufficient at this stage. Costs are related to implementation time, typically estimated using a three-point estimation approach.
• This requires a best-case estimate, an average estimate and a worst-case estimate; the overall cost is derived from these three estimates.
• Finally, the number of software engineers involved with the feature is also estimated.
• The process then involves determining the expected number of iterations, their duration and their features, requiring the involvement of business representatives with the necessary knowledge and authority. From this, we emerge with an outline plan for what will be done when, and for the points at which we will complete the various iterations of the end system.
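The slides do not name a formula for combining the three estimates; a common choice is the PERT weighted average, E = (best + 4 × average + worst) / 6. A minimal sketch, assuming that weighting:

```python
def three_point_estimate(best: float, average: float, worst: float) -> float:
    """Combine three estimates (in person-days) into one expected cost.

    Assumption: the course text does not specify a formula; the classic
    PERT weighting (best + 4 * most-likely + worst) / 6 is used here.
    """
    return (best + 4 * average + worst) / 6

# Example: a feature estimated at 2 (best), 4 (average) and 12 (worst) person-days.
cost = three_point_estimate(2, 4, 12)
print(cost)  # 5.0
```

The heavy weighting of the middle estimate keeps one pessimistic outlier from dominating the plan, while still pulling the expected cost above the most likely value.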
Overall structure of an FDD Project
The key steps in any iteration are:
1. Iteration initiation meeting. The iteration's length, features and resources should be revised and confirmed in a meeting involving all stakeholders.
2. Plan features for the iteration. Having agreed the features to be addressed, a detailed plan should be produced mapping features to work packages and work packages to tasks. The tasks in turn should be allocated to actual resources. This plan must be accepted by the key stakeholders (including the clients).
3. Analyse the requirements associated with the features. This may involve revising a use case document, designing new GUI displays and determining the sequence of user interaction, while also identifying and agreeing the acceptance criteria for this iteration.
4. Analyse the impact on the architecture. The architecture serves as the foundation for the iterative process, so the impact of the new features on it must be assessed and any significant new entities identified.
5. (Optionally) revise the architecture as required. This may involve designing and analysing the core features in order to assess their architectural impact and modifying the application architecture to accommodate them.
6. Write a new acceptance test plan and specification. A new acceptance test plan and specification should be written for this iteration, addressing the specific features within the work package.
7. Implement the features. The features are implemented through tasks, monitored as usual; each feature should have its associated unit tests passing before it is considered complete.
8. Once the features are implemented, the new system should be tested (this includes the generation of a test report). This includes unit tests and acceptance tests.
9. All tests should pass before the iteration is allowed to proceed. If any tests fail, the release cannot be deployed and the problems must be corrected.
10. The new application should then be deployed to the client, who should perform any agreed user acceptance tests. This may lead to revision of the deployed system if and when deficiencies are identified.
11. A post-iteration meeting should review the progress made during the iteration; it should consider any issues that arose and re-prioritise any features that were not addressed. Again, this should involve all project stakeholders.
12. At this point, a decision should also be made regarding the validity of the next iteration and whether any further iterations are required.
13. One outstanding issue for this iterative approach is what comprises the acceptance tests at the end of an iteration.
Why Have an Architecture?
Architecture is a critical element of the object-oriented design process. An architecture helps to:
• Understand the system. An architecture is a blueprint or model for a large, complex software system; it abstracts away implementation details while positioning the elements that satisfy the functional requirements.
• Organise development. Just as a building plan lets "plumbers" and "electricians" work separately, a software architecture separates concerns so that teams can focus on their own areas, with the points at which their work interconnects well documented and clearly specified.
• Promote reuse. Producing reusable code requires identifying what is reusable, and repeated design and implementation make that easier. Class-level reuse is common in many systems, but an architecture can help identify critical systems and subsystems early on; common subsystems can then be made reusable.
• Promote continued development. Systems evolve over time as new requirements and functionality are added or modified. The original architecture helps control this evolution, both within and between releases. A good architecture requires minimal change yet is instrumental to future releases, providing a structure into which new additions or modifications fit and minimising the risk of the design being misinterpreted or ignored.
FDD AND XP
• While FDD focuses on iterative project planning, agile methods such as Agile Modelling and eXtreme Programming can be used for modelling and implementing the solutions.
• Feature-driven development provides a way of controlling the iterative and incremental nature of agile projects. It does not really have anything to say about how you implement those features.
• Features can be implemented in a variety of ways using a variety of techniques. Taking an agile approach to applying those techniques, however, can work extremely well.
• Within a development model in which FDD is used to plan the details of iterations, and in which features are treated as the tasks to be performed, applying Agile Modelling and XP practices can result in a workflow such as that presented in Figure 5.
• Here, Agile Modelling allows any modelling activities to take place, and XP practices are used to implement the required behaviour.
• There is an explicit analysis step involving some design and/or modelling work in order to determine how each feature should be implemented or broken down into tasks.
Test-Driven Development
(Figure: the TDD cycle — feature specifications feed test case design; source code is written and run against the tests; an unsuccessful run leads to bug fixing and a code change, while a successful run leads to refactoring and working software.)
Test-Driven Development
• Developed software often goes through a repeated maintenance process due to a lack of quality and an inability to satisfy customer needs.
• System functionality is decomposed into several small features.
• Test cases are designed before coding: unit tests are written first for the feature specification, and then a small amount of source code is written according to that specification.
• The source code is run against the test case. It is quite possible that the code does not yet meet the requirements and therefore fails the test.
• After a failure, the code is modified to meet the requirements and run again.
• If the code passes the test case, the code is correct. The same process is repeated for the next part of the requirements specification.
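The cycle above can be sketched with Python's built-in unittest module. The leap-year feature used here is a hypothetical example, not from the course text:

```python
import unittest

# Step 1: the unit test is written first, from the feature specification.
class TestLeapYear(unittest.TestCase):
    def test_typical_years(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

    def test_century_rule(self):
        self.assertFalse(is_leap_year(1900))  # centuries are not leap years...
        self.assertTrue(is_leap_year(2000))   # ...unless divisible by 400

# Step 2: a first, naive attempt (`year % 4 == 0`) would fail
# test_century_rule; this is the version after modifying the code
# until all tests pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the source code against the test cases.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLeapYear)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The important point is the order: the tests encode the specification before any implementation exists, so a passing run is direct evidence that the feature meets its specification.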
Levels of Testing
• Testing is a defect detection technique performed at various levels. Testing begins once a module has been fully constructed.
• Although software engineers test source code after it is written, this alone is not enough to satisfy customers' needs and expectations.
• Software is developed through a series of activities: capturing customer needs, specification, design and coding.
• Each of these activities has different aims; therefore, testing is performed at various levels of the development phases to serve those aims.
Unit Testing
• A unit is a program unit: a module, component, procedure or subroutine of a system developed by the programmer.
• The aim of unit testing is to find bugs by isolating an individual module, using test stubs and test drivers, and by executing test cases on it.
• Unit testing is performed to detect both structural and functional errors in the module.
• Therefore, test cases for unit testing are designed using both white-box and black-box testing strategies.
• Most module errors are captured through white-box testing.
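A test driver exercises the module under test, while a test stub stands in for a lower-level module it depends on. A minimal sketch — the tax-lookup example is hypothetical:

```python
# Module under test: computes a gross price from a net price and a
# tax-rate lookup, which in the real system would query a database.
def gross_price(net: float, tax_lookup) -> float:
    return round(net * (1 + tax_lookup()), 2)

# Test stub: replaces the real (perhaps unavailable) tax-rate module
# with a fixed, predictable answer.
def stub_tax_lookup() -> float:
    return 0.20  # pretend the database always returns 20% VAT

# Test driver: calls the isolated module with the stub and checks it.
def test_gross_price():
    assert gross_price(100.0, stub_tax_lookup) == 120.0
    assert gross_price(50.0, stub_tax_lookup) == 60.0

test_gross_price()
```

Because the stub's answer is fixed, any failure must come from the module under test itself — which is exactly the isolation that unit testing is after.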
Integration Testing
• Integration testing is the next level of testing, performed after the unit testing of modules.
• It is carried out with the decomposition of the system into subsystems in mind.
• The main goal of integration testing is to find interface errors between modules.
• There are various approaches in which the modules are combined for integration testing:
• Big-bang approach
• Top-down approach
• Bottom-up approach
• Sandwich approach
Big-Bang Approach
• Big-bang is a simple and straightforward form of integration testing.
• In this approach, all the modules are first tested individually and then combined and tested as a single system.
• This approach works well when there is a small number of modules in a system.
• When all the modules are integrated into the whole system at once, chaos may ensue: if a defect is found, it becomes difficult to identify where it occurred.
• Therefore, the big-bang approach is generally avoided for large and complex systems.
Top-Down Approach
• Top-down integration testing begins with the main module and moves downwards, integrating and testing its lower-level modules.
• The next lower level of modules is then integrated and tested, and this incremental integration and testing continues until all modules down to the concrete level are integrated and tested.
• The top-down integration order is: main system -> subsystems -> modules at the concrete level.
• In this approach, the testing of a module may be delayed if its lower-level modules are not yet available; test stubs simulate them in the meantime.
• Writing test stubs that simulate the behaviour of actual modules can be a complicated and time-consuming task.
Integration Testing: Top-Down Approach
(Figure 9.14: top-down integration — main module M, subsystems S1, S2, S3, and concrete-level modules M1.1, M1.2, M2.1, M3.1, M3.2, M3.3.)
Bottom-Up Approach
• As the name implies, the bottom-up approach begins with the individual testing of the bottom-level modules in the software hierarchy.
• The lower-level modules are then merged, function-wise, to form subsystems, and the subsystems are integrated to test the main module, covering all modules of the system.
• The bottom-up integration order is: concrete-level modules -> subsystems -> main module.
• The bottom-up approach works in the opposite direction to top-down integration.
Sandwich Approach
• Sandwich testing combines the top-down and bottom-up integration approaches.
• During sandwich testing, the top-down part requires the lower-level modules to be available, while the bottom-up part requires the upper-level modules.
• Thus, testing a module requires both its top-level and bottom-level modules.
• It is often the preferred approach, because modules are tested as and when they become available for testing.
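The difference between the orders can be illustrated on a tiny two-level hierarchy (the `report` and `fetch_total` names are hypothetical): top-down tests the upper module first with a stub standing in for the lower one, then swaps in the real module; bottom-up reverses the order:

```python
# Lower-level (concrete) module.
def fetch_total() -> int:
    return sum([10, 20, 30])  # in a real system: read from storage

# Upper-level module, which depends on the lower one.
def report(fetch=fetch_total) -> str:
    return f"total={fetch()}"

# Top-down, step 1: test `report` alone, with a stub for fetch_total.
assert report(fetch=lambda: 42) == "total=42"

# Top-down, step 2: replace the stub with the real lower module and
# re-test the integrated pair -- this is the integration test proper,
# checking the interface between the two modules.
assert report() == "total=60"

# Bottom-up would run in the opposite order: first test fetch_total
# via a driver (the assert below), then integrate upwards into `report`.
assert fetch_total() == 60
```

Sandwich testing would run both directions at once, meeting in the middle of a deeper hierarchy than this two-level sketch.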
End-to-End Testing
End-to-End Testing
• End-to-end testing is a software testing methodology that tests an application flow from start to finish.
• The purpose of this testing is to simulate real user scenarios and validate the system under test and its components for integration and data integrity.
• It is performed from start to finish under real-world conditions, such as the application's communication with hardware, the network, databases and other applications.
• The main reasons for carrying out this testing are to determine the various dependencies of an application and to ensure that accurate information is communicated between the various system components. It is usually performed after the completion of functional and system testing of an application.
End-to-End Testing: Gmail Example
End-to-end verification of a Gmail account includes the following steps:
1. Launching the Gmail login page via its URL.
2. Logging into the Gmail account using valid credentials.
3. Accessing the Inbox; opening read and unread emails.
4. Composing a new email; replying to or forwarding an email.
5. Opening Sent Items and checking the emails there.
6. Checking the emails in the Spam folder.
7. Logging out of the Gmail application by clicking 'Logout'.
Customer Tests / Acceptance Testing
• User involvement is important during acceptance testing of the software, as it is developed for the end-users.
• Acceptance testing is performed at two levels: alpha testing and beta testing.
• Alpha testing is pilot testing in which customers are involved in exercising test cases.
• In alpha testing, the customer conducts tests in the development environment. The users perform the alpha test and try to pinpoint any problems in the system.
• The alpha test is conducted in a controlled environment.
• After alpha testing, the system is ready to be transported to the customer site for deployment.
Beta Testing
• Beta testing is performed by a limited set of friendly customers and end-users.
• It is conducted at the customer site, where the software is to be deployed and used by the end-users.
• The developer may or may not be present during beta testing.
• The end-users operate the system in testing mode and note down any problems observed during operation.
• The defects noted by the end-users are corrected by the developer.
• If any major changes are required, they are sent to the configuration management team.
• The configuration management team decides whether to approve or disapprove the changes for modification of the system.
Shadow Testing
• Shadow testing is conducted for maintenance or re-engineering projects.
• In this testing, the new system and the legacy system are run side by side and their results are compared.
• Any unusual results noted by the end-users are reported to the developers so that they can take corrective action to remove the problems.
Benchmark Testing
• In a benchmark test, the client prepares test cases to test the system's performance. The benchmark test is conducted either by end-users or by testers.
• Before performing a benchmark test, the testers or end-users must be familiar with the functional and non-functional requirements of the system.
• Benchmark testing helps to assess a product's performance against other products in a number of areas, including functionality, durability and quality.
Version Control
• A version control system handles the frequent and rapid changes introduced into the software and allows the software to be rolled back when necessary.
• Version control, also known as source control, is the practice of tracking and managing changes to software code. Version control systems are software tools that help software teams manage changes to source code over time.
• As development environments have accelerated, version control systems help software teams work faster and smarter.
• They are especially useful for DevOps teams, since they help to reduce development time and increase the rate of successful deployments.
• Version control software keeps track of every modification to the code in a special kind of database.
• If a mistake is made, developers can turn back the clock and compare earlier versions of the code to help fix the mistake while minimising disruption to the rest of the team.
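In tools such as Git, that "special kind of database" is a content-addressable store: each version of a file is saved under a hash of its contents, so any earlier version can be retrieved exactly. A toy sketch of the idea — not Git's actual on-disk format:

```python
import hashlib

store: dict[str, bytes] = {}   # hash -> file contents (the "database")
history: list[str] = []        # ordered list of committed versions

def commit(contents: bytes) -> str:
    """Save a snapshot under the SHA-1 of its contents (as Git does for blobs)."""
    digest = hashlib.sha1(contents).hexdigest()
    store[digest] = contents
    history.append(digest)
    return digest

def checkout(digest: str) -> bytes:
    """Roll back: retrieve any earlier version exactly as it was committed."""
    return store[digest]

v1 = commit(b"print('hello')\n")
v2 = commit(b"print('hello, world')\n")    # a later modification
assert checkout(v1) == b"print('hello')\n"  # the old version is intact
```

Because the key is derived from the contents, identical snapshots are stored only once, and any corruption is detectable by re-hashing — two properties real version control systems rely on.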
Advantages of Version Control Systems
• Using version control software is a best practice for high-performing software and DevOps teams.
• Version control also helps developers move faster and allows software teams to preserve efficiency and agility as the team scales to include more developers.
• Version control systems (VCS) have seen great improvements over the past few decades, and some are better than others.
• VCS are sometimes known as SCM (Source Code Management) tools or RCS (Revision Control Systems).
• One of the most popular VCS tools in use today is Git. Git is a distributed VCS, a category known as DVCS.
The main benefits you should expect from version control are:
• A complete long-term change history of every file: every change made by many individuals over the years, including the creation and deletion of files as well as edits to their contents.
• Different VCS tools differ in how well they handle the renaming and moving of files. The history should also include the author, the date and written notes on the purpose of each change.
• Having the complete history enables going back to previous versions to help with root cause analysis for bugs, and it is crucial when fixing problems in older versions of the software. If the software is being actively worked on, almost everything can be considered an "older version" of the software.
Advantages of Version Control Systems (continued)
• Traceability: being able to trace each change made to the software, connect it to project management and bug-tracking software such as Jira, and annotate each change with a message describing its purpose and intent helps not only with root cause analysis but also with other forensics.
• Having the annotated history of the code at your fingertips when reading the code, and trying to understand what it is doing and why it is designed that way, enables developers to make correct and harmonious changes that are in accord with the intended long-term design of the system.
• This can be especially important for working effectively with legacy code and is crucial in enabling developers to estimate future work with any accuracy.
Continuous Integration
• Continuous integration: new code is integrated and the system rebuilt every time a task is completed (which may be many times a day).
• The aim when implementing "continuous integration" is not to integrate every five minutes, but between one and several times per day.
• The aim is to avoid the problems encountered with big-bang integrations, which happen when a long period of time (typically days or weeks rather than hours) has elapsed between integrations. In many situations, the act of integrating all the code can itself take days.
• In one project, the integration took a week just to get to the point where all the code compiled (it had yet to be tested!). One developer in particular seemed to have gone off on their own, causing chaos.
• Big-bang integrations slow development projects down and can help to create a culture of blame.
The reasons for regular integration (every few hours) are to help you find out:
1. Have you broken anything?
2. Has anyone else broken anything of yours with his or her changes?
The key to continuous integration is that pair programmers should work in small steps and that these small steps can be integrated. Remember the way in which pair programmers should work:
1. Write a test.
2. Write the code stubs.
3. Make sure everything compiles so far.
4. Run the test – it should fail. That's okay.
5. Implement the stubs.
6. Make sure the test passes before continuing.
7. Make sure all tests pass before continuing further.
8. Integrate the now-working code into the current build.
9. Return to step 1 until complete.
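The nine steps above can be sketched in Python (the `discount` feature is a hypothetical example): the test is written first, the stub makes everything compile but fail, and the implementation then makes the test pass before integration.

```python
# Step 1: write the test first.
def test_discount():
    assert discount(200.0) == 180.0   # feature: 10% off orders over 100
    assert discount(100.0) == 100.0   # no discount at or below 100

# Step 2: write a code stub -- it compiles, but does nothing useful yet.
def discount(price: float) -> float:
    raise NotImplementedError  # stub

# Step 4: run the test; it should fail. That's okay (the "red" step).
try:
    test_discount()
    failed = False
except NotImplementedError:
    failed = True
assert failed  # the stub fails as expected

# Step 5: implement the stub.
def discount(price: float) -> float:
    return price * 0.9 if price > 100 else price

# Steps 6-7: make sure the test passes before continuing ("green").
test_discount()  # step 8 would now integrate this code into the build
```

Because each cycle is this small, the subsequent integration step touches only a few lines, which is what makes integrating several times a day practical.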