VIVA-Tech International Journal for Research and Innovation, Volume 1, Issue 4 (2021)
ISSN (Online): 2581-7280
VIVA Institute of Technology
9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021)
www.viva-technology.org/New/IJRI
THE USABILITY METRICS FOR USER EXPERIENCE
Prachi Desul¹, Prof. Chandani Patel²
¹(Department of MCA, VIVA School of MCA / University of Mumbai, India)
²(Department of MCA, VIVA School of MCA / University of Mumbai, India)
Abstract: The Usability Metric for User Experience (UMUX) is a four-item Likert scale used for the subjective assessment of an application's perceived usability. It is designed to provide results similar to those obtained with the 10-item System Usability Scale, and is organized around the ISO 9241-11 definition of usability. A pilot version was assembled from candidate items, which was then tested alongside the System Usability Scale during usability testing. It was shown that the two scales correlate well, are reliable, and both align on one underlying usability factor. In addition, the Usability Metric for User Experience is compact enough to serve as a usability module in a broader user experience metric.
Keywords — metric, system usability scale, usability, user experience.
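For concreteness, here is a minimal sketch of how a UMUX score is computed under the published scoring rule (Finstad, 2010): four 7-point items, with odd items positively worded and even items negatively worded, scaled to 0–100 like the SUS. The function name and sample responses are illustrative:

```python
def umux_score(responses):
    """Score the four 7-point UMUX items on a 0-100 scale.

    Odd items (1 and 3) are positively worded: contribution = response - 1.
    Even items (2 and 4) are negatively worded: contribution = 7 - response.
    The sum of contributions (max 24) is scaled to 0-100, like the SUS.
    """
    assert len(responses) == 4 and all(1 <= r <= 7 for r in responses)
    contributions = [(r - 1) if i % 2 == 0 else (7 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 100 / 24

print(umux_score([6, 2, 7, 1]))  # a favorable response pattern -> about 91.7
```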
I. INTRODUCTION
Usability can be measured, but it rarely is. Metrics are expensive and are a poor use of typically scarce
usability resources. Although measuring usability can cost four times as much as conducting
qualitative studies (which often generate better insight), metrics are sometimes well worth the expense. Among
other things, metrics can help managers track design progress and support decisions about when to release a
product. As organizations increase their usability investments, collecting actual measurements is a natural next
step and does provide benefits. In general, usability metrics let you:
Track progress between releases. You cannot fine-tune your methodology unless you know how well you're doing.
Assess your competitive position. Are you better or worse than other companies? Where are you better or worse?
Make a stop/go decision before launch. Is the design good enough to release to an unsuspecting world?
Create bonus plans for design managers and higher-level executives. For example, you could determine bonus amounts for development project leaders based on how many customer-support calls or emails their products generated during the year.
Usability is a quality attribute that assesses how easy user interfaces are to use. The word "usability" also
refers to methods for improving ease of use during the design process.
Usability is defined by 5 quality components:
Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency: Once users have learned the design, how quickly can they perform tasks?
Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency?
Errors: How many errors do users make, how severe are these errors, and how easily can they recover from them?
Satisfaction: How pleasant is it to use the design?
There are many other important quality attributes. A key one is utility, which refers to the design's functionality: Does it do what users need?
Usability and utility are equally important and together determine whether something is useful: it matters little that something is easy to use if it isn't what you want.
II. RELATED WORK
Software development organizations contain marketing, design, project management, development, and
quality assurance teams. It is important for the various teams within the organization to understand the
advantages and limitations of incorporating various usability testing methods within the software development
life cycle. Some reasons for poor usability include effort-prioritization conflicts among the development, project
management, and design teams. The role of the usability engineer is to get involved as
the heuristic evaluator and to ensure that development and design efforts are based on usability principles while
adhering to the project timeline. Two approaches to usability inspection
are user experience testing and expert review, more commonly referred to as Heuristic Evaluation
(HE). This paper focuses on understanding the strength of HE as a strategy for defect detection. The results also
reinforce the need to integrate traditional heuristics with modified heuristics customized to the domain or field
of the project being tested, such as e-Government. [1]
Describes an innovative methodology developed for usability tests of the IEEE PCS Web site that
combines heuristic evaluation and task-based testing. Tests conducted on the PCS Web site evaluated
whether the site facilitated members' ability to find information and participate in
discussions, as well as developers' ability to find, contribute, and manage administrative
information on the site. The distinctive social characteristics of Communities of Practice (CoPs)
provide context for tailoring design heuristics for informational Web sites that serve the
requirements and interests of CoP members. The discussion emphasizes technical communication
principles that apply not only to evaluating the effectiveness of the PCS Web site design but also to
all centralised technical communication products and
media that increasingly demand user participation. [2] Also proposed is a usability testing method that alters a
given usability testing method to make it less expensive and less time-consuming for the investigator. The
use of user-centred methods is encouraged, and a combination of two centralised methods is suggested. In the future,
this method may be combined with other techniques to additionally detect the state of satisfaction of the
participant.
User-based features like emotions, opinions, and cognitive and conative effects are therefore
considered, and a method for the joint analysis of all the data gathered is proposed. [3] More automated system
testing could be instrumental in achieving these goals, and in recent years testing tools have been developed to
automate interaction with software systems at the GUI level. However, there is little knowledge about
the usability and applicability of these tools in an industrial setting. This study analyses two tools for
automated visual GUI testing on real-world, safety-critical software developed by the company Saab
AB.
The tools are compared based on their characteristics as well as how they support automation of system
test cases that have previously been performed manually. The time to develop and the size of the
automated test cases, as well as their execution times, are evaluated. [4] Usability testing is important
for software development companies to determine whether their products are usable or
unusable. It is equally important for the end-user companies running usability studies. This paper
presents the development of the
Usability Management System (USEMATE), an automated system offered as an alternative solution
to help usability testers or practitioners run usability testing more efficiently and effectively.
The main objective of USEMATE is to improve the present systems, which are paper-based and
require manual score calculation using Excel and manual reaction-time recording, by moving them into a web-based
management system. The tools used for the development comprise Adobe Photoshop CS2, Adobe
Dreamweaver CS3, Apache Web Server, and a personal computer (PC). The modules and
usability criteria included, and the approach used in the development of this automated system,
were replicated from a case study on usability testing of a webpage conducted earlier. USEMATE is
envisaged to minimize the lengthy working hours and effort needed to manage the usability
testing process from phase to phase. [5]
The use of traditional UT techniques is not sufficient or suitable given the growing
complexity of Web sites and the constraints faced by usability practitioners. For example, Lab-Based
Usability Testing (LBUT) is expensive and has lower coverage than Exploratory
Heuristics Evaluation (EHE), while EHE is subject to false alarms. A hybrid usability
methodology (HUM) comprising LBUT and EHE is offered. Six experiments involving
EHE and LBUT were performed at the early, intermediate, and late stages of the SDLC of
websites, in which the relative performance of each method was measured using
the dependent variables, followed by the design of a HUM. To validate the HUM, four case
studies were conducted, in which remarkable improvements were observed in website
effectiveness and efficiency. Based on the findings, HUM is a realistic approach for usability
practitioners and also provides stakeholders a validated situational decision-making
framework for usability testing strategies, taking real-world constraints into account. [6]
III. METHODOLOGY
Usability is a multidimensional concept that aims at the fulfillment of a certain set of goals,
mainly effectiveness, efficiency, and satisfaction; without these goals, usability cannot be achieved.
Effectiveness: refers to the accuracy and completeness of the user's goal achievement.
Efficiency: refers to the resources expended by users in order to ensure accurate and complete achievement
of the goals.
Satisfaction: refers to the subjective thoughts of the user regarding their attitude, level of comfort, relevance
of the application, and the acceptability of use.
A system or a product is completely dependent on its specific and distinct context of use: the nature of the
task, the users appointed to perform the task, and finally the equipment used to perform it.
Measuring the usability of a given system can be done through the measurement of the three goals using a
number of observable and quantifiable usability metrics.
In light of the three goals mentioned earlier, we will go through the different metrics used to measure each
goal; however, our main focus will be on effectiveness.
1. Success Rate (Completion Rate)
Effectiveness can be measured using two usability metrics: the success rate, also called the completion rate, and the number of errors. The success rate (or completion rate) is the
percentage of users who were able to successfully complete the tasks. Despite the fact that this metric
cannot provide insights into how the tasks were performed, or why users fail in case of failure,
it is still critical and at the core of usability. The success rate is one of the most commonly used metrics among
practitioners: 79% of them reported using the success rate as the first metric to consider, for its
ease of use during data collection and interpretation.
The success rate metric can be measured by assigning a binary value of 0 or 1 to each user: 1 is
assigned to those who successfully complete the task and 0 to those who fail to do so. Once the test is over
and you have all the data you need to calculate your success rate, the next step is to divide the total
number of correctly completed attempts by the total number of attempts and multiply by 100.
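In code, the calculation is a one-liner; a minimal sketch with illustrative data:

```python
def completion_rate(outcomes):
    """Binary success rate: outcomes holds one 0/1 result per user attempt."""
    return sum(outcomes) / len(outcomes) * 100

# 8 of 10 users completed the task -> 80.0
print(completion_rate([1, 1, 0, 1, 1, 1, 0, 1, 1, 1]))
```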
The completion rate is easy to measure and to collect, but with one major pitfall to consider: it happens frequently that a user stops
at some point during the task and fails to finish it, or finishes it but not in the expected way. Taking into
account that they have completed some steps of the task successfully, how would you, as the evaluator, score what they have
accomplished? I am going to dive a little into the details of how to score your users,
taking into account the various stages of their success or failure, using an example to illustrate.
Let's consider, for instance, that your user's task is to order a box of dark chocolates with a card for their
mother for Mother's Day. The scoring might seem simple at first glance, and you could easily say: if the mother
receives the box of dark chocolate with the card, then it is a case of success. On the other hand, if
the mother does not receive anything, then we can simply say that this is a case of failure.
However, it is not that straightforward; there are other considerations:
Ordered a box of chocolate, but not the dark one (white, milk, or an assortment), along with the card.
Ordered the right chocolate box without a gift card.
Ordered more than one box of chocolate by mistake, and a gift card.
Ordered a box of chocolate but did not add delivery information or an address.
Ordered a box of chocolates and a gift card successfully, but to the wrong address.
All these cases entail a percentage of success and failure in the process of fulfilling the task; their failure
is partial, as is their success, which simply means that as an evaluator you have to bring your own
personal judgment into the scoring.
If you decide that there are no middle grounds in the scoring, your success rate will be different
from the one obtained when you credit the effort users have made in spite of not completing the task you planned for them.
There is no fixed rule when it comes to scoring your users, and oftentimes success rates
become subjective, because different evaluators will not score the same way or estimate the same
percentage of failure or success for the above cases. However, in order to standardize the
method, you need to work out the important aspects of the task and what score you would allot to each part of
it.
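One way to make such scoring reproducible is to fix the weights before the test. A sketch, with step names and weights invented for the chocolate example above (they are not prescribed by any standard):

```python
# Evaluator-chosen weights for the parts of the task; they sum to 1.
TASK_STEPS = {
    "dark_chocolate_ordered": 0.4,
    "gift_card_added": 0.2,
    "correct_delivery_address": 0.4,
}

def partial_success(completed_steps):
    """Score one attempt as the total weight of the steps completed."""
    return sum(weight for step, weight in TASK_STEPS.items()
               if step in completed_steps)

# Right chocolate and card, but sent to the wrong address -> 0.6
print(partial_success({"dark_chocolate_ordered", "gift_card_added"}))
```

With a rule like this fixed in advance, two evaluators scoring the same session arrive at the same number.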
The success rate remains the simplest usability metric, and the easiest among the entire range of
usability signals: it is quick and straightforward, does not require much preparation or time to
gather, and most significantly it enables you to track progress within your system. It is one of the
measures commonly employed by marketers and designers to see the big picture of how
well their system is doing at the level of user experience. This does not change the fact, however, that it remains
subjective.
Help the designers and developers see that uncovering problems is not a sign of
failure. Nobody does a perfect job the first time. Users always surprise us. It is much better to find
out about the issues with a few users during a usability test than later, when the design is being
reviewed and is out in the marketplace. [7]
2. The Number of Errors
This metric provides an idea of the average number of times an error occurred per user when
performing a given task. These errors can be either slips, where the user accidentally types the wrong email
address or picks the wrong dates when making a reservation or booking a flight, or mistakes,
where the user clicks on an image that is not clickable or double-clicks a button or a link
intentionally. Normally, users of any interactive system make errors; roughly 2 out of every 3 users err,
and there is no such thing as a "perfect" system anyway. To help you measure and ensure
great diagnostic results, it is highly recommended to prepare a short description giving details about how to
score those errors and the severity of each error, to show how simple and intuitive your system is.
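A sketch of both the plain error count and a severity-weighted variant; the severity weights are an illustrative evaluator choice, not a standard:

```python
def average_errors(error_counts):
    """Average number of errors per user for one task."""
    return sum(error_counts) / len(error_counts)

# Evaluator-chosen severity weights: slips are minor, mistakes weigh more.
SEVERITY = {"slip": 1, "mistake": 3}

def weighted_errors(observations):
    """observations is a list of (error_type, count) pairs for one user."""
    return sum(SEVERITY[kind] * count for kind, count in observations)

print(average_errors([0, 2, 1, 0, 3]))                 # 1.2 errors per user
print(weighted_errors([("slip", 2), ("mistake", 1)]))  # 2*1 + 1*3 = 5
```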
3. Time-Based Efficiency
Also referred to as time on task, this metric measures the time spent by the user to complete
the task, i.e., the speed of work. This means there is a direct relationship between efficiency
and effectiveness, and we can say that efficiency is essentially the user's effectiveness divided by the time the user
spent.
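The paper does not spell the formula out, but a common formulation of time-based efficiency (the notation here is ours) is:

\[
\text{Time-Based Efficiency} = \frac{\sum_{j=1}^{R}\sum_{i=1}^{N} n_{ij}/t_{ij}}{N\,R}
\]

where \(N\) is the number of users, \(R\) the number of tasks, \(n_{ij} = 1\) if user \(i\) completed task \(j\) successfully (0 otherwise), and \(t_{ij}\) the time user \(i\) spent on task \(j\); the result is in goals per unit time.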
4. The Overall Relative Efficiency
This is measured as the time taken by the users who successfully completed the task in relation to the total time
taken by all users. Let's consider that we have 2 users, each of whom attempts the same two
tasks. The first user successfully completed task 1 yet failed to complete task 2, while the second user
failed to complete task 1 but completed task 2 successfully.
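A sketch computing both efficiency metrics from the same data. The completion flags mirror the two-user example above, while the times are invented for illustration:

```python
# trials[user] = list of (completed?, time_in_seconds) per task
trials = {
    "user1": [(1, 30), (0, 50)],  # succeeded on task 1, failed task 2
    "user2": [(0, 40), (1, 20)],  # failed task 1, succeeded on task 2
}

def time_based_efficiency(trials):
    """Average of n_ij / t_ij over all users and tasks (goals per second)."""
    rates = [done / t for attempts in trials.values() for done, t in attempts]
    return sum(rates) / len(rates)

def overall_relative_efficiency(trials):
    """Time spent on successful attempts as a percentage of all time spent."""
    flat = [pair for attempts in trials.values() for pair in attempts]
    successful_time = sum(t for done, t in flat if done)
    return successful_time / sum(t for _, t in flat) * 100

print(time_based_efficiency(trials))        # (1/30 + 0 + 0 + 1/20) / 4 ≈ 0.021
print(overall_relative_efficiency(trials))  # (30 + 20) / 140 * 100 ≈ 35.7
```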
5. Post-Task Satisfaction
Once your users have finished the task, and regardless of whether they completed it successfully or not, it is time
to hand them a questionnaire to get an idea of the difficulty of the task from the user's point of
view. Generally, these questionnaires consist of 5 questions, and the idea behind them is to give your users a space to judge the
usability of your system.
6. Task-Level Satisfaction
This metric helps investigate the overall impression of users confronted with the system. To measure
the level of satisfaction, you can use the smiley scale method, where the user is expected to choose one of
5 smileys as a reflection of their satisfaction or lack of it. The Word Method can also be used to measure
the user's level of satisfaction, by listing a series of positive and negative connotations highlighted in green
and red respectively.
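A small sketch of scoring both instruments; the smiley-to-score mapping and the word lists are illustrative choices, not prescribed scales:

```python
SMILEY_SCORE = {"very sad": 1, "sad": 2, "neutral": 3, "happy": 4, "very happy": 5}

def smiley_satisfaction(responses):
    """Average smiley-scale responses onto a 1-5 satisfaction score."""
    return sum(SMILEY_SCORE[r] for r in responses) / len(responses)

def word_method_positivity(chosen, positive_words):
    """Share of the connotation words a user picked that were positive (green)."""
    return sum(word in positive_words for word in chosen) / len(chosen)

print(smiley_satisfaction(["happy", "neutral", "very happy"]))       # 4.0
print(word_method_positivity({"clear", "slow"}, {"clear", "fast"}))  # 0.5
```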
In light of the conceptual framework we discussed earlier, the user experience is highly influenced by
everything that surrounds it. However, the tide might be turning on usability funding. I've recently worked on
several projects to establish formal usability metrics in several companies. As organizations increase their
usability investments, collecting actual measurements is a natural next step and provides the benefits already
listed in the introduction: tracking progress between releases, assessing your competitive position, making a
stop/go decision before launch, and creating bonus plans for design managers and higher-level executives.
How to Measure
It is easy to specify usability metrics, but hard to collect them. Typically, usability is measured relative to
users' performance on a given set of test tasks. The most basic measures are based on the definition of usability
as a quality metric:
success rate (whether users can perform the task at all),
the time a task requires,
the error rate, and
users' subjective satisfaction. [8]
It is also possible to collect more specific metrics, such as the percentage of time that users follow an
optimal navigation path or the number of times they need to backtrack. You can collect usability metrics for
both novice users and experienced users. Few websites have truly expert users, since people rarely spend
enough time on any given site to learn it in great detail. Given this, most websites benefit most from studying
novice users. Exceptions are sites like Yahoo and Amazon, which have highly committed and frequent users and
may benefit from studying expert users. Intranets, extranets, and weblications are similar to traditional software
design and can hopefully count on skilled users; studying experienced users is thus more important there than working
with the novice users who typically dominate public websites. With qualitative user testing, it is enough to test
3–5 users. After the fifth user tests, you have gained all the insight you are likely to get, and your best bet is to go
back to the drawing board and improve the design so that you can test it again. Testing more than five users wastes
resources, reducing the number of design iterations and compromising the final design quality.
Unfortunately, when you're collecting usability metrics, you must test with more than five users. In order to get a
fairly tight confidence interval on the results, I usually recommend testing 20 users for each design. Thus,
conducting quantitative usability studies is approximately four times as expensive as conducting qualitative ones.
Considering that you can learn more from the simpler studies, I usually recommend against metrics
unless the project is extremely well funded, except for the success rate (completion rate), which gives a general idea
about the performance of the system.
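To see why roughly 20 users are needed, consider the confidence interval around a completion rate. A sketch using the adjusted-Wald interval (our choice of interval, a common recommendation for small usability samples):

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """95% adjusted-Wald confidence interval for a completion rate."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

print(adjusted_wald_ci(4, 5))    # ~(0.36, 0.98): 5 users tell you very little
print(adjusted_wald_ci(16, 20))  # ~(0.58, 0.93): 20 users pin it down better
```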
IV. FIGURES AND TABLES
Comparing Two Designs
To illustrate quantitative results, we can look at those recently posted by Macromedia from its usability study of
a Flash site, aimed at showing that Flash is not necessarily bad. Basically, Macromedia took a design,
redesigned it according to a set of usability guidelines, and tested both versions with a group of users. Here are
the results:
Table no. 1: Task times and satisfaction scores for the original design and the redesign.

                      Original Design    Redesign
Task 1                12 sec.            6 sec.
Task 2                75 sec.            15 sec.
Task 3                9 sec.             8 sec.
Task 4                140 sec.           40 sec.
Satisfaction score*   44.75              74.50

*Measured on a scale ranging from 12 (unsatisfactory on all counts) to 84 (excellent on all counts).
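Since task times are ratios, one defensible single-number summary of Table 1 is the geometric mean of the per-task speedups (our summary choice, not Macromedia's):

```python
import math

original = [12, 75, 9, 140]   # seconds per task, original design
redesign = [6, 15, 8, 40]     # seconds per task, redesign

speedups = [o / r for o, r in zip(original, redesign)]
geo_mean = math.prod(speedups) ** (1 / len(speedups))
print(f"average speedup: {geo_mean:.1f}x")  # about 2.5x
```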
V. CONCLUSION
With usability metrics, it is possible to monitor and quantify the usability of any system, regardless of whether it is software,
hardware, web-based, or a mobile application. This is because the metrics presented here are based on extensive
research and testing by various academics and experts and have withstood the test of time.
Moreover, they cover all three core elements that constitute the definition of usability: effectiveness,
efficiency, and satisfaction, thus ensuring an all-round quantification of the usability of the system being tested.
Too often, usability gets side-tracked and becomes something to be addressed later. Tracking the usability of
your product with metrics allows you to have a clear understanding of the experience you are providing
to your users, and to improve it over time. Usability metrics are measured and aggregated into actionable results,
which allows you to act promptly on the data you record. That makes it painless to keep track of how your
design's usability progresses, detect issues, and improve your users' experience.
REFERENCES
Bevan, N. and Macleod, M. (1994). Usability measurement in context. Behaviour and Information Technology, 13: 132–145.
Ivory, M.Y. and Hearst, M.A. (2001). The state of the art in automating usability evaluation of user interfaces. ACM Computing Surveys, 33: 470–516.
Kirakowski, J. and Corbett, M. (1993). SUMI: The Software Usability Measurement Inventory. British Journal of Educational Technology, 24: 210–212.
Lin, H.X., Choong, Y.-Y., and Salvendy, G. (1997). A proposed index of usability: A method for comparing the relative usability of different software systems. Behaviour and Information Technology, 16: 267–277.
Macleod, M. (1994). Usability: Practical methods for testing and improvement. Proceedings of the Norwegian Computer Society Software Conference, Sandvika, Norway. Retrieved July 3, 2005 from http://www.usability.serco.com/papers/mm-us94.pdf.
Macleod, M. and Rengger, R. (1993). The development of DRUM: A software tool for video-assisted usability evaluation. Retrieved July 3, 2005 from http://www.usability.serco.com/papers/drum93.pdf.
Nielsen, J. (1993). Usability Engineering. London, UK: Academic Press.
Shackel, B. (1991). Usability—Context, framework, definition, design and evaluation. In B. Shackel and S. Richardson (Eds.), Human Factors for Informatics Usability. Cambridge: Cambridge University Press, pp. 21–38.
Landauer, T.K. (1995). The Trouble with Computers: Usefulness, Usability and Productivity. Cambridge, MA: MIT Press.
Mayhew, D.J. (1999). The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design. San Francisco: Morgan Kaufmann.
Holzinger, A. (2005). Usability engineering for software developers. Communications of the ACM, 48(1): 71–74.
Seffah, A. and Metzker, E. (2004). The obstacles and myths of usability and software engineering. Communications of the ACM, 47(12): 71–76.
Nielsen, J. (2012, January 4). Usability 101: Introduction to Usability. Nielsen Norman Group. Archived from the original on 1 September 2016; retrieved 7 August 2016.

More Related Content

PDF
THE USABILITY METRICS FOR USER EXPERIENCE
vivatechijri
 
PDF
Managing usability evaluation practices in agile development environments
IJECEIAES
 
PDF
Selecting A Development Approach For Competitive Advantage
mtoddne
 
PDF
Positive developments but challenges still ahead a survey study on ux profe...
Journal Papers
 
PDF
An interactive approach to requirements prioritization using quality factors
ijfcstjournal
 
PDF
A comparative studies of software quality model for the software product eval...
imdurgesh
 
PDF
Usability Evaluation Techniques for Agile Software Model
Saad, Ph.D (Health IT)
 
PDF
Process-Centred Functionality View of Software Configuration Management: A Co...
theijes
 
THE USABILITY METRICS FOR USER EXPERIENCE
vivatechijri
 
Managing usability evaluation practices in agile development environments
IJECEIAES
 
Selecting A Development Approach For Competitive Advantage
mtoddne
 
Positive developments but challenges still ahead a survey study on ux profe...
Journal Papers
 
An interactive approach to requirements prioritization using quality factors
ijfcstjournal
 
A comparative studies of software quality model for the software product eval...
imdurgesh
 
Usability Evaluation Techniques for Agile Software Model
Saad, Ph.D (Health IT)
 
Process-Centred Functionality View of Software Configuration Management: A Co...
theijes
 

What's hot (18)

PDF
The Impact of In-House Software Development Practices on System Usability in ...
IJMIT JOURNAL
 
PDF
AN APPROACH TO IMPROVEMENT THE USABILITY IN SOFTWARE PRODUCTS
ijseajournal
 
PDF
A study of various viewpoints and aspects software quality perspective
eSAT Journals
 
PDF
A METHOD FOR WEBSITE USABILITY EVALUATION: A COMPARATIVE ANALYSIS
IJwest
 
PDF
MAKE THE QUALITY OF SOFTWARE PRODUCT IN THE VIEW OF POOR PRACTICES BY USING S...
Journal For Research
 
PDF
A FRAMEWORK FOR INTEGRATING USABILITY PRACTICES INTO SMALL-SIZED SOFTWARE DEV...
ijseajournal
 
PDF
30 8948 prakash paper64 (edit ndit)
IAESIJEECS
 
PDF
ITERATIVE AND INCREMENTAL DEVELOPMENT ANALYSIS STUDY OF VOCATIONAL CAREER INF...
ijseajournal
 
PDF
Smart Sim Selector: A Software for Simulation Software Selection
CSCJournals
 
PDF
A Guideline Tool for Ongoing Product Evaluation in Small and Medium-Sized Ent...
IJECEIAES
 
PPT
Hci In The Software Process
ahmad bassiouny
 
PDF
http___www.irma-international.org_viewtitle_32970_
Abdul Hakeem
 
PDF
Performance Evaluation of Software Quality Model
Editor IJMTER
 
DOCX
Effectiveness of software product metrics for mobile application
tanveer ahmad
 
PDF
STRATEGIES TO REDUCE REWORK IN SOFTWARE DEVELOPMENT ON AN ORGANISATION IN MAU...
ijseajournal
 
PDF
An Elite Model for COTS Component Selection Process
IJEACS
 
DOCX
216328327 nilesh-and-teams-project
homeworkping8
 
The Impact of In-House Software Development Practices on System Usability in ...
IJMIT JOURNAL
 
AN APPROACH TO IMPROVEMENT THE USABILITY IN SOFTWARE PRODUCTS
ijseajournal
 
A study of various viewpoints and aspects software quality perspective
eSAT Journals
 
A METHOD FOR WEBSITE USABILITY EVALUATION: A COMPARATIVE ANALYSIS
IJwest
 
MAKE THE QUALITY OF SOFTWARE PRODUCT IN THE VIEW OF POOR PRACTICES BY USING S...
Journal For Research
 
A FRAMEWORK FOR INTEGRATING USABILITY PRACTICES INTO SMALL-SIZED SOFTWARE DEV...
ijseajournal
 
30 8948 prakash paper64 (edit ndit)
IAESIJEECS
 
ITERATIVE AND INCREMENTAL DEVELOPMENT ANALYSIS STUDY OF VOCATIONAL CAREER INF...
ijseajournal
 
Smart Sim Selector: A Software for Simulation Software Selection
CSCJournals
 
A Guideline Tool for Ongoing Product Evaluation in Small and Medium-Sized Ent...
IJECEIAES
 
Hci In The Software Process
ahmad bassiouny
 
http___www.irma-international.org_viewtitle_32970_
Abdul Hakeem
 
Performance Evaluation of Software Quality Model
Editor IJMTER
 
Effectiveness of software product metrics for mobile application
tanveer ahmad
 
STRATEGIES TO REDUCE REWORK IN SOFTWARE DEVELOPMENT ON AN ORGANISATION IN MAU...
ijseajournal
 
An Elite Model for COTS Component Selection Process
IJEACS
 
216328327 nilesh-and-teams-project
homeworkping8
 
Ad

Similar to 195 (20)

PPSX
Majestic MRSS Usability Engineering
Majestic MRSS
 
PPT
MMRSS Usability Engineering
MajesticMRSS
 
PDF
Some practical considerations and a
ijseajournal
 
PDF
Ijetr021224
ER Publication.org
 
PDF
The impact of user involvement in software development process
nooriasukmaningtyas
 
PDF
THE IMPACT OF IN-HOUSE SOFTWARE DEVELOPMENT PRACTICES ON SYSTEM USABILITY IN ...
IJMIT JOURNAL
 
PDF
The Impact of In-House Software Development Practices on System Usability in ...
IJMIT JOURNAL
 
PDF
Best Practices for Improving User Interface Design
sebastianku31
 
PDF
Best Practices for Improving User Interface Design
ijseajournal
 
PDF
BEST PRACTICES FOR IMPROVING USER INTERFACE DESIGN
ijseajournal
 
PDF
Ijcatr04051006
Editor IJCATR
 
PDF
Approaches and Challenges of Software Reusability: A Review of Research Liter...
IRJET Journal
 
PDF
User Experience Evaluation for Automation Tools: An Industrial Experience
IJCI JOURNAL
 
PDF
2012 in tech-usability_of_interfaces (1)
Mahesh Kate
 
PDF
DESQA a Software Quality Assurance Framework
IJERA Editor
 
PDF
Usability Testing - A Holistic Guide.pdf
kalichargn70th171
 
PDF
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...
Waqas Tariq
 
PDF
Factors Influencing the Efficacy of Agile Usage
Dr. Amarjeet Singh
 
PPTX
Hci in-the-software-process-1
Ali javed
 
PDF
AN IMPROVED REPOSITORY STRUCTURE TO IDENTIFY, SELECT AND INTEGRATE COMPONENTS...
ijseajournal
 
Majestic MRSS Usability Engineering
Majestic MRSS
 
MMRSS Usability Engineering
MajesticMRSS
 
Some practical considerations and a
ijseajournal
 
Ijetr021224
ER Publication.org
 
The impact of user involvement in software development process
nooriasukmaningtyas
 
THE IMPACT OF IN-HOUSE SOFTWARE DEVELOPMENT PRACTICES ON SYSTEM USABILITY IN ...
IJMIT JOURNAL
 
The Impact of In-House Software Development Practices on System Usability in ...
IJMIT JOURNAL
 
Best Practices for Improving User Interface Design
sebastianku31
 
Best Practices for Improving User Interface Design
ijseajournal
 
BEST PRACTICES FOR IMPROVING USER INTERFACE DESIGN
ijseajournal
 
Ijcatr04051006
Editor IJCATR
 
Approaches and Challenges of Software Reusability: A Review of Research Liter...
IRJET Journal
 
User Experience Evaluation for Automation Tools: An Industrial Experience
IJCI JOURNAL
 
2012 in tech-usability_of_interfaces (1)
Mahesh Kate
 
DESQA a Software Quality Assurance Framework
IJERA Editor
 
Usability Testing - A Holistic Guide.pdf
kalichargn70th171
 
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...
Waqas Tariq
 
Factors Influencing the Efficacy of Agile Usage
Dr. Amarjeet Singh
 
Hci in-the-software-process-1
Ali javed
 
AN IMPROVED REPOSITORY STRUCTURE TO IDENTIFY, SELECT AND INTEGRATE COMPONENTS...
ijseajournal
 
Ad

More from vivatechijri (20)

PDF
Design and Implementation of Water Garbage Cleaning Robot
vivatechijri
 
PDF
Software Development Using Python Language For Designing Of Servomotor
vivatechijri
 
PDF
GSM Based Controlling and Monitoring System of UPS Battery
vivatechijri
 
PDF
Electrical Drive Based Floor Cleaning Robot
vivatechijri
 
PDF
IoT BASED FIRE EXTINGUISHER SYSTEM with IOT
vivatechijri
 
PDF
Wave Energy Generation producing electricity in future
vivatechijri
 
PDF
Predictive Maintenance of Motor Using Machine Learning
vivatechijri
 
PDF
Development of an Android App For Designing Of Stepper Motor By Kodular Software
vivatechijri
 
PDF
Implementation Technology to Repair Pothole Using Waste Plastic
vivatechijri
 
PDF
NFC BASED VOTING SYSTEM with Electronic voting devices
vivatechijri
 
PDF
Review on Electrical Audit Management in MATLAB Software.
vivatechijri
 
PDF
DESIGN AND FABRICATION OF AUTOMATIC CEMENT PLASTERING MACHINE
vivatechijri
 
PDF
Research on Inspection Robot for Chemical Industry
vivatechijri
 
PDF
Digital Synchroscope using Arduino microcontroller
vivatechijri
 
PDF
BLDC MACHINE DESIGN SOFTWARE AND CALCULATION
vivatechijri
 
PDF
SIMULATION MODEL OF 3 PHASE TRANSMISSION LINE FAULT ANALYSIS
vivatechijri
 
PDF
Automated Water Supply and Theft Identification Using ESP32
vivatechijri
 
PDF
Multipurpose Swimming Pool Cleaning Device for Observation, Cleaning and Life...
vivatechijri
 
PDF
Annapurna – Waste Food Management system
vivatechijri
 
PDF
A One stop APP for Personal Data management with enhanced Security using Inte...
vivatechijri
 
Design and Implementation of Water Garbage Cleaning Robot
vivatechijri
 
Software Development Using Python Language For Designing Of Servomotor
vivatechijri
 
GSM Based Controlling and Monitoring System of UPS Battery
vivatechijri
 
Electrical Drive Based Floor Cleaning Robot
vivatechijri
 
IoT BASED FIRE EXTINGUISHER SYSTEM with IOT
vivatechijri
 
Wave Energy Generation producing electricity in future
vivatechijri
 
Predictive Maintenance of Motor Using Machine Learning
vivatechijri
 
Development of an Android App For Designing Of Stepper Motor By Kodular Software
vivatechijri
 
Implementation Technology to Repair Pothole Using Waste Plastic
vivatechijri
 
NFC BASED VOTING SYSTEM with Electronic voting devices
vivatechijri
 
Review on Electrical Audit Management in MATLAB Software.
vivatechijri
 
DESIGN AND FABRICATION OF AUTOMATIC CEMENT PLASTERING MACHINE
vivatechijri
 
Research on Inspection Robot for Chemical Industry
vivatechijri
 
Digital Synchroscope using Arduino microcontroller
vivatechijri
 
BLDC MACHINE DESIGN SOFTWARE AND CALCULATION
vivatechijri
 
SIMULATION MODEL OF 3 PHASE TRANSMISSION LINE FAULT ANALYSIS
vivatechijri
 
Automated Water Supply and Theft Identification Using ESP32
vivatechijri
 
Multipurpose Swimming Pool Cleaning Device for Observation, Cleaning and Life...
vivatechijri
 
Annapurna – Waste Food Management system
vivatechijri
 
A One stop APP for Personal Data management with enhanced Security using Inte...
vivatechijri
 

Recently uploaded (20)

PPTX
FUNDAMENTALS OF ELECTRIC VEHICLES UNIT-1
MikkiliSuresh
 
PDF
Unit I Part II.pdf : Security Fundamentals
Dr. Madhuri Jawale
 
PPTX
Tunnel Ventilation System in Kanpur Metro
220105053
 
PPTX
Information Retrieval and Extraction - Module 7
premSankar19
 
PDF
flutter Launcher Icons, Splash Screens & Fonts
Ahmed Mohamed
 
PDF
Introduction to Ship Engine Room Systems.pdf
Mahmoud Moghtaderi
 
PPTX
Civil Engineering Practices_BY Sh.JP Mishra 23.09.pptx
bineetmishra1990
 
PDF
Biodegradable Plastics: Innovations and Market Potential (www.kiu.ac.ug)
publication11
 
PDF
dse_final_merit_2025_26 gtgfffffcjjjuuyy
rushabhjain127
 
PDF
20ME702-Mechatronics-UNIT-1,UNIT-2,UNIT-3,UNIT-4,UNIT-5, 2025-2026
Mohanumar S
 
PDF
2025 Laurence Sigler - Advancing Decision Support. Content Management Ecommer...
Francisco Javier Mora Serrano
 
PDF
Software Testing Tools - names and explanation
shruti533256
 
PDF
Chad Ayach - A Versatile Aerospace Professional
Chad Ayach
 
PDF
Introduction to Data Science: data science process
ShivarkarSandip
 
PPTX
22PCOAM21 Session 2 Understanding Data Source.pptx
Guru Nanak Technical Institutions
 
PPTX
Inventory management chapter in automation and robotics.
atisht0104
 
PDF
Natural_Language_processing_Unit_I_notes.pdf
sanguleumeshit
 
DOCX
SAR - EEEfdfdsdasdsdasdasdasdasdasdasdasda.docx
Kanimozhi676285
 
PDF
EVS+PRESENTATIONS EVS+PRESENTATIONS like
saiyedaqib429
 
PDF
Traditional Exams vs Continuous Assessment in Boarding Schools.pdf
The Asian School
 
FUNDAMENTALS OF ELECTRIC VEHICLES UNIT-1
MikkiliSuresh
 
Unit I Part II.pdf : Security Fundamentals
Dr. Madhuri Jawale
 
Tunnel Ventilation System in Kanpur Metro
220105053
 
Information Retrieval and Extraction - Module 7
premSankar19
 
flutter Launcher Icons, Splash Screens & Fonts
Ahmed Mohamed
 
Introduction to Ship Engine Room Systems.pdf
Mahmoud Moghtaderi
 
Civil Engineering Practices_BY Sh.JP Mishra 23.09.pptx
bineetmishra1990
 
Biodegradable Plastics: Innovations and Market Potential (www.kiu.ac.ug)
publication11
 
dse_final_merit_2025_26 gtgfffffcjjjuuyy
rushabhjain127
 
20ME702-Mechatronics-UNIT-1,UNIT-2,UNIT-3,UNIT-4,UNIT-5, 2025-2026
Mohanumar S
 
2025 Laurence Sigler - Advancing Decision Support. Content Management Ecommer...
Francisco Javier Mora Serrano
 
Software Testing Tools - names and explanation
shruti533256
 
Chad Ayach - A Versatile Aerospace Professional
Chad Ayach
 
Introduction to Data Science: data science process
ShivarkarSandip
 
22PCOAM21 Session 2 Understanding Data Source.pptx
Guru Nanak Technical Institutions
 
Inventory management chapter in automation and robotics.
atisht0104
 
Natural_Language_processing_Unit_I_notes.pdf
sanguleumeshit
 
SAR - EEEfdfdsdasdsdasdasdasdasdasdasdasda.docx
Kanimozhi676285
 
EVS+PRESENTATIONS EVS+PRESENTATIONS like
saiyedaqib429
 
Traditional Exams vs Continuous Assessment in Boarding Schools.pdf
The Asian School
 

195

  • 1. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021) ISSN(Online): 2581-7280 VIVA Institute of Technology 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021) F-142 www.viva-technology.org/New/IJRI THE USABILITY METRICS FOR USER EXPERIENCE Prachi Desul1 , Prof.Chandani Patel2 1 (Department of MCA, Viva School Of MCA/ University of Mumbai, India) 2 (Department of MCA, Viva School Of MCA/ University of Mumbai, India) Abstract : The Usability Metric for User Experience (UMUX) is a four-item Likert scale used for the subjective assessment of an application’s perceived usability. It is designed to provide results similar to those obtained with the 10-item System Usability Scale, and is organized around the ISO 9241-11 definition of usability. A pilot version was assembled from candidate items, which was then tested alongside the System Usability Scale during usability testing. It was shown that the two scales correlate well, are reliable, and both align on one underlying usability factor. In addition, the Usability Metric for User Experience is compact enough to serve as a usability module in a broader user experience metric Keywords - metric,system usability scale,usability,user experience. I. INTRODUCTION Usability can be measured, but it is rarely Metrics are expensive and are a poor use of typically scarce usability resources. Although measuring usability can cost fourfold the maximum amount as conducting qualitative studies (which often generate better insight), metrics are sometimes well worth the expense. Among other things, metrics can help managers track design progress and support decisions about when to release a product. As organizations increase their usability investments, collecting actual measurements is a natural next step and does provide benefits. In general, usability metrics let you: Track progress between releases. You cannot fine-tune your methodology unless you recognize how well you're doing. Assess your competitive position. Are you better or worse than other companies? Where are you better or worse? Make a Stop/Go decision before launch. Is the design ok to release to an unsuspecting world? Create bonus plans for design managers and higher-level executives. For example, you'll determine bonus amounts for development project leaders supported what percentage customer-support calls or emails their products generated during the year. Usability may be a quality attribute that assesses how easy user interfaces are to use. The word "usability" also refers to methods for improving ease-of-use during the planning process. Usability is defined by 5 quality components: Learnability: How easy is it for users to accomplish basic tasks the primary time they encounter the design? Efficiency: Once users have learned the planning , how quickly can they perform tasks? Memorability: When users return to the planning after a period of not using it, how easily can they re-establish proficiency? Errors: what percentage errors do users make, how severe are these errors, and the way easily can they get over the errors? Satisfaction: How pleasant is it to use the planning There are many other important quality attributes. A key one is utility, which refers to the design's functionality: Does it do what users need? Usability and utility are equally important and together determine whether something is useful: It matters little that something is straightforward if it isn't what you would like
  • 2. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021) ISSN(Online): 2581-7280 VIVA Institute of Technology 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021) F-143 www.viva-technology.org/New/IJRI RELATED WORK Software development organizations contains marketing, design, project management, development and quality assurance team. it's important for the various teams within the organization to know the advantages and limitation of incorporating various usability testing methods within the software development life cycle. Some reasons for poor usability include effort prioritization conflicts from development, project management, and style team. The part played by the usability engineer is to urge involved because the heuristic judge and facilitate the event and style efforts are supported usability principles and at an equivalent time adhering to the project period of time . Two approaches for usability inspection methods consist of user experience testing and expert review or more commonly referred to as Heuristic Evaluation (HE). This paper focuses on understanding the strength of HE as a strategy for defect detection. The results also increase the need for integrating traditional heuristics with modified heuristics customized to the domain or field of the project being tested such as E-Government.[1] . Describes an innovative methodology developed for usability tests of the IEEE PCS internet site that combines heuristic evaluation and task-based testing. Tests conducted on the PCS Web site has evaluated whether the location facilitated members' ability to seek out information and participate in discussions, also as developers' are capable to seek out , contribute, and manage administrative information on the location . The distinctive social characteristics of Communities of Practice (CoPs) provide context for tailoring design heuristics for informational internet sites that serve the requirements and interests of CoP members. The discussion gives important on technical communication principles that apply not only to evaluating the effectiveness of the PCS internet site design but also to all or any centralised. f the PCS Web site design but also to all centralised technical communication products and media that increasingly demand user participation.[2] Here Proposes a usability testing method that alters a given usability testing method to form it less expensive and time consuming for the investigator. The usage of user-centred methods is stimulated and a mixture of two centralised methods suggested. Future this method is combined with other techniques to additionally detect the state of satisfaction within the participant. User based features like emotions, opinions, cognitive and conative effects are therefore are considered. a way for the joint analysis of all data gathered is proposed.[3] More automated system testing might be instrumental in achieving these goals and in recent years testing tools are developed to automate the interaction with software systems at the GUI level. However, there's absence knowledge on the usability and applicability of these tools in an industrial setting. This study analyses two tools for automated visual GUI testing on a real-world, safety-critical software is developed by the corporate Saab AB. The tools are compared supported their characteristics also as how they support automation of system test cases that have previously been presented manually. 
The time to develop and the size of the automated test cases also as their execution times are evaluated.[4] Usability testing is important to be performed by software development companies to determine whether their products are usable or unusable. it's equally important for the end- users companies running usability studies also . This paper represents the event of Usability Management System (USEMATE), an automatic system as an alternate solution to assist usability tester or practitioner to run usability testing more efficiently and effectively. The main objective of USEMATE is to enhance the present systems which are paper-based, require manual score calculation using excel and manual reaction time recording into a webbased management system. The tools used for the event compromise Adobe Photoshop CS2, Adobe Dreamweaver CS3, Apache Web Server, and a private computer (PC). The modules and usefulness criteria included and therefore the approach utilized in the event of this automated system were replicated from a case study on usability testing of a webpage conducted earlier. USEMATE is envisaged to be ready to minimize the lengthy working hour and energy needed to manage the usability testing process from phase to phase.[5] Usage of traditional UT techniques which aren't sufficient and suitable with the growing complexity of internet sites & constraints faced by usability practitioners. For a sample, the Lab Based Usability Testing (LBUT) is dear and has lesser coverage than Exploratory Heuristics Evaluation (EHE) while the EHE is subjected to false alarms. A hybrid usability methodology (HUM) comprising of LBUT and EHE is obtainable . Six experiments involving EHE and LBUT were performed at the first , in-between and future stages of the SDLC of websites, during which the simplest relative performance of every method were measured using the dependent variables followed by the planning of a HUM. To prove the HUM, four case
  • 3. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021) ISSN(Online): 2581-7280 VIVA Institute of Technology 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021) F-144 www.viva-technology.org/New/IJRI studies were conducted, during which remarkable improvements were observed in website effectiveness and efficiency. supported the findings, HUM may be a realistic approach for usability practitioners and also provides stakeholders a validated situational deciding framework for usability testing strategies taking under consideration world constraints.[6] II. METHODOLOGY usability may be a multidimensional concept that aims into the fulfillment of certain set of goals, mainly; effectiveness, efficiency and satisfaction” and without these goals, usability can't be achieved. Effectiveness: this term refers to the accuracy and completeness of the user goal achievement. Efficiency: refers to the resources exhausted by users so as to make sure an accurate and completed achievement of the goals. Satisfaction refers to the subjective thoughts of the user regarding their attitude, level of comfort, relevance of application and therefore the acceptability of use. A system or a product is completely hooked in to its specific and distinct context of use, the character of the task, the users appointed to require the task, and finally the equipment used to perform it. Measuring the usability of a certain system can be done through the measurement of the three goals using a number of observable and quantifiable usability metrics. In the light of the three goals mentioned earlier, we’ll go through the different metrics used to measure each goal, however, our main focus will be on the Effectiveness It can be measured through using two usability metrics: Success rate, called also completion rate and the number of errors .Success rate/ completion rate: is the percentage of users who were able to successfully complete the tasks. Despite the very fact that this metric remains unable to supply insights on how the tasks were performed or why users fail just in case of failure, they're still critical and are at the core of usability.The success rate is one of the most commonly used metric for most of practitioners, where 79% of them reported using the success rate as the first metric to think about for simple use and through data collection and interpretation. The success rate metric are often measured by assigning a binary value of 0 and 1 to the users; where 1 is assigned to those that successfully complete the task and 0 to the ones who fail to do so.”Once the test is over and you have all the data you need to calculate your success rate, the next step would be to divide the total number of correctly completed attempts by the total number of attempts multiplied by 100.The completion rate is easy to measure and to collect but with one major pitfall to consider; it happens frequently when a user stops at some point during the task and fails to end it or maybe finishes it but not in the expected way.Taking into account that they have completed some steps successfully in the task, how would you score what they have accomplished as an evaluator?I am getting to dive a touch bit into the small print on the way to score you users taking under consideration the various stages of their success or failure, using an example to illustrate. 
Let’s consider, for instance, that your user task is to order a box of dark chocolates with a card to their mother for mother’s day.The scoring might seem simple at first glance, and you'll easily say; if the mother receives the box of bittersweet chocolate with the cardboard then it's a case of success. On the opposite hand, if the mother doesn't receive anything then we will simply say, that this is often a case of failure. However, it’s not that straightforward , there are other considerations: Ordered a box of chocolate but not the dark one (white or milky or a spread of these) alongside card. Ordered the proper chocolate box without a present card Ordered quite one box of chocolate by mistake and a present card Ordered a box of chocolate but didn’t add delivery information or address Ordered a box of chocolates and gift card successfully but to the incorrect address All these cases entail a percentage of success and failure within the process of fulfilling the task, their failure is partial also as their success, which simply means that as an evaluator you would like to interact your own personal opinion within the scoring. If you decided that there are no middle grounds in the estimated scoring, your success rate would be different from that obtained when you appreciate the effort they have made in spite of the task you planned for them. The fact that there is not a steady rule when it comes to scoring your users, and oftentimes success rates become subjective; because different evaluators won’t have the same scoring and estimate an equivalent percentage of failure or success for the above cases, within the same way. However, so as to mainstream the method , you'd like to work out the important aspects of the task and what score you would allot each a part of it.
  • 4. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021) ISSN(Online): 2581-7280 VIVA Institute of Technology 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021) F-145 www.viva-technology.org/New/IJRI Success rate remains the only usability metric and therefore the easiest among the entire range of those usability signals, mainly because it’s quick and straightforward and doesn't require much preparation and time to gather and most significantly it enables you from tracking the progress within your system being one among the overall areas commonly employed by marketers and designers right along , to ascertain the large picture of how well their system is doing at the extent of user experience, this doesn't change the very fact , that it remains subjective. Help the designers and developers to ascertain that uncovering problems isn't a symbol of failure. nobody does an ideal job the initially time only. Users always surprise us. It's much better to seek out out about the issues with a some users during a usability test than later when the design is being reviewed and is out there within the marketplace. [7] 2. The Number of Errors This metric provides an idea about the average number of times where an error occurred per user when performing a given task.These errors can be either slips; where the user accidently types the incorrect email address or picks the incorrect dates when making a reservation or booking a flight, or they will be mistakes where the user clicks on an image that’s not clickable or even double clicks a button or a link intentionally.Normally any users of any interactive system may make errors, where 2 out of every 3 users err, and there's absolutely no such thing as a ‘’perfect’’ system anyway..To help you measure and ensure obtaining great diagnostic results, it is highly recommended to set a short description where you give details about how to score those errors and the severity of a certain of an error to show you how simple and intuitive your system is. 3. Time-Based Efficiency Or referred to as time on task, this metric helps in the measurement of the time spent by the user to complete the task or speed of work. This consequently means there's an immediate relationship between the efficiency and effectiveness, and that we can say, that efficiency is really the user effectiveness divided by the user time spent. 4. The Overall Relative Efficiency This is actually measured through users who successfully completed the task in relation to the total time taken by all users.Let’s consider that we have 2 users where each one of is supposed to complete a different task.The first user has successfully completed task (1) yet failed to complete task (2). While the second hand has did not complete task (1) but completed task (2) successfully. 5.Post Task Satisfaction Once your users have finished the task and it doesn’t matter whether complete it successfully or not, it’s time to hand them over a questionnaire to have an idea about the difficulty of the task from the users point of view.Generally, these tasks consist of 5 questions, and the idea behind them give your users a space to judge the usability of your system. 6. Task Level Satisfaction This metric helps into investigating the overall impression of users confronted with the system. 
To measure the level of satisfaction you can either use the smiley scale method where the user is expected to choose one of the 5 smileys as a reflection of their satisfaction or lack of satisfaction.The Word Method is also use to measure the user’s level of satisfaction through listing a series of positive and negative connotations highlighted in green and red respectively. In light of the conceptual framework we have discussed earlier, the user experience is highly influenced by everything that surrounds it.However, the tide might be turning on usability funding. I've recently worked on several projects to determine formal usability metrics in several companies. As organizations increase their usability investments, collecting actual measurements is a natural next step and does provide benefits. In general, usability metrics let you: Track progress between releases. You cannot fine-tune your methodology unless you recognize how well you're doing.Assess your competitive position. Are you better or worse than other companies? Where are you better or worse?Make a Stop/Go decision before launch. Is the design ok to release to an unsuspecting world? Create bonus plans for design managers and higher-level executives. For example, you'll determine bonus amounts for development project leaders supported what percentage customer-support calls or emails their products generated during the year. How to Measure
It is easy to specify usability metrics, but hard to collect them. Typically, usability is measured relative to users' performance on a given set of test tasks. The most basic measures are based on the definition of usability as a quality metric: success rate (whether users can perform the task at all), the time a task requires, the error rate, and users' subjective satisfaction. [8] It is also possible to collect more specific metrics, such as the percentage of time that users follow an optimal navigation path or the number of times they need to backtrack.

You can collect usability metrics for both novice users and experienced users. Few websites have truly expert users, since people rarely spend enough time on any given site to learn it in great detail. Given this, most websites benefit most from studying novice users. Exceptions are sites like Yahoo and Amazon, which have highly committed and frequent users and may benefit from studying expert users. Intranets, extranets, and weblications are more like traditional software design and can be expected to have skilled users; studying experienced users is thus more important there than working with the novice users who typically dominate public websites.

With qualitative user testing, it is enough to test 3-5 users. After the fifth user, you have gained most of the insight you are likely to get, and your best bet is to go back to the drawing board and improve the design so that you can test it again. Testing more than five users wastes resources, reducing the number of design iterations and compromising the final design quality. Unfortunately, when you are collecting usability metrics, you must test with more than five users. To get a fairly tight confidence interval on the results, I usually recommend testing 20 users for each design (a sketch of such an interval calculation appears after Table no. 1 below). Conducting quantitative usability studies is thus approximately four times as expensive as conducting qualitative ones. Considering that you can often learn more from the simpler studies, I usually recommend against metrics unless the project is very well funded. If you collect only one metric, make it the success rate or completion rate, because it gives a general idea of the performance of the system.

IV. FIGURES AND TABLES
Comparing Two Designs
To illustrate quantitative results, we can look at those recently posted by Macromedia from its usability study of a Flash site, aimed at showing that Flash is not necessarily bad. Basically, Macromedia took a design, redesigned it according to a set of usability guidelines, and tested both versions with a group of users. Here are the results:

Table no. 1: Task times and satisfaction for the original design and the redesign

                     Original Design   Redesign
Task 1               12 sec.           6 sec.
Task 2               75 sec.           15 sec.
Task 3               9 sec.            8 sec.
Task 4               140 sec.          40 sec.
Satisfaction score*  44.75             74.50

*Measured on a scale ranging from 12 (unsatisfactory on all counts) to 84 (excellent on all counts).
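As a small illustration of how the task times in Table no. 1 translate into an overall improvement, the sketch below computes per-task speed-ups and averages them with a geometric mean. Using the geometric mean is our choice here (it is the usual average for ratios), not something reported by Macromedia:

import math

# Task times from Table no. 1, in seconds.
original = [12, 75, 9, 140]
redesign = [6, 15, 8, 40]

# Per-task speed-up of the redesign over the original design.
speedups = [o / r for o, r in zip(original, redesign)]   # [2.0, 5.0, 1.125, 3.5]

# Geometric mean, the usual average for ratios.
geo_mean = math.prod(speedups) ** (1 / len(speedups))
print("Per-task speed-ups:", speedups)
print(f"Average speed-up (geometric mean): {geo_mean:.2f}x")   # about 2.50x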
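To see why around 20 users are needed for a reasonably tight interval, here is a sketch of a 95% adjusted-Wald (Agresti-Coull) confidence interval for a success rate. This is a standard choice for small-sample binomial data; the source does not say which interval it has in mind, and the counts below are hypothetical:

import math

def adjusted_wald_ci(successes, n, z=1.96):
    # 95% adjusted-Wald interval for a binomial proportion.
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# Hypothetical result: 15 of 20 users complete the task.
low, high = adjusted_wald_ci(15, 20)
print(f"Observed success rate 75%, 95% CI roughly {low:.0%} to {high:.0%}")
# Even with 20 users the interval still spans roughly 53% to 89%.

With only 5 users the same calculation gives a far wider interval, which is why qualitative insight, not metrics, is the sensible goal at that sample size.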
V. CONCLUSION
With usability metrics, it is possible to monitor and quantify the usability of any system, regardless of whether it is software, hardware, a website, or a mobile application. This is because the metrics presented here are based on extensive research and testing by various academics and experts and have withstood the test of time. Moreover, they cover all three core elements that constitute the definition of usability (effectiveness, efficiency, and satisfaction), thus ensuring an all-round quantification of the usability of the system being tested. Without measurement, usability gets side-tracked and becomes something to be addressed later on. Tracking the usability of your product with metrics allows you to have a clear understanding of the experience you are providing to your users and to improve it over time. Usability metrics are measured and aggregated into actionable results, which allows you to act promptly on the data you record. That makes it painless to keep track of how your design's usability progresses, to detect issues, and to improve your users' experience.

REFERENCES
Bevan, N. and Macleod, M. (1994). Usability measurement in context. Behaviour and Information Technology, 13: 132–145.
Ivory, M.Y. and Hearst, M.A. (2001). The state of the art in automating usability evaluation of user interfaces. ACM Computing Surveys, 33: 470–516.
Kirakowski, J. and Corbett, M. (1993). SUMI: The Software Usability Measurement Inventory. British Journal of Educational Technology, 24: 210–212.
Lin, H.X., Choong, Y.-Y., and Salvendy, G. (1997). A proposed index of usability: A method for comparing the relative usability of different software systems. Behaviour and Information Technology, 16: 267–277.
Macleod, M. (1994). Usability: Practical methods for testing and improvement. Proceedings of the Norwegian Computer Society Software Conference, Sandvika, Norway. Retrieved July 3, 2005 from http://www.usability.serco.com/papers/mm-us94.pdf.
Macleod, M. and Rengger, R. (1993). The development of DRUM: A software tool for video-assisted usability evaluation. Retrieved July 3, 2005 from http://www.usability.serco.com/papers/drum93.pdf.
Nielsen, J. (1993). Usability Engineering. London, UK: Academic Press.
Shackel, B. (1991). Usability—Context, framework, definition, design and evaluation. In B. Shackel and S. Richardson (Eds.), Human Factors for Informatics Usability. Cambridge: Cambridge University Press, pp. 21–38.
Landauer, T.K. (1995). The Trouble with Computers: Usefulness, Usability and Productivity. Cambridge, MA: MIT Press.
Mayhew, D.J. (1999). The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design. San Francisco: Morgan Kaufmann.
Holzinger, A. (2005). Usability engineering for software developers. Communications of the ACM, 48(1): 71–74.
Seffah, A. and Metzker, E. (2004). The obstacles and myths of usability and software engineering. Communications of the ACM, 47(12): 71–76.
Nielsen, J. (2012, January 4). Usability 101: Introduction to usability. Nielsen Norman Group. Archived from the original on 1 September 2016. Retrieved 7 August 2016.