Design for Strangers: Effective User Experience Design  When Your Users are on Another Continent Rashmi Sinha Jonathan Boutelle Uzanto Consulting
Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation   Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
Evaluating systems: Available data streams Different data streams yield different types of metrics Heuristic Evaluation Usability Testing Remote Usability Testing Server Logs or Transaction Logs Satisfaction Data Page Level Ratings GOMS
Heuristic Evaluation Using heuristics (or rules of thumb) for evaluating systems. Experts analyze the degree to which the system complies with the rules Heuristics such as Keep user informed of system status Speak the user’s language
Usability Tests Test with users Very useful for design purposes But software must be built before it can be tested Difficult to use to convince management Often conducted in artificial scenarios
Remote Usability Testing Advantages Large Sample Size Disadvantages Cost Most of the usual disadvantages of usability testing
Server and Transaction Logs Can give an accurate view of site activity Can give detailed view of site activity – possible to drill down Hard to relate to user experience and user goals Hard to understand – massive reams of data Often used by corporations to roughly track user experience
Satisfaction Ratings Give an overall view of the site Such ratings often have business buy-in Very difficult to move such numbers Might not relate to specific aspects of the site Make effort not to let the satisfaction levels fall
GOMS Can help track the complexity of an interface How much work it will take to complete a task Might not tell you what real users will do Very helpful in comparing interfaces Can be used with interfaces that have not been implemented yet
What Data Streams to Use What does it measure User Behavior (navigation paths, errors) or User Attitudes (user loyalty, satisfaction)?  Gap between reported and actual behavior.  Recommendation: Have at least one data stream of each. How comprehensive is the coverage? how much of the site is covered the frequency of measurement Sensitivity of measurement:  How sensitive is data stream to changes in the user experience
What Data Streams to Use continued Sampling Bias:  Every data stream comes with its own set of sampling biases. The  economics of measurement  will determine what types of data are practical to collect. Initial cost Ongoing cost Cost of increasing sample size
Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation   Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
Heuristic Evaluation Developed by Jakob Nielsen Helps find usability problems in a UI design Small set (3-5) of evaluators examine UI independently check for compliance with usability principles (“heuristics”) different evaluators will find different problems evaluators only communicate afterwards findings are then aggregated Can perform on working UI or on prototypes or designs
What are heuristics? Simple, easy rules of thumb for enhancing usability For example:  Have simple and  natural dialog Speak the users’ language
Heuristic Evaluation Process Evaluators go through UI several times inspect various dialogue elements compare with list of usability principles consider other principles/results that come to mind Usability principles Nielsen’s “heuristics” supplementary list of category-specific heuristics competitive analysis & user testing of existing products Use violations to redesign/fix problems From Jakob Nielsen
Heuristic 1: Visibility of system status The system should always keep users informed about what is going on, through appropriate feedback within reasonable time. searching database for matches
Visibility of system status (cont) Response Time parameters 0.1 sec: no special indicators needed, why?  1.0 sec: user tends to lose track of data  10 sec: max. duration if the user is to stay focused on the action  for longer delays, use percent-done progress bars
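A minimal sketch in Python (hypothetical helper, not part of the workshop materials) that turns the response-time thresholds above into a feedback-selection rule; the cut-offs are the 0.1 s / 1.0 s / 10 s values from this slide.

```python
def feedback_for(expected_seconds: float) -> str:
    """Pick a feedback style for an operation based on its expected duration."""
    if expected_seconds <= 0.1:
        return "none"                      # feels instantaneous; no indicator needed
    if expected_seconds <= 1.0:
        return "busy cursor"               # brief pause; flow of work is preserved
    if expected_seconds <= 10.0:
        return "spinner + status message"  # e.g. "searching database for matches"
    return "percent-done progress bar"     # long delays need progress (and ideally cancel)

if __name__ == "__main__":
    for t in (0.05, 0.5, 4, 45):
        print(f"{t:>5} s -> {feedback_for(t)}")
```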
Heuristic 2: Match between system &  real world The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
There should be a match between system & real world follow real world conventions Use User’s language, not developer’s language
Provide ways for users to backtrack when they make mistakes. Have clearly labeled exits allowing users to backtrack without an extended interaction. Support undo and redo. Heuristic 3: User Control and Freedom
User Freedom Heuristics (cont.) H2-3: User control & freedom “exits” for mistaken choices, undo, redo don’t force users down fixed paths Wizards: the user must respond to each question before going to the next step; good for beginners, but should be easy to skip; offer 2 versions (WinZip)
Use a consistent look and feel. Do not confuse users by changing platform conventions. Heuristic 4: Consistency and Standards
Consistency (cont.) Is this confusing?
Heuristic 5: Error Prevention Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Example:   If the user is asked to spell something, e.g. file names, it might be easier to give them a menu from which they can choose the files. Example: Modes When the same action leads to different consequences in different states. For example, older word processors had insert and edit modes; the same key press in the different modes would lead to different outcomes.
Heuristic 6: Recognition rather than recall Make objects, actions, and options visible.  The user should not have to remember information from one part of the dialogue to another.  Instructions for use of the system should be visible or easily retrievable whenever appropriate. Computers are good at remembering things; human beings are not. The computer should display dialog elements to the user, and have them make a choice. During web navigation, remind users where they currently are.
Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.  Heuristic 7: Flexibility & efficiency of use
Flexibility (cont.) accelerators for experts (e.g., gestures, keyboard shortcuts) allow users to tailor frequent actions (e.g., macros) Example: the Edit menu’s Cut / Copy / Paste commands OR the Ctrl-X / Ctrl-C / Ctrl-V shortcuts
Dialogues should not contain information which is irrelevant or rarely needed.  Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.  Heuristic 8: Aesthetic and minimalist design
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.  Heuristic 9: Help users recognize, diagnose, and recover from errors
Heuristic 10: Help and documentation  It is better if the system can be used without documentation, but it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Phases of Heuristic Evaluation Pre-evaluation training give evaluators needed domain knowledge and information on the scenario Evaluation individuals evaluate and then aggregate results Severity rating determine how severe each problem is (priority) can do this first individually and then as a group Debriefing discuss the outcome with design team
How to Perform Evaluation At least two passes for each evaluator first to get feel for flow and scope of system second to focus on specific elements If system is walk-up-and-use or evaluators are domain experts, no assistance needed otherwise might supply evaluators with scenarios Each evaluator produces list of problems explain why with reference to heuristic or other information be specific and list each problem separately
Examples Can’t copy info from one window to another violates “Minimize the users’ memory load” (H1-3) fix: allow copying Typography uses mix of upper/lower case formats and fonts violates “Consistency and standards” (H2-4) slows users down probably wouldn’t be found by user testing fix: pick a single format for entire interface
Severity Rating Used to allocate resources to fix problems  Estimates of need for more usability efforts Combination of frequency impact persistence (one time or repeating) Should be calculated after all evals. are in Should be done independently by all judges Severity Ratings 0 - don’t agree that this is a usability problem 1 - cosmetic problem  2 - minor usability problem 3 - major usability problem; important to fix 4 - usability catastrophe; imperative to fix
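To make the aggregation step concrete, here is a small Python sketch (illustrative problem names and ratings, not data from the workshop) that averages each judge's independent 0-4 severity ratings and sorts the problems so the worst surface first.

```python
from statistics import mean

# problem -> one 0-4 severity rating per judge (illustrative values)
ratings = {
    "No undo after deleting a record": [4, 3, 4],
    "Inconsistent button labels":      [2, 2, 3],
    "Error message full of jargon":    [3, 2, 2],
}

for problem, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{mean(scores):.1f}  {problem}  (judges: {scores})")
```

The averaged rating, rather than any single judge's score, is what gets reported back to the design team.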
Debriefing Conduct with evaluators, observers, and development team members Discuss general characteristics of UI Suggest potential improvements to address major usability problems Dev. team rates how hard things are to fix Make it a brainstorming session little criticism until end of session
Results of Using HE Single evaluator achieves poor results only finds 35% of usability problems 5 evaluators find ~ 75% of usability problems why not more evaluators???? 10? 20? adding evaluators costs more many evaluators won’t find many more problems
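The diminishing-returns curve behind these numbers is often modelled (after Nielsen and Landauer) by assuming each evaluator independently finds a fraction p of the problems, so k evaluators find about 1 - (1 - p)^k of them. A quick sketch using the 35% per-evaluator figure from this slide; the simple model is optimistic, since evaluators overlap and some problems are hard for everyone, which is why the empirical figure for 5 evaluators is closer to 75% than the ~88% the formula gives.

```python
p = 0.35  # fraction of problems a single evaluator finds (from the slide)
for k in (1, 2, 3, 5, 10, 20):
    coverage = 1 - (1 - p) ** k
    print(f"{k:2d} evaluators -> ~{coverage:.0%} of problems")
# Coverage climbs fast at first and then flattens: most of the benefit
# comes from the first 3-5 evaluators.
```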
Summary Heuristic evaluation is a discount method Have evaluators go through the UI twice Ask them to see if it complies with heuristics note where it doesn’t and say why Combine the findings from 3 to 5 evaluators Have evaluators independently rate severity Discuss problems with design team Alternate with user testing
Heuristic Evaluation Exercise Split into two groups Conduct Heuristic Evaluation as a group (create a list of heuristic violations) Each person within the group provides a severity rating for each heuristic violation (eliminate redundancies) Average severity for each group Present back to the larger group
Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation   Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
Overview of user testing Why do user testing? Choosing participants Designing the test Collecting data Analyzing the data
Why do User Testing? Can’t tell how good or bad UI is until people use it! Other methods are based on evaluators who? may know too much may not know enough (about tasks, etc.) Summary:  Hard to predict what real users will do
Choosing Participants Representative of eventual users in terms of job-specific vocabulary / knowledge tasks If you can’t get real users, get approximation system intended for doctors get medical students system intended for electrical engineers get engineering students Use incentives to get participants
Ethical Considerations Sometimes tests can be distressing users have left in tears users can be embarrassed by mistakes You have a responsibility to alleviate this make voluntary with informed consent avoid pressure to participate let them know they can stop at any time [Gomoll] stress that you are testing the system, not them make collected data as anonymous as possible
User Test Proposal A report that contains objective description of the system being tested task environment & materials participants methodology tasks test measures
Selecting Tasks Should reflect what real tasks will be like Tasks from analysis & design can be used may need to shorten if they take too long require background that test user won’t have Avoid bending tasks in direction of what your design best supports Don’t choose tasks that are too fragmented
Deciding on Data to Collect Two types of data process data observations of what users are doing & thinking bottom-line data summary of what happened (time, errors, success…) i.e., the dependent variables Focus on process data first gives good overview of where problems are Bottom-line doesn’t tell you where to fix just says: “too slow”, “too many errors”, etc. Hard to get reliable bottom-line results need many users for statistical significance (don’t bother unless needed)
The “Thinking Aloud” Method Need to know what users are thinking, not just what they are doing Ask users to talk while performing tasks tell us what they are thinking tell us what they are trying to do tell us questions that arise as they work tell us things they read Make a recording or take good notes make sure you can tell what they were doing
Thinking Aloud (cont.) Prompt the user to keep talking “tell me what you are thinking” Only help on things you have pre-decided keep track of anything you do give help on Recording use a digital watch/clock take notes, plus if possible record audio and video (or even event logs)
Using the Test Results Summarize the data make a list of all critical incidents (CI) positive: something they liked or worked well negative: difficulties with the UI include references back to original data try to judge why each difficulty occurred What does data tell you? UI work the way you thought it would? consistent with heuristic evaluation users take approaches you expected?
Using the Results (cont.) Update task analysis and rethink design  rate severity & ease of fixing CI’s fix both severe problems & make the easy fixes Will thinking aloud give the right answers? not always if you ask a question, people will always give an answer, even if it has nothing to do with the facts try to avoid specific questions
Measuring Bottom-Line Usability Situations in which numbers are useful time requirements for task completion successful task completion compare  two designs on speed or # of errors Do not combine with thinking-aloud talking can affect speed and accuracy (neg. & pos.) Time is easy to record Error or successful completion is harder define in advance what these mean
Analyzing the Numbers Example: trying to get task time <=30 min.  test gives: 20, 15, 40, 90, 10, 5 mean (average) = 30 median (middle) = 17.5 looks good!  wrong answer, not certain of anything Factors contributing to our uncertainty small number of test users (n = 6) results are very variable (standard deviation = 32) std. dev. measures dispersal from the mean
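A sketch of the arithmetic behind this slide: with six highly variable times, report the spread and a rough confidence interval for the mean, not just the mean itself (the t value below is the standard 95% figure for 5 degrees of freedom).

```python
from math import sqrt
from statistics import mean, median, stdev

times = [20, 15, 40, 90, 10, 5]        # task times in minutes, from the slide
n = len(times)
m, s = mean(times), stdev(times)
half_width = 2.571 * s / sqrt(n)        # 95% t-interval half-width, df = n - 1 = 5

print("mean   =", m)                    # 30.0
print("median =", median(times))        # 17.5
print("stdev  =", round(s, 1))          # ~31.8
print(f"95% CI for the true mean: {m - half_width:.0f} to {m + half_width:.0f} min")
```

The interval spans roughly -3 to 63 minutes, so this data cannot show that the 30-minute target is met.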
Measuring User Preference How much users like or dislike the system can ask them to rate on a scale of 1 to 10 or have them choose among statements “best UI I’ve ever…”, “better than average”… hard to be sure what data will mean novelty of UI, feelings, not realistic setting, etc. If many give you low ratings, you are in trouble Can get some useful data by asking what they liked, disliked, where they had trouble, best part, worst part, etc. (redundant questions)
User Testing: Cultural Issues Are users the same all over Obviously not Getting users that are as similar as possible to your real users is important Can you test on users from another country? Probably not for things that are culturally specific Entertainment marketing-ware Generic business software Yes for applications targeted at specialists with strong international work cultures Doctors Software engineers
Testing Details Order of tasks choose one simple order (simple -> complex) Training depends on how real system will be used What if someone doesn’t finish assign very large time & large # of errors Pilot study helps you fix problems with the study do twice, first with colleagues, then with real users
Instructions to Participants Describe the purpose of the evaluation “I’m testing the product; I’m not testing you” Tell them they can quit at any time Demonstrate the equipment Explain how to think aloud Explain that you will not provide help Describe the task give written instructions
Details (cont.) Keeping variability down recruit test users with similar background brief users to bring them to common level perform the test the same way every time don’t help some more than others (plan in advance) make instructions clear Debriefing test users often don’t remember, so show video segments ask for comments on specific features show them screen (online or on paper)
Summary User testing is important, but takes time & effort Early testing can be done on mock-ups (low-fi) Use real tasks & representative participants Be ethical & treat your participants well Want to know what people are doing & why i.e., collect process data Using bottom line data requires more users to get statistically reliable results
User Testing Exercise Divide into groups Each group devise a test plan 2 tasks, where to get users from, who to test Test someone from the other group Note findings
Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation   Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
GOMS Can help track the complexity of an interface How much work it will take to complete a task Might not tell you what real users will do Very helpful in comparing interfaces Can be used with interfaces that have not been implemented yet
GOMS Overview Goals, Operators, Methods, Selection Rules A way of measuring how much work it takes to do something using a given information system System doesn’t have to exist yet Many GOMS variants: most are  quite  complex and difficult to implement A simplified version of Keystroke-Level GOMS will be presented today
GOMS Keystroke Actions The actions K (click, keying): 0.2 seconds M (mentally preparing): 1.35 seconds P (pointing): 1.1 seconds H (homing, moving the hand between keyboard and pointing device): 0.4 seconds R (system responding): varies by system / action Very approximate estimates of time to do a task Unreliable for predicting how much time a task will take in absolute terms Thinking doesn’t always take 1.35 seconds Pointing time varies with size of target and distance from current location (Fitts’ law) Yet valid on a comparative basis if two designs / systems are analyzed using the same technique
EZ-GOMS Calculation Explicitly specify a task Typically many potential paths through a given design, optional fields etc.: get explicit Consider using ranges (minimum, maximum, typical) to get a better sense of best / worst case scenarios Calculate all the actions that will be taken to perform that task Add M (mental preparation) using these rules In front of all clicking In front of all pointing Remove “M”s using these rules (you’ll do this automatically after a little practice) Remove anticipated “M”s (M P M K -> M P K) Remove “M”s within cognitive units (“fred” -> M K M K M K M K -> M K K K K) Remove overlapping “M”s (adjacent to Rs) Remove “M”s before consecutive terminators Remove “M”s that are terminators of commands
EZ-GOMS Example H M P K H (select name text box) M K K K K K K (enter name) H M P K H (select password text box) M K K K K K K (enter password) H M P K (click “sign in” button) R (waiting for the server to respond)
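A minimal Python sketch of the EZ-GOMS arithmetic for the sign-in example above, using the approximate operator times from the keystroke-level slide; the 2-second server response is an assumed value, since R varies by system.

```python
OPERATOR_SECONDS = {"K": 0.2, "M": 1.35, "P": 1.1, "H": 0.4}

def goms_time(sequence: str, response_seconds: float = 0.0) -> float:
    """Sum operator times for a space-separated sequence like 'H M P K'."""
    total = 0.0
    for op in sequence.split():
        total += response_seconds if op == "R" else OPERATOR_SECONDS.get(op, 0.0)
    return total

sign_in = ("H M P K H "       # select name text box
           "M K K K K K K "   # enter name (6 keystrokes)
           "H M P K H "       # select password text box
           "M K K K K K K "   # enter password (6 keystrokes)
           "H M P K "         # click the "sign in" button
           "R")               # wait for the server to respond

print(round(goms_time(sign_in, response_seconds=2.0), 2), "seconds")  # ~17 s with a 2 s response
```

Running the same function over an alternative design gives the comparative estimate that GOMS is good at.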
Understanding User Needs Afternoon Session
Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation   Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
Problem with traditional user research methods Long sessions of observing users or interviewing them or participatory design.  Appropriate in face to face interaction situations. Methods work well in designing for easy to access audiences. Difficult to use for remote users.  Difficult to use when designing for global audiences. Also difficult to use such methods to make business case since numbers are small and data is qualitative. So what is the answer?
Semi-structured user research methods  Using mostly phone and online surveys Complementary with, rather than an alternative to, open-ended methods Can work for information-rich domains Help understand information representations in users’ minds, e.g. the design of navigation for a cell phone. Work well in remote situations
Two types of user research methods  Part 1: User information needs What user needs are important? Can users be differentiated into groups on the basis of such needs? Can this grouping be used to form personas? Part 2: User Categorizations Scope & boundaries of information domain Structure of information domain Differences between groups of people (different user groups, different cultures, stakeholders)
Part 1: Understanding user needs, creating scenarios & personas remotely Why persona-based design? One of the problems in design is that it is very hard to visualize an abstract “USER” and what he / she might want Develop one or two personas of the typical “user” from interviews with many users A persona is a made-up person, your so-called “typical user”. Should be based on your experiences with actual users in the interview stage. From Alan Cooper (Many potential users → one persona)
Persona based Design Process Persona:  The archetypical user Goals Goals of the persona in using the software Tasks Specific steps needed to accomplish goal. Scenario The usage scenario, the whole incident of software usage From Alan Cooper
Characteristics of Personas (from Cooper) “Hypothetical Archetypes” Archetype:  An original model after which other similar things are patterned; a prototype A precise description of a user and what they want to accomplish Imaginary, but  precise Specific, but stereotyped
Targeted Design with Personas Describe a person in terms of their  Goals  in life (especially relating to this project) Capabilities ,  inclinations , and  background People have a “visceral” ability to generalize about real and  fictional  people They won’t be 100% accurate, but it feels natural to think about people this way Why use personas? If you try to satisfy everyone, you end up satisfying no one. A compromise design pleases no-one From all your interviews etc., decide who your typical user / users are, create a specific persona, then try to please that persona 100% of the time.
Advantages of Personas Targeted Design Works Better Example: Roller suitcases Were designed specifically for airline employees, pilots, air hostesses etc. Have become popular with all classes of people In order to do good design you need to have a specific person in mind, and think in terms of that person every time a design decision needs to be made Puts an end to feature debates Makes hypothetical arguments less hypothetical Q: “What if the user wants to print this out?” Typical discussion: “The user will / will not want to print often.” With a persona: “Given her tasks, Emilee won’t want to print often.”
Case Study using Personas Primary Persona Joe, the executive Make him happy 100% of the time Secondary Persona Dan, the traveler Try to take care of his needs as well
Developing Personas cont. Joe: The busy traveling executive from a multinational company. He is on the road about 10 days a month. He is very  fond of food  but is afraid to explore in strange cities, and prefers restaurants which serve  good , but not exotic food. He is also fond of a  beer  with his meal. He does not like to  travel  far for food, prefers to walk or hop into a cab for a short ride
Developing Personas cont. Dan : Driving his car across the country after graduating. Gets to a different city every night and finds a hotel and a restaurant. He wants to  explore  the town, find the local hangouts, understand the town’s culture. He likes to try  different kinds of food . He prefers restaurants in the  middle  of town.
Goals and Tasks of Users Goals are larger functions that the user is hoping to satisfy  Get acquainted with the city, discover its special cuisine Not have to travel too much for food Relax after a hard day’s work / driving
Tasks of users Tasks are the specific steps that the user has to go through in order to accomplish his goals. Tasks include the usage of the software. Find information  about various restaurants Decide  on one based on factors such as price, cuisine, whether it serves alcohol, distance from location Get  to the restaurant Eat Pay  for meal
Development of Scenarios Primary Persona:  Joe, the executive Make him happy 100% of the time Scenario: Joe’s company has tied up with a Delhi IT company, and he is visiting Delhi for the first time. He is staying somewhere near South Ex. He needs to find a restaurant to eat at. He is not feeling adventurous, so no Dosa! Just some safe burger and fries. So Joe turns to his trusted Palm
Development of Scenarios Joe needs to input his location into his palm. Input what kind of food he wants or the program can use defaults The information returned:  list of possible restaurants along with their relevant details, kinds of food etc. More details about each on request:  details such as the availability of beer, if they take credit cards, links to reviews etc.
Development of Scenarios The information returned to Joe needs to be  broad  (offer a number of options) and  deep  (offer more details upon request) Location information is another concern of Joe’s. Ideally he wants exact distance & directions to the restaurant. Not possible; this is not a live website
Development of Scenarios What else does Joe need?   To mark restaurants that he liked. Let’s think more… Compromise : Tag restaurants in terms of neighborhoods.  Joe can give his current neighborhood. Can be shown a  map  with neighborhoods marked out & approximate distances.
Our secondary Persona Does this design make Dan happy? Designing for one specific user often makes other users happy as well.
Aspects of Scenarios Daily Use Fast to learn Shortcuts and customization after more use Necessary Use Infrequent but required Nothing fancy needed Edge Cases Ignore or save for version 2
Personas and Market Segmentation Uses of Market Segmentation Used to identify clusters of people the product can appeal to. Using demographics or attitudinal/psychological/psychographic variables. Questions focus on like / dislike of a product concept: what do you think of Vanilla Coke or green Heinz ketchup? Forecasts marketplace acceptance of products.  Helps convince executives to build the product.  Not helpful for defining and designing the product.
Reconciling personas and market segments Build personas on top of segments Ground the personas in reality. Define a persona for each main segment Focus on goals and behaviors of users. Advantages: Easy to get buy-in for personas from management, engineering etc.
Persona building method Method Conduct secondary research Examine existing market segments Conduct interviews with various stakeholders, including multiple users Conduct an online survey if users are remote. Find patterns. Pick a nugget or interesting tidbit and build the persona around it.
Conduct secondary research Examine existing market segments  What type of user population is product/site targeting How should you identify current segments? Easier for demographic segments More difficult for attitudinal segments What type of population characteristics are useful for design purposes? Example: Segments for Palm based restaurant finder
Stakeholder and user interviews Can be in person or on phone Semi-structured interviews:  Decide on few questions before-hand leaving room for change. Ask about scenarios of usage: e.g., last time they used product. Go through steps of usage, exact context, motivations etc.  Tape interview if possible or keep a phone log. Interview people from each user segment. Ask for a few ratings on a five-point scale.  Aggregate rating information for sake of comparison.
Online survey of user needs (optional) Important for remote users or if there are many types of users Example Conduct online survey on factors used in finding restaurants for travelers. Identified factors important in choosing restaurants. e.g., Food quality, décor, wine selection, cuisine, service. Ask for importance ratings (on 5-point scale) of factors. Tie response to behavior: Asked respondents to recall a specific incident of choosing a restaurant, rather than answer questions in an abstract fashion. Option: Ask about several scenarios of usage from same person. e.g., One restaurant visit with business colleagues, another with friends.
Personas Exercise Divide into groups Craft a primary and secondary persona for your product Think of all that you know about your users
Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation   Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
Understanding User categorizations Overview Why do people categorize? The structure of semantic memory Is understanding user categorization important for design? Methods Free-listing. Types of Card Sorting. Testing information architecture.
Is understanding categorization useful for design? Direct use: when user categorization informs design, such as that of menus or of navigation design. Often referred to as  information architecture (IA).  Indirect use: good to have broad understanding how users think about product even when user categorization does not directly inform IA. Important to remember:  Categorization is not static. People are good at learning new categories. If you provide the context and the right examples, they can learn new categories or alter boundaries of old categories.
Should interfaces always reflect user categories faithfully? No.  Categorization is far too important to depend only on what user thinks. Should also be influenced by business proposition, strategy, brand etc. Different user groups might differ in their perception of domain. No one scheme can serve them all perfectly.  User research can provide several alternative categorization schemes, allowing designers the freedom to make choices.
Do categorizations work across cultures? Research shows the structure of categories can be similar across cultures, though the content of categories might not be. Enough similarity for successful design.  The net generation shares a lot of culture Cross-cultural design has been happening anyway.  Japanese cars Italian fashion Swiss chocolates Indian ???
Free-listing methods for understanding scope and boundary of domain
Free-listing to explore domain scope and boundaries Goals Explore boundaries and scope of the domain across a group of people. Gain familiarity with user vocabulary for the domain. Use as a precursor to card-sorting, to define and limit the domain, and frame card items in the user’s language. Method Can be conducted as part of an interview, or as a written exercise  Ask the respondent, “Name all the x's you know.” Give sufficient time to do so.  How many respondents? Depends on how much agreement there is about the domain: more agreement → fewer respondents.
Free-listing menu for McDonald’s
User No 1: French fries, Cheeseburger, Shake, Hamburger, French fries, Chicken sandwich, Chicken McNuggets, Fish sandwich, Shake, Hamburger
User No 2: French fries, Chicken, Cheeseburger, Shake
User No 3: Hamburger, Cheeseburger, French fries, McRib, Chicken sandwich
User No 4: Chicken McNuggets, Cheeseburger, Bacon cheeseburger, French fries
User No 5: Hamburger, Quarter pounder, Big Mac, Chicken fajita, French fries, Apple pie
Analyzing free-listing data Create a list of all items, sorted by their average rank (of being listed by a respondent). Examine how that rank order changes with the addition of each new respondent. If the ranks are relatively stable, then you can stop adding new respondents.
Items and the percentage of participants who listed each: Cheeseburger 60%, Chicken McNuggets 70%, Chicken sandwich 40%, Fish sandwich 40%, French fries 100%, Shake 30%
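A sketch of this tally in Python, using abridged lists adapted from the McDonald's example above: for each item it computes the share of respondents who mentioned it and its average position in their lists, then sorts by salience.

```python
from collections import defaultdict

free_lists = [
    ["french fries", "cheeseburger", "shake", "hamburger"],
    ["french fries", "chicken", "cheeseburger", "shake"],
    ["hamburger", "cheeseburger", "french fries", "mcrib", "chicken sandwich"],
    ["chicken mcnuggets", "cheeseburger", "bacon cheeseburger", "french fries"],
    ["hamburger", "quarter pounder", "big mac", "french fries", "apple pie"],
]

mentions, positions = defaultdict(int), defaultdict(list)
for respondent in free_lists:
    for rank, item in enumerate(respondent, start=1):
        mentions[item] += 1
        positions[item].append(rank)

n = len(free_lists)
for item in sorted(mentions, key=lambda i: (-mentions[i], sum(positions[i]) / len(positions[i]))):
    pct = 100 * mentions[item] / n
    avg_rank = sum(positions[item]) / len(positions[item])
    print(f"{item:18s} listed by {pct:3.0f}%   average rank {avg_rank:.1f}")
```

Re-running the tally as each new respondent is added shows whether the rank order has stabilised.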
Concept structure Plot items according to frequency of mention. Divide items into 3 concentric circles (use your own break points): Core, Middle, Periphery.
Other uses for free-listing Comparing cultural or other group differences How do two groups perceive the same domain? Comparing two domains How does perception of McDonald’s menu compare with Wendy’s? Segment respondents into types based on familiarity: Find respondents with greater domain familiarity or those who perceive domain in idiosyncratic fashion?
Card-sorting and other methods for designing information architecture
Case Study: Design of online travel guide Example: Designing an online travel guide to help users plan trips. Purpose of card sort:  to structure the website for helping users find travel information, and create personalized travel guides. Items include  lodging, entertainment, local information, When to Go, Travel by Car/Air/Bus, Music Events, Hiking, Day Trips, Skiing, Diving, Golf, Emergency Info.
Open card-sorting Goal: to understand the overall categorization scheme Method: Open card sort Users given items. Asked to create categories  Options: Provide total number of categories to be created (avoid problems with splitters and lumpers) Successive card sorts to create taxonomies It is ok to put one card in multiple groups Ask for labels for each grouping
Cluster Analysis for card-sorting data Cluster Analysis Suggests a structural solution. Easy to translate into design. Challenge: How to reconcile multiple schemes? Hotels Bed and Breakfast Restaurants Hostels Emergency Info Currency Camping Hiking Day Trips Skiing Diving Surfing Mountain Climbing Biking
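For illustration, a hedged Python sketch (not necessarily the tool used in the workshop) of turning open card-sort data into a hierarchical clustering: items that more participants grouped together end up closer, and cutting the tree suggests candidate categories. The item names and the two sorts below are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

items = ["Hotels", "Hostels", "Bed and Breakfast", "Hiking", "Skiing", "Day Trips"]
sorts = [  # one dict per participant: item -> group label they created
    {"Hotels": "stay", "Hostels": "stay", "Bed and Breakfast": "stay",
     "Hiking": "do", "Skiing": "do", "Day Trips": "do"},
    {"Hotels": "sleep", "Hostels": "sleep", "Bed and Breakfast": "food?",
     "Hiking": "outdoors", "Skiing": "outdoors", "Day Trips": "excursions"},
]

n = len(items)
together = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            together[i, j] += sort[items[i]] == sort[items[j]]

distance = 1.0 - together / len(sorts)   # never grouped together -> distance 1.0
np.fill_diagonal(distance, 0.0)

tree = linkage(squareform(distance), method="average")
for item, cluster in zip(items, fcluster(tree, t=2, criterion="maxclust")):
    print(cluster, item)
```

Cutting at two clusters separates the lodging items from the activity items; scipy.cluster.hierarchy.dendrogram can draw the full tree for the design discussion.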
Closed card-sorting to design an IA Goal: to understand goodness of existing information architecture and labels Method: Closed card sort Users given items and category labels. Asked to place each item in a category. Do not allow creation of a miscellaneous category. Useful for:  Understanding user categorizations when category labels are a given Refining existing categorization scheme. Options: Allowing items to belong to multiple categories. Providing category descriptions rather than category labels.
Doing closed card-sorting online User works with given categories Each item (card) occupies a row Each category is represented by a column An “Other” category catches items that do not fit in
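A small Python sketch of summarising a closed sort as a placement matrix (illustrative categories and data): the percentage of participants who put each item into each given category, where strong agreement validates a label and scatter flags a problem item.

```python
from collections import Counter

categories = ["Lodging", "Activities", "Practical Info", "Other"]
placements = {  # item -> the category each participant chose for it
    "Hostels":        ["Lodging", "Lodging", "Lodging", "Lodging"],
    "Day Trips":      ["Activities", "Activities", "Other", "Activities"],
    "Emergency Info": ["Practical Info", "Practical Info", "Other", "Practical Info"],
}

for item, chosen in placements.items():
    counts, n = Counter(chosen), len(chosen)
    row = "   ".join(f"{cat}: {100 * counts[cat] / n:3.0f}%" for cat in categories)
    print(f"{item:15s} {row}")
```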
Comparing card-sorts for different user types Very useful for understanding differences in mental maps of various groups Can help understand differences between user groups, different cultures etc.  Try to create consensus maps to reconcile differences between different groups.
Practical exercise Using the RUMM (Rapid User Mental Modeling) method.
Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation   Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability  (time permitting)
Swimming with Sharks: The Business of Usability
What we’ll cover Stakeholder analysis for fun and profit Making a business case for a User Experience project Test out the ideas with a sample project
Stakeholder Analysis
Who are stakeholders and why should we analyze them? Stakeholder: Anyone who is affected by, or can affect, your project Goals of understanding stakeholders Make your design better, by getting important information about the business context Identify potential obstacles ahead of time so you can deal with them Change design to address the issues raised by stakeholders Marshal evidence to counter their objections Neutralize resistance by making stakeholders feel heard
Putting Stakeholders into context It does not matter how good the design is if it is not approved  by management and actually put into operation A given project isn’t necessarily in everybody’s best interest This isn’t about playing politics: this is about the institutional decision making process. People represent different organizations within an enterprise If a project is seen as a big negative by various organizations, it should either address the concerns raised or justify itself strongly in order to be approved Stakeholders as another class of users who design should satisfy A real person you can talk to Goals are typically very concrete and business-metrics oriented.
Understanding Who’s Who in an Organization Org charts don’t tell the whole story Detective work needed to sort out Motive Influence How to do? Indirect Watch for “Influence Tells” Direct “What are the organizational challenges?”
The Interview Ask semi-structured questions about the product in general What group of users is least well-served? What one change would impact profits the most? Where do you see <<product>> in 5 years? Find out what their conception of your project is What might happen if this project went well? What are some risks associated with this project?
Remote Interviews Online Survey Ask same questions as in face-to-face interview Limit to 5 minutes of work Phone Interviews Follow-up on survey answers: clarify answers, try to get a sense of their concerns Compared to face-to-face interview Less emotional connection Even more necessary (remoteness means you know even less about stakeholders and their concerns)
Recording your understanding    
Prioritizing Stakeholders High Influence / High Interest: Engage Low Influence / High Interest: Use as Information Source High Influence / Low Interest: Broadly Satisfy Low Influence / Low Interest: Avoid (Slide shows a 2x2 grid, Low/High Interest by Low/High Influence, with example stakeholders Andre, Chris, Sandeep, and Anu plotted on it)
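A minimal Python sketch of the prioritisation rule above; which quadrant each named stakeholder falls into is a hypothetical assignment, since the original grid is not reproduced here.

```python
STRATEGY = {
    ("high", "high"): "Engage",
    ("low",  "high"): "Use as information source",
    ("high", "low"):  "Broadly satisfy",
    ("low",  "low"):  "Avoid (minimum effort)",
}

stakeholders = {  # name -> (influence, interest); placements are illustrative
    "Andre": ("high", "high"),
    "Chris": ("low", "high"),
    "Sandeep": ("high", "low"),
    "Anu": ("low", "low"),
}

for name, (influence, interest) in stakeholders.items():
    print(f"{name:8s} -> {STRATEGY[(influence, interest)]}")
```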
An organizational dilemma Usability is often an Independent Business Unit (IBU) IBUs provide “accountability” and make measurement easier Engineering is responsible for paying for usability services Engineering is measured on the basis of: schedule, feature checklists, # bugs Marketing/Sales is measured on the basis of: sales Engineering invests in usability (money, time) but Marketing / Sales reap the benefits!  Solution: tie engineering compensation to usability metrics Good luck
Building a Business Case for Usability
ROI of Usability: Previous work Cost-Justifying Usability (Bias & Mayhew) Cost (employees, subjects, equipment) Benefit (task speed, user errors, late design changes, increased sales)  Internal vs. external Internal benefits increase with # users and frequency of use External benefits increase with development budget, large base of sales Usability Return on Investment (Nielsen Norman Group) “Usability projects have an ROI of 150%” Measured by  sales conversions Traffic / visitor count User performance / productivity
Myths of Usability ROI* Generalizing ROI estimates Assuming improvements are due to usability Benefits to customer booked as benefits to software company Support, training are  profit centers  in enterprise software! How does usability increase revenue?  Win/loss reports for enterprise software sales User research to determine buying reasons for shrink-wrap software registration / shopping cart behavior for ecommerce Ignores competitive landscape Being the “overall best choice” in your niche wins you the sale Usability may play a greater or lesser role in determining this Ignores potential  negative  business impact of changes that enhance usability Marketing vs. User Experience in ecommerce Ignoring opportunity costs “ Should the project be approved? Yes, because NPV is positive.” *Rosenberg, BayCHI 2003
Building a Business Case* Understand your business: the financial levers for the company, the competitive environment the company operates in Understand the project approval process Who has a say, what are the stages of project approval What metrics the enterprise cares about Understand threats and opportunities from a UX perspective User and stakeholder research Find areas where user and business interests are in tandem Try to frame UX projects such that Risk is low, payoff is high (it is all about risk) Chances of success are high Estimate ROI Estimate costs: development, negative revenue impact, opportunity cost Estimate benefit (be conservative) After the project Follow up: track successes and failures. Be accountable. *reference: Herman, J. CHI 2004
Key Points  Not every project will be justifiable ROI for some projects will be huge Ultimate proof is in “moving the needle” Different companies care about different “financial levers” (business metrics) Make your case on the basis of those numbers For example, # Registrations, % successful registrations, support calls per customer, average sale size Management doesn’t care about methodology Don’t justify methodology
Key Points (cont.) UX practitioners should understand business levers and incorporate them into design at a core level Post-hoc justification is not enough Project selection and design should be informed by business metrics Some UX practitioners should learn about business analysis Take a process oriented approach Evolve a process that takes into account the various interests and goals within an organization
Example Situations: ROI in an ecommerce Context Context: Online book seller is planning to improve the checkout process Metrics: Number of shopping cart bailouts Performance on usability test It is easy to justify ROI of shopping cart improvement since fewer bailouts means more sales.  Design should focus on reducing bailouts
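A back-of-the-envelope ROI sketch in Python for the checkout example; every number below is a hypothetical assumption, not a figure from the workshop, which is exactly the kind of conservative estimate the business case should make explicit.

```python
monthly_checkouts   = 50_000     # checkout attempts per month (assumed)
bailout_before      = 0.30       # current cart-abandonment rate at checkout (assumed)
bailout_after       = 0.27       # conservative post-redesign estimate (assumed)
average_order_value = 20.0       # dollars (assumed)
project_cost        = 60_000.0   # one-time design + development cost (assumed)

extra_orders  = monthly_checkouts * 12 * (bailout_before - bailout_after)
extra_revenue = extra_orders * average_order_value
roi = (extra_revenue - project_cost) / project_cost

print(f"Extra orders/year: {extra_orders:,.0f}")
print(f"Extra revenue/year: ${extra_revenue:,.0f}")
print(f"First-year ROI: {roi:.0%}")
```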
Example Situations: ROI in a Customer Service Context Context: A bank is planning two projects to reduce call volume: (a) let users look at their account balance, and (b) let users update their contact information.  Metrics  Call volume metrics (overall # of calls, per-task # of calls) # Online transactions (that plausibly replaced calls) Performance on usability test It is easier to justify the ROI of updating contact information than of looking at account balance Updating contact information plausibly replaces a phone call  Looking at account balance does NOT plausibly replace a phone call. Did they even care, or are they just browsing? Even if they did care, the benefit is more diffuse (customer convenience -> loyalty)
Crossing the Chasm Where in the technology adoption life cycle does usability matter? Innovators Early Adopters Early Majority Late Majority Laggards
Revised technology life-cycle (diagram): Innovators, Early Adopters, [the chasm], then the bowling alley, tornado, and main street phases across the Early Majority, Late Majority, and Laggards
ROI of UX in an Outsourcing context Software Services -> Software Products Product development requires understanding users on a deeper level Good times ahead? For Services It depends on the situation of your customer Your ROI of designing systems that satisfy your  customer  is huge (duh) But your customer is hardly ever the user So it depends on the business situation of your client What kind of clients would care about usability?
What kind of clients care about usability? Clients whose customers have low switching costs Money Time Expertise Clients where the buyer = the user Business success comes from making the buyer happy: if the buyer is the user, usability plays a bigger role Clients operating in a fiercely competitive landscape The better your competition is, the better you have to be to win a sale Usability is one dimension by which products can be better Clients making very high quality products Trying to cross the chasm? Four types of contexts Content Ecommerce Desktop Enterprise
What’s Next Where do we go from here? Can engineers do usability work on their own products? Are usability specialists needed? What kind of processes / corporate structures will facilitate usability work in software companies?
Thank you [email_address] [email_address] slides and other material will be posted at www.uzanto.com/papers/indiamar04

More Related Content

PDF
Smas Hits May 11, 2009 Sensex Down 193 Points On Profit Booking
PDF
UI / UX Engineering for Web Applications
PPTX
Design process design rules
PDF
What is usability
DOCX
HCI Part 6 - Prototype and Evaluation Plan
PPTX
Evaluation in hci
PPTX
Chapter five HCI
PPTX
Evaluation techniques in HCI
Smas Hits May 11, 2009 Sensex Down 193 Points On Profit Booking
UI / UX Engineering for Web Applications
Design process design rules
What is usability
HCI Part 6 - Prototype and Evaluation Plan
Evaluation in hci
Chapter five HCI
Evaluation techniques in HCI

What's hot (12)

PPT
HCI 3e - Ch 9: Evaluation techniques
PPT
HCI 3e - Ch 6: HCI in the software process
PPTX
Usability Evaluation
PPT
HCI 3e - Ch 11: User support
PPT
HCI 3e - Ch 7: Design rules
PDF
Usability_Evaluation
PPTX
hci in software development process
PDF
Introduction of software engineering
PPSX
Chapter3-evaluation techniques HCI
PPT
user support system in HCI
PPTX
Usability in product development
PPT
HCI 3e - Ch 8: Implementation support
HCI 3e - Ch 9: Evaluation techniques
HCI 3e - Ch 6: HCI in the software process
Usability Evaluation
HCI 3e - Ch 11: User support
HCI 3e - Ch 7: Design rules
Usability_Evaluation
hci in software development process
Introduction of software engineering
Chapter3-evaluation techniques HCI
user support system in HCI
Usability in product development
HCI 3e - Ch 8: Implementation support
Ad

Similar to Design For Strangers (20)

PDF
Ebay News 2006 7 19 Earnings
PPSX
heuristicevaluationprinciples-201015124852.ppsx
PPSX
Heuristic evaluation principles
PPTX
evaluation -human computer interaction.pptx
PDF
Heuristic ux-evaluation
PDF
Evaluating User Interfaces
PDF
Game design 2 (2013): Lecture 10 - Expert Evaluation Methods for Game UI
PPTX
Guerilla Human Computer Interaction and Customer Based Design
PDF
30 years of usability heuristics
PPTX
Lesson 6 - HCI Evaluation Techniques.pptx
PPT
Intranet Usability Testing
PPT
Game Design 2 - Lecture 8 - Expert Evaluation
PPTX
Unit 3_Evaluation Technique.pptx
KEY
Game Design 2: Expert Evaluation of User Interfaces
PPT
Comu346 lecture 6 - evaluation
PPTX
10 Usability Heuristics explained
PPTX
Intro to ux and how to design a thoughtful ui
PPTX
Heuristic Evaluation.pptx for information systems
PDF
citigroup January 19, 2007 - Fourth Quarter Financial Supplement
PPTX
hci Evaluation Techniques.pptx
Ebay News 2006 7 19 Earnings
heuristicevaluationprinciples-201015124852.ppsx
Heuristic evaluation principles
evaluation -human computer interaction.pptx
Heuristic ux-evaluation
Evaluating User Interfaces
Game design 2 (2013): Lecture 10 - Expert Evaluation Methods for Game UI
Guerilla Human Computer Interaction and Customer Based Design
30 years of usability heuristics
Lesson 6 - HCI Evaluation Techniques.pptx
Intranet Usability Testing
Game Design 2 - Lecture 8 - Expert Evaluation
Unit 3_Evaluation Technique.pptx
Game Design 2: Expert Evaluation of User Interfaces
Comu346 lecture 6 - evaluation
10 Usability Heuristics explained
Intro to ux and how to design a thoughtful ui
Heuristic Evaluation.pptx for information systems
citigroup January 19, 2007 - Fourth Quarter Financial Supplement
hci Evaluation Techniques.pptx
Ad

Recently uploaded (20)

PDF
Call cute girls 😀 Delhi, call now pls cute girls delhi call🔙
PPTX
Simple linear regression model an important topic in econometrics
PDF
Income processes in Poland: An analysis based on GRID data
PPTX
INDIAN FINANCIAL SYSTEM (Financial institutions, Financial Markets & Services)
PPTX
Rise of Globalization...................
PPT
1_Chapter_1_Introduction_to_Auditing.ppt
PPTX
Evolution of International Business.....
PDF
NewBase 22 August 2025 Energy News issue - 1818 by Khaled Al Awadi_compresse...
PDF
epic-retirement-criteria-for-funds (1).pdf
PDF
Pension Trustee Training (1).pdf From Salih Shah
PPT
CompanionAsset_9780128146378_Chapter04.ppt
PPTX
1. Set Theory - Academic AWellness 2024.pptx
PPTX
Machine Learning (ML) is a branch of Artificial Intelligence (AI)
PPT
Business Process Analysis and Quality Management (PMgt 771) with 2 Credit Housr
PPTX
Social Studies Subject for High School_ Ancient Greece & Greek Mytholoy.pptx
PDF
Lundin Gold - August 2025.pdf presentation
PDF
Very useful ppt for your banking assignments BANKING.pptx.pdf
PPT
Project_finance_introduction in finance.ppt
PDF
In July, the Business Activity Recovery Index Worsened Again - IER Survey
PPTX
Financial literacy among Collage students.pptx
Call cute girls 😀 Delhi, call now pls cute girls delhi call🔙
Simple linear regression model an important topic in econometrics
Income processes in Poland: An analysis based on GRID data
INDIAN FINANCIAL SYSTEM (Financial institutions, Financial Markets & Services)
Rise of Globalization...................
1_Chapter_1_Introduction_to_Auditing.ppt
Evolution of International Business.....
NewBase 22 August 2025 Energy News issue - 1818 by Khaled Al Awadi_compresse...
epic-retirement-criteria-for-funds (1).pdf
Pension Trustee Training (1).pdf From Salih Shah
CompanionAsset_9780128146378_Chapter04.ppt
1. Set Theory - Academic AWellness 2024.pptx
Machine Learning (ML) is a branch of Artificial Intelligence (AI)
Business Process Analysis and Quality Management (PMgt 771) with 2 Credit Housr
Social Studies Subject for High School_ Ancient Greece & Greek Mytholoy.pptx
Lundin Gold - August 2025.pdf presentation
Very useful ppt for your banking assignments BANKING.pptx.pdf
Project_finance_introduction in finance.ppt
In July, the Business Activity Recovery Index Worsened Again - IER Survey
Financial literacy among Collage students.pptx

Design For Strangers

  • 1. Design for Strangers: Effective User Experience Design  When Your Users are on Another Continent Rashmi Sinha Jonathan Boutelle Uzanto Consulting
  • 2. Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
  • 3. Evaluating systems: Available data streams Different data streams yield different types of metrics Heuristic Evaluation Usability Testing Remote Usability Testing Server Logs or Transaction Logs Satisfaction Data Page Level Ratings GOMS
  • 4. Heuristic Evaluation Using heuristics (or rules of thumb) for evaluating systems. Expert analyze degree to which system complies with rules Heuristics such as Keep user informed of system status Speak the user’s language
  • 5. Usability Tests Test with users Very useful for design purposes But software must be built before it can be tested Difficult to use to convince management Often conducted in artificial scenarios
  • 6. Remote Usability Testing Advantages Large Sample Size Disadvantages Cost Most of the usual disadvantages of usability testing
  • 7. Server and Transaction Logs Can give an accurate view of site activity Can give detailed view of site activity – possible to drill down Hard to relate to user experience and user goals Hard to understand – massive reams of data Often used by corporations to roughly track user experience
  • 8. Satisfaction Ratings Give an overall view of the site Such ratings often have business buy-in Very difficult to move such numbers Might not relate to specific aspects of the site Make effort not to let the satisfaction levels fall
  • 9. GOMS Can help track the complexity of an interface How much work it will take to complete a task Might not tell you what real users will do Very helpful in comparing interfaces Can be used with interfaces that have not been implemented yet
  • 10. What Data Streams to Use What does it measure User Behavior (navigation paths, errors) or User Attitudes (user loyalty, satisfaction)? Gap between reported and actual behavior. Recommendation: Have at least one data stream of each. How comprehensive is the coverage? how much of the site is covered the frequency of measurement Sensitivity of measurement: How sensitive is data stream to changes in the user experience
  • 11. What Data Streams to Use continued Sampling Bias: Every data stream comes with its own set of sampling biases. The economics of measurement will determine what types of data are practical to collect. Initial cost Ongoing cost Cost of increasing sample size
  • 12. Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
  • 13. Heuristic Evaluation Developed by Jakob Nielsen Helps find usability problems in a UI design Small set (3-5) of evaluators examine UI independently check for compliance with usability principles (“heuristics”) different evaluators will find different problems evaluators only communicate afterwards findings are then aggregated Can perform on working UI or on prototypes or designs
  • 14. What are heuristics? Simple easy rules of thumbs for enhancing usability For example: Have simple and natural dialog Speak the users’ language
  • 15. Heuristic Evaluation Process Evaluators go through UI several times inspect various dialogue elements compare with list of usability principles consider other principles/results that come to mind Usability principles Nielsen’s “heuristics” supplementary list of category-specific heuristics competitive analysis & user testing of existing products Use violations to redesign/fix problems From Jakob Neilsen
  • 16. Heuristic 1: Visibility of system status The system should always keep users informed about what is going on, through appropriate feedback within reasonable time. searching database for matches
  • 17. Visibility of system status (cont) Response Time parameters 0.1 sec: no special indicators needed, why? 1.0 sec: user tends to lose track of data 10 sec: max. duration if user to stay focused on action for longer delays, use percent-done progress bars
  • 18. Heuristic 2: Match between system & real world The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  • 19. There should be a match between system & real world follow real world conventions Use User’s language, not developer’s language
  • 20. Provide ways for users to backtrack when they make mistakes. Have clearly labeled exits allowing users to backtrack without an extended interaction. Support undo and redo. Heuristic 3: User Control and Freedom
  • 21. User Freedom Heuristics (cont.) H2-3: User control & freedom “exits” for mistaken choices, undo, redo don’t force down fixed paths Wizards must respond to Q before going to next Should be easy to good for beginners have 2 versions (WinZip)
  • 22. Use a consistent look and feel. Do not confuse users by changing platform conventions. Heuristic 4: Consistency and Standards
  • 23. Consistency (cont.) Is this confusing?
  • 24. Heuristic 5: Error Prevention Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Example: If user is asked to spell something, e.g. file names, it might be easier to give them a menu from which they can choose the files. Example: Modes When the same action leads to different consequences in different states. For example in older word processors, there was an insert and edit modes. The same key press in the different modes would lead to different outcomes.
  • 25. Heuristic 6: Recognition rather than recall Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate. Computers good at remembering things, human beings are not. Computer should display dialog elements to the user, and have them make a choice. During web navigation, remind users where they are currently.
  • 26. Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions. Heuristic 7: Flexibility & efficiency of use
  • 27. Flexibility (cont.) accelerators for experts (e.g., gestures, kb shortcuts) allow users to tailor frequent actions (e.g., macros) OR Ctrl-V Ctrl-C Ctrl-X Edit Cut Copy Paste
  • 28. Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility. Heuristic 8: Aesthetic and minimalist design
  • 29. Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution. Heuristic 9: Help users recognize, diagnose, and recover from errors
  • 30. Heuristic 10: Help and documentation It is better if the system can be used without documentation, but it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
• 31. Phases of Heuristic Evaluation Pre-evaluation training: give evaluators the needed domain knowledge and information on the scenario. Evaluation: individuals evaluate and then aggregate results. Severity rating: determine how severe each problem is (priority); can do this first individually and then as a group. Debriefing: discuss the outcome with the design team.
• 32. How to Perform Evaluation At least two passes for each evaluator: the first to get a feel for the flow and scope of the system, the second to focus on specific elements. If the system is walk-up-and-use or the evaluators are domain experts, no assistance is needed; otherwise you might supply evaluators with scenarios. Each evaluator produces a list of problems: explain why, with reference to a heuristic or other information; be specific and list each problem separately.
• 33. Examples Can’t copy info from one window to another: violates “Recognition rather than recall” (Heuristic 6); fix: allow copying. Typography uses a mix of upper/lower-case formats and fonts: violates “Consistency and standards” (Heuristic 4); slows users down; probably wouldn’t be found by user testing; fix: pick a single format for the entire interface.
• 34. Severity Rating Used to allocate resources to fix problems and to estimate whether more usability effort is needed. A combination of frequency, impact, and persistence (one time or repeating). Should be calculated after all evaluations are in, and done independently by all judges (a small aggregation sketch follows this slide). Severity ratings: 0 - don’t agree that this is a usability problem; 1 - cosmetic problem; 2 - minor usability problem; 3 - major usability problem, important to fix; 4 - usability catastrophe, imperative to fix.
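To turn the independent judgments into priorities, the per-problem ratings are typically averaged. A minimal sketch (the problems and scores here are illustrative, not from the slides):

```python
# Aggregate independent 0-4 severity ratings: each judge rates each problem,
# and the mean rating is used to order the fix list.
from statistics import mean

ratings = {  # hypothetical data: problem -> one rating per evaluator
    "no copy between windows": [3, 4, 3],
    "inconsistent typography": [2, 1, 2],
    "unlabeled exit button":   [4, 3, 4],
}

prioritized = sorted(
    ((mean(scores), problem) for problem, scores in ratings.items()),
    reverse=True,
)
for avg, problem in prioritized:
    print(f"{avg:.1f}  {problem}")
```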
• 35. Debriefing Conduct with evaluators, observers, and development team members. Discuss general characteristics of the UI and suggest potential improvements to address major usability problems. The development team rates how hard things are to fix. Make it a brainstorming session; hold criticism until the end of the session.
• 36. Results of Using HE A single evaluator achieves poor results, finding only about 35% of usability problems; five evaluators find roughly 75%. Why not more evaluators (10? 20?)? Adding evaluators costs more, and many more evaluators won’t find many more problems (see the sketch below).
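The cost/benefit argument is often illustrated with a simple independence model: if each evaluator finds a fraction p of the problems, n evaluators together find about 1 - (1 - p)^n of them. A quick sketch (p = 0.35 is an assumed value; the exact percentages depend on it, but the curve always flattens quickly):

```python
# Stylized model of coverage vs. number of evaluators: the share of problems
# found rises steeply for the first few evaluators, then levels off.
p = 0.35  # assumed per-evaluator detection rate
for n in (1, 2, 3, 5, 10, 20):
    found = 1 - (1 - p) ** n
    print(f"{n:2d} evaluators -> ~{found:.0%} of problems found")
# The flattening of this curve is the usual argument for stopping at 3-5
# evaluators: each additional evaluator costs more and adds little coverage.
```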
• 37. Summary Heuristic evaluation is a discount method. Have evaluators go through the UI twice; ask them to check whether it complies with the heuristics, note where it doesn’t, and say why. Combine the findings from 3 to 5 evaluators. Have evaluators independently rate severity. Discuss problems with the design team. Alternate with user testing.
• 38. Heuristic Evaluation Exercise Split into two groups. Conduct a heuristic evaluation as a group (create a list of heuristic violations, eliminating redundancies). Each person within the group provides a severity rating for each violation. Average the severities for each group. Present back to the larger group.
  • 39. Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
  • 40. Overview of user testing Why do user testing? Choosing participants Designing the test Collecting data Analyzing the data
• 41. Why do User Testing? Can’t tell how good or bad a UI is until people use it! Other methods are based on evaluators, who may know too much or may not know enough (about tasks, etc.). Summary: it is hard to predict what real users will do.
• 42. Choosing Participants Representative of eventual users in terms of job-specific vocabulary/knowledge and tasks. If you can’t get real users, get an approximation: for a system intended for doctors, get medical students; for a system intended for electrical engineers, get engineering students. Use incentives to get participants.
• 43. Ethical Considerations Sometimes tests can be distressing: users have left in tears, and users can be embarrassed by mistakes. You have a responsibility to alleviate this: make participation voluntary with informed consent, avoid pressure to participate, let them know they can stop at any time [Gomoll], stress that you are testing the system, not them, and make the collected data as anonymous as possible.
• 44. User Test Proposal A report that contains an objective description of the system being tested, the task environment & materials, participants, methodology, tasks, and test measures.
• 45. Selecting Tasks Should reflect what real tasks will be like. Tasks from analysis & design can be used; they may need shortening if they take too long or require background the test user won’t have. Avoid bending tasks in the direction of what your design best supports. Don’t choose tasks that are too fragmented.
• 46. Deciding on Data to Collect Two types of data: process data (observations of what users are doing & thinking) and bottom-line data (a summary of what happened: time, errors, success…, i.e., the dependent variables). Focus on process data first; it gives a good overview of where the problems are. Bottom-line data doesn’t tell you what to fix; it just says “too slow”, “too many errors”, etc. It is hard to get reliable bottom-line results: you need many users for statistical significance (don’t bother unless needed).
  • 47. The “Thinking Aloud” Method Need to know what users are thinking, not just what they are doing Ask users to talk while performing tasks tell us what they are thinking tell us what they are trying to do tell us questions that arise as they work tell us things they read Make a recording or take good notes make sure you can tell what they were doing
• 48. Thinking Aloud (cont.) Prompt the user to keep talking: “tell me what you are thinking”. Only help on things you have pre-decided, and keep track of anything you do give help on. Recording: use a digital watch/clock and take notes, plus, if possible, record audio and video (or even event logs).
• 49. Using the Test Results Summarize the data: make a list of all critical incidents (CIs), positive (something they liked or that worked well) and negative (difficulties with the UI); include references back to the original data; try to judge why each difficulty occurred. What does the data tell you? Did the UI work the way you thought it would? Is it consistent with the heuristic evaluation? Did users take the approaches you expected?
• 50. Using the Results (cont.) Update the task analysis and rethink the design: rate the severity & ease of fixing the CIs, then fix the severe problems and make the easy fixes. Will thinking aloud give the right answers? Not always: if you ask a question, people will always give an answer, even if it has nothing to do with the facts. Try to avoid specific questions.
• 51. Measuring Bottom-Line Usability Situations in which numbers are useful: time requirements for task completion, successful task completion, comparing two designs on speed or # of errors. Do not combine with thinking aloud: talking can affect speed and accuracy (negatively & positively). Time is easy to record; errors or successful completion are harder: define in advance what these mean.
• 52. Analyzing the Numbers Example: trying to get task time <= 30 min. The test gives: 20, 15, 40, 90, 10, 5. Mean (average) = 30; median (middle) = 17.5. Looks good! But that is the wrong conclusion: we are not certain of anything. Factors contributing to our uncertainty: a small number of test users (n = 6) and very variable results (standard deviation = 32; std. dev. measures dispersal from the mean). A small sketch of the calculation follows.
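A quick way to see the uncertainty is to put a confidence interval around the mean. A sketch (illustrative, using the numbers from the slide and a standard t-value for n = 6):

```python
# Compute mean, median, standard deviation and a rough 95% confidence
# interval for the mean task time, to show why n = 6 proves little.
from statistics import mean, median, stdev

times = [20, 15, 40, 90, 10, 5]   # task times in minutes (from the slide)
n = len(times)
m, sd = mean(times), stdev(times)
t_crit = 2.571                     # t value for a 95% CI with 5 degrees of freedom
half_width = t_crit * sd / n ** 0.5

print(f"mean={m:.1f}  median={median(times)}  sd={sd:.1f}")
print(f"95% CI for the mean: {m - half_width:.1f} to {m + half_width:.1f} minutes")
# The interval extends well past the 30-minute target, so this test alone
# cannot show that the requirement is met.
```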
• 53. Measuring User Preference How much users like or dislike the system: can ask them to rate it on a scale of 1 to 10, or have them choose among statements (“best UI I’ve ever…”, “better than average”…). Hard to be sure what the data will mean (novelty of the UI, feelings, not a realistic setting, etc.). If many give you low ratings, you are in trouble. Can get some useful data by asking what they liked, disliked, where they had trouble, best part, worst part, etc. (deliberately redundant questions help cross-check answers).
• 54. User Testing: Cultural Issues Are users the same all over? Obviously not. Getting users who are as similar as possible to your real users is important. Can you test on users from another country? Probably not for things that are culturally specific: entertainment, marketing-ware, generic business software. Yes for applications targeted at specialists with strong international work cultures: doctors, software engineers.
• 55. Testing Details Order of tasks: choose one simple order (simple -> complex). Training: depends on how the real system will be used. What if someone doesn’t finish? Assign a very large time & a large number of errors. Pilot study: helps you fix problems with the study; do it twice, first with colleagues, then with real users.
• 56. Instructions to Participants Describe the purpose of the evaluation: “I’m testing the product; I’m not testing you.” Tell them they can quit at any time. Demonstrate the equipment. Explain how to think aloud. Explain that you will not provide help. Describe the task and give written instructions.
• 57. Details (cont.) Keeping variability down: recruit test users with similar backgrounds, brief users to bring them to a common level, perform the test the same way every time, don’t help some more than others (plan in advance), and make the instructions clear. Debriefing: test users often don’t remember, so show video segments, ask for comments on specific features, and show them the screens (online or on paper).
• 58. Summary User testing is important, but takes time & effort. Early testing can be done on mock-ups (low-fi). Use real tasks & representative participants. Be ethical & treat your participants well. You want to know what people are doing & why, i.e., collect process data. Using bottom-line data requires more users to get statistically reliable results.
• 59. User Testing Exercise Divide into groups. Each group devises a test plan: 2 tasks, where to get users from, whom to test. Test someone from the other group. Note findings.
  • 60. Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
  • 61. GOMS Can help track the complexity of an interface How much work it will take to complete a task Might not tell you what real users will do Very helpful in comparing interfaces Can be used with interfaces that have not been implemented yet
• 62. GOMS Overview Goals, Operators, Methods, Selection rules. A way of measuring how much work it takes to do something using a given information system. The system doesn’t have to exist yet. Many GOMS variants: most are quite complex and difficult to implement. A simplified version of Keystroke-Level GOMS will be presented today.
• 63. GOMS Keystroke Actions The actions: K (click, keying): 0.2 seconds; M (mentally preparing): 1.35 seconds; P (pointing): 1.1 seconds; H (homing, moving the hand between keyboard and pointing device): 0.4 seconds; R (system responding): varies by system/action. These are very approximate estimates and are not reliable for predicting how much time a task will actually take: thinking doesn’t always take 1.35 seconds, and pointing time varies with the size of the target and its distance from the current location (Fitts’s law). Yet they are valid on a comparative basis if two designs/systems are analyzed using the same technique.
• 64. EZ-GOMS Calculation Explicitly specify a task: there are typically many potential paths through a given design, optional fields, etc., so get explicit. Consider using ranges (minimum, maximum, typical) to get a better sense of best/worst-case scenarios. List all the actions that will be taken to perform the task. Add M (mental preparation) using these rules: in front of all clicking, in front of all pointing. Then remove “M”s using these rules (you’ll do this automatically after a little practice): remove anticipated “M”s (M P M K -> M P K); remove “M”s within cognitive units (typing “fred”: M K M K M K M K -> M K K K K); remove “M”s that overlap with system response times (R); remove “M”s before consecutive terminators (e.g., “}}”); remove “M”s that are terminators of commands.
  • 65. EZ-GOMS Example H M P K H (select name text box) M K K K K K K (enter name) H M P K H (select password text box) M K K K K K K (enter password) H M P K (click “sign in” button) R (waiting for the server to respond)
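Totalling the example is mechanical once the operator times from the keystroke-actions slide are fixed. A minimal sketch (my own illustration; R is an assumed 2-second placeholder, since system response time varies):

```python
# Sum keystroke-level operators for the sign-in example on the slide above.
OPERATOR_SECONDS = {"K": 0.2, "M": 1.35, "P": 1.1, "H": 0.4, "R": 2.0}

sign_in = (
    "H M P K H "        # select name text box
    "M K K K K K K "    # enter name (6 keystrokes)
    "H M P K H "        # select password text box
    "M K K K K K K "    # enter password (6 keystrokes)
    "H M P K "          # click the sign-in button
    "R"                 # wait for the server to respond
).split()

total = sum(OPERATOR_SECONDS[op] for op in sign_in)
print(f"{len(sign_in)} operators, estimated {total:.1f} seconds")
```

Running the same tally on an alternative design (say, one with fewer homing and pointing operators) gives the comparative estimate the slides describe, even though neither absolute number should be trusted.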
  • 66. Understanding User Needs Afternoon Session
  • 67. Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
• 68. Problem with traditional user research methods Long sessions of observing users, interviewing them, or participatory design. Appropriate in face-to-face interaction situations; such methods work well when designing for easy-to-access audiences. Difficult to use for remote users and when designing for global audiences. Also difficult to use to make a business case, since the numbers are small and the data is qualitative. So what is the answer?
• 69. Semi-structured user research methods Use mostly phone and online surveys. Complementary to, rather than an alternative to, open-ended methods. Can work for information-rich domains: they help reveal the information representations in users’ minds, e.g., when designing navigation for a cell phone. Work well in remote situations.
  • 70. Two types of user research methods Part 1: User information needs What user needs are important? Can users be differentiated into groups on the basis of such needs? Can this grouping be used to form personas? Part 2: User Categorizations Scope & boundaries of information domain Structure of information domain Differences between groups of people (different user groups, different cultures, stakeholders)
• 71. Part 1: Understanding user needs, creating scenarios & personas remotely Why persona-based design? One of the problems in design is that it is very hard to visualize an abstract “USER” and what he/she might want. Develop one or two personas of the typical user from interviews with many users. A persona is a made-up person, your so-called “typical user”; it should be based on your experiences with actual users in the interview stage. (From Alan Cooper.) Many potential users -> one persona.
• 72. Persona-based Design Process (from Alan Cooper) Persona: the archetypal user. Goals: the goals of the persona in using the software. Tasks: the specific steps needed to accomplish a goal. Scenario: the usage scenario, the whole incident of software usage.
  • 73. Characteristics of Personas (from Cooper) “Hypothetical Archetypes” Archetype: An original model after which other similar things are patterned; a prototype A precise description of a user and what they want to accomplish Imaginary, but precise Specific, but stereotyped
• 74. Targeted Design with Personas Describe a person in terms of their goals in life (especially relating to this project) and their capabilities, inclinations, and background. People have a “visceral” ability to generalize about real and fictional people; they won’t be 100% accurate, but it feels natural to think about people this way. Why use personas? If you try to satisfy everyone, you end up satisfying no one: a compromise design pleases no one. From all your interviews etc., decide who your typical user or users are, create a specific persona, then try to please that persona 100% of the time.
• 75. Advantages of Personas Targeted design works better. Example: roller suitcases were designed specifically for airline employees (pilots, air hostesses, etc.) and have since become popular with all kinds of people. In order to do good design you need to have a specific person in mind, and think in terms of that person every time a design decision needs to be made. Puts an end to feature debates and makes hypothetical arguments less hypothetical. Q: “What if the user wants to print this out?” Typical discussion: “The user will / will not want to print often.” With a persona: “Given her tasks, Emilee won’t want to print often.”
  • 76. Case Study using Personas Primary Persona Joe, the executive Make him happy 100% of the time Secondary Persona Dan, the traveler Try to take care of his needs as well
• 77. Developing Personas cont. Joe: the busy traveling executive from a multinational company. He is on the road about 10 days a month. He is very fond of food but is afraid to explore in strange cities, and prefers restaurants which serve good, but not exotic, food. He is also fond of a beer with his meal. He does not like to travel far for food and prefers to walk or hop into a cab for a short ride.
• 78. Developing Personas cont. Dan: driving his car across the country after graduating. He gets to a different city every night and finds a hotel and a restaurant. He wants to explore the town, find the local hangouts, and understand the town’s culture. He likes to try different kinds of food. He prefers restaurants in the middle of town.
• 79. Goals and Tasks of Users Goals are the larger functions that the user is hoping to satisfy: get acquainted with the city and discover its special cuisine; not have to travel too much for food; relax after a hard day’s work / driving.
• 80. Tasks of users Tasks are the specific steps that the user has to go through in order to accomplish his goals. Tasks include the usage of the software: find information about various restaurants; decide on one based on factors such as price, cuisine, whether it serves alcohol, and distance from his location; get to the restaurant; eat; pay for the meal.
• 81. Development of Scenarios Primary persona: Joe, the executive. Make him happy 100% of the time. Scenario: Joe’s company has tied up with a Delhi IT company, and he is visiting Delhi for the first time. He is staying somewhere near South Ex. He needs to find a restaurant to eat at. He is not feeling adventurous, so no dosa! Just some safe burger and fries. So Joe turns to his trusted Palm.
• 82. Development of Scenarios Joe needs to input his location into his Palm and input what kind of food he wants (or the program can use defaults). The information returned: a list of possible restaurants along with their relevant details, kinds of food, etc. More details about each on request: the availability of beer, whether they take credit cards, links to reviews, etc.
• 83. Development of Scenarios The information returned to Joe needs to be broad (offer a number of options) and deep (offer more details upon request). Location information is another concern of Joe’s: ideally he wants the exact distance & directions to the restaurant, but that is not possible since this is not a live website.
• 84. Development of Scenarios What else does Joe need? To mark restaurants that he liked. Let’s think more… Compromise: tag restaurants by neighborhood. Joe can give his current neighborhood and can be shown a map with the neighborhoods marked out & approximate distances.
  • 85. Our secondary Persona Does this design make Dan happy? Designing for one specific user often makes other users happy as well.
• 86. Aspects of Scenarios Daily use: fast to learn; shortcuts and customization after more use. Necessary use: infrequent but required; nothing fancy needed. Edge cases: ignore or save for version 2.
• 87. Personas and Market Segmentation Uses of market segmentation: identify clusters of people the product can appeal to, using demographics or attitudinal/psychological/psychographic variables. Questions focus on liking or disliking a product concept (what do you think of vanilla Coke or green Heinz ketchup?). Forecasts marketplace acceptance of products and helps convince executives to build the product. Not helpful for defining and designing the product.
• 88. Reconciling personas and market segments Build personas on top of segments to ground the personas in reality: define a persona for each main segment, focusing on the goals and behaviors of users. Advantage: easy to get buy-in for personas from management, engineering, etc.
• 89. Persona building method Method: Conduct secondary research. Examine existing market segments. Conduct interviews with various stakeholders, including multiple users. Conduct an online survey if users are remote. Find patterns. Pick a nugget or interesting tidbit and build the persona around it.
• 90. Conduct secondary research Examine existing market segments. What type of user population is the product/site targeting? How should you identify current segments? Easier for demographic segments, more difficult for attitudinal segments. What type of population characteristics are useful for design purposes? Example: segments for the Palm-based restaurant finder.
• 91. Stakeholder and user interviews Can be in person or on the phone. Semi-structured interviews: decide on a few questions beforehand, leaving room for change. Ask about scenarios of usage, e.g., the last time they used the product; go through the steps of usage, the exact context, motivations, etc. Tape the interview if possible, or keep a phone log. Interview people from each user segment. Ask for a few ratings on a five-point scale and aggregate the rating information for the sake of comparison.
  • 92. Online survey of user needs (optional) Important for remote users or if there are many types of users Example Conduct online survey on factors used in finding restaurants for travelers. Identified factors important in choosing restaurants. e.g., Food quality, décor, wine selection, cuisine, service. Ask for importance ratings (on 5-point scale) of factors. Tie response to behavior: Asked respondents to recall a specific incident of choosing a restaurant, rather than answer questions in an abstract fashion. Option: Ask about several scenarios of usage from same person. e.g., One restaurant visit with business colleagues, another with friends.
  • 93. Personas Exercise Divide into groups Craft a primary and secondary persona for your product Think of all that you know about your users
  • 94. Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
• 95. Understanding User Categorizations Overview Why do people categorize? The structure of semantic memory. Is understanding user categorization important for design? Methods: free-listing, types of card sorting, testing information architecture.
• 96. Is understanding categorization useful for design? Direct use: when user categorization informs design, such as the design of menus or navigation, often referred to as information architecture (IA). Indirect use: it is good to have a broad understanding of how users think about the product even when user categorization does not directly inform the IA. Important to remember: categorization is not static. People are good at learning new categories; if you provide the context and the right examples, they can learn new categories or alter the boundaries of old ones.
• 97. Should interfaces always reflect user categories faithfully? No. Categorization is far too important to depend only on what the user thinks; it should also be influenced by the business proposition, strategy, brand, etc. Different user groups might differ in their perception of the domain, and no one scheme can serve them all perfectly. User research can provide several alternative categorization schemes, allowing designers the freedom to make choices.
• 98. Do categorizations work across cultures? Research shows the structure of categories can be similar across cultures, though the content of categories might not be; there is enough similarity for successful design. The net generation shares a lot of culture, and cross-cultural design has been happening anyway: Japanese cars, Italian fashion, Swiss chocolates, Indian ???
  • 99. Free-listing methods for understanding scope and boundary of domain
• 100. Free-listing to explore domain scope and boundaries Goals: explore the boundaries and scope of the domain across a group of people; gain familiarity with user vocabulary for the domain; use as a precursor to card sorting, to define and limit the domain and to frame card items in the user’s language. Method: can be conducted as part of an interview or as a written exercise. Ask the respondent, “Name all the x's you know,” and give them sufficient time to do so. How many respondents? It depends on how much agreement there is about the domain: more agreement -> fewer respondents needed.
• 101. Free-listing menu for McDonald’s (example data) User 1: French fries, Cheese burger, Shake, Hamburger, French fries, Chicken sandwich, Chicken McNuggets, Fish sandwich, Shake, Hamburger. User 2: French fries, Chicken, Cheese burger, Shake. User 3: Hamburger, Cheese burger, French fries, McRib, Chicken sandwich. User 4: Chicken McNuggets, Cheese burger, Bacon cheese burger, French fries. User 5: Hamburger, Quarter Pounder, Big Mac, Chicken fajita, French fries, Apple pie.
• 102. Analyzing free-listing data Create a list of all items, sorted by their average rank (of being listed by a respondent). Examine how that rank order changes with the addition of each new respondent. If the ranks are relatively stable, then you can stop adding new respondents. Example (item, listed by % of participants): Cheese burger 60%, Chicken McNuggets 70%, Chicken sandwich 40%, Fish sandwich 40%, French fries 100%, Shake 30%.
• 103. Concept structure Plot items according to their frequency of mention and divide them into three concentric circles (core, middle, periphery), using your own break points. A small tallying sketch follows.
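Both the frequency table and the core/middle/periphery split are easy to compute once the lists are collected. A sketch (the respondent lists and break points here are hypothetical):

```python
# Tally free-listing data: count how many respondents mention each item,
# express it as a percentage, and assign a core/middle/periphery tier
# using assumed break points (>= 60% core, >= 30% middle).
from collections import Counter

lists = [
    ["french fries", "cheese burger", "shake", "hamburger"],
    ["french fries", "chicken sandwich", "shake", "hamburger"],
    ["hamburger", "cheese burger", "french fries", "chicken sandwich"],
    ["chicken mcnuggets", "cheese burger", "french fries"],
    ["hamburger", "big mac", "french fries", "apple pie"],
]

mentions = Counter(item for respondent in lists for item in set(respondent))
n = len(lists)
for item, count in mentions.most_common():
    pct = count / n
    tier = "core" if pct >= 0.6 else "middle" if pct >= 0.3 else "periphery"
    print(f"{pct:4.0%}  {tier:<9}  {item}")
```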
• 104. Other uses for free-listing Comparing cultural or other group differences: how do two groups perceive the same domain? Comparing two domains: how does perception of McDonald’s menu compare with Wendy’s? Segmenting respondents into types based on familiarity: find respondents with greater domain familiarity, or those who perceive the domain in an idiosyncratic fashion.
  • 105. Card-sorting and other methods for designing information architecture
  • 106. Case Study: Design of online travel guide Example: Designing an online travel guide to help users plan trips. Purpose of card sort: to structure the website for helping users find travel information, and create personalized travel guides. Items include lodging, entertainment, local information, When to Go, Travel by Car/Air/Bus, Music Events, Hiking, Day Trips, Skiing, Diving, Golf, Emergency Info.
• 107. Open card-sorting Goal: to understand the overall categorization scheme. Method: open card sort. Users are given items and asked to create categories. Options: provide the total number of categories to be created (avoids problems with splitters and lumpers); use successive card sorts to create taxonomies; it is OK to put one card in multiple groups; ask for labels for each grouping.
• 108. Cluster Analysis for card-sorting data Cluster analysis suggests a structural solution and is easy to translate into design. Challenge: how to reconcile multiple schemes? (Example items from the travel-guide dendrogram: Hotels, Bed and Breakfast, Restaurants, Hostels, Emergency Info, Currency, Camping, Hiking, Day Trips, Skiing, Diving, Surfing, Mountain Climbing, Biking.) A small clustering sketch follows.
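One common way to get from card sorts to a cluster tree is to build an item-by-item distance matrix from how often respondents grouped two items together, then run hierarchical clustering on it. A sketch (hypothetical sorts over a few of the travel-guide items; assumes NumPy and SciPy are available):

```python
# Build a co-occurrence-based distance matrix from open card sorts and
# cluster it hierarchically; the resulting tree suggests candidate groupings.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

items = ["Hotels", "Hostels", "Camping", "Hiking", "Day Trips", "Skiing"]
sorts = [  # each respondent's sort: a list of groups (hypothetical data)
    [["Hotels", "Hostels", "Camping"], ["Hiking", "Day Trips", "Skiing"]],
    [["Hotels", "Hostels"], ["Camping", "Hiking", "Skiing"], ["Day Trips"]],
    [["Hotels", "Hostels", "Camping"], ["Hiking", "Skiing"], ["Day Trips"]],
]

index = {item: i for i, item in enumerate(items)}
together = np.zeros((len(items), len(items)))
for sort in sorts:
    for group in sort:
        for a, b in combinations(group, 2):
            together[index[a], index[b]] += 1
            together[index[b], index[a]] += 1

distance = 1 - together / len(sorts)   # 0 = always grouped together
np.fill_diagonal(distance, 0)
tree = linkage(squareform(distance), method="average")
print(dendrogram(tree, labels=items, no_plot=True)["ivl"])  # leaf order in the tree
```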
• 109. Closed card-sorting to design an IA Goal: to understand how good an existing information architecture and its labels are. Method: closed card sort. Users are given items and category labels, and asked to place each item in a category. Do not allow creation of a miscellaneous category. Useful for understanding user categorizations when category labels are a given, and for refining an existing categorization scheme. Options: allowing items to belong to multiple categories; providing category descriptions rather than category labels. (A small scoring sketch follows.)
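Scoring a closed sort can be as simple as measuring, for each item, what share of respondents placed it where the IA intends. A sketch with made-up placements:

```python
# For each item, compute agreement with the intended category; low agreement
# flags items whose label or category placement needs rethinking.
placements = {            # item -> categories chosen by respondents (hypothetical)
    "Skiing":     ["Activities", "Activities", "Activities", "Day Trips"],
    "Hostels":    ["Lodging", "Lodging", "Budget Travel", "Lodging"],
    "When to Go": ["Local Info", "Planning", "Other", "Planning"],
}
intended = {"Skiing": "Activities", "Hostels": "Lodging", "When to Go": "Planning"}

for item, choices in placements.items():
    agreement = choices.count(intended[item]) / len(choices)
    print(f"{agreement:4.0%}  {item}")
```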
  • 110. Doing closed card-sorting online User works with given categories Each item (card) occupies a row Each category is represented by a column An “Other” category catches items that do not fit in
  • 111. Comparing card-sorts for different user types Very useful for understanding differences in mental maps of various groups Can help understand differences between user groups, different cultures etc. Try to create consensus maps to reconcile differences between different groups.
  • 112. Practical exercise Using the RUMM (Rapid User Mental Modeling) method.
  • 113. Structure of workshop Introduction Evaluating Systems (Morning session) Overview of evaluation Heuristic Evaluation Usability Testing GOMS Understanding users (Afternoon session) Personas and Scenarios Mental Models and Information Architecture Business of Usability (time permitting)
  • 114. Swimming with Sharks: The Business of Usability
  • 115. What we’ll cover Stakeholder analysis for fun and profit Making a business case for a User Experience project Test out the ideas with a sample project
  • 117. Who are stakeholders and why should we analyze them? Stakeholder: Anyone who is affected by, or can affect, your project Goals of understanding stakeholders Make your design better, by getting important information about the business context Identify potential obstacles ahead of time so you can deal with them Change design to address the issues raised by stakeholders Marshal evidence to counter their objections Neutralize resistance by making stakeholders feel heard
• 118. Putting Stakeholders into context It does not matter how good the design is if it is not approved by management and actually put into operation. A given project isn’t necessarily in everybody’s best interest. This isn’t about playing politics: it is about the institutional decision-making process. People represent different organizations within an enterprise; if a project is seen as a big negative by various organizations, it should either address the concerns raised or justify itself strongly in order to be approved. Think of stakeholders as another class of users whom the design should satisfy: a real person you can talk to, whose goals are typically very concrete and business-metrics oriented.
• 119. Understanding Who’s Who in an Organization Org charts don’t tell the whole story; detective work is needed to sort out motive and influence. How to do it? Indirectly: watch for “influence tells”. Directly: ask “What are the organizational challenges?”
  • 120. The Interview Ask semi-structured questions about the product in general What group of users is least well-served? What one change would impact profits the most? Where do you see <<product>> in 5 years? Find out what their conception of your project is What might happen if this project went well? What are some risks associated with this project?
• 121. Remote Interviews Online survey: ask the same questions as in a face-to-face interview; limit it to 5 minutes of work. Phone interviews: follow up on survey answers, clarify them, and try to get a sense of the person’s concerns. Compared to a face-to-face interview there is less emotional connection, but remote interviews are even more necessary (remoteness means you know even less about stakeholders and their concerns).
• 123. Prioritizing Stakeholders High influence / high interest: engage. Low influence / high interest: use as an information source. High influence / low interest: broadly satisfy. Low influence / low interest: avoid. (The slide plots stakeholders such as Andre, Chris, Sandeep, and Anu on an influence-by-interest grid.) A small decision-table sketch follows.
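The four strategies amount to a small decision table keyed on influence and interest. A sketch (the names come from the slide, but their quadrant assignments here are hypothetical):

```python
# Map each stakeholder's influence/interest scores to an engagement strategy.
def strategy(influence: str, interest: str) -> str:
    table = {
        ("high", "high"): "Engage",
        ("low", "high"):  "Use as information source",
        ("high", "low"):  "Broadly satisfy",
        ("low", "low"):   "Avoid",
    }
    return table[(influence, interest)]

stakeholders = {  # hypothetical quadrant assignments
    "Andre": ("high", "high"), "Chris": ("low", "high"),
    "Sandeep": ("high", "low"), "Anu": ("low", "low"),
}
for name, (influence, interest) in stakeholders.items():
    print(f"{name:8s} -> {strategy(influence, interest)}")
```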
• 124. An organizational dilemma Usability is often an Independent Business Unit (IBU); IBUs provide “accountability” and make measurement easier. Engineering is responsible for paying for usability services. Engineering is measured on the basis of schedule, feature checklists, and # of bugs; Marketing/Sales is measured on the basis of sales. So Engineering invests the money and time in usability, but Marketing/Sales reaps the benefits! Solution: tie engineering compensation to usability metrics. Good luck.
  • 125. Building a Business Case for Usability
• 126. ROI of Usability: Previous work Cost-Justifying Usability (Bias & Mayhew): cost (employees, subjects, equipment) vs. benefit (task speed, user errors, late design changes, increased sales); internal vs. external: internal benefits increase with the number of users and frequency of use, external benefits increase with the development budget and a large base of sales. Usability Return on Investment (Nielsen Norman Group): “Usability projects have an ROI of 150%”, measured by sales conversions, traffic / visitor count, and user performance / productivity.
  • 127. Myths of Usability ROI* Generalizing ROI estimates Assuming improvements are due to usability Benefits to customer booked as benefits to software company Support, training are profit centers in enterprise software! How does usability increase revenue? Win/loss reports for enterprise software sales User research to determine buying reasons for shrink-wrap software registration / shopping cart behavior for ecommerce Ignores competitive landscape Being the “overall best choice” in your niche wins you the sale Usability may play a greater or lesser role in determining this Ignores potential negative business impact of changes that enhance usability Marketing vs. User Experience in ecommerce Ignoring opportunity costs “ Should the project be approved? Yes, because NPV is positive.” *Rosenberg, BayCHI 2003
• 128. Building a Business Case* Understand your business: the financial levers for the company and the competitive environment the company operates in. Understand the project approval process: who has a say, what the stages of project approval are, and what metrics the enterprise cares about. Understand threats and opportunities from a UX perspective: do user and stakeholder research and find areas where user and business interests are in tandem. Try to frame UX projects so that risk is low and payoff is high (it is all about risk) and the chances of success are high. Estimate ROI: estimate costs (development, negative revenue impact, opportunity cost) and estimate the benefit (be conservative). After the project, follow up: track successes and failures. Be accountable. *Reference: Herman, J., CHI 2004
• 129. Key Points Not every project will be justifiable; ROI for some projects will be huge. The ultimate proof is in “moving the needle”. Different companies care about different “financial levers” (business metrics); make your case on the basis of those numbers, for example # registrations, % successful registrations, support calls per customer, average sale size. Management doesn’t care about methodology, so don’t justify methodology.
  • 130. Key Points (cont.) UX practitioners should understand business levers and incorporate them into design at a core level Post-hoc justification is not enough Project selection and design should be informed by business metrics Some UX practitioners should learn about business analysis Take a process oriented approach Evolve a process that takes into account the various interests and goals within an organization
• 131. Example Situations: ROI in an ecommerce Context Context: an online book seller is planning to improve the checkout process. Metrics: number of shopping-cart bailouts; performance on a usability test. It is easy to justify the ROI of the shopping-cart improvement, since fewer bailouts mean more sales. The design should focus on reducing bailouts (a back-of-envelope sketch follows).
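The business case here reduces to simple arithmetic: a lower bailout rate means more completed orders, which can be compared with the project cost. A back-of-envelope sketch (every number is an assumption, not from the slides):

```python
# Estimate first-year ROI of a checkout redesign from an assumed drop in
# the shopping-cart bailout rate.
monthly_checkouts_started = 10_000
bailout_rate_before = 0.60
bailout_rate_after = 0.58        # hoped-for improvement
average_order_value = 25.0       # dollars
project_cost = 50_000.0

extra_orders = monthly_checkouts_started * (bailout_rate_before - bailout_rate_after)
extra_revenue_per_year = extra_orders * average_order_value * 12
roi = (extra_revenue_per_year - project_cost) / project_cost

print(f"extra orders/month: {extra_orders:.0f}")
print(f"extra revenue/year: ${extra_revenue_per_year:,.0f}")
print(f"first-year ROI: {roi:.0%}")
```

The same arithmetic, with honest (conservative) inputs, is what the business-case slides above mean by estimating costs and benefits before the project and tracking the metric afterwards.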
• 132. Example Situations: ROI in a Customer Service Context Context: a bank is planning two projects to reduce call volume: (a) let users look at their account balance, and (b) let users update their contact information. Metrics: call-volume metrics (overall # of calls, # of calls per task); # of online transactions (that plausibly replaced calls); performance on a usability test. It is easier to justify the ROI of updating contact information than of looking at account balances: updating contact information plausibly replaces a phone call, while looking at an account balance does not. Did they even care, or were they just browsing? Even if they did care, the benefit is more diffuse (customer convenience -> loyalty).
• 133. Crossing the Chasm Where in the technology adoption life cycle does usability matter? (Adoption curve: Innovators, Early Adopters, Early Majority, Late Majority, Laggards.)
• 134. Revised technology life-cycle The adoption curve (Innovators, Early Adopters, Early Majority, Late Majority, Laggards) overlaid with the chasm, bowling alley, tornado, and main street phases.
• 135. ROI of UX in an Outsourcing context Software services -> software products: product development requires understanding users on a deeper level. Good times ahead? For services, it depends on the situation of your customer. Your ROI from designing systems that satisfy your customer is huge (duh), but your customer is hardly ever the user, so it depends on the business situation of your client. What kind of clients would care about usability?
• 136. What kind of clients care about usability? Clients whose customers have low switching costs (money, time, expertise). Clients where the buyer = the user: business success comes from making the buyer happy, and if the buyer is the user, usability plays a bigger role. Clients operating in a fiercely competitive landscape: the better your competition is, the better you have to be to win a sale, and usability is one dimension by which products can be better. Clients making very high quality products, or trying to cross the chasm. Four types of contexts: content, ecommerce, desktop, enterprise.
  • 137. What’s Next Where do we go from here? Can engineers do usability work on their own products? Are usability specialists needed? What kind of processes / corporate structures will facilitate usability work in software companies?
  • 138. Thank you [email_address] [email_address] slides and other material will be posted at www.uzanto.com/papers/indiamar04