USABILITY HEURISTICS, PRINCIPLES, EVALUATION & THEIR RELEVANCE TO USER EXPERIENCE & INTERFACE DESIGN

HEURISTIC EVALUATION:

Heuristic evaluation of a user interface design is used when there is no physical access to users but the interface still needs to be evaluated. It is relevant to UI design because it lets usability professionals assess an interface against a set of standard principles without recruiting participants. A heuristic is a practical rule of thumb for solving problems or making decisions.

PRINCIPLES:
H1: Visibility of System Status (Does the design keep the user informed?)

H2: Match between system and the real world (Does the design speak a user’s language?)

H3: User control and freedom (Does the design give the user control?)

H4: Consistency and standards (Does the design abide by a set of rules that a user can understand?)

H5: Error prevention (Does the design reduce the potential for a user to make errors?)

H6: Recognition rather than recall (Does the design avoid forcing the user to remember how things work?)

H7: Flexibility and efficiency of use (Does the design provide shortcuts for advanced users?)

H8: Aesthetic and minimalist design (Does the design communicate and handle information simply?)

H9: Help users recognize, diagnose, and recover from errors (Does the design explain how a user erred?)

H10: Help and documentation (Does the design provide users with help?)
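
These ten questions can double as a checklist during an evaluation. As a rough, hypothetical sketch (not something the method itself prescribes), the heuristics could be kept as data so that every issue an evaluator logs points back to the principle it violates; the identifiers and structure below are illustrative assumptions.

# Hypothetical sketch: the Nielsen-Molich heuristics as a lookup table,
# so logged issues can reference the principle they violate.
# Identifiers (H1..H10) follow the numbering used above.
HEURISTICS = {
    "H1": "Visibility of system status",
    "H2": "Match between system and the real world",
    "H3": "User control and freedom",
    "H4": "Consistency and standards",
    "H5": "Error prevention",
    "H6": "Recognition rather than recall",
    "H7": "Flexibility and efficiency of use",
    "H8": "Aesthetic and minimalist design",
    "H9": "Help users recognize, diagnose, and recover from errors",
    "H10": "Help and documentation",
}

def describe(code: str) -> str:
    """Return a 'H5: Error prevention' style label for reports."""
    return f"{code}: {HEURISTICS[code]}"

print(describe("H5"))  # -> "H5: Error prevention"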

HOW TO CONDUCT HEURISTIC EVALUATION:

1. Know what to test and how — Whether it’s the entire product or one procedure, clearly define the parameters of what to test and the objective.

2. Know your users and have clear definitions of the target audience’s goals, contexts, etc. User personas can help evaluators see things from the users’ perspectives.

3. Select 3–5 evaluators, making sure they have expertise in usability and in the relevant industry.

4. Define the heuristics (around 5–10) — This will depend on the nature of the system/product/design. Consider adopting/adapting the Nielsen-Molich heuristics and/or using/defining others.

5. Brief evaluators on what to cover in a selection of tasks, and agree on a scale of severity ratings (e.g., from cosmetic to critical) for flagging issues; a sketch of such a scale appears after this list.

6. 1st Walkthrough — Have evaluators use the product freely so they can identify elements to analyse.

7. 2nd Walkthrough — Evaluators scrutinize individual elements according to the heuristics. They also examine how these fit into the overall design, clearly recording all issues encountered.

8. Debrief evaluators in a session so they can collate results for analysis and suggest fixes.
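
To make the walkthroughs and the debrief concrete, here is a minimal, hypothetical sketch of how individual findings might be recorded during the second walkthrough and then collated by heuristic and severity in the debrief. The severity scale, field names, and sample data are assumptions for illustration, not part of the formal method.

from collections import Counter
from dataclasses import dataclass

# Assumed 0-4 severity scale, in the spirit of common usability
# severity ratings; adapt the labels to your own team.
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "critical"}

@dataclass
class Finding:
    evaluator: str   # who logged the issue (evaluators work independently)
    screen: str      # where in the product it was observed
    heuristic: str   # e.g. "H5" for Error prevention
    severity: int    # key into SEVERITY
    note: str        # what the evaluator saw

# Sample findings from the second walkthrough (illustrative data only).
findings = [
    Finding("Ada", "Checkout", "H5", 4, "No confirmation before deleting the cart"),
    Finding("Ben", "Checkout", "H9", 3, "Error message does not say which field failed"),
    Finding("Ada", "Settings", "H4", 2, "Save button placement differs from other screens"),
]

# Debrief: collate results by heuristic and list the worst issues first
# so the group can discuss and suggest fixes.
by_heuristic = Counter(f.heuristic for f in findings)
worst_first = sorted(findings, key=lambda f: f.severity, reverse=True)

print("Issues per heuristic:", dict(by_heuristic))
for f in worst_first:
    print(f"[{SEVERITY[f.severity]}] {f.screen} / {f.heuristic}: {f.note} ({f.evaluator})")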
