
Design

Design for complexity, usability, and peer review in your clinical evaluations.

Designing for Complexity

Clinical evaluations can involve dozens of questions, multiple scoring paths, and nuanced decision logic. Your job as a designer is to manage this complexity so that users experience a clear, manageable workflow rather than a wall of questions. Break complex assessments into logical sections that guide users step by step through the evaluation.

Consider Your End Users

Think about the people who interact with your evaluation. Patients filling out intake forms have different needs than practitioners conducting assessments. Design with both audiences in mind:

  • Patients need clear, jargon-free language and a straightforward path through the evaluation.
  • Practitioners need efficient workflows that fit into their clinical routine without unnecessary friction.

When you empathize with your users, you make design decisions that improve completion rates and data quality.

Use Section Headers and Descriptions

Guide users through your evaluation with descriptive section headers and supporting text. Each section header should tell the user what they are about to answer and why. Add section descriptions to provide context, explain instructions, or set expectations for what comes next.

Well-written headers and descriptions reduce confusion and help users feel confident that they are on the right track.

Add Media for Clarity

Where appropriate, include images or videos to enhance understanding. Media works well for:

  • Showing anatomical reference points for physical assessments
  • Demonstrating measurement techniques
  • Providing visual scales or reference charts
  • Explaining complex concepts that are difficult to convey through text alone

Use media purposefully. Every image or video you add should serve a clear function in helping the user provide accurate responses.

Write Clear, Concise Questions

Each question should communicate exactly what information you need. Follow these principles:

  • Use precise, unambiguous language.
  • Avoid double-barreled questions that ask about two things at once.
  • Keep question text short while retaining necessary clinical specificity.
  • Include units of measure or expected formats when collecting numerical data.

Clear questions produce better data, which leads to more accurate results.
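Stating the expected unit and format also lets you reject bad input at entry time. A minimal sketch in Python of unit-aware validation (the field names, bounds, and schema here are illustrative, not part of Builder):

```python
# Sketch: validating a numeric answer against a declared unit and range.
# Field definitions are illustrative, not a real Builder schema.

numeric_fields = {
    "weight": {"unit": "lbs", "min": 2, "max": 1500},
    "height": {"unit": "in", "min": 10, "max": 100},
}

def validate(field, raw_value):
    """Parse a numeric answer and check it falls in the field's expected range.

    Returns None when the value is valid, or an error message to show the user.
    """
    spec = numeric_fields[field]
    try:
        value = float(raw_value)
    except ValueError:
        return f"Enter {field} as a number in {spec['unit']}."
    if not spec["min"] <= value <= spec["max"]:
        return f"{field} should be between {spec['min']} and {spec['max']} {spec['unit']}."
    return None  # valid

print(validate("weight", "150"))  # None (valid)
print(validate("height", "abc"))  # asks for a number in inches
```

Catching a missing unit or an out-of-range value at the question itself is far cheaper than discovering it later in your results.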

Organize Choices Logically

When you present answer choices, arrange them in an order that makes sense to the user:

  • Place the most commonly selected options first to speed up completion.
  • Use numerical or severity order for scaled responses (e.g., mild, moderate, severe).
  • Maintain consistent ordering patterns across similar questions throughout the evaluation.

Logical choice ordering reduces cognitive load and helps users find their answer quickly.

Create Adaptive Experiences with Visibility Rules

Use visibility rules to show only the questions relevant to each user's situation. When you hide questions that do not apply, you:

  • Shorten the perceived length of the evaluation
  • Prevent users from answering irrelevant questions
  • Reduce data noise in your results
  • Create a personalized experience that adapts to each user's responses

An evaluation that asks 50 questions but shows only 15 to any given user feels manageable and focused. Design your visibility logic to create these streamlined experiences.
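Conceptually, a visibility rule is a predicate over the answers collected so far: a question is shown only when its rule passes. A minimal sketch in Python (the question IDs, answer keys, and rule structure are hypothetical, not Builder's actual API):

```python
# Sketch of answer-driven visibility rules.
# Question IDs and conditions are illustrative, not a real Builder API.

answers = {
    "tobacco_use": "yes",
    "age": 62,
}

# Each rule maps a question ID to a predicate over the answers so far.
visibility_rules = {
    "packs_per_day": lambda a: a.get("tobacco_use") == "yes",
    "pregnancy_status": lambda a: a.get("sex") == "female",
    "fall_risk_screen": lambda a: a.get("age", 0) >= 65,
}

def visible_questions(all_questions, answers):
    """Return only the questions whose rule passes (no rule = always shown)."""
    return [
        q for q in all_questions
        if visibility_rules.get(q, lambda a: True)(answers)
    ]

shown = visible_questions(
    ["tobacco_use", "packs_per_day", "pregnancy_status", "fall_risk_screen"],
    answers,
)
print(shown)  # ['tobacco_use', 'packs_per_day']
```

Here the follow-up smoking question appears because tobacco use was reported, while the pregnancy and fall-risk questions stay hidden; each user sees only the branch that applies to them.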

Conduct Peer Review

Before you publish, invite colleagues to review your evaluation. All users with Player access can inspect Builder in read-only mode, allowing them to examine your data structure, logic, and design without accidentally modifying anything.

Peer reviewers can help you:

  • Catch formula errors or logic gaps
  • Identify confusing question text or unclear instructions
  • Validate that scoring matches clinical expectations
  • Suggest improvements to the user experience

Fresh eyes often spot issues that you overlook after spending hours building an evaluation.

Test with Realistic Scenarios

Run your evaluation through multiple test scenarios that simulate real-world use. Cover different paths through the evaluation to verify that every combination of inputs produces the correct result.

Name your test scenarios descriptively so you can quickly identify what each one validates. For example:

  • "BMI Test: Weight 150 lbs; Height 70 in = 21.5"
  • "Cardiac Risk: Male, Age 55, Smoker, BP 145/92 = High Risk"
  • "Depression Screen: PHQ-9 Score 4 = Minimal Symptoms"

Descriptive names make it easy to rerun specific scenarios after you make changes and confirm that your updates did not introduce regressions.
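The naming pattern above pairs naturally with automated checks: each scenario carries its inputs and expected result, so reruns are mechanical. A hedged sketch in Python using the standard imperial BMI formula, 703 × weight (lbs) / height (in)²; the scenario structure is illustrative, not Builder's test format:

```python
# Sketch: named test scenarios checked against a scoring function.
# The BMI formula (703 * lbs / in^2) is the standard imperial formula;
# the scenario tuple layout is illustrative, not Builder's test format.

def bmi(weight_lbs, height_in):
    """Body mass index from imperial units, rounded to one decimal place."""
    return round(703 * weight_lbs / height_in ** 2, 1)

# (descriptive name, inputs, expected result)
scenarios = [
    ("BMI Test: Weight 150 lbs; Height 70 in",
     {"weight_lbs": 150, "height_in": 70}, 21.5),
    ("BMI Test: Weight 220 lbs; Height 65 in",
     {"weight_lbs": 220, "height_in": 65}, 36.6),
]

for name, inputs, expected in scenarios:
    result = bmi(**inputs)
    status = "PASS" if result == expected else "FAIL"
    print(f"{status}  {name} -> {result}")
```

Because every scenario states its expected result, a regression after an edit shows up immediately as a named FAIL rather than a silent scoring drift.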

Gather Feedback Before Publishing

After peer review and testing, gather feedback from representative users whenever possible. Observing someone use your evaluation reveals usability issues that internal review alone cannot uncover. Pay attention to:

  • Where users hesitate or express confusion
  • Questions that users interpret differently than you intended
  • Sections that feel too long or overwhelming
  • Results that users find unclear or difficult to act on

Incorporate this feedback into a final round of refinements, then publish with confidence that your evaluation delivers a clear, accurate, and usable clinical experience.

Copyright © 2026