
101: Basic Evaluation with Visibility Rules

Build a clinical decision support evaluation using the URECA algorithm as a worked example.

Overview

This tutorial walks through building a clinical decision support evaluation using the URECA (Adult Urinary Retention Evaluation & Catheterization Algorithm) as the working example. The evaluation uses 7 questions and 5 conditionally displayed results to guide clinicians through a branching decision pathway.

The tutorial demonstrates the five-step development methodology — from initial component breakdown through testing — showing how structured planning makes EVAL evaluation development faster and more reliable.

Evaluation Structure

  • 7 questions with branching logic
  • 5 results displayed conditionally based on user responses
  • Decision pathways determined by symptom assessment and clinical criteria

Five-Step Development Methodology

Step 1: Break Down Components

Start by identifying the workflow elements. Map out the questions users will answer and the results the evaluation must produce. For the URECA example, this preliminary analysis reveals the question/result structure needed to support clinical decision-making.

Key principle: Simplify choices. Minimize the number of options wherever possible, and prefer yes/no responses when additional options wouldn't drive distinct logic paths.
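A component breakdown can be drafted as plain data before anything is built. The sketch below is purely illustrative: EVAL evaluations are configured in the Builder UI, not in code, and every identifier here (question IDs, choices, result IDs) is a hypothetical stand-in, not part of the actual URECA content.

```python
# Hypothetical component breakdown for a URECA-style evaluation.
# All IDs and wording are invented for illustration only.

questions = {
    "q1_unable_to_void": {
        "prompt": "Is the patient unable to void?",
        "choices": ["yes", "no"],  # yes/no keeps the logic paths simple
    },
    "q2_scan_high": {
        "prompt": "Is the bladder scan volume above the threshold?",
        "choices": ["yes", "no"],
    },
}

results = {
    "r_catheterize": "Proceed to the catheterization pathway",
    "r_reassess": "Reassess the patient later",
    "r_no_action": "No retention pathway indicated",
}
```

Listing questions and results side by side like this makes it easy to see, before building, whether every result can actually be reached from the questions you plan to ask.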

Step 2: Diagram the Process Flow

Convert your workflow diagram into logic formulas that express if-then relationships between user inputs and outcomes. For each result, write out the conditions that must be true for it to display.

Key principle: Every component (question, choice, result) must be uniquely identifiable so that visibility rules can reference them accurately.
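The if-then relationships can be written out as boolean formulas keyed by unique component IDs. This is a sketch of the idea only, it does not reflect EVAL's actual rule syntax, and the question and result IDs are hypothetical.

```python
# Illustrative only: visibility rules expressed as boolean formulas
# over uniquely identified question responses. Not EVAL's rule syntax.

def visible_results(answers):
    """Return the result IDs whose display conditions hold for the given answers."""
    rules = {
        "r_catheterize": lambda a: a["q1_unable_to_void"] == "yes" and a["q2_scan_high"] == "yes",
        "r_reassess":    lambda a: a["q1_unable_to_void"] == "yes" and a["q2_scan_high"] == "no",
        "r_no_action":   lambda a: a["q1_unable_to_void"] == "no",
    }
    return [rid for rid, rule in rules.items() if rule(answers)]

print(visible_results({"q1_unable_to_void": "yes", "q2_scan_high": "no"}))
# ['r_reassess']
```

Because every rule references components by a unique ID, each formula can be checked line by line against the process flow diagram during review.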

Step 3: Peer Review

Before building in EVAL, share your component breakdown and process flow diagrams with a clinical colleague for review. Early peer review catches:

  • Inaccuracies in translating the clinical source into your workflow
  • Logic gaps (outcomes with no decision path leading to them)
  • Computational errors in scoring formulas
  • Design flaws in the question flow

Investing time in peer review before building saves significant rework later.

Step 4: Build in EVAL

Using your validated diagrams as a blueprint, build the evaluation in the EVAL Builder:

  1. Create the evaluation and fill in the General tab metadata
  2. Add sections as needed
  3. Create each question with its choices and values
  4. Define result formulas
  5. Apply visibility rules to sections, questions, and results based on your process flow diagrams

Step 5: Conduct Testing

After building, test every possible logic pathway:

  1. Use EVAL's built-in test scenarios to define input combinations and expected outputs
  2. Run each scenario and verify the correct results display
  3. Manually validate edge cases where logic paths converge or diverge
  4. Confirm that all visibility rules behave as intended

Key principle: Account for every possible outcome combination. Gaps in decision pathways can lead to clinical errors.
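The completeness check above can be automated outside EVAL: enumerate every input combination and confirm each one triggers exactly one result. The rule set below is the same hypothetical sketch used earlier, not the real URECA logic.

```python
from itertools import product

# Sketch of exhaustive pathway testing: enumerate every input
# combination and confirm each triggers exactly one result, so
# no pathway is undefined (a gap) or ambiguous (an overlap).
# Question/result IDs are hypothetical.

rules = {
    "r_catheterize": lambda a: a["q1"] == "yes" and a["q2"] == "yes",
    "r_reassess":    lambda a: a["q1"] == "yes" and a["q2"] == "no",
    "r_no_action":   lambda a: a["q1"] == "no",
}

for q1, q2 in product(["yes", "no"], repeat=2):
    answers = {"q1": q1, "q2": q2}
    shown = [rid for rid, rule in rules.items() if rule(answers)]
    assert len(shown) == 1, f"Gap or overlap for {answers}: {shown}"

print("all pathways covered")
```

With 7 yes/no-style questions the full combination space is still small enough to enumerate exhaustively, which makes this kind of brute-force check practical for evaluations of this size.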

Design Principles Summary

  • Simplicity: Minimize choice options; prefer yes/no where distinct logic isn't needed
  • Unique identification: Every component must be clearly distinguishable for accurate logic connections
  • Completeness: Every possible input combination must lead to a defined outcome
  • Early peer review: Review diagrams before building to catch errors cheaply
Copyright © 2026