EVAL Health

Data

Manage data integrity, labeling, and structure for your clinical evaluations.

Why Data Structure Matters

The data you collect and the way you organize it form the foundation of your entire evaluation. When you structure your data carefully, you ensure that every input traces cleanly through calculations to the final output. Poor data structure leads to ambiguous references, broken formulas, and results you cannot trust.

Ensure Data Integrity with Unique Labels

Assign a unique label to every element in your evaluation: sections, questions, choices, and results. These labels serve as the identifiers that your formulas, visibility rules, and scoring logic reference. When two elements share a label, you introduce ambiguity that can produce incorrect results.

Every question and choice title should be unique within the evaluation. This distinctness is essential for traceability and accuracy: when you review a formula that references a specific label, you need to know exactly which element it points to, without any guesswork.
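A simple automated check can catch duplicate labels before they cause ambiguity. The sketch below is a minimal example, assuming you can export or collect your evaluation's labels into a plain list (the list shown is illustrative, not part of the product):

```python
from collections import Counter

def find_duplicate_labels(labels):
    """Return every label that appears more than once, sorted."""
    counts = Counter(labels)
    return sorted(label for label, n in counts.items() if n > 1)

# Hypothetical export of all labels in an evaluation
labels = ["systolic_bp", "pain_severity_mild", "systolic_bp", "bmi"]
print(find_duplicate_labels(labels))  # ['systolic_bp']
```

Running a check like this whenever you add or rename elements keeps the one-label-one-element guarantee intact as the evaluation grows.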

Use Meaningful, Descriptive Names

Avoid generic labels like "Question 1," "Option A," or "Section 2." Instead, choose names that describe the clinical purpose of each element:

  • Use systolic_bp instead of q3
  • Use pain_severity_mild instead of option_a
  • Use cardiovascular_risk_factors instead of section_2

Descriptive names make your formulas self-documenting. When you revisit an evaluation months later or hand it off to a colleague for peer review, meaningful labels let anyone understand the logic without deciphering cryptic references.

Labels as References

Labels do more than identify elements -- they serve as the connection points throughout your evaluation. You reference labels in two critical contexts:

  • Formula expressions -- Use labels as keywords in calculations (e.g., IF(systolic_bp > 140, "Elevated", "Normal")).
  • Visibility rules -- Reference labels to control which questions appear based on prior answers (e.g., show follow-up questions only when a specific condition is met).

Plan your labeling convention before you start building. A consistent naming pattern makes your logic easier to write, read, and debug.
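To make the visibility-rule idea concrete, here is a minimal sketch in Python. The `(label, operator, value)` rule shape and the `answers` dictionary are assumptions for illustration, not the product's actual rule format:

```python
def is_visible(answers, rule):
    """Evaluate a visibility rule of the assumed form (label, operator, value).

    Returns False when the referenced label has no answer yet, so
    follow-up questions stay hidden until their condition can be checked.
    """
    label, op, value = rule
    answer = answers.get(label)
    if answer is None:
        return False
    ops = {
        ">":  lambda a, b: a > b,
        "<":  lambda a, b: a < b,
        "==": lambda a, b: a == b,
    }
    return ops[op](answer, value)

answers = {"systolic_bp": 152}
# Show the hypertension follow-up only when systolic BP is elevated
print(is_visible(answers, ("systolic_bp", ">", 140)))  # True
```

Note how the rule references the element purely by its label: if two elements shared `systolic_bp`, there would be no way to know which answer the rule should read.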

Plan Your Data Model

Before you create your first question, map out the full data model:

  • Inputs -- What data do you need to collect? List every question, measurement, and selection the user provides.
  • Outputs -- What results do you need to produce? Define every score, classification, and recommendation the evaluation generates.
  • Transformations -- How do inputs become outputs? Identify the calculations, lookups, and decision rules that connect them.

This planning step helps you catch missing data points before you build, saving you from restructuring later.

Consider Units and Constraints

When you collect numerical data, define the units of measure and set appropriate constraints:

  • Specify whether a weight field expects pounds or kilograms.
  • Set minimum and maximum values to prevent data entry errors (e.g., a heart rate between 30 and 250 bpm).
  • Determine decimal precision for measurements that require it.

Clear units and constraints protect the integrity of your calculations and prevent users from entering values that would produce meaningless results.
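A unit-and-range check can be sketched in a few lines. The `spec` dictionary shape below is an assumption made for illustration; the heart-rate bounds come from the example above:

```python
def validate_measurement(value, unit, spec):
    """Check one numeric entry against an assumed spec of
    {'unit': ..., 'min': ..., 'max': ...}. Returns a list of errors."""
    errors = []
    if unit != spec["unit"]:
        errors.append(f"expected {spec['unit']}, got {unit}")
    elif not (spec["min"] <= value <= spec["max"]):
        errors.append(f"{value} {unit} is outside [{spec['min']}, {spec['max']}]")
    return errors

heart_rate_spec = {"unit": "bpm", "min": 30, "max": 250}
print(validate_measurement(72, "bpm", heart_rate_spec))   # []
print(validate_measurement(300, "bpm", heart_rate_spec))  # one range error
```

Rejecting a heart rate of 300 bpm at entry time is far cheaper than discovering a nonsensical risk score downstream.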

Organize Sections to Match Clinical Workflow

Arrange your sections in an order that mirrors the clinical evaluation process. Group related questions together so users experience a natural progression:

  • Place demographic and intake questions first.
  • Follow with assessment and observation questions.
  • Position scoring and result sections at the end.

Logical section ordering reduces cognitive load for users and helps them complete evaluations efficiently without jumping back and forth.

Keep Data Traceable

Maintain a clear chain from input through calculation to output. For every result your evaluation produces, you should be able to trace backward through the logic to the specific inputs that influenced it. This traceability is essential for:

  • Debugging -- When a result looks wrong, you can follow the chain to find where the error occurs.
  • Validation -- When you create test scenarios, you verify that specific inputs produce expected outputs.
  • Peer review -- When a colleague reviews your evaluation, they can follow the data flow and confirm its accuracy.

Build your data structure with traceability in mind from the start, and you will save significant time during testing and review.
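Traceability can be checked programmatically as well. This is a minimal sketch, assuming a dependency map where each computed label lists the labels it is derived from and raw inputs have no entry; the labels shown are illustrative:

```python
def trace_inputs(result, deps):
    """Walk backward from a result label to the raw inputs behind it.

    deps maps each computed label to the labels it is derived from;
    labels with no entry are treated as raw inputs.
    """
    inputs = set()
    stack = [result]
    while stack:
        label = stack.pop()
        if label in deps:
            stack.extend(deps[label])   # keep walking upstream
        else:
            inputs.add(label)           # reached a collected value
    return inputs

deps = {
    "cv_risk_score": ["bp_classification", "smoker"],
    "bp_classification": ["systolic_bp", "diastolic_bp"],
}
print(sorted(trace_inputs("cv_risk_score", deps)))
# ['diastolic_bp', 'smoker', 'systolic_bp']
```

When a result looks wrong, a trace like this tells you exactly which inputs to inspect, and it gives reviewers the full chain from answer to score.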

Copyright © 2026