Understanding Severity Adjusted Effect Size (SAES) in OpenFIT


What is SAES?

The Severity Adjusted Effect Size (SAES) is a core statistical measure in OpenFIT, designed to assess client progress while accounting for baseline symptom severity. It refines traditional effect size calculations, offering a more accurate reflection of therapeutic impact, especially when comparing clients who start treatment at different levels of distress.


For a deep dive into the different types of effect sizes and their advantages and disadvantages, we recommend reading "Effect size calculations for the clinician: Methods and comparability" by Seidel, Miller & Chow. The following explanation of how the Severity Adjusted Effect Size is calculated in OpenFIT makes frequent reference to this paper.

    

Why Adjust for Severity?

Standard effect size measures, like Cohen’s d, can misrepresent outcomes by not considering a client’s initial ORS (Outcome Rating Scale) score. Clients starting with higher levels of distress often show larger raw score improvements, even if their relative progress mirrors those with milder symptoms. The SAES levels the playing field, ensuring progress is measured relative to where a client started.


How is SAES Calculated in OpenFIT?

SAES is derived using the severity adjusted effect size (ESsa) method, following the guidelines in the Seidel, Miller & Chow paper. OpenFIT uses the Alternative Sample-Based Calculation method (Method 2) to compute SAES, which provides transparency and granularity in how individual client outcomes are contextualized against the reference dataset.



There are many methods of calculating effect sizes, as outlined in Seidel, Miller & Chow, such as those based on pre-treatment (intake) scores and on the pre-post difference (latest ORS minus intake ORS). OpenFIT uses the Reference Group Effect Size calculation approach.


Reference Effect Size

To calculate the effect size for a sample dataset, it is first necessary to have a pre-calculated Reference (or "grand mean") effect size. This is based on the normative dataset of hundreds of thousands of ORS scores used to create baseline expectations and predict post-treatment scores.


Reference Group Data Calculations (Steps 1–4)

These steps establish the baseline effect size using the historical/reference ORS data. The effect size for the reference dataset is recalculated at predefined intervals and updated in the OpenFIT system. A short code sketch of these steps follows the list below.

  1. Calculate Standard Deviations:
    •  For pre (intake) and post-treatment ORS scores in the reference group.

  2. Compute Slope and Intercept:
    • Derived from the reference dataset to model expected change based on baseline ORS scores.

  3. Determine Mean Pre- and Post-Treatment Scores:
    • Establishes the average ORS scores before and after treatment in the reference group.

  4. Calculate Mean Change Score:
    • The mean pre-treatment (intake) score minus the mean post-treatment score.
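
The following is a minimal, illustrative Python sketch of how Steps 1–4 could be computed from arrays of reference-group intake scores, last-session scores, and treatment durations. The function and variable names are assumptions for illustration only and do not reflect OpenFIT's internal implementation.

    import numpy as np

    def reference_group_calculations(ors_pre, ors_post, n_days):
        """Illustrative sketch of Steps 1-4 over reference-group arrays."""
        ors_pre, ors_post, n_days = (np.asarray(a, dtype=float) for a in (ors_pre, ors_post, n_days))

        # Step 1: standard deviations of intake and post-treatment ORS scores
        sd_ors_pre = ors_pre.std(ddof=1)
        sd_ors_post = ors_post.std(ddof=1)

        # Step 2: intercept and slope coefficients via ordinary least squares,
        # modelling the expected post-treatment score from the intake score
        # and the log-transformed treatment duration
        X = np.column_stack([np.ones(len(ors_pre)), ors_pre, np.log(n_days)])
        b_intercept, b_ors_pre, b_n_days_log = np.linalg.lstsq(X, ors_post, rcond=None)[0]

        # Step 3: mean pre- and post-treatment scores
        mean_pre, mean_post = ors_pre.mean(), ors_post.mean()

        # Step 4: mean change score (as described above)
        mean_change = mean_pre - mean_post

        return {
            "sd_ors_pre": sd_ors_pre,
            "sd_ors_post": sd_ors_post,
            "coefficients": (b_intercept, b_ors_pre, b_n_days_log),
            "mean_pre": mean_pre,
            "mean_post": mean_post,
            "mean_change": mean_change,
        }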


(The reference group calculations above follow page 8 of Seidel, Miller & Chow.)


Coefficients

The various coefficients used in the formulas throughout the following steps are derived from the reference dataset.


Calculating the Sample Effect Size

The sample dataset can be an individual Client's or Collateral's ORS scores, or an aggregation of all Client/Collateral ORS scores for a Clinician, Agency, Region, etc.




Step 1. Calculating Predicted Post-Treatment Scores (Step 5 in Seidel, Miller & Chow)
The predicted post-treatment score, used as a baseline for each client, is calculated using a multiple regression equation.


 


Written with the coefficients derived from the Reference Dataset, the regression equation looks like this:

Predicted_ORS_post = b_intercept + (b_ors_pre × ORS_pre) + (b_n_days_log × log(n_days))

Each term is described below, followed by a short code sketch.


  • b_intercept - Baseline ORS_post score
    • This is the baseline ORS_post score when ORS_pre = 0 and n_days = 1 (since log(1) = 0).
    • It provides the reference point for predicting ORS_post.

  • ORS_pre - The client's intake ORS score
  • b_ors_pre - Severity Adjustment Coefficient
    This coefficient, derived using Ordinary Least Squares (OLS) regression from the Reference Dataset, adjusts for baseline severity by accounting for regression to the mean. It determines the expected relationship between a client’s intake ORS score (ORS_pre) and their predicted post-treatment ORS score (ORS_post).
    • If b_ors_pre < 1, clients with higher ORS_pre scores are predicted to improve less (due to regression to the mean).
    • If b_ors_pre > 1, higher ORS_pre scores would predict greater improvement (not the case here).

      This coefficient ensures that predictions reflect typical clinical trends observed in the reference dataset.

  • log(n_days) - Treatment duration in days
    Adjusted using a logarithmic transformation to account for the impact of time in treatment.

  • b_n_days_log - Treatment Duration Coefficient
    This coefficient, derived using Ordinary Least Squares (OLS) regression from the Reference Dataset, adjusts for the effect of treatment duration on the predicted post-treatment ORS score. It represents the expected change in ORS_post for each unit increase in the natural logarithm of the number of treatment days (log(n_days)). The logarithmic transformation accounts for diminishing returns, where early sessions contribute more to improvement than later ones.
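
As a concrete illustration of Step 1, here is a minimal Python sketch of the prediction, assuming the three coefficients have already been obtained from the Reference Dataset. The function name and signature are illustrative only, not OpenFIT's API.

    import math

    def predicted_ors_post(ors_pre, n_days, b_intercept, b_ors_pre, b_n_days_log):
        """Predicted post-treatment ORS score from intake score and days in treatment."""
        # Intercept + severity-adjustment term + log-transformed treatment-duration term
        return b_intercept + (b_ors_pre * ors_pre) + (b_n_days_log * math.log(n_days))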



Step 2. Calculating the Residuals (Step 6 in Seidel, Miller & Chow)

Now that we have the Predicted Post-Treatment Score from Step 1, the next step is to compute the Residual: the difference between the client's actual outcome and the expected outcome.

c_resid = ORS_post - Predicted_ORS_post


  • ORS_post - The last-session ORS score of the particular client

  • Predicted_ORS_post - The predicted ORS score from the previous step

  • c_resid - The residual

This formula adjusts the post-treatment ORS score by accounting for baseline severity (ors_pre) and treatment duration (n_days), providing the residual which reflects the difference between actual and predicted post-treatment scores.


A positive residual indicates the client improved more than expected. A negative residual suggests the client improved less than expected.
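
Continuing the same illustrative sketch, Step 2 reduces to a single subtraction.

    def residual(ors_post, predicted_ors_post):
        """c_resid: actual last-session ORS score minus the predicted post-treatment score."""
        # Positive result: improved more than expected; negative: improved less than expected
        return ors_post - predicted_ors_post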


Step 3. Calculate the Residual Effect Size for Each Client (Step 7 in Seidel, Miller & Chow)

Now that we have c_resid (the residual score) for each client, the next step is to standardize it to calculate the Residual Effect Size:

c_r_es = c_resid / sd_ors_pre


c_resid: The residual score calculated in Step 2. Represents the difference between the client’s actual posttreatment score and their predicted score, adjusted for intake severity and treatment duration.


sd_ors_pre: The standard deviation of ORS_pre (intake scores) for all clients in the sample. This serves as the denominator to standardize the residuals, converting them into unit-free effect sizes.


c_r_es: The Residual Effect Size for each client. It tells us how much the client’s outcome deviates from the expected outcome in terms of standard deviations.
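
In the same illustrative sketch, Step 3 standardizes the residual by the intake-score standard deviation.

    def residual_effect_size(c_resid, sd_ors_pre):
        """c_r_es: the residual expressed in pre-treatment standard deviation units."""
        return c_resid / sd_ors_pre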


Step 4. Mean Residual Effect Size (Step 8 in Seidel, Miller & Chow)

Now that we have the Residual Effect Size (c_r_es) for each client, the next step is to compute the average Residual Effect Size for the clinician, agency, or whatever aggregation of data has been selected:

m_c_r_es = (Σ c_r_es) / N



c_r_es: The Residual Effect Size for each individual client, calculated in Step 3. 


N: The total number of clients in the provider's caseload (or the entire sample, depending on the level of aggregation).


m_c_r_es: The Mean Residual Effect Size, i.e. the average of the per-client residual effect sizes (equivalently, the average of the raw residuals (c_resid) divided by the pre-treatment standard deviation (sd_ors_pre)).
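
A minimal sketch of Step 4, averaging the per-client residual effect sizes across whatever sample has been selected.

    def mean_residual_effect_size(c_r_es_values):
        """m_c_r_es: the average of the per-client residual effect sizes."""
        return sum(c_r_es_values) / len(c_r_es_values)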


Step 5. The Final Step: Calculating the Severity Adjusted Effect Size (ESsa)

This is the culmination of the previous steps, where we combine the sample's mean residual effect size with the reference group's mean effect size to compute the final Severity Adjusted Effect Size (SAES):

ESsa = m_c_r_es + ESref


ESsa = Severity Adjusted Effect Size (SAES)

m_c_r_es = Mean Residual Effect Size (calculated in Step 4)

ESref = Reference Group’s Mean Effect Size, a benchmark derived from the normative dataset as described above in Reference Group Data Calculations (Steps 1–4).
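
A final sketch of Step 5. It assumes, per the Method 2 description in Seidel, Miller & Chow, that the two terms are combined by simple addition; as with the earlier sketches, the naming is illustrative only.

    def severity_adjusted_effect_size(m_c_r_es, es_ref):
        """ESsa: the sample's mean residual effect size combined with the
        reference group's mean effect size (combined here by addition)."""
        return m_c_r_es + es_ref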



SAES Formula Observations and Conclusion

This method highlights individual client deviations from expected trajectories while maintaining consistency with the normative data.

Benefits of using Method 2 from Seidel, Miller & Chow to calculate the effect size in OpenFIT:

  • Transparency: By calculating the Residual Effect Size and combining it with the Reference Group’s Mean Effect Size, users can more easily understand the specific contribution of each element.

  • Granularity: Method 2 highlights individual client deviations from expected trajectories, normalized against the reference standard.

  • Clarity in Reporting: This approach makes it simpler to explain the calculation to clinicians and stakeholders, especially when presenting outcomes in aggregate.

Nature and Size of the Reference Dataset

  • Scale: The reference group consists of hundreds of thousands of ORS scores, collected across diverse treatment settings and client profiles.

  • Diversity: The dataset includes clients with varying levels of baseline severity, treatment durations, and outcome trajectories.

  • Purpose: It provides a robust normative baseline, ensuring that individual client progress is evaluated against a statistically sound and generalizable standard.

Why Severity Adjustment Matters

  • Fair Comparisons: It allows for meaningful comparisons across clients, regardless of starting severity.

  • Outcome Accuracy: By adjusting for baseline distress, SAES provides a clearer picture of genuine therapeutic progress.

  • Benchmarking: It enables organizations to assess aggregate outcomes against normative data, supporting quality improvement and outcome-based reimbursement models.
