Inconsistent Risk Assessments in CA Jails Obstruct Realignment Accountability


California prison realignment transferred responsibility for non-serious offenders to each county’s Community Corrections Partnership (CCP). As with data collection in California’s local jails, each county chooses among multiple methods of assigning detainees risk scores and recovery paths. An arrest in one county rather than another could mean a harsher sentence, a harsher in-custody experience, and less relevant rehabilitation.

Risk and needs assessments are implemented in four key ways:

  1. Probation supervision and intervention programs
  2. Jail release decisions
  3. Pretrial release decisions
  4. In-custody program placement

The state’s situation is complicated, though: 18 of 58 counties use at least two separate risk standards to guide their assessments, dividing the four functions among specialized systems. Seven of those 18 employ a combination of three or more.

A breakdown of California risk assessment policies:

  1. STRONG (Static Risk and Offender Needs Guide): (53% of counties)
  2. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions): (15.5%)
  3. ORAS (Ohio Risk Assessment System): (15.5%)
  4. CAIS (Correctional Assessment and Intervention System): (12%)
  5. VPRAI (Virginia Pretrial Risk Assessment Instrument): (12%) [supplement to STRONG, COMPAS]
  6. LS/CMI (Level of Service/Case Management Inventory): (8%)
  7. LSI/LSI-R (Level of Service Inventory [Revised]): (7%)
  8. WRAI (Wisconsin Risk Assessment Instrument): (7%)
  9. Other: (15.5%)

STRONG and COMPAS

STRONG (Static Risk and Offender Needs Guide) and COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) dominate the field by virtue of their familiarity, simplicity, and efficiency. Both are automated, require no formal qualifications to administer, and divide their decision criteria neatly between risk assessment and rehabilitation needs. COMPAS, moreover, was billed by realignment advocates as a fresh start.

COMPAS was intended as the transition away from STRONG. Unlike STRONG, COMPAS has strict requirements for structured inmate interviews; qualification to administer remains open but requires a two-day training, and its decision criteria flex with individual case needs.

COMPAS can also evaluate itself and generate its own reports, while STRONG’s automated system is limited to case planning. COMPAS administrators can even pursue additional training in the “theoretical underpinnings” of corrections. If COMPAS was the upgrade, why do 53 percent of the state’s counties still use the outdated system while only 15.5 percent have implemented COMPAS?

ORAS and LSI-R/LS/CMI

ORAS (Ohio Risk Assessment System) and the LSI-R/LS/CMI (Level of Service Inventory-Revised, Level of Service/Case Management Inventory) are more structured, amassing detailed documentation and research on jail inmates. Like STRONG and COMPAS, ORAS prepares administrators through comparable training, but it intertwines risks and needs across six thorough sub-programs, including a pretrial risk indicator.

The LSI-R and its variations mix structured interviews, official sources, collateral records, and self-reporting to construct a 54-item compilation of assessment variables. Unlike the rest, the LSI-R and LS/CMI recommend that a “professional with advanced training in psychological assessment must assume responsibility for the instrument’s use, interpretation, and communication of results.” This ensures that decisions about an inmate’s past, present, and future are not left to unqualified staff.

CAIS

Finally, CAIS (Correctional Assessment and Intervention System) is unique in striking a balance between the simplicity of STRONG and COMPAS and the thorough criteria of ORAS and the LS variations. Like the first two, CAIS is web-based, automated, and open to any trainee, but it is also a modernized version of existing measures, each sporting 50-70 exhaustive items. CAIS’s training packet is tasked with closing the gap between inexperienced interviewers and thorough data.

-----

Inconsistent county data collection is obscuring statewide progress, and that data vacuum is apparent here. The relative efficacy of these risk assessments has yet to be formally compared, and comparing such disparate data sets could produce inaccurate results. While counties deserve the freedom to target their unique problems, the web of county strategies must be untangled before realignment can be held accountable.