The Situation
In many disability programs, a physician, psychologist, or other health professional conducts an exam with the claimant. Although it's not easy, a research study could determine an inter-rater reliability estimate¹ for these medical² examinations.
After the medical exam, lay³ adjudicators review all relevant records, including questionnaires and statements from the claimant, along with the medical examiner's report, and apply regulatory standards to reach an administrative/legal decision regarding the disability claim. Researchers could conduct a study to estimate the inter-rater reliability for these adjudicative decisions.
The medical examiner's report thus influences the adjudicative decision, at least to some extent; in most systems, it influences that decision substantially. As a result, the two inter-rater reliability estimates are not independent.
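To make that dependence concrete, here is a back-of-the-envelope simplification of my own (continuous scores, a linear two-stage chain, independent errors at each stage), not a description of any program's methodology. Under those assumptions, the reliability of the end-to-end process is roughly the product of the two stage reliabilities:

$$\rho_{\text{process}} \;\approx\; \rho_{\text{exam}} \times \rho_{\text{adjudication}}$$

So an exam reliability of 0.80 combined with an adjudicative reliability of 0.90 would put the end-to-end reliability near 0.72. Real disability decisions are categorical, so kappa-type coefficients won't multiply this cleanly, but it suggests why the two estimates can't simply be read separately.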
My Question
What is the best way (or ways) to calculate the overall reliability of such a disability determination process?
Brief Background
I'm asking this question because some organizations conduct research on the inter-rater reliability of the adjudicative decisions and describe the results as "the accuracy of our disability determination decisions". The unspoken assumption is that the inter-rater reliability of the medical examinations is 1.00.
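To see why that assumption matters, here is a toy simulation sketch (mine, purely illustrative; the noise levels, cutoff, and two-rater design below are invented). It compares the agreement measured between two adjudicators who read the same exam report, which is what the typical inter-rater study captures, with the agreement between two fully independent exam-plus-adjudication chains:

```python
# Toy Monte Carlo sketch: adjudication-only agreement vs. end-to-end agreement.
# All parameter values are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_claims = 100_000

# Latent "true" severity of each claim.
severity = rng.normal(size=n_claims)

def exam(severity, noise_sd=0.6):
    """One examiner's report: true severity plus examiner-specific error."""
    return severity + rng.normal(scale=noise_sd, size=severity.shape)

def adjudicate(report, noise_sd=0.4, cutoff=0.5):
    """One adjudicator's allow/deny decision based on the case file."""
    score = report + rng.normal(scale=noise_sd, size=report.shape)
    return (score > cutoff).astype(int)

# (a) Typical IRR study design: both adjudicators see the SAME exam report.
shared_report = exam(severity)
kappa_adjudication_only = cohen_kappa_score(
    adjudicate(shared_report), adjudicate(shared_report)
)

# (b) End-to-end replication: each chain gets its own examiner AND adjudicator.
kappa_full_process = cohen_kappa_score(
    adjudicate(exam(severity)), adjudicate(exam(severity))
)

print(f"kappa, adjudication given a shared exam report: {kappa_adjudication_only:.2f}")
print(f"kappa, full exam + adjudication process:        {kappa_full_process:.2f}")
```

In runs like this the end-to-end kappa comes out noticeably below the adjudication-only kappa, which is exactly the gap that gets hidden when adjudicative inter-rater reliability is reported as the accuracy of the whole process.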
Notes
1. I wasn't sure if "coefficient" would be a better term than "estimate".
2. Most disability programs refer to all exams as "medical" even though some are actually psychological, audiological, etc.
3. lay, adj. - Not of or belonging to a particular profession; nonprofessional.
Prior research
I searched SE-Math but found just one related thread: combined reliability
I reviewed the following texts and tutorials (though I may well have missed something relevant).
Gwet, Kilem L. Handbook of Inter-Rater Reliability. 4th ed. Gaithersburg, MD: Advanced Analytics, 2014.
Khan Academy. Statistics and Probability.
Stat Trek. Statistics and Probability.
Trochim, William M., James P. Donnelly, and Kanika Arora. Research Methods: The Essential Knowledge Base. 2nd ed. Boston: Cengage Learning, 2016.