Welcome!

Welcome to my blog, a place to explore and learn about the experience of running a psychiatric practice. I post about things that I find useful to know or think about. So, enjoy, and let me know what you think.


Tuesday, July 23, 2013

These and These are the Words

I recently received this email from the NY State Psychiatric Association:

Just published field-trial research on DSM-5 shows that in routine clinical practice the diagnostic criteria are viewed as easy to understand and use by both clinicians and patients. This follows field trials in high-volume academic centers that found high reliability of the criteria in the revised manual. 

As part of the process that tested the more-dimensional approach that characterizes DSM-5, APA’s Division of Research conducted a series of field trials in settings of routine clinical practice. Researchers recruited a sample of 621 psychiatrists, clinical psychologists, social workers, counselors, and other licensed mental health professionals. Each participant was then asked to report on at least one randomly selected active patient. Data were provided on 1,269 patients, who also answered study questions. 

The clinicians reported that the revised criteria were generally easy to use and were useful in patient assessment, said principal investigator Eve Mościcki, Sc.D., M.P.H., and colleagues today in Psychiatric Services in Advance. “The clinicians put in a lot of effort collecting data on their patients,” she told Psychiatric News. Large majorities favored the overall feasibility of the DSM-5 approach, the clinical utility of the DSM-5 criteria, and the value of its cross-cutting measures. 

“These trials indicate that the DSM-5 approach works in a wide range of practice settings and a wide range of clinical settings,” said Mościcki. 

This sounds pretty good, doesn't it? But as it happens, I also recently finished reading Gary Greenberg's Book of Woe. And while I disagree with Mr. Greenberg on a number of points regarding the nature of psychiatry, I believe his data are accurate, especially those having to do with his own experience as a field trial participant.

In a press release on October 5, 2010, the APA announced that its field trials had started. There were to be two types of trials. The first, conducted across 11 academic centers, had two clinicians evaluate the same patient; both evaluations would be videotaped and viewed by a third clinician to establish reliability.

The second was the RCP, or Routine Clinical Practice, trial, in which private clinicians would evaluate two patients, then re-evaluate them a couple of weeks later. The data would then be compared and sent back to the work groups, who would tweak the criteria and return them to the clinicians for a second round of field trials.

But as of October 5th, the trials themselves hadn't actually started; only a pilot study for the academic center trials had. The data from the pilot study needed to be analyzed, the methodology modified, and the clinicians trained before the field trials could begin in earnest. The academic center trials actually began between December 2010 and March 2011.

At the end of 2011, two months before the data had to be in, 5,000 clinicians had signed up for the RCP trial, 1,000 had started the training, 195 had completed the training, and 70 patients had been enrolled. The goal was 10,000 patients.

Mr. Greenberg's description of the field trial diagnostic interview demonstration at the APA meeting is one of the funniest things I've ever read. William Narrow, the psychiatrist in charge of research for the DSM-5, bungles his way through a clunky computerized interview, most of which is irrelevant to the fake patient's description of her problem, and this takes place after she has already entered her data extensively on her own section of the computerized interview.

There follows a debacle in which Dr. Narrow runs out of time, and then forgets to save his painfully acquired data. The conclusion, obvious from the initial description of extensive hoarding, is Hoarding Disorder. The audience is then asked their opinion on whether the criteria are an improvement over DSM-IV criteria, despite the fact that Hoarding Disorder doesn't exist in DSM-IV.

At the end, the question of reliability was raised by an audience member, Michael First, a prominent participant in DSM-IV who was denied a position on DSM-5. Dr. First wanted to know how to tell whether diagnostic discrepancies are the result of the criteria or of clinician style.

The answer provided had to do with Cohen's Kappa, a statistical measure of reliability introduced in the DSM-III. A Kappa of 0 indicates that agreement is no better than chance alone; a Kappa of 1 indicates perfect agreement. For the DSM-III, a Kappa of 0.40 was considered poor, and a Kappa of 0.80 was considered high. The same day as Dr. Narrow's demonstration, Helena Kraemer, chief statistician for the DSM-5, said that a Kappa of between 0.20 and 0.40 would be considered acceptable. In other words, the DSM-III reliability figures were inflated, so it was a good thing that the DSM-5 reliability would be much lower.
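To make those Kappa numbers concrete, here is a minimal sketch (in Python, using made-up ratings, not actual field-trial data) of how Cohen's Kappa corrects raw agreement for chance:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's Kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    # Observed agreement: fraction of patients given the same diagnosis.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement: probability of agreeing by chance alone,
    # based on each rater's marginal diagnosis frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[dx] * c2[dx] for dx in c1.keys() | c2.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two clinicians diagnose the same 10 patients (hypothetical data):
a = ["MDD", "MDD", "MDD", "MDD", "GAD", "GAD", "GAD", "GAD", "MDD", "GAD"]
b = ["MDD", "MDD", "MDD", "GAD", "GAD", "GAD", "GAD", "MDD", "MDD", "GAD"]
print(round(cohen_kappa(a, b), 2))  # raw agreement is 8/10, but Kappa is 0.6
```

Note that the two clinicians here agree on 8 of 10 patients, yet the Kappa is only 0.6: a value that DSM-III standards would place between poor (0.40) and high (0.80), and that the proposed DSM-5 threshold of 0.20 to 0.40 would rate as excellent.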

This is the APA's definition of, "...high reliability of the criteria in the revised manual."

I want to give you a taste of what Gary Greenberg's experience as a field trial participant was like. (This is from Kindle location 4635).

He sat with the patient, in front of his computer, for several hours, plowing through 49 pages of questions on mood disorders, 31 pages of questions on anxiety disorders, and 63 pages of questions on substance disorders.

He then had to rate the patient's responses on a 0-4 scale of severity:

Here I was given a choice. I could “proceed directly to rating” and pull a number out of the air, or I could get a “detailed description of levels.” I went for the details, which turned out to be extensive: 3 pages of descriptions about “identity” and “self-direction” and “empathy” and “intimacy.” Was she a Level 2, with “excessive dependence on others for identity definition, with compromised boundary delineation”? Or did she have the “weak sense of autonomy/agency” and “poor or rigid boundary definition” of a Level 3? Or was her experience of autonomy/agency “virtually absent” and her boundaries “confused or lacking,” which earned her a Level 4? Was her self-esteem “fragile” (Level 3) or merely “vulnerable” (2), or perhaps riddled with “significant distortions” and “confusions” (4)? Was her capacity for empathy “significantly compromised,” “significantly limited,” or “virtually absent”? Was her desire for connection with others “desperate,” “limited,” or “largely based on meeting self-regulatory needs”?

I had no idea. And even if I had, or if I knew how to get this confused and confusing woman to parse it for me, there still loomed thirty pages or so to get through, box after box to check about her self and interpersonal functioning, her separation insecurity and depressivity, her negative affectivity and disinhibition, the types and facets and domains of her traits, hundreds of boxes, or so it seemed, before I could make my final diagnosis, and, with the authority vested in me as a Collaborating Investigator of the American Psychiatric Association, determine which of the constructs that deserve neither denigration nor worship, that aren't real but still can be measured from zero to four, that need to be taken seriously enough to warrant payment and maybe a round of medication but not so seriously that anyone would accuse them of existing, which fictive placeholder would join her height and blood pressure and her childhood illnesses and surgeries and all the other facts of her medical life. At which point I realized that no matter what diagnosis I settled on, I wouldn’t so much have tamed her rapids as funneled them into the diagnostic turbines, raw material for the APA’s profitable mills.

This is the APA's definition of, "...easy to understand and use by both clinicians and patients."

When I first got the email, I forwarded it to Mr. Greenberg, with a note stating that I thought he would appreciate it. He was kind enough to reply, and wrote, "If it weren't so sad, it would be hilarious." I have to agree.