Pages

Sunday, June 30, 2013

Free Diagnostic Coding Reference - Yay!

This is so great!

You have to check out this ICD coding reference.

These techno-angels have created a free database of ICD-9 codes, including those for 2013.

We've taken the official 2006-2013 ICD-9-CM and HCPCS coding books and added 5.3+ million hyperlinks between codes. Combine that with a Google-powered search engine, drill-down navigation system, and instant coding notes, and it's easier than ever to quickly find the medical coding information you need.

They're supported by Google Adsense, so it's totally free to use. They'll even convert ICD-9 codes to ICD-10 codes. And they were happy to have me link to their site. You gotta love these guys.

What would be super-great is if they wrote some code to convert DSM-IV codes to ICD-9 codes. Maybe I'll write to them about it.
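For what it's worth, the core of such a converter is just a crosswalk lookup table. Here's a minimal Python sketch; the table entries and the `convert` helper are my own illustration, not anything the site publishes (and DSM-IV actually borrows its numeric codes from ICD-9-CM, which is why these example pairs look identical):

```python
# Hypothetical sketch: a DSM-IV -> ICD-9 converter is, at bottom, a lookup table.
# These two entries are illustrative only; a real crosswalk would need to be
# built from the official coding manuals.
DSM_IV_TO_ICD9 = {
    "296.23": "296.23",  # Major Depressive Disorder, single episode, severe
    "300.02": "300.02",  # Generalized Anxiety Disorder
}

def convert(dsm_code):
    """Return the matching ICD-9 code, or None if the code isn't mapped."""
    return DSM_IV_TO_ICD9.get(dsm_code)

print(convert("300.02"))  # 300.02
print(convert("111.11"))  # None (not in the table)
```

The hard part, of course, isn't the code; it's building and maintaining an accurate table.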

And for future reference, I've added a link to their site under "Useful Links", to the right.

Friday, June 28, 2013

Op-Ed Follow Up

In my last post, How Does This Help?, I linked to an invitation to a dialogue in the NY Times. Responses were supposed to be submitted by Thursday, with publication in the Sunday Dialogue.

As a reminder, the topic is Gender Identity.

I'm doing a little experiment here, to see if I can predict some of the responses. Note the date on the post. This was not written after the fact.


1. I/my child suffered from gender dysphoria, and treatment was helpful.

2. I/my child suffered from gender dysphoria, and treatment was harmful.

3. Gender dysphoria is complicated, and Dr. Drescher did a good job explaining the controversy about treatment.

4. Gender dysphoria is complicated, and Dr. Drescher did a bad job explaining the controversy about treatment.

5. Psychiatry/Psychiatrists is/are evil.

6. I'm a therapist who treats gender dysphoria, and my method of treatment is the best.

The Times likes to present a "balanced" view. Let's see what happens.


Wednesday, June 26, 2013

How Does This Help?

In yesterday's NY Times, there's an Invitation to a Dialogue about Gender Identity, by Jack Drescher, with a request for responses from readers by tomorrow.

Dr. Drescher writes about Coy Mathis, a 6-year-old, born a boy, now identifying as a girl. He discusses Gender Identity, and the fact that it is unclear what, if anything, to do with or for young Coy, or similar children.

He admonishes, "Currently experts can’t tell apart kids who outgrow gender dysphoria (desisters) from those who don’t (persisters), and how to treat them is controversial."

And finally, he offers some advice:

I would advise parents to learn all they can about the different approaches so they can understand the limitations and how they are sometimes guided by personal beliefs about gender rather than by good research data. 

Is this supposed to be helpful? How? What kind of dialogue does the Times expect this to generate?

It's like saying, "No one knows what to do, so as the expert, I'm advising you to educate yourselves." What's the point of being an expert?

I'm not implying Dr. Drescher ought to know what to do, or that parents ought not to educate themselves. But if your best advice is, "Study up!", then you don't need to write about it in the Times.

So why am I writing this post? Not sure. Maybe it's to point out the hype anything related to DSM-5 gets. Or maybe I'm annoyed by the final blurb:

The writer, a psychiatrist and psychoanalyst, served on the D.S.M.-5 Workgroup on Sexual and Gender Identity Disorders. He is co-editor of “Treating Transgender Children and Adolescents.”

Oh, so that's the advice! If you want answers, buy my book!

An unpaid ad in the NY Times.



Wednesday, June 19, 2013

More on MOC

In the May issue of The Carlat Report, right under the article I wrote, is an article about Maintenance of Certification, by Dr. James Amos. It's informative and well written, but just as importantly, it links to his blog, The Practical Psychosomaticist, which has some wonderful posts and resources.

He has a particular interest in getting rid of the MOC requirements, very much in line with my own feelings about MOC (See Alphabet Soup). He thinks the exam and PIP are unnecessary wastes of time and money, but that lifelong learning should be encouraged in other ways.

I want to make people aware of some links and resources, for those who think MOC requirements are, well, extortion that may soon be linked to maintenance of licensure:

1. The Change Board Recertification site. The link will take you, not to the homepage, but to the page with a list of Boards' tax returns. The ABPN reported $46.7 million in total assets and $27.6 million in gross receipts in 2009. Links to Form 990s are included.

2. Dr. Amos' petition: Dr. Amos' resolution to uphold lifelong learning in the continuous improvement of patient care and to oppose MOL was approved by the Iowa Medical Society House of Delegates in April.

3. The lawsuit: The Association of American Physicians & Surgeons (AAPS) filed suit April 23, 2013 in federal court against the American Board of Medical Specialties (ABMS) for restraining trade and causing a reduction in access by patients to their physicians. The ABMS has entered into agreements with 24 other corporations to impose enormous “recertification” burdens on physicians, which are not justified by any significant improvements in patient care.

Monday, June 17, 2013

Quiet RIAT

RIAT stands for "Restoring Invisible and Abandoned Trials."

I don't know if it's supposed to be pronounced like "riot", or like "ree-at".

But here's the idea.

Drug companies conduct studies to get new drugs approved, and to convince people to prescribe or request those drugs once they are approved. Do the studies tell the whole story? No. Do the studies always report all the truth about the drug in question, including side effects? No. Do the studies sometimes juke their stats to make their drugs look good? Yes. Are all relevant studies published? No.

It amazes me that in an age when the government routinely spies on random dot citizen with impunity, when everyone is talking about how privacy has gone the way of the Edsel, somehow, Big Pharma has managed to retain its secrets.

But not all its secrets.

Due to things like technology and lawsuits, lots of data are now available to the general public. Specifically, the pharmaceutical industry produces Clinical Study Reports (CSRs), which are hundreds or even thousands of pages long, and include "...an unabridged and detailed summary of the planning, conduct, and results of a clinical trial...Manufacturers submit clinical study reports to the US Food and Drug Administration as part of applications for new drugs. In addition, the FDA typically also requires submission of the protocol and individual participant data." (Same link as above).

So when a drug company releases its CSR and data to the public realm, whether voluntarily or by legal requirement, all the information necessary to publish the study becomes available. To the whole wide world.

Now along comes RIAT, with access to 178,000-some pages of data, and says, loud and clear:

HEY! Big Pharma! You have 365 days to republish the study, accurately and fully, so the world can judge the results for itself, Yo! And if you don't republish the study within a year, we'll publish it for you, using your publicly available data, and our own judgment as to what really went on, and what to conclude from the results.

Actually, it's the BMJ, so they say it a lot more politely.

I like a lot of things about this idea. It dovetails nicely with my Open Source post. It holds pharmaceutical companies accountable for their products. It's clever. And it openly acknowledges the problems that may arise, and it accepts that there are many questions to be asked, and not always clear answers:



There's a lot more to it, and the article has a number of links to free papers with titles like, Rethinking credible evidence synthesis, and, Evidence b(i)ased medicine—selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications.

Pretty interesting. Check 'em out!

Must-Read Article

One of my favorite lines from the Harry Potter books is the neon sign in the window of Fred and George's joke shop:

Why are you worried about You Know Who?
You should be worried about You-No-Poo!
The constipation sensation that's sweeping the nation.


Take a break from worrying about DSM and SAFE and I-STOP and E/M coding.

Here, read this amazing article, and start worrying about EHRs.


Thursday, June 13, 2013

What, Exactly, Is HIPAA?

The term "HIPAA" gets thrown around in relation to patient confidentiality, so I thought it might be useful to clarify exactly what HIPAA is.

My understanding, at least for private practice, has been that you're a HIPAA Covered Entity if you bill electronically. End of story. Here's a government chart to corroborate my opinion.

Specifically:

I don't bill electronically, therefore, I am not a HIPAA Covered Entity.

Now, if you do bill electronically, or work with a billing service, then you are a HIPAA Covered Entity. But again, you are restricted in terms of the billing, not other areas.

This is a link to a useful fact sheet from HHS. Some key points:

Health Insurance Portability and Accountability Act (HIPAA) does not require patients to sign consent forms before doctors, hospitals, or ambulances can share information for treatment purposes.

So you can share patient information with other health care providers without the patient's consent.

I think it's still a nice idea to get consent, anyway, but you are not in violation of HIPAA if you don't have it.

HIPAA does not cut off all communications between providers and the families and friends of patients

You can share needed information with family, friends, or caregivers, as long as the patient doesn't object. And if the patient is unable to indicate a preference, you can do what you think is best.

So let's get something straight once and for all. HIPAA is not a catchall term that describes all legal issues surrounding patient privacy and confidentiality. It's about billing electronically.

The next time you hear someone try to reassure you by saying he or she is restricted from revealing patient information because of HIPAA, remember: Uh-uh.

If you're a HIPAA covered-entity, you need to give your patients forms indicating how their information may be used. And you need to make a good faith effort to get them to sign a form acknowledging receipt of these forms. What you don't need to do is get the patient to consent to the release of information. And in fact, these forms are more about the ways in which the patient's information lacks privacy.

The sample Patient Privacy Notice from NYU Langone includes the following ways a patient's health information may be used, without consent:

Treatment
Payment
Business Operations
Appointment Reminders
Fundraising
Education
Business Associates
Electronic Communications
Research
Public Need

Doesn't sound all that private to me.

There are non-HIPAA laws regarding patient privacy, doctor-patient privilege, and confidentiality, and these vary by state. I'll go into more detail in a future post.

But for now, PLEASE read Shrink Rap's post on KevinMD for a description of what can happen to privacy under the auspices of "HIPAA".



Wednesday, June 12, 2013

DSM-5 in Action

Last week, I saw a new patient who was referred to me by her therapist for evaluation for med management, specifically, for depression.

The therapist wasn't certain the patient needed meds. And the patient wasn't sure she needed, or wanted meds.

She was suffering; that was the only certain thing.

I'm a decent psychopharmacologist, despite the fact that I mostly do therapy and analysis. The reason I'm good with meds is not because I'm an expert on the Cytochrome p450 system, or I'm the first shrink on the block to prescribe a brand new med. I believe it's because I listen to my patients, so important feelings/experiences/symptoms don't get overlooked, or boxed into neat little 15-minute med check categories that don't really fit.

So I listened to this patient, and while I was listening, I started thinking about everything I've learned regarding DSM-5. Like, maybe Major Depression, as defined by the DSM, doesn't really exist. (Note: I am NOT saying I don't believe depression exists. I'm just referring to the DSM definition of depression). And even if it does exist in DSM form, if I run through the checklist, and the patient meets five of the nine criteria, does that imply that she'll benefit from medication? And if she doesn't meet 5/9, does that mean she won't?

And what about the meds? What exactly am I treating? And how? A world-outlook? A traumatic childhood loss? If I can't be sure there's a disease process going on, and I can't be sure which aspect of the disease, if such it is, I'm treating, and no one even knows how the meds work, then why would I medicate?

I thought about how easy it would be to say to her, "You meet criteria for MDD, here's a pill, take it and you'll feel better." But I just couldn't bring myself to do that. I didn't believe it would help.

This is not the first patient I've sent home prescription-less. But it's the first time I've thought about it this way. In the past, I might've said to myself, the patient doesn't meet criteria, and therefore doesn't have the disorder in question, and consequently won't benefit from medication for this condition.

But now I'm reminded of the joke about the philosophy exam question, asking students to describe the physical characteristics of the chair at the front of the room. One student's response: What chair?

What disorder?

I came up with the following analogy: Suppose I decide that people who have more than 5 bad hair days per month carry a diagnosis of BHDD (Bad Hair Day Disorder). Am I describing an entity? Yes, people who have too many bad hair days. Does that make it a disorder, or disease?

I'm not sure the analogy is valid. At least some of the DSM diagnoses have a basis in clinical experience.

The truth is, and I realized this while I was writing, that I do find the DSM criteria for MDD useful, as a line of questioning.

This patient did not fit into any nice little DSM category. And I didn't try to make her fit. My thinking was, let's explore what she's been experiencing, using DSM criteria as a starting point. And that was useful.

I haven't purchased a copy of DSM-5. I don't want to. And I resent that, despite all the protests to the contrary, it remains the "bible" of psychiatry, and decisions are made based on its contents: reimbursement decisions, legal decisions.

But I do wish it could be what it professes to be: a guideline. Then it could sit on my bookshelf with all the other books I use as references and guidelines, not with pride of place, and not with shame, either.

Friday, June 7, 2013

Learning from Diabetes

In a recent post, I wrote about how 126 became the cutoff for diabetes. It turns out that it was my fantasy about how 126 became the cutoff for diabetes. In response to the post, I got an email with a link to an article about the diagnosis of DM, which is, in the words of the person who sent it, "eerily like stuff going around re the DSM." I can't vouch for the accuracy of the article, but I'll summarize briefly.

A long time ago, back in the '70s, there were multiple standards for diagnosing diabetes. The reason for the multiple standards was that if you graph the sugars of a varied population at any given time, some will be elevated, but those don't necessarily correspond to the people who have diabetes. Additionally, the graph never "jumps", so there's no clear cutoff point.

And at the time, there were limited treatments for diabetes, and essentially nothing to keep early type 2 from progressing. In addition, diabetes was quite stigmatized, and people with a diagnosis of DM could be refused health insurance, life insurance, employment, even a driver's license.

In 1978, the NIH convened a committee to establish a definition of diabetes, and the committee decided to place the cutoff higher than any standard had heretofore done, so that only people who unequivocally had DM would be given the diagnosis, and people who couldn't be helped anyway, such as early type 2 diabetics, would be spared the stigma, and its practical consequences.

And since a graph of a general population did not have a clear cutoff point for DM, the committee looked at a subculture, the Pima Indians, whose graph did make a jump. Those Pima Indians whose Oral Glucose Tolerance Test was under 200 showed no symptoms of retinopathy, and those who did show signs of retinopathy had OGTTs over 240.

Then the committee decided to put the cutoff for fasting glucose at 140, higher than that of the typical diabetic Pima Indian, whose fasting glucose would hover around 120. Presumably this was done because at the time, OGTT was the test expected to be used to make a diagnosis, not fasting glucose.

In 1995, another committee was convened to re-examine the decision of the 1978 committee. This committee decided to use fasting glucose as the diagnostic test, presumably because it was cheaper, and it used a cutoff of 126, even though 121 seems to correspond with an OGTT of 200. They went with the highest number they could find in any study, specifically, a study of 13 Pacific populations.

There's more to the saga, but I'm gonna stop here with the diabetes. The take-home lesson for me is, Psychiatry is not such an outlier.

There are groups of people, including Gary Greenberg and his Book of Woe, who claim that Psychiatry is not as scientifically based as other medical specialties. Then there are other groups that claim it is scientific. Well, it appears to be at least as scientific as endocrinology.

A known disease entity, no one definitive diagnostic system, definition of disease determined by committee based on dubious scientific conclusions, the political stance not to further stigmatize people suffering from the disease, and a subsequent committee that examined the problems with the first committee's decisions, and then went on to make its own, new mistakes.

There is nothing new under the sun. A generation passes, and the world remains the same.

Read the article. It'll spook you. Even if it isn't accurate, it's exactly the same kind of rhetoric taking place now, about DSM-5.

Wednesday, June 5, 2013

Show Me The Science

I got this email yesterday:

NYSPA E-ALERT:  CONTACT YOUR LEGISLATOR TODAY REGARDING AUTISM BILL

All members are strongly encouraged to contact their local Assembly member or Senator today to oppose the passage of S.3044-A (Carlucci)/A.1663-A (Abinanti), a bill that would amend the Insurance Law and the Mental Hygiene Law to codify the DSM-IV definition of autism as the official definition of autism for the purposes of New York State law.  The proponents of the legislation seek to freeze the definition of autism because they are fearful that the new definitions in DSM-5 may diminish or eliminate eligibility for special education services in schools and/or health insurance coverage for community services.  This is simply not true and would be an improper intrusion of the Legislature into the realm of medical science.  Medical professionals must have the ability to update and revise clinical diagnoses according to new scientific evidence and advances in medicine. 


I'm gonna ignore, for now, the topic of what role the government should play in medicine, and focus on that last sentence: Medical professionals must have the ability to update and revise clinical diagnoses according to new scientific evidence and advances in medicine. 

Nu, so show me the science.

According to the DSM-5 description of the changes in criteria for Autism Spectrum Disorder:

The DSM-5 criteria were tested in real-life clinical settings as part of DSM-5 field trials, and analysis from that testing indicated that there will be no significant changes in the prevalence of the disorder. More recently, the largest and most up-to-date study, published by Huerta, et al, in the October 2012 issue of American Journal of Psychiatry, provided the most comprehensive assessment of the DSM-5 criteria for ASD based on symptom extraction from previously collected data. The study found that DSM-5 criteria identified 91 percent of children with clinical DSM-IV PDD diagnoses, suggesting that most children with DSM-IV PDD diagnoses will retain their diagnosis of ASD using the new criteria. Several other studies, using various methodologies, have been inconsistent in their findings. 

There didn't seem to be any reference or link to the "real-life clinical settings", and it seems like, if a "real-life" study had been done, it would have been published, pending publication, or at least cited. But I did check out the Huerta paper, published in October 2012. Here's the method:

Three data sets included 4,453 children with DSM-IV clinical PDD diagnoses and 690 with non-PDD diagnoses (e.g., language disorder). Items from a parent report measure of ASD symptoms (Autism Diagnostic Interview-Revised) and clinical observation instrument (Autism Diagnostic Observation Schedule) were matched to DSM-5 criteria and used to evaluate the sensitivity and specificity of the proposed DSM-5 criteria and current DSM-IV criteria when compared with clinical diagnoses.

I don't really understand what was done, and I'm not paying for the full article to figure it out.

These were the results:

Based on just parent data, the proposed DSM-5 criteria identified 91% of children with clinical DSM-IV PDD diagnoses. Sensitivity remained high in specific subgroups, including girls and children under 4. The specificity of DSM-5 ASD was 0.53 overall, while the specificity of DSM-IV ranged from 0.24, for clinically diagnosed PDD not otherwise specified (PDD-NOS), to 0.53, for autistic disorder. When data were required from both parent and clinical observation, the specificity of the DSM-5 criteria increased to 0.63.

Okay, sensitivity 91% or less ("remained high"), specificity 0.53 to 0.63. I'm a little confused. You have 100 patients who were diagnosed with DSM-4 PDD, and 91 of them were diagnosed with DSM-5 ASD. Normally, you would use a sensitivity of 91% to establish that the new standard you're proposing is adequate, since it's almost as good as the old standard. So the diagnostic standard you're comparing to is DSM-4, and you're saying that since DSM-5 is almost as good as DSM-4, it's better than DSM-4.
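The arithmetic behind those two terms is simple, which makes the sleight of hand easier to see. Here's a quick sketch; the 2x2 counts below are invented to match the reported percentages (the paper doesn't publish this table):

```python
# Sensitivity and specificity are computed against a "reference standard" --
# which, in this study's design, is the DSM-IV clinical diagnosis itself.
# The counts below are made up to match the reported percentages.

def sensitivity(true_pos, false_neg):
    # Of the reference-standard cases, what fraction do the new criteria catch?
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Of the reference-standard non-cases, what fraction is correctly excluded?
    return true_neg / (true_neg + false_pos)

# 100 children with a DSM-IV PDD diagnosis; DSM-5 criteria flag 91 of them:
print(sensitivity(91, 9))   # 0.91
# 100 children without PDD; DSM-5 criteria correctly exclude 53:
print(specificity(53, 47))  # 0.53
```

Note what sits in the denominators: both numbers measure agreement with DSM-4, nothing more. So a 91% sensitivity can show that DSM-5 is almost as inclusive as DSM-4, but it can't show that DSM-5 is better.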

What would make sense is if there were a third standard, the Autism Standard, which was the basis for diagnosing Autism. If DSM-4 had, say, 80% sensitivity, and DSM-5 had 91% sensitivity, compared with the Autism Standard, then you could conclude that DSM-5 was more sensitive than DSM-4.

Alternatively, if they're saying that DSM-5 only picked up 91% of cases because 9% of DSM-4 cases are inaccurately diagnosed, then you need to question the basis for DSM-4 criteria, so you can't use it as a standard to compare DSM-5 to.

To belabor the point: Suppose you're testing lexapro vs. prozac, and comparing both to nortriptyline. If lexapro helped 80% of patients who were helped by nortriptyline, and prozac helped 95% of patients helped by nortriptyline, you could reasonably conclude that prozac works better than lexapro. But if you compare lexapro directly with prozac, and you find that lexapro helps 91 of the 100 patients that were helped by prozac, you can't conclude that lexapro works better than prozac.
And if you then claim that lexapro does work better than prozac because prozac didn't really help all the 100 people it claims to have helped, then you have no idea what prozac does and doesn't do, so what does it mean to say lexapro works better?

See what I'm getting at?

You could argue that DSM-5 has better specificity than DSM-4, but the whole concern is that people who have a DSM-4 diagnosis of PDD won't all have a DSM-5 diagnosis of ASD, so some will lose needed services/treatment. So the concern is about missing a real case, not misdiagnosing someone who doesn't have PDD. In other words, specificity isn't the issue.

Moving right along.

I looked up some of those "other studies" that have been "inconsistent in their findings".

This study, by McPartland, et al, published in April 2012, found these results:

When applying proposed DSM-5 diagnostic criteria for ASD, 60.6% (95% confidence interval: 57%-64%) of cases with a clinical diagnosis of an ASD met revised DSM-5 diagnostic criteria for ASD. Overall specificity was high, with 94.9% (95% confidence interval: 92%-97%) of individuals accurately excluded from the spectrum. Sensitivity varied by diagnostic subgroup (autistic disorder = 0.76; Asperger's disorder = 0.25; pervasive developmental disorder-not otherwise specified = 0.28) and cognitive ability (IQ < 70 = 0.70; IQ ≥ 70 = 0.46).

The study concludes that:

Proposed DSM-5 criteria could substantially alter the composition of the autism spectrum. Revised criteria improve specificity but exclude a substantial portion of cognitively able individuals and those with ASDs other than autistic disorder. A more stringent diagnostic rubric holds significant public health ramifications regarding service eligibility and compatibility of historical and future research.


Another study found lower sensitivity and greater specificity, with sensitivity improving, although still less than DSM-4, if one DSM-5 criterion was relaxed.

Yet another study found lower sensitivity.

Let's see. We have "real-life" clinical settings without a linked study. We have one cited study demonstrating that some of the people with DSM-4 PDD diagnoses would lose those diagnoses under DSM-5, we have a statement that, "This is simply not true," and we have a sneaky little sentence dismissing, and not citing, those "other studies".

No, I don't think I will be contacting my local assembly member or senator.