
Tuesday, July 30, 2013

I-STOP 2.0

Here's an update on I-STOP, the NY State law that's intended to better control drug diversion. Basically, it describes exceptions to the requirement to check the Prescription Monitoring Program Registry (PMP), including prescribing in a hospice, or actually administering the controlled substance. It also describes the requirements for assigning a designee to check the PMP for you.

To review, starting on August 27th, if you prescribe a schedule II, III, or IV substance, you need to check the PMP registry no more than 24 hours prior to writing the prescription. You need to document that you checked it (yes, more required documentation imposed by an external source), or you need to document why you didn't check it, such as a power outage.

For reference, see this post for a slightly wonky description of how to access the PMP registry.

And here are lists of:

Schedule II Drugs
Schedule III Drugs
Schedule IV Drugs





And here are some songs with the word "stop" in the title:

Stop in the Name of Love
I'll Stop the World and Melt with You
Don't Stop Believin'
Can't Stop Fallin' Into Love
Can't Stop This Thing We Started
Don't Stop Me Now
Who'll Stop the Rain
Bus Stop
Don't Stop
Don't Stop Til You Get Enough
Nothing's Gonna Stop Us Now
Stop Your Sobbing
Stop

Sunday, July 28, 2013

Statistically Writing: Variance and Standard Deviation

I hope people weren't too annoyed by my previous statistics post about measures of central tendency. But it's important to really understand the concept of a mean, and its implications for research, before moving on to bigger and better things. This time around, we're going to look at measures of dispersion.


Say you have a set of data points, and you've figured out the mean for that set. You might then want to know how far each data point is from the mean. If you subtract the mean from each data point and take the absolute value, you get exactly that: the distance of each point from the mean.

Consider teenagers. You have a group of 5 teens, and each spends a certain number of hours per day on Facebook:

T1=3; T2=5; T3=2; T4=6; T5=2

If you calculate the mean here, you get: 3.6. So, on average, each teen spends 3.6 hours per day on Facebook.

Now suppose you want to know how close or far from average each kid's time on Facebook is (Why? To see if your kid is a freak):

T1: |3-3.6|= 0.6
T2: |5-3.6|= 1.4
T3: |2-3.6|= 1.6
T4: |6-3.6|= 2.4
T5: |2-3.6|= 1.6

Well, that's nice, but notice, you have another data set here, for which you can also find the mean. This is called the Mean Absolute Deviation. In this case, it's equal to 1.52 hours.
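
If you'd rather let a computer do the subtracting, here's a minimal Python sketch of the same calculation (a toy example I wrote for this post, nothing official):

    # Mean Absolute Deviation for the Facebook-hours example.
    hours = [3, 5, 2, 6, 2]

    mean = sum(hours) / len(hours)               # 3.6
    deviations = [abs(x - mean) for x in hours]  # 0.6, 1.4, 1.6, 2.4, 1.6 (give or take floating point)
    mad = sum(deviations) / len(deviations)      # 1.52

    print(mean, deviations, mad)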


But Mean Absolute Deviation is not Variance. Variance, denoted by sigma squared, is actually the squares of each of these numbers, averaged out:

σ² = Σ(xᵢ − μ)² / N

So here, the Variance = (0.6² + 1.4² + 1.6² + 2.4² + 1.6²)/5 = 13.2/5 = 2.64.


You may recall from my last stats post that I wrote about the distinction between the sample mean and the population mean. In the above example, the 5 teens constitute our entire population, and the formula above is for population variance, denoted by sigma squared. (Also note that population mean is denoted by mu.)

But let's say you want to use this group of 5 teens to estimate the average number of hours on Facebook for all teens in the US. Then the group of 5 teens is a sample. And weird as this may sound, a better way to estimate the variance of a population based on a sample is to calculate the "unbiased sample variance", denoted by s squared, where the result is computed by dividing by n-1 rather than by n:

s² = Σ(xᵢ − x̄)² / (n − 1)

In this case, the unbiased sample variance = 13.2/4 = 3.3.

Variance is a useful measure of how far from the mean the data points are. But notice, it's a squared value, which exaggerates large distances from the mean: the kid who is 2.4 hours from the mean contributes 5.76 hours² to the average, more than twice his actual distance.

And if you have outliers, say some weird kid was on Facebook 20 hours a day, the variance will be huge. For those readers who thought my last statistics post was overly simplistic, this is where it starts to be important to know which measures are good for data with outliers, and which aren't.

Also, if your data is measured in hours, it's unintuitive to think about distance from the mean in hours squared. This is where Standard Deviation comes in handy.

Standard Deviation is nothing but the square root of variance:

For an entire population,

σ = √[ Σ(xᵢ − μ)² / N ]

And for a sample,

s = √[ Σ(xᵢ − x̄)² / (n − 1) ]

In this case, the Population Standard Deviation = √2.64 ≈ 1.62,

and the Sample Standard Deviation = √3.3 ≈ 1.82.
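
Just to check the arithmetic, here's a quick Python sketch using nothing but the standard library's statistics module (the values in the comments are what it prints for our 5 teens):

    import statistics

    hours = [3, 5, 2, 6, 2]  # hours per day on Facebook

    print(statistics.mean(hours))       # 3.6   (the mean)
    print(statistics.pvariance(hours))  # 2.64  (population variance)
    print(statistics.variance(hours))   # 3.3   (unbiased sample variance)
    print(statistics.pstdev(hours))     # ~1.62 (population standard deviation)
    print(statistics.stdev(hours))      # ~1.82 (sample standard deviation)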


Visually, it looks something like this:

[Figure: a number line showing the data points in green, the mean in blue, and purple lines marking one standard deviation in each direction from the mean.]

Saturday, July 27, 2013

Scooped

From CNBC.com, Take two Senate seats, then repeal Obamacare.

Two primary care physicians, Dr. Annette Bosworth of Sioux Falls, S.D., and Dr. Alieta Eck of New Jersey, are running for the Senate. Neither has any political experience, but both are so concerned about the impact of Obamacare that they decided to pursue Senate seats.

Bosworth said she would seek to replace Obamacare with legislation that improves and enhances safeguards to public health, but would remove any individual or employer mandates. She plans to unveil her full plans in the coming weeks. 

Eck, on the other hand, said she will not replace Obamacare with any reforms. To care for the health needs of the poor, she suggests getting every medical doctor to volunteer four hours a week at a health clinic for the needy.

Hey! I had that idea years ago. Not four hours a week, which is more than I'd comfortably be willing to donate, but maybe a day a month. Of course, I never had the gumption or the inclination to actually try to implement my idea. It involves telling the government to mind its own business when it comes to caring for patients, telling health insurance companies to go work for the gecko, and reminding the country that the disjointed care patients who can't afford fee-for-service would get from seeing a different doctor at each visit is better than the disjointed care they get from heading to the ER for routine medical care, and much less expensive.

So, kudos to doctors Bosworth and Eck!




Tuesday, July 23, 2013

These and These are the Words

I recently received this email from the NY State Psychiatric Association:

Just published field-trial research on DSM-5 shows that in routine clinical practice the diagnostic criteria are viewed as easy to understand and use by both clinicians and patients. This follows field trials in high-volume academic centers that found high reliability of the criteria in the revised manual. 

As part of the process that tested the more-dimensional approach that characterizes DSM-5, APA’s Division of Research conducted a series of field trials in settings of routine clinical practice. Researchers recruited a sample of 621 psychiatrists, clinical psychologists, social workers, counselors, and other licensed mental health professionals. Each participant was then asked to report on at least one randomly selected active patient. Data were provided on 1,269 patients, who also answered study questions. 

The clinicians reported that the revised criteria were generally easy to use and were useful in patient assessment, said principal investigator Eve Mościcki, Sc.D., M.P.H., and colleagues today in Psychiatric Services in Advance. “The clinicians put in a lot of effort collecting data on their patients,” she told Psychiatric News. Large majorities favored the overall feasibility of the DSM-5 approach, the clinical utility of the DSM-5 criteria, and the value of its cross-cutting measures. 

“These trials indicate that the DSM-5 approach works in a wide range of practice settings and a wide range of clinical settings,” said Mościcki. 

This sounds pretty good, doesn't it? But as it happens, I also recently finished reading Gary Greenberg's Book of Woe. And while I disagree with Mr. Greenberg on a number of points regarding the nature of psychiatry, I believe his data are accurate, especially those having to do with his own experience as a field trial participant.

In a press release on October 5, 2010, the APA announced that its field trials had started. There were to be two types of trials. One, conducted across 11 academic centers, had two clinicians evaluating the same patient; both evaluations would be videotaped and viewed by a third clinician to establish reliability.

The second was the RCP, or Routine Clinical Practice trial, in which private clinicians would evaluate two patients, then re-evaluate them a couple of weeks later. The data would then be compared and sent back to the work groups, who would tweak the criteria and send them back to the clinicians for a second round of field trials.

But as of October 5th, the trials hadn't actually started; there was just a pilot study for the medical center trials. The data from the pilot study needed to be analyzed, the methodology modified, and clinicians trained before the field trials could begin in earnest. The academic center trials actually began between December 2010 and March 2011.

At the end of 2011, two months before the data had to be in, 5000 clinicians had signed up for the RCP trial, 1000 had started the training, 195 had completed the training, and 70 patients had been enrolled. The goal was 10,000 patients.

Mr. Greenberg's description of the field trial diagnostic interview demonstration at the APA meeting is one of the funniest things I've ever read. William Narrow, the psychiatrist in charge of research for the DSM-5, bungles his way through a clunky computerized interview, most of which is irrelevant to the fake patient's description of her problem, and this takes place after she has already entered her data extensively on her own section of the computerized interview.

There follows a debacle in which Dr. Narrow runs out of time, and then forgets to save his painfully acquired data. The conclusion, obvious from the initial description of extensive hoarding, is Hoarding Disorder. The audience is then asked their opinion on whether the criteria are an improvement over DSM-IV criteria, despite the fact that Hoarding Disorder doesn't exist in DSM-IV.

At the end, the question of reliability was raised by an audience member, Michael First, who was a prominent participant in DSM-IV and was denied a position on DSM-5. Dr. First wanted to know how to tell whether diagnostic discrepancies are the result of the criteria or of clinician style.

The answer provided had to do with Cohen's Kappa, a statistical measure of inter-rater reliability that has been used since the DSM-III. A Kappa of 0 indicates that agreement is due to chance alone. A Kappa of 1 indicates perfect agreement. For the DSM-III, a Kappa of 0.40 was considered poor, and a Kappa of 0.80 was considered high. The same day as Dr. Narrow's demonstration, Helena Kraemer, chief statistician for the DSM-5, said that a Kappa of between 0.20 and 0.40 would be considered acceptable. In other words, the DSM-III reliability was inflated, so it was a good thing that the DSM-5 reliability would be much lower.
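
To make Kappa a little more concrete, here's a toy Python sketch. The two lists of diagnoses are invented purely for illustration; they have nothing to do with the actual field trial data.

    # Cohen's Kappa: observed agreement between two raters, corrected for
    # the agreement you'd expect by chance alone.
    from collections import Counter

    rater1 = ["MDD", "MDD", "GAD", "MDD", "GAD", "PTSD", "MDD", "GAD"]
    rater2 = ["MDD", "GAD", "GAD", "MDD", "MDD", "PTSD", "MDD", "GAD"]
    n = len(rater1)

    # Observed agreement: the fraction of patients the raters agree on.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n  # 0.75

    # Chance agreement: based on each rater's overall diagnostic frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[dx] * c2[dx] for dx in c1) / n**2  # ~0.41

    kappa = (p_o - p_e) / (1 - p_e)  # ~0.58
    print(kappa)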

This is the APA's definition of, "...high reliability of the criteria in the revised manual."

I want to give you a taste of what Gary Greenberg's experience as a field trial participant was like. (This is from Kindle location 4635).

He sat with the patient, in front of his computer, for several hours, plowing through 49 pages of questions on mood disorders, 31 pages of questions on anxiety disorders, and 63 pages of questions on substance disorders.

He then had to rate the patient's responses on a 0-4 scale of severity:

Here I was given a choice. I could “proceed directly to rating” and pull a number out of the air, or I could get a “detailed description of levels.” I went for the details, which turned out to be extensive, 3 pages of descriptions about “identity” and “self-direction” and “empathy” and “intimacy”. Was she a Level 2-”Excessive dependence on others for identity definition, with compromised boundary delineation”? Or did she have the “weak sense of autonomy/agency" and "poor or rigid boundary definition" of a Level 3? Or was her experience of autonomy/agency “virtually absent” and her boundaries “confused or lacking,” which earned her a Level 4? Was her self-esteem “fragile” (Level 3) or merely “vulnerable” (2), or perhaps riddled with “significant distortions” and “confusions” (4)? Was her capacity for empathy “significantly compromised,” “significantly limited,” or “virtually absent”? Was her desire for connection with others "desperate,” “limited,” or “largely based on meeting self-regulatory needs?”

I had no idea. And even if I had, or if I knew how to get this confused and confusing woman to parse it for me, there still loomed thirty pages or so to get through, box after box to check about her self and interpersonal functioning, her separation insecurity and depressivity, her negative affectivity and disinhibition, the types and facets and domains of her traits, hundreds of boxes, or so it seemed, before I could make my final diagnosis, and, with the authority vested in me as a Collaborating Investigator of the American Psychiatric Association, determine which of the constructs that deserve neither denigration nor worship, that aren't real but still can be measured from zero to four, that need to be taken seriously enough to warrant payment and maybe a round of medication but not so seriously that anyone would accuse them of existing, which fictive placeholder would join her height and blood pressure and her childhood illnesses and surgeries and all the other facts of her medical life. At which point I realized that no matter what diagnosis I settled on, I wouldn’t so much have tamed her rapids as funneled them into the diagnostic turbines, raw material for the APA’s profitable mills.

This is the APA's definition of, "...easy to understand and use by both clinicians and patients."

When I first got the email, I forwarded it to Mr. Greenberg, with a note stating that I thought he would appreciate it. He was kind enough to reply, and wrote, "If it weren't so sad, it would be hilarious." I have to agree.








Wednesday, July 17, 2013

NEBA

In case you didn't get to read this article in the Times, on Monday, the FDA "approved the first brain wave test to help diagnose attention deficit hyperactivity disorder in children."

The way it works is that they hook up the kid to a device called a NEBA, "Neuropsychiatric EEG-Based Assessment Aid", and 15 or 20 minutes later, it interprets the EEG to determine if it's consistent with ADHD.

What the NEBA people didn't tell the FDA is the way it really works. You hook up the kid to the NEBA, and 15-20 minutes later, if the kid is still sitting there, he doesn't have ADHD.

Okay, I'm joking. And I don't mean to be glib about ADHD, which is a source of tremendous suffering. But a device like this does make you consider the whole concept of "lab tests" for psychiatric illnesses.

One common complaint about Psychiatry is that it's not "scientific" enough, because there are no lab tests to clearly delineate disease, unlike something more "medical", such as diabetes. In a recent post, Learning From Diabetes, I addressed this point from the angle of: diabetes isn't really that scientific, either. And in my first week of medical school, we were taught that "no lab test is 100% dependable." That's a reasonable point: science always has its limits in medicine, and clinical correlation, based on clinician experience, is always necessary.

But I think there's something deeper going on here. NEBA may well be scientifically sound. And it is exciting to imagine being able to draw some blood to diagnose OCD, or to differentiate Schizophrenia from Bipolar. But science for the sake of science doesn't always make sense. (And science for the sake of industry always does make sense, but isn't always desirable.)

What I'm trying to say is, it isn't cool to overlook the painfully obvious just so you can say you're scientific. And especially where there is money to be made, the temptation is very great to do just that.



Saturday, July 13, 2013

Follow Up Poll

Back in April, I posted a survey about the DSM-5: whether people planned to purchase it, and what people thought the problems with it would be.

Now I'm following up. I want to know if people have purchased it, and whether they feel it's been helpful.

Click HERE to participate. It's completely anonymous, and you don't need to have participated in the first poll to join in on this one.

This survey will close 07.27.13, 12am. Thank you in advance for participating.


Wednesday, July 10, 2013

Statistically Writing

I took a Statistics class my sophomore year in college. Got an A, didn't learn anything. I subsequently learned some probability and combinatorics, which I find immensely useful (not being facetious), but I still don't know any stats. And it seems to me that if I'm going to try to intelligently read papers, I should know, really know, what ANOVA is, and how to compute number needed to treat, and all that other jazz.

Since this is the kind of information that is useful to most or all clinicians, I thought I'd break the topics up into individual posts, and share my understanding, or lack thereof. Now, most of you probably already know all of it. You haven't forgotten anything you learned about statistics in medical school, and you read through the minutiae of the statistical analyses in all studies you peruse. So you probably don't need it. But for the minority who don't remember so well, here's a refresher.

And please let me know if this is overly simplistic. Maybe I'm the only one, but honestly, I didn't really get the implications of this stuff until I wrote this post.

Let's start with the basic basics, measures of central tendency. These are the mean, median, and mode. The definitions are pretty simple. The mean is the average, the median is the middle value, and the mode is the most common value. But the important question, when it comes to understanding the results of a study, for instance, is why would you use one rather than the other?

For the record, what's referred to as the "mean" is generally the "sample mean", i.e. the average value of all the data points in a given sample. This stands in contradistinction to the "population mean", the average value of all the data points in an entire population.

Say you wanted to know what percentage of redheads in the US are left-handed. One way to determine this would be to find every last redhead in the country, and count the number of lefties. In this case, you'd be looking at the entire population, which, practically speaking, is impossible to do on a limited grant. So instead, you'd pick a sample of redheads, maybe all the redheads in your town who were willing to sign up for the study. This is a more do-able project, and you hope that the sample you're looking at is representative of the entire population. But if for some reason your town had an unusually high number of lefties, then the sample would not be representative of the general population in the US.

If you think about it this way, you can see how a perfectly conducted study can draw erroneous conclusions, because it can't look at the entire population, just a sample of it.
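
If you want to see this in action, here's a toy Python simulation. Everything in it is made up for illustration: I'm pretending that exactly 10% of all redheads are left-handed, and drawing repeated samples of 50.

    # How far can an honest sample drift from its population?
    import random

    random.seed(42)  # so the results are reproducible

    TRUE_RATE = 0.10   # pretend: 10% of all redheads are lefties
    SAMPLE_SIZE = 50   # redheads recruited in one town

    for trial in range(5):
        lefties = sum(random.random() < TRUE_RATE for _ in range(SAMPLE_SIZE))
        print(f"sample {trial + 1}: estimated rate = {lefties / SAMPLE_SIZE:.0%}")

    # Each run of 50 gives a different estimate, some above 10% and some below,
    # even though the sampling itself is perfectly fair.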

The function of any measure of central tendency is to give you a handle on a collection of data points, a sense of what the data is "telling" you. But it's important to note that there is no best measure of central tendency. The one we see the most is the mean, but it has its limitations.

The mean is useful for including all data points, even in very large sets. And it's easy to incorporate new data.

Where it starts to falter is with outliers. Suppose you want to know the typical number of marbles owned by each of five children. And suppose the numbers are as follows:

3; 7; 4; 5; 100

The mean value here is 23.8. But it would be misleading to say that on average, each child has 24 marbles. This is where the median is useful. If you put the numbers in order:

3; 4; 5; 7; 100,

you can see that the median is 5, which is much closer to the number of marbles owned by most children in this group.

This is something to keep in mind when reading a study. If a new antidepressant, Happyzac, caused massive improvement in 2 out of 30 subjects, but poor to moderate improvement in the other 28 subjects, the mean improvement might be misleading.

On the other hand, the median can be difficult to use with a very large data set, since all the data has to be put in order first.

Also, if some data points are very close together, and others spread out, the middle number may not be the most useful way to think about the data set. Consider the following sequence:

1; 2; 3; 30; 70; 200; 554

The median here is 30, which really doesn't tell you much about the nature of this set. Unlike the example above, there are no outliers, just one small cluster, and a bunch of other numbers all over the place.

The mode is good for categories, particularly non-numerical ones. Say you wanted to find out the most common hair color of lefties. The mean isn't useful because how do you average hair color? And the median isn't useful because how do you put hair color in order, so you can determine the middle value?

The problem with the mode is that it can be very far from the middle value. Also, there can be more than one mode, e.g. if it turned out that there were the same number of blonde lefties as red-headed lefties.
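
Here's a quick Python sketch of all three measures, using the marble numbers from above plus an invented hair-color list (statistics.multimode needs Python 3.8 or later):

    import statistics

    marbles = [3, 7, 4, 5, 100]
    print(statistics.mean(marbles))    # 23.8 (dragged up by the outlier)
    print(statistics.median(marbles))  # 5    (robust to the outlier)

    # The mode handles non-numerical categories, and there can be several.
    hair = ["red", "blonde", "red", "brown", "blonde"]
    print(statistics.multimode(hair))  # ['red', 'blonde'], i.e., two modes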

To summarize:

Measure   Good for                                   Watch out for
Mean      Uses every data point; easy to update      Misleading when there are outliers
Median    Robust to outliers                         Large sets need sorting; uninformative when data are spread out
Mode      Non-numerical categories                   Can be far from the middle; may not be unique

Click HERE to read the next Statistics post on Variance and Standard Deviation.

Saturday, July 6, 2013

Az, The Great and Powerful

I remember studying Pharmacology in medical school, and some of the mnemonics. If it ended with "olol", it was a beta blocker (propranolol, metoprolol). And if it had an "az" in it, it was a benzodiazepine (diazepam, lorazepam, chlordiazepoxide). Now I just call them valium, ativan, librium, etc., so my mnemonic isn't much use.

I've noticed that there's an excessive and ironic amount of anxiety associated with prescribing benzos for anxiety. My residency had a particularly strong addiction psychiatry program, and it was great to work on the dual diagnosis unit, believe it or not. But the ethos there was, absolutely no benzos. Benzos are bad. If patients ask for them, it's because they're drug seeking. And if doctors suggest their use, it's because the doctors are either too foolish to recognize that they're being duped by drug seekers, or because the doctors are coddling their patients.

As a PGY-3, in the outpatient clinic, I had to force myself to unlearn some of this attitude. But even then, it was with a fair degree of trepidation that I wrote my first independent script for a benzo (I think klonopin).

So with the I-STOP program looming large, gearing up to check on all the patients I write controlled substance scripts for, in real time, I need to ask myself, what's the truth about benzos?

Here are some specific questions, in no particular order:

* Do benzos help with anxiety, in the long run?
* Do benzos help with sleep?
* How likely is it that a patient without a significant substance history will abuse or divert benzos?
* How likely is it that a patient with a significant substance history will abuse or divert benzos?
* Does the particular substance of abuse matter?
* How dangerous is benzo use?
* How helpful can a benzo be to someone with a significant substance history?

In an extensive review of the literature since 1966, Posternak and Mueller (2001) found that:

Although most benzodiazepine abusers concurrently abuse other substances, there is little evidence to indicate that a history of substance abuse is a major risk factor for future benzodiazepine abuse or dependence. Furthermore, benzodiazepines do not appear to induce relapse of substance abuse in these patients. The authors conclude that the position that benzodiazepines are contraindicated in former substance abusers appears to lack empirical justification. Benzodiazepines may be indicated in certain patients with anxiety disorders and a history of substance abuse or dependence.

Conversely, there's a really nice summary entitled, Addiction: Part I. Benzodiazepines—Side Effects, Abuse Risk and Alternatives on the American Family Physician site. It reviews things I hadn't thought about in a while, like the neurochemistry of BZDs. According to this article, benzos are pretty safe in overdose, when used alone, but are frequently used with other substances, which can increase toxicity and danger. The authors claim that people who take benzos become tolerant to the hypnotic effects pretty quickly, which is why they're not useful for insomnia, in the long run. Tolerance to anxiolytic effects develops more slowly, but it does develop, so that the authors discourage long-term benzo use for anxiety. Finally, the article claims that:

Benzodiazepines are rarely the preferred or sole drug of abuse. An estimated 80 percent of benzodiazepine abuse is part of polydrug abuse, most commonly with opioids...Studies indicate that 3 to 41 percent of alcoholic persons report that they abused benzodiazepines at some time, often to modulate intoxication or withdrawal effects...Most addiction medicine specialists believe that benzodiazepines are relatively contraindicated in patients with current alcohol or drug abuse problems and in patients who are in recovery.

This article predates the Posternak and Mueller paper, so it isn't clear what to make of the disagreement between them.

This lack of clarity is reflected in a 2012 study of prescribing practices, Psychiatrist Decision-Making Towards Prescribing Benzodiazepines: The Dilemma with Substance Abusers. The study surveyed outpatient psychiatrists, and found that:

Sixty-six percent of (respondents) experienced requests for behaviors suspicious for abuse...Patient characteristics such as ‘history of abuse’, ‘unknown patient’, and ‘patient use of illicit substances’ were occasional or common reasons for NOT prescribing BZDs (75 %). (And) The most common contexts in which the majority of (the) sample was uncomfortable prescribing BZDs involved a patient history of substance abuse, fear of initiation of dependence, diversion, and feeling manipulated by the patient.

According to the CDC's Emergency Department Visits Involving Nonmedical Use of Selected Prescription Drugs --- United States, 2004--2008:

The estimated number of ED visits involving nonmedical use of benzodiazepines increased from 143,500 in 2004 to 271,700 in 2008 (89%, p=0.01), and rates increased from 49.0 to 89.4 per 100,000, an increase of 82% (p<0.05). The increases in numbers of ED visits during 2004--2008 for individual benzodiazepines were significant: alprazolam (125%, p=0.01), clonazepam (72%, p<0.001), diazepam (70%, p=0.02), and lorazepam (107%, p=0.006)...Among opioid analgesic--related visits, 38% did not involve any other drug (including alcohol); the corresponding figure was 21% for benzodiazepine-related visits. Benzodiazepines were involved in 26% of opioid analgesic--related visits. Alcohol was involved in 15% and 25% of visits for opioids and benzodiazepines, respectively.

The overall opinion, aside from the Posternak article, at least as far as I could find in a semi-brief search that was more extensive than what I've recorded, seems to be that:

* BZD treatment of anxiety is contraindicated in substance abusers, at least while they're actively using. Once the substance use is in remission, it appears that benzos can be used, with caution, to treat anxiety, but are not recommended as first line.
* In more general populations, benzos are useful for the short term treatment of anxiety, especially as PRNs, and the even shorter term treatment of insomnia.
* Benzos are generally safe in overdose if used alone, but are frequently used with other substances, which can make them much more toxic in overdose.
* Benzos are also making increasing appearances in emergency rooms.

Somehow, I'm not satisfied with this answer. Maybe because it doesn't jibe with my clinical experience. I've known benzos to be helpful to patients with a history of substance abuse, and even to be helpful in curtailing the substance abuse, with no subsequent escalation in dose. I've worked with patients whose insomnia improved significantly with benzo use QHS, even long term, and even after failing ambien or trazodone.

So I'm not quite sure what to do with this information. I recognize that my sample size is not large, but since I've read mostly abstracts of the above articles, I can't speak to the validity of their conclusions, either. Any thoughts?







Thursday, July 4, 2013

Chairs

What kind of chair do you use in your office?

Like most shrinks in private practice, I sit a lot. And I happen to have back problems (old muscle injury). So it's been hard to find the right chair.

The way my (small) office is set up, I use the same chair for sitting in when I'm seeing patients face to face, and when I'm working at my desk. At the moment, I'm using a Setu Chair.

I got the kind without the arms, because I thought it would give me more freedom of movement, but I think it makes it harder for me to take notes, or even just sit back comfortably. It's also not height adjustable (this model), so I use a footrest with it. The nice thing about it is that the instructions consist of two words: Sit Down! No adjustable doodads. Nothing complicated. And it does mold to your back nicely. But I still seem to ache a lot, even with stretching and strength training.

How do you figure out which chair works for you? If you go to a store to try it out, it's not like you're going to sit there for 3 hours to see how it feels.

In the past, I've used a Mirra chair:



But the controls were too cumbersome, and it wasn't all that comfortable.

My husband uses an Aeron at his desk:


I've never found it that comfy.

I recently read about the second iteration of the Think Chair:


This looks promising, but again, how do you know until you've sat in it for several hours straight?

One day, in the far off future, when I have a big enough office to fit both a desk chair and a sitting chair, what I want is the Saarinen Womb Chair with Ottoman:




I also love the idea of an Eames Chair with Ottoman:


In all honesty, I'm crazy about this chair, but I've wondered what it would do to my back to sit in it all day.

That's the thing. There are all these different ergonomic desk chairs, but they're made for working at...wait for it...desks!

And the lounge chairs are more for reading.

It reminds me of doctor shoes. I learned this as a medical student and resident. There are great shoes for running. And there are great shoes for walking. But what you need when you work in a hospital are great shoes for standing. Because that's mostly what you do. I used to think those Dansko clogs looked silly on surgeons, and the Merrell clogs are ugly. But they work for standing.

Is there such a thing as a shrink's chair? One that's designed for attentive listening in a not completely reclined position, that lets you move around a little, and doesn't let you sink down til it's bad for your posture? This could be a challenge for Herman Miller.

Just for fun, here are some of my other favorite chairs:

The Swan


The Egg



The Marshmallow



The Flight Recliner


The Papa Bear


Do other people have similar sitting issues? And if so, how have you solved them?









Tuesday, July 2, 2013

The Culture of Medicine and the Art of K'vetching





I've been reading a lot of professional posts and articles lately; one perk of writing a blog is that it forces me to read up on topics. A lot. The kind of reading up that should count towards MOC, and not just as Category 2 credits.

But I digress.

What I've been thinking about is, "How did we get to this place?"

The DSM, pick a version, tells us how to diagnose, assuming we want to be paid, or our patients to be reimbursed.

The APA tells us we have to change the way we code billing and to change the way we write our notes accordingly, even if that means said notes cease to reflect anything about the treatment.

I-STOP tells us we have to stop in the middle of a session, get online, and check to see if the person sitting in front of us is misusing controlled substances.

The ABPN, endorsed by the APA, tells us we have to pay thousands of dollars to take exams that cover outdated meds and topics, and additional money to waste time doing PIP modules that intrude on patient care, encourage checklists rather than talking with patients, and have never, not even remotely, been shown to improve quality of care. And if we don't do this, we will lose our board certifications, and possibly our licenses, down the line.

HIPAA tells us we have to ask our patients to sign forms indicating they know we're sending their private medical information all over the place (see What, Exactly, Is HIPAA?).

The government tells us we have to implement expensive electronic medical record systems that are designed for billing rather than patient care, that send private medical information willy-nilly into the cloud, that take time away from patient care, and that have not been shown to improve quality of care.

Insurance companies tell us how and for how long we're supposed to treat our patients, and I don't care that they pay lip service to the idea that all medical decisions are the responsibility of the doctor/patient dyad, and not of the companies holding the pursestrings.

Politicians tell us we have to compromise our patients' trust by reporting them to a national database if we think they might, at some unspecified time in the future, be dangerous, and that this will help prevent the random and rare wanton murder of school children.

This is crazy! How did we lose control of our own profession? I mean, I don't go around telling lawyers how to practice law!

Here's a link to a thoughtful post by Dr. James Amos entitled, Can We Make Medicine More Fun?

Given everything I've just written, I don't see how.

In considering where this all comes from, it seems to me that it has to do with the "Suck It Up" attitude that exists in medicine. You have more patients than you can safely manage? Too bad! You need to work 30 straight hours without sleep? Tough! You need to fill out an ever-increasing number of meaningless forms? So what! You need to be in two places at once and do more work than is humanly possible? Suck it up!

There's no getting away from the fact that part of medical training involves learning to do what you're told without complaining, even if it makes no sense and requires magical abilities, because it's for the good of the patient.

This attitude might be just fine if it actually were for the good of the patient. Maybe it used to be, but not anymore. Our complacency has harmed our patients. All these external requirements amount to diminished care.

Here's what needs to happen: We, the doctors in the clinical trenches, need to break out of our well-trained molds and learn how to k'vetch, good and loud, and not just to each other, like I'm doing right now. And we need to refuse to let people who have no medical training or experience tell us how to do our jobs.

How do we go about this?

What do you get when you cross an elephant and a rhino?
Elephino?