
Tuesday, October 18, 2016

Finally! Building a Website-The Website

I know, I know, I started writing about building my practice website months ago, and I've completely dropped the ball. But only the blog ball. I've actually been working very hard on my own website, as well as that of my analytic institute, and on other tech-y advances we're adopting. More on that later.

For now, I have finally chosen Squarespace as my site host, and I'm diligently shoveling away at everything I need to do to get the site up and running.

Why Squarespace? I know that in my last post on this matter, it looked like I was going to go with Duda. Since then, I've traveled further along the convoluted paths of website-building research, and one of the things I discovered is that Squarespace is a solid company in many ways, including staying power. I really wasn't sure I could say that about Duda - that a couple of years from now, they'd still be around.

In the end, it came down to a choice between Squarespace and a self-hosted WordPress site. I'll tell you a little bit about that.

I like Instagram. My interests fall into a few main categories: dogs, crochet/yarn, design, food, and museums. Instagram has targeted ads, which I sometimes look at. One day, an ad came up for Skillshare, where you can take mini-courses taught by just about anyone. Some are free; some require a premium subscription. I found a premium class called, Mastering Wordpress: Build the Ultimate Professional Website. So I paid the 99-cent, three-month trial fee and started taking the class. It was pretty bad. It was boring, and the instructor was just walking through how to build his EXACT site. But it did make it clear to me that I could do this.

Then I found another course, How to Properly Make a Website with Wordpress-Beginner's Tutorial. This one was helpful. You can check it out on Skillshare, but the instructor also has his own site called, Websites Made Easy.

Basically, you use HostGator.com to host your site. You can also buy a domain name through them.

To review briefly, the domain name is the name of your site, i.e., its address, like alfredeneumanmd.com. (I actually got a .org domain name.) The host is where your site "lives" online. Hosting companies usually have several pricing plans that vary by what's offered.
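If you like to see that relationship concretely, here's a tiny sketch of my own (not from any of the courses): the domain name is just the human-readable address, and DNS is what maps it to the server where the host actually keeps your site. The example.org domain below is a stand-in, not my actual domain.

    import socket

    # A domain name is just a human-readable address; DNS maps it to the
    # IP address of the server where the site "lives" (the host).
    domain = "example.org"   # stand-in domain, not my real one
    print(socket.gethostbyname(domain))  # prints the IP address of the server hosting that domain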

Once you buy your domain name and pick your hosting plan, you hook WordPress up to it, and you build your WordPress site. WordPress has a ton of plugins, which are small add-on features, written by other developers, that do great things for your site.

I didn't end up going this route, although I actually tried to: something went wrong with the billing, and the account somehow got canceled. The problem wasn't on my end, so I started reading reviews of HostGator, and apparently they used to be pretty good, but not so much anymore. So I gave up on it.

It's also not clear to me why this is a better way to go than simply hosting your site through WordPress.com.

But here's what it came down to with Squarespace. It's a one-stop shop. You can buy a domain name through them, host on their servers, and build your site with their software.

A domain name carries an annual fee no matter where you register it. Squarespace charges $20/year, which is more than many other registrars, but they lock you in at your initial rate. Other sites don't tell you what they'll charge after the first year.

Squarespace also doesn't give you a hard time about transferring your domain name if you decide you don't want them to host anymore, as long as you've had your site for at least two months. According to reviews, other hosting sites do give you a hard time. I think it reflects Squarespace's confidence that you'll like their product and want to stay with them.

The pricing is middle of the road. I got the "Personal" plan, as opposed to the Business plan, at $16/month, or $12/month if I pay annually. It includes Domain Privacy, which removes your personal contact information from the public WHOIS record for your domain name, which can otherwise be crawled by spam marketers for your email address.
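For what it's worth, the arithmetic on the two billing options is simple; these are just the prices quoted above, nothing official:

    # Personal plan: $16/month billed monthly vs. $12/month billed annually
    monthly_billing = 16 * 12   # $192/year if you pay month to month
    annual_billing = 12 * 12    # $144/year if you pay up front
    print(monthly_billing - annual_billing)  # $48/year saved by paying annually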

The Business plan has a few more features that I don't need right now, but you can switch between plans whenever you like. They also start you off with a 14-day free trial, so you can be sure Squarespace is what you want.

The design software took a little getting used to, but it's powerful, and really quite beautiful. They have excellent online tutorials, and lots of them.

In case you're interested, I chose the Keene template.



I changed the font and ditched the toothbrush, and I really like it. It's clean, uncluttered, and attractive.


Finally, Squarespace has exceptional customer service, which I came to realize is very important when you're DIYing a site. Every review I read of Squarespace was impressed by the customer service. I've already made use of it; the turnaround was faster than I expected, and they were genuinely helpful.

Next up, the ACTUAL building of the site, or, "How do I introduce myself to the world and describe what I do? What DO I do? Why do I do it that way?" I never realized how philosophical building a website can be.




Sunday, October 2, 2016

Rosh Hashanah 5777

Tonight is the start of the year 5777, on the Jewish calendar. Regardless of when you mark the beginning of the year, whether it's tonight, or January 1st, or some other day, the new year is a time for reflection on the past, and hope for the future. May it be a year of happiness, health, and peace.



(It's traditional to celebrate Rosh Hashanah by eating an apple dipped in honey, to symbolize a sweet new year.)

Saturday, October 1, 2016

There IS Something You Can Do



On September 16th, the Department of Health and Human Services (HHS) released a final regulation about clinical trials submitted to the FDA, and the National Institutes of Health (NIH) issued a new policy regarding the same subject. These are the basics:


The HHS regulation, also called the "Final Rule", states that a responsible party, such as a pharmaceutical company submitting a phase 2, 3, or 4 clinical trial for review by the FDA, with the purpose of getting a new drug approved, or a new indication for an existing drug, must register the trial at ClinicalTrials.gov within 21 days of enrolling the first participant. Registration involves providing, "1) descriptive information, 2) recruitment information, 3) location and contact information, and 4) administrative information."

In addition, "The Final Rule requires a responsible party to submit summary results information to ClinicalTrials.gov for any applicable clinical trial that is required to be registered, regardless of whether the drug, biological, or device products under study have been approved, licensed, or cleared for marketing by the FDA."

The, "...results information must be submitted no later than one year after...the date that the final subject was examined or received an intervention for the purpose of collecting the data for the primary outcome measure. Results information submission may be delayed for as long as two additional years...," for a few complicated reasons we won't get into here.

Results need to include, "1) participant flow information, 2) demographics and baseline characteristics of the enrolled participants, 3) primary and secondary outcomes, including results of any scientifically appropriate statistical tests, and 4) adverse events."

Importantly, "The Final Rule also adds a requirement to submit the clinical trial protocol and statistical analysis plan at the time of results information submission."

Information needs to be updated on ClinicalTrials.gov at least once a year. And any errors, deficiencies, or inconsistencies that the NIH (which runs ClinicalTrials.gov) identifies need to be addressed by the responsible party.

That's the HHS Final Rule. Trials need to be registered on ClinicalTrials.gov, information needs to be kept relatively current, and results have to be posted.
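To make the timeline concrete, here's a rough sketch of the two key deadlines as I read them. The function names and the simple date math are mine, not anything from the rule itself, and the one-year window is approximated as 365 days:

    from datetime import date, timedelta

    REGISTRATION_WINDOW = timedelta(days=21)    # register within 21 days of enrolling the first participant
    RESULTS_WINDOW = timedelta(days=365)        # results due within one year (approximate)
    MAX_DELAY = timedelta(days=2 * 365)         # submission may be delayed up to two additional years

    def registration_deadline(first_enrollment: date) -> date:
        # Deadline to register the trial on ClinicalTrials.gov
        return first_enrollment + REGISTRATION_WINDOW

    def results_deadline(primary_completion: date, delayed: bool = False) -> date:
        # Deadline to post summary results, with the optional delay applied
        deadline = primary_completion + RESULTS_WINDOW
        return deadline + MAX_DELAY if delayed else deadline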

The NIH policy broadens the scope of which trials they consider subject to these requirements.

These are good things for trial transparency and honesty. We'll get to the catches in a bit.

Let me backtrack and try to explain why I'm writing about this. Currently, pharmaceutical companies submit drug trials to the FDA to get approval. They're supposed to register these trials on ClinicalTrials.gov, but they often don't, it's not policed, and the data they do submit isn't checked. Note that ClinicalTrials.gov is run by the NIH, not the FDA, so there's a built-in disconnect right from the start.

While a study is being reviewed by the FDA, the pharmaceutical company can publish anything they want about that study in peer-reviewed journals. They can do this even if the drug ends up not being approved. They often misreport data and results. And there is no way of knowing if they are staying true to their original study protocol, or if they're messing around with the stats in ways that benefit them. The journals have no way of knowing what's true and what isn't. The whole "peer review" part is also a sham, because the "peers" are given whatever information the pharmaceutical company feels like giving them.

And the FDA does nothing to prevent any of this from happening.

This is not me being paranoid. Here's an article that describes a disturbing example:

The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance

Jureidini, et al.

Abstract.
OBJECTIVE: Deconstruction of a ghostwritten report of a randomized, double-blind, placebo-controlled efficacy and safety trial of citalopram in depressed children and adolescents conducted in the United States.
METHODS: Approximately 750 documents from the Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-(NMG) were deconstructed.
RESULTS: The published article contained efficacy and safety data inconsistent with the protocol criteria. Procedural deviations went unreported imparting statistical significance to the primary outcome, and an implausible effect size was claimed; positive post hoc measures were introduced and negative secondary outcomes were not reported; and adverse events were misleadingly analysed. Manuscript drafts were prepared by company employees and outside ghostwriters with academic researchers solicited as ‘authors’.
CONCLUSION: Deconstruction of court documents revealed that protocol-specified outcome measures showed no statistically significant difference between citalopram and placebo. However, the published article concluded that citalopram was safe and significantly more efficacious than placebo for children and adolescents, with possible adverse effects on patient safety.

International Journal of Risk & Safety in Medicine 28 (2016) 33–43
DOI 10.3233/JRS-160671


And this is the abstract from the original article, for comparison:

A randomized, placebo-controlled trial of citalopram for the treatment of major depression in children and adolescents.

Wagner KD, Robb AS, Findling RL, Jin J, Gutierrez MM, Heydorn WE.

Abstract
OBJECTIVE:
Open-label trials with the selective serotonin reuptake inhibitor citalopram suggest that this agent is effective and safe for the treatment of depressive symptoms in children and adolescents. The current study investigated the efficacy and safety of citalopram compared with placebo in the treatment of pediatric patients with major depression.
METHOD:
An 8-week, randomized, double-blind, placebo-controlled study compared the safety and efficacy of citalopram with placebo in the treatment of children (ages 7-11) and adolescents (ages 12-17) with major depressive disorder. Diagnosis was established with the Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version. Patients (N=174) were treated initially with placebo or 20 mg/day of citalopram, with an option to increase the dose to 40 mg/day at week 4 if clinically indicated. The primary outcome measure was score on the Children's Depression Rating Scale-Revised; the response criterion was defined as a score of ≤28.
RESULTS:
The overall mean citalopram dose was approximately 24 mg/day. Mean Children's Depression Rating Scale-Revised scores decreased significantly more from baseline in the citalopram treatment group than in the placebo treatment group, beginning at week 1 and continuing at every observation point to the end of the study (effect size=2.9). The difference in response rate at week 8 between placebo (24%) and citalopram (36%) also was statistically significant. Citalopram treatment was well tolerated. Rates of discontinuation due to adverse events were comparable in the placebo and citalopram groups (5.9% versus 5.6%, respectively). Rhinitis, nausea, and abdominal pain were the only adverse events to occur with a frequency exceeding 10% in either treatment group.
CONCLUSIONS:
In this population of children and adolescents, treatment with citalopram reduced depressive symptoms to a significantly greater extent than placebo treatment and was well tolerated.

Am J Psychiatry. 2004;161(6):1079-83

In the end, the FDA did not approve citalopram for use in children. But the study has been cited over 160 times, putting it in the top 5% of cited articles in medicine from 2004. Between 2005 and 2010, nearly 160,000 children under age 12 received escitalopram, despite the FDA's lack of approval. (There was a switch at some point, during this period, from off-patent citalopram to on-patent escitalopram.) It's hard not to conclude that the published study had an impact on prescribing practices.

There's a much fuller description of this here, by Bernard Carroll. And a lot more about this whole topic on 1BoringOldMan.com.

Getting back to the Final Rule, there are some problematic loopholes. There are allowances for delays in reporting. There is also the crazy idea that the study protocol doesn't have to be reported until the results are submitted. This leaves room for dinking around with the protocol, changing outcome measures after the trial has started, etc.

Here's what you can do. Take a look at this petition, and if you're on board, sign it. It's entitled, "Stop False Reporting of Drug Benefits & Harms by Making FDA & NIH Work Together". The main point is this:

We now petition Congress to require the FDA and NIH to coordinate their monitoring and sharing of key information through ClinicalTrials.gov. Working together, the two agencies could enable stakeholders to verify whether purported scientific claims are faithful to the a priori protocols and plans of analysis originally registered with the FDA. Publication of analyses for which such fidelity cannot be verified shall be prohibited unless the deviations are positively identified (as in openly declared unplanned, secondary analyses). This prohibition shall include scientific claims for on-label or off-label uses made in medical journals, archival conference abstracts, continuing education materials, brochures distributed by sales representatives, direct-to-consumer advertising, and press releases issued by companies or their academic partners. It shall extend to FDA Phase 2, Phase 3, and Phase 4 clinical trials. By acting on this petition, Congress will create a mechanism for stakeholders independently to verify whether inferences about clinical use suggested by the unregulated corporate statistical analyses can be trusted.

Please think about it. Thanks.

Friday, September 30, 2016

PA Victory?

I know everyone has problems with prior authorizations. The Byzantine bureaucracy and obscure explanations for denials are maddening.

I'm suddenly reminded of the DMV scene in Zootopia, except I think insurance companies do it on purpose.






Today I had a minor and unexpected victory, and I think I might know why it worked, so I thought I'd share it.

Unfortunately, I have to mask any clinical information, so you'll have to take my word on a number of points. Some things may sound peculiar or not-thought-out, but like most cross-sectional views of a patient's medication regimen, if you knew the full history, it would make sense.

Here's what happened. I needed to get a PA for a new medication, M. The patient had had trials of multiple cheaper, covered medications, and had either failed them or been unable to tolerate them. The insurance company's criterion seemed to be failure of a 4-week trial of, or inability to tolerate, two medications.

A number of months ago, I had tried to get a PA for a new med for this patient, and the application was rejected, despite the fact that the criteria were obviously met. I didn't have the energy to pursue it then, and there were one or two more covered meds left to try.

The difference now seems to be that I've since had a Genesight test done on this patient. As I've described in the past, I have my doubts about the whole genetic testing business, but I used the test results as documentation to support my rationale. M was in the "Green Column", and the other meds in that column had already been tried.

It worked! Who knew? I'm only guessing that that's the reason, but I can't think of any other difference. Same patient, same insurance company.

It wasn't a pure victory, though. This medication has a starter pack that's used to titrate gently. THAT wasn't approved. And using a single dose form, starting lower, and then increasing the number of pills per day, well, that wasn't covered, either, because it involved too many pills. What I needed to do was skip the recommended titration, and go straight to the next dose up. Not dangerous, but possibly hard to tolerate.

To me, it feels like a punishment for asking the insurance company to cover the medication.

I guess with insurance companies, you take what you can get.





Tuesday, September 27, 2016

ICD-10 Changes

I know it's been forever. I have actually been super busy. I still am, but since this is timely and important, I thought I'd post something.

There are ICD-10 changes that go into effect on October 1st. These are intended to correspond to recent changes in DSM-5. I suppose it should now be called DSM-5.1.

These are the changes:



I hope this is helpful.

Friday, August 5, 2016

Laziness






I'm very curious about the idea of laziness. "Lazy" is one of those words we throw around as though everyone understands what it means and agrees on its definition. Kind of like, "Love". But I don't think anyone honestly knows what lazy means. I'm not sure it even has a meaning. I think what is really meant by, "He's so lazy," is, "I don't understand why he's not doing the things I think he should be motivated to do, or that I would be doing in his position."

I'm hard pressed to think of an instance in which the word lazy is used in a non-pejorative way. At least in reference to a person or an animal. Rivers and summer days are exempt from this criticism.

I searched PEP-Web to see if the concept is addressed in the analytic literature. There were a lot of hits for "laziness" (343) and "lazy" (806), but none in a title, and while I didn't go through every one, the ones I did look at all seemed to be either quoting someone speaking about himself or someone else, or describing someone, all with the assumption that no elaboration was needed as to what was meant by "laziness" or "lazy".

I'll share a multilayered thought I just had. I generally make an effort to write correctly, which means that there's a comma before quotes, and the first word of a quote is capitalized, and the ending punctuation is within the quotes, even though that's weird. Or, "That's weird." Not, "that's weird". But it's not always clear to me what to do when I'm referring to an individual term. Do I place a comma before, "Lazy?" Do I always need to put quotes around, "Lazy?" Do I capitalize, "Lazy," if I use it over and over again? Do I place the punctuation within the quotes if it's just a word I'm defining, like, "Lazy"?

I assume I should do the same thing with "lazy" that I do with longer quotes, but I don't always do so. And my thought was, I'm just too lazy to bother.

Then I thought about Frank McCourt in Angela's Ashes, where he doesn't bother to use quotes, but he manages to write in such a way that you're never confused about who's saying what. Presumably, he's mimicking Joyce. So who cares whether I get the punctuation right or not? The whole purpose of punctuation is to make yourself understood, and if readers know what I mean, what difference does it make?

I'm impressed by how easily I fell into using the catchall term, "Lazy," to explain why I don't always punctuate correctly. But I'll get back to this.

I Googled, "What is Laziness?" and after the definition: the quality of being unwilling to work or use energy; idleness, I linked to an article by Neel Burton, MD, in Psychology Today, The Causes of Laziness.

It's not a bad article, but the explanations are a bit simplistic: we haven't evolved past our ancestors' need to conserve energy, or past their assumption that life would be short, so why plan for the future; we prefer immediate gratification to long-term goals; we can't see the purpose of our work; we're afraid of success; we're afraid of failure, so we don't try.

Now back to my previous point. I don't think anyone knows much about why she does what she does. Even less about why someone else does what he does. You can get at some unconscious content in analysis, but there will always be mysterious actions and thoughts.

What I do notice is that when I have to make a decision about how to punctuate, it causes a slight twinge of anxiety. Am I doing this right? Is my meaning clear if I don't do it right? Why do I care? Do I care?

Clearly, I do.

I'm reminded of Otto Fenichel's paper, On the Psychology of Boredom. Fenichel describes a particular kind of boredom, a sort of ennui, in which the bored person can never settle into any particular activity. Fenichel's understanding of this is that it reflects a warded off, unacceptable wish. So the bored person wants something, but is unable to allow himself to know what it is he wants, because he's conflicted about it, and it makes him anxious. So instead, he searches for something to satisfy the wish, but of course, nothing does, because he doesn't consciously know what he's wishing for.

Laziness is not directly related to boredom, though obviously it can be, in some instances. The common thread here is anxiety generated by unconscious content, likely conflict. And it's hard to assess motivation when there's all sorts of unconscious fermentation going on.

My final association:  I read a story when I was a kid, and I still don't quite understand it. It was a Chinese fable about a lazy boy who never did anything his mother asked. He never helped out at home. He never did any kind of work. He was just a lazy good-for-nothing. Then one day, there was some kind of threat to his family, and the boy got up and deftly handled the situation, and saved the day.

The End.


Wednesday, July 27, 2016

Good and Bad Ideas

Today, NY State sent a letter to insurance companies, telling them they'd better comply with parity laws, and that the state will be checking up to make sure the insurers are keeping in line. Specifically, the letter was written to "remind" insurers that

MHPAEA (Mental Health Parity and Addiction Equity Act) prohibits issuers whose policies or contracts provide medical and surgical benefits and MH/SUD benefits from applying financial requirements, quantitative treatment limitations (“QTLs”), and NQTLs to MH/SUD benefits that are more restrictive than the predominant financial requirements or treatment limitations that are applied to substantially all medical and surgical benefits covered by the plan...

...state regulators [will] further review the processes, strategies, evidentiary standards, or other factors used in applying the NQTL to both MH/SUD and medical and surgical benefits to determine parity compliance:

• preauthorization and pre-service notice requirements;
• fail-first protocols;
• probability of improvement requirements;
• written treatment plan requirement; and
• other requirements, such as patient non-compliance rules, residential treatment limits, geographical limitations, and licensure requirements.

Accordingly, issuers are advised that the Department of Financial Services will be reviewing issuers’ NQTLs and QTLs to ensure that issuers fully comply with MHPAEA and will take necessary action in the event of any non-compliance.

Some additional NQTLs are:

"...treatment limitations based on geography, facility type, provider specialty, and the criteria limiting the scope or duration of benefits or services."

Enforcing rules for insurance companies is a good idea. But I worry about certain bad ideas. In fact, I have a sneaking suspicion that insurance companies pay lawyers, or others so inclined, large sums of money to sit around all day coming up with new bad ideas: ways to technically comply with parity laws while still hindering or delaying reimbursement.

I've written previously about one of these bad ideas, namely, an insurance company's demand that I provide proof that my patient requires out-of-network services. I almost fell for this and started researching articles on continuity of treatment, etc., until Dinah from Shrink Rap pointed out that while the insurance company doesn't need to cover out-of-network services, if they do cover out-of-network care, the patient doesn't need to justify not using in-network care.

Other egregious examples are stalling and finally informing the patient that the claims were never submitted, or that they were lost - and then, sometimes even more egregiously, when the claims are resubmitted, the insurance company comes back and says it's too late to submit them.

Or prior authorization. I tried to get Brintellix, now Trintellix (renamed because Brintellix sounds too much like Brilinta, another drug), approved; got rejected; appealed by filling out a long form that met every criterion for approval; got rejected again; and finally decided it's a crappy drug anyway, and not worth the effort.

A recent gem involved asking the patient's spouse, who is the primary insured, to call the insurance company to verify or "prove" that the patient has no other insurance (Doesn't, never did).

And I'm quite convinced that these stalling tactics are effective overall, because some percentage of them will not be pursued by patients. That percentage is a gold mine for insurance companies. And mental health patients are perhaps more susceptible than most to this hindrance, since things like depression, psychosis, and anxiety can get in the way of accomplishing tiresome, long, and frustrating tasks like talking to insurance companies.

Anyone else have insurance horror stories?

Thursday, July 21, 2016

The Gene Genie

In response to my recent post, In the Genes?, I got the following email message:

Dear PsychPractice,
I am a genetic counselor for Assurex Health, the company behind GeneSight. We read your recent blog post comparing Genesight, the Genecept Assay, and Genelex with interest, and I wanted to reach out to you concerning a couple items that you may not have run across in your search of GeneSight.  While much of what you wrote about GeneSight is accurate, some of it is outdated. For example, GeneSight Psychotropic now includes 12 genes, not 5.  Based on the images of the report in your blog, I wonder if perhaps you pulled the images from our Pine Rest (Winner et al, 2013) study?  This study was published in 2013, at a time when GeneSight had 5 genes.  GeneSight has since been updated and currently includes 12 genes.  Additionally, 4 other studies on GeneSight have been published, as well as 2 meta-analyses of these data, all of which were statistically significant. 

If you are interested in receiving any of these studies, please let me know and I will send them to you.  You may also be interested in white papers on GeneSight genes that outline all current research in the context of psychiatric pharmacogenomics.  We have also created white papers on genes we choose not to add to GeneSight due to lack of clinical utility. 

Somehow, I thought I had gotten the image I used, with the list of genes, from the Genesight site. So I went there to look for it again, and I couldn't find it. So then I tracked down where I found the image, by scrolling through my browser history, and it was here:

Winner, Joel G; A prospective, Randomized, Double-Blind Study Assessing the Clinical Impact of Integrated Pharmacogenomic Testing for Major Depressive Disorder; Discovery Medicine; ISSN: 1539-6509; Discov Med 16(89):219-227, November 2013.

I guess it was from the Pine Rest study, and yes, I suppose it is outdated. And I'm pretty sure I looked up the paper because I couldn't find much information about which genes Genesight uses on their site.

I checked again just now, and it took a while for me to find it, but they do have a description of all the genes they're testing, including seven that weren't in the original panel of five.

Pharmacokinetic:

UGT2B15
CYP2D6
CYP2C19
CYP2C9
UGT1A4
CYP3A4
CYP2B6
CYP1A2

Pharmacodynamic:

HLA-A*3101
HLA-B*1502
HTR2A
SLC6A4

The UGTs, the HLAs, and three additional CYPs (2B6, 2C9, and 3A4) are the ones that weren't in the original five-gene panel.

So now we've got that straightened out. My apologies to Genesight for propagating outdated information.

That said, since publishing that last post, I have used Genesight. Yes, I have. I chose it partially for reasons having to do with insurance coverage.

I had to meet with a rep. I tried to tell him, via email, that all I needed was a test kit. He insisted that we had to meet in person, and that it would only take 15 minutes. We scheduled and rescheduled, mostly because I was annoyed about the waste of my time so I didn't prioritize the time slot. Maybe that sounds obnoxious, but I felt I was giving up my time, and therefore money, so he and his company could make more money. I was paying to make him money.

He asked if he should just bring along a lunch. I flat out said no. When he got to my office, he had a bag with lunch, from Panera. I refused it, and I'm really proud of this fact because I was super hungry.

The meeting took closer to 40 minutes, with most of my time spent sitting quietly and waiting while the rep talked on the phone to his IT people to figure out the new tablet system they were using. When the rep did talk to me, there was emphasis on the fact that the tests are covered by Medicare. Maybe that makes them more respectable. I don't know. I was also told I'd be receiving 10 more kits in the mail shortly. I was encouraged to use them on new patients, and asked when I do my chart review, mornings or evenings. I tried to explain that I don't have a high-volume practice, and it doesn't work that way. He also mentioned something about my EMR, and looked a bit nonplussed when I nodded towards my file cabinet and said, "I don't use one."

I did the test on a Monday. You do two buccal swabs, one per cheek, 10 seconds each. Then you put the swabs in an envelope and seal it. You add a consent form and the swab envelope to a larger envelope, seal that, and send it out via FedEx; it's already labeled and paid for overnight shipment. This sounded easy, but I wanted it sent off that day, my doorman wasn't sure when or whether FedEx would be in the building, and the closest FedEx drop box had been shut down some time ago. So I wandered around a bit until I found a place. More lost time and money. I had tried to get FedEx to come pick it up, but their site didn't work properly, so I had to wander.

Also, it was a little confusing, because you don't include an order form. Instead, you log in to your Genesight account and order the test there (you can print out and use a paper order form, but only if you don't use the online system). They give you an order number, but there's nowhere to record it on anything you mail in. I wrote it on the swab envelope anyway.

The results came in at 10 pm on Wednesday night, which is probably within the 36-hour turnaround time. I can't get into the clinical details, but the test results were about as helpful as I expected. The pharmacokinetic stuff was fair. They suggested some dose adjustments that might be helpful. They also yellow-boxed, meaning use with caution, a couple of meds that have caused serious side effects in this patient in the past.

The pharmacodynamic stuff was not particularly good. Meds were recommended that have been of little or no use in the past, or have actually had deleterious effects. I didn't expect much of this aspect, so I wasn't too disappointed. I already knew that genetic testing that's supposed to predict which drugs will be helpful is not ready for prime time, yet.

The best I can say for Genesight, so far, is that it made one potentially useful suggestion, and it seems to be covered by insurance. If it isn't, there's a decent financial assistance plan, with patients who make less than $50K per year paying $20 for testing. Will I use it again? Hard to say.


David Bowie: The Jean Genie







Wednesday, July 6, 2016

Keeping Mum

Hippocrates

This post was prompted by an article written by Ben Goldacre in The Guardian, Care.Data is in Chaos. It Breaks My Heart. The article is about the Health and Social Care Information Centre (HSCIC, in the UK), which, "admitted giving the insurance industry the coded hospital records of millions of patients." These records, according to Goldacre, were line-by-line individual records, and could be decoded by anyone with an inclination to do so. The purpose of this "gift", by the way, was for the insurance companies' actuaries to figure out premiums, based on likelihood of death (or illness, I assume).

Useless Fun Fact: Years ago, in another professional trajectory, I passed the first of the however many actuarial exams.

Anyway, then the HSCIC said it couldn't share documentation on this release of information, presumably because it's more important to protect insurance company privacy than patient privacy.

Summarily, in Goldacre's words, "...a government body handed over parts of my medical records to people I've never met, outside the NHS and medical research community, but it is refusing to tell me what it handed over, or who it gave it to, and the minister is now incorrectly claiming that it never happened anyway."

So I started to think about patient privacy, including a long-ago post, What, Exactly, Is HIPAA?, where I wrote that I would follow up with more information about privacy, and I never did. Incidentally, I've looked, and I still haven't found any contradictory information about what constitutes a HIPAA-covered entity, and I'm still convinced that I'm not one, because I don't bill patients electronically.

So I was wondering, what is the difference between privacy and confidentiality, as they relate to patients? And I found this article (Prater), which was helpful.

Basically, confidentiality is the, "...obligation of professionals who have access to patient records or communication to hold that information in confidence," while privacy is the, "...right of the individual client or patient to be let alone and to make decisions about how personal information is shared."

In other words, confidentiality is my professional, or at least ethical obligation to my patients, while privacy is a patient right I need to respect. From this I infer that technically, my patients do not have a right to confidentiality, and I don't have an obligation to protect patient privacy.

Here are some more details.

Confidentiality

Confidentiality goes back, at least, to the Hippocratic Oath:

And whatsoever I shall see or hear in the course of my profession, as well as outside my profession in my intercourse with men, if it be what should not be published abroad, I will never divulge, holding such things to be holy secrets.

It is a cornerstone of professional association codes of ethics. The AMA's Code of Ethics, Opinion 5.05, states:

The information disclosed to a physician by a patient should be held in confidence. The patient should feel free to make a full disclosure of information to the physician in order that the physician may most effectively provide needed services. The patient should be able to make this disclosure with the knowledge that the physician will respect the confidential nature of the communication. The physician should not reveal confidential information without the express consent of the patient, subject to certain exceptions which are ethically justified because of overriding considerations.

And don't forget about postmortem confidentiality:

All medically related confidences disclosed by a patient to a physician and information contained within a deceased patient’s medical record, including information entered postmortem, should be kept confidential to the greatest possible degree...At their strongest, confidentiality protections after death would be equal to those in force during a patient’s life. Thus, if information about a patient may be ethically disclosed during life, it likewise may be disclosed after the patient has died.

In reading this stuff, I also found that the AMA has a slightly different slant on the definitions of privacy and confidentiality:

In the context of health care, emphasis has been given to confidentiality, which is defined as information told in confidence or imparted in secret. However, physicians also should be mindful of patient privacy, which encompasses information that is concealed from others outside of the patient-physician relationship.


An example of support for the legal status of confidentiality, as the privileged communication between patient and doctor, can be found in Jaffee v. Redmond, where the, "...U.S. Supreme Court upheld a therapist’s refusal to disclose sensitive client information during trial." (Prater)


Effective psychotherapy...depends upon an atmosphere of confidence and trust in which the patient is willing to make a frank and complete disclosure of facts, emotions, memories, and fears. Because of the sensitive nature of the problems for which individuals consult psychotherapists, disclosure of confidential communications made during counseling sessions may cause embarrassment or disgrace. For this reason, the mere possibility of disclosure may impede development of the confidential relationship necessary for successful treatment. (p.10, Jaffee v. Redmond)


Privacy

There is no constitutional right to medical privacy. Rather, healthcare privacy rights, "...have been outlined in court decisions, in federal and state statutes, accrediting organization guidelines and professional codes of ethics." (Prater)

The big example is HIPAA. Subject to HIPAA, "Individuals are provided some elements of control, such as the right to access their own health information in most cases and the right to request amendment of inaccurate health information...However, in [the] attempt to strike a balance, the Rule provides numerous exceptions to use and disclosure of protected health information without patient authorization, including for treatment, payment, health organization operations and for certain public health activities..."(Prater)

I've been trying to read through the relevant parts of this ponderous document about HIPAA, and on pages 757 and following, in part § 164.512, Uses or Disclosures for which an Authorization or Opportunity to Agree or Object is not Required, I found what appear to be a number of these exceptions to the "privacy" provided by HIPAA. I think. I'm not a lawyer. Such as:

a) required by law
b) public health activities
c) victims of abuse, neglect, or domestic violence
d) health oversight activities
e) judicial and administrative proceedings
f) law enforcement purposes
g) about decedents, i.e. coroners, ME's, funeral directors
h) cadaveric organ, eye, or tissue donation purposes
i) research purposes
j) averting a serious threat to health or safety
k) specialized government functions
l) workers' compensation

There are many specifics, including disclosing information to a patient's employer, but I'll leave those as an exercise for the reader.

Point being, HIPAA does little to protect patient privacy. I think the value of HIPAA is that it attempts to delineate what patient privacy rights are, and that it has succeeded in making people aware that the privacy of their medical information is vulnerable. It does not solve this difficulty, which becomes hugely magnified by the use of electronic health records. This leads to consideration of one more important term, security, or the means by which patient information is protected, such as a locked filing cabinet, or encrypted data.
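Since "encrypted data" can sound abstract, here's a minimal sketch of what it means in practice, using the third-party Python cryptography package. The note text is obviously made up, and this is just an illustration, not a recommendation of any particular product:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the secret, which itself has to be protected
    cipher = Fernet(key)

    token = cipher.encrypt(b"hypothetical session note")  # unreadable without the key
    original = cipher.decrypt(token)                      # recovers the plaintext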

What I glean from all this is that there is at least a notion of a patient's right to privacy, which I should try to respect, but that my standards for protecting patient information are much higher than anything HIPAA has to say.









Wednesday, June 29, 2016

In the genes?




I'm starting to look into genetic testing to help my work with patients who have not responded well to multiple psychotropic medications. It feels like a desperate bid, but I'm not sure what other help I can offer.

There are three main testing products that I could find:

Genesight

Genecept Assay

and

Genelex


I have three main questions about these products:

1. What do they tell me?

2. How accurate/helpful are they?

3. How easy are they to use?


Genesight

Genesight seems to be the one mentioned most by the people I asked. Practically speaking, it involves a buccal swab sent to Genesight via prepaid FedEx, with access to results online 36 hours after the sample is received. It's covered by some insurance plans, and there's a financial assistance program. And it looks like, in order to try out the test, you need to speak with one of their representatives; there's no way to order online.

In terms of how well it works, their claim is, "Patients with uncontrolled symptoms who switch off of genetically discordant medications show the greatest reduction in depressive symptoms."

They also claim that, "70% of patients who have failed at least one medication are currently taking a genetically sub-optimal medication," and that, "GeneSight testing may help avoid drug-drug interactions and compounding side effects."

Finally, for patients younger than 18, Genesight can help in decisions about efficacy, tolerability, and dosing.

They cite a paper, A prospective, randomized, double-blind study assessing the clinical impact of integrated pharmacogenomic testing for major depressive disorder, the results of which were:

Between-group trends were observed with greater than double the likelihood of response and remission in the GeneSight group measured by HAMD-17 at week 10. Mean percent improvement in depressive symptoms on HAMD-17 was higher for the GeneSight group over TAU (30.8% vs 20.7%; p=0.28). TAU subjects who had been prescribed medications at baseline that were contraindicated based on the individual subject's genotype (i.e., red bin) had almost no improvement (0.8%) in depressive symptoms measured by HAMD-17 at week 10, which was far less than the 33.1% improvement (p=0.06) in the pharmacogenomic guided subjects who started on a red bin medication and the 26.4% improvement in GeneSight subjects overall (p=0.08).

You'll notice that they talk about "trends" without any statistics, and mean percent improvement showed no significant difference (p=0.28), even though they point out that improvement in the Genesight group was higher. Recall that p=0.28 means that, if there were really no difference between the groups, there would be a 28% chance of seeing a difference at least this large by chance alone - nowhere near the conventional 0.05 cutoff.

The "red bin" is a reference to the way Genesight presents its results, which I find easy to understand, if not entirely illuminating. This is an example:




I don't get the impression that the results give me information about which drugs will be helpful, as much as which drugs won't be harmful.

How does the test work? Genesight measures polymorphisms in 5 genes: CYP2D6; CYP2C19; CYP1A2; the serotonin transporter gene, SLC6A4; and the serotonin 2A receptor gene, HTR2A.

The CYP genes are clearly measures of rates of drug metabolism. A repeat-length polymorphism in the promoter of SLC6A4 has been shown to affect the rate of serotonin uptake. The implications of this fact are not clear to me, but according to Wikipedia, genetic variations in the SLC6A4 gene have resulted in phenotypic changes in mice, including increased anxiety. HTR2A influences serotonin transporter binding potential, and variations in the gene have been associated with variations in outcome in treatment with citalopram.

So the answers to my three questions, for Genesight, are:

1. It tells me which drugs are more and less safe and tolerable to use. And if I accept their conclusion that patients switched off red bin drugs improved significantly, then perhaps it tells me which drugs will be effective, but I'm skeptical about this part.

2. The results are less impressive than they'd like me to think.

3. Results are clear and easy to read. Turnaround time is good. Getting hold of a test is not that easy.



Genecept Assay

The Genecept FAQ page is much more informative than the Genesight pages. The test can be ordered online or by phone. It's covered by most insurance plans, and they have a patient assistance program. Turnaround time is 3-5 business days from receipt of the sample, also a buccal swab, and they provide expert staff to help interpret results.

The online order form also gives you the option of becoming a "Preferred Provider", which means they'll send patients who are looking for genetic testing to you.

As for function:

The Genecept Assay® report is intended to aid clinicians in making personalized treatment decisions tailored to a patient’s genetic background and helps to inform psychiatric treatments that:

Are more likely to be effective
Have lower risk for side effects and adverse events
Are dosed appropriately

The report consists of two pages.



So they look at more genes than Genesight, and they provide one report about what's safe to use, and another about what's potentially helpful. In all honesty, I don't have the energy right now to look up how believable their first-page markers are in terms of efficacy, and I think I would need their help to interpret these results, but they do provide more information than Genesight.

Answers:

1. Safety, tolerability, and efficacy

2. I'm too tired to check

3. Easy to get the test, harder to interpret results


Genelex

Genelex allows you to order tests online, too. They claim there is some insurance coverage (see * below), they have some fancy software that's supposed to be helpful in addition to their report, and they have a 3-5 day turnaround time.

Genelex restricts itself to CYP450 genes; it includes three that Genesight doesn't (3A4, 3A5, and 2C9), but doesn't include 1A2. This is a link to a sample result, which is too long to include as an image. And like Genesight, it's mainly about what is and isn't safe or tolerable to take.*

*Actually, I just learned on the FAQ page that Genelex also includes CYP1A2; NAT2; DPD enzyme; UGT1A1; 5HTT; and HLA-B*5701, but that these are generally not covered by insurance. I also couldn't figure out what data these additional tests provide.

Answers:

1. Safety, tolerability, maybe efficacy?

2. For the CYP tests, same as above, for others, I don't know

3. The software seems like overkill. The report is clear and moderately informative. You can order the test online.


That's it for this topic, for now, for me. I have yet to decide whether I'm going to use any testing.




Thursday, June 23, 2016

Virtually Certified

A while back, I wrote about HealthTap, a platform that allows people to ask doctors questions in almost real time. The company was also developing a system for virtual care, and I recently received an email suggesting I take an online course, worth 2 CME credits, to be certified in said virtual care.

I was curious about the progress in this field, so I did take the course, and am now officially certified. I also accidentally clicked an "okay" button, thinking it would allow me to print out my certificates. But it was the wrong button. The button I wanted said, "certificate only", but I didn't see it in time. The "okay" button indicated that I was agreeing to join the HealthTap network of physicians. I didn't really want that, but I suppose it doesn't matter, since I'm not going to do anything with it. So watch out for this if you decide to try it.

What mainly interested me in the course were the legal and regulatory issues related to virtual care. And I did learn a couple of things. For instance, you do need to be licensed in your patient's state in order to provide virtual care. I don't know if that means the state where the patient resides, or just the state the patient is in when seeing you. I would guess the latter, since I can treat patients in person who live in neighboring New Jersey or Connecticut, for example, where I'm not licensed, as long as they see me in my New York office.

Incidentally, you don't need to be licensed in any particular state to virtually treat patients outside the US.

The course referenced the Interstate Medical Licensure Compact, which, "Creates a new pathway to expedite the licensing of physicians seeking to practice medicine in multiple states. States participating in the Compact agree to share information with each other and work together in new ways to significantly streamline the licensing process."

A number of states have already enacted legislation that will allow this expedited pathway to proceed, and other states have introduced such legislation. Still others, such as New York, have done neither. One interesting point I noted is that in order to use this expedited system, you need to hold your primary license in a state that has already enacted the legislation. So if I want to provide virtual care to patients in Montana, which has enacted the legislation, I can't, because I'm licensed in New York, which hasn't.

That point is irrelevant, though, since there is currently no administrative process for applying for this pathway, although they state that there, "...will be soon."

The video mentioned that there are CPT codes for virtual care, ranging from 99441, a 5-10 minute telephone evaluation/management (E/M) consultation, to 99444, an online E/M service, to 99446, a 5-10 minute inter-professional consult, to 99490, more than 20 minutes of chronic care management. But most virtual care billing is done using the same CPT codes that would be used for a regular office visit, with a GT modifier, e.g. 99213 GT.
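Just to keep them straight, here are the codes mentioned in the video arranged as a simple lookup table; the little helper for tacking on the GT modifier is my own illustration, not anything from the course:

    # Virtual-care CPT codes mentioned in the course
    VIRTUAL_CARE_CODES = {
        "99441": "Telephone E/M consultation, 5-10 minutes",
        "99444": "Online E/M service",
        "99446": "Inter-professional consult, 5-10 minutes",
        "99490": "Chronic care management, 20+ minutes",
    }

    def with_gt_modifier(office_visit_code: str) -> str:
        # Most virtual-care billing reportedly reuses ordinary office-visit
        # codes with a GT modifier appended, e.g. "99213 GT".
        return office_visit_code + " GT"

    print(with_gt_modifier("99213"))  # 99213 GT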

Most importantly, 25 states with parity laws, plus Washington DC, have enacted "...legislation requiring private insurers to pay for Virtual Care at the same level as equivalent in-person services, provided the care is deemed medically necessary."

According to this document, in New York, "The law requires telehealth parity under private insurance, Medicaid, and state employee health plans. The law does restrict the patient setting as a condition of payment."

This image depicts which states make it easy to provide virtual care (A is best), and which make it difficult (I'm not sure what the * means).




Aside from that, the course touts the virtues of virtual care, claiming that in some ways it's superior to in-person care, and giving examples, such as the fact that patients have quicker access to virtual care. They also claim that many, if not most, common complaints can be treated virtually, and that a lot of monitoring, e.g. glucose, can be done at a distance. In addition, they mention the use of wearable devices for tracking activity, etc., and up-and-coming virtual examination tools, like stethoscopes.

The video is careful to note situations that require in-person treatment, such as a wheezing infant, or chest and jaw pain in a 68-year-old man.

The presentations themselves were a bit comical. For a company that's promoting care via video-conferencing, they should probably have gotten better people to present in their video. One guy had shifty eyes, another had drooping eyelids and looked like he was falling asleep and forgetting what he needed to say, and yet another looked like his shoulders were hiked up to the point of having no neck.

While I definitely prefer in-person work, I can see where psychiatry, and especially psychotherapy, is amenable to virtual care, probably more so than specialties that require a physical examination. But given the regulatory and legal limitations, I'm not ready to go there yet.






Monday, June 13, 2016

Orlando

The mass shooting in Florida has left me nearly speechless. But I want to post something because silence is the wrong response. 

Maybe I could write something useful about hate crimes or terrorism or psychotic enactments, but that would imply I have some understanding of this tragedy, and I don't. 

All I have is sadness, and my heart goes out to the victims and their loved ones. 

Tuesday, June 7, 2016

Film Review-In A Town This Size

I recently learned about a documentary, In a Town this Size, by photographer and filmmaker Patrick V Brown. It tells the story of Bartlesville, a wealthy Oklahoma oil town in which a pediatrician sexually abused many children over the course of many years. Brown himself was one of the victims.

I'm not really going to make this a review, as in, what did I like, what didn't I like, whether it's worth seeing. It is worth seeing, so please do. (As far as I know, In a Town this Size is available on iTunes, on DVD from Netflix, and on Amazon Prime.) It addresses very important issues, aside from the obvious one of pedophilia. It addresses what it means to move from being a victim to being a survivor, and to finding support, both within oneself and externally. It's also extremely well made, although quite simple: just interviews, with a few interspersed pieces of footage and photographs. Mainly, I'm going to relate what it made me think about.

Dr. Bill Dougherty was a pediatrician and a prominent citizen in Bartlesville. He was friendly with the families of many of his patients, and was welcomed into their homes, and joined them on family vacations. Many of the adults considered him an "odd duck", because he had never married. Some assumed he was gay, but in that time and place, this was not a topic for discussion.

In the film, Brown interviews people who, as children, were abused by "Dr. Bill". He also interviews their family members, including his own parents, as well as a few lawyers and therapists. Everyone who was interviewed was articulate and thoughtful. In part, this is a product of Brown's skillful interviewing: sensitive but appropriately direct. But I suspect it's also a product of the innate selection bias in who volunteers to be interviewed for a film like this, and in which parts of the interviews made the final cut.

But the interviews did drive home the point for me that Bartlesville is a wealthy, educated town. There is footage of the Price Tower, the only skyscraper designed by Frank Lloyd Wright, commissioned by Harold C. Price of the H. C. Price oil company. There is also footage of the home of Harold Price, Jr., which looks like a Wright structure to me. And Harold Price, Jr. is one of the interviewed parents.

The status of the town is what, perhaps, informed the title of the film. I couldn't tell if the idea was, "Who would believe something like this could happen in such a small town with so much money and power?" or, "Who would believe everyone didn't know about what was happening?" I suspect the ambiguity is intentional.

Which brings me to the topic of denial. Brown, himself, told his parents about the abuse after it had happened several times, starting when he was around 6 years old. But a child that young has neither the language nor the emotional wherewithal to describe sexual abuse, and the most he could come up with was, "He leaves his hands down there too long."

In their interviews, Brown's parents comment on their reactions. His father seemed to think he was talking about a normal genital exam, which is uncomfortable and embarrassing for everyone. His mother said that strange as it sounds now, maybe she'd heard of the word, "pedophile", but she couldn't imagine it applied to her family.

I was a bit outraged by their responses. I realize this is anachronistic of me, and their reactions to Brown's revelation were typical for the time, but even if you don't believe your child, don't you wonder? Aren't you at least a little suspicious? Don't you watch to see what your child's reaction is after the next pediatrician visit, or don't you insist on being present for the exam?

In fact, all but one of the parents interviewed say something along the lines of, "This sounds stupid now, but..."

Upon hearing that Dougherty had been accused of sexually abusing children, several of the parents went and comforted him. One ex-marine said he thought, "He's my friend. He'd never do that to me or my children. Besides, he knows I'd kill him."

Years later, after this man's sons revealed the abuse, his wife spent countless hours looking at photographs of her children from that period. The younger son's eyes are haunted. In retrospect, she says, she knew.

By all appearances in the film, Brown has a good relationship with his parents, and is working with his father, who is a lawyer, on changing the laws about the statute of limitations for reporting this kind of abuse. But I found myself outraged once more when he asked his father, "What made you finally believe me?"

The father's answer was that, as an attorney, he had gotten letters from other survivors of Dr. Bill's abuse, asking for legal help. He had to hear it from someone else in order to believe it.

Several of the survivors note that they don't feel angry at Dougherty. Some posit that this is because they don't have the self-esteem to generate the anger. I wondered if their anger is threatening to them because it's not just towards Dougherty, it's towards their families, for not protecting them.

Based on the families interviewed, it seems like once it became clear, years after the fact, that the accusations of sexual abuse by Dr. Bill were true, the families did become very supportive of their children and of each other. The film, itself, is a testament to that.

My own reaction to Dougherty was interesting.  Generally, after I get over the initial horror of a story like this, my mind goes to, "What could possibly have happened to this man to have turned him into such a monster?" There's some sympathy involved, even if the crimes are inexcusable.

But I really don't have that much sympathy for Dougherty. There's something terribly opportunistic and psychopathic about him. Some of the survivors suggested that there was premeditation in his choice of pediatrics. My first thought about that was skeptical. I thought he probably felt an irresistible pull towards pediatrics, even though he knew this was a problem for him, and then rationalized the choice by convincing himself that he understood children, and that that would make him a good doctor. Apparently, when he wasn't abusing his patients, he was a good pediatrician.

As the film proceeded, I was less convinced by my argument. Atypically for a pedophile, he abused both boys and girls, although there seemed to be a predilection for boys. His patterns of abuse also varied, and his choice of behavior seemed to depend on what he thought he could get away with. Several men report having been masturbated by him on the examining table. One woman reports that, while on vacation, he took her on his lap and tried to get her to masturbate him. On that same vacation he paraded in front of her and her sister in his underwear, exposing his penis. It also seems that he sodomized a boy he knew to have psychiatric problems, which to me sounds like he thought the boy's story wouldn't be credible.

When Harold Price, who I mentioned above, visited him to offer support after he had been accused of the sodomy, Dougherty said something like, "That's absurd. I would never do that. Besides, he was ugly."

Such was Dougherty's power over these children that Brown seems to have been in the minority in telling his parents about the abuse. Most of the kids didn't say anything to anyone. They didn't feel threatened, and they weren't told not to say anything; they just didn't.

Incidentally, after the truth about his abuse came out, the statute of limitations for criminal charges had run out, but he did lose his medical license. However, he is still alive and living in Bartlesville, leaving his home only in disguise. He recently got married, for the first time, to a woman, at age 81.

The final point that struck me was about forgiveness, and its different meanings. The ex-marine father, who is a religious Christian, was torn for a long time between killing Dougherty and forgiving him. After reading a lot of scripture, and a lot of soul searching, he decided to forgive him. He says it is easier said than done. What puzzled me was the man's description of seeing Dougherty at a church with a woman and her sons, and thinking, There go those kids down the tubes. Does his forgiveness preclude speaking out against Dougherty to protect those children?

Brown's mother says she can't forgive him because he's shown no remorse. She and others have written him many letters, and he has never responded. And Brown, himself, says he's not interested in forgiving Dougherty. It made me think about whether forgiveness is more for the one being forgiven, or the one forgiving.

In a Town this Size tells a horrifying story in a sensitive way. I think this approach reaches further than a more graphic, less forward-looking film would. The real strength of the film lies in the question Brown asks all the survivors, "How has the abuse impacted your life?" This simple question places the emphasis on where the survivors are now, and where they're headed, which is why this is a film about survivors, not victims.

One of the most powerful scenes takes place towards the end, when Brown goes to Bill Dougherty's house to confront him. He knocks on the door-forcefully, not timidly. We hear a dog bark, but no one answers. Brown paces back and forth with his hands on his hips and knocks again. Still, no answer.

There is tremendous pathos in witnessing the courage it must have taken Brown to try to confront his abuser, only to be disappointed. But even if Brown didn't succeed in confronting the external version of Dougherty, I hope that In a Town this Size did succeed in helping him confront the internal version of the monster that is Dr. Bill.





Wednesday, June 1, 2016

A Tale of Two Hospitals

This is the story of two New York City hospitals, Mount Sinai and Beth Israel.

According to Wikipedia, Mount Sinai was founded in 1852 as the Jews' Hospital, in response to discrimination against Jews by other hospitals, which would not treat or hire them. It is one of the oldest teaching hospitals in the US.

Also according to Wikipedia:


Beth Israel was incorporated in 1890 by a group of 40 Orthodox Jews on the Lower East Side, each of whom paid 25 cents to set up a hospital serving New York's Jewish immigrants, particularly newcomers. At the time New York's hospitals would not treat patients who had been in the city less than a year. It initially opened a dispensary on the Lower East Side. In 1891 it opened a 20-bed hospital and in 1892 expanded again and moved into a 115-bed hospital in 1902. In 1929 it moved into a 13-story, 500-bed building at its current location at the corner of Stuyvesant Square. It purchased its neighbor the Manhattan General Hospital in 1964 and renamed the complex Beth Israel Medical Center, located at First Avenue and 16th Street in Manhattan.


According to other sources I've heard by word of mouth, Beth Israel Hospital was founded in response to discrimination by Mount Sinai against poor Jewish immigrants on the Lower East Side. Mount Sinai would not treat them, but restricted its Jewish patients to middle- and upper-class Jewish immigrants from Germany.

I like to believe that this is the reason for the inscription on the entrance to the original Stuyvesant Square building:



It's a little hard to see, and it's in Hebrew, but roughly translated (by me) it reads:

Welcome! Welcome! From far and near. So says the Lord and his Healers. (Isaiah 57:19)

Hospitals in NYC are like 7th graders. They merge, split up, merge again with a different hospital, move around. For a while, Beth Israel belonged to a group of hospitals known as "Continuum," which included Albert Einstein Hospital, from which Beth Israel got its medical school affiliation. A few years back, Columbia Presbyterian joined up with New York Hospital Cornell, and became New York Presbyterian. These two hospitals are in very different parts of Manhattan, so the merger gave them access to a huge group of patients from diverse neighborhoods. My guess is that this was done for financial reasons. Then Mount Sinai decided to be even bigger, and subsumed Beth Israel, as well as St. Luke's/Roosevelt (these two had merged years previously, and were formerly affiliated with Columbia).

For a little orientation, this is a map:






This gives you an idea of the current relationship statuses, which I've color-coded. Mt. Sinai is the white star. Beth Israel, now called Mt. Sinai Beth Israel, is all the way downtown in red. It is the southernmost hospital in Manhattan.

Well, in case you didn't read it in the NY Times, Mt. Sinai Beth Israel Hospital in Manhattan Will Close to Rebuild Smaller.

The 825-bed Beth Israel will be closed over the next four years, to be replaced by a 70-bed hospital somewhere nearby, with an ER a few blocks away. Residents will be dispersed to other Mt. Sinai hospitals, and union employees will be found new jobs. And according to an email I got from Mt. Sinai, where I'm affiliated, everyone else will be assisted in finding new employment. I suspect that's not precisely what will happen.

Ken Davis, President and CEO of the Mt. Sinai Health System (and former chair of psychiatry), explained that health care is too expensive, that Beth Israel lost $115 million last year and stood to lose $2 billion in the next 10 years, that hospitals are no longer the most efficient vehicles for delivering care, etc.

They plan to have 16 outpatient practice locations and 35 stand-alone operating and procedure rooms.

Interestingly, from what I can tell, they plan to keep the psychiatry building, and expand its services. I assume the department actually makes some money, although I thought that was because of the rapid turnover on the Dual Diagnosis unit.

I worry about the impact on the community. There used to be three hospitals in Beth Israel's area: Beth Israel itself, Cabrini, and St. Vincent's. Cabrini closed in 2008. St. Vincent's, much larger and beloved by the community, closed in 2010. It's been replaced by some pretty fancy condos. Beth Israel is also situated in an extremely desirable neighborhood.

Going from 825 beds to 70 means cutting more than 90 percent of the inpatient capacity. It may look like no big deal-there are dozens of hospitals throughout NYC. But there are 8.5 million people living in NYC, plus people regularly come in for treatment from neighboring New Jersey and Connecticut.

Dr. Davis may be correct in stating that hospitals are not the future of medicine. But I don't believe 70 inpatient beds are adequate to the needs of all of lower Manhattan. We seem to have come full circle since 1890.

The word I translated above as "Welcome!" is Shalom, and the Hebrew word has multiple meanings. It can mean welcome, peace, and hello. It can also mean, "Goodbye."





Tuesday, May 24, 2016

ODD Clinical Trial

This post is a sort of advertisement, except that no one's getting paid anything. A colleague of mine and his group just got a 2-year grant to conduct a trial of Regulation-Focused Psychotherapy (RFP) for the treatment of Oppositional Defiant Disorder in children ages 5-12. This is the flyer:




That's the advertisement part. I think it's a great idea. But just to be clear, not only am I not being paid, I'm not involved in the study in any way except feeling pleased about it, and writing this post.

Why do I think it's a great idea? The American Academy of Child and Adolescent Psychiatry has a brochure about ODD. It describes treatment for ODD, which includes a combination of Parent-Management Training Programs and Family Therapy, Cognitive Problem-Solving Skills Training, Social-Skills Programs and School-Based Programs, plus or minus medication.

These are all useful tools, but none of them addresses the underlying affects, and difficulty in regulating these affects, that children with ODD experience. That's where RFP comes in.

The group conducting the study recently published the Manual of Regulation-Focused Psychotherapy for Children (RFP-C) with Externalizing Behaviors: A Psychodynamic Approach.




In it, they describe the way, "RFP-C enables clinicians to help by addressing and detailing how the child’s externalizing behaviors have meaning which they can convey to the child," and more specifically, that RFP-C can:
  • Achieve symptomatic improvement and developmental maturation as a result of gains in the ability to tolerate and metabolize painful emotions, by addressing the crucial underlying emotional component.
  • Diminish the child’s use of aggression as the main coping device by allowing painful emotions to be mastered more effectively.
  • Help to systematically address avoidance mechanisms, talking to the child about how their disruptive behavior helps them avoid painful emotions.
  • Facilitate development of an awareness that painful emotions do not have to be so vigorously warded off, allowing the child to reach this implicit awareness within the relationship with the clinician, which can then be expanded to life situations at home and at school.

That's my pitch. So if you know anyone in the New York City area who could benefit from this trial, whether child, parent, educator or clinician, please get this information to them.

Thanks.




Sunday, May 8, 2016

Outside In

In the context of working on my own practice website, I found a NY Times article, Inner Peace? The Dalai Lama Made a Website for That, compelling reading.

The website is, Atlas of Emotions, and it's not really about inner peace. It's more about the Dalai Lama's notion of emotions as reactive internal events that prevent inner peace, combined with information about the five emotions considered universal by the 149 experts surveyed for this purpose.

The emotions are:

Anger
Fear
Disgust
Sadness
Enjoyment

The site was conceived by the Dalai Lama as a "map of the mind", and developed by Dr. Paul Ekman (for $750,000), who conducted the survey, and has done pioneering work in nonverbal behaviors, especially facial expressions. He now has a company called the Paul Ekman Group, or PEG, which will teach you, for a fee, to read people's expressions and determine if, for example, they are lying. He was a major consultant for Inside Out, the Pixar movie that illustrates the emotional life of a girl named, Riley (I assume based on the expression, "Living the life of Riley," meaning the good life). He was also consultant and inspiration for the main character on the TV series, "Lie to Me", which I know nothing about.

The site is primarily visual, with the imagery designed by a company called, Stamen, that creates data visualizations. It's interesting that the colors used to depict the five emotions on the site:


match the colors of the corresponding characters in Inside Out:

I probably shouldn't be including either of these images without permission, but the Disney-fication was just so striking. Then again, we do associate colors with feelings, like red with anger, blue with sadness, and green with disgust, and red and green, at least, are related to changes in skin color that occur with their associated emotions. I don't know about purple with fear or yellow/orange with enjoyment.


The way the site works is you land on the home page, with those five circles of emotion, which are called, "continents". Remember, this is supposed to be a map. If you click on a continent, you get a brief description. For example, Sadness brings up, "We're saddened by loss."

You also get a menu to the right which lists, Continents, States, Actions, Triggers, Moods, and Calm. If you go to States, after you've clicked on Sadness, you get a graph of various states related to sadness, with overlaps, from least intense to most. The least intense for Sadness is Disappointment, "A feeling that expectations are not being met." The most intense is Anguish, "Intense agitated sadness."

There are left and right arrows to switch to other basic emotions, but also a down arrow, corresponding to the menu on the right, with more about the emotion you're looking at. The next one down is Actions, with another visual including a range of possible actions for each given state. For anguish, you can seek comfort, which is considered a constructive action. You can mourn, which is ambiguous. And you can withdraw, which is destructive.

This is a good illustration of one of the main limitations of the site-that it oversimplifies, but that probably makes it more widely accessible.

The next level down is Triggers, which are either universal, like losing a loved one, or learned, like perceiving a loss of status.

And the next level down is Moods, the "longer lasting cousins" of emotions. For Sadness, the corresponding mood is Dysphoria.

That's as deep as the graphics go. The only thing left is Calm, which you access from the right-hand menu. It has nothing but a short description:

Experiencing Calm

A calm, balanced frame of mind is necessary to evaluate and understand our changing emotions. Calmness ideally is a baseline state, unlike emotions, which arise when triggered and then recede.
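
For myself, I find it easiest to picture the structure the site walks you through as a little data model. Here's a hypothetical sketch in Python-this is not the Atlas's actual code, and the class and field names are my own-just to show how continents, states, actions, triggers, and moods fit together, using the Sadness examples above:

# A hypothetical sketch of the Atlas's hierarchy. Not the site's actual code;
# the class and field names are mine.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    name: str
    kind: str  # "constructive", "ambiguous", or "destructive"

@dataclass
class State:
    name: str
    description: str
    intensity: int  # relative intensity, from least (1) to most
    actions: List[Action] = field(default_factory=list)

@dataclass
class Continent:
    emotion: str
    description: str
    states: List[State] = field(default_factory=list)
    triggers: List[str] = field(default_factory=list)  # universal or learned
    mood: str = ""  # the emotion's "longer lasting cousin"

# The Sadness continent, filled in with the examples described above.
sadness = Continent(
    emotion="Sadness",
    description="We're saddened by loss.",
    states=[
        State("Disappointment", "A feeling that expectations are not being met.", 1),
        State("Anguish", "Intense agitated sadness.", 5, actions=[
            Action("seek comfort", "constructive"),
            Action("mourn", "ambiguous"),
            Action("withdraw", "destructive"),
        ]),
    ],
    triggers=["losing a loved one (universal)", "perceiving a loss of status (learned)"],
    mood="Dysphoria",
)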

The only other feature of the site is a link to the "Annex", where you can find the scientific basis for the work, some more complicated definitions, the signals of emotional display, and a page of "Psychopathology", which lists various DSM diagnoses related to each emotion.

I wasn't thrilled with this page. For one thing, I disagreed with some of the categorization. For example, as an anxiety disorder, OCD was listed under Fear. But etiologically, at least from an analytic standpoint, OCD is more about a way of dealing with aggression, so I would have listed it under Anger. It also lists Mania under Enjoyment, with a qualification about it being pathological enjoyment. But I don't think that's what's actually meant by the term, Enjoyment.

And this page doesn't mention the DSM by name, even though it includes diagnoses like Disruptive Mood Dysregulation Disorder (DMDD).


Overall, I have mixed feelings about the Atlas of Emotions. On the one hand, it recognizes that we usually don't know why we feel what we feel, or do what we do, and that's useful to know. To quote the NY Times quoting the Dalai Lama:

“We have, by nature or biologically, this destructive emotion, also constructive emotion,” the Dalai Lama said. “This innerness, people should pay more attention to, from kindergarten level up to university level. This is not just for knowledge, but in order to create a happy human being. Happy family, happy community and, finally, happy humanity.”

On the other hand, the goal is a calm state:

“When we wanted to get to the New World, we needed a map. So make a map of emotions so we can get to a calm state.”

I think this calm state is supposed to be an absence of emotion, either good-feeling or bad-feeling, a Buddhist ideal, so emotion is viewed as the enemy:

“Ultimately, our emotion is the real troublemaker,” he said. “We have to know the nature of that enemy.”

When I read this, I was reminded of the talk I attended, which I wrote about in Laughing Rats, where Jaak Panksepp noted that, "Most learning takes place through affective shifts." So if we contain our emotions, do we prevent ourselves from learning new things?

And in that same talk, Jean Roiphe noted that, "Ego functioning often involves 'taming' certain affects, especially through thought and language, but it also involves intensifying some affects, so that people can feel truly alive. A full human life can't be reduced to an all-or-nothing switch of feeling in response to external events."

Also, I'm not sure "calm" isn't an emotion.

Maybe I just have trouble with this because I'm so steeped in a culture of neurotically exaggerated emotions that the ideal of inner peace isn't just unattainable, it's laughably unapproachable-which, for me, quickly turns into undesirable.