Wednesday, October 30, 2013

No Worries, Lurasidone


This just in: in July, lurasidone, originally approved for schizophrenia, was approved by the FDA for the treatment of bipolar I depression.

You can now prescribe Latuda for your depressed bipolar patients. I can't think of "Latuda" without hearing "Latuda Matata."



Here's a link to the AJP in Advance abstract for Lurasidone Monotherapy in the Treatment of Bipolar I Depression: A Randomized, Double-Blind, Placebo-Controlled Study. The primary outcome measure was the MADRS, and the study was sponsored by Sunovion.

"Lurasidone treatment significantly reduced mean MADRS total scores at week 6 for both the 20–60 mg/day group (−15.4; effect size=0.51) and the 80–120 mg/day group (−15.4; effect size=0.51) compared with placebo (−10.7)"

According to a Wikipedia article that may be wrong,

"An effect size calculated from data is a descriptive statistic that conveys the estimated magnitude of a relationship without making any statement about whether the apparent relationship in the data reflects a true relationship in the population. In that way, effect sizes complement inferential statistics such as p-values."

I'm not sure what it means for an effect size to be 0.51, or for the placebo score not to be recorded with an effect size.
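For what it's worth, an effect size like this is usually Cohen's d: the difference between the group means divided by a pooled standard deviation. That would also explain why placebo doesn't get its own effect size; d describes the comparison between the groups, and placebo is the reference group. Here's a rough sketch; the pooled SD below is invented (the abstract doesn't report it), chosen so the arithmetic lands near 0.51:

```python
# Sketch: Cohen's d from the abstract's numbers.
drug_change = -15.4     # mean MADRS change, lurasidone (from the abstract)
placebo_change = -10.7  # mean MADRS change, placebo (from the abstract)
pooled_sd = 9.2         # ASSUMED for illustration; not reported in the abstract

d = (drug_change - placebo_change) / pooled_sd
print(abs(d))  # ~0.51
```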

Once again, I went to ClinicalTrials.Gov, and lo and behold, I found:

Lurasidone HCl - A 6-week Phase 3 Study of Patients With Bipolar I Depression (PREVAIL3)

Now, I don't know for sure if this is the same trial they're talking about, but since it was last verified in August 2013, it seems likely (and it was first received in January 2011).

And miracle of miracles, the results are listed.

[Image: results table from the ClinicalTrials.gov entry]
This may be a little hard to read, but the primary outcome result for lurasidone 20-120mg is -11.8, and for placebo, -10.4. And the p value is 0.176, which, last I checked, is larger than 0.05.

The abstract had the change for lurasidone as -15.4, but split into two separate dosage groups, 20-60mg and 80-120mg. This appears to be a subgroup analysis, where you chop up your results into smaller groups after the study is over. That's a no-no, because it can yield spuriously positive results for individual subgroups (I'll get to how this works in a later post).

So the results on clinicaltrials.gov seem to indicate that there was no significant difference.

It turns out, there was another trial listed on clinicaltrials.gov, entitled Lurasidone - A 24-week Extension Study of Patients With Bipolar I Depression. This study was first received in March 2009, and last verified in February 2013. There were no results reported for this study on the site.

So we have a 24-week study, conducted earlier than the 6-week study, with no results reported. And we have a 6-week study with conflicting results, depending on where you look.

I could be wrong about how to interpret this data. What do you think?


Tuesday, October 29, 2013

Speedy Delivery

I've been reading Ben Goldacre's Bad Pharma. It's even more disheartening than Let Them Eat Prozac. I believe he has a newer edition out, but the one I'm reading was published in 2012, so it's still fairly recent. The problems with how drugs are regulated are mind-boggling.

As an example, the FDA requires only 2 studies for a new drug to get approval. These are drug vs. placebo studies. Basically, the drug company has to show that their new drug was better than placebo for whatever its indication is. It does not have to prove that it's any better than existing drugs with the same indication. And these studies don't have to last very long.

The time to approval has been decreasing, too. In the '80s, ACT UP was instrumental in getting laws passed to push not-yet-fully-vetted drugs through the system, so that people who were dying of AIDS could have access to potentially life-saving meds. And that makes sense: if you know you're going to die without the drug, and with the drug you might live a little longer, then it's worth the risk.

But apparently, the legislation that was created for this purpose is now being used to get other, not-as-necessary drugs approved in an expedited way. One requirement the FDA makes is that, when safety is in question, the drug company promise to do a follow-up study to determine real-world effects. This can happen when, for example, a surrogate endpoint is used to establish the efficacy of the drug. Say the drug has been shown to lower blood pressure. It's still not clear whether it actually improves mortality, or whether it's harmful with longer-term use than the initial study could capture.

Well, the drug companies promise, and largely don't follow through. And the FDA doesn't do much to follow up.

Like I said, Bad Pharma was published in 2012. So this should be old news, with perhaps some improvement since then.

But today, Reuters published "Study questions FDA's shorter drug approval times", referencing a study published yesterday in JAMA Internal Medicine.

It found that expedited drugs underwent a median of 5.1 years of clinical testing before being approved, compared with 7.5 years for those that underwent a standard review. But in many cases safety monitoring trials that were supposed to be conducted after the products were approved were either not conducted, not completed, or not submitted to the FDA...

...Of the (20) drugs studied by Moore and Furberg in 2008, the FDA required 85 follow-up trials to monitor for safety. By 2013, only 40 percent of those studies had been completed.

I'm really not sure what the FDA expects when it asks multi-billion dollar Big Pharma to promise to do follow-up studies. Goldacre cites one case in which the FDA followed up so many years later that the drug was off patent, and the pharmaceutical company didn't care if it was removed from the market.

Why not have some teeth? Hey! You want your expensive drug expedited through the approval system? Okay! Chuck over a few million dollars to an independent agency, which will then offer a grant to have the follow-up study done by independent investigators. The drug will go on the market, but a couple years down the line, if that study determines that the drug is less safe than you claimed, you'll be subject to hefty fines, and possible criminal charges.

I mean, seriously? "Big Pharma, pretty please will you do a follow up study?" What are they thinking?




Sunday, October 27, 2013

Statistically Writing: Normal Distribution

I know it's been a while, but it's time to pick up again with our statistics education.

Last time, we learned about standard deviation and variance. Let's review.
Note: we're going to talk about population statistics, not sample statistics, in this post.

Variance is denoted by sigma squared, where $\mu$ is the mean:

$$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2$$


You'll hopefully recall that variance is a measure of how far the data in a collection lie from the mean. But it's a little awkward because of the squared units, which leads us to:

Standard Deviation, denoted by sigma:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2}$$

Standard Deviation is the square root of Variance, which yields a measure of distance from the mean that's not squared, and therefore has the same units as the mean.

So if you're considering the number of cockroaches in NYC apartments, let's say the mean is 50 (probably more, but yuck). The Variance would be in units of cockroaches squared, while the Standard Deviation would be in units of cockroaches.
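If notation isn't your thing, here's the same computation as a minimal Python sketch, with made-up cockroach counts:

```python
# Population variance and standard deviation, by hand.
counts = [30, 45, 50, 55, 70]  # hypothetical cockroach counts in 5 apartments

mu = sum(counts) / len(counts)                               # 50.0 cockroaches
variance = sum((x - mu) ** 2 for x in counts) / len(counts)  # 170.0 cockroaches^2
sd = variance ** 0.5                                         # ~13.0 cockroaches

print(mu, variance, sd)
```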

Normal Distribution

This graph depicts a Normal Distribution, with the center line equal to the mean:

[Figure: bell-shaped normal distribution curve, with a vertical center line at the mean]
And this is the function that describes the graph:

$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
Notice that the function is described as "p of x". Normal Distribution starts with an "N". It's sometimes referred to as a bell curve, because of the shape, and bell starts with a "b". So why a "p"?

"P" is for probability.

A watermelon can weigh anywhere from a few pounds to 20 pounds, but say that, on average, watermelons weigh 10 pounds. If you look at a graph of the weights of watermelons, it will look like the graph above, and the center line will be 10 pounds, the mean. In other words, the vast majority of watermelons will weigh 10 pounds, plus or minus maybe 5 pounds. And a few crazy melons will weigh 20 pounds, or 1 pound.

So that graph is the likelihood, or probability, that a given watermelon will have a certain weight. If I go to Whole Foods and pick up an average sized watermelon, the chance, or probability that it weighs around 10 pounds is very high.

However, the chance that it weighs exactly 10 pounds is not high. In fact, it's zero. Why? Because it's impossible to be that precise. It could weigh 10.00000001 pounds, for example.

The way you need to think about it is, what is the probability that a given melon will weigh between 9.5 pounds and 11.5 pounds? A range. And the probability is given by the area under the curve in that range, which, if you recall your calculus, is the integral of p(x) from 9.5 to 11.5.

The fact that the probability is the area under the curve helps clarify why the chance that a watermelon weighs exactly 10 pounds is zero. Because the area under the curve at 10 pounds, or at any individual point, is the area of a line, which is zero. A line has no width.
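Here's that calculation as a sketch, using scipy's norm.cdf, which gives the area under the curve to the left of a point (the watermelon mean and s.d. are assumed for illustration):

```python
from scipy.stats import norm

mu, sigma = 10, 2  # assumed: mean 10 lb, s.d. 2 lb

# P(9.5 <= weight <= 11.5) = area under the curve between 9.5 and 11.5
prob = norm.cdf(11.5, mu, sigma) - norm.cdf(9.5, mu, sigma)
print(prob)  # ~0.37

# "Exactly 10 pounds" is the area over a single point, which is zero:
print(norm.cdf(10, mu, sigma) - norm.cdf(10, mu, sigma))  # 0.0
```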

A note on probability:

The probability of ANYTHING is between zero and one, which is the same as 0% and 100%. It's important to know this because the graph of a normal distribution extends to plus and minus infinity, so the total area under the graph, which is the integral of that scary looking function, p(x), from minus infinity to plus infinity, is one.
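You can verify this numerically. A sketch, again with assumed watermelon numbers:

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 10, 2.5  # assumed watermelon mean and s.d.

def p(x):
    # The normal probability density function from above.
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

total, _ = quad(p, -np.inf, np.inf)  # integrate over the whole real line
print(total)  # 1.0, up to numerical error
```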

Many real-life statistics are normally distributed (that is, they can be described by a symmetric bell-shaped curve). For example, heights of 3rd graders, or weights of watermelons. But not all statistics are normally distributed. There can be sets of data with many outliers. Or there can be sets of data that follow different distributions. But most of the data we look at in medical studies are normally distributed, and the statistical analysis tools we're used to reading about, ANOVA, t-test, paired t-test, are all designed for use with a normal distribution.

These are some of the defining features of a normal distribution:

* Mean = Median = Mode
* Symmetry: the left side of the graph mirrors the right side of the graph, and because of this:
* 50% of the graph is to the right of the mean, and 50% of the graph is to the left of the mean.

Let's do an example.

I rolled 2 (virtual) dice 100 times, and these are the totals:

7 4 2 2 6 7 8 7 6 5
4 7 12 7 4 8 11 7 6 6
6 4 10 10 6 7 7 11 7 2
5 4 2 8 7 12 6 12 8 4
6 4 7 3 6 5 12 5 12 6
8 7 9 7 2 7 5 6 6 6
7 11 12 12 7 6 8 9 3 5
7 7 4 10 5 7 7 6 8 9
9 7 11 8 5 8 6 8 7 9
6 8 11 4 11 7 5 3 6 6

The average total = 6.8.
And the standard deviation = 2.5
(I let my spreadsheet compute these for me)
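If you want to play along at home, here's a sketch that re-runs the experiment in Python (your numbers will differ, since the rolls are random):

```python
import random
import statistics

# 100 rolls of two virtual dice.
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100)]

print(statistics.mean(rolls))    # should land near 7
print(statistics.pstdev(rolls))  # population s.d., usually near 2.4
```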

Now, we probably all know that the most likely total of two dice will be a 7, but let's just check:

#2's:    5
#3's:    3
#4's:    9
#5's:    9
#6's:   19
#7's:   23
#8's:   11
#9's:    5
#10's:  3
#11's:  6
#12's:  7

Yes, there are twenty-three 7's, making 7 the most frequent result. What we're looking at here, then, are frequencies: how often each individual total was rolled. And remember, these frequencies represent probabilities. So the probability, or likelihood, of rolling a 7 is 23/100, or 23%, and the probability of rolling a 4 is 9/100, or 9%. In the future, if I choose to bet on a pair of dice, based on the data above, I'd have a 9% chance of rolling a 4. (There are better and simpler ways to compute the odds on dice, BTW.)
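In code, turning the tallies above into empirical probabilities looks something like this:

```python
# The tallies above, as frequencies and empirical probabilities.
freq = {2: 5, 3: 3, 4: 9, 5: 9, 6: 19, 7: 23, 8: 11, 9: 5, 10: 3, 11: 6, 12: 7}
n = sum(freq.values())  # 100 rolls

for total, count in freq.items():
    print(total, count, count / n)  # e.g., 7 -> 0.23, 4 -> 0.09
```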

If we graph these frequencies, we get:

[Figure: bar chart of the frequency of each total, 2 through 12]
Notice, this doesn't quite look like a normal distribution, even if you draw in the curve along the tops of the bars and smooth it out. There are several reasons for this.

1. There are only 100 data points. That's not bad, but the larger the sample size, the more the graph will look like a perfect bell curve, and we're not quite there with this one.

2. It's not really a normal distribution. The result of a dice roll is an integer between 2 and 12. These are discrete results (discrete as in individual, not discreet as in secret). And for a normal distribution, you really need continuous results, like the weight of a watermelon. But since I didn't want to buy and measure 100 watermelons, this will have to do. And it does approximate what we're talking about. But how well?

Let's check it against our requirements for a normal distribution:

1. The mean = 6.8. The mode would be 7, because that's the most common result. And the median is also 7, the middle value. So the mean is off a little. If I rolled the dice 1000 times, the mean would move closer to 7, and if I rolled the dice infinitely many times, the mean would be exactly 7. But I don't have that kind of free time, so this will have to do.

2. Symmetry: Are the left and right sides mirror images of each other? Not really, but if we rolled infinitely many times, they would be.

3. Are 50% of the values to the right of the mean, and 50% to the left? No.

But overall it's not a terrible approximation of a normal distribution.

Let's consider another property of normal distributions. The standard deviation is 2.5, and the mean is 6.8. This implies that anything that falls between 4.3 and 9.3 (6.8 +/- 2.5) is within 1 standard deviation of the mean. If we count up the data, there are 9 fives, 19 sixes, 23 sevens, 11 eights, and 5 nines, and five, six, seven, eight, and nine all fall between 4.3 and 9.3. So there are 67 data points within one standard deviation of the mean, or 67% of the data.
Two standard deviations would be between 1.8 and 11.8, and all but 7 data points are in that range (everything but the twelves, of which there are 7). So 93% of the data lies within two standard deviations of the mean.

These figures approximate something called The Empirical Rule, which is another property of a normal distribution, and which states that:

68% of the data lies within 1 s.d. of the mean,
95% of the data lies within 2 s.d.'s of the mean, and
99.7% of the data lies within 3 s.d.'s of the mean.

The Empirical Rule is useful because the integral of that nasty-looking p(x) has no closed-form solution. In practice, the areas are computed numerically and looked up via something called the cumulative distribution function, which we won't get into, but which tells us the area under the curve to the left of any given point. Even without solving that integral, or having access to the cumulative distribution function, we can learn a lot from the Empirical Rule, because we know the whole 68-95-99.7 thing.

And the values we computed above, 67% for 1 s.d. from the mean and 93% for 2 s.d.'s from the mean, are pretty close, especially considering the limitations of our example (not enough data points, and not really a normal distribution).
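Here's a quick sketch that re-does that count from the tallies, for 1, 2, and 3 standard deviations:

```python
# How much of the dice data falls within k standard deviations of the mean?
freq = {2: 5, 3: 3, 4: 9, 5: 9, 6: 19, 7: 23, 8: 11, 9: 5, 10: 3, 11: 6, 12: 7}
mean, sd = 6.8, 2.5

for k in (1, 2, 3):
    inside = sum(count for total, count in freq.items()
                 if abs(total - mean) <= k * sd)
    print(f"within {k} s.d.: {inside}%")  # 67%, 93%, 100%
```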

Pictorially, it looks like this:

[Figure: normal curve with the 68-95-99.7 regions marked at 1, 2, and 3 standard deviations from the mean]

I'll wrap up now, but keep in mind, a real normal distribution goes on forever, so there will always be points way, way to the right and left of the mean. But even so, the vast majority of the data lies within 3 s.d.'s of the mean. And importantly, 95% of the data is within 2 s.d.'s, so that only 5% of the data lies outside 2 s.d.'s. Meaning that the probability, p, that a data point lies further than 2 s.d.'s from the mean, in either direction, is < 0.05.
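If you have scipy handy, you can compute the exact tail area for a true normal distribution. Two s.d.'s gives a bit under 0.05; the cutoff that gives exactly 0.05 is 1.96 s.d.'s:

```python
from scipy.stats import norm

# Exact two-sided tail beyond 2 s.d.'s of a true normal distribution:
print(2 * norm.cdf(-2))     # ~0.0455, a bit under 0.05
# And the cutoff that gives exactly 0.05:
print(2 * norm.cdf(-1.96))  # ~0.05
```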

Wednesday, October 23, 2013

Book Review, Sort Of

I just finished reading David Healy's Let Them Eat Prozac.

In it, he takes a passionate position on the adverse effects of SSRIs, particularly with respect to suicidality. Though initially convinced that Prozac did not induce suicidality, Dr. Healy subsequently served as an expert witness in several suicide/homicide lawsuits related to SSRI use. While researching those cases, he had more data made available to him, and became convinced that SSRIs significantly increase suicidality. The supporting data include:

*Re-evaluations of results from SSRI studies

*Cases in which patients were started on an SSRI, developed akathisia and agitation, stopped the SSRI and improved, then were restarted on the SSRI (or another serotonergic drug) and deteriorated once more (challenge/dechallenge/rechallenge, which is how causality can be established)

*Internal Memos from Lilly, which stated things like, "We know prozac causes a 5.6 times greater risk of suicidality than placebo, so we need to figure out how to deal with it, and specifically, how to figure out which patients will benefit from Prozac."

*His own trial of an SSRI in healthy volunteers, in which suicidality emerged

*Early studies in which subjects who developed akathisia and agitation on Prozac were either removed from the study or given benzos for those side effects, so some of the benefit seen in those studies may have come from the benzos rather than from Prozac.

*The fact that the BGA (the German equivalent of the FDA) did not approve fluoxetine, claiming that it was not sufficiently effective, and that it induced akathisia and agitation in patients, and ran the risk of inducing suicidality. Fluoxetine was approved in Germany a number of years later, under what appears to be powerful political pressure.

What I liked about the book is the fact that, despite how strongly Dr. Healy feels about this subject, and despite the fact that he lost a job because of his opinions, he still doesn't assume a conspiracy took place. Rather, he makes a case for a series of bad decisions, bad luck, predictable greed, overall good intentions, and selective blindness.

I also liked the fact that he is not advocating a complete moratorium on SSRI use. He believes, and this confirms my clinical impression, that there are subgroups of patients, some of whom do well on SSRIs, and others who don't. He advocates for research to figure out how to predict which group a given patient will fall into. He also advocates for research to specifically examine suicidality in SSRI use, and points out that, remarkably, no such study has been undertaken.

What I didn't like about the book was that his arguments are sometimes difficult to follow, causing me to question his conclusions. My impression, in general, though, is that the dude knows his stuff.

This is only "sort of" a book review because I'm including a link to his paper, Antidepressants and Suicide: Risk-Benefit Conundrums, as part of the online journal club. The link is from his site, and another thing I like about his work is that he encourages open access and provides links to his publications.

He's written more recent papers on this topic, but this one, in particular, summarizes what he covered in Let Them Eat Prozac, although the paper is for a professional audience, while the book is more for a lay population.

This is a table from the paper:

[Table: completed suicides and suicide attempts, by treatment group]
Note that on Placebo, there were 2 completed suicides, and 21 suicide attempts, on Active Comparators (e.g. TCAs), there were 5 completed suicides, and 24 attempts, and on All SSRIs, there were 23 completed suicides, and 186 attempts.

I'm very curious to hear people's evaluations of his statistical methods, so please comment.

Thursday, October 17, 2013

Generics

A recent letter in the NY Times got me thinking about generics. Specifically, how good are generics compared with their brand-name counterparts? (For the purposes of this post, I'm going to ignore the question of how good the brand names are, how we know that, etc.) The letter, by Jack Drescher, states that "the current accepted standard for bioavailability can range from 80 to 125 percent of the brand-name drug's delivery system." But what does that mean?

About a year ago, people taking a certain generic form of Wellbutrin XL 300mg, called Budeprion, started reporting what sounded like symptoms of depression. The FDA went on to announce the inequivalence of this generic, which was taken off the market. This past Thursday, the FDA released an update:

Based on data submitted by Watson, FDA has determined that that company’s generic bupropion HCl ER 300 mg tablet product is not therapeutically equivalent to Wellbutrin XL 300 mg. Watson has agreed to voluntarily withdraw this product from the distribution chain. Also, FDA has changed the Therapeutic Equivalence Code for the Watson product from AB (therapeutically equivalent) to BX (data are insufficient to determine therapeutic equivalence) in the Orange Book. FDA does not anticipate a drug shortage.
We recommend that patients taking the Watson product continue taking their medication and contact their health care professional or pharmacist to address any concerns.


So how does a generic drug come to be? First, let's have some definitions and review.

According to the World Health Organization, "A generic drug is a pharmaceutical product, usually intended to be interchangeable with an innovator product, that is manufactured without a license from the innovator company and marketed after the expiry date of the patent or other exclusive rights."

What makes a generic drug "interchangeable with an innovator product" is the fact that it contains the same Active Pharmaceutical Ingredient (API), and demonstrates Bioequivalence:


Generics are not required to replicate the extensive clinical trials that have already been used in the development of the original, brand-name drug. These tests usually involve a few hundred to a few thousand patients. Since the safety and efficacy of the brand-name product has already been well established in clinical testing and frequently many years of patient use, it is scientifically unnecessary, and would be unethical, to require that such extensive testing be repeated in human subjects for each generic drug that a firm wishes to market. Instead, generic applicants must scientifically demonstrate that their product is bioequivalent (i.e., performs in the same manner) to the pioneer drug.
One way scientists demonstrate bioequivalence is to measure the time it takes the generic drug to reach the bloodstream and its concentration in the bloodstream in 24 to 36 healthy, normal volunteers. This gives them the rate and extent of absorption-or bioavailability-of the generic drug, which they then compare to that of the pioneer drug. The generic version must deliver the same amount of active ingredients into a patient's bloodstream in the same amount of time as the pioneer drug.
Using bioequivalence as the basis for approving generic copies of drug products was established by the Drug Price Competition and Patent Term Restoration Act of 1984, also known as the Hatch-Waxman Act. Brand-name drugs are subject to the same bioequivalency tests as generics when their manufacturers reformulate them.




You may recall this graph:

[Figure: blood concentration vs. time curve for an oral drug, peaking at Cmax at time Tmax]
It's blood concentration vs. time for an oral drug. Sequential blood samples are taken after ingestion of the drug, to generate the curve. Peak concentration, Cmax occurs at time, Tmax.

Rate of Absorption = Cmax/Tmax.

and Total Extent of Absorption = Area Under the Curve (AUC)

(which is simply the integral of the function that describes the curve).
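As a sketch, here's how you'd get Cmax, Tmax, and AUC from concentration-time samples, using the trapezoidal rule to approximate the integral (the samples are made up):

```python
import numpy as np

# Hypothetical concentration-time samples after one oral dose.
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12])   # hours
c = np.array([0.0, 12, 20, 25, 18, 8, 3])  # ng/mL

cmax = c.max()        # peak concentration: 25 ng/mL
tmax = t[c.argmax()]  # time of the peak: 2 hours

# AUC by the trapezoidal rule: add up the trapezoids under the curve.
auc = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))
print(cmax, tmax, auc)  # 25.0 2.0 150.5
```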

A generic drug is considered equivalent to the brand drug if its Rate of Absorption and Extent of Absorption do not significantly differ from those of the brand drug. In other words, the curves look the same, or almost the same.

How much is almost?

"Most regulators worldwide have decided that a 20% variation is generally not clinically significant.
Two versions of a drug are generally said to be bioequivalent if the 90% confidence intervals for the ratios of the geometric means (brand vs. generic) of the AUC and Cmax fall within 80% and 125%. The tmax (brand vs. generic) must also be comparable — and there should not be any significant differences between different patients." (same source as graph).
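To make the 80%-125% rule concrete, here's a rough sketch with invented crossover data: log-transform the AUCs, build a 90% confidence interval for the generic-to-brand ratio of geometric means, and check that it sits inside [0.80, 1.25]. A real bioequivalence analysis models the crossover design properly; this is just the general shape of the idea:

```python
import numpy as np
from scipy import stats

# Invented within-subject AUCs from a hypothetical crossover study.
auc_brand   = np.array([102, 95, 110, 98, 105, 99])
auc_generic = np.array([ 98, 91, 108, 95, 101, 97])

diff = np.log(auc_generic) - np.log(auc_brand)  # paired log-ratios
lo, hi = stats.t.interval(0.90, len(diff) - 1,
                          loc=diff.mean(), scale=stats.sem(diff))

ratio_ci = (np.exp(lo), np.exp(hi))  # 90% CI for the geometric mean ratio
print(ratio_ci, 0.80 <= ratio_ci[0] and ratio_ci[1] <= 1.25)
```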

Once they've established bioequivalence, generic drugs are given Therapeutic Equivalence Codes. There are different codes for different types of meds, e.g. tablets vs. injectables. The highest rating seems to be AA: drugs that "contain active ingredients and dosage forms that are not regarded as presenting either actual or potential bioequivalence problems or drug quality or standards issues. However, all oral dosage forms must, nonetheless, meet an appropriate in vitro test(s) for approval." I'm not clear on what would constitute a "bioequivalence problem". An example of an AA drug is acetaminophen/codeine, and this is what the heading looks like in the Orange Book (see below):

[Image: Orange Book heading for acetaminophen/codeine]

AB seems to be a more typical rating for generics. There are some subcategories in which a drug is rated compared to a specific version of the drug, for example, levothyroxine:

[Image: Orange Book listing for levothyroxine, showing AB subcategories]

For those who are interested in the whole shpiel, this is a link to Approved Drug Products with Therapeutic Equivalence Evaluations, aka The Orange Book. It's called The Orange Book because the original edition was published in October, and with Halloween coming up, they decided on an orange cover.

And here's another link to FDA slides on bioequivalence:


Sunday, October 13, 2013

Panic Disorder Study


Dr. Amos at The Practical Psychosomaticist has already eloquently described the article on Panic-Focused Psychodynamic Psychotherapy, but this is my take.

Once again, here's the Abstract:

Objective: The purpose of this study was to determine the efficacy of panic-focused psychodynamic psychotherapy relative to applied relaxation training, a credible psychotherapy comparison condition. Despite the widespread clinical use of psychodynamic psychotherapies, randomized controlled clinical trials evaluating such psychotherapies for axis I disorders have lagged. To the authors’ knowledge, this is the first efficacy randomized controlled clinical trial of panic-focused psychodynamic psychotherapy, a manualized psychoanalytical psychotherapy for patients with DSM-IV panic disorder. Method: This was a randomized controlled clinical trial of subjects with primary DSM-IV panic disorder. Participants were recruited over 5 years in the New York City metropolitan area. Subjects were 49 adults ages 18–55 with primary DSM-IV panic disorder. All subjects received assigned treatment, panic-focused psychodynamic psychotherapy or applied relaxation training in twice-weekly sessions for 12 weeks. The Panic Disorder Severity Scale, rated by blinded independent evaluators, was the primary outcome measure. Results: Subjects in panic-focused psychodynamic psychotherapy had significantly greater reduction in severity of panic symptoms. Furthermore, those receiving panic-focused psychodynamic psychotherapy were significantly more likely to respond at treatment termination (73% versus 39%), using the Multicenter Panic Disorder Study response criteria. The secondary outcome, change in psychosocial functioning, mirrored these results. Conclusions: Despite the small cohort size of this trial, it has demonstrated preliminary efficacy of panic-focused psychodynamic psychotherapy for panic disorder.

Following are a series of figures from the study that tell you a little about how Psychoanalytic Psychotherapy for Panic Disorder works:

[Figure: Phase I of the treatment]
So the psychodynamic conception of panic is that it has unconscious meaning, often related to conflicts surrounding separation, autonomy, and anger, and treatment starts out by trying to determine what those meanings are.

[Figure: Phase II of the treatment]
Phase II involves addressing transference to examine the way the conflicts causing the panic can play out in real time.

[Figure: Phase III of the treatment]
Phase III involves more transference. I'm a little confused about the difference between how the transference is addressed in Phase II vs. Phase III, except that Phase III involves talking about termination, which must stir up feelings about separation.

And here is a glimpse at the results:

[Figure: outcome results from the study]
Some strengths of the study were that it treated a pretty sick cohort, with a lot of comorbidities, and that its methodology was quite rigorous, including adherence to a manual, training of clinicians, and supervision. It also addresses the question of why a new treatment for panic disorder is necessary: namely, that 29%-48% of patients do not respond to treatment with CBT or meds, and 25%-35% drop out of those treatment modalities. The dropout rate for PFPP in this study was 7%, vs. 34% for ART. It's not clear what this implies; perhaps PFPP is more tolerable. For one thing, it doesn't involve homework or exposure.

A weakness is that it compared PFPP with applied relaxation training (ART) rather than CBT. I had assumed this was a logistics/funding issue, but I contacted Dr. Milrod, and she repeated what was written in the paper, though for some reason I understood it better the second time around. Namely, that if you have a new treatment, drug or otherwise, and you don't yet know if it works, then you need to test it against placebo. If you just test it against a treatment that is already known to work, then you won't have any actual information.

For example, suppose they had tested PFPP directly against CBT. If the results for both PFPP and CBT had been good, then you wouldn't know if both treatments work, or if neither treatment is doing anything in this particular setting, but people got better spontaneously. If CBT results were better than PFPP, then you wouldn't know if PFPP doesn't work at all, or just works less well than CBT. And if PFPP results were better than CBT, you still wouldn't know if, for some weird reason, the PFPP subjects spontaneously did better, and PFPP doesn't actually work.

And the reason you wouldn't know any of this is that you don't have a placebo control, which would tell you what happens to subjects who are given no therapeutic intervention. I hope that was clear. And I'm not sure why you couldn't have three arms: PFPP, CBT, and placebo.

In any case, this brings up the question of what constitutes placebo in a psychotherapy trial. According to Dr. Milrod, ART was chosen because it was known to be an "efficacious but less active therapy for Panic Disorder" (personal communication). There's a nice discussion of the therapy placebo issue here (also by Dr. Milrod).

Finally (re: the comparison-with-CBT issue), the FDA's Guidance for Industry on Non-Inferiority Clinical Trials states: "In order to implement an equivalence or noninferiority trial, the magnitude [of medication] effect must be stable and well-established in the literature, with consistent results seen from one trial to the next." And according to this study, that level of stability in magnitude of effect does not yet exist for Panic Disorder.

So you can't say PFPP is equivalent, or even non-inferior, to CBT, because you can't say just how good CBT is for Panic Disorder.

Thursday, October 10, 2013

BRB

Sorry for the radio silence. I've been taking a little break to deal with some of the other stuff in my life. I've got posts in the making, but in the meantime, for your viewing pleasure, this is a photo of Gay Street, in Greenwich Village. Someone decided to make a paper hat for a street sign.


Thursday, October 3, 2013

New Kid In Town




On September 30th, the U.S. Food and Drug Administration announced the approval of Brintellix (vortioxetine) to treat adults with major depressive disorder. Brintellix is co-marketed by Takeda Pharmaceuticals and Lundbeck.

According to NEJM Journal Watch, vortioxetine is a bis-aryl-sulfanyl amine that acts as a serotonin (5-HT) transporter inhibitor, a 5-HT1A receptor agonist, a 5-HT1B receptor partial agonist, and an antagonist of 5-HT3, 5-HT7, and 5-HT1D receptors.

The marketing angle seems to be that it doesn't impair psychomotor performance, e.g., driving:

"Vortioxetine was effective in treating depression in six clinical trials. Additionally, in another study, patients randomized to vortioxetine had driving performances similar to those on placebo." (same source)

I decided to be a good little doctor, so I checked out ClinicalTrials.gov. And this is what I found:

Out of 33 studies total:

7 compared Vortioxetine with an active comparator (duloxetine, venlafaxine XL, and, in one study of previous non-responders, agomelatine) and placebo, for either efficacy or efficacy and safety in MDD.

7 compared Vortioxetine with only placebo, for either efficacy or efficacy and safety in MDD.

3 examined Vortioxetine in MDD with no comparator.

8 examined Vortioxetine for safety and tolerability in MDD.

3 looked at basic science (pharmacokinetics/concentration of neurotransmitters).

5 were studies of Vortioxetine in GAD


Among the 7 MDD studies with active comparators:

None had study results posted.

1 studied non-responders to previous meds

1 studied elderly patients

4 used doses between 2.5mg and 10mg (including the elderly study, which used 5mg)

2 used doses of 15mg or 20mg

The non-responder study used doses of 10mg-20mg.

1 study, completed in December 2008, referenced this study of vortioxetine 2.5mg and 5mg vs. placebo. There was no statistically significant difference between drug and placebo.


This may or may not add up to the 6 studies that demonstrated efficacy in treating depression, depending on how you count, and on what the actual data (to which I have no access) show.




Wednesday, October 2, 2013

Another Plug

Just a brief reminder to take our survey about whether and how you'd like to participate in an online journal club.

CLICK HERE

And here's a list of some upcoming posts:

1. Journal article review

2. Apps for writing process notes.

3. More on Statistics

Tuesday, October 1, 2013

Pick Two

Many years ago, I watched a Showtime special in which the comedian Yakov Smirnoff did a one-man show. He ended with a Q&A session:

Q: Is your name really Yakov Smirnoff?

A: No, it's Jack Daniels.

Q: What's health care like in the (former) Soviet Union?

A: Well, it's free. And you get what you pay for.


C'mon, people. This is where the US is headed. Remember the Design/Engineering adage?


Good, Fast, Cheap. Pick Two





Here are a couple of articles to check out:

Lower Health Insurance Premiums to Come at Cost of Fewer Choices

'Affordable Care' or a Rip-Off?

I've said it before but it bears repeating. Coverage does not equal care. The Affordable Care Act may make it possible for people to buy a plastic card with a number that they can give to doctors and hospitals. This doesn't mean they'll get good care. It doesn't even mean they'll get any care.

'“If a health plan has a narrow network that excludes many doctors, that may shoo away patients with expensive pre-existing conditions who have established relationships with doctors,” said Mark E. Rust, the chairman of the national health care practice at Barnes & Thornburg, a law firm. “Some insurers do not want those patients who, for medical reasons, require a broad network of providers.”'

Good, Fast, Cheap. Pick Two.

Get the picture?