Welcome to my blog, a place to explore and learn about the experience of running a psychiatric practice. I post about things that I find useful to know or think about. So, enjoy, and let me know what you think.

Sunday, September 29, 2013


I consider psychotherapy supervision to be part of lifelong learning. I now supervise residents and interns in psychodynamic psychotherapy, but I also continue to work with a supervisor myself.

The supervision I get is different from the supervision I provide. I call my supervisor every once in a while and say, "Hey, I'm a little stuck with a patient, can I come in and talk over the case with you?" Sometimes I present process material, other times I just give an overview. It's a pretty relaxed thing, and it's now up to me to be honest with myself and recognize when I need some outside help.

I've had all kinds of supervisors, and I've learned things from all of them, even the ones I thought were idiots (like, what not to do). I've had brilliant supervisors and stupid supervisors. Warm supervisors and awkward supervisors. Helpful supervisors and confusing supervisors.

As a PGY-3, I had a child medication supervisor who never showed up (well, he showed up once). I had an adult medication supervisor who made me realize it's possible to do nothing but outpatient med management and still get to know your patients in a meaningful and caring way.

I had a kind, warm supervisor who inspired me to pursue analytic training. When I asked him if it was worth all the effort, he said, "It's right for the way you think." And he was right. He died just before I started my training. I still miss him.

I had a supervisor who would start drinking his mug of tea at the beginning of the session, and then fall asleep. He still managed to understand more about the patient than I could have seen myself. It wasn't my boring presentation style. He used to fall asleep in class, too. Eventually, I found his dozing off comforting.

I had a supervisor who made me think of Edvard Munch's The Scream every time I walked into his office. He's one of the reasons I don't do any CBT.

An interesting part of supervision, for me, was getting to see different offices. One supervisor had an office on Gramercy Park. He had this amazing mid-century furniture, with the most beautiful desk I've ever seen. The wood glowed. And there was never anything on it, except for a small Inuit sculpture.

Another supervisor has a paper towel dispenser in the bathroom-I don't know where she even finds the towels to fill it. The dispenser is very thin, but the paper is so smooth and thick you could use it for wedding invitations.

The same supervisor, who is a child and adolescent psychiatrist and psychoanalyst, had two books by Bowlby on her shelf, Attachment, and Separation. She had them right next to each other, and I was always distracted by the thought, "Should they be together like that, or apart, at opposite ends of the shelf?"

The worst experience I had in supervision didn't have to do with the supervisor. I was sitting there, holding my notebook and reading my process notes, when I noticed the back of my hand start to itch. I turned it over to scratch it, and there was this huge spider crawling up my arm. To this day, I'm proud of the fact that I didn't scream. I did have spider dreams for some time afterwards.

So what makes a good supervisor? I think a lot of it is about style and personal preference. For me, there needs to be an environment where I feel safe enough to admit to my mistakes.

I had a well-meaning supervisor once who I found very unclear. I couldn't understand what he wanted me to do, even though I would ask him to clarify. I would try to implement what I thought he had suggested, but when I read my notes to him, he invariably told me I'd done it wrong. Eventually, I started making things up. And I was nervous when I was with the patient. I couldn't stop thinking, am I doing this right? What's he going to criticize if I say that? That patient quit treatment.

It's also important for a supervisor to know when the supervisee can go it alone. I had one supervisor who was great early on, when I needed a lot of specific direction. But down the road, she had trouble recognizing that I knew what to do without her help.

There are supervisors with whom I haven't felt all that safe, but from whom I still learned a lot. Some of the pearls:

When you're not sure where to go, get more history.

Don't worry that you're going to spook the patient by saying something prematurely. If you've thought through it, just say it.

Sometimes, telling the patient you've been thinking about what they said in the last session can have a powerful effect.

What have you learned from supervisors, and what kind of supervision do you find most helpful?

Friday, September 27, 2013

Online Journal Club Survey

The Practical Psychosomaticist and I are trying to figure out how to set up the FREE online journal club so people will be inclined to participate. There are different potential venues for this, such as continuing to use the comments sections of our blogs, starting a LinkedIn group, or using a Google+ circle.

We really want people to participate, as much as they can, so we're running a survey to see what people's preferences are.

Here's the link:

Online Journal Club Survey

It's a short survey-only three simple questions. There are options to comment, but it'll help us even if you don't comment.

If you're reading this, presumably you have an interest in the practice of psychiatry. So for the purpose of furthering lifelong learning and collaboration between colleagues, please take the survey, and then participate like crazy.


Tuesday, September 24, 2013


I seem to have a lot of random thoughts floating around in my head. In no particular order:

1. The Practical Psychosomaticist has a new article up for our fledgling online journal club, entitled

Competency-Based Education, Entrustable Professional Activities, and the Power of Language

It's about new and improved ways to judge progress in residents. You can read my full comment over on the site, but the gist of it (my comment) is that it's an attempt to quantify things that may not be easily quantifiable. So I'm skeptical. But then, I don't trust checklists, in general. Here's me quoting myself (and mixing four different metaphors):

To my way of thinking, it comes down to a mode of teaching and learning. The traditional approach in residency has been an apprenticeship, starting as a scut-monkey PGY1 who just does what she’s told, and progressing to journeyman on graduation. And the entire arc is supervised, hopefully by supervisors who know when to be heavily involved, and when the resident can solo. This is in contrast to a style that involves rigid quantification. I agree that it’s an excellent idea to have specific expectations, and it’s also a great exercise to sit down and think about what makes a good doctor a good doctor. But it’s similar to using checklists to evaluate patients. Checklists have their role. They can be useful for tracking progress. But appropriate use of checklists, like in the Ham-D article, involves the ability to interview the patient. It’s about people getting to know people.

2. I'd really like to continue this journal club idea. But the logistics are more complicated than I anticipated. I was thinking about doing it as a LinkedIn group. Other possibilities include a Facebook page, Google+ Circle,  and a separate blog. I'm sure there are other ways to go about it. I'd appreciate suggestions.

3. I read an article in Wired magazine, How Successful Networks Nurture Good Ideas. In it, Clive Thompson writes about the power of collective thinking. He claims that writing for an audience clarifies ideas because "the effort of communicating to someone else forces you to pay more attention and learn more." He then cites several convincing studies and examples.

This has implications for so many things. An online journal club, for example, can reach out to many more people than can be gathered in one room. Think of the possibilities for exchange of ideas! Things like allowing the public access to drug research data, so safety and efficacy can be independently judged. Sites like Change.org, for worldwide petitions, and Fundly.com, for raising money for causes.

There's the flip side. You could conceptualize another article entitled, How Successful Networks Nurture Bad Ideas. Imagine if Hitler had written, Mein Blogge. (Sorry if that offended anyone, it wasn't my intention.)

But on a deeper level, communicating with an audience is what allows me to do my job. A patient comes into my office and somehow manages to articulate scary, icky thoughts and feelings that have been pent up and festering and unshared for years to an audience that includes myself and all the significant figures in the patient's life, as transferred onto me. And then those thoughts and feelings become less scary. And ideas start to congeal. And something in the patient shifts, and his life improves. Just by communicating with an audience. That's real learning.

4. I'm thinking about A Randomized Controlled Clinical Trial of Psychoanalytic Psychotherapy for Panic Disorder, by Milrod et al,  as my next journal article. Here's the abstract:

Objective: The purpose of this study was to determine the efficacy of panic-focused psychodynamic psychotherapy relative to applied relaxation training, a credible psychotherapy comparison condition. Despite the widespread clinical use of psychodynamic psychotherapies, randomized controlled clinical trials evaluating such psychotherapies for axis I disorders have lagged. To the authors’ knowledge, this is the first efficacy randomized controlled clinical trial of panic-focused psychodynamic psychotherapy, a manualized psychoanalytical psychotherapy for patients with DSM-IV panic disorder. Method: This was a randomized controlled clinical trial of subjects with primary DSM-IV panic disorder. Participants were recruited over 5 years in the New York City metropolitan area. Subjects were 49 adults ages 18–55 with primary DSM-IV panic disorder. All subjects received assigned treatment, panic-focused psychodynamic psychotherapy or applied relaxation training in twice-weekly sessions for 12 weeks. The Panic Disorder Severity Scale, rated by blinded independent evaluators, was the primary outcome measure. Results: Subjects in panic-focused psychodynamic psychotherapy had significantly greater reduction in severity of panic symptoms. Furthermore, those receiving panic-focused psychodynamic psychotherapy were significantly more likely to respond at treatment termination (73% versus 39%), using the Multicenter Panic Disorder Study response criteria. The secondary outcome, change in psychosocial functioning, mirrored these results. Conclusions: Despite the small cohort size of this trial, it has demonstrated preliminary efficacy of panic-focused psychodynamic psychotherapy for panic disorder.

It's an interesting study because it successfully manualized a type of treatment not previously believed to be manualizable.  Anyone game?

Wednesday, September 18, 2013

HAM-D'ing It Up

Last month I published a post entitled, Lifelong Learning-A New Frontier. In it, I introduced the idea of an online journal club, and I threw down the gauntlet with a challenge-let's talk about this paper:

A Rating Scale For Depression, by Max Hamilton

So here I am, talking about it. In written form.

In case it isn't obvious, this article introduced the HAM-D, or Hamilton Rating Scale for Depression, which is still in use.

And in case you happen to think there's something new under the sun, the paper begins with, "The appearance of yet another rating scale for measuring symptoms of mental disorder may seem unnecessary, since there are already so many in existence and many of them have been extensively used."

The year is 1960.

I'm gonna go on to delineate some random thoughts and reactions to the paper, in the hope that this will encourage dialogue, as might take place in an in-person journal club.

The first thing old Max H does is describe the purpose and appropriate usage of this particular rating scale. Or more accurately, what its purpose isn't:

1. It's not devised for normal subjects
2. It's not self-rating
3. It's not about social adjustment/behavior
4. It's not broad range

Rather, it focuses on the measurement of symptoms in individuals already diagnosed with depression.

The present scale has been devised for use only on patients already diagnosed as suffering from affective disorder of depressive type. It is used for quantifying the results of an interview, and its value depends entirely on the skill of the interviewer in eliciting the necessary information… It has been found to be of great practical value in assessing results of treatment.

One question I have is, who makes the diagnosis? And based on what diagnostic system? The DSM-II wasn't published until 1968, which means the HAM-D was developed to assess depression in people who may or may not have met the DSM-5 criteria for Major Depression, were they being assessed today. So is it still appropriate to use the scale?

The scale includes 17 variables related to depression, plus 4 additional variables-diurnal variation, derealization, paranoid symptoms, and obsessional symptoms-that are either related to the type rather than the severity or intensity of depression (diurnal variation), or are seen only rarely in the context of depression (the other three). Each variable is rated on either a 5-point (0-4) or a 3-point (0-2) scale, with the latter used when quantification is difficult, e.g. insomnia and agitation. It's interesting to note that on the modern HAM-D form, agitation is measured on a 5-point scale, which Hamilton found "impracticable".

The scale was written with the intention of having a given patient rated by two different raters. Where only one rater is available, the score should be doubled.
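To make the scoring rule concrete, here's a toy sketch in Python. The item scores are invented for illustration; they don't come from the paper or any real rating form.

```python
# Hypothetical item scores from two raters (17 items, mixed 0-4 and 0-2 ranges);
# the numbers are made up for illustration.
rater_a = [2, 1, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0, 1, 1, 0, 1, 0]
rater_b = [2, 2, 0, 1, 1, 1, 0, 1, 1, 2, 1, 0, 1, 0, 0, 1, 0]

def hamd_total(rater1, rater2=None):
    """Sum the two raters' scores; with a single rater, double the score,
    per Hamilton's instructions."""
    if rater2 is None:
        return 2 * sum(rater1)
    return sum(rater1) + sum(rater2)

print(hamd_total(rater_a, rater_b))  # two raters: 14 + 14 = 28
print(hamd_total(rater_a))           # one rater, doubled: 2 * 14 = 28
```

The doubling rule just keeps single-rater totals on the same scale as two-rater totals.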

Some caveats for the raters:

1. No distinction is made between intensity and frequency of a symptom-the rating is at the discretion of the rater, who is expected to take both into account.
2. Depressive Triad: depressive mood, guilt, suicidal tendencies-the rater needs to avoid a halo effect, e.g. giving guilt and depressive mood the same rating because they're closely related.

Table 1 is the correlation matrix.

It shows how well each individual symptom correlates with each of the other individual symptoms. The entries are correlation coefficients, not percentages of the time: Depressed Mood has a correlation of 1.0 with itself, Guilt has a correlation of 0.491 with Depressed Mood and 1.0 with Guilt, etc.

This is followed by the extraction of the data into 4 factors-not sure how these are obtained. As I understand it (poorly), factor analysis is a way to take your data and look at it as fewer variables than you started with. I briefly perused the Wikipedia article, which seemed to involve some linear algebra. And since it's been many a year since I was intimate with eigenvectors, I'm gonna leave it at that. In other words, it's magic.
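For the curious, here's a rough sketch of the magic in Python: repeatedly applying the correlation matrix to a vector (power iteration) converges on the eigenvector with the largest eigenvalue, which is essentially the first factor. The 3x3 matrix below is made up for illustration; it is not Hamilton's Table 1.

```python
import math

# Toy 3x3 correlation matrix (hypothetical values, NOT Hamilton's Table 1)
R = [
    [1.00, 0.49, 0.30],
    [0.49, 1.00, 0.25],
    [0.30, 0.25, 1.00],
]

def matvec(m, v):
    """Multiply matrix m by vector v."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def normalize(v):
    """Scale v to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Power iteration: converges on the dominant eigenvector of R
v = normalize([1.0, 1.0, 1.0])
for _ in range(200):
    v = normalize(matvec(R, v))

eigenvalue = sum(x * y for x, y in zip(matvec(R, v), v))  # Rayleigh quotient

# Loadings: how strongly each variable weighs on the first factor
loadings = [x * math.sqrt(eigenvalue) for x in v]
print(loadings)
```

Real factor analysis does more than this (it extracts several factors and usually rotates them), but the eigenvector machinery underneath is the same.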

But, for example, Factor 1 has high correlations with depressed mood, guilt, suicide, delayed insomnia, work and interests, retardation, genital symptoms, and insight, and low correlations with agitation and anxiety, so they call it a "retarded depression." This, so the article claims, corresponds well with the classical description of depression.

Which one? Melancholia? Seems like.

Finally, the end of the paper includes several case descriptions, not just scores. This is in stark contrast to today's style. I suppose this is knowable, but I don't know it-were most papers written with case descriptions then?

Please comment so we can get a discussion going. It's a short paper. Check it out.

Sunday, September 15, 2013

If You're So Inclined

If you read my recent post, The Vicissitudes of Kong, you'll know about AbbVie filing suit to prevent the EMA (European Medicines Agency) from gaining access to the data from their trials.

David Healy is asking people to sign his petition, which requests that AbbVie, and InterMune, another pharmaceutical company that has filed suit to prevent access to its data, drop their respective lawsuits.

I'm posting this link to the petition:


Obviously, people need to make their own decisions about whether to sign, but take a look at it, and if you think it has merit, and you're so inclined, you might want to sign it.

Tuesday, September 10, 2013

Flu Season

It's that time of year, again. Time to get a flu vaccine. Well, time to decide whether or not to get a flu vaccine.

I never do. Why?

1. You can get vaccinated and still get the flu; the vaccine only covers certain strains.
2. I'm always already sick by the time I think of getting vaccinated.
3. I know the chance of getting Guillain-Barré is, literally, one in a million. But I happen to know someone who was that one. That doesn't change the odds, but it makes it feel like it does, and I'd rather have the flu.
4. I don't work in a hospital setting, so I'm not at a higher risk than most "healthy adults".
5. I don't treat geriatric patients, so I'm not putting my patients at higher risk.

However, I decided to be a little more responsible this year, so I looked it up in The Cochrane Reviews. They have this really nice section called, PEARLS - Practical Evidence About Real Life Situations (it's at the lower half of the linked page).

Here's the conclusion:

Bottom line: 
There is insufficient evidence to decide whether routine vaccination to prevent influenza in healthy adults is effective. Influenza vaccination did not affect the number of people needing to go to hospital or to take time off work (the follow up period was up to 3 months post vaccine).
Vaccination against influenza avoided 80% of cases at best (in those confirmed by laboratory tests, and using vaccines directed against circulating strains), but only 50% when the vaccine did not match, and 30% against influenza-like illness. Some vaccines cause pain and redness at the injection site (NNH* 1), muscle ache (NNH 27), and other very rare serious harms such as transient paralysis. *NNH = number needed to treat to cause harm in one individual.
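If the NNH arithmetic is rusty for anyone: it's just the reciprocal of the absolute risk increase. A toy calculation in Python, with made-up event rates (not Cochrane's actual numbers):

```python
def nnh(risk_treated, risk_control):
    """Number needed to harm = 1 / absolute risk increase."""
    return 1.0 / (risk_treated - risk_control)

# Hypothetical rates: 6.0% of vaccinated vs 2.3% of unvaccinated report muscle ache
print(round(nnh(0.060, 0.023)))  # -> 27: one extra case per ~27 people vaccinated
```

So an NNH of 1 for injection-site pain means essentially everyone vaccinated gets it, while an NNH of 27 for muscle ache means one extra case for every 27 people vaccinated.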

Incidentally, The Cochrane Reviews Site is awesome. Check it early and often. The Cochrane Collaboration was established in 1993, and is named after the British epidemiologist and medical researcher, Archie Cochrane (1909-1988).

Archie Cochrane

I'm always disappointed that it isn't named after Doc Cochran from Deadwood, SD-apparently there were several doctors in the camp, but the name is fictional.

Brad Dourif as Doc Cochran

I guess this is a peculiar post for a psych blog, but I'm a doctor, and I treat patients in my office, and flu vaccination is an important question. And who says lifelong learning has to be restricted to psychiatry?

Sunday, September 8, 2013

The Vicissitudes of Kong

There's a scene in Peter Jackson's version of King Kong where the girl, Ann, (Naomi Watts) has been carried off by Kong, and somehow manages to escape. She then comes face to face with a T-Rex. Kong shows up, angry that she's run off, but willing to fight to save her, and there's a split second where she has to make a choice between the two, and it's completely clear to both Ann and the audience who she's better off with. Because Kong, as big and scary and destructive as he may be, is not actually trying to hurt her. Whereas good 'ol T-Rex wants only to kill and devour her for lunch.

This is kind of how I feel about insurance companies vs. Big Pharma. Insurance companies can only survive by charging outrageous premiums and then denying coverage. The harm is innate. But I truly believe that if pharmaceutical companies could produce drugs that were always helpful and never harmful, they would flourish and be pleased as punch.

Here is where my analogy falls apart, because Kong is in love with the girl, and I attribute no such caring and altruism to Big Pharma. And while some drugs are unquestionably helpful, and others are questionably helpful, none are free of adverse effects.

In that vein, there's been a lot of online chatter about a recent panel discussion that took place in Brussels, about the potential for conflict between public health and commercial confidentiality in clinical trials run by pharmaceutical companies. See, for example, this post on 1 Boring Old Man, or this one on DavidHealy.org. The ongoing issue is, of course, data transparency.

For some reason, I'm having trouble embedding the video, but here's a link:

Session 3: Balancing Public Health and Commercial Confidentiality

The video is almost an hour long, and since time is a limited commodity, and Breaking Bad and Dexter have only a couple more episodes, each, priorities need to be established.

But I'll give you the gist of it, as well as a transcript of a brief section.

The panel members, from various agencies, I believe, including the European Federation of Pharmaceutical Industries and Associations (EFPIA) and the Pharmaceutical Research and Manufacturers of America (PhRMA), speak about how important it is for companies to maintain confidentiality in their clinical data, so they can't be scooped by competitors. The claim is that this benefits the public, and public health, because it allows the companies to continue to pursue the noble goal of discovering, producing, and selling new drugs.

One of the panel members is Neal Parker, a US lawyer and representative of AbbVie, a spinoff of Abbott Pharmaceuticals, and makers of Humira. AbbVie has already taken action in the courts to prevent the European Medicines Agency from releasing data on Humira to a rival company. It won an interim judgment preventing release of this data on April 30th of this year.

Parker emphasized the importance of confidentiality, and included adverse events as potential data to be kept from the public. There were several responses to this from the audience.

Hans Georg Eichler, the EMA’s senior medical officer, said, “I have been a regulator for many years and I am totally flabbergasted.”

And Aginus Kalis, head of the Dutch Medicines Evaluation Board, asked, “Are you aware you are working in the healthcare industry, with patients and human beings?”

While I agree with these sentiments, I think the outrage is counterproductive in this setting. It makes Eichler and Kalis look like histrionic foils to Parker's man of reason. And the last thing I want is for people to buy into Parker's rhetoric.

Because, overall, he is reasonable. He says that AbbVie will consider revealing data on a drug by drug basis, and that as long as the purpose of the revelation is for the scientific community to learn from it, and as long as appropriate safeguards are put in place so the lucky scientists don't go running to AbbVie's competitors in Bangladesh, there's no reason they won't share their information.

One audience member asked if these scientists would be free to share the data with clinicians. I don't quite recall what the response was, and this may be an indication that some double-speak was going on.

Later in the session, a woman in the audience asked Parker to give an example of an adverse event that AbbVie would not release to the public. He told her he couldn't think of a case where AbbVie wouldn't be willing to share this information. She said, "But it's happened before." And he responded with something like, "I can only speak for AbbVie." The subtext: I can't talk about SSRIs and suicidality.

Since most people reading this are not going to watch the whole megillah, I want to point you in the direction of what I thought was the most telling of Neal Parker's comments. Parker is responding to a question from an audience member, asking why it's so important to maintain this confidential data when, presumably, the interpretive analysis gleaned from the data is already available to the public in the discussion and conclusion sections of the product label information.

It runs from minutes 19:41 to 20:47, and this is my transcript:

The detail of the give and take of the problem solving which is reflected in the narratives of some of these clinical study reports is internal sensitive information which is nowhere reflected in the label.

A company’s,...the process of getting these products approved with the regulatory agencies is a give and take of issues, challenges, um, reworking of data in response to regulators’ concerns or concerns that we have identified and raised ourselves, which needs to be explained and articulated in documents that we submit to regulators to get products approved. And if I’m a competitor to Abbvie, and I’m in a competitive landscape, where there are a lot of products on the market, and I want to enter that market, the first thing I want is Abbvie’s clinical study report, ‘cause I want to know what problems I am gonna have to confront when I try to get a product approved, and that is a competitive advantage, and that’s why we consider this information, depending on the circumstances, CCI (confidential clinical information).
(Boldface mine)

It’s the “reworking of data” that no one outside the company has access to that really worries me.

Sunday, September 1, 2013

No Present Like the Time

This post is a spinoff of two posts from two different blogs:

1. Conflicted on 1 Boring Old Man

2. Preventing Transition to Schizophrenia on Psycritic

Here's the deal. 1BOM writes about Jeffrey Lieberman's article in Psychiatric News, Early Detection of Schizophrenia: The Time is Now. 1BOM is all for early intervention in the treatment of schizophrenia, but he's conflicted because he thinks the fuss may be all cheerleading and no content.

In the comments, Psycritic references a model used in Portland, Maine, which involved training people in the community to recognize prodromal symptoms early, and then making treatment accessible. Then he/she (Psycritic, that is), links to his/her post.

Both posts are excellent, and I'm certainly all for early remediation, but I guess I'm just innately skeptical.

These are some of my thoughts on the topic:

-Jeffrey Lieberman recently suggested we make nice with Big Pharma. I don't trust what's going on there. So what is this really about?

-How does the Portland model work?

-Does the model work?

-What are the implications of early detection/diagnosis?

This is what I learned:

In the Portland Program, "37% of the community referrals were found to be at high risk of psychosis, and another 20% had untreated psychosis, yielding an efficiency ratio of 57%. Prodromal cases identified were 46% of the expected incidence of psychosis in the catchment area. Community educational presentations were significantly associated with referrals six months later; half of referrals were from outside the mental health system."

They worked with people in the community, such as teachers, and general medical providers, and trained them to identify potential cases. A lot of work was done to de-stigmatize mental health issues, so people would be more willing to accept the help that was being offered.

The program is great once cases are identified. But what happens with the 43% of potential cases that turn out not to be prodromal or psychotic? Do they wonder why their teachers and doctors thought they might be "crazy"?

Another article is entitled, Is there any point in detecting high risk factors prior to a first psychosis? The link is to the abstract, which has a link to the full article, which is in Dutch. This review article concluded:

Conclusion: Screening all genetically vulnerable persons in the general population has no consequences for treatment. Early diagnosis by psychiatrists is certainly advisable. However, larger groups and longer studies are needed in order to demonstrate conclusively the preventive effect of interventions prior to a first psychosis.

Recall one diagnosis that didn't make it into DSM-5: Psychosis Risk Syndrome. The reasons this diagnosis was rejected included the difficulty in using its criteria to distinguish normal from abnormal:

There are plenty of eccentric teenagers out there-most don't go on to develop a full-blown psychotic disorder.

Here are some other problems:

-False positive rates in research settings range from 50% to 84%. And note that these are research settings, where all raters have been extensively trained.

In pioneering studies in both Australia and the United States, young people with “subthreshold” psychotic symptoms and/or functional decline in the context of genetic risk for schizophrenia were ascertained and followed longitudinally for development of psychosis (Miller et al., 2003; Yung et al., 2003). In these earliest studies, prodromal status was associated with a 40 to 50 percent rate of “conversion” to psychosis within 1 to 2 years (Miller et al., 2003; Yung et al., 2003). A consortium of research groups in North America reported a more modest rate of transition to psychosis; for example 35 to 40 percent within 2.5 years (Cannon et al., 2008; Woods et al., 2009). In Australia, one-year transition rates for patients ascertained in successive years have steadily decreased over time: 50% in 1995 → 32% in 1997 → 21% in 1999 → 12% in 2000 (Yung et al., 2007). 
(Corcoran, 2010)

-Stigma associated with this "pre" diagnosis, as well as effects on the patient's future. Does a 17-year-old with this diagnosis plan to go to college?

-The potential for inappropriate use of antipsychotics, with little evidence that this would be an effective strategy.

-Who is being assessed? In a research setting, the subjects are people who have been referred because something is off. In a more general population, the false positive rate will be even higher.
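That last point is just Bayes' rule at work: with sensitivity and specificity held fixed, the positive predictive value collapses as the base rate falls. A toy calculation in Python (the sensitivity, specificity, and prevalence figures are invented, not taken from any of the studies above):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(truly prodromal | screened positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.80, 0.85  # hypothetical screening characteristics

# Referred, enriched sample: 1 in 5 truly prodromal
print(ppv(sens, spec, 0.20))   # ~0.57 -- most positives are real

# General population: 1 in 100 truly prodromal
print(ppv(sens, spec, 0.01))   # ~0.05 -- most positives are false
```

Same screen, same raters, and the fraction of true positives drops from over half to about one in twenty just because the pool changed.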

In the process of looking all this up, I came across an editorial entitled, The Impossible Dream: Can Psychiatry Prevent Psychosis? It was published in Early Intervention in Psychiatry in 2007. The authors were Jeffrey Lieberman and Cheryl Corcoran. Their conclusion:

We have demonstrated the political will and proven capacity to address mental illness as a public health issue, but are still limited in our identification of risk states and effective prevention strategies. The powerful tools of neuroscience – neuropsychology, neuroimaging, neurophysiology and genetics – hold great promise for the development of diagnostic methods and novel interventions for the prevention of serious mental illnesses in adulthood. A more complete understanding of the pathological mechanisms that underlie schizophrenia and other disorders will facilitate diagnosis and inform the design of innovative, safe and effective interventions. Then we will truly be able to realize the promise of this revolutionary new approach to prevent serious mental illness.

My concern is about DSM-5. I read somewhere that the reason DSM-5 uses an Arabic five rather than a Roman five, like the previous DSMs, is that it's really DSM-5.0. Soon there will be 5.1, and 5.2. Like iOS.

Clearly, this topic has been on Jeffrey Lieberman's agenda for a while. But right now, the ability to effectively prevent mental illness is merely a wish. And I wouldn't put it past psychiatrists who want to be able to predict and intervene right now, and the pharmaceutical industry, which has a vested interest in encouraging the use of more medications, to jump the gun and push for Psychosis Risk Syndrome to make it into an upcoming edition of DSM-5.

Patience is what's called for. Now is not the time.