Faculty job searches (3) – “I didn’t come here to make friends”

Thanks to everyone for their interest in these posts, and apologies for the 2-week gap since the last one. When I left off, our search committee had made an unexpectedly easy cut from 160 applicants to 27, and our next job was to reduce this “long-short” list further to a group of 5 whom we would invite for interviews.

As is my wont, a bit of digression first. I’ve previously discussed aspects of the academic job search that I consider, if not myths, then at least not hard and fast rules in any search that I’ve been part of here at my State U. Of these, the one I feel most confident in rejecting is the idea that having PI-level funding (e.g. a K99/R00 award) is a prerequisite for a successful job search. See the comments to the last two posts for more on this and other topics. An additional myth I would like to bust is that a search committee “already knows who they want to hire” before the applications come in. It is true that we have had searches in the past that were focused in terms of research area, e.g. we wanted to hire someone in genomics, but if we run an ad on Science Careers or somewhere similar, we have no idea what specific individual we want to hire. This isn’t to say that we have never pursued and hired specific individuals, usually as “poaches” of PIs from other institutions, but unless you are scouring our university’s human resources site, you wouldn’t even know that these positions were open to begin with. (As a public university, we are obliged to post positions even if we do have a targeted hire in mind – at private universities, such hires presumably occur entirely out of sight.) If you see an advertisement for an open position in my department, it really is an open position.

A related idea is that, to get noticed by a search committee, you need some direct, semi-nepotistic connection to the department – e.g. your boss calls his buddy the department chair, and tells them how great you are, and then you get invited for a job talk. This must happen sometimes, but it didn’t happen with any of our candidates. But candidates do benefit from the sort of extended web of connection and reputation that falls under the rubric of “pedigree” – as I mentioned in my last post, almost 1/3 of our “long-short list” of 27 applicants had National Academy of Sciences members as postdoc advisors – someone else can run a hypergeometric test to confirm that this is greater than would be expected by chance. Among the benefits of pedigree is that you are coming from a lab with, potentially, a track record of turning out successful new PIs; another is that your work will glitter with the reflected “excitingness” of your advisor, to the extent that they are familiar to the search committee. On the other hand, very few PIs, even NAS members, are true “household names,” and on a relatively diverse search committee like ours, it is rare that more than one or two members really know your advisor’s work well.
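Since I’ve invited someone else to run that test: a true hypergeometric test would need the NAS-advised fraction of the full applicant pool, which I haven’t given you. As a rough stand-in, here is a binomial upper-tail calculation in Python (standard library only); the 5% baseline rate is a purely hypothetical assumption, not a figure from this post.

```python
from math import comb

def binom_upper_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 8 of the 27 long-short-list applicants had NAS-member postdoc advisors.
# The true baseline rate of NAS-advised applicants is unknown; 5% is a
# hypothetical stand-in for illustration.
p_value = binom_upper_tail(8, 27, 0.05)
print(f"P(>= 8 of 27 at a 5% baseline) = {p_value:.1e}")
```

At any plausible baseline rate, 8 of 27 is far more NAS pedigree than chance would predict.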

So how did we go about winnowing down our list of 27? First, we each read every application in depth, including the papers that each applicant included in their package. I can’t say what every committee member was looking for, but for me there were three major questions, focusing on the candidate’s research statement: is it interesting, is it distinctive, and is it feasible?

Interesting is subjective, of course, but I think we all made a good faith effort to consider the judgement of our colleagues whom we were representing on the committee. And it is amazing how many CNS papers turn out to be boring data dumps when you actually read them – this seems to be especially true at Nature and Nature Genetics. Being first author on a 50-author behemoth suggests that one has serious managerial chops, but would more than a few people in the department actually be excited to hear about your work?

By distinctive I mean, are you doing something that nobody else is doing in the department or at the university (or, ideally, in the world)? Our research community is small enough that we try to avoid the potential for conflict between new and established faculty, and our department is diverse enough that we want to avoid piling up too many people in one specific area. In addition, if you come from a super-high-power lab, are you setting an agenda that takes you out from under the shadow of your former advisor?

In considering feasibility, I was thinking mainly about the potential to attract sustainable funding. One thing I’ve learned from NIH study section service is that there is a vast wasteland of struggling faculty out there who seemed to start life with every advantage – CNS papers as postdocs, prestigious new-faculty awards from the American Cancer Society – but now suffer from a persistent inability to land an R01 or publish a high-impact paper on their own. Looking good on paper is not enough! This is the dark side of pedigree – you come from a Cell factory lab, with a very creative and dynamic advisor, and it turns out that the creativity and Cell papers don’t come with you.

A danger sign for me, then, was a too-vague research statement – so you’ve isolated 10 new cancer susceptibility genes, but you can’t tell me what you’re going to do with them. Almost as bad, though, is if you tell me you’re going to do proteomic analysis of each of these gene products, but all your previous training is in human linkage mapping. Ambition is nice, but BS is often easy to smell out.

On the other hand, I think there are specific strengths in our department and university (as at any institution), and if I could imagine you taking particular advantage of those strengths – especially if you had a track record of collaboration in your training – then this counted as a positive for me.

After we spent a week or so reviewing our long-short list in detail, and before we met again, I asked everyone on the committee to rank their personal top-tier and second-tier choices, around 5 of each (i.e. a number similar to the number of candidates we planned to interview). It was once we all met that the search process began to resemble a “Survivor”-style reality show, with calculation and horse-trading and alliance-making. The problem is this: if you want to recruit someone in developmental genetics, say, but the other committee members have other interests, then you really need to find a single candidate in that area and throw your support behind them, to ensure that they get an interview. If you try to push two or three candidates equally hard, you will seem inflexible to your colleagues who want to invite people in other disciplines. But if you signal that you are willing to be flexible, then you will probably find another committee member to back your choice, while you back theirs.

The result, however, is that the process of narrowing down the final list inevitably has some feeling of randomness, as if, with a slight fluctuation in the space-time continuum (or a slightly different committee roster), the “second-best” developmental geneticist could have ended up as the favorite, without any detrimental effect on the overall quality of candidates invited out for interviews. In other words, we almost certainly failed to interview applicants as good as or better than the ones that we did bring out. And given the imperative to bring out a diverse slate of candidates, it would have been challenging to bring out both developmental geneticists – one good applicant ends up “laterally inhibiting” their research-space neighbors.

If everyone had come to our meeting with the same 5 candidates in their top-tier list, however, our work would have been done. As it happened, 13/27 candidates ended up in someone’s top-tier list, while 7 actually ended up without any top- or second-tier votes. It was straightforward to eliminate this latter group, as well as the 7 candidates who received only second-tier votes. These 14 who didn’t make the cut included 6 applicants with CNS papers, and two with K99 awards, for those scoring along at home.
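As a toy illustration of this tallying step, here is a Python sketch with invented ballots and candidate letters (the real ballots are, of course, not reproduced here); it partitions candidates by the best tier of vote they received, the same kind of split described above.

```python
# Hypothetical ballots: four committee members each name top-tier and
# second-tier picks. Candidate letters are invented for illustration.
ballots = [
    {"top": {"A", "B", "C"}, "second": {"D", "E"}},
    {"top": {"A", "D"},      "second": {"B", "F"}},
    {"top": {"A", "G"},      "second": {"C", "H"}},
    {"top": {"A", "E"},      "second": {"G", "I"}},
]
all_candidates = set("ABCDEFGHIJ")  # "J" receives no votes at all

top_voted = set().union(*(b["top"] for b in ballots))
second_only = set().union(*(b["second"] for b in ballots)) - top_voted
no_votes = all_candidates - top_voted - second_only
unanimous = set.intersection(*(b["top"] for b in ballots))  # on every top list

print(sorted(top_voted), sorted(second_only), sorted(no_votes), sorted(unanimous))
```

In this toy run, “A” plays the role of the candidate who made every member’s top-tier list and was therefore a definite invitee.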

Of the 13 who comprised the “short-short” list, one got top-tier votes from all four committee members, so they were a definite invitee. It was obvious that we wouldn’t be able to cut down the remaining 12 without some serious arguments, so we decided to seek outside opinions: for each application, we chose one or two additional faculty (in our department and in others on campus) to consult with, based on their familiarity with the candidate’s field. And we returned to studying these applications in more depth over several days before meeting again.

Our last meeting was definitely the most contentious, but we did manage to narrow down to a final list of five candidates. Not surprisingly, each committee member got at least one of their top-tier applicants into the final group (apart from the one who made everyone’s top list). And all of us are still talking to each other, and we did end up making a successful hire!

In a last post, hopefully next week, I’ll try to summarize the key lessons from my inside view of the faculty search committee, and perhaps discuss a little bit about how we ended up narrowing down from five candidates to one job offer. Thanks again for reading, and I will try to answer questions in the comments section in the meantime.

Faculty job searches (2) – making the first cut

I appreciate all the interest my first post generated, as well as the fact that the supportive comments (many from people in considerably more prominent departments than mine) have so far outnumbered the doubters. I also want to acknowledge that there is no one-size-fits-all formula for how a search committee should work, what priorities a department should have in its hiring (e.g. whether they want to require new colleagues to come in with K awards), or how successful a new PI could be under any given arrangement. If your university/department is doing well, i.e. getting good papers out, getting enough funding to stay afloat and help temporary stragglers, and hiring exciting new colleagues who prosper, then I can’t say that our way of running a job search is better than yours. What I can say is that what I am describing has been typical of all the job searches my department has run since I got here (and I’ve been on 3 or 4 search committees previously), and I suspect it is similar to that of other basic science departments at my medical school, based on the characteristics of their hires.

Okay, on to the mechanics of the search. We had a relatively open-ended advertisement, not focusing on any specific area apart from research that would fall under the broad category indicated by our department’s name. There were four members of the search committee, representing relatively diverse areas spanning human disease, developmental biology, genomics and evolution: one relatively senior faculty member, who had been with the department for 20+ years; myself, who has been here ~10 years; another tenured member who moved here as an established PI a few years ago; and a more junior, tenure-track assistant professor. So, varying interests and varying levels of institutional memory.

We had ~160 applications, and before the committee met, I made a master Excel spreadsheet on which I listed a brief impression of every one. I didn’t read every application from front to back, initially – instead I did more or less what I do for NIH grant applications, when I first get the pile for study section, which is to look at the applicant’s publication list and their research statement (Specific Aims, for a grant). The two questions I had were: are they a good fit, and are they productive (not “do they have Cell, Nature or Science papers”)? Despite what some commenters have stated, the only clear “filter” I applied was of fit – if someone had four CNS papers but they were all focused on microbial pathogenesis, I wrote them off.

To make the first cut, each committee member was assigned 80 applications such that each application was read in full by two members, and each committee member would have 40 applications shared by one other member and 40 by another. The goal was to whittle down the list to the point that we could all read each application very closely. This went surprisingly easily: in one meeting, we went from 160 applications to 27 (I will call this the “long-short list”), with no serious controversy.
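I won’t bore you with exactly how we divvied up the pile, but one scheme that satisfies the constraints above (each application read by two members; each member reading 80, split 40/40 between two colleagues) is to arrange the four readers in a cycle. A minimal sketch, with reader names invented:

```python
# Four readers in a cycle (A-B, B-C, C-D, D-A): each consecutive pair takes
# a block of 40 applications, so every application gets two readers and
# every reader ends up with 80 applications, 40 shared with each neighbor.
readers = ["A", "B", "C", "D"]
pairs = [(readers[i], readers[(i + 1) % 4]) for i in range(4)]

assignment = {app: pairs[app // 40] for app in range(160)}

loads = {r: sum(r in pair for pair in assignment.values()) for r in readers}
print(loads)
```

In practice you would also shuffle the applications first, so that no pair of readers gets an alphabetically biased block.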

Apart from fit, what were the criteria that made an application an easy (if often depressing) “no”? Although we didn’t filter postdocs for K99s or other K-type awards at this point, we definitely filtered established PIs based on their funding. If an Assistant Professor was trying to make a lateral move, but didn’t have an R01 that would last more than a year, there was no way we would consider them. And if the applicant had no first-author papers as a postdoc, even in press, they were not going to get a closer look. And finally, there is what I would call the “meh factor” – “meh” being a frequent comment of mine on applications that I found simply unexciting for one reason or another. This is probably the most difficult criterion to explain or justify, and it will vary from one search committee to another, but it basically comes down to the question of, if this person joins our department, are we going to be excited to hear what they are working on? Is it incremental, or me-too-ish, or is it actually something novel (to us, at least) with a lot of room for growth?

Aha, say the haterz, that’s where you are hiding your filter for Cell-Nature-Science papers!

Not so: I have plotted here the impact factor of the “best” paper (don’t get the vapors, PLoS-ONE true believers) that each of the 27 first-cut applicants had during their postdoc (or within last 5 years, for established PIs):

[Figure: bar chart of the impact factor of each of the 27 first-cut applicants’ “best” papers, with the final top eight applicants highlighted in red]

(Please forgive the ugly Excel formatting – I just realized that I don’t have R on my new laptop, and I didn’t want to wait for it to download.)

Three points: first, the papers published by our long-short list applicants cluster about equally between “super-elite” journals (CNS and spin-offs including Nature Genetics) and merely “elite” journals such as PNAS and Genome Research. This is what I meant in my first post – a PNAS paper can still get you an interview. Second, in red I have highlighted the final top eight applicants, including the five that we interviewed. We clearly had plenty of CNS to choose from, yet left more than half on the table. Third, there is clearly some relationship between super-elite publication and whether or not an applicant made subsequent cuts (6/15 of that group, vs. 2/12 of those with merely elite publications), but I personally believe that the later cuts were not based on the IF of any individual paper as much as on the “excitingness” of the research, a factor that can be, imperfectly, related to whether or not the CNS gods choose to smile on a particular topic.
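For anyone who wants to put a number on that third point, a one-sided Fisher exact test on the 6/15 vs. 2/12 split can be computed from the hypergeometric distribution with nothing but the Python standard library. The result (p ≈ 0.19) is consistent with my reading that super-elite publication alone didn’t drive the later cuts.

```python
from math import comb

def hypergeom_upper_tail(k: int, N: int, K: int, n: int) -> float:
    """P(X >= k) for X ~ Hypergeometric(N population, K successes, n draws)."""
    return sum(
        comb(K, i) * comb(N - K, n - i) / comb(N, n)
        for i in range(k, min(K, n) + 1)
    )

# Population: 27 long-short-list applicants, 8 of whom made later cuts.
# "Draws": the 15 applicants with super-elite (CNS-tier) papers, 6 of
# whom were among those 8. One-sided test: how surprising is 6 or more?
p = hypergeom_upper_tail(6, 27, 8, 15)
print(f"one-sided p = {p:.3f}")
```

With p well above 0.05, the apparent advantage of the super-elite group could easily be noise at this sample size.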

As I said, the first cut went very easily, and I don’t think it would have varied by more than 2-3 applicants with a different committee makeup, a different funding climate, etc. Later rounds, which I will discuss in the next post, were more contingent, and in a different universe many of these 27 applicants could have been invited. So I guess the take-home advice at this point, to go back to my first post, is that (a) CNS papers help, but one can succeed in the job search with a PNAS, an eLife or a PLoS Genetics; (b) K awards are relatively irrelevant as an independent variable – to be sure, very successful postdocs can get K awards, but this didn’t make much difference in our evaluation; (c) pedigree matters: of the 27 first-cut applicants, 8 came from labs of NAS members. There is a lot of tangled cause-and-effect there – in our first round, we certainly didn’t go into a lot of discussion of this PI vs that PI, but the “brand” of a high-profile advisor certainly helps. And of course, you don’t get into the National Academy without being good at producing high-profile papers. I will try to unpack pedigree in future posts, as well as talk about the more nitty-gritty arguments that went into selecting our short-short list of applicants.

Faculty job searches – a perspective from the other side of the table

After years of dormancy (good luck finding the old stuff!), I’ve been badgered back into the blogging game. In particular, I was urged to offer my perspective on the life sciences faculty search process, from the searchers’ perspective. Starting a little over a year ago, I served as chair of my department’s search committee, which concluded in the spring with a successful hire. With that experience still relatively fresh, I hope I can share some important insights into how our top candidates caught our eye, as well as the behind-the-scenes process of selecting those candidates. In addition, while respecting the bounds of confidentiality, I will try to talk about what made our final choice rise to the very top.

As a bit of background: my department is a basic science department at a medical school, part of an R1 state university in the Intermountain West. Our department, our biological research community and the university as a whole are what I would consider solidly upper-middle-tier. Think, say, Penn State, not Stanford. Not coincidentally, Penn State was one of the other offers I was considering when I took the job here, while Stanford’s offer must have been lost in the mail. But I love my department for its diversity and enthusiasm, and this was the job I wanted most when I was done with my own first interviews over a decade ago.

Right off the bat, I think it is worth addressing three claims that are often made regarding the postdoc experience, two of which I think are pernicious fallacies and the third of which is something I hope to explore in future posts.

First: You need a Cell, Nature or Science paper to get a job. There is no question that you need a “high impact” paper to get noticed, but worrying too much about specific journals is a fool’s errand. Of our top 12 candidates, 7 had a CNS paper during their postdoc, winnowed down to 3 of the final 5 that we interviewed. And the candidate who got the offer? Spoiler alert, as this will be the subject of a future post, but they were one of the two CNS-less applicants in our final pool.

Second: You need to “bring money with you,” preferably in the form of a K99 award. This one would make me laugh, if it weren’t so destructive. Zero of our top 12 applicants had a K99 award, or any other grant that would contribute substantially to their independent research. Expanding the list to our top 27, there were 3 K99s as well as 3 more established PIs who already had existing funding. In other words, having PI-level funding was no guarantee of getting interviewed, nor was the lack of one an impediment. If you are considering a job where you need to “bring money,” you should ask hard questions about the financial fundamentals of that department/university. And if you find out that your potential future department sees your funding as an opportunity to offer you a smaller startup package, run don’t walk away. (And if you are a current faculty member running a job search under such conditions, get in the fucking sea.)

Third: You need the “right pedigree” to get a job. This one is probably the most troubling, because it is hard to refute. I tried, consciously, to avoid using pedigree as a selection criterion, but even looking at my own notes I can see that I often noted “XXX lab,” where XXX was a PI whose work I knew relatively well, when summarizing a candidate’s research interests and background. Was this because it served as useful shorthand (“Bruneau lab – oh, they’re into cardiovascular development” [1]), or because it served as a marker of possibly unearned prestige? I will try to keep the relationship between pedigree and applicant strength in mind in future posts; although this is something that any postdoc about to go on the job market can’t do anything about, it is something that PhD students need to consider when planning their future training.

[1] Hypothetical example – we didn’t have any applicants from that lab!