Wednesday, 18 June 2014

My Negative CV



I present to you my negative CV. Rather than listing all my successes (as I do here), below I list all the jobs I haven’t been offered, all the papers I’ve had rejected, and all the awards I haven’t been given.

Before I start, I should probably note that I’m not doing badly at present. I had a successful PhD, and am in my third year of a five-year post-doc. I have several publications, and was even lucky enough to win a prize for my PhD work. It would be dishonest to claim I’m not doing reasonably well, but I certainly know individuals with ‘stronger’ CVs – prestigious fellowships, publications in ‘big’ journals etc. My point in opening up my CV is more to show the extent of rejection that has gone with the successes I have had. This might offer hope to PhD students, suggesting that rejections don’t spell the end of their career, or it could provoke anxiety, wondering how they could put up with so much rejection (or even that they’ve been at the receiving end of a lot more rejection). Regardless, I hope that the information is useful for some. Whether or not potential future employers will regard it as ‘useful’ is another matter, but one I will have to cope with when the time comes.



Education & Jobs:
2005
Apply for PhD position @ Cambridge – rejected
Apply for PhD position @ York – rejected
2009
Apply for post-doc position @ Cambridge – rejected
Apply for College fellowship @ Cambridge – rejected
2010
Apply for Wellcome fellowship – rejected
Apply for MRC fellowship – rejected
Apply for ESRC fellowship – rejected
Apply for post-doc position @ Oxford – rejected
Apply for post-doc position @ Birkbeck – rejected
2011
Apply for British Academy fellowship – rejected

Awards and Prizes:
2010
Nominated for BPS postgrad award – nope
2011
Nominated for BPS postgrad award – nope
Nominated for EPS Frith Prize – nope
2013
Nominated for BPS postgrad award – nope

Publications:
2008
Submit to Neuroimage – rejected
2010
Submit to Neuropsychologia – rejected
2012
Submit to Nature Neuroscience – rejected
Submit to Neuron – rejected
Submit to Science – rejected
2013
Submit to PLOS Biology – rejected
Submit to PNAS – rejected
Submit to PLOS Biology – rejected
Submit to PNAS – rejected

As can be seen, I’ve been rejected a few times since 2005 (this doesn’t include undergrad courses I was rejected from). I’ve no idea what the average rate of rejection is for someone at a similar stage in their career. I was relatively selective in applying for post-docs in or around the London area (for personal reasons), so perhaps applied for fewer jobs than others might do when finishing their PhD. Who knows. What is clear is that in order to have even a small amount of success you need to keep banging on the door until someone lets you in. The only reason I got into Cambridge to do a PhD was because I went away, did an MSc, and reapplied the following year with a stronger CV. The only reason I won an award for my PhD work was because my PhD supervisor was willing to repeatedly nominate me over three successive years.

I think two points stem from this: (1) get used to rejection, it is part of the job and (2) find a way to channel rejection into something productive. The easiest thing to do when you have a paper rejected is to sit on it for a few months – the best thing to do is to work on it straight away. That exasperated annoyance you get when reviewers/editors haven’t realised the brilliance of your manuscript? Use it to make your paper better and submit it somewhere else (although perhaps wait a day or two just to calm yourself down a little bit first…).



Note – although every attempt has been made to ensure this CV is accurate, it was actually surprisingly difficult to retrace my steps (at least in terms of rejection). It seems even electronic memory has a positive bias.

Saturday, 31 May 2014

Replication and methods sections


Wow, things got a bit shouty there, didn’t they? If you’re not up to speed, some people like replication, others don’t, and they don’t seem to get along very well. In an effort to take all the interest out of this topic, I thought it best to write a boring post about methods sections.

First things first, I like replication (who doesn’t?!). I haven’t been involved in attempting to perform direct replications of other people’s work (at least not until very recently), but I replicate my own findings as much as possible to persuade myself of the validity of my results. I’m broadly supportive of the recent ‘replication movement’, although I’m not sure how novel an idea it really is given that plenty of areas in psychology have been replicating for many a year.

Here is the small point I want to make, and it relates to the issue of being able to replicate purely on the basis of the methods section of another lab’s paper. The pro-replicators often state that if methods sections are written appropriately, anyone should be able to perform a replication of the study. However, this strong statement seems to me to be a bit naïve. I have two reasons for thinking this:

1.  A good methods section should include ‘all the necessary details in order to perform the experiment again’; however, it should also exclude ‘any extraneous detail’. Without the latter restriction, a methods section could read something along the lines of:

“Procedure – each participant was welcomed in the lobby of the ground floor of 17 Queen Square, London, UK. The experimenter shook their right hand and encouraged them to enter the lift. The experimenter pressed the button for the 2nd floor. Doors closed within 10s of this button press…….The participant was asked to take a seat in front of the computer screen with both feet firmly on the floor such that their upper and lower legs formed a right angle with each other.”

You’re probably reading this thinking it is absurd, and I agree. My (small) point is that it can sometimes be difficult to decide what is ‘necessary detail’ and what is ‘extraneous’. We make a judgement call based on our experience and knowledge of the literature. This will always be the case. The issue is that one person’s idea of ‘extraneous’ will sometimes be different from another person’s – I wouldn’t dream of stating that the participant held a hot beverage before starting the experiment, but some think this is obviously relevant in certain situations.

2.  A well-written methods section doesn’t allow ‘anyone’ to replicate the experiment. If I gave a well-written methods section from a psychology journal to a historian of art, I wouldn’t expect them to be able to replicate the experiment (or, if they managed, they wouldn’t do it very well). This argument applies to a lesser extent within sub-divisions of psychology as well – I would expect a cognitive psychologist who studies memory to be able to replicate my experiments more thoroughly than a social psychologist (and vice versa). If this weren’t the case, why would we bother giving students experience in running experiments in order to ‘learn the technique’?

This isn’t to say that we can’t and shouldn’t be able to replicate based purely on the methods section of someone’s paper. It’s simply to say that writing a methods section is hard and requires some amount of subjectivity about what to include. Also, replication is hard – it’s impossible to replicate exactly, so judgement calls have to be made about whether subtle differences between the original and the replication attempt matter.

Given all this, I heartily recommend talking to each other more (preferably in a civil manner). If I wanted to replicate someone’s experiment I would email them and ask them as many questions as possible. They don’t have a right to be involved, or contribute, or have a say in the experiment I am running, but it would be foolish for me to not want to communicate with them. Equally, I would be honoured if someone thought my experiment was worthy of replication. I would of course be nervous – what if it doesn’t replicate!? – and worried – did I make a mistake previously!? – but all these things are natural consequences of being a human being with a vested interest in my own research. I hope I’d be grown up enough to deal with those anxieties. Time will tell…


Monday, 10 February 2014

In defence of ‘trends’



I don’t have a problem with people reporting ‘trends’. There, I’ve said it now so I can’t take it back. I see a lot of tweets highlighting research papers that report non-significant ‘trends’. For example, someone might write in their results section “our comparison of interest was marginally significant (p=.08)”. So how bad is it to say something like this instead of “our comparison of interest wasn’t significant (p=.08)”? My argument is that it depends on the circumstances and the exact wording. This isn’t to defend dodgy statistical practices, but just to add a bit more nuance rather than vilifying anyone who reports ‘trends’.

So, when is it definitely bad? When you want an effect to be there, you bend over backward and report one-tailed p-values, then report a ‘marginal’ effect, and finally make conclusions based on that ‘effect’. This situation undoubtedly applies to a lot of cases and is clearly the wrong way to go about doing things. Perhaps more contentious though: when is it OK to report a ‘trend’? Well, I would argue that as long as you are upfront about whether an effect is significant or not, and you are consistent in the manner in which you report it, it is OK to bring attention to the fact that a certain contrast revealed a ‘trend’.

For example, say I have run 3 experiments. Each experiment has four conditions in a 2x2 factorial design [A1B1 A2B1 A1B2 A2B2]. In all three experiments I see a significant difference between A1 and A2 in condition B1, but not in B2. Great, all three experiments agree with each other! In Experiments 1-2, I also see a significant interaction between these two factors. In other words, the difference between A1 and A2 in condition B1 is greater than the difference between A1 and A2 in condition B2. Great, everyone likes an interaction. However, in Experiment 3, the interaction doesn’t quite reach significance (p=.06). In such a situation I don’t see an issue with saying “the interaction term didn’t reach significance (though we note a trend in line with Experiments 1-2)”*. Some might disagree, but in my view as long as you are upfront about what you are reporting then the reader is at liberty to decide for themselves whether to ‘believe’ your results or not.
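For concreteness, here is a minimal sketch in Python of the kind of analysis I have in mind, using a repeated-measures ANOVA from statsmodels. The column names, the number of participants, and the simulated effect sizes are all illustrative assumptions of mine rather than anything from a real dataset; the point is simply that the ‘A:B’ row of the output is the interaction term being discussed.

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Simulate one of the hypothetical 2x2 within-subject experiments:
    # an A1-vs-A2 difference is built in for B1 only, so the A x B
    # interaction is the effect of interest.
    rng = np.random.default_rng(0)
    rows = []
    for subject in range(24):
        for a in ("A1", "A2"):
            for b in ("B1", "B2"):
                effect = 0.5 if (a == "A1" and b == "B1") else 0.0
                rows.append({"subject": subject, "A": a, "B": b,
                             "score": effect + rng.normal(0, 1)})
    df = pd.DataFrame(rows)

    # Repeated-measures ANOVA: main effects of A and B plus the A x B interaction.
    res = AnovaRM(df, depvar="score", subject="subject", within=["A", "B"]).fit()
    print(res.anova_table)  # the 'A:B' row gives the interaction F and p-value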
 
If you do disagree, the fact remains that the use of the word ‘trend’ is probably here to stay. So with that in mind, I’ve tried to come up with some suggestions that should hopefully bring clarity if people decide they do want to use the dirty word:

  1. State first and foremost that the effect is not significant – it isn’t, get over it.
  2. State clearly what you think a ‘trend’ is, ahead of time. For example, any two-tailed p-value between .05 and .08.
  3. Apply this criterion to all contrasts, whether it is a contrast you predicted would be significant or not. If you think ‘trends’ are worth bringing attention to, this applies equally to effects you might not be as interested in or didn’t predict would be significant (a small sketch of this is given after the list).
  4. Don’t draw conclusions from any ‘trends’ unless they are supported by further evidence – as in the example I outlined above.
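To make point 3 concrete, here is one way you might hard-code such a criterion and apply it to every contrast in the same way. This is just a sketch under my own assumptions – the .05/.08 cut-offs come from the example in point 2, and the function name and the example p-values are made up for illustration.

    def label_p_value(p, alpha=0.05, trend_ceiling=0.08):
        """Label a two-tailed p-value as significant, a 'trend', or neither."""
        if p < alpha:
            return "significant"
        if p < trend_ceiling:
            return "non-significant trend"
        return "not significant"

    # Applied uniformly to every contrast, predicted or not:
    for name, p in [("A1 vs A2 (B1)", 0.003),
                    ("A1 vs A2 (B2)", 0.41),
                    ("A x B interaction", 0.06)]:
        print(f"{name}: p = {p:.3f} -> {label_p_value(p)}")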

Applied in this way, all you are doing is bringing attention to particular non-significant p-values in a consistent, unbiased manner. I’m guessing there will be some who disagree with this type of approach. I’d be interested to hear people’s views regardless.

* Actually, in this situation you could run a mixed ANOVA to compare the interaction term across experiments. If you find a significant interaction between A and B that doesn’t further interact with the between-subject factor ‘Experiment’, then everyone is happy.
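For what it’s worth, here is a rough sketch of that footnote in Python. Rather than fitting the full mixed ANOVA in a stats package, it uses the standard shortcut for a 2x2 within-subject design: collapse each participant to a single interaction contrast score, then test (a) whether the contrast differs from zero overall (the A x B interaction) and (b) whether it differs across experiments (the term that should not be significant if ‘everyone is happy’). The data, column names, and group sizes are invented for illustration.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Simulate three hypothetical experiments with the same 2x2 within-subject design.
    rng = np.random.default_rng(1)
    rows = []
    for exp in (1, 2, 3):
        for s in range(20):
            for a in ("A1", "A2"):
                for b in ("B1", "B2"):
                    effect = 0.5 if (a == "A1" and b == "B1") else 0.0
                    rows.append({"experiment": exp, "subject": f"e{exp}s{s}",
                                 "A": a, "B": b, "score": effect + rng.normal(0, 1)})
    df = pd.DataFrame(rows)

    # One interaction contrast score per participant:
    # (A1 - A2 in B1) minus (A1 - A2 in B2).
    wide = df.pivot_table(index=["experiment", "subject"],
                          columns=["A", "B"], values="score")
    contrast = ((wide[("A1", "B1")] - wide[("A2", "B1")])
                - (wide[("A1", "B2")] - wide[("A2", "B2")]))
    scores = contrast.rename("contrast").reset_index()

    # (a) The pooled A x B interaction: is the mean contrast non-zero?
    print(stats.ttest_1samp(scores["contrast"], 0.0))

    # (b) Does the interaction differ between experiments?
    groups = [g["contrast"].values for _, g in scores.groupby("experiment")]
    print(stats.f_oneway(*groups))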