Tuesday 25 October 2022

Ethics committees shouldn't provide methodological reviews (IMHO)

I hesitate to write this blog post in case I unleash a further torrent of strong opinions either way on Twitter, but I couldn’t resist. I won’t link to anyone’s tweets, as I don’t want to draw people into a conversation they don’t want to be drawn further into. Many of you will have seen a recent debate on Twitter about whether ethics committees (or IRBs) should conduct methodological reviews. In my opinion, they shouldn’t. Yes to improving methods and experiments, no to doing this through the pre-existing ethical review system.

Good methods aren’t an ethical issue (or aren't an ethical issue that is relevant to an ethics committee)

Why do ethics committees exist for research conducted on human participants (which I will focus on, as I am in a psychology department and the debate has largely centred on human research and psychology)? The answer is that psychology has a history of conducting experiments that have done actual harm to participants. The classic examples are the Stanford Prison Experiment and Milgram’s experiments on obedience. We have a clear moral and legal obligation to ensure the safety of our human participants. We need to ensure that they suffer no harm during the experiment, that they are able to give informed consent, and that we have clear plans in place for storing and handling their data. We also have an obligation to ensure our research doesn’t inadvertently harm non-participants, for example when research into specific groups could lead to discrimination. Getting these things wrong could cause genuine harm to our participants and the wider community, and having a formal committee that reviews these issues is critical.

The argument put forward is that running experiments with “bad methods” is unethical, and should therefore be considered by the ethics committee. The question then becomes: what is unethical about running a “bad” experiment? One possibility is that it wastes the participant’s time. I don’t think this is an ethical issue. If the information provided to the participant clearly states that they will not benefit in any way from participating (apart from remuneration for their time), then this possibility is covered. If the participant provides informed consent knowing this to be the case, it doesn’t seem like an issue to me.

Even if it were, we would then have to ask what “wasting someone’s time” means. I’m sure I could find a few psychologists who think an experiment I design is theoretically important, but if I sampled 100 people on the high street and asked them whether the experiment was a waste of time, they might well give a very different answer. Equally, I might design a methodologically excellent experiment whose question is completely pointless (e.g., does the presence of a teddy bear increase the likelihood of someone choosing a Twix over a Mars bar?). There are no societal norms that provide a clear benchmark here.

The last point is that clear ethical and legal guidelines are already in place that allow ethics committees to set a clear bar for accepting or rejecting applications. Although similar guidelines could plausibly be developed for methods reviews, no such structure currently exists. The likely scenario is that the bar would have to be set so low that it becomes essentially meaningless.

Ethics committees would struggle to assess methods

Let’s say good methods are an ethical issue that warrants consideration by the ethics committee. How then would methods be reviewed? Presumably the committee would consist of a wide range of researchers, and the individual with the most expertise in a given area would be assigned to assess the methods of each application. I think this could work in a department that isn’t too methodologically diverse. For example, in my department most researchers are in some sense “cognitive psychologists”, despite the fact that some of us study memory, some language, some social interactions, and some development. There is therefore a common underlying theoretical framework and a range of methods that we can all assess. Indeed, we do include power analyses (for example) in our ethics applications, and it isn’t too onerous (although I would argue it isn’t necessary).

In more methodologically diverse departments this won’t be the case. If you are the only quantitative researcher in a department of qualitative researchers (or vice versa), there is not enough expertise to provide an informed review. This is an issue in some departments regardless of whether methods are reviewed by ethics committees – for example, a lack of peer support and feedback from colleagues. The problem would be exacerbated if a formal (and inadequate) review process were introduced, and would likely alienate colleagues further.

My other worry is that claiming methods are an ethical issue has the potential to draw attention away from the real ethical issues that led to the formation of ethics committees in the first place. In an ideal world where everyone has plenty of time, this might not matter, but if a committee member pays a bit more attention to the methods review and less to the information provided to the participant before consent, this could cause problems.

Good intention, bad policy

At this point you might be thinking “but shouldn’t we be providing peer review and support to colleagues in relation to their experiments?”. The answer is a very big yes. However, (1) I don’t think this should be subsumed within an ethics application, and (2) I don’t think it should be formalised to the extent that your experiment can be rejected. A good department should have multiple support structures in place to provide feedback on new experiments: within labs via lab meetings, across labs via departmental research or interest groups, within PhD supervision via thesis advisory panels, and ad hoc via peer-to-peer conversations. Many of us also receive feedback on grant applications that includes detailed comments on our proposed experiments. The best way to encourage a researcher to improve their experiment is to ensure your research environment has multiple mechanisms in place for supportive, collaborative feedback.

Perhaps this could be achieved through a formal ethics process (and it appears some institutions manage it relatively successfully). However, it seems more difficult to achieve than through face-to-face, collaborative meetings that allow a rapid back and forth of feedback and response (instead of the binary pass/fail of an ethics committee), and where no single researcher has the power to reject your proposal if they don’t find it up to standard. Granting power to a specific individual needs to be carefully considered, with further structures in place to ensure that power isn’t abused (e.g., a senior colleague blocking a more junior colleague from conducting research because they disagree with their methodological approach).

The general point here, which applies to several other issues in academia, is that the best way to improve research in your institution is to focus your efforts on creating a positive, diverse, collaborative research environment where people want to do their best research (and have the time and resources to achieve this). We can’t use small procedural tweaks to fix larger institutional problems.  


Acknowledgements: Thanks to three reviewers (who will remain anonymous) for providing feedback on an early draft of this blog. You know who you are.
