Comments on Aidan's Aviary: "Some thoughts on the UCL 'Is Science Broken' debate"

Anonymous (19 March 2015, 07:36):

Oh yes, totally. You need to do one thing at a time. Changing the publication structure to RRs and at the same time having public reviews (I am totally happy with keeping names anonymous, personally) would have been too ambitious.

Chris Chambers (19 March 2015, 07:29):

I fully agree - open peer review (even if anonymised) is always a good thing. It's something I considered trying to package along with RRs when they were launched, but the reality was that it was already a massive push into the unknown for Cortex, so there wasn't a lot of enthusiasm for even more unknowns (at least that was the view at the time -- these things are continually open to revision and debate).

You're right, of course, about the wide grey area -- which brings me back to the role of the editor. Navigating successfully through swathes of grey is pretty much a (good) editor's job, regardless of whether a paper is an RR or not, and it's a sad reality of modern publishing that editors have become a bit zombified at many journals, overstretched between too many editorial boards and unable to read papers or offer any critical insight.
Dorothy Bishop has a classic blog post about this: http://deevybee.blogspot.co.uk/2010/09/science-journal-editors-taxonomy.html

Anonymous (19 March 2015, 03:05):

Yeah, as I said, this sort of thing is what prompted JN to change their policies. I think anything that makes reviewers think twice about whether what they're asking for is actually necessary is a good thing.

I think there is probably a relatively wide grey area, though. This may really be what always made me wary about prereg: I feel it may encourage reviewers, editors, and authors to be too conservative about post hoc robustness exploration. But this needn't necessarily be the case. Again, this is where public reviews would come in very handy. At least if that happens it is then readily apparent, and someone can spot it, use the data, and do the analysis themselves if they feel so inclined.

Chris Chambers (19 March 2015, 02:29):

@Sam - I'm not sure we disagree. If the exploratory analysis that the reviewer asks for is needed to address a confound or support the authors' conclusions, then it would be required under criterion 5. What I'm referring to are cases where reviewers ask for additional analyses for other reasons, e.g. to address additional questions that are not central to the point of the paper. This often happens in standard reviewing because a reviewer has a particular bent or interest, or wants to ask a different question of the data.
In such cases, where reviewers want this and authors don't, I don't think it's unreasonable to expect reviewers (as authors of comments) to perform such analyses themselves on the open data, rather than expecting the authors to do it in order to get published.

Anonymous (19 March 2015, 01:44):

I think this may be one of the key points where our views differ:

"In general, however, the authors will retain control in this situation -- there is no power for a reviewer to impose a particular exploratory analysis for its own sake."

I think there are many situations where they should. Of course, in the ideal situation this would have been predicted in the Stage 1 review, but I don't think that is realistically the case. Very often you just won't think of a possible confound until you see the results next to the methods. I think the two-stage review may help not just the authors but also the reviewers to think ahead more. That would certainly be a good thing. But except for the simplest designs, there will always be things you can think of only when you see the data.

I don't think it's generally fair to say that reviewers should just conduct their own analysis on the data if they feel something should be done. For me the threshold to reanalyse someone's data is extremely high. It is time-intensive, and I already spend a lot of time on reviews as it is. I have to be seriously skeptical of some results to go that far (as with that telepathy paper). I think the primary responsibility lies with the authors to support their conclusions.

I agree that a good editor should be able to make that judgement call, though.
The reason J Neuroscience banned supplementary materials was in part that reviewers kept asking for too many tangential analyses and control experiments. So perhaps there can be less of that after all. Not every paper has to answer all questions.

Chris Chambers (19 March 2015, 00:58):

Yes, that's right - reviewers are of course welcome to suggest additional exploratory analyses at Stage 2, but authors wouldn't be required to conduct them unless doing so was necessary to adhere to the Stage 2 review criteria (available here: http://cdn.elsevier.com/promis_misc/PROMIS%20pub_idt_CORTEX%20Guidelines_RR_29_04_2013.pdf). One of the only situations in which I can imagine this happening would be if the authors wanted to make a claim about the interpretation of the results that the reviewers and editors believe wasn't justified without an additional analysis (this could violate criterion 5: "Whether the authors' conclusions are justified given the data"). This is basically Sam's point about robustness tests, which I agree with completely. Under such circumstances, the authors may be required either to report the analysis or to remove the claim.

In general, however, the authors will retain control in this situation -- there is no power for a reviewer to impose a particular exploratory analysis for its own sake.
As an editor, I would be particularly sympathetic to the authors in any disagreement, because the requirement for data sharing means that a reviewer can easily conduct such an analysis themselves if they want to, and if they feel it reveals something important they can submit the outcome of that analysis as a comment.

My instinct is that the opposite situation is more likely: that reviewers may object to exploratory analyses included by authors, particularly if the authors based their conclusions on them at the expense of the pre-registered analyses. This relates to Stage 2 criterion 4 ("Whether any unregistered post hoc analyses added by the authors are justified, methodologically sound, and informative"). Disagreements in such cases are no different from those over standard unregistered papers, but they require careful and proactive editing.

(As a footnote, I must say I really enjoy editing Registered Reports - it's cool being able to watch the dialogue unfold between authors and reviewers at the outset, and then tracking this all the way through to the final outcomes -- and like I said on Tuesday night, the tone of the interactions has been very constructive so far, much more so than my experience editing unregistered papers. The whole process has helped build my faith in something Sophie said - that we do this job, above all, because we love finding out new things.
Wouldn't it be great if, by accepting papers in advance, we allowed scientists to recapture this by freeing them from the need to get "good results" and tell stories?)

Anonymous (18 March 2015, 15:26):

I think Chris already answered this (or a similar question) by saying that Registered Reports will really require editors who are more proactive than is commonly the case now (which he, I think, called 'clicking buttons'). If that works, I think it can only be a good thing. Editors should be able to make a judgement call. As much as I appreciated that the review process at F1000 was all about the reviewers, the lack of real editorial decision-making is, I think, holding their model back.

But on the plus side, I think if all reviews were public this would also further encourage proactive editing. Since everyone can read the reviews, they can tell if the editing was lazy or biased or whatever.

Regarding Hugo's other point, personally I think that even with Registered Reports the balance of exploratory and confirmatory results in most good studies is likely going to be at least 50:50. I think this is one of the things that has put me off prereg in many discussions. I agree that it is a good idea to declare a priori what you are planning to do, but I think you also need robustness tests, exploring the data from a skeptical perspective. A lot of those ideas you are almost inevitably going to get *after* you have collected and seen the data.

Aidan Horner (18 March 2015, 12:36):

Good question.
Would be interested to hear from Chris about this. My guess is that reviewers are welcome to suggest further exploratory analyses, but authors have more power to simply say no, as they already have preliminary acceptance. It could make that exchange more collegiate, since reviewers have less power to sink a paper if the authors don't want to do those further analyses.

Hugo Spiers (18 March 2015, 12:05):

Great blog post, Aidan.
From what I heard yesterday, it seems there is no 'binding' to a design/analysis, in the sense that non-planned analyses are encouraged but must be logged as non-planned. I'd personally be very happy to read a pre-reg article that says: we did the planned analyses, found nothing (short section), but then we realised... (much longer non-planned results section). That would not look bad to me. It would look honest.

I had to leave before the wine, so I didn't see discussion of this: what happens when 4 or 5 reviewers (the numbers on my last two fMRI papers) at Stage 2 suggest many vast post hoc analyses based on the novel, surprising findings that were not predicted at Stage 1?

Presumably the authors have the opportunity to log which analyses were suggested by reviewers and which were their own.

A third of the analyses in my lab's recent paper (Howard et al., 2014, Current Biology) were due to very helpful reviewer suggestions in response to particular results we had found. I doubt these analyses would have been suggested at Stage 1 of a pre-reg without knowing the results (but I could be wrong).

Neuroskeptic (18 March 2015, 09:26):

Good post!
I've never known anyone who says that non-prereg experiments are all worthless. Certainly I don't believe that! Although I would say that the problem with non-prereg evidence is that its worth is hard to determine. A paper can present the most compelling results, but you don't know how many other analyses were run before those results appeared. Maybe none, maybe a few, maybe a hundred.

Anonymous (18 March 2015, 09:14):

As I said, I have met people who honestly state that non-prereg experiments are worthless. However, I regard this mainly as trolling. Clearly nobody who actively promotes pre-reg (like Chris or EJ) has been arguing this.

As for the new problems, or the problems that aren't new but have got worse (like publish or perish), I think they have largely to do with the challenges of a generally changing world. As Marty has put it in the past, there are just too many scientists. I don't agree that there can ever be too many (and I don't think he thinks so either), but I think the number of people and the rapidly changing technology (both in science itself and in publishing) are causing these problems. I believe, though, that these same things are also the means with which we can solve them.

Aidan Horner (18 March 2015, 08:42):

I agree; I am ideologically opposed to the idea that non-prereg papers are worthless. As you state, though, I'm not sure anyone has argued or would argue this, so it is a bit of a non-starter.
I also agree with your statement about science being in (relatively) good health. Some things have probably become worse (and steps are being introduced to address those issues, e.g. prereg) and some things have improved (e.g. data sharing, open access). I think you are correct that we need to be brave in confronting what isn't going right, whilst not losing sight of the big picture.

Anonymous (18 March 2015, 08:26):

I really hate online media - I lost the comment I wrote because I wasn't logged in :( Trying again...

Thanks for this summary. I'd like to point out that most of the questions and answers from Chris's talk are already published in this paper: http://orca.cf.ac.uk/59475/1/AN2.pdf

I had in fact taken some notes about them, because I don't think all the answers are satisfactory. However, the questions were supposed to be asked by the audience (and they were great!), and I also think the event wasn't supposed to be about debating pre-registration but about how to improve science in general.

I may write a blog post (as me this time) at some point in which I discuss these points. However, I agree with you that just going on about logistical issues is nitpicking. In the end we need to try out the preregistration concept (both the standard model and the Registered Reports model Chris talked about). You can't know what good it does without any data.

You may be right that some concerns are driven by an ideological opposition. But at the same time, I have actually heard people suggesting that unless experiments are preregistered they are worthless. I *am* ideologically opposed to that idea.
I think it was pretty clear from yesterday's discussion that nobody on the panel believes this, of course (and I might add that I never thought any of them did!).

I hope the main point of my opening talk (had I not screwed up part of it :P) came across: considering that we have discussions like this, the answer to the question "Is science broken?" is a clear no (I believe you tweeted something to that effect too...).