Monday, September 28, 2020

JoEP's excellent editorial on standards and practices

I was so thrilled to read this editorial in the Journal of Economic Psychology. I think they've struck the right balance on every point except possibly 14, despite having to reconcile disparate traditions from psychology and economics. Here's the text distilled into a list:

  1. Quality replication studies are encouraged and JoEP has a section dedicated to them (which is slowly growing in usage).
  2. They have added a "brief reports" section to the journal "to speed up the pace of research by conveying short, concise points without the need to artificially inflate the discussion to meet some arbitrary length standard."
  3. Null results are welcomed, especially as replication/brief report papers, but need to be accompanied by properly conducted power analysis.
  4. Pre-registration will not be considered a pro for publication. The details and justification for this point are fantastic so you should read section 1.2 in its entirety.
  5. Multiple studies in a single paper are also not considered a pro for publication; scientific logic dictates how many studies are appropriate.
  6. Making data and relevant code available will be enforced going forward, except in exceptional circumstances.
  7. At the same time, "data free-riding" is discouraged.
  8. "Authors need to disclose any use of deception and the necessity for its usage will be evaluated."
  9. "In studies of economic decision making incentivization is the standard practice, and this is the standard we adhere to."
  10. While in an ideal world we would abolish significance categories altogether, for now p<.05 is a one star effect, p<.01 is two stars, and p<.001 is three stars. Given that p values are themselves random variables, three categories is about the right level of precision for reporting them.
  11. Effect sizes should be presented with the highest possible clarity, for example by standardizing regression coefficients.
  12. "Multiple comparison corrections or strategies of avoiding multiple comparisons should be used", noting that the necessity of multiple comparisons depends on the specificity of the hypothesis being tested.
  13. "We ask authors to state in writing that they have reported all implemented experimental conditions ... and disclosed all measured variables; as well as how their sample size was originally determined. We also plan to add additional guidelines concerning indiscriminate removal of participants from the sample."
  14. Submissions at JoEP are no longer anonymous.
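On point 3, a "properly conducted power analysis" need not be elaborate. As an illustration (my own sketch, not from the editorial), here's the standard normal-approximation sample-size calculation for a two-sided, two-sample comparison with a standardized effect size, using only the Python standard library:

```python
# A-priori power analysis sketch (my illustration, not the editorial's).
# Normal approximation: n per group = 2 * (z_alpha + z_beta)^2 / d^2,
# where d is the standardized effect size (Cohen's d).
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a standardized
    effect d in a two-sided, two-sample test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A "medium" effect at conventional alpha and power:
print(n_per_group(0.5))  # → 63 per group (the exact t-test answer is 64)
```

A reviewer checking a null-result submission against point 3 mostly needs to see a calculation like this done before data collection, with the assumed effect size justified.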

I hope that none of these are particularly controversial, even though standard practice most certainly does not reflect them and they are inconsistently enforced even when required. 

I've highlighted 4 and 13 because 4 is certainly the most controversial, and 13 is relevant to that controversy. In fact it's the key to the solution. Those who object to mandatory pre-registration complain that it places an undue burden on the researcher and restricts the natural exploratory research process. Those who favor pre-registration counter that pre-registration doesn't bind you to do exactly what the document lays out; it simply requires you to explain the deviation or update the pre-registration. And to that I say: if you honestly don't want to hamper the exploratory research process, then it should be sufficient to adhere to 13. If all designs tried and refined are described, if all measured variables are reported, etc., then a careful reviewer can easily detect fishy statistical practices, including all of the practices mentioned in the editorial as motivating the pre-registration movement. And it's much easier to detect these practices if this information is concisely included in the paper itself than by digging through separate pre-registration documents; I guarantee this digging simply won't happen. Proposals to require these kinds of statements have been around since long before the pre-registration craze, and I wish there were more than a handful of referees and editors out there sporadically enforcing the practice.

I'll skip commenting on the rest and refer you to the editorial instead, since it does such a good job. Except, on point 14. It's not that I even disagree with the decision, given the reality of the situation. I'm just frustrated that this seems to be an important issue that journals have collectively thrown up their hands over. There must be other improvements that can be made. E.g., authors should be able to optionally remove all identifying information from their submissions and temporarily remove drafts from the internet where possible, and reviewers can be asked to promise not to look up the authors or papers online until after their reviews are submitted; of course this is unenforceable, but it would go a long way towards establishing a norm. This is only 30 seconds of brainstorming; surely the profession as a whole can come up with better solutions than all-out surrender.

And to those who insist that author identity and affiliation don't actually affect the results of peer review, I invite you to conduct the following study, which could actually provide RCT evidence: Authors from top institutions and/or with strong name recognition in their field partner with authors from lower-ranked institutions and/or lesser name recognition who work in the same field. Journals are asked for permission, prior to the study, to have genuine research papers submitted at some point in the future with only the author and institution falsified, on the condition that the author and institution will be corrected, regardless of outcome, when the paper is either rejected or accepted. Any paper written by a participating author who wants to submit to one of the participating journals is assessed by the authors themselves, and perhaps by a third party, as plausibly submitted by another author from the opposite category. If so, it is submitted at random under one name or the other. In the case of multiple-author papers, the lead author would be the single name on the submission until the review process concludes, at which point it would be corrected to include all the others. Of course that's only the basic gist, but you get the idea.
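The assignment step in this design is just a fair coin flip per eligible paper, seeded so the randomization is auditable afterwards. A toy sketch (entirely my own; the labels and paper names are made up):

```python
# Toy sketch of the randomized submission-identity assignment described
# above (my own illustration; names and labels are hypothetical).
import random

def assign_identities(papers, seed=0):
    """For each eligible paper, flip a fair coin to decide whether it is
    submitted under the high-prestige or low-prestige author identity."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    return {paper: rng.choice(["high-prestige", "low-prestige"])
            for paper in papers}

papers = ["paper_A", "paper_B", "paper_C", "paper_D"]
print(assign_identities(papers))
```

With the assignment recorded up front, acceptance rates under the two identities can later be compared directly, which is what makes this an RCT rather than an observational audit.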

This would be much more logistically feasible to arrange by a senior faculty member with sway at a wide range of journals, so how about one of you put your money where your mouth is? :) Even if the study fails due to reluctance from journals or from researchers, that itself is an interesting finding: a mismatch between stated and revealed beliefs...