We chat about two new studies that took different approaches to evaluating the impact of paying reviewers on peer review speed and quality.
Dan and James discuss a recent piece that proposes a post-publication review process triggered by citation counts. They also cover how an altmetrics trigger could alternatively be used to prompt a more immediate post-publication critique.
Dan and James discuss a recent editorial that argues double-blind peer review is detrimental to scientific integrity and offers some suggestions for improving peer review, such as open peer review reports and requiring ORCIDs for all authors.
We chat about the events that started the replication crisis in psychology, as well as Dorothy Bishop's recent resignation from the Royal Society.
In this episode we chat about a Nordic approach to evaluating journal quality, and how we should be teaching undergraduates to evaluate journal and article quality.
We discuss the recent retraction of a paper that reported the effects of rigour-enhancing practices on replicability. We also cover James' new estimate that 1 in 7 scientific papers is fake.
Open access articles have democratized the availability of scientific research, but are author-paid publication fees undermining the quality of science?
Dan and James discuss a paper describing a journal editor's efforts to obtain data from authors who submitted papers with results that seemed a little too beautiful to be true.
Dan and James answer a listener question on what practices the behavioural sciences should borrow (and ignore) from other research fields.
We discuss how following citation chains in psychology can often lead to unexpected places, and how this can contribute to unreplicable findings. We also discuss why team science has taken longer to catch on in psychology compared to other research fields.
Dan and James discuss why innovation in scientific publishing is so hard, an emerging consortium peer review model, and a recent replication of the 'bottomless soup bowl' study.
Dan and James discuss how scientific research often neglects the importance of maintenance and long-term access to scientific tools and resources.
Dan and James discuss the RetractoBot service, which emails authors about papers they've cited that have since been retracted. What should authors do if they discover that a paper they cited was retracted after their own paper was published?
We discuss two recent plagiarism cases: one you've probably heard about, and another you probably haven't, unless you're in Norway. We also chat about the parallels between plagiarism and sports doping: would people reconsider academic dishonesty if they were reminded that future technology may catch them out?
We chat about a paper on the invisible workload of open science, and why academics are so bad at tracking their workloads.
We chat about a recent blogpost from Dorothy Bishop in which she proposes a Master's course providing training in fraud detection: what should such a course teach, and where would its graduates work to apply their training? We also discuss whether open science is a cult that has trouble seeing outward.