April highlights from the world of scientific publishing

6th May 2014

What I learned from Twitter last month: new data on how much money publishers charge libraries, discussions of where and how post-publication peer review should happen, and who publishes in megajournals and why.

What publishers actually charge universities

As mentioned last month, data is beginning to emerge on what particular universities and funders are paying in article processing charges (APCs) for open access papers. The Wellcome Trust was first with a dataset from 2012-13 published on @figshare. This month Cambridge University, Queen’s University Belfast and the Austrian Science Fund added data, and the @WellcomeTrust added more data from two previous years. I haven’t yet seen a post drawing any conclusions from this data, however; no doubt someone will point one out if there is one.

More importantly, data has emerged on what libraries pay publishers for online subscriptions to academic journals. This data is hard to get hold of because publishers frequently impose confidentiality clauses on libraries, but enormous amounts of work by Cambridge University mathematician Tim Gowers, including Freedom of Information requests, have lifted the lid on the whole system, at least as regards Elsevier (heard through @McDawg and others). He details this in a very long blog post, which was well summarised by Michelle Brook (@MLBrook) of the Open Knowledge Foundation. Gowers has attempted to get data on subscriptions paid to Elsevier by UK universities in the Russell Group (the universities that do the most research and are generally seen as the most prestigious in the UK). He also summarises similar attempts made by others.

To me the most revealing thing was the insight into how prices for bundles of online journals are set: they seem to bear no relation to the size of the university or any other rational factor, but instead depend mostly on what each university used to pay for print-only subscriptions before electronic journals existed. This historical quirk explains why some universities, such as Exeter, pay considerably less than others, such as University College London. Basing the pricing of the current online publishing system on such historical data seems to me to make little sense. The post contains many other fascinating insights into how this arcane system works, and I urge you to read it.

Gowers’ post also included a survey of Cambridge mathematicians, which suggested that most “would not suffer too much inconvenience if they had to do without Elsevier’s products and services, and a large majority were willing to risk doing without them if that would strengthen the bargaining position of those who negotiate with Elsevier.”

Several people have commented on or added value to Gowers’ post.

Post-publication peer review – venues and incentives

There was lots of discussion of peer review this month, for example on the @BioMedCentral blog (here, here and here), on the blog of Ian Mulvany of eLife (@IanMulvany), on the scatterplot blog (heard via @EASEEditors), on ScienceExchange (heard via @Impactstory and @Publons) and on the blog of University of Pennsylvania cell biologist Arjun Raj (@arjunrajlab). But two pieces in particular are worth highlighting.

Firstly, an article in a journal: a welcome change from blog posts, and a good example of the added value you get from an edited article in terms of in-depth analysis and well-structured text. It is a ‘NeuroView’ article in the Cell Press journal Neuron entitled ‘The Vacuum Shouts Back: Postpublication Peer Review on Social Media’. Here, Zen Faulkes (@DoctorZen) first makes it clear that he supports pre-publication peer review but advocates post-publication review as complementary to it, and faster. Post-publication review also means that authors will need to stay engaged with each paper after it is published:

For a long time, scientific publishing was like shouting into a vacuum. Authors tended to view surviving a journal’s peer review as the “finish line.” Once a paper was accepted, it was time to move on to the next project and next manuscript. After publication, discussions about a paper were often ephemeral: opinions expressed over lunch at conferences or around journal club tables wouldn’t go any further than the four walls of the room. More lasting evidence of a research community’s opinions about a paper, like citations, could take years to accrue.

Faulkes discusses the main objections to post-publication peer review (in fact the article is a good source of links to arguments against it), but he shows most of these to be “diversionary tactics”. On the fact that online discussion can be somewhat more robust than that in the pages of journals, he acknowledges that debates can get nasty and that some types of comment are simply unacceptable. However, he makes the interesting point that

…concerns about “tone” are often from established, tenured, white guys at big research universities working at established journals. One of the most profound things about social media is that it has lowered the barrier to creating and spreading conversations. This can give voice to people who were previously marginalized, for whatever reason. In the past, scientific commentary could be regulated by gatekeepers who were part of the scientific “in crowd.” Now, people who are not part of that crowd don’t need permission of gatekeepers to spread a scientific conversation to a wider audience. This means that the conversation cannot be as easily controlled by authority. Complaining about “tone” is one way to try to assert power and stifle voices by making “polite” equivalent to “innocuous.”

Faulkes then discusses the various places where post-publication peer review can take place: on publisher websites (largely unsuccessful), on centralised sites such as PubMed Commons and PubPeer (which may yet be successful but are still very new) and on social media. He doesn’t feel centralisation is necessary:

If the original work can be distributed in many different places, it should not be fatal to have the commentary about that work distributed in many places too.

He ends by drawing an analogy between post-publication peer review on social media and at conferences:

These informal conversations were never part of the scientific record, but there was never any question that they were an important part of the scientific endeavor. Social media is just the biggest research conference in the world.

The second article I’d like to highlight is a blog post on PubChase by its founder Lenny Teytelman (@lteytelman) entitled ‘We Can Fix Peer Review Now’. He makes the excellent point that:

We ask scientists to comment on static, final, published versions of papers, with virtually no potential to improve the articles. We ask scientists to waste their time and then take the lack of participation as evidence against post-publication peer review.

This explains why so much comment about papers online is negative: only negative comments have a chance of effecting any change, in the form of retractions or corrections. If a new version of a paper could be published that took the comments into account, this would act as an incentive for constructive comments. This is already happening on sites that publish before or without peer review, such as arXiv, bioRxiv and F1000Research. But more is needed:

There is no reason to wait for publishers to innovate. With the exception of a few, innovation is neither the forte nor the goal of the publishers. As scientists, with just a few minutes of our time, we can contribute to the online annotation and discussion of published research already. We can push for constructive post-publication discussions and peer review as authors and readers. The tools are at our disposal. Let’s use them. Let’s elevate the tone of the commentary and let’s comment on the vast majority of papers that are good and not headed for Retraction Watch. If we make an effort as scientists now, we will validate the post-publication peer review naturally and will lead to a healthier scientific publishing and discourse.

Interestingly, both these articles received online comments. For Lenny Teytelman’s piece they are below the post itself, as it is published on a blog. But Neuron doesn’t have comments below articles, so Zen Faulkes compiled the comments using Storify. It may sound ridiculous, but post-publication peer review of articles about post-publication peer review is valuable!

Who publishes in megajournals?

Finally, a couple of interesting articles about megajournals. Impactstory had a post entitled ‘The 3 dangers of publishing in “megajournals”–and how you can avoid them’. And a paper came out in @thePeerJ by David Solomon of Michigan State University entitled ‘A survey of authors publishing in four megajournals’. I have to declare an interest because I am one of the peer reviewers of this paper; my review is open and can be seen here. As @skonkiel of Impactstory put it, the study found that the top reasons why authors publish in megajournals are: 1) quality of the journal, 2) open access and 3) impact factor.


Comments

stephenjjohnson says:

Anna, your open peer review of the Solomon paper now has a DOI: http://dx.doi.org/10.7287/peerj.365v0.1/reviews/1

sharmanedit says:

Ah, so it does! Thanks Stephen.

