This is a great idea that is long overdue.

It is certainly consistent with the post-publication process that is inherent in science, and it harks back to science's oldest roots to boot.

Besides addressing the publishers leeching off the current publication system, it strikes a fine balance: any idea or result should be reviewed, but crackpot ideas shouldn't be publicly sanctioned.

Weighting (i.e. reviewing) the reviewers would certainly help prevent both that and cronyism. So it could, and should, be tested. Posthaste.
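To make that concrete, here is a minimal sketch of what "weighting the reviewers" could look like, assuming each reviewer carries a weight derived from how their own past reviews were rated; the class, field names and weighting rule are my own illustrative assumptions, not anything from the article.

    # Minimal sketch of "weighting the reviewers": each rating counts in proportion to
    # how well that reviewer's own past reviews were rated. Field names and the
    # weighting rule are hypothetical, not taken from the article.
    from dataclasses import dataclass

    @dataclass
    class Rating:
        reviewer_id: str
        score: float            # rating given to the paper, e.g. on a 1-10 scale
        reviewer_weight: float  # e.g. mean rating of this reviewer's past reviews, scaled to 0-1

    def weighted_paper_score(ratings: list[Rating]) -> float:
        """Weighted mean: well-regarded reviewers count more, drive-by accounts count less."""
        total_weight = sum(r.reviewer_weight for r in ratings)
        if total_weight == 0:
            return 0.0
        return sum(r.score * r.reviewer_weight for r in ratings) / total_weight

    # Two credible reviewers liking a paper outweigh three near-zero-weight accounts panning it:
    ratings = [
        Rating("expert_a", 8.0, 0.9),
        Rating("expert_b", 7.5, 0.8),
        Rating("drive_by_1", 1.0, 0.05),
        Rating("drive_by_2", 1.0, 0.05),
        Rating("drive_by_3", 1.0, 0.05),
    ]
    print(round(weighted_paper_score(ratings), 2))  # ~7.22, versus a plain average of 3.7

Where the weights actually come from (ratings of the reviews themselves, track record, editorial vetting) is exactly the part that would need testing.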

Public review has many synergies - for example, it can accelerate the development of ideas through discussion. Like everything, it has its caveats too - for example, the loudest people also tend to be the most biased. Another problem is that public review may be unreliable and ineffective at preventing fraud. For the moment it still looks like a generally positive trend to me.

It is certainly consistent with the post-publication process
Because I instinctively like balanced approaches, it seems to me it would be optimal if the post-publication process complemented pre-publication validation rather than replacing it completely. Scientists should simply maintain both approaches. The problem is that the two are not symmetric: only published results can become the subject of public review, whereas anonymous review can be applied both before and after publication.

It's actually just one of many things that badly need to be fixed with our system of science and science education. Perhaps the biggest problem is that many like to imagine that everything is working so well.

The papers with free downloads can be found here http://www.fronti..._for/137

Nikolaus Kriegeskorte argues that scientists, not publishers, are in the best position to develop a fair evaluation process for scientific papers.

I completely agree. Peer review, as it is now, isn't half bad. But handing it over to those who know how to do it best is a step toward making it better.

However, it is unclear how exactly such a system should be designed

Therein lies the rub. The advantage of the current system is that, thanks to the distributed nature of journals, the review process is itself distributed (i.e. there is no way to apply political pressure that encompasses ALL journals and keeps 'unwanted publications' out of circulation).
A new system will have to make sure it keeps this decentralized approach - otherwise there is the danger of institutional bias.

completely transparent, post-publication, perpetually ongoing

This part (although laudable) looks like a lot of effort. I wonder who will have the time to do all these perpetual reviews.

In the first step after a manuscript is published online, anyone can publicly post a review or rate the paper.
In the second step, independent web-portals to the literature combine all the evaluations to give a prioritized perspective on the literature. The scoring system could simply be an average of all of the ratings.

That part doesn't sound like a good idea. Papers are highly specific and require deep immersion in the subject (otherwise they're likely to be misunderstood - even by highly educated people in slightly different subjects). Even as it is right now, it often happens that one out of five reviewers doesn't understand a paper in his OWN specialty that he's supposed to review.

So review by people from different specialties (and averaging independent of proficiency on the subject) is arguably worse than useless. THAT will surely distort the review process in a very bad way.

and remains fossilized in pre-publication phase

I would argue for keeping it in the pre-publication phase. Review has to be anonymous. If you have a chance to find out who the paper you are reviewing is from, you're introducing all kinds of potential bias.
(E.g. if the author has published several crank papers, there is no way any new work of his will be reviewed objectively if the reviewers can just go look it up on open access.)

Maybe do it like this: submit a paper to open access with the wish to have it reviewed (or not). If you don't wish for a review, it will be flagged as such and put on open access.

If you do wish for a review it will FIRST go through the (anonymous) review process and then be revised by the author (if needed) and re-reviewed until no further criticism from the reviewers arises (much like it is done today in the review process),
OR the author decides to put it on open access as it is after some review round - with ALL reviews attached.
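For what it's worth, that flow is simple enough to sketch in a few lines; everything here (function names, states, the callables) is my own illustration of the workflow described above, not something from the article.

    # Rough sketch of the proposed submission flow. All names are hypothetical illustrations.
    def submit(paper, wants_review, run_review_round, author_accepts_as_is):
        """run_review_round(paper) returns a list of anonymous criticisms; empty list = no objections.
        author_accepts_as_is(paper, criticisms) lets the author stop and publish with reviews attached."""
        if not wants_review:
            return {"status": "published", "flag": "unreviewed", "reviews": []}
        reviews = []
        while True:
            criticisms = run_review_round(paper)          # anonymous review round
            reviews.append(criticisms)
            if not criticisms:                            # no further criticism: publish as reviewed
                return {"status": "published", "flag": "reviewed", "reviews": reviews}
            if author_accepts_as_is(paper, criticisms):   # author opts out of further rounds
                return {"status": "published", "flag": "reviewed-with-objections", "reviews": reviews}
            paper = paper + " (revised)"                  # stand-in for the author's revision

    # Example: a paper that clears review on the second round.
    rounds = iter([["unclear methods section"], []])
    result = submit("my-paper", True,
                    run_review_round=lambda p: next(rounds),
                    author_accepts_as_is=lambda p, c: False)
    print(result["flag"])   # -> reviewed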

In the first step after a manuscript is published online, anyone can publicly post a review or rate the paper. In the second step, independent web-portals to the literature combine all the evaluations to give a prioritized perspective on the literature. The scoring system could simply be an average of all of the ratings.


This is a horrible idea.

Think of all the crackpots, lunatics and political zealots going through controversial articles and generating walls of text so tall and wide and impenetrable that the reviews themselves would need professional reviewers to weed out all the bullshit.

I mean, simply look at this comment section. Every day it's full of spam from amateurs who believe they've overturned General Relativity with crystal vibrations and ether waves.

And think of the wailing and gnashing of teeth, and conspiracy theories that ensue when you ban the lunatics and shills from reviewing the articles...

"The peer review system is satisfactory during quiescent times, but not during a revolution in a discipline such as astrophysics, when the establishment seeks to preserve the status quo." Hannes Alfvén

Oops, you have neither the time nor the patience to build and maintain such blacklists? Just use Ed Witten's (or some other celebrity's) lists.

The whole point of anonymous peer review is that personal tastes don't come into it when judging whether a paper is good or not. Science should be free of the influence of having a 'famous' or 'not so famous' scientist write it. The material must be able to speak for itself (otherwise it isn't science but fiction).

So introducing personalized (or shared) white/blacklists is a very bad idea. That way we'd very soon get the problem that some people decry on this site: censorship of 'unwanted truths'. And this time it would be real.

I'm with AP here, the reviews really must remain anonymous, and further they should be anonymous on both sides, for the same reason. That is, when the poster of a paper looks at the reviews, his response should not be biased by the person doing the review, but rather focused on the critiques in the review.
Regardless, I suggest this will need to be an evolving system. While it has to start somewhere, it will have to evolve to continue to give relevant feedback to both the posters of papers and the reviewers of them.

In a very general sense we might look at the history of Wikipedia as a clue to how this might work. The concept behind Wikipedia, that anyone can edit the encyclopedia, was ridiculed in its early days as unworkable. I was one of those skeptics. It is not perfect, but because many professionals now use it every day, the content is usually balanced and current.

This also addresses the main gripe many of the "anti-science" folks have: the lack of cross-disciplinary peer review or "self-policing". Climate "science" is the worst example. We now have sociologists, economists, mathematicians, etc. publishing papers based on completely inappropriate use of datasets from other fields. Since only people in their own narrow disciplines review the research, a lot of crap gets through to the general public.

For a real-life working example of post-publication discussion, see the ICML this year (International Conference on Machine Learning for those from other fields). Here's an example discussion page for a particular paper:

http://icml.cc/di...291.html

I think it's great for detailed discussions, which often include reviews, as well as additional background and non-obvious links to other papers.

I'm not so keen on making the leap from this to ratings because of the issues raised before. Detailed, non-anonymous comments are one thing; uninformed ratings without a detailed analysis or review would just add noise and reinforce popularity trends (which already exist anyway, and don't need another echo chamber).