
About

Readersourcing is an independent, third-party, non-profit, academic/scientific endeavour, aimed at the quality rating of both scholarly literature and scholars.

The system is mainly conceived and built upon the concept of Crowdsourcing, a neologism coined by Jeff Howe in 2006 and currently described in Wikipedia as the process in which a task "traditionally performed by an employee or contractor" is outsourced "to an undefined, generally large group of people or community in the form of an open call". In particular, the term readersourcing refers to a specific instantiation of this concept. The final purpose of the system is to have readers of scholarly papers also participate in rating the content they read.

There are several rationales for this approach. One is to overcome the increasingly frequent shortage of available competent referees, caused mainly by the growing rate at which scientific articles are written and submitted for review, a growth that has not been matched by an equally significant increase in the number of referees. Modern technologies and globalization have provided several advantages to scientific writing, but they do not help the peer reviewing process to the same extent, ultimately upsetting the equilibrium between scientific writers and reviewers. Another good reason is to exploit, for free, the opinions that readers of a scholarly paper form after reading it: currently, these opinions are often wasted and forgotten, or spread in a very informal and ineffective way. The readersourcing model aims to take advantage of reader opinions, in order to overcome the referee shortage, and also to follow the mass collaboration, collective intelligence, and wisdom of the crowd principles enabled and enhanced by Web 2.0.
Of course, simply allowing readers to express their judgement on the papers they read would not be a reasonable approach on its own, as not all readers can be considered equally prepared and reliable; that is why the proposed model also assigns a rating to each reviewer, so that judgments from those who have proven to be good reviewers count more than judgments from those who should not be trusted. Such a rating is implicitly and dynamically generated by the system, through the continuous comparison of the judgments expressed by the readers on each paper with its current score; providing - or having provided - correct (wrong) judgments will therefore lead to higher (lower) reader ratings, hopefully generating a virtuous circle. Although it is not yet possible to interact with the system, you can find some more information about it both in the documents section of this website and through the links provided on the right of this page.
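To make the idea above more concrete, here is a minimal, purely illustrative sketch of such a reciprocal rating scheme. All names, scales, and update formulas below are assumptions chosen for illustration; they are not the actual Readersourcing algorithm, which is not specified on this page. The sketch weights a paper's score by reader reputation, then nudges each reputation up or down according to how closely that reader's judgment agreed with the resulting score.

```python
# Hypothetical sketch of a reciprocal paper/reader rating scheme.
# Judgments and reputations are assumed to lie in [0, 1].

def paper_score(judgments, reputations):
    """Weighted mean of reader judgments, weighted by reader reputation."""
    total_weight = sum(reputations[r] for r in judgments)
    if total_weight == 0:
        return 0.0
    return sum(j * reputations[r] for r, j in judgments.items()) / total_weight

def update_reputations(judgments, reputations, score, rate=0.1):
    """Raise reputations of readers whose judgments agreed with the paper's
    current score, and lower those of readers who disagreed."""
    for r, j in judgments.items():
        agreement = 1.0 - abs(j - score)            # 1.0 = perfect agreement
        reputations[r] += rate * (agreement - 0.5)  # reward above-average agreement
        reputations[r] = min(max(reputations[r], 0.0), 1.0)  # clamp to [0, 1]

# Example: three hypothetical readers judge one paper on a [0, 1] scale.
judgments = {"alice": 0.9, "bob": 0.8, "carol": 0.2}
reputations = {"alice": 0.5, "bob": 0.5, "carol": 0.5}

score = paper_score(judgments, reputations)
update_reputations(judgments, reputations, score)
```

After one round, the two readers whose judgments were close to the consensus score end up with higher reputations than the outlier, so their future judgments would carry more weight: the virtuous circle the text describes.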