Join us for the Open Science Fair 2017

Share your ideas for opening up Science!

Open Science Fair 2017 (OSFair2017) will hold its first international conference in Athens, Greece, on 6-8 September 2017.

OSFair2017 will critically showcase the elements required for the transition to Open Science: e-infrastructures and services, policies as guidance for good practices, research flows, new types of activities (disseminate, mine, review, assess, etc.), and the roles of the respective actors and their networks.


Join us on 6-8 September for three days of expert talks, roundtable discussions, workshops, hands-on training and a poster session. Follow the links to get all the details about the event and find out how to be part of it:

[Image: the Stavros Niarchos Foundation Cultural Center]

The conference will be held in the recently built Stavros Niarchos Foundation Cultural Center, where the National Library of Greece is housed. Registration is open!

Announcing the Open Science Fair 2017

Save the date! Make your voice heard for opening up Science.

Open Science Fair 2017 (OSFair2017) will hold its first international conference in Athens, Greece, on 6-8 September 2017.

In the spirit of Open Science, OSFair2017 will bring together all the different stakeholders: policy makers, funders, publishers and content providers, research infrastructures and communities, researchers, libraries, institutions and innovators, aiming to bring about change towards a more open and sustainable research landscape.

OSFair2017 is a three-day conference that will include keynotes by experts from the area of Open Science, roundtable discussions, workshops, and hands-on training sessions. It will actively engage with the elements required for the transition to Open Science: e-infrastructures and services, policies as guidance for good practices, research flows, and new types of activities (disseminate, mine, review, assess, etc.).


OSFair2017 is organized jointly by four EU-funded projects in the area of Open Science. The OpenAIRE, OpenUP, FOSTER and OpenMinTeD projects share the vision of a science that is free of accessibility and information barriers and is an enabler of social innovation. The event is partly supported by the EOSCpilot project, which aims to secure and support an open research environment for Europe.

What do reviewers think about the established peer review system?

OpenUP ran a survey to capture current perceptions and practices in peer review, dissemination of research results and impact measurement among European researchers from different disciplines and all career stages. Survey invitations were sent to a random sample of researchers who deposit their publications on arXiv, PubMed and RePEc. This ensured that the researchers who participated in the survey had produced at least one publication as a main author, and therefore had at least some direct experience in the areas covered by the survey. The survey consisted of four sections. The first section asked a series of questions on the respondents' scientific discipline, career stage, gender and other characteristics. The following three sections asked questions on peer review practices, dissemination of research results, and impact measurement and the use of altmetrics. We received almost 1000 completed responses.
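The eligibility filter and random draw described above can be sketched roughly as follows (a minimal illustration with hypothetical records and sizes; the actual sampling frame was built from the deposit data of the three repositories):

```python
import random

# Hypothetical author records; the real sampling frame was drawn from
# arXiv, PubMed and RePEc deposit data.
authors = [
    {"email": "a@example.org", "main_author_pubs": 3},
    {"email": "b@example.org", "main_author_pubs": 0},
    {"email": "c@example.org", "main_author_pubs": 1},
]

# Keep only researchers with at least one publication as a main author,
# then draw a simple random sample to receive the survey invitation.
eligible = [a for a in authors if a["main_author_pubs"] >= 1]
invited = random.sample(eligible, k=min(2, len(eligible)))
print([a["email"] for a in invited])
```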

In this post, we present researchers' experiences as reviewers under the established peer review system. The results showed that overall satisfaction with the established reviewing process is rather low. Only a fraction of reviewers felt that their review work is explicitly acknowledged in their organisation (around 20% 'strongly agreed' or 'rather agreed') or that it benefits their career development (around 30%). In addition, around half of the researchers agreed that their incentive to work as reviewers would increase if their work were rewarded or if the process became more collaborative with authors, editors and/or publishers. Revealing the reviewer's identity was viewed as an incentive to work as a reviewer by a quarter of respondents.

Do you want to learn more about what researchers think of peer review methods? Stay tuned for more updates!

Introduction of OpenUP’s Pilot Studies

In the context of OpenUP's Work Package 6, seven pilot studies will be kicked off this year. The aim of the pilot studies is to test and evaluate selected innovative approaches to peer review, dissemination and impact measurement, applied to specific research areas and communities. To achieve this, the OpenUP team will involve seven research communities from the arts & humanities, social sciences, energy, and life sciences. The pilot studies will be implemented in close collaboration with these research communities. Together with the communities, the OpenUP team will apply and test the technical and procedural solutions identified. The goal is to evaluate the tested methods in their specific settings and research communities, and to identify working practices, emerging standards, and remaining gaps.

The pilot studies have been designed to cover as many areas and application contexts as possible within the limited scope of the project. They follow a similar structure and common criteria but operate largely independently of each other. The seven pilots are assigned to three Use Cases, which correspond to the OpenUP pillars.

[Figure: the seven OpenUP pilot studies grouped into three Use Cases]

The results of the pilots will be evaluated individually. The insights gained from the evaluation will feed back into the framework studies produced by Work Packages 3, 4 and 5, and provide useful input for the policy recommendations produced in Work Package 7. Beyond that, the OpenUP use cases and pilots aim to produce success stories and good practices that can help other communities apply new Open Science methods. First interim results will be provided in November 2017. The final evaluation report will be released in July 2018.

More information about the individual OpenUP pilot studies will be provided soon. Stay tuned!

Moving research beyond academic venues

OpenUP explores innovative ways of research dissemination

Open Access and Open Scholarship have revolutionised the way scholarly artefacts are evaluated and published, while the introduction of new technologies and media in scientific workflows has changed the “how and to whom” of science communication, and how stakeholders interact with the scientific community.

OpenUP aims to explore innovative ways of disseminating research outputs beyond traditional academic dissemination. Such ways include the dissemination of research results in traditional media (e.g. newspapers, TV), in social media (e.g. via blogging), or in museums, using text, images, videos, games, comics and other formats.

Why investigate? Why use innovative ways of dissemination?

  1. These dissemination media and methods can help researchers increase the impact of their work,
  2. they can help spread scientific results to a wider audience (e.g. industry, policy makers, specialists outside the research community, journalists, the general public), and
  3. using interactive web tools for dissemination makes it transparent how research outputs are received in non-research communities and by the general public, which can provide further evidence of impact relevant to researchers, funders and other stakeholders.

Stay tuned: OpenUP is working on a comprehensive framework to support researchers in disseminating their work!

Open Data – Open Science – OpenUP

Saturday 4 March is the 2017 Open Data Day! The use of open data is an integral element of Open Science, and the OpenUP project invites you to explore key aspects and challenges of the currently transforming science landscape. Did you know that the idea of “open science” first came onto the scene in the late 16th and early 17th century? At present, however, we are experiencing a far more radical reorganisation of science and the research lifecycle, as societies produce amounts of knowledge unknown in previous periods of human history. We need new ways to evaluate and publish scholarly artefacts, and these have been provided by Open Access and Open Scholarship. In parallel, the introduction of new technologies and media in scientific workflows has changed the “how and to whom” of science communication, and how stakeholders interact with the scientific community.

The EU-funded project OpenUP addresses key aspects and challenges of the currently transforming science landscape and aspires to come up with a cohesive framework for the review-disseminate-assess phases of the research life cycle that is fit to support and promote Open Science. Its main objectives are to:

  1. identify and determine ground-breaking mechanisms, processes and tools for peer-review for all types of research results (publications, data, software),
  2. explore, identify and classify innovative dissemination mechanisms with an outreach aim towards businesses and industry, education, and society as a whole, and
  3. analyse a set of novel indicators that assess the impact of research results and correlate them with channels of dissemination.

OpenUP does so by following a user-centred, evidence-based approach: engaging all stakeholders (researchers, publishers, funders, institutions, industry, the public) in an open dialogue through a series of workshops, conferences and training events, and validating all interim results via a set of seven pilots involving communities from four research disciplines: life sciences, social sciences, arts & humanities, and energy. The project will finally produce a set of concrete, practical, validated policy recommendations and guidelines for national and European stakeholders, including EU institutions, as a valuable tool in advancing a more open and gender-sensitive science system. The OpenUP partners bring expertise and capacity for evaluating and promoting new approaches in support of Open Science: decade-long experience in establishing Open Access e-infrastructures, and strong skills in innovative dissemination, impact indicators, and policy design and implementation.

The establishment of the Altmetrics term


‘Altmetrics’ has become an increasingly relevant concept, both in the context of scientific and scholarly communication and in the realm of research evaluation. ‘Altmetrics’, short for alternative metrics, are non-traditional metrics proposed either as a replacement for or, in some cases, a complement to traditional citation-based impact metrics of research, such as the impact factor and the h-index.
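As a point of reference for those citation-based metrics: the h-index is the largest number h such that a researcher has h papers with at least h citations each. A minimal sketch, using hypothetical citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts, for illustration only.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (at least 3 papers with >= 3 citations each)
```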

But how and when did the term emerge? Altmetrics had predecessors in the early days of the internet: in the late 1990s and early 2000s, there were already attempts to introduce new measures and to utilise the web as a source for analysing and monitoring scholarly activity. But it was not until 2010 that the term ‘altmetrics’ was introduced by the information scientist Jason Priem, who preferred it over other terms because it implies ‘a diversity of measures (of scholarly communication)’. Priem was particularly interested in how the internet could transform not only measures but scholarly communication as a whole.

Shortly after coining the term in 2010, Priem and his colleagues published a manifesto that set out their understanding of web-based scholarly communication and has had a lasting influence on the altmetrics community: “That dog-eared (but uncited) article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero – where we can see and count it. That hallway conversation about a recent finding has moved to blogs and social networks – now, we can listen in (…). This diverse group of activities forms a composite trace of impact far richer than any available before. We call the elements of this trace Altmetrics.”

Since 2010, the literature on altmetrics has grown enormously and the term has been adopted by many different scientific and non-scientific communities. Starting in open access journals such as PLoS ONE and PLoS Biology, the topic was soon taken up by the informetrics and scientometrics community. Indeed, articles published after 2010 account for approximately 82% of all altmetrics-related articles published since 1990! Given its heterogeneity, the altmetrics narrative has also flourished across different policy and scientific communities, among which bibliometrics, information science, science communication, and library science are the most important. In recent years, a number of altmetrics providers have appeared (such as Impactstory, Altmetric.com, and Plum Analytics, all founded in 2011), which have influenced this movement significantly.

Are you interested in further reading? You can find our full report here!


Picture: The original logotype from the Altmetrics Manifesto

Peer Review Week 2016

Here is our final brief for the Peer Review Week 2016!

Opportunities of a more Open and Transparent Peer Review

During the last decade the peer review system, a key research quality assurance mechanism, has come under close scrutiny. Traditional peer review methods fail to meet the requirements and needs of today's rapidly evolving research ecosystem, which is characterised by increasingly digital and interactive scholarly communication, a growing number of research actors, highly specialised communities dealing with complex problems, and rapidly growing scientific output. Current peer review workflows are not scalable and often operate like a black box. A number of recent fraud and bias cases in scientific publishing make it particularly evident that the current system is in need of improvement (to list just a few: Faked peer reviews prompt 64 retractions, It’s a Man’s World — for One Peer Reviewer, at Least, PLOS ONE ousts reviewer, editor after sexist peer-review storm, Publishing: The peer-review scam).

Alternative peer review methods such as Open Peer Review try to increase transparency and encourage honest, open responses from reviewers. Open Peer Review has been adopted by journals such as the British Medical Journal, Atmospheric Chemistry and Physics, and the majority of BioMed Central publications. Another example is the publishing platform F1000Research, which has an open post-publication peer review system in place.

An international study from 2012 suggests that many researchers who have published in and reviewed for high-quality, international, English-language journals are overall satisfied with the peer review system used by scholarly journals (69%). However, the study also makes clear that “most researchers believe there could be improvements to the process”. Almost half of respondents rated open approaches as effective (20% for Open Peer Review, 25% for Published Open Peer Review). The reasons given by respondents are that Open Peer Review ensures that reviewers are “honest, more thoughtful, and less likely to be vitriolic in their response”. In addition, “publishing names and reports helps the reader decide on the quality of the work and encourages dialogue” (Mulligan, Hall and Raphael 2013).

In a recent small survey by the YEAR Network, about 200 early-career researchers expressed their opinions on how they would prioritise the Open Science policy actions suggested by the European Commission. Among other things, the surveyed young researchers see a high priority in experimenting with more open and transparent peer review, and in promoting a discussion on the evaluation criteria of research. [A preliminary evaluation of the results will be published shortly on the YEAR website.]

In the EU coordination and support action OpenUP, we address this by focusing, among other topics, on innovative peer review practices. Our goal is to test previously defined Open Peer Review workflows in dedicated pilot studies. An example is the pilot study on Open Peer Review for conferences. The requirements and workflows will be elaborated and defined in close collaboration with researchers, publishers, institutions and conference organisers. In a second step, the workflows will be applied and tested in a dedicated pilot study with the aim of assessing their applicability and acceptance within the scientific community. See more about our activities: http://openup-h2020.eu/about-the-project/

References:

Mulligan, A., Hall, L. and Raphael, E. (2013), Peer review in a changing world: An international study measuring the attitudes of researchers. J Am Soc Inf Sci Tec, 64: 132–161. doi:10.1002/asi.22798

Peer Review Week 2016

Our next brief for the Peer Review Week 2016 is on Recognizing reproducibility and contains two think pieces on adopting a new criterion in peer review.

Reproducibility of science and its role in scholarly peer review

Recent literature on the subject increasingly suggests that “reproducibility” should be one of the (key?) components of peer review. In addition to providing an assessment (based on the reviewer's judgement) of the scientific relevance and validity of the “published” research results, a reviewer should also attempt to “reproduce” the results presented in the “publication”, or at least ensure that the original researchers made available all the elements (e.g. protocol, data, software) needed to reproduce the results.

Results that cannot be reproduced by independent analysis can hardly be considered relevant advancements in a field of study. Nonetheless, many studies today show that a considerable number of research findings published in well-known peer-reviewed scientific journals could not be reproduced. As a reference, consider one of the latest such studies [1].

But what is meant by “reproducibility”? And how does its meaning change across research domains? For the first question, a starting point may be provided by the diagram below, which defines four levels of “reproducibility” (thanks to Carole Goble for her chart [2]).

[Figure: the four levels of reproducibility: repeat, replicate, reproduce, reuse]

We move from repeat (clearly not part of peer review and in some cases, as reported in the literature, impossible even for the same team that conducted the original research) to replicate (successful in a small percentage of attempted cases, according to the literature) to reproduce (successful in a slightly higher percentage of attempted cases) to reuse (which pushes the research towards producing new scientific knowledge, the ultimate goal of scientific research). The amount (and format) of information to be made available to other scientists in order for them to perform one of the above actions clearly depends on the action to be attempted and on the specific field of study.

The second issue, rarely mentioned in the literature, is that the possibility of performing one of the above actions depends heavily on the field of research. Without entering into the long-standing debate about hard science and soft science (see for example the entry “Hard and soft science” in Wikipedia: https://en.wikipedia.org/wiki/Hard_and_soft_science), it is clear that we may think of two extremes. At one extreme we have a research flow (data and process) which is objectively defined, allowing perfect replicability and enabling reproducibility; at the other extreme we have a research flow based on data which are the personal observations of the researcher (e.g. of some ancient artefact), where the process is the reasoning and the conclusions that the researcher has drawn from those data. In the latter case we might have at most reuse, assuming that subsequent researchers agree with the original conclusions.

To conclude, as a prerequisite to including reproducibility in the peer review process, it would be helpful to have some kind of guidance (a classification, a taxonomy?) so that the reviewer can understand which “level of reproducibility” is possible and whether the published information (results, data, process, etc.) is enough to attempt a replication.

Comments and suggestions are most welcome.

[1] http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970 (visited 20 September 2016)

[2] http://www.slideshare.net/carolegoble/ismb2013-keynotecleangoble (visited 20 September 2016)

Reproducibility as a new quality criterion for peer review

In this piece, Peter Kraker argues that reproducibility should be adopted as a new criterion in peer review. A reproducible paper, he argues, is of higher quality, as one does not have to take the researchers' word for how they calculated their results. Peter suggests that in this way, “reproducible” would become the overall quality standard of choice – just as “peer-reviewed” is the preferred standard right now. As a welcome side effect, researchers would make more datasets and source code openly available. Read the whole post here: http://science.okfn.org/2013/10/18/its-not-only-peer-reviewed-its-reproducible/

Peer Review Week 2016

Our next brief for the Peer Review Week 2016 is on Redefining publishing.

Redefining publishing

Due to the advance of digital technologies and the increasing impact of the open access movement, scholarly publishing is being redefined. As new tools, platforms and services diversify the academic publishing scene, the nature and the stages of the publishing process are continuously revisited and reevaluated in scholarly discourse.

Problems seem to begin with the word “publishing”. With the now-common practice of using preprint servers or repositories to disseminate research results within given scientific communities, the term “publish” has moved away from the traditional concept of publishing research articles in print journals, and increasingly implies the act of sharing results publicly.

“Some scientists are going a step further, and using platforms such as GitHub, Zenodo and figshare to publish each hypothesis, data collection or figure as they go along. Each file can be given a DOI, so that it is citable and trackable. Himmelstein, who already publishes his papers as preprints, has been using the Thinklab platform to progressively write up and publish the results of a new project since January 2015. “I push ‘publish’ and it gets a DOI with no delay,” he says. “Am I really gaining that much by publishing [in a conventional journal]? Or is it better to do what is fastest and most efficient to get your research out there?” (Powell)
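To give a sense of how lightweight this workflow can be, here is a minimal sketch of publishing a single file and minting a DOI for it through Zenodo's REST API (it assumes a valid personal access token; the file name and metadata are purely illustrative):

```python
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "YOUR_ACCESS_TOKEN"  # personal access token, illustrative placeholder
params = {"access_token": TOKEN}

# 1. Create an empty deposition.
dep = requests.post(f"{ZENODO}/deposit/depositions", params=params, json={}).json()

# 2. Upload a file to the deposition's file bucket.
with open("figure1.png", "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/figure1.png", data=fp, params=params)

# 3. Attach minimal metadata (all values are illustrative).
metadata = {"metadata": {
    "title": "Figure 1: preliminary results",
    "upload_type": "image",
    "image_type": "figure",
    "description": "A single figure, published as it was produced.",
    "creators": [{"name": "Doe, Jane"}],
}}
requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}", params=params, json=metadata)

# 4. Publish: this step mints the DOI.
published = requests.post(
    f"{ZENODO}/deposit/depositions/{dep['id']}/actions/publish", params=params
).json()
print(published["doi"])  # a citable, trackable identifier for this one file
```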

Given the meaning traditionally attached to the notion within the academic publishing system, a certain cautiousness surrounds its use. As Christophe Dessimoz explains: “I am saying ‘made available’ instead of ‘published’ because although preprints can be read by anybody, the general view is that the canonical publication event lies with the journal, post peer-review. Because of this, many traditional journals tolerate this practice: peer-review technically remains ‘pre-publication’ and the journals get to keep their gatekeeping function. The key benefit of preprints is that they accelerate scientific communication. Indeed, peer-review can be long and frustrating for authors. Reviewers sometimes misjudge the importance of papers or request unreasonable amounts of additional work. The ability to bypass peer-review can thus be liberating for authors. Thus, if we instead recognized preprints as the canonical publication event, so goes the idea, peer-review would be relegated to a secondary role and journals would lose their gatekeeping function. This is the ‘post-publication’ peer-review model.” (Dessimoz)

A similar issue is discussed by Tony Ross-Hellauer in his recent OpenAIRE blog post on post-publication peer review. He suggests finding better words for the publishing process in light of the expanding array of scholarly dissemination and review tools. See more in: Tony Ross-Hellauer. 2016. Disambiguating post-publication peer review. OpenAIRE blog.

References:

Dessimoz, C. 2016. Thoughts on pre- vs. post-publication peer-review. Dessimoz Lab blog posts. Accessed 16 Sept 2016: http://lab.dessimoz.org/blog/2016/03/31/pre-vs-postpublication-review

Powell, K. 2016. Does it take too long to publish research? Nature 530: 7589. http://www.nature.com/news/does-it-take-too-long-to-publish-research-1.19320

Ross-Hellauer, T. 2016. Disambiguating post-publication peer review. OpenAIRE blog. Accessed 14 Sept 2016: https://blogs.openaire.eu/?p=1205