Open Science
The Psychology Department is committed to promoting open science practices, which help to limit false discoveries and improve the reproducibility and generalizability of research findings.
For a brief introduction to open science, see the links documented in this blog post.
There are three main components of open science that you should consider implementing:
- Pre-registration
- Reproducible data analysis
- Transparency
Pre-registration
Public pre-registration of a hypothesis and data analysis plan prior to conducting a study can help to prevent false discovery, mainly by encouraging mindfulness of researcher degrees of freedom and demarcating a priori hypothesis testing from HARKing (hypothesizing after the results are known). It is the act of specifying a plan that limits flexibility (regarding analysis decisions and the reframing of hypotheses) that makes pre-registration valuable; making the plan public simply provides an incentive to invest the time to do it, and to do it well.
Several pre-registration services are available; a popular one is hosted by the Center for Open Science.
If you are new to pre-registration and not sure about what to include, you may find this template helpful:
AsPredicted pre-registration template
Reproducible data analysis
A reproducible data analysis is one in which every step of one’s data cleaning and analysis pipeline is documented, preferably with computer code that any individual can execute and, starting from the rawest form of data that was collected, reproduce all of the statistics that are reported in a journal article.
This has been made much easier and more intuitive by R-Markdown documents in the RStudio development environment (originally for analyses conducted in R, though recent versions also accommodate Python and other languages) and by Jupyter notebooks, which are popular with Python users but support multiple languages, including R.
The basic idea behind both is that they allow one to weave together narrative text, code, and statistical/graphical output, so that you can explain the intention behind a block of code (what it is supposed to do and, if it is not obvious, how it works), show the code and the output it generates (e.g., a table of statistical tests or a graph), and then explain what conclusions should be drawn from that output. These reports can then be "knitted" into formats, such as an HTML web page, a PDF, or a Word doc, that are easy to share with others. Moreover, with Overleaf, a collaborative web platform for writing and submitting journal articles, R-Markdown documents can be used to automatically feed figures, tables, and statistical text into manuscripts.
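To make this concrete, here is a minimal sketch of what an R-Markdown file might contain (the data file and variable names below are hypothetical placeholders, not a prescribed template); knitting it produces a report that interleaves the explanatory text, the code, and the statistical output:

````
---
title: "Priming analysis (illustrative sketch)"
output: html_document
---

We predicted shorter response times in the primed condition.

```{r load-data, message=FALSE}
# Read the raw trial-level data (hypothetical file).
library(readr)
trials <- read_csv("data/raw_trials.csv")
```

```{r primary-test}
# Welch two-sample t-test comparing response time across the two conditions;
# this output is what would be reported in the manuscript.
t.test(rt ~ condition, data = trials)
```
````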
Transparency
You are encouraged to publicly share as much of your scientific process as you can. The goal should be to share all of the raw data used for a publication, so long as they do not contain information that could be used to identify a human participant; otherwise, share deidentified and/or processed versions of the data, along with the statistical code for reproducing the analyses (such as an R-Markdown notebook, as mentioned above).
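As an illustrative sketch (the file paths and column names are hypothetical), the R fragment below drops direct identifiers from a raw data file and writes the deidentified version that would be posted alongside the analysis code:

```r
# Hypothetical example: prepare a deidentified copy of the raw data for sharing.
library(readr)
library(dplyr)

raw <- read_csv("data/raw_survey.csv")             # raw file, kept private
deidentified <- raw %>%
  select(-name, -email, -date_of_birth) %>%        # drop direct identifiers
  mutate(participant = row_number()) %>%           # replace IDs with arbitrary codes
  relocate(participant)

write_csv(deidentified, "data/shared_survey.csv")  # version shared publicly
```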
If you have an NIMH grant, you may be required to upload data to their archive (https://nda.nih.gov). Otherwise, we recommend a Texas-based data-sharing service called the Texas Data Repository (https://dataverse.tdl.org). See their website for instructions on how to create a "dataverse" (a collection of datasets) for your lab or research group. The process is fairly intuitive, and one advantage of sharing your research materials on this platform is that friendly UT librarians are available and eager to help if you have questions.
Another appealing feature of the Texas Data Repository is that it supports differential tiers of sharing, from completely open (anyone with an internet connection can download your materials) to highly restricted (you can personally approve or deny each request for a download) or something in between (e.g., the person must provide their contact information and sign a license agreement before they are allowed to download).
Here are some examples of dataverses that have been set up by members of our department:
Monfils Fear Memory Lab Dataverse
Mood Disorders Laboratory Dataverse
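If you want to pull shared files straight into an analysis, the dataverse R package can retrieve them by file name and dataset DOI. The sketch below assumes that package and its get_dataframe_by_name() function; the DOI and file name are hypothetical placeholders, not a real deposit:

```r
# Hypothetical example of reading a publicly shared file from the Texas Data Repository.
library(dataverse)
library(readr)

Sys.setenv("DATAVERSE_SERVER" = "dataverse.tdl.org")

shared <- get_dataframe_by_name(
  filename = "shared_survey.csv",        # hypothetical file name
  dataset  = "doi:10.18738/T8/EXAMPLE",  # hypothetical dataset DOI
  .f       = read_csv                    # function used to parse the downloaded file
)
```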
For a useful transparency checklist, see this publication and checklist creation tool.