A French mathematician recently published a proof classifying all the different types of tessellating pentagons, a problem mathematicians had been working on for a century. Meanwhile, a recently developed Zika virus vaccine set the record for the fastest vaccine to be approved for human trials, at less than a year.
Which is the more typical timeline for learning? Do scientists typically learn things over the centuries, or are discoveries made at an ever-shorter record-setting pace?
Obviously, it depends on the problem being looked at, as well as how many people are working on it. The Zika vaccine, for instance, was backed by a massive amount of resources because of the scale of the disease. But to give you an idea of the general process researchers go through when trying to learn something, I’ll tell you the story of a recent research project Duke TIP completed, which looked at former Duke TIP participants.
Idea generation
At the end of November 2012, David Lubinski, a professor at Vanderbilt and the codirector of the Study of Mathematically Precocious Youth, had just given a talk about his research to all of TIP’s staff. Afterward, several of us began talking about potential collaboration.
Lubinski and some colleagues had just completed a project that concluded that above-level tests (like the PSAT 8/9 you have the opportunity to take) are good predictors of subsequent outcomes. My most recent paper, at the time, reviewed how often research findings in psychology are replicated after they are initially published. It turns out, they aren’t replicated all that often. Merging those two projects, we had our idea: could we replicate the results that Lubinski and his colleagues found with a sample of former Duke TIP participants?
Preparation
We spent the next ten weeks (while working on other projects and taking holiday vacations) formalizing our idea, making sure we could get the data, making sure we had permission to use the data, and developing our plan for who would do what, how, and when for the project. Given that half the team was based at Vanderbilt and half was based at Duke, most of this was done over email, although the occasional phone and video conference also took place. (Talking face to face adds a sense of team that isn’t quite there when only emailing collaborators.)
Data collection and analysis
Once all the preparations were in place, we began data collection.
Unlike many research projects, our data collection consisted of Internet searches. Lubinski and his colleagues’ original research looked at the educational, occupational, and creative accomplishments that the highest scorers from their talent search achieved before they turned forty, so we had to do the same thing.
Could we find former TIPsters? Could we make sure the people we found were the actual people we were looking for? That’s a bit more complicated for Johns and Jennifers, especially if Jennifer got married and changed her last name.
In the end, we discovered that we could find a great many individuals, and that our results were quite similar to what Lubinski and his colleagues found previously.
Writing
After completing the analysis, we had to write up our findings.
I sent the first draft of an actual manuscript to coauthors on January 31, 2014—just before leaving for my honeymoon. As going on a honeymoon suggests, other life events happen throughout the research process.
For some researchers, the writing phase is a fast, but dreaded, necessity. For others, it’s a time spent carefully considering the best way to present their work to the world. I definitely fall in that latter group, as do many of my colleagues from this project. We spent over a year editing before submission. Sometimes it felt like mere tinkering, but each round contributed another layer of polish.
Editing also highlights the value of writing with a team. I would share a draft I thought was good, only to find that my coauthors could reveal avenues for improvement I hadn’t seen. In our team writing, the result was greater than the sum of its parts.
Submission
On November 19, 2015, we submitted our manuscript to the journal Psychological Science—the same journal that published the original article that we were replicating.
The editor of the journal then sent it out for “blind review” to three reviewers. It’s called blind review because the reviewers don’t know who wrote the article, and we (the authors) don’t know who the reviewers are. The goal behind blind review is to make sure authors are not rewarded or punished by their reputations, and to allow reviewers to be completely honest in their reviews, without worrying about upsetting the authors. The editor picks reviewers who know the relevant previous scientific findings so that they can review with a knowledgeable, but skeptical, eye.
In our case, the reviewers were quite positive and had many helpful suggestions for clarification, so the editor gave our manuscript a “revise and resubmit” decision, which means we were asked to make further edits, write a letter explaining our changes, and then resubmit the paper. The other decision possibilities are rejection or acceptance, but it is quite rare for a submission to be accepted without any revision.
After making our changes and writing our letter, we resubmitted. On March 25, 2016, we were notified that our paper had been accepted for publication.
Acceptance isn’t the end
You may think that once it’s been accepted, the work is done and the celebration begins. But there’s still more to do after acceptance.
First, we had to do copy edits and page proofs (reviewing the formatted version of what the publication will look like). Then we needed to work with our communications departments to decide how we would talk about our project to the world in press releases, blog posts, and other media.
After all that, starting with the initial idea we had in late November 2012, the article was finally published online on May 25, 2016. A little later, it was published in print, in the July 2016 issue of the journal. After about forty-three months, we had created new knowledge!
…Or did we? What if we made a mistake somewhere? What if our sample wasn’t representative?
That’s why it’s important to replicate findings. Future studies have to keep looking at what we think we know to make sure it’s true. And that means this process starts all over again.
As this example and the tessellating pentagons mentioned earlier show, it can take scientists a long time to really learn something new. It takes a lot of effort and patience. But if the topic is important enough, it can be well worth the wait.
Matt Makel is TIP’s Director of Research.