As the ranks of this academic elite have swelled, tenured university positions have become increasingly rare and grants harder to secure. The field is so competitive, and so lightly policed as a result, that the quality of research being conducted today is under scrutiny.
There are a number of telltale signs. Projects like the Reproducibility Initiative are being funded to verify landmark cancer studies. Just a year ago, the biotech company Amgen was able to reproduce only six of 53 important cancer publications. And a study published in October found that the results of a quarter of all studies go unpublished because of "ethical failure," exposing participants to harm without any benefit to them.
The public, which regards science with great respect and trust, would not like to hear that a significant amount of high-impact research is questionable – especially not when it's done on their dime. And they certainly would not like to participate in research that has a one-in-four chance of going nowhere. What's next? Fraud?
There's a fair share of that, as it turns out. Last year, another group of researchers found that most retracted biomedical and life science studies are pulled because of serious misconduct. Fraud has become the leading cause of retractions, they found, with instances of it having nearly tripled since the early-to-mid 2000s – a trend that has vastly outpaced the rise in the number of papers published. Error, by contrast, accounts for just one in five retractions.
These results were startling because they contradict findings published the previous year, which concluded that error accounted for three-quarters of retractions. It turned out that the retraction notices themselves had helped to "distort the scientific record" because they were too vague, the researchers said in the follow-up paper. (Researcher Grant Steen was involved in both studies.)
Articles are retracted when they are deemed unusable as a foundation for future research. The reasons often involve misconduct, which includes fraud, plagiarism, serious error and duplication (self-plagiarism in the form of publishing the same work in multiple journals). Science journals generally deal with accusations of fraud as soon as they are made aware of them. It's important to note that although the rise in fraud is an alarming trend, retractions account for a very small fraction of all published studies.
"I do know people who've come across instances of fraud," Jeff Kelly, a Scripps University researcher, acknowledged. "But I think for 99.9 percent of scientists absolute integrity is essential."
For Kelly, a researcher of 25 years, ethics are clear-cut and there's no wiggle room. In August, his team pulled two studies when it realized the data had been misinterpreted. It wasn't a major error, but in Kelly's specialized field of study – understanding the processes that lead to diseases like Alzheimer's – any error is a big drop in a narrow bucket.
"I'd feel really bad if a grad student spent their time trying to recreate an experiment that we know is wrong," he reasoned.
Most scientists are still doing commendable work, and the uptick in misconduct should not be sensationalized. At the same time, as scientist stereotypes go, self-correcting researchers like Kelly are as much a minority as those who commit outright fraud.
Ferric Fang, a co-author on the 2012 retraction study, says the rise in misconduct should not be treated as a trivial matter.
"A vast majority of scientists think it's wrong and don't engage, but it turns out to be quite common – more than half of scientists say they have seen it done," Fang told AOL Jobs.
Indeed, over half of participants in an international survey of biostatisticians conducted in 2000 reported that they had witnessed fraud at some point in their careers. In a separate study conducted in 2005, 12.2 percent of researchers reported they themselves had overlooked others' use of flawed data or questionable interpretation of data, and 20.6 percent said they changed methodology or results of a study in response to pressure from a funding source.
The problem with retracted research extends past the number of people directly engaged in the act. It helps to think of the scientific enterprise as an ocean in which ideas are constantly exchanged and breakthrough discoveries are used to spawn new ones. In this ecosystem, retractions are oil spills that need to be cleaned up quickly before they spread to influence new work.
The breakdown in this balance appears most evident in the biomedical and life sciences, a harbor for many fields of study, including cancer, brain disorders and diseases, stem cells and genetics. These are areas of research that directly affect scores of people counting on scientific breakthroughs to improve or save their lives.
This risk isn't just theoretical. Consider the infamous case of German researcher Joachim Boldt.
Boldt's research on hydroxyethyl starch (HES), an intravenous fluid used by hospitals to stabilize patients suffering severe blood loss, concluded that the substance was harmless. He was eventually accused of fabricating data and dozens of his papers were retracted.
A subsequent meta-analysis published last February found that there is indeed a link between HES use and increased mortality rates and acute kidney damage in critically ill patients. Yet Boldt's bogus research was so prolific and influential that HES was used by hospitals up until recently without much concern.
Why is this happening?
There are a number of interconnected factors at play.
For one, there is a lack of oversight and rigorous review. That's partly because science has become highly specialized, making it difficult for researchers who practice in different fields to catch one another's mistakes, according to Ivan Oransky, an editor at Retraction Watch.
The peer review process varies for different fields. "Physics, for example, was done and dusted, it was beat up a little bit," Oransky explained. By the time a paper is published in a physics journal, many sets of eyeballs have appraised it. By comparison, fewer people weigh in on a particular life science topic, making it easier for mistakes to slip through the cracks. Oransky's blog, which he runs with his colleague, Adam Marcus, has been tracking recalled papers since 2010. A quick scan of the site shows that a majority of the duds are from the biomedical and life science fields.
Science as an occupation has also grown popular over the years. It's expanded 60 percent faster than the entire U.S. workforce since the '70s, according to the most reliable figures offered by the Bureau of Labor Statistics. There were only a couple hundred thousand scientists in the '50s; today, there are over 7 million.
That may sound like great news (and it is) but science is an expensive enterprise. The labs, the subjects, the exotic equipment, the time – someone has to pay for it. Traditionally, that someone has been the federal government, but Uncle Sam has taken an axe to the nondefense research and development budget repeatedly in recent decades.
The massive cuts have not been kind to researchers. In terms of funding, Fang says, "science is at the worst point it's been since the WWII era." Nondefense R&D has been slashed by about 10 percent since 2003, according to the American Association for the Advancement of Science. Application success rates for grants are at a historic low, and tenure-track positions at universities have fallen by almost 70 percent since 1973.
What all of this tells us is that there is no shortage of scientists – only a shortage of the funding that keeps them productive.
"You have more and more people competing for grants – it's a winner-takes-all problem. It's become the way life sciences get funded," Oransky said.
Even among those practicing, there is a fear of self-correction. Kelly believes that his colleagues might not be upfront about their mistakes because they think it will stain their research track records. "I don't understand. I see it as admirable and not something that's a demerit on one's record," he said.
Perhaps egos are to blame, but a stronger argument points an accusing finger at over-competition. Healthy competition is, of course, a good thing, but too much competition over scarce resources has negative effects. Scientists scrambling to outdo their colleagues for the sake of getting money to stay employed is hardly an efficient system.
What can be done?
Whether it's gross fraud or the seemingly more benign unwillingness to self-correct, misconduct in science is unacceptable and should be quelled.
Giving science a bigger capital cushion – not just a life raft in the form of the occasional stimulus – should ease industry pressure, free up researchers to do better work and let them mentor the next generation of scientists to do the same. Nor would this be throwing money to the wind: science is an investment that brings both qualitative and quantifiable returns. The results of a life-saving vaccine don't disappear after the check is cashed.
The Obama administration has fortunately recognized this and vows to increase funding for science, technology, engineering, and mathematics (STEM), but many voices in the scientific community are calling for more.
"We need to increase scientific funding – and that's a small drop in the bucket considering the scale of the government budget – to dramatically improve research," Fang said. "We need to let scientists to do science, rather than fighting with each other."
*The chart was compiled to include mathematical and computer scientists, engineers and technicians, in addition to social, life and physical scientists. Because the BLS re-categorized occupations over the years, it is difficult to extract accurate figures for life, physical and social scientists alone.
Editor's Note: Certain photos originally appearing with this article, intended as general depictions of scientific work, have been removed to avoid any potential for confusion by readers.