There’s a problem I’ve noticed with science/scientists today which is illustrated by the following story.
A few days ago I was talking to a grad student with whom I’ve taken a couple of classes. We have a few things in common so we always chat when we pass each other in the hall.
“By the way,” she said after a few minutes of chit-chat, “You work with Famous Reaction, don’t you?” [Famous reaction is a reaction that happens in nature and has also been adapted for a variety of uses in industrial settings.]
“Not really,” I replied. “But a lot of Dr. Hand-Waver’s students have studied it, so I know a fair bit about it.”
My fellow grad then mentioned that she’d been adding the reagents to her system of study to see if they would catalyze a reaction. But no sooner had she done this than she noticed a few funny things happening, which she described to me…
“Oh!” I said. “Yes, A. found something similar when he was working on X…this was what he found, and this was how he fixed it! Do you want to see what he wrote about it?”
She frowned. “Nah. Too complicated. I’m just going to ignore it. I have to hurry and get results and get this paper written, because I want to graduate in December.”
As she walked away I was a bit flabbergasted. What kind of results, what kind of paper would she end up with if she ignored a potentially huge factor?
One of the problems I see with science today is the push to publish. Don’t get me wrong: I think it’s great that so much information is out there, available for people to sift through. But because there’s such a huge emphasis on publishing (without lots of papers, no fellowships! no dissertation! no grants! no tenure!), people rush to get papers out without making sure all the loose ends are tied up.
For example, I’m adapting a new methodology, first utilized in marine environments a couple of years ago, to freshwater systems. When my advisor first read about this she was really excited. We both read the paper a few times. Of course, I was struggling to figure out exactly what they were doing (this was halfway through my first year), but my advisor started noticing little things. The authors hadn’t controlled for factor A, known to have a huge influence on formation rate of chemicals Y and Z. They had published production rates while completely ignoring the fact that decay rates had to be accounted for. The real production rate was very close to the detection limit. And so forth.
After we’d discussed all the issues, Dr. Hand-Waver shook her head sadly. “I would never have published this,” she said. “It doesn’t look like they even bothered looking at their numbers. They were so excited that they measured something that they didn’t bother to figure out what, exactly, they’d measured.”
I wish I could say that this was an isolated incident, but I know it’s not. In my microbiology class, we’ve been reviewing papers pertinent to our topics of study. At least one of the articles I’ve read had similar issues: they found something, it seemed cool, and they made a half-hearted attempt to prove it using various techniques. But it was clear to me, a chemist first, geologist second, microbiologist maybe tenth, that they had rushed the results, rushed the paper off for publication, and ignored the elephant in the lab.
If publication weren’t such an important part of evaluation processes, would we get more thoughtful papers? But if it weren’t such an important part of evaluation processes, how would we measure the quantity/quality/usefulness of the research done? I have no answers.