Randomly generated science!

Details further highlighting the perils of academia’s “publish or perish” philosophy taken to the extreme (which it is) have emerged in Nature, also reported on in the Guardian. When I was an undergraduate at the University of York I remember hearing of a “random paper generator”, SCIgen. I seem to recall even having tried it: supply some keywords and watch with amazement as an entire manuscript of impressive-sounding nonsense flies out the other end. The generator was created by three MIT graduate students to demonstrate that conferences would accept anything, even if it made no sense. They succeeded! What’s more, others have been using the program ever since. French computer scientist Cyril Labbé has catalogued over 100 computer-written papers published by IEEE, a further 16 by Springer, and more. Labbé took the concept to the extreme when he created a fictional author, Ike Antkare, and lodged 102 fake papers with Google Scholar. Antkare’s h-index quickly rose to 94, making him the 21st most cited author on the planet. (The h-index is a measure of an author’s contribution to academia: it is the largest number h such that the author has h papers each cited at least h times. An index of 5 means an author has 5 papers that have each been cited 5 or more times.)
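To make the metric concrete, here is a minimal sketch of how an h-index can be computed from a list of citation counts (my own illustration; the function name and logic are not taken from any particular bibliometric service):

```python
def h_index(citations):
    """Largest h such that the author has at least h papers,
    each cited at least h times."""
    h = 0
    # Sort citation counts from most-cited to least-cited, then walk
    # down the list: the i-th paper (1-indexed) supports h = i only
    # if it has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# An author with papers cited [10, 8, 5, 4, 3] has h-index 4:
# four papers have at least 4 citations each, but the fifth
# paper has only 3, so h cannot reach 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

This is why the metric is so easy to game with self-citation: 102 fake papers all citing each other can push h up to near the size of the fake corpus itself.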

The whole thing resonates with last year’s sting operation, published in Science, wherein a severely flawed paper was accepted for publication by more than 150 open access journals (I wrote about that too). Researchers are under incredible pressure to publish: promotions and positions are tremendously competitive, and your publication record largely dictates what will happen to you. Funding bodies (and governments) wanting to measure research performance to the nth degree place an unhealthy attention on publications. The problem is that metrics often perturb the system they are trying to measure, because it is natural for people to start gaming them. To get ahead researchers need to publish very frequently, but the simple truth is that it’s impossible to consistently produce more than 3 or 4 (at most!) genuinely groundbreaking journal papers a year.

As a researcher you choose how to invest your time. Do you work on something potentially groundbreaking but high risk, which may not pay off, or may not deliver that critical publication before your current funding runs out? Or do you concentrate on publishing more incremental research more frequently, and have a longer-looking publication record when it comes to applying for more funds? What ought to happen is for the incremental stuff to be turned away from the higher-quality journals, but the aforementioned sting operation demonstrates that this isn’t necessarily the case (and, in my personal experience, some very dubious work can get into very high places based purely on whose name is on the author list). The gatekeepers to the journals are the same overloaded, stressed-out academics panicking about where their next funding is coming from, and often they don’t do a great job (again, I’ve seen some pretty poor reviews).
Academics are not paid for reviewing, and short of a benevolent, altruistic attitude towards the institution of science (which does not pay the bills), it’s not clear to me what motivates a particularly thorough job, especially with ever-growing numbers of manuscripts coming through the door. “Reputation”, you might think… but reviewing is largely anonymous.

I’ve painted a bit of a caricature here, but it’s to highlight the point, and there is truth in it. It’s hard to measure scientific progress with fine instruments. Blue-sky research may have no obvious use right now, but at some point in the future another researcher you have never heard of will connect the dots and do something amazing. Or not. You can’t really tell in the here and now what will be relevant in 10 or 50 years’ time. Trying too hard will only push people towards publishing incremental, rather than transformational, research. I’m occasionally regaled with stories of how some of the greatest scientists of yesteryear went years without publishing anything – they were thinking, getting it right, and published only when ready. I wonder sometimes who is actually reading the thousands of incremental papers that researchers are pressured into publishing at an alarming rate, if even the reviewers don’t do a particularly good job.

I’m afraid I have to leave this article without much of a solution. A bad ending, I know. To me it is clear that there’s a problem with the current scientific model, but I can’t offer any earth-shattering alternatives (yet). It does strike me, though, that what we have here looks a little like the perceived problems with short-term outlooks in the financial markets (and in politics, though the latter does not seem as well appreciated). Short-selling has periodically been banned in some markets, and high-flying people in finance are now paid their bonuses in longer-term company shares they can only access years later, rather than in cash. I wonder if it’s possible to engineer a longer-term outlook in science too?