
Many destructive behaviours are rewarded in a system that emphasizes the quantity and proposed impact of publications. Many scientific fields suffer from the behaviour of individuals who are more interested in social status (or just economic security) than in seeking truth. Just as in society at large, the humble truth-seekers necessary for slowly chiselling out the firm ground of solid knowledge are in the minority. They are definitely disadvantaged in this system, and more likely to be driven out than those who lie, rush or cut corners.

Given the modern complexity and importance of science, we need to put much more emphasis on checking whether it is done right. The original vision of the Royal Society involved rigorous peer review, to the point of demanding that scientists demonstrate their experiments in front of their peers: https://www.sciencemuseum.org.uk/objects-and-stories/17th-ce... The curator of these literal peer reviews held the first paid position in science.

Journals do have the profits to hire or otherwise pay peer reviewers full time. The reviewers would become experts in their field in the process, and thus better suited to the task than those in the current system, where even undergrads are asked to review for high-profile outlets like NeurIPS. Full-time rigorous peer reviewing would also be an interesting career prospect for many current scientists. And there is your startup idea.



It's way, way bigger than that. The whole world has changed since those early days of modern science.

During those days, the science you were talking about was one of countless hobby clubs for aristocratic circles, where bored rich folk could dedicate themselves to one-upping each other in the discovery game their peers and recent predecessors had invented. There were certainly cheaters then, too, but the whole thing was very insular and the only people who even cared about it were the people who were in on the game themselves. So excelling within the rules and elaborate demonstrations of the game were all part of the club sport.

400 years later, billions of people across five or six generations have been told that this game is the secret to human prosperity on earth, and that the more it's played, the more aligned we'll be with the truth of the universe, the longer we'll all live, the more leisure and luxury we'll all enjoy, etc.

That's a whole different game! The demand for scientific output has gone from an exhibition sport shared among a small and snobby circle to a replacement for an eroded theology. Trillion dollar governments and globe-spanning trillion dollar industries now (ostensibly) make their billion dollar decisions based on each day's summary of the sport, and billions of people dreaming about prosperity or salvation await the next big game's result as though it were a demonstration of divine grace.

There are now so many games being played, by so many people, with so many bets and spectators, that incentives are unfathomable and referees are sparse and cheating is rampant.

Unfortunately, you're not going to clean all that up by telling some journals to reshuffle their revenue allocations.


The rot has set in relatively recently, but it's getting exponentially worse. As late as the 1990s, or even the early 2000s (depending on the field), an academic could have had a fine career publishing a good paper every few years, or writing a good book or two over a whole career.

I did my PhD under a professor who was very respected and influential in his field. He published quite rarely, although he had piles of manuscripts that would have passed review; he just didn't find them worthy. And he refused to have his name on his lab's papers if he didn't feel he had contributed enough (nowadays many, if not most, professors demand to be added as an author even on papers they won't read).

In the current system he would have no chance in academia. He said so himself, as did Peter Higgs: https://www.theguardian.com/science/2013/dec/06/peter-higgs-...


The professor sounds like a man of class.

As I was reading the article I got to the part where the whistleblowers accused Zlokovic of pressuring them to change data.

If you changed the data, then you're also responsible. Every one of those whistleblowers should have refused.


In my experience a lot of older professors are like this, likely because for them getting tenure wasn't yet about counting papers (other games were involved, for sure).

I agree they should have refused. I hope I would in their situation.


> where bored rich folk could dedicate themselves to one-upping each other in the discovery game their peers and recent predecessors had invented. There were certainly cheaters then, too, but the whole thing was very insular and the only people who even cared about it were the people who were in on the game themselves. So excelling within the rules and elaborate demonstrations of the game were all part of the club sport.

I see this meme about the early scientists a lot. Unfortunately, these early scientists are not so easily categorized. I would encourage others to delve into the biographies of these progenitors. It is true that some were very much in the vein of this meme, but many were much more complicated individuals.

For example, Darwin was about as blue-blooded as it comes. Yet On the Origin of Species has a very long section at the beginning where Darwin painstakingly goes over all the scientists before him who had in any small way already discovered evolution.

Another good one: while digging potatoes with his hands at the family farm in his native New Zealand, Rutherford got the news that he had been awarded a scholarship to study physics at Cambridge under J. J. Thomson.

Many other scientists came from very 'low' births. But science is a 'strong-chain' domain where only the 'correct' ideas survive. Anyone could, and did, contribute despite those obstacles.


I think you underestimate the possibility of cultural change happening from the top. It's in fact a very common way to achieve cultural change.


Sorry but who actually thinks that being a scientist is the “key to prosperity”?

I think far and away most scientists become researchers despite the fact that it’s generally not at all a prosperous venture.


Not "being a scientist": "science".

Science is seen as the road to human (not just personal) prosperity by people educated in the last 100-ish years, which is mostly everybody now. That puts extreme pressure to perform and produce on what was once a pure little hobby sport whose spectators were almost all invested as players themselves.


Science and the advancement of technology is literally the only thing that has ever led to lasting prosperity for humans. All other periods of prosperity over the last ten thousand years have been temporary and highly localized. You can lose your golden age of prosperity through a single assassination. Only science breaks the wheel.

If you factor in all future humans in your utility calculations, then science is by far the #1 noblest pursuit humans can ever undertake. We're talking about scientific advancements potentially helping trillions of people before all is said and heat-deathed. Great works of art can also be enjoyed by all future humans, but science has a super-linear (if not actually exponential) growth curve where every advancement makes future advancements a little easier.

The question is whether the noble pursuit of "science" and the day-to-day activities of "being a scientist" have diverged or not. There is mounting evidence that the institution of science has been subverted to the extent that many people who are professional scientists are not actually contributing to the pursuit of science. Or, far worse, detracting from it, as we see here. It's one of the great tragedies of our era.


> Science and the advancement of technology is literally the only thing that has ever led to lasting prosperity for humans. All other periods of prosperity over the last ten thousand years have been temporary and highly localized. You can lose your golden age of prosperity through a single assassination. Only science breaks the wheel.

How exactly do you know this civilization will not crash too? (and take with it that highly localized habitat)


That was exactly my question when I read that quote too.

Everything is temporary on a large enough scale. Everyone within a golden era also thought it would last, and that belief is part of why it didn't.

Humans will never build anything that truly lasts, for a very simple reason: the lessons learned get forgotten after 2 generations (3 at most). Stop and consider how the US fought for its freedom, and how nowadays many people in the US would vote for more limitations on speech.


> The lessons learned get forgotten after 2 generations (3 at most).

What are you talking about? Did we all forget the Pythagorean Theorem after 3 generations? Did we forget the force-multiplying effect of levers and pulleys a few generations after Archimedes died? How can you sit here with Wikipedia at your fingertips and tell me that the collective sum of humanity remembers nothing from over 100 years ago?

> everyone w/i that golden era also thought it would last and that belief is part of why it didn't

Yet from almost every collapsed golden era, scientific progress from that era made its way back to collective knowledge (sometimes very slowly, admittedly). People in this modern age have this extremely simplified view of what "collapse" actually looked like.


I said lessons, not facts.

What's the difference between data and information?

Data is data.

Information is data with context.

The values and lessons needed to build something that truly lasts get forgotten over time, until the thing that was built decays into a weaker form as the people involved stop valuing what gave it its stronger form.


Which civilization? The U.S.? That might slow science down, but it would not significantly reverse progress on it. Of course nations/kingdoms/empires/civilizations wax and wane, but you can't use the word "too" if you're talking about a total worldwide collapse of human civilization as a single event -- such a thing has never happened. If your point is that a Chicxulub or a worst-case Wyoming Supervolcano could outmatch the progress that science has made, then: sure, what's your point?

The vast majority of established science will outlive mere nations and petty politics. There are too many copies of Wikipedia, too many printed textbooks and encyclopedias, to lose a significant portion of established science. The possibility of worldwide humanity-destroying events does not disprove this at all. In fact, if we ever faced a humanity-destroying event, then it would be nothing less than our collective scientific progress that would have any chance of seeing us through it.


Every institution is exploited to some degree.

What do you mean by many? As an institution, I sincerely doubt that science is exploited to the same degree as basically any other.

I don’t really accept these ridiculous trumpisms. “Many people are saying”.

How many people are *doing*? I'll wager it's far and away a small minority.

Can you back up your “many” with actual numbers as a percentage of investment?


> I sincerely doubt that science is exploited to a similar degree as basically any others.

The world being shitty in other ways does not diminish the tragedy that we are only a fraction of how effective we could be at pursuing science.

> Can you back up your “many” with actual numbers as a percentage of investment?

"A 2011 analysis by researchers with pharmaceutical company Bayer found that, at most, a quarter of Bayer's in-house findings replicated the original results."

"In a 2012 paper, C. Glenn Begley, a biotech consultant working at Amgen, and Lee Ellis, a medical researcher at the University of Texas, found that only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies"

"The ... paper examined the reproducibility rates and effect sizes by journal and discipline. Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science"

https://en.wikipedia.org/wiki/Replication_crisis

> I don’t really accept these ridiculous trumpisms. “Many people are saying”.

I didn't say "many people are saying", and comparing me to Trump is a far greater insult than the things comments get flagged and removed for. Respond to what I actually said and avoid the (extreme) personal insults.


Like journalism, they trade money for social status and prestige.


Or just to do something that they are interested in. And to have quite a bit of freedom.


After seeing that academics still have to take a "two job" approach with serious research and fun research, I'm not convinced of this. Not to mention the very protracted timelines to reach that stage of freedom.


That depends on whether you accept the risk of having to find another line of work. Depending on the place, you can do almost anything you want, even at the PhD level. I've been on a chain of grants, doing quite freely what I want, for over a decade. But I accept that it may break at any point, and then I'll just go do something else.

After the PhD there will be nobody telling you, and often not even caring, what to do. But that may mean that you don't get your PhD or that you don't get another grant to live on. If you get tenure, it almost literally means that you can't be fired even if you do nothing at all. What is surprising is that almost everyone with tenure keeps running the rat race even though they don't really get anything material out of it.

(Nitpick: I think serious research is the fun one. The one churned to get another grant is neither serious nor fun.)


How did you get a position without a postdoc?

> After the PhD there will be nobody telling you, and often not even caring, what to do.

If you make the hiring cut, you're in for about 5 years of grunt work and committees as an assistant prof, right?

By “serious research” I mean the one they do for career advancement and hiring.


> How did you get a position without a postdoc?

I am a postdoc, I've just been one for quite a while (on at least four different grants). In Finnish academia it's not that uncommon to stay a postdoc even until retirement.

> That’s not true. If you make the hiring cut in you’re in for about 5 years of grunt work as an assistant prof.

For teaching and admin, yes. But at least in the fields I know, what research you do, or whether you do any at all, is entirely up to you. Of course the risk is that you'll be unemployed after the assistant prof term ends. My point is that if you don't care about that, you're quite free to do whatever you like research-wise.


Maybe, but it's probably a relatively small circle that elevates someone's status by being in academia. (Not to say that isn't the circle whose opinions matter to them, though).


It's not necessarily even status, just feeding themselves: Publish or die, and you aren't publishing if your conclusions aren't positive and strong.

In medicine it's especially tough, as testing is among the most expensive and the number of available data points is quite low, so one gets to fight over individual data points that make a difference: did this person fail to follow the protocol (and I have a paper), or not?

I've seen something similar working in agriculture: early tests of new plant strains are low-data, because a company starts by testing so many experimental plants that it would be unaffordable to test them all thoroughly. This makes people really argue over single data points in those early tests. Was this chunk of the field contaminated? Attacked by a wild animal? But there, at least, the incentive for fraud is small. If a breeder gets a stinker through, it just goes into trials in the other hemisphere, where it's planted in an order of magnitude more fields, and therefore just leads to disappointment six months later.

With medicine, the cost of replication is so high, and replication takes so many years, that an honest mistake is catastrophic; worse, the difference in outcomes for the person doing the curation is so large that one doesn't have to be all that dishonest to make biased decisions that lead to a strong career.


> It's not necessarily even status, just feeding themselves: Publish or die, and you aren't publishing if your conclusions aren't positive and strong.

The wider sciences need to learn a lesson from economics: incentives matter. A lot.


Much of the problem is that wider science has taken too many lessons from economics.

Incentives matter a lot, but creating incentives that work is almost impossible.

Science used to run on ethics (like many other professions). But we learned from economists that there is no such thing, only utility (money) maximizers.


Incentives that work are easy: do research in corporate labs. That's all it takes. Now there is an incentive to do work that replicates (because it is intended to be used to build useful things), and there are people responsible for detecting and resolving fraud (managers), incentivized to do so by a mix of carrots and sticks, some of them legal.

Fact is, corporate R&D doesn't have this relentless problem with reproducibility. It's academic output that does, because academics only care about getting a paper published and don't expect that anyone will use their results. Often they don't even make their code or data available at all because it's not to their advantage for others to be able to replicate their work, as that would yield fewer papers. But this is all wrong. Science exists to be used in technology, not for its own sake.


There are corresponding responsibilities in academia. Perhaps even more so. But just as a business culture can, and often does, rot its incentives and responsibilities, so has the academic culture.

I've worked for two corporate labs and have collaborated with several. If anything, there was less rigor.

Corporate lab work is quite cozy and stable. In a corporate lab your job doesn't end if you don't (pretend to) make a new major discovery every year. And they don't make anything available, often not even internally.

Science is about a lot more than technology.


Hmm, I've had the opposite experience. Corp labs release cool and useful stuff all the time. On the front page right now is Seamless, a very useful thing released by a corporate lab. And corps have performance evaluation and management programmes designed to encourage high output. And of course the competent ones regularly bring research into production.

Whereas with academics, you get a paper. You might get data and code, or might not, depending on field and temperament of the researchers. If you're really lucky that data/code might actually be correct, match the paper and be usable for something, but it really depends a lot on the field. You almost certainly won't get products.


> just as a business culture can, and often does, rot its incentives and responsibilities, so has the academic culture.

Maybe the problem is that there are many corporations, with different cultures.

But there is mostly just one academic culture across the whole system.


> Also, full-time rigorous peer reviewing would be an interesting career prospect for many current scientists. And here is your startup idea..

I don’t know about this. (But I don’t have any answers for the question either).

If you’re a full-time peer reviewer, first, are you really a peer? But more importantly, your motivations change. No longer are you trying to see whether the paper is sound or worthy of publishing; instead, your motivation is to push through as many papers as possible. When getting paid depends on approving papers, the quality will drop.

Maybe the problem is where the money changes hands. What if authors paid to have their paper reviewed, instead of published? Currently, journals only get paid when a paper is published. What if they got paid to review the paper at all? It would limit paper submissions to Nature/Science/Cell, but you’d be paying for a high-quality review (which often makes a paper better). You might even have luck decoupling reviewers from journals completely: make the journals compete over the best (already) reviewed papers.


>instead, your motivation is to push through as many papers as possible.

I think this is an assumption that doesn't have to hold true in practice. Maybe it's biased by the 'publish or perish' paradigm that's pervaded academia, but there's no reason to replicate the same problem elsewhere.


Rigorous peer review, let alone paid review (which causes its own issues beyond money), is totally infeasible under the current publication pressure. It's hard to find reviewers even with the current lax standards for both reviewers and the quality of their reviews (the latter being the more problematic).

The publication volume is just way, way too high. But researchers who don't publish multiple articles per year, whether or not they have anything of value to publish, perish. If you don't churn out a paper per year during your PhD, you don't get the PhD.

Many of the manuscripts that get submitted to journals are incredibly bad; most are just bad. But as both an editor and a reviewer, I usually let them be published out of pity if the content isn't blatantly wrong; the poor PhD student's whole career is on the line.

Journals being full of crap is not SO bad within science because everybody knows they are full of crap. But if an "outsider" thinks that being published in a peer-reviewed journal, even a "good" journal, means that the article isn't crap, it can be literally life-or-death (like in this case).

Academia is broken because it's being run as a business whose purpose is to churn out papers. Welcome to neoliberalism.


Although the above comment might sound negative and harsh, it is a perfect distillation of modern research-oriented academic environments. (I was a moderately-successful professor in these environments. I woke up one day and simply couldn't do it anymore.)


Publication rate DURING your PhD is highly variable from field to field. I don't know what field you are in, but generally speaking, across my three fields students are required to have at least one publication from the entire PhD to graduate. Some even have none published by the time of graduation.

1 per year is a ridiculous standard.


It is wildly different, true. In some areas, it is not uncommon to see a PhD "thesis" that is essentially an intro stapled to 3-4 papers.

It's also true that publication rates post-PhD vary wildly as well, e.g. expectations for a tenure packet.

However, it's also fair to say that the expected publication rate has grown significantly across the board. A generation or two ago, a solid career could be built on a handful of high-impact papers. That's hard to imagine now.


That thesis format is the norm in Finland in sciency fields and also in some arts.

You can write a monograph (essentially a book) instead, but that's frowned upon, especially by admin, because the papers bring the university more money than monographs.

Nowadays piles of inconsequential papers, to which you contributed little but your name, are almost a requirement for a solid career. It's horrible.


This does vary across countries and fields. In the UK I was a bit surprised that you're not expected to publish at all before the postdoc stage.

In Finland usually at least three peer-reviewed (first-author) papers are required for a PhD in my field (cognitive science). In some fields (e.g. many engineering fields) even more. And PhD grants typically run three to four years.

It is ridiculous and the paper quality is what you'd expect.




