About 50% of scientific experiments turned out to be non-reproducible

By chance, in the stream of news and information, I came across an article in Nature Scientific Reports. It presents data from a survey of 1,500 scientists on the reproducibility of research results. Previously this problem was raised mainly for biological and medical research, where on the one hand it is explainable (spurious correlations, the general complexity of the systems under study, sometimes even the scientific software gets blamed), and on the other hand it has a phenomenological character (for example, mice tend to behave differently with scientists of different genders (1 and 2)).

However, things are not smooth in the more "exact" natural-science disciplines either: physics and engineering, chemistry, ecology. It would seem that these very disciplines rest on "absolutely" reproducible experiments carried out under the most controlled conditions. Alas, the survey delivered an astonishing result, in every sense of the word: up to 70% of researchers have failed to reproduce experiments and results obtained not only by other groups of scientists, but even by the authors and co-authors of published papers themselves!

Does every sandpiper praise its own swamp?

Although 52% of respondents point to a crisis of reproducibility in science, less than 31% consider the published data to be fundamentally wrong, and the majority indicated that they still trust published work.

Of course, one should not rush to judgment and condemn all of science on the basis of this survey alone: half of the respondents were scientists associated, in one way or another, with the biological disciplines. As the authors note, the level of reproducibility and confidence in published results is much higher in physics and chemistry (see the graph below), though still not 100%. Medicine, by contrast, fares much worse than the rest.


Marcus Munafo, a biological psychologist at the University of Bristol, England, has a longstanding interest in the reproducibility of scientific data. Recalling his student days, he says:

One time I tried to reproduce an experiment from the literature that seemed simple to me, but I just couldn't do it. I had a crisis of confidence, but then I realized that my experience was not all that rare.

The breadth and depth of the problem

Imagine that you are a scientist. You come across an interesting article, but its results and experiments cannot be reproduced in your laboratory. The logical step is to write to the authors of the original article, ask for advice and pose clarifying questions. According to the survey, less than 20% have ever done this in their scientific career!

The authors of the study suggest that such contacts and conversations may be too uncomfortable for the scientists themselves, because they expose their incompetence in certain matters or reveal too many details of an ongoing project.

Moreover, only a small minority of scientists have attempted to publish a refutation of irreproducible results, and they ran into opposition from editors and reviewers who demanded that comparisons with the original research be downplayed. Is it any wonder that the chance of a failed replication actually being reported is only about 50%?

Perhaps, then, it is worth at least carrying out reproducibility tests inside one's own laboratory? The saddest thing is that a third of the respondents have never even thought about establishing procedures for verifying the reproducibility of their data. Only 40% indicated that they regularly use such techniques.
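
As a rough illustration of what such an in-lab check could look like (this is my own minimal sketch, not a procedure from the article): two independent runs of the same measurement are compared, and the result is flagged as inconsistent if the means differ too much. The function name, the example data and the 10% tolerance are assumptions made purely for illustration.

```python
# Minimal sketch of an in-lab reproducibility check (illustrative only).
import numpy as np
from scipy import stats

def replication_report(run_a, run_b, alpha=0.05):
    """Compare two independent replicate runs of the same experiment."""
    run_a, run_b = np.asarray(run_a, float), np.asarray(run_b, float)
    # Welch's t-test: did the mean shift between runs?
    t_stat, p_value = stats.ttest_ind(run_a, run_b, equal_var=False)
    # Relative difference of means as a crude consistency indicator
    rel_diff = abs(run_a.mean() - run_b.mean()) / abs(run_a.mean())
    return {
        "mean_a": run_a.mean(),
        "mean_b": run_b.mean(),
        "p_value": p_value,
        "relative_difference": rel_diff,
        # Arbitrary 10% tolerance chosen for the example
        "consistent": p_value > alpha and rel_diff < 0.10,
    }

# Made-up measurements, e.g. yields of the same synthesis in two runs;
# here the second run clearly fails to reproduce the first.
original = [0.82, 0.79, 0.85, 0.81, 0.80]
repeat   = [0.64, 0.70, 0.66, 0.68, 0.65]
print(replication_report(original, repeat))
```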

Another example: a biochemist from the United Kingdom, who asked not to be named, says that attempting to repeat and reproduce work for her laboratory's projects simply doubles the time and material costs without adding anything new to the work. Additional checks are carried out only for innovative projects and unusual results.

And of course, the eternal Russian questions have now begun to torment our foreign colleagues as well: who is to blame, and what is to be done?

Who is to blame?

The authors of the work identified three main causes of irreproducible results:

  • Pressure from superiors to get the work published on time
  • Selective reporting (apparently meaning the suppression of data that "spoil" the overall picture)
  • Insufficient data analysis (including statistical analysis)

What to do?

Of the 1,500 surveyed, more than 1,000 specialists spoke in favor of better statistics in data collection and processing, better supervision by mentors, and more rigorous planning of experiments.
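
To make "more rigorous planning of experiments" a little more concrete, here is a small hypothetical sketch of an a priori power analysis: before running the experiment, one estimates how many samples per group are needed to reliably detect an effect of a given size. The effect size, alpha and power values below are assumptions for illustration and are not taken from the survey.

```python
# Illustrative a priori power analysis for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # expected standardized effect (Cohen's d), assumed
    alpha=0.05,        # acceptable false-positive rate
    power=0.8,         # desired probability of detecting a real effect
)
print(f"Samples needed per group: {n_per_group:.0f}")  # ~64 for these settings
```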

Conclusion and some personal experience

Firstly, even for me as a scientist the results are staggering, although I am used to a certain degree of irreproducibility. It is especially noticeable in work performed by Chinese and Indian groups without a third-party "audit" in the form of American or European professors. It is good that the problem has been recognized and that people are thinking about how to solve it. I will tactfully keep silent about Russian science, in connection with the recent scandal, although many there do their job honestly.

Secondly, the article ignores (or rather, does not consider) the role of scientific metrics and peer-reviewed journals in the emergence and growth of the irreproducibility problem. In the pursuit of speed and frequency of publication (read: higher citation indices), quality drops sharply and there is no time left for additional verification of results.

As they say, all characters are fictional, but based on real events. A certain student once had occasion to review an article: not every professor has the time and energy to read submissions thoughtfully, so the opinions of two to four students and PhD holders are collected and the review is assembled from them. A review was written pointing out that the results could not be reproduced by the method described in the article, and this was clearly demonstrated to the professor. But in order not to spoil relations with "colleagues" (after all, everything works out for them), the review was "corrected". And two or three such articles have already been published.

The result is a vicious circle. A scientist submits an article to the journal editor and indicates "desired" and, more importantly, "unwanted" reviewers, in effect leaving only those who are positively disposed towards the team of authors. They review the work, but they cannot trash it in the comments, so they choose the lesser of two evils: here is a list of questions to answer, and then we will publish the article.

Another example, which an editor of Nature mentioned just a month ago, is Grätzel solar cells. Due to the tremendous interest in this topic in the scientific community (everyone wants an article in Nature, after all!), the editors had to create a special questionnaire in which authors must report numerous parameters and provide equipment calibrations, certificates, and so on, to confirm that their method of measuring cell efficiency conforms to general principles and standards.

And thirdly, the next time you hear about a miracle vaccine that conquers everything, a new "Steve Jobs in a skirt", new batteries, or the dangers/benefits of GMOs or smartphone radiation, especially if it is being hyped by tabloid journalists, take it calmly and do not jump to conclusions. Wait for the results to be confirmed by other groups of scientists and for larger data sets and samples to accumulate.

P.S.: The article was translated and written in haste; please report any errors and inaccuracies you notice in a private message.
