12 comments
A massive seven-year project exploring 3,900 social-science papers has ended with a disturbing finding: researchers could replicate the results of only half of the studies that they tested.
The conclusions of the initiative, called the Systematizing Confidence in Open Research and Evidence (SCORE) project, have been “eagerly awaited by many”, says John Ioannidis, a metascientist at Stanford University in California who was not involved with the programme.
The scale and breadth of the project is impressive, he says, but the results are “not surprising”, because they are in line with those from smaller, earlier studies.
The SCORE findings — derived from the work of 865 researchers poring over papers published in 62 journals and spanning fields including economics, education, psychology and sociology — don’t necessarily mean that science is being done poorly, says Tim Errington, head of research at the Center for Open Science, an institute that co-ordinated part of the project.
Of course, some results are not replicable because of either honest mistakes or the rare case of misconduct, he says, but SCORE found that, in many cases, papers simply did not provide enough data or details for experiments to be repeated accurately.
Fresh methods or analyses can legitimately lead to distinct results. This means that, rather than take papers at face value, researchers should treat any single study as “a piece of the puzzle”, Errington says.
I’m glad this study exists! Replicability is hugely important in all sciences. I’m less glad about the number of times the article brings up ‘automated tools’ being developed to judge and review studies. I’m not saying it’s bad, I’m just nervous.
Naval Ravikant would have a field day with this.
I think the big problem is not that many published results are not replicable, but that too many people believe that science is a big shiny monolith of perfection, which it never was. Science exists in the real world and should be viewed in that light.
Could it be due to societal changes? Or is 10 years too short?
This is good, but a lot of sociology studies I read are of “moving targets.” That is, they are of attitudes/beliefs/practices that are constantly evolving, in some cases rapidly, which is why sociologists want to study them.
I think a lack of replicability might just be an inherent weakness of some types of otherwise perfectly sound science, simply because they are so context-dependent that you are unlikely to find exactly the same variables in the wild ever again.
Call me pessimistic, but that’s better than I would have thought considering the challenges of controlling variables when studying human behavior.
That’s fine; test the ones that did replicate more and keep going. That’s just science.
I mean, you can just look at half of the stuff that gets posted here. A lot of it seems like it’s just confirming biases people have.
Ok, now do hard sciences.
>However, many of the failures might have been caused by the SCORE researchers needing to make guesses about procedures or to recreate raw data
I think I would be more convinced by this study if it could use the same raw data and reproduce the same results. If you have to guess at the raw data, that’s a problem.