DOI: https://doi.org/10.4414/smw.2015.14120
The track record of science in the last century is impressive. However, recent observations are eroding this firmly established picture. The biggest menace to trust in science is the mounting awareness that scientific results are very often false [1]. In his publication, Ioannidis analysed the underlying insufficiencies and supported this depressing notion with multiple citations. Further evidence is discussed below [2–5].
The reasons for this are multiple, not least the enormous explosion of scientific activity worldwide. For clinical research alone, ~180,000 clinical studies from 187 countries are currently registered with ClinicalTrials.gov, ~35,000 of them actively recruiting (as of 14 December 2014). This is a conservative estimate, as not all studies are registered with the US registry, but it demonstrates convincingly the growing global presence of clinical research, which most probably has repercussions on quality. Results of biomedical research are published in millions of articles every year (already estimated at >2 million/year more than 20 years ago!) [6]. However, ~90% of major scientific advances are covered by only 150 journals [7]. The rest is produced at enormous cost but with very limited impact (~85% of all research expenditure, corresponding to approximately 200 billion US dollars) [8, 9]. This waste of resources may also contribute to the constantly rising development costs for new drugs, currently reaching around USD 1 billion.
A recent analysis of phase II studies, which are an important step in drug development and a prerequisite for progression to phase III trials, reported that <20% of these trials are successful, mostly owing to insufficient efficacy of the tested substance [2]. In a letter, Prinz et al. from Bayer Healthcare discussed and confirmed these findings: ~60% of published results could not be reproduced, in spite of all the effort that goes into target validation [3]. This has enormous economic repercussions, as confirmatory phase II trials are the basis for further investment in the very expensive development of potential drug candidates.
The expanding science bubble is not the only cause. In 2005, Ioannidis published a much-cited seminal paper with the provocative title “Why most published research findings are false”. His analysis centred on statistical weaknesses (mostly insufficient power and low pretest probabilities), which can be summarised as follows (by a nonstatistician for nonstatisticians): the smaller the studies, the smaller the effect sizes, the more relationships tested, the more flexible the designs, definitions, outcomes and analytical modes, the greater the financial interests, and the hotter the field, the less likely the reported findings are to be true. Evidently many of these conditions apply to much reported biomedical research and lower the probability of the results being true [1].
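To make the statistical core of this argument concrete, the short calculation below (a minimal sketch in Python; the numerical scenarios are illustrative assumptions of mine, not data from [1]) applies the positive-predictive-value formula underlying Ioannidis’ analysis, with the bias term ignored: the probability that a nominally significant finding is true depends on the study’s power, the pre-study odds of the hypothesis and the accepted type-I error rate.

# Probability that a nominally "significant" finding is true (formula from [1], bias term ignored):
#   PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
#   R = pre-study odds that the tested relationship is real; 1 - beta = power; alpha = type-I error rate.

def ppv(power, pre_study_odds, alpha=0.05):
    """Positive predictive value of a nominally significant result."""
    true_positives = power * pre_study_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# Illustrative assumptions, not data:
print(ppv(power=0.80, pre_study_odds=1 / 10))   # adequately powered study of a plausible hypothesis: ~0.62
print(ppv(power=0.20, pre_study_odds=1 / 100))  # small exploratory study of a long-shot hypothesis: ~0.04

Under these assumptions, most “positive” findings from small exploratory studies of unlikely hypotheses would be false, which is precisely the point of reference [1].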
Clinical research is a very expensive, complex and, in addition, heavily regulated business. Internationally accepted standards have to be applied (International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use [ICH] GxP: good clinical practice [GCP], good manufacturing practice [GMP], good laboratory practice [GLP], etc. [10]) in order to protect study participants. Institutional or regional/national review boards and ethics committees approve proposed studies, and these should be registered in national or international study registries. There are many reasons for this, but most go back to the multiple and relatively easily applied modifications/falsifications/fabrications of clinical trial data during the study process, and to publication biases once the study is closed [4]. Barely more than half of all clinical studies are published, mostly those with positive (marketable) results. This leads to biased evidence for clinical medicine, going back many years. In addition, only a tiny minority of randomised phase III trials are ever reanalysed independently, and 35% of these reanalyses interpret the results differently from the original article. This sheds some doubt on the published results of the “gold standard of clinical research” [5]. For these reasons, “restoring the integrity of the clinical trial evidence base” by reproducing, accessing and (re)analysing old, invisible and abandoned trials has a high priority [11, 12].
For the developing firm, the goal is market access for a safe and effective drug, which can ultimately be sold at a profit high enough to cover at least the expenses for research and development, salaries and dividends. These are important incentives to arrive as fast as possible at positive results, i.e. highly effective drugs with minimal side effects.
What about the interests of the clinical scientists and physicians doing the research? They have multiple incentives to get the most out of a research project or study – and not always because of Big Pharma influences, even though conflicts of interest are at the forefront of potential causes. We will next have a look at these incentives for the medical and clinical scientist, before we elaborate some broader hypotheses.
In summary, we have to acknowledge the presence of a serious problem, which might undermine the foundations of medical science.
“A conflict of interest is a set of circumstances that creates a risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest” (Institute of Medicine) [13].
The existence of conflicts of interest (COIs) is commonly put forward as a possible, identifiable and quantifiable cause of unethical behaviour. COIs are ubiquitous where humans interact, especially when one of the two actors is the agent for a third party. Possible solutions are to optimise conflictual decisions by negotiation so as to reduce negative effects (in a transparent market), or simply to keep the partners unaware of the COI (fraud?). This seems especially evident in the banking business, where a bank’s interests may interfere with those of a client. Often the client is not even aware of this situation (e.g. the “kick-back” problem). The same holds true for many commercial situations where clients may be put at a possibly unfair economic disadvantage without knowing it, opening up a huge field of activity for the consumer protection movement.
Examples of potential COIs in medical research are:
‒ Patient care vs doctor / clinical researcher as agent for research;
‒ Scientific truth vs career opportunities (publication numbers, impact factors, university rankings);
‒ Science vs marketing (pharma, doctors, publishers);
‒ Healthcare system costs vs income/expenses of doctors, hospitals, cantons, pharma and insurers.
The common denominator is that a third party is at risk, which can be the patient, scientific truth or costs in the healthcare system. The decisions are not always easy: better income or a less expensive healthcare system? One “spun” publication more or reproducible scientific truth?
Physicians have the time-honoured Hippocratic obligation to protect their patients’ interests, even when these collide with their own [14]. Distorted and falsified research results diminish the credibility of the scientific enterprise and the reputation of hospitals, faculties and universities. Scientific misconduct has been widely analysed and concerns all steps of clinical research, from skewed study designs to biased publication of positive but not of negative results [15–19]. Similar distortions probably affect medical guidelines [20]. As a side effect, the Enlightenment narrative of a society continuously improving through human intelligence is endangered, and short-term financial and social success at any cost remain as the only goals.
However, taking hidden or evident COIs as the only explanation for irreproducibility neglects the multiple forces and the complexity of modern research. Individually, the number of publications and their impact factors determine career opportunities and the chances of having grants approved, thus directly shaping the curriculum vitae (résumé) of a researcher. Institutionally, the same holds true for university rankings and the attribution of institutional grants, and also for media attention to new “breakthrough” discoveries. These are often oversold, the hype created leading to even wider media attention, which in turn has repercussions on reputations, student numbers and finances [21]. These mechanisms are not research-specific and can be seen in many segments of our society. In our context, they undermine the role of science in our society, and the role of evidence-based medicine as a cornerstone of patient care. They are symptoms of science as a career, not a vocation. In the words of Nassim Taleb: “The relation of a career scientist to science corresponds to the relation of a prostitute to love” [22].
The competitive setting and associated COIs may undermine ethical norms and hence confidence in the scientific process. Biomedical research programmes add to the reputation of large academic hospitals; they attract successful researchers/physicians and enhance the potential to raise public and third-party research money. There is an evident advantage to being scientifically successful, with multiple repercussions on standing in the scientific marketplace. However, most of the time the actors are not villains abusing power and influence to further their careers, but people acting normally in situations where awareness of potentially unethical behaviour or its consequences is missing. This has been called “ethical blindness” and will be explored below [23]. Individuals and institutions may be influenced by many mechanisms:
Framing: Research fits perfectly into the narrative of an evolving society. Hospitals are part of this idea, a safe haven whence salvation may come. Science is the efficient tool for this endeavour.
Competition: Academic institutions function in very competitive markets, and their value is measured in the number of Nobel prize-winners, publications and their impact factors, attributed grants and, finally, their ranking in international comparisons. Efficient organisation of research and clinics becomes an ever more important focus of attention, which may make awareness of ethical limits difficult (ethical blindness). Top-down managerial methods with stringent procedures and total concentration on attaining goals can lead to blindness to sensitive contexts. The individual may become part of the process, and his/her individual responsibility may be weakened by participation in the perfectly working machinery. This may be enhanced by financial pressure and the compulsion to grow (big rather than very good), with the implicit need to acquire third-party funds, even of questionable provenance.
Similar pressures also exist on an individual level: competition with peers, competition for the next publication or co-authorship, positive media attention; in other words, again diminishing individual responsibility and awareness of fair limits. In parallel, the absence of sanctions has the same effect: no denouncements, no dismissals, no penalties; the transgressions are treated as trivial. Individually as well as institutionally, one can hide behind publication numbers, impact factors and rankings, and still leave a positive impression and track record.
Hierarchies: The organisational structure is traditionally very hierarchical; chief physicians / full professors (“patrons”) decide to a large extent on the future careers of their subordinate collaborators. The more publications, the better – whether they are true, improve patient care or add to the research enterprise in a sustained way matters little (“publish or perish”). Correspondingly, pressure from above is very effective.
Peer pressure: Top-down pressure is compounded by peers, who participate in the same “rat race”. Whistle-blowing is unacceptable. Difficult situations arise where social pressure makes one do as the others do, without reflection.
Societal institution: The search for new knowledge is an institution of our Western society; research enjoys an a priori bonus and penetrates every aspect of our life [24]. Therefore, research organisations are nearly immune to critical scrutiny – they possess the truth, or at least the tools to find it – and can maintain this immunity for a very long time in spite of negative evidence. This may explain the uncritical attitude of many: known cases are just exceptions, and there is no reason to look more closely at what happens.
Temporal dynamics: In the current gloomy economic situation, temporal dynamics have similar effects. Innovations (research and publications being part of them!) are one of the main tools for escaping the crisis and regaining growth. Therefore, adding to economic growth through innovation is regarded as only positive and not to be questioned, in spite of marginal efficacy or even negative effects.
Routines: Academic routines also play a role (possibly accentuated by the Bologna system, in which every step in academic training is sanctioned by an examination or an academic degree). The lifelong learning enterprise adds to the felt need of any curious, creative and ambitious individual (aren’t we all?) to take the next step forward, whatever the ethical costs.
(I adapted these aspects to our subject on the basis of a recent Coursera MOOC on “Unethical decision making in organizations” [25].)
In conclusion, multiple forces push researchers to behave as they do. The astonishing thing is that these observations have been reproduced (sic!) again and again for many years, and nothing seems to change. Again, we should redirect our focus – away from Big Pharma, a preferred target of journalists and politicians – to the medical community, universities, teaching hospitals and medical faculties. The reason is simply that in such interdependent relations active and passive ethical corruption cannot be separated. Pharma is regulated by law or regulates itself pre-emptively [26], but what about the research community? The following reflections in the media are not really a pleasure to read…
In summary, multiple forces, some going beyond simple conflicts of interest, push medical science further down the slippery slope.
Scientific successes, publications of “breakthrough” discoveries, awards and Nobel prizes are headline material; the public is eagerly waiting for scientific progress. Credibility of and confidence in scientific progress are among the most important social narratives in our knowledge society. They are part of the glue that makes it functional. The permeation of social activities with science is ubiquitous, from marketing to consumer decisions to political discussions. Ideologies and beliefs are put into question by asking for the scientific evidence. Truth is established by means of scientific evidence (until falsified by further knowledge). This holds especially true in the natural and biomedical sciences. New discoveries, if they are true, attract a lot of attention, and rightly so.
In recent decades, however, doubts concerning this narrative have emerged, and not without reason [27]. Not a day passes without reports in the international print or electronic media on plagiarism, multiple publications, falsifications, faked data (including abused/faked/bought peer review) and retractions of publications, even in serious scientific journals. They seriously degrade the reputation of medical research. A growing news market and blogs profit from these scandals [28]. Such reports are of course welcome media material (good news is no news), but they are corroborated by the rather scarce research on research misconduct [16, 17]. Perhaps as a consequence, corrective postpublication public peer review (the “online journal club”) has had growing success and puts authors, peer reviewers and publishers under pressure [29]. Sooner or later, public funding bodies may also react. One could argue that this mirrors the general weakening of trust in institutions in our postmodern “post-trust society” [30, 31], the soil in which scandal-seeking media thrive, transforming what was once a slow, considered and robust process of scientific publication into a high-speed news machine of questionable reliability. Media mechanisms are more and more predominant.
Of course we are not discussing the normal scientific process, but its perversion by multiple social influences and unethical individual acts. The accumulating evidence has led to multiple publications in the lay press [32–36]. Some openly question the validity of the research paradigm (was Popper wrong?) and the disappearing trust; others centre on the dominant influence of the pharmaceutical industry on the medical profession. The multiple mechanisms used to abuse clinical research in order to show new and old drugs in the best possible light, to (legitimately) maximise profits and to (not so legitimately) hide, whenever possible, undesirable side effects and absent gains in therapeutic efficacy have been meticulously explored and documented [37, 38]. The result has been termed “marketing-based and not evidence-based medicine” [39, 40]. The medical profession does not escape either. “High doses of medical corruption worldwide” is the headline of a Deutsche Welle online article [41], and the lead says: “…the question is not whether, but to what extent….” A cursory 1-hour visit to a major bookstore in Bern revealed a prominent display of no fewer than five books targeting doctors [42]. So, the media – print and internet – amply reflect the weakened reputation of our profession. The facts have been known for many years [43], slowly eroding confidence in medical science and Big Pharma [44]. There is no doubt that there is an urgent need to try to re-establish the credibility and integrity of medicine and research [45]. The repercussions potentially endanger the once so successful scientific model.
The research machine, however, turns at full speed; it is normal to pursue the goal of improving medicine, but sometimes the solidity of the evidence, the harm done and the money wasted are neglected. It is normal to pursue an academic career – even if only half of the truth is told – and to add to the “reputation” of an institution. Paradoxically, and in spite of abundant critical scientific and lay documents, the clinical doctor’s reputation and trustworthiness are, in most opinion polls, found in a leading position. This may be based on the special doctor-patient relationship, which would be undermined by any doubt.
In summary, public awareness of something seriously wrong with medical science is mounting on all levels of the modern multimedia society.
Publications from Anglo-Saxon countries predominate, but continental Europe and Switzerland show similar developments, as shown by examples from Germany [36] and – often treated with much discretion – Switzerland [46]. However, the resounding silence around the role of medical professionals and clinical researchers, as compared with the pharmaceutical industry, remains disturbing. Systematic surveillance or analysis of the research integrity of researchers and research institutions is lacking. Compare this to the American Office of Research Integrity (ORI) (with online research training clinics) [47], or the grant by the private MacArthur Foundation to Retraction Watch to establish a database of retracted papers [48]. There is, as yet, no analogous Swiss institution. One might be set up by the science academies or by the main fund distributors, the State Secretariat for Education, Research and Innovation (SERI) and the Swiss National Science Foundation (SNF). In fact, they have just published a call for submissions on research infrastructures, which also covers these aspects [49]. However, research institutions lag behind. Most seem to have no compulsory public declaration of researchers’ COIs, nor review of COIs by institutional review boards (IRBs), in contrast to American clinics [50]. I am not aware of any publicly accessible, critical analysis of the research output (financed to a large extent by public money). Potential transgressions remain in the background and have no or minimal consequences. The role of industry has, in contrast, been addressed by multiple measures, codified in legal terms and international agreements. This process is ongoing and has currently reached a new level with, on the one hand, the Physician Payments Sunshine Act in the USA [51] and analogous measures in Switzerland (Pharma-Kooperations-Kodex) [52], and, on the other hand, the worldwide clinical trial transparency process, with the compulsory registration of clinical trials in public registries to correct for the ubiquitous publication bias [53–55].
What could be done to reinforce the integrity of the research process and the role of the physicians and clinical researchers involved? Research ethics committees / institutional review boards scrutinise research proposals, but once a proposal is accepted there are few obstacles to abuse of the system. Distortions are still considered trivial offences by many and, until recently, did not prompt corrective measures, if they were detected at all. Why is that so, and can we do anything about it?
We are living in a “post-trust society” [31]; when trust is absent, the common reaction is to reinforce controls, mostly by asking for a transparent declaration of measurable, mostly financial, COIs. Examples are the compulsory COI rules of official agencies (Swissmedic, Finma, extraparliamentary commissions). Admittedly, many of the multiple interactions are complex, and complete transparency may remain an illusion. The declaration of COIs rests on the assumption that financial dependencies, once made transparent, are at least partially neutralised or will lead to corrective measures. Sometimes, however, disclosure has to be enforced, as with the details of a bank’s sponsoring contract for a university chair in economics, which were made public only after litigation by a journalist.
The Swiss Academy of Medical Sciences (SAMW) has elaborated, and the Swiss Medical Association (FMH) later officially adopted, guidelines on research integrity and on interactions with the pharmaceutical industry [56]. They essentially appeal to researchers to behave ethically according to the guidelines. In addition, chairs in medical ethics abound and postgraduate ethics degrees (MAE) can be obtained. So far, so good! However, whether these measures, which target mostly clinical doctors, reduce irreproducibility remains to be demonstrated.
Ioannidis again has given some valuable input on how to improve reproducibility, summarised here [57]. He suggests relying on methods that have been shown to improve reproducibility: push large-scale collaborative research (as in cancer and human immunodeficiency virus research), establish a replication culture, enforce complete trial registration including raw data and data sharing (see also [58]), adopt better statistical methods – more Bayesian and less frequentist statistics (see also [59]) – and improve and tighten peer review, reporting and dissemination of research results. He also points out the divergent interests of researchers in promoting publishable, fundable, translatable or profitable results, which may make this project difficult to realise. Importantly, he points to the wrong incentives mentioned above and to the lack of a reward system that “nudges” [60] researchers towards more reproducible research results [57].
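To illustrate the Bayesian point in concrete terms, the following sketch (again Python, purely illustrative; it uses the well-known −e·p·ln(p) bound on the Bayes factor as a stand-in, not the specific methods proposed in [57] or [59]) converts a p-value of 0.05 into an upper limit on the posterior probability that the tested effect is real, for two assumed prior probabilities.

import math

def max_bayes_factor_for_effect(p):
    """Upper bound on the Bayes factor for the alternative over the null,
    via the -e*p*ln(p) bound (valid for p < 1/e); an illustrative choice,
    not the specific method of [57] or [59]."""
    assert 0 < p < 1 / math.e
    return 1.0 / (-math.e * p * math.log(p))

def max_posterior_probability(p, prior_probability):
    """Largest posterior probability that the tested effect is real."""
    prior_odds = prior_probability / (1 - prior_probability)
    posterior_odds = max_bayes_factor_for_effect(p) * prior_odds
    return posterior_odds / (1 + posterior_odds)

print(max_posterior_probability(0.05, prior_probability=0.5))  # optimistic 50:50 prior: at most ~0.71
print(max_posterior_probability(0.05, prior_probability=0.1))  # long-shot hypothesis: at most ~0.21

Even under an optimistic 50:50 prior, p = 0.05 corresponds to at most about a 71% probability that the effect is real; for a long-shot hypothesis the figure drops to roughly 21%, which is why “significant” should not be read as “true”.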
This corresponds to no less than a gradual transformation of the current science culture, a long and arduous process of uncertain outcome and certain resistance by all who might lose something. It is only possible with strong and convincing leaders. Ideally, it should be initiated from within the system by medical faculties and large teaching hospitals, in discussion with the local research community. Some ideas as to the direction this change could take (as outlined by Ioannidis [57]) are summarised below and complemented by my own suggestions.
On a technical level, the statistical problems Ioannidis and others have stressed may be corrected by a joint effort of researchers and their supervisors, as well as journal editors and peer reviewers. Publishers now also react beyond the recommendations of the International Committee of Medical Journal Editors [61]. Nature has tightened its statistical review process, and others may follow [62]. In addition, new types of review process, once used only in physics, have appeared. The ubiquitous internet enables readers to analyse and comment on papers online after publication. It is not so rare for inconsistencies to be pointed out, which in turn might lead to retractions or corrections [28]. Such comments have also led to litigation by (dismissed!) authors against their critical peers. However, as pointed out by Bastian, such mechanisms need to be reinforced in order to foster a more critical discussion of the publication tsunami [63].
On an educational level, additional activities may durably change the basic attitudes of the physicians and researchers involved. Examples are courses for undergraduate/graduate/doctoral students as part of their training (if not already in place), with adequate preparation of the teaching medical/research faculty. Similarly, such training might be integrated into medical specialty (board) training, as are existing courses in communication skills. These, of course, would be targeted at physicians, not researchers. As an incentive for young researchers, confirmatory studies of previously published trials should be especially honoured by medical faculties, possibly in collaboration with publishers and editors. The journal that published the original trial could “promise” to publish confirmatory trials, e.g. in a dedicated section.
On an organisational level, one could imagine local units charged with supervising the integrity of in-house research activities (in-house “ORIs”). They could be attached to the dean’s office, local clinical trial units or hospital ethics committees. For instance, they would be in charge of establishing publicly accessible lists of researchers’ COIs. With some expertise in “change management” they could make a substantial contribution. Their activity should be supported by the local authorities, and they should have the right and power to sanction transgressing researchers.
On a political level, one could imagine modifications of relevant laws, linking financial support to research integrity.
“Big data” tools are available for longitudinal analysis of the proposed activities, but need to be adapted to the tasks:
‒ Existing compulsory clinical trial registries, to discover publication bias and nonpublication of clinical studies (ClinicalTrials.gov, WHO, CH); see the sketch after this list.
‒ The Physician Payments Sunshine Act in the USA and the Pharma-Kooperations-Kodex in Switzerland could serve to uncover excessive industry influence, but would also have to be developed or adapted for the collection of data from heterogeneous sources.
‒ Data from Retraction Watch might draw attention to questionable and hence retracted publications.
‒ The annual public reports of internal ORIs would give an in-house picture at large teaching hospitals, and their compilation a national survey.
‒ These data could be complemented by surveys of physician/researcher attitudes.
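As a concrete illustration of the first point above, the following minimal sketch (hypothetical throughout: the file name and the columns trial_id, completion_date and results_posted are assumptions for illustration and do not correspond to an actual registry export format) flags completed trials that have not posted results after a grace period – the raw material of publication-bias surveillance.

import csv
from datetime import date, datetime

def unreported_completed_trials(path, grace_period_days=365):
    """Return IDs of trials completed more than grace_period_days ago
    that still have no posted results (columns are hypothetical)."""
    flagged = []
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            completed = datetime.strptime(row["completion_date"], "%Y-%m-%d").date()
            overdue = (date.today() - completed).days > grace_period_days
            if overdue and row["results_posted"].strip().lower() != "yes":
                flagged.append(row["trial_id"])
    return flagged

# Example: list overdue, unreported trials from a (hypothetical) registry export
overdue_ids = unreported_completed_trials("registry_export.csv")
print(len(overdue_ids), "completed trials without posted results; first few:", overdue_ids[:5])

Run periodically over registry exports, such a script would provide the longitudinal signal referred to above; the real work lies in harmonising the heterogeneous data sources.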
Ideally, these measures should substantially mitigate the dire consequences of COIs and ethical blindness. Implementing them without a formal legal basis will be problematic. At the other extreme, should there be more stringent rules to protect whistle-blowers? What sanctions could offenders eventually face? There are suggestions to make research fraud a crime [64] and, in the light of ever more sophisticated ways of bypassing peer review, to introduce a retraction penalty [65]. Whether such repressive measures will lead to better science remains open.
As we are in a situation of impending system failure, we might as well dare to push these or similar measures. Otherwise Popper might be falsified by a social, not a scientific, process, as the Cassandra prophesied [32]. Or, to put it more prosaically, in the words of a Wall Street Journal blogger in the thread “Scientists’ elusive goal: reproducing study results” [66]: “When money and science compete, money wins”.
In summary, some more technical corrective measures may be easily implemented. However, to change an expanding and “successful” science culture that is currently destroying its own foundations will need a sustained effort by the medical and scientific community on all levels.
1 Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):e124. DOI: 10.1371/journal.pmed.0020124
2 Arrowsmith J. Phase II failures: 2008–2010. Nat Rev Drug Discov. 2011;10(5):328–9.
3 Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10(9):712–3.
4 Obrist R, Biollaz J. Klinische Forschung zwischen Industrie und Ärzteschaft. Schweiz Ärzteztg. 2009;90(41):1569–71. German.
5 Ebrahim S, Sohani ZN, Montoya L, Agarwal A, Thorlund K, Mills EJ, et al. Reanalyses of Randomized Clinical Trial Data. JAMA. 2014;312(10):1024–32. doi:10.1001/jama.2014.9646.
6 Lundberg GD. II. Perspective from the Editor of JAMA, The Journal of the American Medical Association. Bull Med Libr Assoc. 1992;80(2):110–4.
7 Garfield E. In truth, the “flood” of scientific literature is only a myth. Scientist. 1991;Sept. 2:11–25; cited by Lundberg GD [6].
8 Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JPA, et al. Biomedical research: increasing value, reducing waste [Comment]. Lancet. 2014;383:101–4.
9 Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence [Viewpoint]. Lancet. 2009;374:86–9.
10 ICH GxP: International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). ICH Harmonised Tripartite Guideline for Good Clinical Practice E6(R1) (1996). http://www.ich.org/products/guidelines/efficacy/efficacy-single/article/good-clinical-practice.html
11 Loder E, Godlee F, Barbour V, Winker M, PLOS Medicine editors. Restoring the integrity of the clinical trial evidence base. Calling researchers and editors to help restore invisible and abandoned trials [editorial]. BMJ. 2013;346:f3601 (epub 2013 June 13). DOI: 10.1136/bmj.f3601
12 http://www.alltrials.net.
13 Conflict of Interest in Medical Research, Education, and Practice; Bernard Lo and Marilyn J. Field, Editors; Committee on Conflict of Interest in Medical Research, Education, and Practice; Institute of Medicine of the National Academies (2009) http://www.nap.edu
14 WMA Declaration of Helsinki Ethical Principles for Medical Research involving Human Subjects. Adopted by the 18th WMA General Assembly, Helsinki, Finland, June 1964, and last amended 2013. http://www.wma.net
15 Stamatakis E, Weiler R, Ioannidis JPA. Undue industry influences that distort healthcare research, strategy, expenditure and practice: a review. Eur J Clin Invest. 2013;43(5):469–75.
16 Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009;4(5):e5738. DOI: 10.1371/journal.pone.0005738
17 Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. PNAS early edition 2012; http://www.pnas.org/cgi/doi/10.1073/pnas.1212247109
18 Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: updated review of related biases. Health Technol Assess. 2010;14(8):ii, ix–x. DOI: 10.3310/hta14080
19 Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals. PLoS Medicine. 2013;10(12):e1001566. DOI:10.1371/journal.pmed.1001566
20 Choudry NK, Stelfox HT, Detsky AS. Relationships between authors of clinical practice guidelines and the pharmaceutical industry. JAMA. 2002;287:612–7.
21 Niederer A. Übertreibungen in universitären Pressemitteilungen. NZZ, 18.12.2014; based on Sumner P, Vivian-Griffiths S, Boivin J, et al. The association between exaggeration in health related science news and academic press releases: retrospective observational study. BMJ. 2014;349:g7015. DOI: 10.1136/bmj.g7015. German.
22 Taleb NN, Held S. Antifragilität: Anleitung für eine Welt, die wir nicht verstehen. Albrecht Knaus Verlag; 2013. p. 541. German.
23 Palazzo G, Krings F, Hoffrage U. Ethical Blindness. J Bus Ethics. 2012;109(3):323–38.
24 Hoffmann, Christoph. Die Arbeit der Wissenschaften. (2013) Zürich-Berlin: diaphanes. German.
25 Palazzo G, Hoffrage U. Unethical decision making in organizations. (2014 MOOC course) http://www.coursera.com .
26 Verhaltenskodex der pharmazeutischen Industrie in der Schweiz über die Zusammenarbeit mit Fachkreisen und Patientenorganisationen (Pharma-Kooperations-Kodex); ScienceIndustries, Interpharma, VIPS (2013). http://www.scienceindustries.ch/_file/12857/pharma-kooperations-kodex-2013–d.pdf. German.
27 Lotter W. Deal? Brand eins, Thema «Vertrauen», 2014;(10)36–44. German.
28 http://retractionwatch.com.
29 https://pubpeer.com.
30 Frevert U. Vertrauensfragen: Eine Obsession der Moderne. Munich: C. H. Beck; 2013. German.
31 Ragnar E. Löfstedt. Risk Management in a Post-Trust Society. Basingstoke: Palgrave MacMillan; 2005.
32 Lehrer J. The Truth Wears Off. Is there something wrong with the scientific method? The New Yorker 2010; Dec. 13. http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off
33 Freedman DH. Lies, Damned Lies, and Medical Science. The Atlantic 2010;(11) http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/
34 Angell M. The truth about the Drug Companies, How they deceive us and what to do about it. New York: Random House; 2004.
35 Angell M. Drug Companies & Doctors: A Story of Corruption. The New York Review of Books; 2009, January 15.
36 Weiss B. Dr. Schwindel und Prof. Schmu. Brand eins Thema «Vertrauen». 2014;10:118–25. German.
37 Goldacre B. Bad Pharma. London: Fourth Estate; 2012.
38 Gøtzsche PC. Deadly medicines and organized crime. London, New York: Radcliffe Publishing; 2013.
39 Spielmans GI, Parry PI. From Evidence-based Medicine to Marketing-based Medicine: Evidence from Internal Industry Documents. J Bioeth Inq. 2010;7(1):13–29.
40 Deyo RA, Patrick DL. Hope or Hype. The obsession with medical advances and the high cost of false promises. New York: AMACOM; 2005.
41 DW Top Stories: World. High doses of medical corruption worldwide. Health, 6 Jan. 2013. http://www.dw.de/high-doses-of-medical-corruption-worldwide/a-16501875
42 Exhibit, Stauffacher book shop: on a single visit on 30 June 2014 to a major book shop in Bern, I found a prominent exhibit of 5 critical books about doctors/dentists: Werner Bartens, Das sieht aber gar nicht gut aus; Gunter Frank, Gebrauchsanweisung für Ihren Arzt; Michael Imhof, Eidesbruch; Peter Volmer, Darfs noch eine Hüfte sein?; Tanja Wold, Murks im Mund.
43 Ankier SI. Dishonesty, Misconduct and Fraud in Clinical Research: an International Problem. J Int Med Res. 2002;30:357–65.
44 URLs: http://www.pmlive.com/pharma_news/pharmas_reputation_stays_low_in_2013_543607 or http://www.patient-view.com/bull-corp-reputation.html
45 Geyman J. The Corrosion of Medicine: Can the profession reclaim its moral legacy? Monroe, ME: Common Courage Press; 2008.
46 Malka S, Gregori M. Vernebelung: Wie die Tabakindustrie die Wissenschaft kauft. Zürich: Orell Füssli Verlag AG; 2005. German.
47 Office of Research Integrity (ORI) of the Department of Health and Human Services; http://ori.hhs.gov/
48 Raeburn P; Retraction Watch awarded a two-year, $400,000 grant from the MacArthur Foundation. Knight Science Journalism at MIT; KSJ Tracker December 15, 2014; https://ksj.mit.edu/tracker/2014/12/retraction-watch-awarded-a-two-year-400000–grant-from-the-macarthur-foundation/
49 Swiss road map for research infrastructures. State Secretariat for Education, Research and Innovation (SERI); http://www.sbfi.admin.ch/themen/01367/02040/index.html?lang=en
50 Steinbrook R. Online disclosure of physician–industry relationships [Perspective]. N Engl J Med. 2009;360(4):325–7.
51 Agrawal S, Brennan N, Budetti P. The Sunshine Act — Effects on Physicians [Perspective]. N Engl J Med. 2013;368:2054–7. DOI: 10.1056/NEJMp1303523.
52 Verhaltenskodex der pharmazeutischen Industrie in der Schweiz über die Zusammenarbeit mit Fachkreisen und Patientenorganisationen (Pharma-Kooperations-Kodex) vom 6. September 2013 (Stand: 1. Mai 2014) ScienceIndustries, Interpharma, Vips. German.
53 USA: http://www.clinicaltrials.gov.
54 WHO: http://www.who.int/ictrp/en/.
55 Switzerland: http://www.kofam.ch/en/swiss-clinical-trials-portal.html
56 Scientific Integrity (2008); Collaboration between the medical profession and industry (2013); http://www.samw.ch
57 Ioannidis JPA. How to make more published research true. PloS Med. 2014;11(10):1–6; e1001747. DOI: 10.1371/journal.pmed. 1001747.
58 Olson S, Downey AS (rapporteurs); Sharing clinical research Data. Workshop summary. IOM, http://www.nap.edu
59 Johnson VE. Revised standards for statistical evidence. PNAS. 2013;110:19313–7.
60 Thaler RH, Sunstein CR. Nudge: Improving decisions about health, wealth and happiness. London: Penguin; 2009.
61 ICMJE. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. 2013. http://www.icmje.org/recommendations/
62 Nature Announcement: Reducing our irreproducibility. Nature. 2013;496:398.
63 Bastian H. A stronger post-publication culture is needed for better science. PLOS Med. 2014;11:e1001772.
64 Silverman E. Should research fraud be a crime? A reader poll. The Wall Street Journal Pharmalot blog, 2014, July 16. http://blogs.wsj.com/pharmalot/2014/07/16/should-research-fraud-be-a-crime-a-reader-poll/
65 Retractionwatch.com. Is it time for a retraction penalty? (31 comments!) Retractionwatch.com/2014/09/18/is-it
66 Kullmann B in the Wall Street Journal blog thread “Scientists’ Elusive Goal: Reproducing Study Results.”
Disclosures: No financial support and no other potential conflict of interest relevant to this article was reported.