Hoopoe
Senior Member (Voting Rights)
Arguing that personal anecdotes should be considered more reliable than the experiment discussed in the very paper that reported the expected null result. No other field of science works this way. In fact, this is a literal feature of pseudoscience and belief systems.
The Wikipedia page has a list of possible indicators of pseudoscience. It is astonishing how well it describes the characteristics of the BPS research we discuss on this forum. I have highlighted some that I think are particularly relevant.
A topic, practice, or body of knowledge might reasonably be termed pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms.[1]
Use of vague, exaggerated or untestable claims
- Assertion of scientific claims that are vague rather than precise, and that lack specific measurements.[37]
- Assertion of a claim with little or no explanatory power.[38]
- Failure to make use of operational definitions (i.e., publicly accessible definitions of the variables, terms, or objects of interest so that persons other than the definer can measure or test them independently)[Note 4] (See also: Reproducibility).
- Failure to make reasonable use of the principle of parsimony, i.e., failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible (see: Occam's razor).[40]
- Use of obscurantist language, and use of apparently technical jargon in an effort to give claims the superficial trappings of science.
- Lack of boundary conditions: Most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.[41]
- Lack of effective controls, such as placebo and double-blind, in experimental design.
- Lack of understanding of basic and established principles of physics and engineering.[42]
- Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment (see also: Falsifiability).[19][43]
- Assertion of claims that a theory predicts something that it has not been shown to predict.[44] Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience" (e.g., ignoratio elenchi).[45]
- Assertion that claims which have not been proven false must therefore be true, and vice versa (see: Argument from ignorance).[46]
- Over-reliance on testimonial, anecdotal evidence, or personal experience: This evidence may be useful for the context of discovery (i.e., hypothesis generation), but should not be used in the context of justification (e.g., statistical hypothesis testing).[47]
- Presentation of data that seems to support claims while suppressing or refusing to consider data that conflict with those claims.[28] This is an example of selection bias, a distortion of evidence or data that arises from the way that the data are collected. It is sometimes referred to as the selection effect.
- Elevating excessive or untested claims that have previously been published elsewhere to the status of fact; an accumulation of such uncritical secondary reports, which do not otherwise contribute their own empirical investigation, is called the Woozle effect.[48]
- Reversed burden of proof: science places the burden of proof on those making a claim, not on the critic. "Pseudoscientific" arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g., an assertion regarding the efficacy of a novel therapeutic technique) is false. It is essentially impossible to prove a universal negative, so this tactic incorrectly places the burden of proof on the skeptic rather than on the claimant.[49]
- Appeals to holism as opposed to reductionism: proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the "mantra of holism" to dismiss negative findings.[50]
- Evasion of peer review before publicizing results (termed "science by press conference"):[49][51][Note 5] Some proponents of ideas that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.[50]
- Some agencies, institutions, and publications that fund scientific research require authors to share data so others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness.[52]
- Appealing to the need for secrecy or proprietary knowledge when an independent review of data or methodology is requested.[52]
- Substantive debate on the evidence by knowledgeable proponents of all viewpoints is not encouraged.[53]
- Failure to progress towards additional evidence of its claims.[43][Note 3] Terence Hines has identified astrology as a subject that has changed very little in the past two millennia.[41][23] (see also: Scientific progress)
- Lack of self-correction: scientific research programmes make mistakes, but they tend to reduce these errors over time.[54] By contrast, ideas may be regarded as pseudoscientific because they have remained unaltered despite contradictory evidence. The work Scientists Confront Velikovsky (1976, Cornell University) delves into these features in some detail, as does the work of Thomas Kuhn, e.g. The Structure of Scientific Revolutions (1962), which also discusses some of the items on this list of characteristics of pseudoscience.
- Statistical significance of supporting experimental results does not improve over time and usually remains close to the cutoff for statistical significance. Normally, experimental techniques improve or the experiments are repeated, and this gives ever stronger evidence. If statistical significance does not improve, this typically shows the experiments have just been repeated until a success occurs due to chance variations.
- Tight social groups, authoritarian personalities, suppression of dissent, and groupthink can enhance the adoption of beliefs that have no rational basis. In attempting to confirm their beliefs, the group tends to identify their critics as enemies.[55]
- Assertion of claims of a conspiracy on the part of the scientific community to suppress the results.[Note 6]
- Attacking the motives, character, morality, or competence of anyone who questions the claims (see Ad hominem fallacy).[55][Note 7]
- Creating scientific-sounding terms to persuade nonexperts to believe statements that may be false or meaningless: For example, a long-standing hoax refers to water by the rarely used formal name "dihydrogen monoxide" and describes it as the main constituent in most poisonous solutions to show how easily the general public can be misled.
- Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline.
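The point in the list about statistical significance hovering near the cutoff can be illustrated with a short simulation (a hypothetical sketch of my own, not from the quoted text; the sample size and seed are arbitrary): when there is no real effect at all, simply repeating an experiment enough times will eventually produce p < 0.05 by chance alone.

```python
import math
import random
import statistics

def one_sample_p(sample):
    """Two-sided one-sample test of mean 0, using a normal
    approximation to the t distribution for simplicity."""
    n = len(sample)
    t = statistics.fmean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(1)  # arbitrary seed for reproducibility
attempts = 0
while True:
    attempts += 1
    # draw a fresh "experiment" of 30 observations with NO real effect
    sample = [random.gauss(0, 1) for _ in range(30)]
    if one_sample_p(sample) < 0.05:
        break  # a "significant" result, purely by chance

print(f"'significant' result found on attempt {attempts}")
```

Each repetition has roughly a 5% chance of crossing the threshold, so a "success" is guaranteed eventually; the resulting p-value will typically sit just under the cutoff, exactly the pattern the list describes.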