Indicators of Poor Scientific Reasoning

The following have been proposed as indicators of poor scientific reasoning. This information was gathered from Reference.com.

Use of vague, exaggerated or untestable claims
  • Assertion of scientific claims that are vague rather than precise, and that lack specific measurements.
  • Failure to make use of operational definitions (i.e. publicly accessible definitions of the variables, terms, or objects of interest so that persons other than the definer can independently measure or test them). (See also: Reproducibility)
  • Failure to make reasonable use of the principle of parsimony, i.e. failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible (see: Occam’s Razor)
  • Use of obscurantist language, and misuse of apparently technical jargon in an effort to give claims the superficial trappings of science.
  • Lack of boundary conditions: Most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.
  • Lack of effective controls, such as placebo and double-blind, in experimental design (see Scientific control; a minimal simulation follows this list).
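
To make the value of a control concrete, the following minimal Python sketch (with invented numbers, not data from any real study) simulates a trial in which every participant improves somewhat regardless of treatment. Without a control arm, the naive before-and-after change credits all of that improvement to the treatment; a placebo-controlled comparison recovers the true effect.

    import random
    import statistics

    random.seed(0)

    PLACEBO_GAIN = 5.0   # improvement everyone shows, treated or not (assumed)
    TRUE_EFFECT = 2.0    # the treatment's actual added benefit (assumed)
    N = 1000             # participants per arm

    def improvement(treated: bool) -> float:
        """Simulated change in one participant's score over the trial."""
        noise = random.gauss(0.0, 3.0)
        return PLACEBO_GAIN + (TRUE_EFFECT if treated else 0.0) + noise

    treatment_arm = [improvement(True) for _ in range(N)]
    control_arm = [improvement(False) for _ in range(N)]

    # No control: all improvement is credited to the treatment (about 7.0).
    uncontrolled = statistics.mean(treatment_arm)
    # Placebo control: the shared placebo gain cancels out (about 2.0).
    controlled = statistics.mean(treatment_arm) - statistics.mean(control_arm)

    print(f"estimate without a control arm: {uncontrolled:.2f}")
    print(f"estimate with a placebo arm:    {controlled:.2f} (true effect: {TRUE_EFFECT})")

Blinding guards against the related risk that participants or experimenters who know the arm assignments behave differently.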

Over-reliance on confirmation rather than refutation

  • An assertion should allow the logical possibility that it can be shown false by an observation or a physical experiment (see also: falsifiability)
  • Assertion of claims that a theory predicts something that it has not been shown to predict.
  • Assertion that claims which have not been proven false must be true, and vice versa (see: Argument from ignorance)
  • Over-reliance on testimonial, anecdotal evidence or personal experience. This evidence may be useful for the context of discovery (i.e. hypothesis generation) but should not be used in the context of justification (e.g. statistical hypothesis testing).
  • Pseudoscience often presents data that seems to support its claims while suppressing or refusing to consider data that conflict with its claims. This is an example of selection bias, a distortion of evidence or data that arises from the way the data are collected. It is sometimes referred to as the selection effect; a minimal simulation of it follows this list.
  • Reversed burden of proof. In science, the burden of proof rests on those making a claim, not on the critic. “Pseudoscientific” arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g. an assertion regarding the efficacy of a novel therapeutic technique) is false. It is essentially impossible to prove a universal negative, so this tactic incorrectly places the burden of proof on the skeptic rather than the claimant.
  • Appeals to holism as opposed to reductionism: Proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the “mantra of holism” to explain negative findings.
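
The selection effect mentioned above can be demonstrated in a few lines of Python (a sketch using synthetic noise, not a real dataset): the measurements contain no effect at all, yet reporting only the observations that favour the claim manufactures a result that a naive one-sample significance test finds overwhelming.

    import math
    import random
    import statistics

    random.seed(1)

    # Null world: there is no effect; every measurement is mean-zero noise.
    measurements = [random.gauss(0.0, 1.0) for _ in range(10_000)]

    def z_score(sample: list[float]) -> float:
        """One-sample z statistic for 'the mean differs from zero'."""
        return statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(len(sample)))

    # Honest analysis: all the data, chosen before looking at the results.
    print(f"all data:      mean {statistics.mean(measurements):+.3f}, z = {z_score(measurements):+.1f}")

    # Selection bias: keep only the observations that support the claim.
    cherry_picked = [x for x in measurements if x > 0]
    print(f"cherry-picked: mean {statistics.mean(cherry_picked):+.3f}, z = {z_score(cherry_picked):+.1f}")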

Lack of openness to testing by other experts

  • Evasion of peer review before publicizing results (called “science by press conference”). Some proponents of theories that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.
  • Some agencies, institutions, and publications that fund scientific research require authors to share data so that others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness (a minimal sketch of such a record follows this list).
  • Assertion of claims of secrecy or proprietary knowledge in response to requests for review of data or methodology.
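
As a simplified illustration of the openness being asked for here, the sketch below (a hypothetical analysis, with invented parameter names) shows the kind of record that can accompany a published result: the random seed, the exact analysis parameters, and a checksum of the input data, so that an independent researcher can confirm they are rerunning the same computation on the same inputs.

    import hashlib
    import json
    import random

    # Illustrative settings; in a real study these would be the actual
    # analysis parameters, published alongside the results.
    PARAMS = {"seed": 42, "n_samples": 1000, "threshold": 0.5}

    random.seed(PARAMS["seed"])
    data = [random.random() for _ in range(PARAMS["n_samples"])]

    # The "analysis": fraction of observations above the threshold.
    result = sum(1 for x in data if x > PARAMS["threshold"]) / len(data)

    # A checksum of the exact input data ties the reported result to one
    # dataset, so reviewers can confirm they are testing the same inputs.
    digest = hashlib.sha256(json.dumps(data).encode()).hexdigest()[:16]

    print(json.dumps({"params": PARAMS, "input_sha256": digest, "result": result}))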

Lack of progress

  • Failure to progress towards additional evidence for the claims being made. Terence Hines has identified astrology as a subject that has changed very little in the past two millennia. (see also: Scientific progress)
  • Lack of self-correction: scientific research programmes make mistakes, but they tend to eliminate these errors over time. By contrast, theories may be accused of being pseudoscientific because they have remained unaltered despite contradictory evidence. The anthology Scientists Confront Velikovsky (Cornell University Press, 1976) delves into these features in some detail, as does the work of Thomas Kuhn, e.g. The Structure of Scientific Revolutions (1962), which also discusses some of the items on the list of characteristics of pseudoscience.

Personalization of issues

  • Tight social groups and granfalloons, authoritarian personality, suppression of dissent, and groupthink can enhance the adoption of beliefs that have no rational basis. In attempting to confirm their beliefs, the group tends to identify their critics as enemies.
  • Assertion of claims of a conspiracy on the part of the scientific community to suppress the results.
  • Attacking the motives or character of anyone who questions the claims (see Ad hominem fallacy).

Use of misleading language

  • Creating scientific-sounding terms in order to add weight to claims and persuade non-experts to believe statements that may be false or meaningless. For example, a long-standing hoax refers to water as dihydrogen monoxide (DHMO) and describes it as the main constituent in most poisonous solutions to show how easily the general public can be misled.
  • Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline.