People make all statistics, even the most official ones. Misinforming people through the use of statistical data is often called statistical manipulation (Huff, 100). For example, if a campaign manager tells a candidate that 80% of his supporters like coffee, the candidate might claim that 80% of coffee drinkers support him. The two statements reverse the conditional: the share of supporters who drink coffee says nothing, by itself, about the share of coffee drinkers who are supporters.
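The reversal above can be made concrete with Bayes' rule. All the numbers below are hypothetical, chosen only to show that the two conditional probabilities can differ sharply:

```python
# Hypothetical figures: confusing P(likes coffee | supporter) with
# P(supporter | likes coffee). Bayes' rule shows they need not match.

p_supporter = 0.40                # assumed share of the electorate backing the candidate
p_coffee_given_supporter = 0.80   # "80% of my supporters like coffee"
p_coffee_overall = 0.60           # assumed share of all voters who like coffee

# Bayes' rule: P(supporter | likes coffee)
p_supporter_given_coffee = p_coffee_given_supporter * p_supporter / p_coffee_overall

print(round(p_supporter_given_coffee, 3))
```

Under these assumed numbers, only about 53% of coffee drinkers are supporters, even though 80% of supporters drink coffee.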

In short, statistics can be manipulated in order to deceive people. Journalists in particular should be aware of this practice so they can report facts accurately.

Statistics, when handled incorrectly, can lead a casual viewer to believe something other than what the data reflect. In that sense, a statistical argument is misused when it asserts an untruth. In some circumstances the misuse is unintentional: a statistic may simply be misinterpreted, or a relevant one omitted.

Abuse of statistics occurs when someone uses statistical information in an inappropriate way, or fails to use it at all. The term is often used interchangeably with "misuse of evidence", but it is broader, because it covers both deliberate and accidental errors. Statistics can be marshalled to support any argument before courts or administrative bodies, and they are relied on by economists, political scientists, business people, and many others who need accurate information about reality. Misuse of statistics is therefore one of the most common types of error committed by professionals in many fields.

Misuse of statistics can be intentional or unintentional. Intentional misuse means presenting misleading or false data in an attempt to prove a point, for example by omitting relevant facts or analyzing the data incorrectly; when done by researchers it is sometimes called "scientific misconduct". Unintentional misuse covers mistakes made during statistical analysis or the presentation of results, such as incorrect assumptions, calculation errors, or omissions. When such errors are discovered after the fact, it may turn out that erroneous conclusions were drawn from the data.
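A minimal sketch, using entirely made-up trial data, of how omitting relevant facts changes a conclusion: reporting only the favorable observations flips the sign of the average effect.

```python
# Made-up trial results: change in a symptom score (negative = improvement).
# Cherry-picking the "good" cases reverses the apparent conclusion.
from statistics import mean

results = [-3, -2, -1, 0, 1, 4, 5, 6]

full_mean = mean(results)                          # honest analysis of all data
cherry_picked = mean([r for r in results if r < 0])  # only the improvements

print(full_mean)      # positive: on average, patients got worse
print(cherry_picked)  # negative: "treatment works", if you hide the rest
```

The full data set shows a mean worsening, while the selectively reported subset shows an improvement; the data are identical, only the omission differs.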

In other cases, the misuse is deliberate and benefits the perpetrator. A statistical fallacy occurs when the statistical reasoning involved is wrong or incorrectly applied. For example, observing that some surveyed women preferred pink packaging and concluding that all women like pink products is a hasty generalization from a small sample to an entire population.

Statistical arguments are often misunderstood. It is important to understand that a statistical argument is different from a mathematical argument. In a statistical argument, evidence from multiple observations of a phenomenon is combined to reach a conclusion, always with some uncertainty, about the underlying population from which the samples were drawn. In a mathematical argument, by contrast, the conclusion follows with certainty from the stated premises; no sampling is involved. For example, suppose a researcher wants to know whether men and women taste food differently. She conducts a simple tasting experiment with 20 people (10 men, 10 women) and observes that, in her sample, the men preferred spicier flavors and the women sweeter ones. Concluding from this alone that men and women in general taste food differently would be incorrect: the experiment was not designed to answer that broader question, and a sample this small cannot distinguish a real population difference from noise. Statistical arguments can be misused intentionally or unintentionally. Those who use them honestly intend to produce valid conclusions, but they may still misuse data by ignoring relevant facts or overgeneralizing results.
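A small simulation (illustrative only, not a model of the study above) shows why a gap in a 10-versus-10 sample is weak evidence: even when the two groups come from the same population, their sample means often separate noticeably by chance alone.

```python
# With only 10 people per group and NO true difference, sample means
# still separate noticeably quite often: small-sample gaps are cheap.
import random
from statistics import mean

random.seed(0)

def gap_of_null_experiment(n=10):
    # Both "groups" are drawn from the same distribution: any gap is noise.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return abs(mean(a) - mean(b))

trials = 10_000
big_gaps = sum(gap_of_null_experiment() > 0.5 for _ in range(trials))
print(f"{big_gaps / trials:.0%} of null experiments show a gap over 0.5 SD")
```

A substantial fraction of these no-difference experiments still show a gap of half a standard deviation, which is why a single small study cannot settle a population-level question.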

The misleading-statistics trap can be extremely harmful to the pursuit of knowledge. In medical science, for example, correcting an untruth may take decades and cost lives. In politics, correcting the misinformation that surrounds many important issues can be nearly impossible.

Here are two examples of how statistics have been used to support views that were not actually reflected by the data:

In the early 20th century, American farmers applied chemical insecticides to their crops, and some insect populations became resistant to these chemicals. To keep the resistant pests in check, farmers turned to ever more powerful chemicals, which in turn selected for still greater resistance; this escalating cycle is sometimes called "the pest control paradox". Scientists could have used statistical analysis to establish how effective crop protection agents really were, but that would have required examining a large number of cases, which was difficult at the time because there were no reliable ways of testing insecticide effectiveness on a large scale, so they did not. They did, however, find evidence supporting the idea that pests were becoming resistant to these agents.

In the mid-20th century, scientists began developing drugs to treat illnesses. Some of these drugs worked very well, while others had little or no effect. To decide which candidates to develop further, researchers needed a way to identify which ones were likely to fail.

Statistical abuses tend to follow recognizable patterns of incorrect statistical analysis. They are connected in many ways to data quality, statistical methodology, and interpretation. Misuse can also arise from erroneous analysis, which leads to bad judgments and failed strategies. The following are examples of common statistical blunders:

Using significance tests as discovery tools. Significance tests are designed to check a hypothesis specified in advance, not to trawl a data set for patterns. If many comparisons are run and only the "significant" ones are reported, some apparent discoveries will be pure chance; this is known as the multiple comparisons problem. For example, if you want to know whether men and women differ in salary, state that single hypothesis and test it; do not test dozens of subgroups and report whichever comparison happens to clear p < 0.05.
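The trap above can be sketched directly. This simulation tests 20 invented variables that are pure noise; to keep it dependency-free, it uses a simplified two-sample z-test with known unit variance rather than a full t-test:

```python
# Multiple-comparisons sketch: 20 null comparisons, some of which will
# look "significant" at p < 0.05 just by chance.
import math
import random

random.seed(1)
N = 100  # observations per group

def z_test_p_value(a, b):
    # Two-sample z-test assuming known unit variance (a simplification).
    z = (sum(a) / len(a) - sum(b) / len(b)) / math.sqrt(2 / len(a))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# 20 "variables", none of which actually differs between the groups.
p_values = []
for _ in range(20):
    group_a = [random.gauss(0, 1) for _ in range(N)]
    group_b = [random.gauss(0, 1) for _ in range(N)]
    p_values.append(z_test_p_value(group_a, group_b))

false_discoveries = sum(p < 0.05 for p in p_values)
print(f"{false_discoveries} of 20 null comparisons came out 'significant'")
```

On average one in twenty null comparisons clears the 0.05 threshold, so reporting only the "hits" manufactures discoveries out of noise.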

Misinterpreting correlation as causation. Correlation does not imply causation: two quantities can rise and fall together without one causing the other, often because a third factor drives both. For example, ice-cream sales and drowning deaths both rise in summer, but ice cream does not cause drowning; hot weather increases both. Establishing causality requires a genuine cause-and-effect relationship, something that triggers one event and thereby leads to the other.
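A toy confounder sketch, with an invented data-generating process: temperature drives both series, so they correlate strongly even though neither causes the other.

```python
# Confounding demo: "ice_cream" and "drownings" are each driven by
# "temperature" but have no direct link, yet they correlate strongly.
import random

random.seed(2)

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

temperature = [random.uniform(10, 35) for _ in range(500)]  # the confounder
ice_cream = [t + random.gauss(0, 3) for t in temperature]   # caused by heat
drownings = [t + random.gauss(0, 3) for t in temperature]   # also caused by heat

print(round(pearson(ice_cream, drownings), 2))  # strongly positive
```

Controlling for temperature would make the apparent link between the two outcome series vanish, which is exactly what a causal reading of the raw correlation misses.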

Failing to check assumptions before using statistics. Not all statistics are created equal. Many procedures are valid only under certain assumptions, such as normally distributed data, independent observations, or equal variances, and if those assumptions are not met, the results may be invalid.
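One concrete illustration of an assumption failing: the familiar rule of thumb that about 68% of values fall within one standard deviation of the mean holds for normal data but not for skewed data, so it must not be applied before checking the distribution.

```python
# The "68% within one SD" rule assumes normality; on skewed
# (here, exponential) data it gives a noticeably different answer.
import random
from statistics import mean, pstdev

random.seed(3)

normal_data = [random.gauss(0, 1) for _ in range(10_000)]
skewed_data = [random.expovariate(1.0) for _ in range(10_000)]

def share_within_one_sd(data):
    m, s = mean(data), pstdev(data)
    return sum(m - s <= x <= m + s for x in data) / len(data)

print(f"normal: {share_within_one_sd(normal_data):.0%}")  # close to 68%
print(f"skewed: {share_within_one_sd(skewed_data):.0%}")  # well above 68%
```

For the exponential sample, roughly 86% of values fall inside one standard deviation, so any conclusion built on the 68% figure would be wrong for such data.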