p-hacking 

Manipulating statistics so that the desired outcome attains "statistical significance", usually for the benefit of the study's sponsors. Typically done by choosing significance thresholds after the fact, or by simply reporting results as conclusive regardless of the calculated p-value. It depends on the public's general lack of understanding of statistical measures and the press's failure to report the details.

Also the basis of nutrition research.
Study: "Soy protein was shown to reduce cancer rates in rats with p < 1.0" (effectively random data)

Next day's headline: "Soy Cures Cancer"

Sales soar 1000%

p-hacking = Profit.
p-hacking by Cortical Vortex June 12, 2015
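The mechanism behind the definition above can be sketched in a few lines of Python: under a true null hypothesis, p-values are uniformly distributed on [0, 1], so running enough tests and reporting only the best one almost guarantees a "significant" finding. The number of tests and trial counts here are illustrative assumptions, not from any real study.

```python
import random

random.seed(42)

def p_hacked_study(n_tests=20):
    """Run n_tests independent tests on pure noise and report only the
    smallest p-value. Under a true null, each p-value is uniform on [0, 1]."""
    return min(random.random() for _ in range(n_tests))

# How often does a study of pure noise yield a reportable "significant" result?
trials = 10_000
hacked_rate = sum(p_hacked_study() < 0.05 for _ in range(trials)) / trials
print(f"null studies reported as significant: {hacked_rate:.0%}")
```

With 20 tests the chance that at least one falls below 0.05 by luck alone is 1 - 0.95^20, roughly 64%, versus the advertised 5% false-positive rate.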
Related Words

Reverse P-Hacking

A statistical manipulation where instead of tweaking data to achieve p-values below 0.05, the researcher sets an artificially high significance threshold (e.g., p < 0.01) and then selectively reports only results that meet that stricter cut‑off, ignoring equally valid findings that fall just above. The goal is to appear more rigorous while actually discarding meaningful results. Reverse p‑hacking can also involve running multiple analyses and reporting only those that fail to reach significance, to support a null hypothesis. It’s a form of hidden selective reporting that distorts the evidence base, often used to dismiss real effects.
Reverse P-Hacking Example: “He ran five different models, then reported only the one where p = 0.08 as ‘insignificant’ – reverse p‑hacking, using a stricter threshold to bury a genuine trend.”
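As a minimal sketch (the model names and p-values are invented for illustration), the trick is nothing more than swapping in a stricter alpha after seeing the data, so that conventionally significant results can be labelled null:

```python
# Hypothetical p-values from several analyses of the same effect
results = {"model_a": 0.008, "model_b": 0.030, "model_c": 0.080}

ALPHA_STRICT = 0.01  # artificially strict cutoff, chosen after seeing the data

# Label everything above the strict cutoff "not significant"
verdicts = {name: ("significant" if p < ALPHA_STRICT else "not significant")
            for name, p in results.items()}
print(verdicts)
```

Here model_b (p = 0.03) would pass the conventional 0.05 threshold, but the stricter cutoff lets it be buried as "not significant".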

Inverted P-Hacking

A practice where a researcher designs a study or chooses a statistical method specifically to push p‑values as high as possible, usually to support a null hypothesis or to discredit an existing finding. Instead of seeking significance, they seek non‑significance by manipulating sample sizes, outlier removal, or covariate selection. Inverted p‑hacking is common in industry-funded research or ideological debates where the desired outcome is “no evidence of effect.” It operates under the same selective reporting logic as standard p‑hacking but in the opposite direction.
Inverted P-Hacking Example: “The company’s study added a dozen irrelevant variables until the true effect of their pollutant disappeared – inverted p‑hacking, engineering non‑significance.”

Negative P-Hacking

A form of statistical manipulation aimed specifically at producing a p‑value that supports a negative or null conclusion. Researchers selectively exclude outliers, choose particular time windows, or drop certain subgroups until the p‑value rises above 0.05, then claim “no effect.” Negative p‑hacking is especially common in replication studies or in fields where null results are easier to publish than positive ones. It allows researchers to reject true effects under the guise of methodological rigor, effectively laundering bias through statistical noise.
Negative P-Hacking Example: “He removed the three highest responders from his dataset, and the p‑value climbed from 0.04 to 0.07 – negative p‑hacking, manufacturing a null result.”
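That move can be sketched with made-up data and a simple permutation test (pure standard library; the effect size, group sizes, and number of dropped responders are assumptions for illustration):

```python
import random
import statistics

random.seed(1)

def permutation_p(treat, ctrl, n_perm=2000):
    """Two-sided permutation test on the difference of group means."""
    observed = statistics.mean(treat) - statistics.mean(ctrl)
    pooled = list(treat) + list(ctrl)
    k = len(treat)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[:k]) - statistics.mean(pooled[k:])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm

# Made-up trial with a genuine treatment effect (shifted mean)
treat = [random.gauss(1.0, 1.0) for _ in range(15)]
ctrl = [random.gauss(0.0, 1.0) for _ in range(15)]

p_full = permutation_p(treat, ctrl)
# "Negative p-hacking": quietly drop the three highest responders
trimmed = sorted(treat)[:-3]
p_trimmed = permutation_p(trimmed, ctrl)
print(f"full data p = {p_full:.3f}, after trimming p = {p_trimmed:.3f}")
```

Dropping the highest responders shrinks the observed group difference, which pushes the p-value upward toward the "no effect" conclusion.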
A euphemism for experiencing sexual contact with the Phillie Phanatic.

First used by John Oliver on Last Week Tonight, exactly three minutes and 20 seconds into his Scientific Studies episode.
Bill was Phacking again at the ball game last week.
Phacking by Phantom Tollbooth January 3, 2018