For Science Journalists & Researchers

Verify before you publish. One bad study ends careers.

Surgisphere. Amyloid fraud. The Stanford president. Every science scandal started with a study someone trusted without checking. Verify any paper in 2 minutes — before your story runs.

13,000+

Papers retracted in 2023

1 in 7

May contain fraudulent data

2 min

To verify any paper

Verify research from the world's leading journals

Nature
The Lancet
NEJM
Science
Cell
PNAS
JAMA
BMJ
PubMed
arXiv

Recently Analyzed

Real papers. Real scores. Click to preview.

How It Works

Three steps from press release to verified story.

1

Paste a DOI or PMID, or upload a PDF

Got a press release with a study link? A PDF from a PR flack? Paste the identifier or upload the file directly. Works on any published paper.
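To make this step concrete, here's a minimal, hypothetical sketch of how a pasted identifier might be classified; the regexes reflect the standard shapes of DOIs and PMIDs, not Paper Integrity's actual intake code.

```python
import re

# Standard identifier shapes: DOIs start with "10." plus a registrant code;
# PMIDs are short strings of digits. Both patterns are illustrative only.
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+", re.IGNORECASE)
PMID_RE = re.compile(r"^\d{1,8}$")

def classify_identifier(text: str) -> str:
    """Return 'doi', 'pmid', or 'unknown' for a pasted identifier."""
    text = text.strip()
    if DOI_RE.search(text):
        return "doi"
    if PMID_RE.match(text):
        return "pmid"
    return "unknown"

print(classify_identifier("10.1234/example.5678"))  # doi (made-up example)
print(classify_identifier("12345678"))              # pmid (made-up example)
```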

2

AI performs investigative-grade analysis

Statistical power calculations, methodology benchmarking, conflict detection, and citation quality assessment — the kind of vetting a specialist scientist would do, in minutes.

3

Get a verified story — or a bullet dodged

Comprehensive report with scores, red flags, citation-ready language, and stronger alternative studies. Know whether to run the story, add caveats, or kill it entirely.

Investigative-Grade Analysis

Beyond AI Summaries

ChatGPT gives you summaries. We give you ammunition.

Specific funding amounts, undisclosed conflicts, endpoint switching, statistical inadequacies — the concrete evidence you need to know if a study is worth reporting on, or worth avoiding.

Know If The Numbers Hold Up

Every report includes post-hoc power calculations, sample size adequacy assessment, and field benchmarks. Know instantly if a study was too small to detect its own claimed effect — the kind of flaw that makes a story fall apart after publication.

Example: "N=47 is underpowered for this endpoint — results may not replicate."
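To show what that example means in practice, here's a minimal sketch of a post-hoc power check using statsmodels (an assumption made for illustration, not necessarily the library behind our reports):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Suppose a two-arm study reports N=47 (24 vs 23) and claims a medium
# effect (Cohen's d = 0.5). What power did it actually have?
achieved = analysis.power(effect_size=0.5, nobs1=24, ratio=23 / 24, alpha=0.05)
print(f"achieved power: {achieved:.2f}")  # ~0.4, well below the 0.8 convention

# How many participants per arm would 80% power have required?
needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"needed per arm: {needed:.0f}")    # ~64 per arm, not ~24
```

At roughly half the sample size convention demands per arm, a null result is more likely than a detection, which is exactly why "results may not replicate."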

Gold Standard Comparison

See exactly how a study's design compares to the gold standard for its research question. We rank methodology on evidence hierarchy so you can tell readers how seriously to take a finding — not just that a study "suggests" something.

"RCT vs observational vs case report — where does this study land on the evidence ladder?"

Reporter-Ready Language

Stop wondering how to hedge a finding. Every report includes ready-to-use language for your story — plus alternative papers that are methodologically stronger, so you can report with appropriate confidence.

"While Smith et al. (2024) suggests X, the study's observational design limits causal claims..."

Concrete Evidence, Not Vibes

We surface the specifics that matter: exact funding amounts, institutional affiliations, protocol deviations, endpoint changes, p-hacking indicators, and cherry-picked outcome reporting. The details that expose hype — and protect your byline.

"Primary endpoint changed post-registration; secondary endpoint promoted to primary."

You're on deadline. A bad study could be your next correction.

A press release lands in your inbox. The study looks solid — published journal, university PR behind it, catchy finding. Your editor wants it by 4pm.

That's exactly how it happened with Surgisphere. The Lancet published a hydroxychloroquine study that halted WHO trials worldwide. The underlying data could never be verified. Journals retracted it, and journalists who reported it had to run corrections. The amyloid fraud misdirected Alzheimer's research for 16 years. The Stanford president resigned over manipulated data. Every one started as a study someone trusted on deadline.

13,000+ papers were retracted in 2023 alone — and Retraction Watch tracks over 48,000 total, many still being cited years later. 1 in 7 papers may contain fraudulent data.

A correction costs more than a subscription. Verify before you publish.

What we catch that peer review doesn't

Paper Integrity scans for the red flags that slip past reviewers, and past reporters on deadline: funding conflicts, methodology gaps, buried limitations, and credibility problems.

Conflict of Interest

Who funded the research? Do the authors have financial ties to outcomes? We surface what's disclosed — and flag what's missing. The pharma funding buried on page 14.

Selection Bias

Were participants chosen to favor certain results? We examine recruitment methods and sample representativeness — the fine print that makes "promising results" meaningless.

Methodological Rigor

Is the study design appropriate? Are the statistical methods sound? We assess the scientific foundation so you know if the headline matches the evidence.

Reporting Bias

What results were emphasized? What was buried in the appendix? We look at the full picture, not just the abstract — which is all most PR flacks want you to read.

Detailed Reports

Get comprehensive analysis with numerical scores, methodology assessment, and actionable insights — fast enough to make your deadline.

Share Your Findings

Download reports or generate social posts to share analysis with editors, colleagues, or your audience. Show your work, protect your credibility.

Ready to verify your next story?

Our Story

Why we built Paper Integrity

For science journalists, researchers, and anyone who needs to know if a study is real.

The Problem We Kept Running Into

Origin Story

It started the way most ideas do — with frustration. And it came from both sides of the lab bench.

Every week, another link would land in the group chat. A friend sharing a study about supplements. A family member forwarding a paper on vaccine efficacy. A colleague citing research on environmental toxins. And every time, the same question: “Is this legit?”

Our founder spent years in biology research — the kind where you read methodology sections for breakfast and argue about sample sizes over coffee. So naturally, he became the guy people asked. Can you look at this study? Who funded it? Is this journal even reputable?

He'd spend thirty minutes pulling apart a single paper. Checking the funding disclosures. Cross-referencing the authors' publication history. Evaluating whether the conclusions actually matched the data. And then he'd do it again the next day for someone else.

But it wasn't just the non-scientists struggling. Science journalists were fighting their own version of the same battle — on deadline, with a press release in hand, trying to figure out if the study behind it was worth reporting. Researchers were spending hours vetting citations. The volume was crushing. Hundreds of thousands of papers published every month, and the manual process of evaluating each one hadn't scaled in decades.

Publishing tools had changed. The process for vetting what they produced hadn't.

Why It Matters Now More Than Ever

The Stakes

Science has a credibility problem — and it's not because the science itself is failing. It's because the infrastructure for evaluating it hasn't kept pace with the output.

For science journalists, the pressure is structural. You cover beats that require trusting studies you don't have time to fully vet. A PR agency sends a press release with a credentialed author, a real journal, a compelling finding. The story writes itself — until it doesn't. Surgisphere published in The Lancet and NEJM simultaneously before anyone checked if the data was real. The Alzheimer's amyloid fraud misdirected $40 billion in research for 16 years. Marc Tessier-Lavigne resigned the Stanford presidency over manipulated data.

For researchers, the challenge is volume. Funding conflicts buried in supplementary materials. Retracted papers that continue accumulating citations years after withdrawal. Reproducibility crises across entire fields.

That gap between published and trustworthy is where bad decisions get made — in newsrooms, in clinics, and at kitchen tables.

2.8M+

Papers published annually worldwide

13,000+

Retractions in 2023 — and climbing fast

1 in 7

Papers may contain fraudulent or manipulated data

Research Clarity at Every Level

The Solution

Paper Integrity was built to close that gap — for journalists, researchers, and non-researchers alike.

We use AI to analyze scientific publications the way a seasoned peer reviewer would: examining funding sources, author conflicts of interest, methodological rigor, statistical validity, journal reputation, citation quality, and reproducibility indicators. The output scales to the reader.

For Science Journalists & Newsrooms

Verify before you publish. Check funding conflicts, methodology red flags, and retraction status before a study becomes your story — or your correction. The Chrome extension works right in your browser, on deadline.

For Research Integrity Officers

Triage incoming allegations faster. Screen submissions for methodology red flags, undisclosed conflicts, and statistical anomalies before committing to a full investigation. A rapid first-pass that never gets fatigued.

For Researchers & Academics

A rapid triage layer for literature review. Flag potential issues in funding, methodology, or citation networks before you invest hours in a deep read. A second set of trained eyes on every paper you cite.

Paper Integrity is proudly built in America by a small, independent team — no big tech affiliations, no institutional conflicts, no agenda. Just a commitment to making research transparency accessible to everyone.

Verify before you publish. Every time.

One retraction-driven correction costs more, in reputation and time, than a year of Paper Integrity. Check the study before you write the story.

Less than your morning coffee habit. Far less than a correction costs.

$29/month is expense-report friendly. One avoided correction pays for years of coverage.

Sample Reports

Browse real analyses

Free

  • Browse pre-analyzed papers
  • Full bias reports to inspect
  • See exactly what you'll get

POPULAR

Reporter

For individual journalists

$29/month

  • 30 analyses/month
  • Power calculations & benchmarking
  • Reporter-ready citation language
  • Social media summaries
  • Chrome extension access
  • Email support

Need more? Add credits at $3 each.

Newsroom

For teams & desks

$99/month

  • 150 analyses/month
  • 3 team seats included (Coming Soon)
  • Extra seats $20/mo each
  • All Reporter features
  • Priority support
  • Usage dashboard

Credits

Pay as you go

$3/analysis

  • Never expires
  • All features included
  • No commitment

Common Questions

Everything journalists and researchers need to know before trusting an AI with their verification workflow.

Can I use Paper Integrity findings in my reporting?

Yes — with appropriate framing. Paper Integrity is a screening and analysis tool that surfaces red flags for further investigation. Use our findings to ask better questions, identify experts to contact, and decide whether a study merits the caveats you'll add to your story. We provide reporter-ready language in every report to help you describe findings accurately. As with any tool, your editorial judgment remains the final word.

How accurate is AI analysis compared to human peer review?

Paper Integrity is a screening tool, not a replacement for peer review. We identify red flags — funding conflicts, methodological gaps, statistical issues — that warrant closer examination. Think of us as a thorough first-pass that catches what hurried readers miss. The final judgment is always yours.

Is my paper content kept private?

Yes. Your submissions are encrypted in transit and at rest. We process papers solely to generate your report — we don't use them to train AI models, share them with third parties, or retain them longer than necessary. You can delete your data anytime.

What's the methodology behind the bias scores?

We analyze multiple dimensions: funding source disclosure, author conflicts of interest, sample selection methods, statistical rigor, outcome reporting completeness, and citation context. Each dimension is scored and weighted based on established research integrity frameworks. Full methodology details are included in every report.
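For illustration only, a weighted multi-dimension score can look like the sketch below; the dimension names come from the answer above, but the weights and numbers are invented, not our production values.

```python
# Invented weights for illustration; they sum to 1.0.
WEIGHTS = {
    "funding_disclosure": 0.20,
    "conflicts_of_interest": 0.20,
    "sample_selection": 0.15,
    "statistical_rigor": 0.20,
    "outcome_reporting": 0.15,
    "citation_context": 0.10,
}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores on a 0-100 scale."""
    return sum(WEIGHTS[d] * s for d, s in scores.items())

print(overall_score({
    "funding_disclosure": 40, "conflicts_of_interest": 55,
    "sample_selection": 70, "statistical_rigor": 60,
    "outcome_reporting": 65, "citation_context": 80,
}))  # 59.25
```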

How is this different from plagiarism checkers?

Plagiarism checkers find copied text. Paper Integrity analyzes research quality — whether a study's conclusions are supported by its methods, whether conflicts are disclosed, whether the statistics hold up. We're checking if the science is sound, not if the words are original.

Can I analyze papers I didn't write?

Absolutely. Most users analyze papers they're reporting on or citing in their own work. If you can legally access a paper, you can analyze it. Paste the DOI from a press release, upload a PDF, or paste a PubMed link.

Why not just ask ChatGPT about my paper?

You can — but you'll get generic answers. Paper Integrity cross-references Retraction Watch, funding disclosure databases, and citation networks. We check if a paper has been retracted, flagged, or criticized in ways ChatGPT can't see. Plus, our structured methodology catches the same red flags every time — no prompt engineering required, and no hallucinated citations.
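As a rough sketch of what a retraction cross-check can look like, here's a lookup against a local copy of the Retraction Watch database (publicly distributed as a CSV via Crossref); the file path and column name are assumptions based on the public dataset, not our pipeline.

```python
import csv

def load_retracted_dois(path: str = "retraction_watch.csv") -> set[str]:
    """Collect DOIs of retracted papers from a local CSV export."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["OriginalPaperDOI"].strip().lower()
                for row in csv.DictReader(f) if row.get("OriginalPaperDOI")}

retracted = load_retracted_dois()
doi = "10.1234/example.5678"  # placeholder: paste the DOI you're checking
print("RETRACTED" if doi.lower() in retracted else "no retraction on record")
```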