It started the way most ideas do — with frustration. And it came from both sides of the lab bench.
Every week, another link would land in the group chat. A friend sharing a study about supplements. A family member forwarding a paper on vaccine efficacy. A colleague citing research on environmental toxins. And every time, the same question: “Is this legit?”
Our founder spent years in biology research — the kind where you read methodology sections for breakfast and argue about sample sizes over coffee. So naturally, he became the guy people asked. “Can you look at this study? Who funded it? Is this journal even reputable?”
He'd spend thirty minutes pulling apart a single paper. Checking the funding disclosures. Cross-referencing the authors' publication history. Evaluating whether the conclusions actually matched the data. And then he'd do it again the next day for someone else.
But it wasn't just the non-scientists struggling. Science journalists were fighting their own version of the same battle — on deadline, with a press release in hand, trying to figure out if the study behind it was worth reporting. Researchers were spending hours vetting citations. The volume was crushing: with hundreds of thousands of papers published every month, the manual process of evaluating each one hadn't scaled in decades.
The tools had changed. The rigor required to use them hadn't.