I grew up on the GeoCities web. The internet was full of sites that existed for no reason other than someone thought they would be useful or interesting. EarthCam pointed cameras at cities around the world and let you watch. IMDb started as a Usenet project where film enthusiasts compiled credits by hand. StumbleUpon dropped you on a random website and half the time it was someone’s passion project about an esoteric topic that you never knew you cared about until you spent forty-five minutes reading it.
Those sites had something in common: they stood on their own. They were not features inside a platform. They were not optimized for engagement metrics. They were not gated behind accounts. They were just useful, sometimes strange, often delightful, and they were made by people who wanted them to exist.
That web is mostly gone. So I have been building it back. I built Statistics Tools — 132 browser-based statistical calculators, each one a deep educational resource — as the kind of site I grew up loving. It is one of several tool platforms I have built recently, and it is the best example of what I think the web should be: useful, independent, and comprehensive.
What replaced the useful web
The web in 2026 is dominated by a handful of monoliths owned by the biggest companies in the world. The rare independent website is either a personal site (much like this one, which is about me) or a content farm engineered to rank in search results without actually helping anyone. The middle ground where useful, independent, deep-content websites used to live has been hollowed out.
Search for a statistical concept and the first page of results is full of SEO-optimized articles that define the term in the first paragraph, pad the middle with filler, and link to a premium tool at the bottom. Search for how to run a t-test and you get a blog post that explains what a t-test is but does not actually let you run one. The information is thin. The utility is zero. These sites exist to capture traffic, not to be genuinely useful.
The platforms are worse. Social media reduced information to fragments — threads, posts, short videos — optimized for engagement rather than understanding. A complex statistical concept does not compress into a tweet. A careful explanation of when to use a Welch’s t-test instead of a standard t-test is not engaging content. It is boring, precise, and necessary. The engagement-driven web has no place for it.
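To make that concrete, here is the standard form of Welch's t-statistic with its Welch–Satterthwaite degrees of freedom, which is precisely the kind of unglamorous detail a careful explanation has to cover:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}, \qquad \nu \approx \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{(s_1^2/n_1)^2}{n_1 - 1} + \frac{(s_2^2/n_2)^2}{n_2 - 1}}$$

Unlike the standard t-test, Welch's version does not pool the two sample variances, which is why it holds up when the groups' variances differ. None of that fits in a tweet.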

Building deep utility instead of thin content
When I built Statistics Tools, I made a deliberate decision: every tool would be deeply contextual. Not just a calculator with an input field and an output number, but a complete resource that integrates the computation with everything you need to understand it.
The one-way ANOVA calculator does not just return an F-statistic. It renders the formula in LaTeX, walks through the calculation step by step, explains the assumptions (independence, normality, homogeneity of variance), provides preset example datasets so you can explore before entering your own data, discusses what to do when assumptions are violated, links to the Shapiro-Wilk test for checking normality and Levene’s test for checking equal variances, and includes guidance on how to report the result in APA format.
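For reference, the formula at the heart of that page is the standard one-way ANOVA F-ratio: the between-group mean square over the within-group mean square, for $k$ groups with $n_i$ observations each and $N$ observations in total.

$$F = \frac{MS_{\text{between}}}{MS_{\text{within}}} = \frac{\sum_{i=1}^{k} n_i (\bar{x}_i - \bar{x})^2 \,/\, (k - 1)}{\sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2 \,/\, (N - k)}$$

where $\bar{x}_i$ is the mean of group $i$ and $\bar{x}$ is the grand mean.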
That is a lot of content for a single calculator. Multiply it by 132 calculators and a 433-term glossary, and you have something that does not exist at the thin-content layer of the web. It exists at the depth where the early web used to live — where someone cared enough to make something comprehensive.

Why AI makes the deep independent web possible again
Here is the part that would not have been possible five years ago. Building 132 statistical calculators, each with deep educational content, step-by-step walkthroughs, historical context, worked examples, and cross-references to related tools — that is not a project a solo developer would have attempted. The surface area is too large. The research alone would take months. The content production would take longer.
AI changed that calculus. I used AI to accelerate both the research and the content generation across every tool — the mathematical walkthroughs, the assumption explanations, the example datasets, the glossary definitions. My background in bioinformatics and biochemistry at Queen’s University gave me the statistical training to direct the process and ensure the content was accurate. AI handled the scale. I handled the judgment.
This is the same workflow I used to build PDF Pony (145 tools), ImageNurse (120 tools), GIS Tools (101 tools), and MyText (161 tools). The pattern is consistent: AI makes it feasible for one developer to cover an entire functional domain at a depth that used to require a team and a budget. The independent useful website — the kind I grew up loving — is economically viable again because the cost of producing deep, comprehensive content has dropped by an order of magnitude.
Democratizing access to statistical analysis
Statistical software has historically been expensive. SPSS costs over a hundred dollars a month. GraphPad Prism runs several hundred a year. R is free but has a steep learning curve that excludes anyone who is not a programmer. The result is that access to statistical analysis is gated by either money or technical skill — which means students, small organizations, researchers in developing countries, and non-technical professionals are often locked out.
I built Statistics Tools to eliminate those barriers. A student working on a thesis can run a paired t-test, check a Shapiro-Wilk normality test, compute Cronbach’s alpha for scale reliability, and estimate sample size for their next study — all without installing software, creating an account, or paying for a license. A journalist evaluating a health study can check whether the reported p-value and effect size actually support the headline. A small business owner running their first A/B test can determine whether their results are statistically significant or just noise.
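To give a sense of what that last check involves, here is a minimal sketch of a pooled two-proportion z-test in TypeScript, the kind of computation that runs comfortably in the browser. The function names and the error-function approximation are my own illustration, not code from Statistics Tools.

```typescript
// Minimal two-proportion z-test for an A/B experiment.
// Illustrative sketch only: names and structure are mine,
// not Statistics Tools' actual implementation.

// Error function via the Abramowitz–Stegun 7.1.26 approximation
// (absolute error below ~1.5e-7, plenty for a significance check).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
    t * (-1.453152027 + t * 1.061405429))));
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Standard normal cumulative distribution function.
function normalCdf(z: number): number {
  return 0.5 * (1 + erf(z / Math.SQRT2));
}

// Pooled z-test: the null hypothesis is that both variants convert at the same rate.
function twoProportionZTest(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): { z: number; pValue: number } {
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (conversionsB / visitorsB - conversionsA / visitorsA) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z))); // two-sided
  return { z, pValue };
}

// Example: variant A converts 120 of 2400 visitors, variant B 156 of 2450.
const { z, pValue } = twoProportionZTest(120, 2400, 156, 2450);
console.log(`z = ${z.toFixed(2)}, p = ${pValue.toFixed(4)}`); // p ≈ 0.04
```

A result like p ≈ 0.04 clears the conventional 0.05 threshold, but only just. A real tool would add a confidence interval for the difference and a power check, which is exactly the surrounding context the calculators are built to provide.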
And because every calculator is also a lesson, these users do not just get a number — they learn what the number means. That is the difference between a thin tool and a deep one, and it is the difference between a website that extracts value and one that creates it.
The web I want to build
The web I grew up on was not better because the technology was better — it was better because the incentive structures were different. People built websites because they wanted to share something useful or interesting, not because they wanted to maximize engagement or capture email addresses for a marketing funnel.
I cannot bring that web back. But I can build the kind of websites that would have existed on it. Statistics Tools is 132 calculators, each one a deep resource, running entirely in the browser, available in four languages, free to use, with no accounts and no tracking. It is the kind of site I would have been thrilled to stumble onto in 2002. The fact that I can build it in 2026 — solo, with AI — is the optimistic part of this story.
The web does not have to be a collection of platforms and content farms. It can still be a place where independent sites exist because someone thought they should, and where the goal is utility, not extraction. That is what I am building.
Statistics Tools runs entirely in the browser, works offline, and is available in four languages.