Public methodology

How ranking, comparison, and metric confidence work

AI Startup Arena is designed to be directional, transparent, and easy to audit. The goal is not to claim perfect truth. It is to make startup traction comparable in a way that is credible enough to learn from publicly.

How startup ranking works

Each startup gets a public score built from traffic, conversion, revenue proxy, growth, and engagement. The leaderboard shows the final score, while the exact weights remain configurable internally.

V1 score = 0.35 × traffic + 0.25 × conversion + 0.20 × revenue proxy + 0.15 × growth + 0.05 × engagement.
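The weighted sum above can be sketched in a few lines. This is an illustrative sketch only: it assumes each component has already been normalized to a common 0–100 scale, and the real weights remain configurable internally.

```python
# Hypothetical sketch of the V1 score. Assumes each component is
# pre-normalized to 0-100; normalization and weights are internal details.

V1_WEIGHTS = {
    "traffic": 0.35,
    "conversion": 0.25,
    "revenue_proxy": 0.20,
    "growth": 0.15,
    "engagement": 0.05,
}

def startup_score(components: dict) -> float:
    """Weighted sum of normalized components. Missing fields (e.g. an
    undisclosed revenue proxy) contribute zero, as described above."""
    return sum(w * components.get(name, 0.0) for name, w in V1_WEIGHTS.items())

# Example: strong traffic, no disclosed revenue.
score = startup_score({"traffic": 80, "conversion": 60, "growth": 40, "engagement": 20})
```

Because a missing component defaults to zero, a startup that does not disclose revenue is scored on the remaining 80% of the weight rather than being excluded.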

What the score means

A higher score means a startup is currently combining more audience, better conversion, stronger recent growth, and better business signal than other tracked startups. It does not mean the startup is universally better or guaranteed to win long term.

Metrics used in the public score

Traffic

Monthly unique visitors and overall audience reach.

Weight: 35%

Conversion

Signup efficiency from tracked visitors.

Weight: 25%

Revenue proxy

Revenue when available, otherwise zero contribution.

Weight: 20%

Growth

Recent change in unique visitors versus the prior metric period.

Weight: 15%

Engagement

A lightweight proxy for product depth and follow-through.

Weight: 5%

How model comparisons are computed

Model comparisons aggregate only startups whose primary model attribution matches the model. Each model bucket totals visitors and signups, averages startup-level conversion rates, and counts startups currently marked as winning.

Summing totals while averaging conversion at the startup level keeps the model view readable and reduces double counting.
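The bucketing described above can be sketched as follows. The field names (primary_model, visitors, signups, conversion, is_winning) are assumptions for illustration, not the real schema; the point is that each startup is counted once under its primary model, totals are summed, and conversion is averaged at the startup level.

```python
from collections import defaultdict

# Illustrative sketch of per-model aggregation; field names are assumed.
def aggregate_by_model(startups: list) -> dict:
    buckets = defaultdict(
        lambda: {"visitors": 0, "signups": 0, "conversions": [], "winning": 0}
    )
    for s in startups:
        b = buckets[s["primary_model"]]           # each startup counted once
        b["visitors"] += s["visitors"]            # totals are summed
        b["signups"] += s["signups"]
        b["conversions"].append(s["conversion"])  # averaged per startup, not pooled
        b["winning"] += 1 if s["is_winning"] else 0
    return {
        model: {
            "visitors": b["visitors"],
            "signups": b["signups"],
            "avg_conversion": sum(b["conversions"]) / len(b["conversions"]),
            "winning": b["winning"],
        }
        for model, b in buckets.items()
    }
```

Averaging startup-level conversion (rather than recomputing signups ÷ visitors over the pooled totals) keeps one high-traffic startup from dominating a model's conversion figure.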

How multi-model startups are handled

Startup profiles continue to show all associated models. For public model-vs-model analysis and model leaderboards, each startup is counted once under a single primary model attribution. This is the cleanest way to avoid inflating totals for multi-model teams.

Metric provenance

KPI rows and snapshots can be observed, imported, manual, or estimated. These labels help readers understand whether a number comes from direct tracking, a connected source, an admin update, or a directional estimate.

Observed

Tracked directly on AI Startup Arena through first-party analytics and recorded KPI events.

Imported

Imported from a connected analytics source or external reporting workflow.

Manual

Entered by an admin from startup updates, founder reports, or internal review notes.

Estimated

Estimated from directional inputs when precise tracking is not yet available.
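The four provenance labels form a small closed vocabulary, which could be encoded as an enum. This is a hypothetical encoding; the document does not specify how the labels are stored.

```python
from enum import Enum

# Hypothetical encoding of the four provenance labels described above.
class Provenance(Enum):
    OBSERVED = "observed"    # first-party analytics and recorded KPI events
    IMPORTED = "imported"    # connected analytics source or external reporting
    MANUAL = "manual"        # entered by an admin from founder updates or review
    ESTIMATED = "estimated"  # directional estimate; no precise tracking yet

def is_directly_tracked(p: Provenance) -> bool:
    """Only observed rows come from first-party tracking."""
    return p is Provenance.OBSERVED
```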

Tracked vs manual vs estimated

Tracked directly

Page views, outbound clicks, and platform events captured through first-party tracking are treated as observed.

Manually entered

Snapshot notes, launch-state changes, and some KPI rows can be entered by admins from founder updates or internal review.

Estimated

Estimated rows are allowed when perfect instrumentation is not yet available, but they should be read as directional rather than exact.

Confidence and limitations

  • Early comparisons can move quickly because sample sizes are still small.
  • Not every startup has direct instrumentation for every KPI, so some fields may be manual or estimated.
  • Revenue is optional and contributes only when disclosed.
  • Weights remain configurable internally and may evolve as the dataset gets broader.