Methodology

How We Score

This page explains every step: which sources we read, how much weight each one carries, how we handle old data, and how we detect unusual review patterns. There are no hidden variables.

11 source categories
3 independent scores
18-month half-life on posts
Community catalog

The catalog grows with the community.

If a product is missing, you can submit it for scoring. Our agents validate the submission, source where it's sold, check for affiliate coverage, and pull community sentiment from across the web. A complete, scored record is typically ready within 90–120 seconds.

Add a product

Account required. 3 submissions per day.

01

The Scores

Three numbers, each measuring something different. They are calculated separately and displayed separately. A product can score high on Match and low on Product Rating — and you should see both numbers, not a blended compromise.

Semantic Match (0–100%)

How well this product matches your search query, adjusted by your profile. Calculated at query time. Not stored between sessions.

Product Rating (0–5.0)

The community's aggregate opinion of this specific product. Trust-weighted and recency-decayed. Sample size n shown on every card.

Brand Quality (0–5.0)

The community's broader assessment of the brand. Used as a supporting signal when a product has fewer than 15 data points.

Composite weights

With a search query

Semantic Match: 45%
Product Rating: 35%
Brand Quality: 20%

Without a query (browse mode)

Product Rating: 65%
Brand Quality: 35%
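As a sketch, the ranking composite might combine the three scores like this. Assumptions not stated on this page: inputs are normalised to a common 0–1 scale before weighting, and the function signature and names are illustrative only.

```python
def composite(semantic_match, product_rating, brand_quality, has_query):
    """Hypothetical ranking composite using the published weights.

    semantic_match is 0-100; the two ratings are 0-5.0. Each input is
    normalised to 0-1 first so the weights combine like-for-like
    (an assumption -- the page does not say how scales are reconciled).
    """
    m = semantic_match / 100.0
    p = product_rating / 5.0
    b = brand_quality / 5.0
    if has_query:
        # With a search query: 45% match, 35% product, 20% brand.
        return 0.45 * m + 0.35 * p + 0.20 * b
    # Browse mode: 65% product, 35% brand.
    return 0.65 * p + 0.35 * b
```

Note that this composite would only order results; per the "What we don't do" section, the three scores themselves are always displayed separately.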
02

Sources and Trust Weights

Different sources carry different weights. A trust weight between 0 and 1 is applied to every data point from that source before it contributes to a Product Rating.

Source category: weight

Lab data: 1.00
Bicycle Rolling Resistance — tires only. Standardised drum testing.

Scored editorial — primary: 0.90
road.cc, Cycling Weekly, BikeRadar, CyclingNews, Pinkbike

Scored editorial — specialist: 0.85
Cyclist, MBR, Singletrack, Gran Fondo, Triathlete Magazine

Specialist forums — Tier 1: 0.80
Weight Weenies, The Paceline Forum, Slowtwitch, TrainerRoad Forum

Specialist forums — Tier 2: 0.75
MTBR, Road Bike Review, Escape Collective Community, BikeRadar Forum

Unscored editorial: 0.75
DC Rainmaker, Escape Collective, VeloNews, BikeRumor, Rouleur

Cycling discussions — focused: 0.65
Specialist subreddits: r/velo, r/bikewrench, r/gravelcycling

Independent video reviews: 0.65
Peak Torque, Hambini, Shane Miller, Francis Cade, Berm Peak

Cycling discussions — broader: 0.55
General cycling communities and related subreddits

Retailer reviews: 0.55
Sigma Sports, Tredz, Merlin, Competitive Cyclist — verified purchase only

Network YouTube: 0.50
GCN Tech, GMBN Tech — lower weight due to commercial production relationships
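In code, applying a trust weight is a simple scale-down before aggregation. A minimal sketch; the category keys below are illustrative, not the site's actual identifiers.

```python
# Trust weights from the table above (keys are illustrative names).
TRUST_WEIGHTS = {
    "lab_data": 1.00,
    "scored_editorial_primary": 0.90,
    "scored_editorial_specialist": 0.85,
    "specialist_forum_tier1": 0.80,
    "specialist_forum_tier2": 0.75,
    "unscored_editorial": 0.75,
    "cycling_discussion_focused": 0.65,
    "independent_video": 0.65,
    "cycling_discussion_broad": 0.55,
    "retailer_verified": 0.55,
    "network_youtube": 0.50,
}

def weighted_sentiment(sentiment, source_category):
    """Scale a 0-5 sentiment score by its source's trust weight
    before it contributes to the Product Rating aggregate."""
    return sentiment * TRUST_WEIGHTS[source_category]
```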
03

The Pipeline

Every post goes through five steps before it contributes to a Product Rating.

01

Crawl

Each source fetched on a schedule. New posts since the last run are queued for processing.

02

Clean

HTML stripped. Whitespace normalised. Posts matching known bot-pattern signatures are flagged for review, not silently removed.

03

Chunk

Each post split into 500-token chunks with 50-token overlap. Chunk boundaries avoid breaking mid-sentence where possible.

04

Embed

Each chunk converted to a vector embedding and stored with source metadata.

05

Score

Sentiment extracted per source. Trust weight applied. Recency decay applied. Bot anomaly score applied as a downward adjustment. Weighted aggregate written to the product score database.
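The chunking in step 03 can be sketched as a sliding window over tokens. This simplified version omits the sentence-boundary adjustment the page describes; the function name and parameters are illustrative.

```python
def chunk_tokens(tokens, size=500, overlap=50):
    """Split a token list into fixed-size chunks with overlap.

    A simplified sketch of step 03: real chunking also tries to
    respect sentence boundaries, which this version omits.
    """
    if not tokens:
        return []
    step = size - overlap  # advance 450 tokens per chunk
    chunks = []
    for start in range(0, max(len(tokens) - overlap, 1), step):
        chunks.append(tokens[start:start + size])
    return chunks
```

For a 1,000-token post this yields three chunks, each sharing its first 50 tokens with the tail of the previous chunk.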

04

Recency

Older posts carry less weight. A post from 18 months ago contributes at 50% of its original weight. A post from 3 years ago contributes at approximately 25%. The decay function is continuous — there is no cutoff date at which posts stop contributing entirely.

Decay reference

1 month: 98%
6 months: 84%
12 months: 71%
18 months: 50%
24 months: 35%
36 months: ~25%
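The simplest curve with an 18-month half-life is a pure exponential. An illustrative sketch only: it matches the published anchors at 18 and 36 months, but the reference table above decays more slowly at young ages (98% at 1 month versus roughly 96% for a pure exponential), so the site's exact function is an assumption.

```python
def recency_weight(age_months, half_life=18.0):
    """Exponential half-life decay: 50% at 18 months, 25% at 36.

    Continuous, with no cutoff: old posts shrink toward zero
    but never stop contributing entirely.
    """
    return 0.5 ** (age_months / half_life)
```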
05

Anomaly Detection

Unusual posting patterns raise a bot score on that account's data. Posts are not removed — they are down-weighted in proportion to the anomaly score. The raw data and bot scores are available in the source breakdown on each product page.

Signals that contribute to a higher bot score: account age relative to post volume, posting frequency anomalies, sentiment homogeneity across posts, and template-like sentence structure.
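The page says flagged posts are down-weighted in proportion to the anomaly score rather than removed. One plausible form, assuming a bot score normalised to [0, 1], is a linear scale-down; the exact adjustment function is an assumption.

```python
def apply_bot_adjustment(weight, bot_score):
    """Down-weight, never delete: scale a data point's weight by
    (1 - bot_score), where bot_score runs from 0 (no anomaly)
    to 1 (near-certain bot). The linear form is an assumption --
    the page states only that the adjustment is downward and
    proportional to the anomaly score.
    """
    if not 0.0 <= bot_score <= 1.0:
        raise ValueError("bot_score must be in [0, 1]")
    return weight * (1.0 - bot_score)
```

Even at the maximum bot score the data point remains in the source breakdown; its weight simply falls to zero.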

06

What we don't do

No brand pays for placement. Affiliate commission is earned after a click — it does not affect ranking.

No blended composite score. The three scores are always shown separately.

No silent removal of data. Posts flagged by anomaly detection are down-weighted, not deleted.

No suppression by affiliate status. Products without affiliate coverage appear in results at the same rank they earn.