OW Advisor Guide

What Does This Tool Do?

This tool learns "which player wins more on which hero" from your past match data. Every match you record adds to the data, improving recommendation accuracy over time.

The flow is simple:

  • Record matches (hero compositions and results)
  • The tool learns patterns from your data
  • Get winning compositions suggested for your next match

It starts with recommendations based on game knowledge (map aptitude and archetype matchups), and as data accumulates, it increasingly reflects individual hero strengths and synergistic pairs.

Getting Started

Four steps to start using OW Advisor.

1. Create Account

Register with a username and password. You can also sign in with Google.

2. Register Party Members

Add your regular teammates on the Group management page. You can set their main roles and hero pools.

3. Record Matches

Enter map, hero compositions, and match results on the Record page. More data means better recommendations.

4. Get Recommendations

Visit the Recommend page to get optimal hero composition suggestions for your next match.

Using Recommendations

The Recommend page suggests optimal hero compositions based on your accumulated match data.

Basic Steps

  1. Select map and side (Attack/Defense)
  2. Select participating party members
  3. Optionally enter known enemy heroes
  4. Click the "Get Recommendations" button

Enemy Information

Enemy hero input is optional, but entering them improves accuracy through counter analysis. The enemy tank has the most impact. Try entering any enemy heroes you spot at the start of a match.

Reading Results

Results are displayed in score order. Each composition shows a score, data level (Lv1-3), and confidence bar. Higher scores indicate compositions more likely to win based on data. Choose from multiple suggestions based on your team's preferences and the situation.

Statistics Dashboard

The Statistics page shows analysis of your accumulated match data across five tabs.

  • Overview — Total matches, win rate, recent win/loss trends
  • Player — Each member's hero usage and win rates
  • Hero — Per-hero usage count and win rate analysis
  • Map — Win rates by map, identifying strengths and weaknesses
  • Trend — Win rate over time and streak analysis

Some tabs may be unavailable when data is limited. More detailed analysis becomes available as you record more matches.

Meta Analysis

View hero aptitude by map and counter relationships against enemy archetypes (dive, poke, brawl, etc.). This page visualizes the game knowledge (prior distributions) used internally by the recommendation engine.


Using Groups

Groups let you share data with your party members. Match records, players, and statistics are all managed per group.

Creating a Group

A default group is automatically created when you register. You can create additional groups from the Group management page.

Inviting Members

Each group has an invite code. Share this code with friends so they can join your group. You can find the invite code on the Group management page.

Switching Groups

If you belong to multiple groups, you can switch the active group using the dropdown in the navbar. Switching changes the displayed data accordingly.

Data Isolation

Group data is completely isolated. Match records from Group A are invisible to Group B. This lets you manage separate records for different parties.

Understanding Recommendation Scores

Recommendation scores are calculated from three components.

  • Individual Performance (50%) — How well the player performs on that hero
  • Pair Synergy (30%) — How well ally hero combinations work together
  • Counter (20%) — Advantage or disadvantage against enemy composition

How to read scores

The baseline score is 0.000 (neutral). For example, "+0.050" means performance 0.050 points above average is expected. This is not a win rate but a relative strength ranking score for comparing compositions.
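
The weighting above can be sketched as a plain weighted sum. This is an illustrative sketch, not the app's actual code; the function name and the assumption that the three components are simply summed are ours:

```python
def composition_score(individual: float, synergy: float, counter: float) -> float:
    """Combine the three components with the 50/30/20 weights from the guide.
    Each input is a deviation from the 0.000 neutral baseline."""
    return 0.5 * individual + 0.3 * synergy + 0.2 * counter

# A composition slightly above average on every layer:
print(f"{composition_score(0.08, 0.02, 0.01):+.3f}")  # +0.048
```

Because each component is already centered on 0.000, the combined score stays a relative index: it compares compositions against each other, not against a fixed win-rate scale.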

Data Levels

  • Lv1 — Recommendations based on individual performance only
  • Lv2 — Pair synergy is added
  • Lv3 — Counter information is also included

A wider confidence bar means more data supports that recommendation.

What Happens With Limited Data?

With limited data, recommendations lean heavily on per-map game knowledge. This is a Bayesian estimation feature that prevents unreliable recommendations when data is scarce, using general game knowledge as a starting point and updating with personal data.

How data volume affects recommendations

  • ~10 matches — Game knowledge still dominates, individual performance starts to appear
  • ~30 matches — Individual tendencies become visible
  • ~100 matches — Pair synergy and counter data reach reliable levels
  • 300+ matches — Automatic weight optimization becomes available

Maturity Levels

The system progressively enables features based on data volume.

  • Cold Start (up to 30 matches) — Fixed weights, game knowledge-based recommendations
  • Growing (30–100 matches) — Calibration monitoring of recommendation accuracy begins
  • Stable (100–300 matches) — Automatic weight optimization enabled
  • Mature (300+ matches) — All features enabled. High-accuracy recommendations
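
The thresholds above map directly to stages. A minimal sketch (exact boundary handling at 30/100/300 is our assumption, since the guide's ranges overlap):

```python
def maturity_level(total_matches: int) -> str:
    """Map the group's recorded match count to the maturity stage
    (thresholds from the guide; function name is illustrative)."""
    if total_matches < 30:
        return "Cold Start"
    if total_matches < 100:
        return "Growing"
    if total_matches < 300:
        return "Stable"
    return "Mature"

print(maturity_level(25), maturity_level(150), maturity_level(400))
```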

How to Record Matches

Record the hero picked at match start. Even if you swapped mid-match, record the initial pick.

Why initial picks?

This tool optimizes "what to pick first in the next match". Mid-match swap decisions depend on the situation, so we focus on the initial selection.

  • Select map, side, and party members
  • Choose each member's initial hero pick
  • Record the result and save

The impact of swaps on match outcomes is indirectly learned through the counter layer (see FAQ below for details).

FAQ

Is it useful with limited data?

Because we use Bayesian estimation, even with limited data the system provides estimates based on game knowledge (map aptitude). It never gives random recommendations, and naturally transitions to personalized recommendations as data grows.

As a guideline, individual tendencies start appearing around 10 matches, and pair synergy becomes visible after 30 matches. Even early on it works as a rough guide, and accuracy improves the more you use it.

Is it OK to ignore hero swaps?

We don't record swap details, but their impact is indirectly reflected in the data.

For example, if Player A picks Reinhardt first, the enemy runs dive, they swap but still lose — recording this creates a "Reinhardt × Enemy Dive × Loss" data point. If this pattern continues, Reinhardt's recommendation score naturally drops against dive comps.

In other words, matches where swaps occurred are learned as "signals that the initial pick wasn't optimal." The tool's goal is to recommend optimal initial picks that minimize the need for swapping.

Are scores win rates?

No. Scores are relative ranking values, not win rate percentages.

Scores are a composite of three factors: Individual Performance (hero-specific win tendency), Pair Synergy (compatibility with allies), and Counter (advantage against enemy comp). Each Bayesian estimate is weighted and summed.

0.000 is the baseline — positive means better than average, negative means worse. Think of it as a comparative index for compositions rather than an intuitive number like "60% win rate."

Do I need to enter enemy info?

Recommendations work without enemy info, but accuracy improves the more you provide.

  • No enemy info — Recommendations use individual performance + pair synergy only
  • Enemy tank only — Counter layer activates, reflecting tank matchups
  • Multiple enemies — Archetype detection (dive, poke, etc.) activates for full-power recommendations

If you can see the enemy team at match start, entering what you can is recommended.

What data does counter analysis use?

No special input needed. Every match record (ally heroes + enemy tank + result) automatically feeds the counter layer.

For example, "Player A's Reinhardt × Enemy Winston × Loss" and "Player B's Genji × Enemy Winston × Win" — these combinations accumulate with each match.

When enough data accumulates, the system automatically learns matchup patterns like "D.Va is better than Reinhardt against Winston." Just keep recording your regular matches.

How is pair synergy determined?

Pair synergy is the win rate of two players together, minus each player's individual strength.

For example, if Player A (Reinhardt) + Player B (Genji) win more than the average of their individual win rates, that's positive synergy. If less, it's negative synergy.

When pair data is limited, synergy influence is automatically reduced, growing stronger as data accumulates. Bayesian estimation naturally adjusts the weight based on data volume.
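
The calculation described above fits in a few lines. A sketch assuming the "expected" baseline is the simple average of the two individual win rates (the guide does not specify the exact baseline formula):

```python
def pair_synergy(pair_win_rate: float, win_rate_a: float, win_rate_b: float) -> float:
    """Synergy = observed win rate of the pair together, minus what their
    individual win rates alone would predict (here: the simple average)."""
    expected = (win_rate_a + win_rate_b) / 2
    return pair_win_rate - expected

# A (50%) and B (52%) win 58% together: positive synergy
print(round(pair_synergy(0.58, 0.50, 0.52), 3))  # 0.07
```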

What happens to old match data?

Old data isn't discarded entirely but gradually loses influence.

  • Time decay — Exponential decay with a 60-day half-life. Recent matches are weighted more heavily
  • Patch boundary penalty — Additional 50% weight reduction when crossing season boundaries (major patches)

For example, a match from 90 days ago has roughly 35% weight from time decay alone. If a patch occurred in between, it drops to about 18%. This tracks meta changes while retaining some historical trends.
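
The example numbers above can be reproduced from the two rules directly (the function name is illustrative; the half-life and patch penalty are the values stated in the guide):

```python
def match_weight(days_ago: float, patches_crossed: int = 0,
                 half_life_days: float = 60.0,
                 patch_penalty: float = 0.5) -> float:
    """Weight of a past match: exponential time decay (60-day half-life)
    times a x0.5 penalty for each major patch boundary crossed."""
    time_decay = 0.5 ** (days_ago / half_life_days)
    return time_decay * (patch_penalty ** patches_crossed)

print(round(match_weight(90), 3))                      # roughly 35%
print(round(match_weight(90, patches_crossed=1), 3))   # roughly 18%
```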

Statistical Background

For those interested — an overview of the statistical methods behind this tool's recommendations.


Empirical Bayes (Beta-Binomial Model)

Win rate estimation for each hero, pair, and counter uses a Beta-Binomial model. Starting from prior distributions based on per-map and per-archetype game knowledge, it updates with individual data through Bayesian estimation.

Bayesian Shrinkage

With limited data, estimates are pulled toward game knowledge priors (shrinkage). This provides stable estimates without being swayed by a few biased results. As sufficient data accumulates, individual performance is reflected more strongly.
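
Shrinkage via the Beta-Binomial posterior mean can be sketched in a few lines. The prior strength (pseudo-game count) below is an assumed value for illustration, not the app's actual parameter:

```python
def shrunk_win_rate(wins: int, games: int,
                    prior_mean: float = 0.50,
                    prior_strength: float = 20.0) -> float:
    """Beta-Binomial posterior mean: the observed win rate pulled toward
    a game-knowledge prior. prior_strength acts like pseudo-games."""
    alpha = prior_mean * prior_strength        # prior pseudo-wins
    return (alpha + wins) / (prior_strength + games)

print(round(shrunk_win_rate(4, 5), 3))     # 5 games at 80%: pulled near the prior
print(round(shrunk_win_rate(80, 100), 3))  # 100 games at 80%: mostly the data
```

With only 5 games the estimate stays close to 50%, while at 100 games the observed 80% dominates, which is exactly the "data gradually overrides the prior" behavior described above.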

Time Decay and Confidence

Older match data naturally loses weight through exponential decay (60-day half-life). This decay is independent of patch cycles and simply weights recent data more heavily. Separately, data that crosses a major patch (season boundary) receives an additional penalty (×0.5). Patch dates are registered manually; once one is recorded, data from the pre-patch meta decays rapidly.

For detailed theory, see the Scoring Engine Theory page.
