comperank provides tools for computing rankings and ratings based on competition results. It is tightly connected to its data infrastructure package comperes: basic knowledge about creating valid competition results and Head-to-Head expressions with comperes is needed to use comperank efficiently.

The understanding of competition here is quite general: it is a set of games (abstract events) in which players (abstract entities) gain some abstract scores (typically numeric). The most natural example is sports results, but it is not the only one. A product rating can also be considered a competition between products as “players”: here a “game” is a customer that reviews a set of products by rating them with a numerical “score” (stars, points, etc.).

Rating is a list (in the ordinary sense) of numerical values, one for each player, or the numerical value itself. Its interpretation depends on the rating method: either a bigger value indicates better player performance, or vice versa.

Ranking is a rank-ordered list (in the ordinary sense) of players: rank 1 indicates the player with the best performance.

comperank leverages the tidyverse ecosystem of R packages. Among other things, this means that the main output format is a tibble.

Overview

comperank gets inspiration from the book “Who’s #1” by Langville and Meyer. It provides functionality for the following rating algorithms:

  • rate_massey() and rank_massey(): Massey method.
  • rate_colley() and rank_colley(): Colley method.
  • rate_keener() and rank_keener(): Keener method.
  • rate_markov() and rank_markov(): Markov method.
  • rate_od() and rank_od(): Offense-Defense method.
  • rate_iterative(), rank_iterative() and add_iterative_ratings(): general iterative rating method.
  • rate_elo(), rank_elo() and add_elo_ratings(): Elo method.

As you can see, there are three sets of functions:

  • rate_*(). Its output is a tibble with columns player (player identifier) and at least one rating_* column (rating value). Names of rating columns depend on the rating method.
  • rank_*(). Its default output is similar to the previous one, but with ranking_* columns instead of rating ones. It runs rate_*() and does ranking in the correct direction. Use the option keep_rating = TRUE to keep rating columns in the output.
  • add_*_ratings(). These functions are present only for algorithms with iterative nature applied to competition results with games only between two players. They return a tibble with one row per game (see wide format in Structure of competition results) and extra columns indicating players’ ratings before and after each game.

This README provides examples of basic usage of these functions. To learn more about the algorithms behind them, see the corresponding help pages.

For this README we will need the following packages:
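
library(comperank)
library(comperes)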

Installation

comperank is not on CRAN yet. You can install the development version from GitHub with:
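
# Assuming the development version is hosted at echasnovski/comperank:
# install.packages("devtools")
devtools::install_github("echasnovski/comperank")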

Structure of competition results

All functions in comperank expect competition results in one of the formats from the comperes package: either long or wide.

Long format is the most abstract way of presenting competition results. Basically, it is a data frame (or tibble) with columns game (game identifier), player (player identifier) and score, where each row represents the score of a particular player in a particular game. One game can consist of a variable number of players, which makes this format more flexible. Inside a game all players are treated equally.

Programmatically, long format is represented with the longcr S3 class, which should be created with the as_longcr() function from comperes.

For examples we will use the ncaa2005 data set from the comperes package, which is already of longcr class. It contains example competition results of an isolated group of Atlantic Coast Conference teams, as provided in the book “Who’s #1”:
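
ncaa2005
#> # A longcr object:
#> # A tibble: 20 x 3
#>    game player score
#>   <int> <chr>  <int>
#> 1     1 Duke       7
#> 2     1 Miami     52
#> 3     2 Duke      21
#> 4     2 UNC       24
#> 5     3 Duke       7
#> 6     3 UVA       38
#> # … with 14 more rows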

Wide format is a more convenient way to store results with a fixed number of players in a game. Each row represents the scores of all players in a particular game. Data should be organized in pairs of “player”-“score” columns. The identifier of a pair should go after the respective keyword and consist only of digits, for example: player1, score1, player2, score2. Column order doesn’t matter. The game column is optional.

Programmatically, wide format is represented with the widecr S3 class, which should be created with the as_widecr() function from comperes:
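
as_widecr(ncaa2005)
#> # A widecr object:
#> # A tibble: 10 x 5
#>    game player1 score1 player2 score2
#>   <int> <chr>    <int> <chr>    <int>
#> 1     1 Duke         7 Miami       52
#> 2     2 Duke        21 UNC         24
#> 3     3 Duke         7 UVA         38
#> 4     4 Duke         0 VT          45
#> 5     5 Miami       34 UNC         16
#> 6     6 Miami       25 UVA         17
#> # … with 4 more rows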

All comperank functions expect competition results as a data frame with long format structure, a longcr object, or a widecr object.

Algorithms with fixed Head-to-Head structure

Massey and Colley methods were initially designed for competitions where:

  • Games are held only between two players.
  • It is assumed that score is numeric and higher values indicate better player performance in a game.
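
Massey method

The idea of the Massey method is that the difference in players’ ratings should be proportional to the score difference in their direct games. A bigger value indicates better player performance. A minimal usage sketch (function and column names follow the rate_*() conventions above; the rating values shown are the hand-computed solution of the Massey linear system for ncaa2005):

rate_massey(ncaa2005)
#> # A tibble: 5 x 2
#>   player rating_massey
#>   <chr>          <dbl>
#> 1 Duke           -24.8
#> 2 Miami           18.2
#> 3 UNC             -8
#> 4 UVA             -3.4
#> 5 VT              18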

Colley method

The idea of the Colley method is that ratings should be proportional to the share of games the player has won. A bigger value indicates better player performance.
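
A minimal usage sketch (function and column names follow the conventions above; the rating values shown are the hand-computed solution of the Colley linear system for ncaa2005):

rate_colley(ncaa2005)
#> # A tibble: 5 x 2
#>   player rating_colley
#>   <chr>          <dbl>
#> 1 Duke           0.214
#> 2 Miami          0.786
#> 3 UNC            0.5
#> 4 UVA            0.357
#> 5 VT             0.643

rank_colley(ncaa2005, keep_rating = TRUE)
#> # A tibble: 5 x 3
#>   player rating_colley ranking_colley
#>   <chr>          <dbl>          <dbl>
#> 1 Duke           0.214              5
#> 2 Miami          0.786              1
#> 3 UNC            0.5                3
#> 4 UVA            0.357              4
#> 5 VT             0.643              2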

Algorithms with variable Head-to-Head structure

All algorithms with variable Head-to-Head structure depend on the user supplying a custom Head-to-Head expression for computing the quality of direct confrontations between all pairs of players of interest.

Computation of Head-to-Head values is done with functionality of the comperes package. Programmatically, it is implemented as a summary of players’ matchups: mini-“games” in widecr format between a pair of players. In other words, for every directed pair of players (order matters, including the “pair” of a player with him/herself):

  • A data frame of matchups is computed in wide format, i.e. with columns game, player1, score1, player2, score2.
  • This data frame is summarised with the Head-to-Head expression, supplied in dplyr fashion.

For more robust usage, comperes provides h2h_funs: a list of the most common Head-to-Head expressions, designed to be used with rlang’s unquoting mechanism. All comperank functions are designed to work smoothly with it.

For more clarity, here are some examples of computing Head-to-Head values:

# Examples of h2h_funs elements
names(h2h_funs)
#> [1] "mean_score_diff"     "mean_score_diff_pos" "mean_score"         
#> [4] "sum_score_diff"      "sum_score_diff_pos"  "sum_score"          
#> [7] "num_wins"            "num_wins2"           "num"

h2h_funs[1:3]
#> $mean_score_diff
#> mean(score1 - score2)
#> 
#> $mean_score_diff_pos
#> max(mean(score1 - score2), 0)
#> 
#> $mean_score
#> mean(score1)

# Computing Head-to-Head values with unquoting
comperes::h2h_long(ncaa2005, !!! h2h_funs)
#> # A long format of Head-to-Head values:
#> # A tibble: 25 x 11
#>   player1 player2 mean_score_diff mean_score_diff_pos mean_score sum_score_diff
#>   <chr>   <chr>             <dbl>               <dbl>      <dbl>          <int>
#> 1 Duke    Duke                  0                   0       8.75              0
#> 2 Duke    Miami               -45                   0       7               -45
#> 3 Duke    UNC                  -3                   0      21                -3
#> 4 Duke    UVA                 -31                   0       7               -31
#> 5 Duke    VT                  -45                   0       0               -45
#> 6 Miami   Duke                 45                  45      52                45
#>   sum_score_diff_pos sum_score num_wins num_wins2   num
#>                <dbl>     <int>    <dbl>     <dbl> <int>
#> 1                  0        35        0         2     4
#> 2                  0         7        0         0     1
#> 3                  0        21        0         0     1
#> 4                  0         7        0         0     1
#> 5                  0         0        0         0     1
#> 6                 45        52        1         1     1
#> # … with 19 more rows

comperes::h2h_mat(ncaa2005, !!! h2h_funs["mean_score"])
#> # A matrix format of Head-to-Head values:
#>        Duke Miami  UNC  UVA   VT
#> Duke   8.75   7.0 21.0  7.0  0.0
#> Miami 52.00  34.5 34.0 25.0 27.0
#> UNC   24.00  16.0 12.5  7.0  3.0
#> UVA   38.00  17.0  5.0 18.5 14.0
#> VT    45.00   7.0 30.0 52.0 33.5

# Computing Head-to-Head values manually
comperes::h2h_mat(ncaa2005, mean(score1))
#> # A matrix format of Head-to-Head values:
#>        Duke Miami  UNC  UVA   VT
#> Duke   8.75   7.0 21.0  7.0  0.0
#> Miami 52.00  34.5 34.0 25.0 27.0
#> UNC   24.00  16.0 12.5  7.0  3.0
#> UVA   38.00  17.0  5.0 18.5 14.0
#> VT    45.00   7.0 30.0 52.0 33.5

# To account for self play use `if-else`
comperes::h2h_mat(ncaa2005, if(player1[1] == player2[1]) 0 else mean(score1))
#> # A matrix format of Head-to-Head values:
#>       Duke Miami UNC UVA VT
#> Duke     0     7  21   7  0
#> Miami   52     0  34  25 27
#> UNC     24    16   0   7  3
#> UVA     38    17   5   0 14
#> VT      45     7  30  52  0

All functions for methods with variable Head-to-Head structure are designed with this rule in mind: the bigger the Head-to-Head value, the better player1 performed compared to player2.

Keener method

Keener method is based on the idea of “relative strength”: the strength of a player relative to the strength of the players he/she has played against. It is computed from the provided Head-to-Head values with some flexible algorithmic adjustments that make the method more robust. A bigger value indicates better player performance.
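
A minimal usage sketch (assuming, as stated above, that the Head-to-Head expression can be supplied via rlang’s unquoting of h2h_funs elements):

# Rate players with the Keener method, using the sum of player1's
# scores as the Head-to-Head value
rank_keener(ncaa2005, !!! h2h_funs["sum_score"], keep_rating = TRUE)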

Markov method

The main idea of the Markov method is that players “vote” for other players’ performance. Voting is done with Head-to-Head values: the bigger the value, the more “votes” player2 gives to player1. For example, if the Head-to-Head value is “number of wins”, then player2 “votes” for player1 proportionally to the number of times player1 won in a matchup with player2. Pay careful attention to Head-to-Head values for self plays.

Actual “voting” is done in Markov chain fashion: Head-to-Head values are organized into a stochastic matrix whose vector of stationary probabilities is declared to be the output ratings. A bigger value indicates better player performance.

As stochastic matrices can be averaged (with weights), this is the only method capable of directly averaging ratings over different Head-to-Head expressions.
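
A minimal usage sketch (same assumption about supplying Head-to-Head expressions; note that num_wins yields zero “votes” for self plays):

# Rate players with the Markov method, "voting" with number of wins
rank_markov(ncaa2005, !!! h2h_funs["num_wins"], keep_rating = TRUE)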

Offense-Defense method

The idea of the Offense-Defense (OD) method is to account for different abilities of players by combining different ratings:

  • A player who can achieve a high Head-to-Head value (even against players with strong defense) is said to have a strong offense, which results in a high offensive rating.
  • A player who can force opponents into achieving a low Head-to-Head value (even those with strong offense) is said to have a strong defense, which results in a low defensive rating.

Offensive and defensive ratings describe different skills of players. In order to fully rate players, OD ratings are computed: offensive ratings divided by defensive ones. The bigger the OD rating, the better the player’s performance.
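
A minimal usage sketch (same assumption about supplying Head-to-Head expressions):

# Rate players with the Offense-Defense method, using the sum of
# player1's scores as the Head-to-Head value
rank_od(ncaa2005, !!! h2h_funs["sum_score"], keep_rating = TRUE)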

Algorithms with iterative nature

Rating methods with iterative nature assume that games occur in some particular order. All players have initial ratings, which are updated after every game in the order the games appear. Although it is possible to consider games with more than two players, comperank only supports competition results with all games between two players.

Iterative ratings

Iterative ratings represent the general approach to ratings with iterative nature. It needs a custom rating function and initial player ratings to perform the computation. The rating function should accept four arguments: rating1 (scalar rating of the first player before the game), score1 (that player’s score), and rating2 and score2 for the data about the second player’s performance. It should return a numeric vector of length 2 whose elements represent the ratings of the two players after the game.

All functions assume that the order in which games were played is identical to the order of values in the game column (if present) or is otherwise defined by row order.

Arguably, the most useful function is add_iterative_ratings(), which adds, to the widecr format of competition results, information about player ratings before and after each game.

rate_iterative() and rank_iterative() return ratings after the last game.

# Adds 1 to the winner's rating and subtracts 1 from the loser's
# rating (a tie counts as a win for player1)
test_rate_fun <- function(rating1, score1, rating2, score2) {
  c(rating1, rating2) + ((score1 >= score2) * 2 - 1) * c(1, -1)
}
add_iterative_ratings(ncaa2005, test_rate_fun)
#> # A widecr object:
#> # A tibble: 10 x 9
#>    game player1 score1 player2 score2 rating1Before rating2Before rating1After
#>   <int> <chr>    <int> <chr>    <int>         <dbl>         <dbl>        <dbl>
#> 1     1 Duke         7 Miami       52             0             0           -1
#> 2     2 Duke        21 UNC         24            -1             0           -2
#> 3     3 Duke         7 UVA         38            -2             0           -3
#> 4     4 Duke         0 VT          45            -3             0           -4
#> 5     5 Miami       34 UNC         16             1             1            2
#> 6     6 Miami       25 UVA         17             2             1            3
#>   rating2After
#>          <dbl>
#> 1            1
#> 2            1
#> 3            1
#> 4            1
#> 5            0
#> 6            0
#> # … with 4 more rows

# Reverse the order of games
ncaa2005_rev <- ncaa2005
ncaa2005_rev$game <- 11 - ncaa2005_rev$game
add_iterative_ratings(ncaa2005_rev, test_rate_fun)
#> # A widecr object:
#> # A tibble: 10 x 9
#>    game player1 score1 player2 score2 rating1Before rating2Before rating1After
#>   <dbl> <chr>    <int> <chr>    <int>         <dbl>         <dbl>        <dbl>
#> 1     1 UVA         14 VT          52             0             0           -1
#> 2     2 UNC          3 VT          30             0             1           -1
#> 3     3 UNC          7 UVA          5            -1            -1            0
#> 4     4 Miami       27 VT           7             0             2            1
#> 5     5 Miami       25 UVA         17             1            -2            2
#> 6     6 Miami       34 UNC         16             2             0            3
#>   rating2After
#>          <dbl>
#> 1            1
#> 2            2
#> 3           -2
#> 4            1
#> 5           -3
#> 6           -1
#> # … with 4 more rows

# Ratings and rankings after the last game
rank_iterative(ncaa2005, test_rate_fun, keep_rating = TRUE)
#> # A tibble: 5 x 3
#>   player rating_iterative ranking_iterative
#>   <chr>             <dbl>             <dbl>
#> 1 Duke                 -4                 5
#> 2 Miami                 4                 1
#> 3 UNC                   0                 3
#> 4 UVA                  -2                 4
#> 5 VT                    2                 2

Elo method

Elo method is, basically, an iterative rating method with a fixed Elo rating function. The general idea is that the winner’s rating increase should be bigger the stronger the opponent was: a win over a better player leads to a bigger rating increase, while a win over a considerably weaker player barely affects the rating.
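
For illustration, here is a minimal sketch of the classic Elo update rule (a hand-rolled function, not comperank’s implementation; the K-factor of 30 and the logistic scale of 400 are conventional defaults, not necessarily comperank’s):

# Classic Elo update written as an iterative rating function
elo_rate_fun <- function(rating1, score1, rating2, score2, k = 30) {
  # Expected result for player1, given the rating difference
  expect1 <- 1 / (1 + 10^((rating2 - rating1) / 400))
  # Actual result from player1's perspective: 1 win, 0.5 tie, 0 loss
  result1 <- (score1 > score2) + 0.5 * (score1 == score2)
  delta <- k * (result1 - expect1)
  c(rating1 + delta, rating2 - delta)
}

# Such a function can be plugged into the general iterative machinery
add_iterative_ratings(ncaa2005, elo_rate_fun)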