Micro- and Macro-Level Churn Analysis of Large-Scale Mobile Games

Xi Liu¹*, Muhe Xie², Xidao Wen³*, Rui Chen², Yong Ge⁴, Nick Duffield¹ and Na Wang²
¹ Texas A&M University, College Station, TX, USA
² Samsung Research America, Mountain View, CA, USA
³ University of Pittsburgh, Pittsburgh, PA, USA
⁴ The University of Arizona, Tucson, AZ, USA
* This work was done during the authors' internships at Samsung Research America.

Abstract. As mobile devices become more and more popular, mobile gaming has emerged as a promising market with billion-dollar revenues. A variety of mobile game platforms and services have been developed around the world. A critical challenge for these platforms and services is to understand the churn behavior in mobile games, which usually involves churn at the micro level (between an app and a specific user) and at the macro level (between an app and all its users). Accurate micro-level churn prediction and macro-level churn ranking will benefit many stakeholders such as game developers, advertisers, and platform operators. In this paper, we present the first large-scale churn analysis for mobile games that supports both micro-level churn prediction and macro-level churn ranking. For micro-level churn prediction, in view of the common limitations of the state-of-the-art methods built upon traditional machine learning models, we devise a novel semi-supervised and inductive embedding model that jointly learns the prediction function and the embedding function for user-app relationships. We model these two functions by deep neural networks with a unique edge embedding technique that is able to capture both contextual information and relationship dynamics. We also design a novel attributed random walk technique that takes into consideration both topological adjacency and attribute similarities. To address macro-level churn ranking, we propose to construct a relationship graph with estimated micro-level churn probabilities as edge weights and to adapt link analysis algorithms on the graph. We devise a simple algorithm SimSum and adapt two more advanced algorithms, PageRank and HITS. The performance of our solutions for the two-level churn analysis problems is evaluated on real-world data collected from the Samsung Game Launcher platform. The data includes tens of thousands of mobile games and hundreds of millions of user-app interactions. The experimental results with this data demonstrate the superiority of our proposed models against existing state-of-the-art methods.

Keywords: Churn prediction; Representation learning; Graph embedding; Inductive learning; Semi-supervised learning; Mobile games

1. Introduction
With the wide adoption of mobile devices (e.g., smartphones and tablets), the
past decade has seen the mobile gaming industry grow rapidly into a billion-
dollar market around the globe. Mobile games generated $70.3 billion in revenue
in 2018 according to Newzoo's Global Games Market Report [1], and they are
expected to generate $106.3 billion in revenue in 2021, accounting for more than
half of the overall game market. Driven by this in-
creasingly vital market, software and hardware providers of mobile devices (e.g.,
Apple, Google and Samsung) have provided integrated mobile game platforms
and services for end users, game developers and other stakeholders. Some ex-
amples of such platforms and services are Apple App Store, Google Play and
Samsung Game Launcher [2].
    One particularly crucial task within these mobile game platforms and ser-
vices is understanding the churn behavior in mobile games, which usually in-
volves two levels of analysis: the micro level (between an app and a specific
user) and the macro level (between an app and all its users). In many traditional
business applications [3], accurately predicting micro-level churn has been a long-
standing and important task, the objective of which is to predict the likelihood
that a user will stop using a service or product. In our problem setting, the aim
is to predict the likelihood that a user will stop using a particular game app
in the future. The task is vital for the following reasons. First, the churn rate
of a mobile game is an important business metric to measure its success. With
accurately predicted churn probabilities of player-game pairs, a game platform
is able to prioritize its resources for better operation and management. Second,
predicting individual churn probabilities will enable a game platform to design
better marketing strategies to improve user retention. Examples include sending
push notifications and providing free items in games to users who are likely to
churn. Since, as is well known, the acquisition cost for new users is much higher
than the retention cost for existing users, successful micro-level churn prediction
could largely reduce costs for game developers and platform operators to increase
the number of active users, which plays a critical role in the success of a mobile
game. Third, the micro-level churn prediction provides direct input to determine
the right timing of app recommendations for a game platform. The results of this
research will enable testing of the hypothesis that a user is more likely to act on
an app recommendation when he/she is about to stop playing other games.
    The objective of macro-level churn ranking is to provide a list of games ranked
by their total number of users who will churn in the near future. It is an equally
important problem, and its solution can be applicable to various mobile app
services such as trend prediction [4], rating and review spam detection, ranking
fraud detection [5], and especially trend-sensitive recommendation [6]. There
has been a general consensus that the mobile app market is highly dynamic; new
apps continuously enter the market and distract users’ attention from existing
ones. A recommender can use the results of macro-level churn ranking as prior
knowledge, e.g., as an attribute in the representation of games. A recommender
can also slightly adjust the positions of recommended games before delivering
them to users, favoring those with smaller total numbers of near-future churns,
i.e., as a post-filter. Macro-level churn ranking in general can be very challenging
because the order of the games on the list (1) can change very fast with the
dynamics of the market, and (2) is based on future events, which can only be
estimated. The macro-level churn ranking problem is intimately related to the
micro-level churn prediction problem. In the extreme case where all micro-level
churn predictions are 100% correct, the number of users who will churn in the
future can be computed directly.
    There have been several previous studies [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
on micro-level mobile game churn prediction by using traditional machine learn-
ing models (e.g., logistic regression, random forests, Cox regression). However,
we observe several major limitations of these studies. First, they were developed
for predicting churn of a single or a few mobile games; none is capable of handling
churn prediction of large-scale mobile apps and users. For real-world applications,
a solution needs to handle tens of thousands of mobile games and hundreds of
millions of user-app interactions on a daily basis. Second, user-app interaction
data often comes with rich contextual information (e.g., WiFi connection status,
screen brightness, and audio volume), which has never been considered in the
existing studies. Third, the existing methods rely on handcrafted features that
usually cannot scale well in practice.
    To overcome all these limitations, we propose a novel inductive semi-supervised
embedding model that jointly learns the prediction function and the embedding
function for user-game interaction. The user-app interaction data includes de-
tailed information of opens, closes, installs and uninstalls of game apps for each
individual user. This data is collected from the Samsung Game Launcher platform
(https://www.samsung.com/au/support/mobile-devices/how-to-use-game-tools/),
which is pre-loaded in most smartphones manufactured by Samsung. We
model the interplay between users and games by an attributed bipartite graph
and then learn these two functions by deep neural networks with a unique em-
bedding technique that is able to capture both contextual information and dy-
namic user-game interaction. Our method is fully automatic and can be easily
integrated into existing mobile platforms.
    With the predicted micro-level churn probabilities, we devise a simple method
SimSum to estimate the macro-level churn ranking. Under the assumption
that the churn probabilities are properly estimated, we show that SimSum
yields an unbiased estimate. Furthermore, as the churn behavior may be
related among users and games, we propose to construct a user-game graph with
edges weighted by the estimated churn probabilities and adapt link analysis
algorithms to infer the macro-level churn ranking of games. Among the family
of link analysis algorithms, PageRank [18] and HITS (Hyperlink-Induced Topic
Search) [19] are chosen because they have been widely applied in abundant real-
world applications and shown to have stable performance.
Contributions. Our research contributions are summarized as follows:
1. To the best of our knowledge, this paper is the first to develop a solution
   for both micro-level churn prediction and macro-level churn ranking of large-
   scale mobile games using hundreds of millions of user-app interaction records.
   This solution has been tested in Samsung Game Launcher, one of the largest
   commercial mobile gaming platforms. Although the paper mainly applies the
   proposed solution to mobile game churn analysis, the solution is also applicable
   to churn analysis in other contexts.
2.   We propose a novel semi-supervised and inductive model based on embedding.
     Our model can capture the dynamics between users and mobile games based on
     the introduced temporal loss in the formulated objective function. The model
     is able to embed new users or games not used in training. This is critical for
     mobile game churn prediction because new games and users continually enter
     the market.
3.   We develop an attributed random walk technique that enables us to sample the
     contexts of edges in an attributed bipartite graph and that takes into account
     both topological adjacency and attribute similarities.
4.   We propose a simple method SimSum and adapt two link analysis algorithms
     PageRank and HITS to solve the macro-level churn ranking problem.
5.   We conduct a comprehensive experimental evaluation with large-scale real-
     world data collected from Samsung Game Launcher. The experimental results
     demonstrate that our model outperforms all state-of-the-art methods with
     respect to different evaluation metrics for micro-level churn prediction and
     macro-level churn ranking.
    The rest of the paper is organized as follows. Section 2 formulates the two-
level mobile game churn analysis problem. Section 3 discusses our solution in
detail. Section 4 presents our experimental results on large-scale real-world data.
Section 5 reviews the related literature. Finally, Section 6 concludes our work.

2. Problem Formulation
In the context of mobile games, churn is defined as a player stopping using
a game within a given period (i.e., there is no app usage in the period). The
duration T of the period may vary from application to application depending on
different business goals. T = 14 days and T = 30 days are some typical settings
used in industry [7, 8, 12]. In this paper, we consider the generic micro-level game
churn prediction and macro-level churn ranking problems without assuming any
particular value of T. We note that uninstall is different from churn: regarding
only uninstall as churn would be problematic since there may be a large time
gap between the cessation of playing and the uninstall, if the game is uninstalled
at all.

Table 1. Notations and definitions

  Notation      Description or definition
  G^(t)         Attributed graph at time t
  H^(t)         Historical attributed graphs {G^(i)}_{i=t0}^{t}
  U^(t)         Set of all player nodes in G^(t)
  V^(t)         Set of all game nodes in G^(t)
  E^(t)         Set of all edges in G^(t)
  e_uv^(t)      Indicator of the existence of edge (u, v) in G^(t)
  x_u^(t)       Feature vector of user u ∈ U^(t)
  x_v^(t)       Feature vector of game v ∈ V^(t)
  z_uv^(t)      Aggregated feature vector of edge (u, v) ∈ E^(t)
  n_u, n_v      Numbers of attributes in x_u^(t) and x_v^(t), resp.
  d             Number of attributes in z_uv^(t)
  m             Embedding dimension
  g             Edge embedding function g : R^d → R^m
  f             Churn prediction function f : R^m → [0, 1]
  l_p           Number of embedding hidden layers in Part I
  l_n           Number of prediction hidden layers in Part III
  |·|           Cardinality of a set
  A \ B         Set of elements in A but not in B
  1(c)          Indicator function for condition c
  S^(t)(v)      Ranking score of game v at time t
Fig. 1. An example of an attributed bipartite graph for mobile game churn prediction

    The relationship between players and games can be represented by an attributed
bipartite graph, as illustrated in Fig. 1, whose two parts correspond to players and
games. In the sequel, we use the terms player and user, and game and app,
interchangeably. Let G^(t) be the attributed bipartite graph at time t, let the vertex
set U^(t) denote the set of users, and let the vertex set V^(t) denote the set of games.
A player is represented by a node u ∈ U^(t) and a game is represented by a node
v ∈ V^(t). Each user u is associated with a feature vector x_u^(t) ∈ R^{n_u}, where
n_u is the size of x_u^(t); each game v is associated with a feature vector
x_v^(t) ∈ R^{n_v}, where n_v is the size of x_v^(t). There is an edge between nodes
u and v in G^(t) if player u has played v in the time window [t + 1, t + T]. The set
of neighbor nodes that connect to game v at time t is denoted by N_v^(t). The set
of edges at time t is denoted by E^(t).
    Now we are ready to define the two-level mobile game churn analysis problem
below.
Definition 2.1 (Mobile game churn analysis). Consider a collection of attributed
bipartite graphs observed from time t0 to time t (t > t0), denoted by
H^(t) = {G^(i)}_{i=t0}^{t}. Let N_v^(i) be the set of neighbors of game v at time i
and e_uv^(i) be the indicator of the existence of edge (u, v) in G^(i). The dual
objectives are:

1. Micro-level churn prediction: estimate the probability Pr(e_uv^(t+1) = 0 | e_uv^(t) = 1, H^(t))
   for each edge (u, v) in G^(t), i.e., the probability that (u, v) disappears in G^(t+1);
2. Macro-level churn ranking: rank the games in descending order of their total
   numbers of users who will churn at time t + 1.
    The notations used in this paper and their descriptions are listed in Table 1.
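To make the definition concrete, the following sketch derives edges and churn labels from a toy play log under Definition 2.1. The log format (user, game, day) and all values are our own illustrative assumptions, not the paper's data schema.

```python
# A toy sketch of Definition 2.1's edge/label construction; the play-log layout
# is a hypothetical assumption for illustration only.
def build_edges(play_log, t, T):
    """Edge (u, v) exists at time t if u played v in the window [t + 1, t + T]."""
    return {(u, v) for (u, v, day) in play_log if t + 1 <= day <= t + T}

def churn_labels(play_log, t, T):
    """Label 1 for (u, v) if the edge exists at time t but disappears at t + 1."""
    e_t, e_next = build_edges(play_log, t, T), build_edges(play_log, t + 1, T)
    return {(u, v): int((u, v) not in e_next) for (u, v) in e_t}

log = [("u1", "g1", 3), ("u1", "g1", 10), ("u2", "g1", 2)]
print(churn_labels(log, 1, 14))  # u1-g1 survives (label 0), u2-g1 churns (label 1)
```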

3. Methods
3.1. Overview of Our Solution

Most existing works [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] rely on traditional ma-
chine learning models to solve the micro-level churn prediction problem, which
suffer from several major limitations as mentioned in Section 1. In view of the
recent progress in graph embedding, a promising way is to adopt graph em-
bedding frameworks for churn prediction. However, we face several key technical
challenges that have not been addressed in prior studies: (1) Most existing meth-
ods are transductive, and thus cannot produce embeddings for new player-game
pairs. (2) Existing methods are either purely supervised or unsupervised, and
thus do not take full advantage of relevance between embedding and a task.
(3) Existing methods are node-centric, and thus are not directly applicable to
edge-centric tasks. (4) Existing methods mainly handle a static graph and do not
incorporate graph dynamics in embedding. In the mobile game industry, however,
players and games change very quickly: there are many new players, new games,
and new player-game relationships almost every day.
     In addressing these challenges, we propose a novel inductive semi-supervised
embedding model in dynamic graphs that jointly learns the prediction function
f and the embedding function g. The prediction function f and the embedding
function g are learned by deep neural networks (DNN). The architecture of the
proposed DNN is presented in Fig. 2, which consists of three parts.

Fig. 2. Deep neural network architecture of the inductive semi-supervised embedding model in training

    Part I is responsible for producing embedding feature vectors g(z_uv^(t)) from
raw edge feature vectors z_uv^(t) ∈ R^d, where d is the size of the raw edge feature
vectors. To learn the probability of churn, we need to construct a feature vector
for each (u, v) with e_uv^(t) = 1. However, it is impractical to calculate features
for all possible edges that may appear in the prediction period because the number
of possible edges is huge, i.e., O(|U^(t)| · |V^(t)|). Instead, we construct the feature
vector z_uv^(t) of (u, v) by attribute-wise cosine similarity aggregation of x_u^(t)
and x_v^(t). As such, all z_uv^(t) can be readily computed from the features of the
|U^(t)| + |V^(t)| nodes, where |·| denotes the cardinality of a set.
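As an illustration of this aggregation step, the sketch below computes one cosine similarity per attribute group; the exact partition of user/game attributes into the d aligned groups is not spelled out in the text, so the grouping here is hypothetical.

```python
# An illustrative sketch of attribute-wise cosine similarity aggregation; the
# grouping of attributes is an assumption, not the paper's published scheme.
import numpy as np

def edge_features(x_u, x_v, groups):
    """Build z_uv with one cosine similarity per attribute group."""
    z = []
    for idx in groups:                       # idx: indices of one attribute group
        a, b = x_u[idx], x_v[idx]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        z.append(a @ b / denom if denom > 0 else 0.0)
    return np.array(z)                       # dimension d = number of groups

x_u, x_v = np.random.rand(6), np.random.rand(6)
groups = [np.arange(0, 3), np.arange(3, 6)]  # two toy attribute groups
print(edge_features(x_u, x_v, groups))       # z_uv with d = 2
```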
    Part II is for inferring contexts from embedding feature vectors. Here the con-
text of an edge refers to the edges that are similar to and co-occur with the edge
under some graph sampling strategy, for example, random walk. Part I and Part
II form the unsupervised component of our model. They are jointly trained by
minimizing the error of incorrect context inference and inconsistency with tem-
poral smoothness (see Section 3.3 for an explanation of temporal smoothness).
Part I and Part II are trained in an inductive and edge-centric way. In contrast to
transductive node embedding that learns a distinct embedding vector for each
node, our model learns an embedding function that generalizes to any unseen
edge as long as its feature vector is available. We propose a novel
attributed random walk to sample similar edges as contexts (see Section 3.4 for
details).
    Part III fulfills the supervised churn prediction task from embedding feature
vectors. Part III forms the supervised component of the proposed model, which
is trained by minimizing the error of incorrect churn predictions. The supervised
and unsupervised components are trained simultaneously by being combined
into a single objective function. Traditional unsupervised embedding techniques
are not designed in a task-specific way and hence are not able to incorporate
task-specific information to improve performance. In contrast, in our model Part
III and Part II share the common hidden layers in Part I, and therefore they
are implicitly coupled with each other. This helps the embedding align with the
supervised prediction task.
    Part I and Part III both consider graph dynamics in training. Part I handles
graph dynamics by requiring the embeddings of the same edge at two consecutive
timestamps to stay close. Part III handles graph dynamics by requiring the churn
probabilities of the same edge at two consecutive timestamps to follow a decaying
pattern.
    The overall objective function of our model consists of four parts:
  L := L_S + α L_U + β L_T + γ L_R.    (1)
L_S denotes the supervised loss due to incorrect predictions and will be discussed
in Section 3.2.1. L_U denotes the unsupervised loss, which comes from failures of
context inference and will be addressed in Section 3.2.2. L_T is the temporal loss,
which consists of two parts, temporal smoothness and temporal dynamics, and will
be explained in Section 3.3. L_R, presented in Section 3.2.3, is the regularization
term, and (α, β, γ) are trade-off weights.
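A minimal sketch of how the four terms could be combined, using the trade-off weights reported in Section 4.2; the four scalar losses are toy stand-ins for L_S, L_U, L_T and L_R.

```python
# A minimal sketch of the combined objective in Equation (1); the loss values
# below are toy numbers, only alpha/beta/gamma follow Section 4.2.
import torch

def total_loss(L_S, L_U, L_T, L_R, alpha=0.02, beta=0.01, gamma=1e-5):
    return L_S + alpha * L_U + beta * L_T + gamma * L_R

L = total_loss(torch.tensor(0.7), torch.tensor(2.3),
               torch.tensor(0.4), torch.tensor(11.0))
print(L)  # tensor(0.7501): 0.7 + 0.02*2.3 + 0.01*0.4 + 1e-5*11
```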
    After the prediction function f and the embedding function g are learned by
minimizing the loss function in Equation (1), the churn probability
Pr(e_uv^(t+1) = 0 | e_uv^(t) = 1, H^(t)) of any individual user-game pair (u, v) can
be estimated by f(g(z_uv^(t))). Under the assumption that the micro-level churn
probability is properly estimated, we are able to show that for a specific game v,
the sum of the churn probabilities of all its users is an unbiased estimator of the
ground truth |N_v^(t) \ N_v^(t+1)|. This inspires a simple solution for macro-level
churn ranking: first estimate the total number of churns |N_v^(t) \ N_v^(t+1)| for
each game v ∈ V^(t) by summing all its users' churn probabilities, and then rank
all games based on the estimations. This method is referred to as SimSum. Furthermore,
based on the estimations. This method is referred to as SimSum. Furthermore,
we observe that there may be correlations in the churn behavior between differ-
ent games and different users. For instance, for the same user, the probability of
his/her churning a game is likely to be influenced by the probability of churning
other games. On the other hand, similar users may exhibit similar patterns of
churning the same game. Therefore, we propose to construct a user-game graph
with edges weighted by the estimated churn probabilities and adapt link analy-
sis algorithms to estimate macro-level churn ranking. Specifically, PageRank [18]
and HITS [19] are chosen due to their wide applications and stable performance.
Details will be discussed in Section 3.5.

3.2. Static Loss Functions

3.2.1. Supervised Loss Function L_S

The supervised loss function L_S is designed for Part III. Let
h_s^k(g(z_uv^(t))) = φ(W_s^k h_s^{k−1}(g(z_uv^(t))) + b_s^k) be the k-th hidden layer
for churn prediction (referred to as a prediction hidden layer in the sequel), where
W_s^k and b_s^k are the weights and biases of the k-th prediction hidden layer, and
φ(·) is a non-linear activation function. We model the churn prediction function f
by l_n such layers in Part III. Then the prediction output layer can be represented
by:

  f(g(z_uv^(t))) := P̂r(e_uv^(t+1) = 0 | e_uv^(t) = 1, H^(t))
                 := σ(h_s^{l_n}(g(z_uv^(t))))
                 := exp(h_s^{l_n}(g(z_uv^(t)))^T w_s) / (1 + exp(h_s^{l_n}(g(z_uv^(t)))^T w_s)),    (2)

where P̂r(·) is the estimate of Pr(·), σ(·) is the sigmoid function, and w_s is the
sigmoid weight vector that combines the output of the last hidden layer to predict
churn. Now we can define the supervised loss L_S as follows:

  L_S = (1/L) Σ_{i=0}^{t−1} Σ_{(u,v)∈E^(i)} δ_uv^(i+1) ( 1 − e_uv^(i+1) − σ(h_s^{l_n}(g(z_uv^(i)))^T w_s) )^2,    (3)

where L is the number of training examples and δ_uv^(i+1) is a censoring indicator
that will be discussed in Section 3.3.
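The following sketch reproduces the censored squared error of Equation (3) on toy tensors; churn_prob stands in for σ(h_s^{l_n}(g(z_uv^(i)))^T w_s).

```python
# A sketch of the censored squared error in Equation (3); all tensors are toy
# stand-ins for the quantities defined above.
import torch

def supervised_loss(e_next, delta_next, churn_prob):
    """e_next: edge indicators at i+1; delta_next: censoring indicators (1 =
    observed); churn_prob: predicted churn probabilities f(g(z_uv^(i)))."""
    sq_err = (1.0 - e_next - churn_prob) ** 2
    return (delta_next * sq_err).sum() / e_next.numel()  # average over examples

e_next = torch.tensor([1.0, 0.0, 1.0])  # survived, churned, survived
delta  = torch.tensor([1.0, 1.0, 0.0])  # the third example is right-censored
p      = torch.tensor([0.2, 0.8, 0.5])
print(supervised_loss(e_next, delta, p))  # tensor(0.0267)
```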

3.2.2. Unsupervised Loss Function L_U

The unsupervised loss function L_U is devised for Part II, which guides the
embedding of the handcrafted features z_uv^(t) ∈ R^d into a latent space
g(z_uv^(t)) ∈ R^m, where m is the size of the latent space. Denote the k-th hidden
layer for embedding (referred to as an embedding hidden layer in the sequel) by
h_e^k(z_uv^(t)) = φ(W_e^k h_e^{k−1}(z_uv^(t)) + b_e^k), where W_e^k and b_e^k are
the weights and biases of the k-th embedding hidden layer. We use the l_p layers
in Part I to represent the embedding function g, and the embedding output layer
can be represented by:

  g(z_uv^(t)) := h_e^{l_p}(z_uv^(t)).    (4)

We can define the unsupervised loss function as follows:

  L_U = − Σ_{i=t0}^{t} Σ_{(u,v)∈E^(i)} Σ_{(u′,v′)∈C_uv^(i)} δ_uv^(i) log Pr((u′, v′) | g(z_uv^(i))),    (5)

where C_uv^(i) denotes the context (i.e., the contextual edges) of (u, v) in G^(i).
The contextual edges C_uv^(i) are obtained by attributed random walk on the
bipartite graph, which will be discussed in Section 3.4.

    The likelihood of having a contextual edge (u′, v′) of (u, v) conditional on the
embedding of (u, v) is:

  Pr((u′, v′) | g(z_uv^(i))) = exp(g(z_uv^(i))^T w_{u′v′}) / Σ_{(u*,v*)} exp(g(z_uv^(i))^T w_{u*v*}),    (6)

where w_{u*v*} and w_{u′v′} are the weight vectors of edges (u*, v*) and (u′, v′) in
the softmax layer, respectively. The denominator is approximated by negative
sampling [20]. Although the embedding function is learned by training a context
inference task, it is still considered "unsupervised" because the contexts are
calculated by sampling on the attributed graph, which is independent of any
supervised learning task [21].
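A sketch of this context-inference objective, with the standard negative-sampling surrogate [20] in place of the full softmax denominator of Equation (6); all shapes and samples are illustrative.

```python
# A sketch of the context loss of Equations (5)-(6) under negative sampling;
# the dimensions and the random "samples" are toy assumptions.
import torch
import torch.nn.functional as F

def context_loss(edge_emb, ctx_weights, neg_weights):
    """edge_emb: g(z_uv), shape (m,); ctx_weights: softmax weights of contextual
    edges, shape (C, m); neg_weights: weights of negatively sampled edges, (K, m)."""
    pos = F.logsigmoid(ctx_weights @ edge_emb).sum()     # pull contexts closer
    neg = F.logsigmoid(-(neg_weights @ edge_emb)).sum()  # push negatives away
    return -(pos + neg)

m = 8
print(context_loss(torch.randn(m), torch.randn(4, m), torch.randn(5, m)))
```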
    The objective function in Equation (1) contains both the supervised loss
function and the unsupervised loss function. Thus the embedding hidden lay-
ers are jointly trained with prediction hidden layers. Compared to traditional
embedding methods, the semi-supervised approach makes the embedding more
suitable for the prediction task.
    Note that the target of the embedding process is to learn a mapping func-
tion from the feature space to the embedding space, instead of directly learning
the embeddings. Thus, the input for the embedding hidden layers only contains
attributes. A new edge can be embedded for churn prediction as long as we can
10                                                                               X. Liu et al

observe its attributes. This indicates that the proposed approach is inductive. It
is able to produce the embedding for an edge that is not used in training.

3.2.3. Regularization Loss L_R

The regularization loss is introduced mainly to avoid overfitting. The weights to
be regularized consist of {{W_e^i, b_e^i}_{i=1}^{l_p}, {W_s^i, b_s^i}_{i=1}^{l_n}, w_s}.
Therefore the regularization term can be expressed as:

  L_R := λ_0 Σ_{i=1}^{l_p} ‖W_e^i‖_2^2 + λ_1 Σ_{i=1}^{l_p} ‖b_e^i‖_2^2 + λ_2 Σ_{i=1}^{l_n} ‖W_s^i‖_2^2
         + λ_3 Σ_{i=1}^{l_n} ‖b_s^i‖_2^2 + λ_4 ‖w_s‖_2^2,    (7)

where {λ_i}_{i=0}^{4} are trade-off weights on the different regularization terms.
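A sketch of Equation (7) over illustrative PyTorch layers; all λ_i default to 1, matching the setting reported in Section 4.2, and the layer sizes are toy values.

```python
# A sketch of the L2 regularization in Equation (7); layer shapes are illustrative.
import torch

def reg_loss(emb_layers, pred_layers, w_s, lam=(1.0, 1.0, 1.0, 1.0, 1.0)):
    l0 = sum(l.weight.pow(2).sum() for l in emb_layers)   # embedding weights
    l1 = sum(l.bias.pow(2).sum() for l in emb_layers)     # embedding biases
    l2 = sum(l.weight.pow(2).sum() for l in pred_layers)  # prediction weights
    l3 = sum(l.bias.pow(2).sum() for l in pred_layers)    # prediction biases
    l4 = w_s.pow(2).sum()                                 # sigmoid weight vector
    return lam[0]*l0 + lam[1]*l1 + lam[2]*l2 + lam[3]*l3 + lam[4]*l4

emb = [torch.nn.Linear(30, 50), torch.nn.Linear(50, 50)]  # l_p = 2, toy sizes
pred = [torch.nn.Linear(50, 50)]                          # l_n = 1
print(reg_loss(emb, pred, torch.randn(50)))
```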

3.3. Temporal Loss Function
Temporal loss refers to the loss related to graph dynamics. Different from existing
works [20, 21, 22], the proposed embedding model takes into account graph
dynamics. Understanding the temporal dynamics of attributed bipartite graphs is
crucial to precisely model the churn behavior. We make several key observations
of the temporal dynamics, which help to achieve good prediction performance in
practical settings.

     Observation 1. For a given user-game play relationship, the longer the
     relationship exists, the more likely the user is to churn the game.

Fig. 3. Average mobile game retention rate as a function of time

    This is because the content of a mobile game is usually somewhat fixed.
Players can easily lose interest after going through all the content and passing
all the levels in a game, and many players churn even before passing all the
levels. With more days of play, their initial interest in the game gradually fades.
Indeed, 71% of all mobile app users churn within 90 days [23], and the churn rate
of mobile games is even higher. In Fig. 3 we plot the average retention rate of
mobile games as a function of time, based on Game Launcher data. It shows that
95% of user-game play relationships have ended after 40 days, which well justifies
our observation. We formally state Observation 1 below.
  f(g(z_uv^(i))) ≲ f(g(z_uv^(i+1)))   ∀ 0 ≤ i ≤ t − 1, (u, v) ∈ D^(i),    (8)

where D^(i) = {(u, v) : (u, v) ∈ E^(i) ∩ E^(i+1)} and ≲ represents "almost always
smaller than". We refer to this observation as temporal dynamics in the following
discussion.
    The second observation we make is stated in Observation 2.
     Observation 2. For a given user-game play relationship denoted by an
     edge in the attributed bipartite graph, its context usually evolves slowly at
     two consecutive timestamps.

    This observation is also derived from the real-world data; due to space
limitations, we omit the figure here. It follows that the topology and attribute
values of the attributed bipartite graph mostly evolve smoothly at two consecutive
timestamps, resulting in similar contexts for a given edge at consecutive
timestamps. Therefore, its embeddings at these two timestamps should also be
close, that is,

  g(z_uv^(i+1)) ≈ g(z_uv^(i))   ∀ 0 ≤ i ≤ t − 1, (u, v) ∈ D^(i),    (9)

where ≈ represents "almost always equal to". We refer to this observation as
temporal smoothness in the later discussion.
   The final observation is:
  Observation 3. By definition, churn naturally introduces right censoring
  into the training dataset.

The problem of right censoring has been widely studied [24] and is illustrated in
Fig. 4. The observation period refers to some time duration in history. Suppose
we are at time t and the observation period is from time t0 to time t. Data for
training and testing all comes from the observation period. Since the label of a
player-game pair at a specific timestamp is determined by their interaction in the
next T time duration, the labels of some player-game pairs in the last T duration
could be unknown. For instance, the last observation of the pair of player 2 and
game 1 (denoted by p2-g1) was in the last T time duration, and therefore the
labels after that time are unknown. This is known as right censoring. In contrast,
the pair p3-g3 has play records every day in the last T days, and it is not censored
during the observation period.

Fig. 4. An illustration of censored data by three randomly sampled player-game pairs
    Considering the existence of censored instances, we introduce a binary
indicator δ_uv^(i) to indicate whether an edge (u, v) is censored at timestamp i:
δ_uv^(i) = 0 if (u, v) is censored, and δ_uv^(i) = 1 otherwise. Then Inequality (8)
needs to be updated to

  f(g(z_uv^(i))) ≲ f(g(z_uv^(i+1)))    ∀ 0 ≤ i ≤ t_uv − 1, (u, v) ∈ D^(i),
  f(g(z_uv^(t_uv))) ≲ f(g(z_uv^(i)))   ∀ t_uv ≤ i ≤ t, (u, v) ∈ E^(t_uv),    (10)

where t_uv denotes the last timestamp at which the edge (u, v) was observed, i.e.,
t_uv = max{i : δ_uv^(i) = 1}. The update reflects the fact that after timestamp
t_uv the label of (u, v) becomes unknown. Since the existence of edge (u, v) after
t_uv is unknown, it is more reasonable to just require that Inequality (10) holds
pairwise between time point t_uv and all time points after t_uv. Therefore the
temporal loss can be expressed as:

  L_T := Σ_{i=t0}^{t−1} Σ_{(u,v)∈D^(i)} { ‖g(z_uv^(i+1)) − g(z_uv^(i))‖_2
         + [ 1(δ_uv^(i+1) = 1) f(g(z_uv^(i))) + 1(δ_uv^(i+1) = 0) f(g(z_uv^(t_uv))) − f(g(z_uv^(i+1))) ]_+ },    (11)

where 1(A) denotes the indicator function for event A and [x]_+ = max{x, 0}. The
first term corresponds to temporal smoothness and the second term corresponds to
temporal dynamics. Substituting Equations (2) and (4) into Equation (11), we
obtain the temporal loss L_T as follows:

  L_T := Σ_{i=0}^{t−1} Σ_{(u,v)∈D^(i)} { ‖h_e^{l_p}(z_uv^(i+1)) − h_e^{l_p}(z_uv^(i))‖_2
         + [ 1(δ_uv^(i+1) = 1) σ(h_s^{l_n}(g(z_uv^(i)))^T w_s)
             + 1(δ_uv^(i+1) = 0) σ(h_s^{l_n}(g(z_uv^(t_uv)))^T w_s)
             − σ(h_s^{l_n}(g(z_uv^(i+1)))^T w_s) ]_+ },    (12)

where σ(x) = exp(x)/(1 + exp(x)) as in Equation (2).
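The per-edge term of Equation (11) can be written down directly; the sketch below uses toy tensors for g(·) and f(g(·)) and switches the hinge anchor according to the censoring indicator.

```python
# A sketch of the per-edge term of Equation (11): an embedding-smoothness term
# plus a hinge on the churn probabilities; inputs are toy stand-ins.
import torch

def temporal_loss_term(g_i, g_next, f_i, f_next, f_tuv, delta_next):
    smooth = torch.norm(g_next - g_i, p=2)            # temporal smoothness
    anchor = f_i if delta_next == 1 else f_tuv        # censored edges anchor at t_uv
    dynamics = torch.clamp(anchor - f_next, min=0.0)  # [x]_+, temporal dynamics
    return smooth + dynamics

g_i, g_next = torch.randn(8), torch.randn(8)
print(temporal_loss_term(g_i, g_next, torch.tensor(0.3), torch.tensor(0.4),
                         torch.tensor(0.2), delta_next=1))
```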

3.4. Context Generation by Attributed Random Walk

All of the above discussion assumes the availability of the contexts of edges.
There has been a series of node-centric studies on graph embedding that propose
to apply random walks to sample contexts [25]. These methods are normally
topology-based. In our problem setting, the goal is to embed an edge, not a node.
In this case, a simple topology-based random walk may return two adjacent edges
that share the same player or the same game while totally ignoring the similarity
of the other end, which is undesirable. In contrast, attributed random walk
measures such similarity by attributes and allows transitions to similar nodes
even if they are not connected.
    To this end, we propose a novel attributed random walk technique that takes
into account both topological adjacency and attribute similarities when making
the transition decisions of the walk. Fig. 5 gives an illustrative example of
attributed random walk in an attributed bipartite graph. For clarity, we omit the
time index in the following discussion. The solid lines indicate edges that exist in
the attributed bipartite graph. We denote the type of node o by type(o); a node's
type can be either player or game. The dashed directed lines do not exist in the
original attributed graph but may be considered as transitions by our attributed
random walk due to attribute similarities.

Fig. 5. An illustrative example of attributed random walk in an attributed bipartite graph

    It is time-consuming to calculate pairwise similarities between a node and all
other nodes of the same type. For this reason, we add augmented edges for a
proportion of the same-type nodes. For example, the augmented edges of node o1
are {(o1, o) : sim(o1, o) > 1 − ε, type(o1) = type(o)}, where 0 < ε < 1 is a
filtering parameter and:

  sim(o1, o) := (x_{o1} · x_o) / (‖x_{o1}‖ ‖x_o‖).    (13)

The dashed undirected lines in Fig. 5 represent the added augmented edges
between the nodes and their similar same-typed nodes.
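The sketch below constructs augmented edges by thresholding pairwise cosine similarities as in Equation (13); for clarity it compares all same-type pairs, whereas the paper adds augmented edges only for a proportion of the same-type nodes, and the feature matrix is a toy stand-in.

```python
# A sketch of augmented-edge construction via Equation (13); exhaustive pairwise
# comparison is used here only for illustration.
import numpy as np

def augmented_edges(features, eps):
    """features: (n, k) attribute matrix for nodes of one type (players or games)."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T                   # pairwise cosine similarities
    n = len(features)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if sim[i, j] > 1 - eps]

games = np.random.rand(5, 4)
print(augmented_edges(games, eps=0.2))        # pairs with similarity > 0.8
```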
    Consider a random walker that has just traversed edge (v1, u1) in Fig. 5 and
now resides at node u1. The walker needs to decide which node to transit to.
Since attribute similarities matter in this walk, the walker cannot just evaluate
the nodes that are neighbors of u1, as suggested in [25]; it needs to evaluate all
nodes of the same type within the two-hop neighborhood of v1. We denote the
one-hop same-typed adjacent nodes of v1 by N1(v1) = {v : d(v1, v) = 1,
type(v) = type(v1)}, where d(v1, v) is the length of the shortest path between v1
and v. For example, v2 ∈ N1(v1) (due to the augmented edge between v2 and
v1). Similarly, we denote the set of two-hop same-typed neighbor nodes of v1 by
N2(v1) = {v : d(v1, v) = 2, type(v) = type(v1)}. Then the transition probability
of the attributed random walk can be calculated as follows:

  p(o | u1) = 0,                        if type(o) ≠ type(v1),
              1/p,                      if o = v1,
              sim(v1, o)/q,             if o ∈ N1(v1),
              sim(v1, o) · e_{u1,o}/q,  if o ∈ N2(v1),    (14)

where p and q are normalization constants used to control the walk strategy.
Recall that e_{u1,o} = 1 if there is an edge between u1 and o. The attributed walk
in an attributed bipartite graph passes through the two types of nodes alternately.
It increases the probability that any two consecutive edges in a path are similar,
and thus the probability that the whole set of edges in the attributed random walk
path is similar. As an example, suppose the walker arrived at u1 from v1, as
shown in Fig. 5. Since v1 and v2 are highly similar, the walker may transit to v2
and produce a path with edges (v1, u1) and (u1, v2), although u1 is not connected
to v2 in the bipartite graph.
    Our proposed method shares some similarities with the idea of [26]. However,
in their work attribute similarities have no influence on which nodes a walker goes
through. In our proposed method, when choosing the next node to visit, there is
a nonzero probability to choose those that are not connected to the present node
but share similar attributes to the previous node. For those that are connected
to the present node, the probability to choose from them is weighted by the
similarity between the descendant and the ancestor nodes. In this way, an edge
can be the context of another even when they are not adjacent but similar in
both ends.
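One transition step of Equation (14) can be sketched as follows; sim, the neighborhoods N1/N2 and the edge indicator are stand-ins for the quantities defined above, with p = 1 and q = 0.05 as reported in Section 4.2.

```python
# A sketch of one transition step of Equation (14); the walker has just
# traversed (v1, u1), and all inputs are toy stand-ins.
import random

def next_node(v1, u1, N1, N2, sim, has_edge, p=1.0, q=0.05):
    """N1/N2: one-/two-hop same-typed neighborhoods of v1; has_edge encodes e_{u1,o}."""
    candidates, weights = [v1], [1.0 / p]          # return to v1 with weight 1/p
    for o in N1:                                   # one-hop: weight sim(v1, o)/q
        candidates.append(o); weights.append(sim(v1, o) / q)
    for o in N2:                                   # two-hop: requires e_{u1,o} = 1
        if has_edge(u1, o):
            candidates.append(o); weights.append(sim(v1, o) / q)
    return random.choices(candidates, weights=weights, k=1)[0]

print(next_node("v1", "u1", N1=["v2"], N2=["v3"], sim=lambda a, b: 0.9,
                has_edge=lambda u, o: o == "v3"))
```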

3.5. Macro-Level Churn Ranking
The objective of macro-level churn ranking is to provide a ranked list of games
based on their total numbers of users who will churn in the near future. The
ground truth of this number for game v ∈ V^(t) is |N_v^(t) \ N_v^(t+1)|. Let
S^(t)(v) denote the ranking score of game v, where a larger S^(t)(v) means a
higher position. One solution is to first estimate the total number of users to
churn |N_v^(t) \ N_v^(t+1)| as the score S^(t)(v) for each game v ∈ V^(t) and then
rank all games based on the estimates. After the prediction function f and the
embedding function g are learned, the churn probability
P̂r(e_uv^(t+1) = 0 | e_uv^(t) = 1, H^(t)) of any individual user-game pair (u, v)
can be estimated by f(g(z_uv^(t))). We sum up the churn probabilities across all
users of game v and take the sum as the estimate of the total number of users to
churn:

  S_ss^(t)(v) = Σ_{u∈U^(t)} e_uv^(t) f(g(z_uv^(t))),    (15)

where the subscript "ss" indicates that the score is computed by the SimSum
method.
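A direct sketch of SimSum: the churn probabilities below are toy numbers standing in for f(g(z_uv^(t))).

```python
# A sketch of SimSum (Equation (15)): sum each game's predicted churn
# probabilities over its current players and rank games by the sums.
def simsum_ranking(edges, churn_prob):
    """edges: (u, v) pairs with e_uv^(t) = 1; churn_prob: (u, v) -> probability."""
    scores = {}
    for u, v in edges:
        scores[v] = scores.get(v, 0.0) + churn_prob[(u, v)]
    return sorted(scores, key=scores.get, reverse=True)   # descending S_ss^(t)

edges = [("u1", "g1"), ("u2", "g1"), ("u1", "g2")]
probs = {("u1", "g1"): 0.9, ("u2", "g1"): 0.2, ("u1", "g2"): 0.7}
print(simsum_ranking(edges, probs))  # ['g1', 'g2']: scores 1.1 vs 0.7
```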
    Now we show that SimSum provides an unbiased estimate of the ground truth
|N_v^(t) \ N_v^(t+1)| under the assumption that the micro-level churn probability
is properly estimated. Since H^(t) = {G^(i)}_{i=t0}^{t}, we have:

  E[ |N_v^(t) \ N_v^(t+1)| − S_ss^(t)(v) | H^(t) ]
    = E[ |N_v^(t) \ N_v^(t+1)| | H^(t) ] − E[ S_ss^(t)(v) | H^(t) ]
    = E[ Σ_{u∈U^(t)} 1(e_uv^(t) = 1) 1(e_uv^(t+1) = 0) | H^(t) ] − Σ_{u∈U^(t)} e_uv^(t) f(g(z_uv^(t)))
    = Σ_{u∈U^(t)} e_uv^(t) ( Pr(e_uv^(t+1) = 0 | e_uv^(t) = 1, H^(t)) − P̂r(e_uv^(t+1) = 0 | e_uv^(t) = 1, H^(t)) ).    (16)

Since the right-hand side of Equation (16) is 0 when the churn probabilities are
properly estimated, the proof is complete.
    Furthermore, we observe that there may exist correlations in the churn
behavior between different games and users. As such, we employ link analysis
algorithms, which are able to take into consideration global information and the
mutually reinforcing effects among different games and users, to infer the
macro-level churn ranking of games. In particular, we choose the PageRank [18]
and HITS [19] algorithms due to their wide application and stable performance.
To apply both algorithms, we first construct a user-game relation graph
G_r^(t) = {V^(t), E^(t), W_r^(t)}, where V^(t) and E^(t) are copied from the
attributed graph G^(t) and W_r^(t) is an adjacency matrix whose element in the
u-th row and v-th column satisfies

  W_r^(t)(u, v) = f(g(z_uv^(t))).    (17)

This relation graph is the input to both algorithms. The ranking scores computed
by PageRank and HITS are referred to as S_pg^(t) and S_hits^(t), respectively.

Algorithm 1: PageRank Based Macro-Level Churn Ranking
  Input: graph G_r^(t), maximal number of iterations M, damping factor d_α
  Output: ranking scores S_pg^(t)(v) for each game v ∈ V^(t)
  1  k = 0; initialize S_pg^(t)(i) = 1/|V^(t) ∪ U^(t)| for i ∈ V^(t) ∪ U^(t)
  2  while k ≤ M do
  3      copy the scores of the last iteration: T^(t)(i) = S_pg^(t)(i) for i ∈ V^(t) ∪ U^(t)
  4      for i ∈ V^(t) ∪ U^(t) do
  5          S_pg^(t)(i) = (1 − d_α)/|V^(t) ∪ U^(t)| + d_α · Σ_{j∈N_i^(t)} T^(t)(j) · W_r^(t)(i, j) / Σ_{ℓ∈N_i^(t)} W_r^(t)(i, ℓ)
  6      end
  7      k = k + 1
  8  end
  9  return S_pg^(t)(v) for v ∈ V^(t)

    The procedure to compute S_pg^(t) is given in Algorithm 1. The algorithm as-
signs a larger ranking score to a game if the game has more about-to-churn
16                                                                               X. Liu et al

players with larger scores. Moreover, the score will be propagated along with a
damping factor from one game to another game. For a specific game, the
algorithm handles different players differently: players with distinct churn
propensities contribute differently to the computation of the game's ranking score.
Considering that G_r^(t) is a bipartite graph, in each iteration the algorithm
computes a game's ranking score by summing up its players' scores. The sum is
weighted by the churn probabilities, so players with larger churn probabilities
contribute greater proportions to the game's ranking score. Meanwhile, each
contribution is normalized by the total number of games the player is playing.
The intuition is that the algorithm reduces a player's contribution if the player is
also likely to churn other games at this time point, as a player is rarely observed
to churn all games at a single time point in our experiments. In this way, the
correlation between the churns of different games for a specific player is
addressed.
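A runnable sketch that follows the update rule of Algorithm 1; the dict-of-dicts graph encoding and the example weights W_r(i, j) are our own illustrations.

```python
# A sketch following Algorithm 1's update on a toy weighted relation graph.
def pagerank_scores(graph, M=100, d_alpha=0.85):
    """graph: {node: {neighbor: weight}} over all players and games (undirected)."""
    n = len(graph)
    s = {i: 1.0 / n for i in graph}
    for _ in range(M):
        t = dict(s)                                 # scores of the last iteration
        for i in graph:
            out = sum(graph[i].values())            # normalizer over N_i
            s[i] = (1 - d_alpha) / n + d_alpha * sum(
                t[j] * w / out for j, w in graph[i].items())
    return s

g = {"u1": {"g1": 0.9, "g2": 0.7}, "u2": {"g1": 0.2},
     "g1": {"u1": 0.9, "u2": 0.2}, "g2": {"u1": 0.7}}
scores = pagerank_scores(g)
print(sorted(["g1", "g2"], key=scores.get, reverse=True))  # games ranked by S_pg
```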

Algorithm 2: HITS Based Macro-Level Churn Ranking
  Input: graph G_r^(t), maximal number of iterations M
  Output: ranking scores S_hits^(t)(v) for each game v ∈ V^(t)
  1  k = 0; initialize A_hits^(t)(i) = H_hits^(t)(i) = 1 for i ∈ V^(t) ∪ U^(t)
  2  while k ≤ M do
  3      copy the authority scores of the last iteration: T^(t)(i) = A_hits^(t)(i) for i ∈ V^(t) ∪ U^(t)
  4      for i ∈ V^(t) ∪ U^(t) do
  5          A_hits^(t)(i) = Σ_{j∈N_i^(t)} W_r^(t)(i, j) H_hits^(t)(j)
  6          H_hits^(t)(i) = Σ_{j∈N_i^(t)} W_r^(t)(i, j) T^(t)(j)
  7      end
  8      normalize {A_hits^(t)(i)}_{i∈V^(t)∪U^(t)} and {H_hits^(t)(i)}_{i∈V^(t)∪U^(t)}
  9      k = k + 1
 10  end
 11  return A_hits^(t)(v) for v ∈ V^(t)

    The procedure to compute S_hits^(t) is presented in Algorithm 2. The algorithm
uses the final authority score A_hits^(t)(v) as the ranking score S_hits^(t)(v) for
game v. Since the graph G_r^(t) is a bipartite graph composed of players and
games, a larger authority score is assigned to a game if the game has more players
with large hub scores. Similarly, a player has a large hub score when the player is
playing many games with large authority scores. In other words, the hub score of
a player in some sense describes the chance that a churn will happen with the
player, and the authority score of a game to some extent captures the number of
its players with high churn propensity. Meanwhile, for a specific game, the
algorithm handles different players differently, and for a specific player, different
games contribute different proportions. This is reflected in Lines 5-6 of
Algorithm 2, where the computations of the authority score and the hub score are
weighted by the estimated probability of churn.
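Analogously, a sketch of Algorithm 2 on the same toy graph encoding as above; the L2 norm used in the normalization step is an assumption, since the pseudocode does not fix the norm.

```python
# A sketch of churn-probability-weighted HITS per Algorithm 2 on a toy graph.
import math

def hits_scores(graph, M=100):
    a = {i: 1.0 for i in graph}                    # authority scores
    h = {i: 1.0 for i in graph}                    # hub scores
    for _ in range(M):
        a_prev, h_prev = dict(a), dict(h)          # last iteration's scores
        for i in graph:
            a[i] = sum(w * h_prev[j] for j, w in graph[i].items())
            h[i] = sum(w * a_prev[j] for j, w in graph[i].items())
        for d in (a, h):                           # normalize both score sets
            norm = math.sqrt(sum(x * x for x in d.values()))
            for i in d:
                d[i] /= norm
    return a

g = {"u1": {"g1": 0.9, "g2": 0.7}, "u2": {"g1": 0.2},
     "g1": {"u1": 0.9, "u2": 0.2}, "g2": {"u1": 0.7}}
auth = hits_scores(g)
print(sorted(["g1", "g2"], key=auth.get, reverse=True))  # games by authority
```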


4. Experimental Evaluation
In this section, we conduct a comprehensive experimental evaluation over the
large-scale real data collected from the Samsung Game Launcher platform. We
compare our semi-supervised model (referred to as SS in the sequel) with the
following state-of-the-art models for mobile game churn prediction:
– LR: the logistic regression based solution used in [17, 13, 9]
– RS: the supervised variant of our model, in which the loss function contains
  only the supervised component and the regularization term
– DT: a decision tree based solution
– RF: the random forests based solution used in [17, 9]
– SVM: the SVM based solution used in [13, 9].
   In the experiments, we consider the churn duration T = 14 days, but again the
proposed solution is not restricted to any particular choice of T.

4.1. Dataset and Feature/Label Construction

Two anonymized datasets were collected independently from the Samsung Game
Launcher platform within a 4-month period (from August 1st, 2017 to November
30th, 2017) with users' consent: one from users in the USA and the other from
users in Korea. We summarize the key statistics of these two datasets in Table 2.

Table 2. Dataset statistics

  Dataset    # of users    # of games    # of play records
  USA        15,000        19,705        76,468,301
  Korea      25,000        18,470        106,544,313
    The collected data contains three major types of information: (1) play history,
(2) game profiles, and (3) user information. Each play record in the play history
contains the anonymous user id, the game package name, and the timestamp
of play. It is also accompanied by rich contextual information, such as WiFi
connection status, screen brightness, audio volume, etc. Game profiles are col-
lected from different game stores, which include features like genre, developer,
number of downloads, rating values, number of ratings, etc. User information
contains the device model, region, OS version, etc. However, data within games
(e.g., levels) is not available due to privacy concerns.
    Features and labels need to be generated with care in order to avoid data
leakage. In general, labels and features are taken from disjoint periods to avoid
data leakage. The semi-supervised model is trained based on features and labels
within the observation period. Labels are whether a player churns a game on that
day. Features are constructed from historical data before the day to be predicted.
The training set and test set are split by label days in chronological order [27],
for example, taking the labeled data in the first 2/3 of the observation period
as the training set and the remaining 1/3 as the test set. This is also to ensure
that there is a time difference between the test set and the training set. The
The semi-supervised model, along with the SimSum, PageRank, and HITS methods,
is then evaluated on the test datasets.
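As a minimal illustration of this chronological split (assuming the labeled
examples sit in a pandas DataFrame with a date column; the file and column
names are hypothetical):

import pandas as pd

# Hypothetical labeled data: one row per (user, game, day) with a churn
# label; features for each row are built only from history before `date`.
labels = pd.read_csv("labeled_churn_data.csv", parse_dates=["date"])

# Split by label days in chronological order: the first 2/3 of the distinct
# label days form the training set, the remaining 1/3 the test set.
days = sorted(labels["date"].unique())
cutoff = days[int(len(days) * 2 / 3)]
train = labels[labels["date"] < cutoff]
test = labels[labels["date"] >= cutoff]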

4.2. Experimental Settings

We tune the hyperparameters, including the learning rate, batch size, regularization
terms, number of layers, and number of neurons per layer, based on the model
performance on the test datasets. To determine the parameters for micro-level
churn prediction, we follow our previous work [28]: a grid search over these
parameters is performed, and the combination yielding the best performance is
chosen. The regularization parameters λ0, . . . , λ4 are all set to 1. The
parameters α, β, and γ in Equation (1) are chosen to be 0.02, 0.01, and 1e-5,
respectively, and the parameters discussed in Section 3.4, including p and q,
are set to 1, 1, and 0.05, respectively. The maximal number of iterations M for
the PageRank and HITS methods is set to 100, and the damping factor for
PageRank is set to 0.85. The parameters used for training the deep neural
network are summarized below for each dataset.
Parameter settings of the USA dataset:
–    Player feature dimension: 10,042
–    Game feature dimension: 10,042
–    Player-game feature dimension: 30
–    Learning rate: the initial value is 0.017 and it decays as η = η0/(1 + k/2),
     where k is the number of epochs (see the sketch after the Korea settings)
–    Number of neurons: input layers 30, embedding layers 50, output layers 380K
–    Number of epochs: 6-8
–    Batch size: 1,024
–    Context number per user-game pair: 4
–    Optimizer: Adam method [29]
–    Activation function: rectified linear unit (ReLU)
Parameter settings of the Korea dataset:
–    Player feature dimension: 10,042
–    Game feature dimension: 10,042
–    Player-game feature dimension: 30
–    Learning rate: the initial value is 0.019 and it decays as η = η0/(1 + k/2),
     where k is the number of epochs
–    Number of neurons: input layers 30, embedding layers 50, output layers 632K
–    Number of epochs: 8-12
–    Batch size: 4,096
–    Context number per user-game pair: 4
–    Optimizer: Adam method [29]
–    Activation function: rectified linear unit (ReLU)
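The decay schedule used in both settings is simple to reproduce; a minimal
sketch (the helper name is hypothetical) is:

def learning_rate(eta0, k):
    # eta = eta0 / (1 + k/2), where k is the number of completed epochs;
    # eta0 = 0.017 for the USA dataset and 0.019 for Korea.
    return eta0 / (1.0 + k / 2.0)

# Example: per-epoch learning rates for 6 epochs on the USA dataset.
rates = [learning_rate(0.017, k) for k in range(6)]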
Table 3. Performance of different micro-level churn prediction models on USA
               Model           AUC          Recall        Precision
               SS              0.82         0.78          0.32
               RS              0.77         0.75          0.27
               LR              0.66         0.38          0.26
               DT              0.59         0.28          0.32
               RF              0.75         0.31          0.41
               SVM             0.61         0.78          0.18

Table 4. Performance of different micro-level churn prediction models on Korea
               Model           AUC          Recall        Precision
               SS              0.82         0.70          0.34
               RS              0.76         0.70          0.25
               LR              0.67         0.59          0.21
               DT              0.58         0.26          0.30
               RF              0.73         0.27          0.42
               SVM             0.63         0.67          0.18

4.3. Experimental Results

4.3.1. Micro-Level Churn Prediction

We use three widely-used evaluation metrics to compare the performance of
different micro-level churn prediction models. The most important metric with
respect to the business goals is the area under the ROC curve (AUC). Following
previous studies [17, 13, 9], we also consider precision and recall. Accuracy is
not used because our data is imbalanced, with around 85% negative instances in
the USA dataset and 86% in the Korea dataset.
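All three metrics are available in standard tooling; a minimal sketch using
scikit-learn (the arrays and the 0.5 decision threshold are illustrative) is:

import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Illustrative predictions: y_score holds predicted churn probabilities,
# y_true holds the ground-truth binary churn labels.
y_true = np.array([0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.8, 0.4, 0.35, 0.9, 0.2])

auc = roc_auc_score(y_true, y_score)       # area under the ROC curve
y_pred = (y_score >= 0.5).astype(int)      # threshold the probabilities
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)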
    We report the main experimental results in Table 3 and Table 4. It can be
observed that in general our model achieves the best AUC and recall on both
datasets. Our model outperforms all single models (i.e., LR, DT, and SVM) in
terms of all three metrics. In particular, it is worth mentioning that SVM
achieves a high recall at the cost of a very low precision because it produces
a large number of false positives, which makes it less useful for business
decision making. Compared with the ensemble method RF, which has so far been
considered the best method in the field, our model still achieves a clearly
higher AUC (0.82 vs. 0.75 on USA and 0.82 vs. 0.73 on Korea) and roughly 2.5
times the recall. RF achieves the highest precision on both datasets, but this
number is misleading: RF easily overfits and recognizes only a small proportion
of the churn instances, as evidenced by its poor recall, which makes it
difficult to meet the business requirements of churn prediction (e.g., targeted
promotion campaigns). We also compare the AUC of SS in training and testing in
Fig. 6. Since the
Fig. 6. Comparison of AUC for SS in training and testing: (a) AUC on USA;
(b) AUC on Korea

Fig. 7. AUC comparison between SS and RS under different numbers of epochs:
(a) AUC on USA; (b) AUC on Korea

curves are very close, we can conclude that our model is neither overfitting
nor underfitting.
    The performance difference between SS and RS justifies the benefit of
incorporating the unsupervised loss and the temporal loss in the objective
function. We provide a further comparison between SS and RS with respect to the
number of epochs in Fig. 7. Both models take 5-7 epochs to reach relatively
stable performance. We observe that SS generally outperforms RS under different
numbers of epochs, for both Korean and USA users.
    Since the architecture of our DNN is novel (e.g., it contains both
supervised and unsupervised outputs), we provide more details on how we choose
the parameters and train the model. A comparison of training AUC under
different learning rates is given in Fig. 8. It can be observed that the choice
of the learning rate greatly influences the model performance after the initial
epoch. We find that 0.1 is too large a learning rate, making the gradient-descent
steps too large to reach a good minimum, whereas 0.001 is too small, making
convergence to the optimum very slow. We therefore test learning rates between
0.1 and
Fig. 8. Comparison of AUC for SS in training under different learning rates:
(a) AUC on USA; (b) AUC on Korea

Fig. 9. Comparison of AUC for SS in training under different training methods:
(a) AUC on USA; (b) AUC on Korea

0.001, and find that 0.017/(#Epochs/2 + 1) works best for training on USA,
while 0.019/(#Epochs/2 + 1) works best for training on Korea.
    For the supervised and unsupervised components, we try two different
training methods: co-train and alternating train. Co-train means that we
simultaneously train with the supervised loss function and the unsupervised
loss function; alternating train means that we alternately train the
unsupervised component with the unsupervised loss function and the supervised
component with the supervised loss function. Alternating train is a widely-used
training method for similar architectures [21, 20]. It is interesting to
observe, however, that co-train outperforms alternating train in terms of AUC
under different numbers of epochs, as shown in Fig. 9. We therefore choose
co-train as the final training method in our experiments.
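The distinction between the two strategies can be summarized by the following
schematic sketch (assuming PyTorch; the toy model and the placeholder loss
terms are hypothetical stand-ins for the supervised and unsupervised components
of our objective):

import torch

model = torch.nn.Linear(30, 1)                    # toy stand-in for the DNN
optimizer = torch.optim.Adam(model.parameters(), lr=0.017)
x = torch.randn(64, 30)                           # dummy input batch
y = torch.randint(0, 2, (64, 1)).float()          # dummy churn labels

def sup_loss():
    # Placeholder supervised (churn prediction) loss.
    return torch.nn.functional.binary_cross_entropy_with_logits(model(x), y)

def unsup_loss():
    # Placeholder unsupervised (embedding) loss term.
    return model(x).pow(2).mean()

def co_train_step():
    # Co-train: one gradient step on the sum of both loss components.
    optimizer.zero_grad()
    (sup_loss() + unsup_loss()).backward()
    optimizer.step()

def alternating_train_step():
    # Alternating train: separate gradient steps, one per component.
    for loss_fn in (unsup_loss, sup_loss):
        optimizer.zero_grad()
        loss_fn().backward()
        optimizer.step()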

4.3.2. Macro-Level Churn Ranking

We use five widely-used evaluation metrics to compare the performance of
different macro-level churn ranking methods built on top of different micro-level
churn prediction methods: Kendall’s Tau correlation coefficient, weighted
Kendall’s Tau correlation coefficient, Spearman correlation coefficient,
Average Precision at K, and Mean Average Precision (MAP).
    In statistics, Kendall’s Tau (also known as the Kendall rank correlation
coefficient), the weighted Kendall’s Tau, and the Spearman coefficient are all
measures of rank correlation, i.e., of the similarity of two orderings of the
same data. For instance, Kendall’s Tau [30] is defined as

     Kendall’s Tau := (P − Q) / (n(n − 1)/2),

where P is the number of concordant pairs between the two orderings, Q is the
number of discordant pairs, and n is the number of elements in the ranked list.
In our problem setting, one ordering is obtained by ranking the games based on
one of the ranking scores, namely S_ss^(t)(v), S_hits^(t)(v), or S_pg^(t)(v),
and the other ordering is obtained by ranking the games based on the ground
truth |N_v^(t) \ N_v^(t+1)|. The weighted Kendall’s Tau is a weighted version
of Kendall’s Tau in which exchanges between highly ranked elements are more
influential than exchanges between lowly ranked ones; e.g., in [31] an exchange
between elements with ranks r and s has weight 1/(r + 1) + 1/(s + 1). The
Spearman coefficient, on the other hand, quantifies how well the relationship
between two ranked lists can be described by a monotonic function. Unlike the
Pearson correlation, which can only assess linear relationships, the Spearman
correlation captures both linear and nonlinear monotonic relationships. All
three metrics range between −1 and 1, and a value closer to 1 indicates that
the ranking of the games is more similar to that based on the ground truth.
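All three coefficients are available in SciPy; as a minimal sketch (with
illustrative score arrays), one day’s rankings could be compared as follows,
where weightedtau’s default hyperbolic weigher 1/(r + 1) should match the
weighting scheme of [31]:

import numpy as np
from scipy.stats import kendalltau, spearmanr, weightedtau

# Illustrative per-game scores: predicted ranking scores vs. the
# ground-truth churned-player counts for the same set of games.
predicted = np.array([12.3, 5.1, 9.8, 0.7, 3.2])
ground_truth = np.array([10.0, 6.0, 11.0, 1.0, 2.0])

tau, _ = kendalltau(predicted, ground_truth)
wtau, _ = weightedtau(predicted, ground_truth)  # hyperbolic weigher by default
rho, _ = spearmanr(predicted, ground_truth)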
    Since macro-level churn ranking is based on the results of micro-level
churn prediction, we compare the performance of different combinations of the
two parts. Each combination is referred to in the “macro-micro” format; for
example, SimSum-SS denotes the combination of SimSum as the macro-level churn
ranking method and our semi-supervised method as the micro-level churn
prediction model. Fig. 10, Fig. 11, and Fig. 12 present the performance of
different combinations in terms of the Kendall’s Tau, weighted Kendall’s Tau,
and Spearman coefficients, respectively. Each metric is evaluated on the same
test datasets described in Section 4.1. Since micro-level churn prediction is
performed daily, macro-level churn ranking is also computed every day, and the
performance reported in the figures is the average over the testing period. As
the ranked list of games can be as long as 20,000, we only report the
performance on the top K games, where the maximal K value is set to 500. It can
be observed that in most cases the macro-level churn ranking methods in
combination with our semi-supervised method outperform those in combination
with the baselines (i.e., LR, DT), especially when the value of K is small.
This is because SS achieves better churn prediction performance than the
baselines, which indicates that the quality of the churn prediction results
matters for the subsequent churn ranking, which takes the predicted churn
probabilities as inputs. Moreover, we find that under the Kendall’s Tau,
weighted Kendall’s Tau, and Spearman coefficients, the performance of HITS is
more sensitive to the quality of the churn prediction results. This may be
because, in each iteration, HITS normalizes the scores by the sum of scores
across all nodes, which can be large and lead to a skewed score distribution.
In contrast, SimSum
Fig. 10. Comparison of Kendall’s Tau correlation coefficients in testing under
different macro-level churn ranking and micro-level churn prediction methods:
(a) SimSum on USA; (b) SimSum on Korea; (c) HITS on USA; (d) HITS on Korea;
(e) PageRank on USA; (f) PageRank on Korea
Fig. 11. Comparison of Weighted Kendall’s Tau correlation coefficients in
testing under different macro-level churn ranking and micro-level churn
prediction methods: (a) SimSum on USA; (b) SimSum on Korea; (c) HITS on USA;
(d) HITS on Korea; (e) PageRank on USA; (f) PageRank on Korea
Fig. 12. Comparison of Spearman correlation coefficients in testing under
different macro-level churn ranking and micro-level churn prediction methods:
(a) SimSum on USA; (b) SimSum on Korea; (c) HITS on USA; (d) HITS on Korea;
(e) PageRank on USA; (f) PageRank on Korea
Fig. 13. Comparison of Average Precision at K in testing under different
macro-level churn ranking and micro-level churn prediction methods:
(a) SimSum on USA; (b) SimSum on Korea; (c) HITS on USA; (d) HITS on Korea;
(e) PageRank on USA; (f) PageRank on Korea
Table 5. Mean Average Precision under different micro-level churn prediction
and macro-level churn ranking methods on Korea and USA
               Area-Model      SimSum       HITS          PageRank
               Korea-SS        0.41         0.40          0.40
               Korea-LR        0.40         0.39          0.39
               Korea-DT        0.40         0.40          0.39
               USA-SS          0.41         0.40          0.38
               USA-LR          0.38         0.38          0.37
               USA-DT          0.38         0.38          0.38

and PageRank normalize the scores based only on the scores of neighbors, which
is more robust.
     We also evaluate the performance of the different churn ranking methods by
two metrics widely adopted in the recommendation domain: Average Precision at K
and Mean Average Precision (MAP). Precision at K corresponds to the percentage
of relevant results among the top K games of the ranked list. A game at
position i in the ranked list based on our ranking scores is considered
relevant if it is among the top i games of the ranked list based on the ground
truth. The comparison is reported in Fig. 13; as before, the scores reported
are the averages over all days in the testing period. In Fig. 13, we observe
patterns similar to those observed under the previous rank correlation metrics.
In most cases, the macro-level churn ranking methods in combination with the
proposed semi-supervised model SS achieve better performance than the
combinations with the other baselines. This again indicates the importance of
the positive interaction between micro-level churn prediction and macro-level
churn ranking. One shortcoming of Precision at K is that it fails to take into
account the positions of the relevant games. To overcome this limitation, we
further adopt Mean Average Precision [32], in which the relevance score is
weighted by position before being averaged. We report the comparison of the
methods in terms of MAP in Table 5; similar conclusions can be drawn from the
MAP results. It is interesting to note that after putting larger weights on
relevant games with higher ranking, the gap in sensitivity to churn prediction
quality between HITS and the other two methods is reduced.
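Under the relevance definition above, both metrics are straightforward to
compute; a minimal sketch (with illustrative ranked lists; MAP is then the mean
of the average precision over all days in the testing period) is:

def precision_at_k(ranked, truth, k):
    # A game at position i (1-based) in `ranked` is relevant if it is
    # among the top i games of the ground-truth ranking `truth`.
    relevant = [g in truth[:i + 1] for i, g in enumerate(ranked[:k])]
    return sum(relevant) / k

def average_precision_at_k(ranked, truth, k):
    # Position-weighted variant: precision@i is accumulated at every
    # relevant position i <= k, so higher-ranked relevant games weigh more.
    hits, score = 0, 0.0
    for i, g in enumerate(ranked[:k], start=1):
        if g in truth[:i]:
            hits += 1
            score += hits / i
    return score / hits if hits else 0.0

# Illustrative example with five games.
ranked = ["g1", "g3", "g2", "g5", "g4"]
truth = ["g3", "g1", "g2", "g4", "g5"]
p_at_3 = precision_at_k(ranked, truth, 3)
ap_at_3 = average_precision_at_k(ranked, truth, 3)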

5. Related Work
In this section, we review two categories of existing works that are relevant to
this paper. The first category includes the existing works for (game) churn anal-
ysis. The early works [7, 9, 8, 11, 13, 12] are based on more traditional machine
learning models, such as logistic regression, random forests, SVM, naive Bayes,
etc., and are experimentally evaluated on extremely small numbers of games (i.e.,
less than five). As shown in our experiments, their performance on large-scale
real data with tens of thousands of mobile games and hundred of millions of
user-app interactions is generally not satisfactory. In addition, we also observe
scalability issues when they are applied to large-scale data. Some recent research
has started to use more advanced models. [10, 14, 16] propose to use survival