Measuring Mobile Phone Use: Self-Report Versus Log Data

Journal of Computer-Mediated Communication


Jeffrey Boase
Ryerson University, 350 Victoria Street, Toronto, Ontario M5B 2K3

Rich Ling
IT University of Copenhagen and Telenor, Rued Langgaards vej 7, 2300 Copenhagen, Denmark

   Approximately 40% of mobile phone use studies published in scholarly communication journals base
   their findings on self-report data about how frequently respondents use their mobile phones. Using
   a subset of a larger representative sample we examine the validity of this type of self-report data by
   comparing it to server log data. The self-report data correlate only moderately with the server log data,
   indicating low criterion validity. The categorical self-report measure asking respondents to estimate
   ‘‘how often’’ they use their mobile phones fared better than the continuous self-report measure asking
   them to estimate their mobile phone activity ‘‘yesterday.’’ A multivariate exploratory analysis further
   suggests that it may be difficult to identify under- and overreporting using demographic variables
   alone.
Key words: measurement, mobile phones, self-report, log data

doi:10.1111/jcc4.12021

Introduction
Mobile phones are without question the most common form of electronically mediated communication
in the world. We have adopted the mobile phone as a safety link, and we have adopted it to facilitate
coordination in the family and at work. Textile weavers in Nigeria (Jagun, Heeks, & Whalley, 2008),
fishers in India (Jensen, 2007), and lovers in Mozambique (Archambault, 2011) have all adopted the
mobile phone for their various purposes. Teens in the US, as well as in many other countries, use the
mobile phone to send and receive an endless number of texts on all the large and small events of their
lives (Lenhart et al., 2010). Given these diverse uses, mobile phone use is a fertile area of research. Understanding how
we use the mobile phone helps us understand the functioning of society.
     The ability to do research on mobile telephony (or, for that matter, any other form of mediated
communication), however, depends on valid data. In many cases research on the mobile
phone relies on questionnaires. In the questionnaires, respondents are often asked to estimate the
frequency or the duration of their mobile phone use, either in summary form or during a specific time
period. Alternatively, respondents can be asked to use a diary or some other device to quantify their use.
These are nearly the same methods used to collect data on internet and social media use.
     There are clearly problems with these approaches. It is difficult for people to remember or to
characterize their use. In addition there may be different motivations to over- or underreport mobile

508              Journal of Computer-Mediated Communication 18 (2013) 508–519 © 2013 International Communication Association
phone use. For people who see use as a marker of their popularity, there will be an impulse to pad
the numbers. The opposite is true for people who have a more sober relationship to technology and who
feel the device is not central to their lives but who nonetheless own one. We are interested in getting a
handle on the validity of this information. In this paper we compare self-report measures of mobile
phone voice and SMS (Short Message Service; known as ‘‘texting’’ or ‘‘text messaging’’) use with server
log data to evaluate the criterion validity of self-report measures. We further conduct an exploratory
analysis to examine the extent to which demographic characteristics are associated with under- and
overreporting in self-report measures. Our analysis has two main implications. First, given that many
influential studies of mobile communication rely on self-report measures of mobile phone use, our
evaluation of these measures has strong implications for the field of mobile research as well as studies
on other forms of mediated communication that rely on similar methods. Second, given that most
research involving self-report measures also collects demographic data, our exploratory analysis of the
demographic characteristics associated with under- and overreporting of mobile phone usage informs
existing and future research about the types of respondents that are likely to contribute to the decreased
validity of self-report measures.

Self-Report Measures in Mobile Communication Studies
To confirm that the self-report measures we examine in this paper are indeed commonly used in
scholarly communication research we conducted an analysis of journal articles found through the Social
Sciences Citation Index. Using the ‘‘communication’’ filter to focus only on publications appearing
in communication sources, we searched with the keywords ‘‘mobile phone.’’ We then manually sifted
through the results and removed papers that were not based on empirical research, leaving us with 41
journal papers published between 2003 and 2010.
    In many cases the authors of these papers only mention the general nature of the questions asked
and do not provide their exact wording. However, enough description was provided for us to identify
two common types of self-report measures that operationalize intensity of mobile phone use: frequency
and duration. Frequency measures typically asked respondents to report on the number of calls or SMS
messages sent and received in a single day, or how often they use their mobile phones in more general
terms (e.g. ‘‘less than once a month,’’ ‘‘one to three times a month,’’ ‘‘once a week,’’ etc.). Duration
questions ask respondents to estimate amount of time that they spend using mobile phones in a single
day or over several days. Duration studies often involve the use of time diaries where respondents write
down the amount of time that they spend communicating using mobile phones. Sometimes duration
studies simply ask respondents to estimate the amount of time that they spend using their mobile
phones during a short period of time. Other types of self-report mobile phone use measures include the
number of friends and family contacted regularly by mobile phones, the places where mobile phones
are used, and self-perception attitudes about how extensively mobile phones are used. Approximately
40% of papers reviewed include at least one frequency of use measure, 27% include at least one duration
of use measure, 9% rely on self-report measures other than frequency or duration to gauge level of
use, and the remaining papers do not include any quantitative mobile phone use measures. Correlation
analysis showed little association between the type of measure used and the number of times that a
paper has been cited.
    The variety of approaches employed to measure mobile phone use indicates that there is not a single
commonly accepted approach to gathering data on this issue, likely because different questionnaires
are developed for different purposes. Nevertheless, the fact that approximately 40% of the papers
reviewed include at least one self-report measure regarding frequency of mobile phone use indicates

that collecting this type of information is applicable to a variety of studies. This is not surprising given
that frequency of mobile phone use is indicative of the extent to which mobile phones have become
embedded into everyday life. Moreover, as with any study in mediated communication, establishing
that a technology is used at least to some extent is necessary before further implications of its use can
be investigated. Given the wide applicability and popularity of self-report frequency of use measures we
focus specifically on this type of measure in this study.

Examining Self-Report Measures of Mobile Phone Use
Although communication researchers often use self-report measures of mobile phone use, we could
identify only two studies that examined the validity of these measures. One is a convenience sample
study of 93 volunteers conducted by Parslow, Hepworth, and McKinney (2003). In this study volunteers
were asked to report on the number of incoming and outgoing voice calls during the previous day,
week, or month, and further estimate the number of minutes spent on the voice calls. Their responses
were then compared to log data collected from four different mobile phone operators. Their results
showed moderate correlation between the self-report and log data, indicating that self-report measures
did not fully represent actual use patterns. There was also evidence of over-reporting, especially among
users with relatively low levels of activity. In another study, Cohen and Lemish (2003) used a quota
sample to compare self-report measures with real time data collection using Interactive Voice Response
technology over a 5-day period. They found the correlation between the questionnaire data and billing
data regarding the frequency of calling to be moderately strong, indicating a reasonable fit between the
self-report measures and actual behavior.
     While these two studies represent an important first step towards understanding the validity of
self-report measures in mobile communication studies, the nonrandom nature of their sampling
methods warrants further study of this issue. Moreover, these studies focused only on voice calling
activity and did not include measures of SMS messaging.
     In related research, social network studies have looked at self-report measures of communication
networks. Using four different samples, Bernard and Killworth (1977) conclude that respondents
do not know with any accuracy the individuals with whom they communicate. In another paper
that examines the results from seven experiments, Bernard, Killworth, and Sailer (1982) argue that
self-report of communication behavior bears no useful resemblance to actual communication behavior.
Freeman, Romney, and Freeman (1987) also find low levels of accuracy in respondents’ ability to
identify individuals who coattended colloquium presentations with them, further finding that errors
were biased by long-term attendance patterns. Although none of these studies examine self-report in
regards to mobile phone use, they do indicate that problems with self-report measures extend beyond
mobile phone use as a particular form of communication.
     There are three main reasons to question the validity of self-report measures regarding mobile
phone use: cognitive burden, social desirability, and conceptual validity. Cognitive burden occurs when
respondents attempt to recall behaviors that they do not typically think about or record on a regular
basis. To decrease this burden respondents often use heuristic shortcuts that decrease the validity
of measures (Sudman, Bradburn, & Schwarz, 1996). Moreover, since mobile phone interaction is
potentially regarded as a more general indicator of sociability, social desirability concerns may
cause respondents to overreport the extent to which they use their mobile phones. Finally,
even if self-report measures of mobile phone activity are accurate, there is some reason to question the
extent to which they measure the concepts that they are intending to measure. In particular, questions
that focus on mobile phone activity in a single day are suspect because it is unclear if the activity

occurring in that day is actually representative of more general levels of activity. For individuals with
irregular schedules, day-to-day mobile phone activity may fluctuate dramatically.

Method
In this study we draw on a combination of data collected through a nationally representative internet-
based survey conducted in Norway during October and November of 2008 and server log data for the
month in which respondents completed the survey. In order to ensure the representativeness of the
sample, participants were recruited from an earlier telephone survey, and a few questionnaires were administered
over the phone to older respondents who did not have internet access. Two versions of the mobile use
question were included in the survey: the frequency of mobile activity occurring ‘‘yesterday’’ and the frequency
of mobile phone use occurring over longer periods of time, e.g. on a weekly basis. The survey data that
we use in this study is drawn from the second wave of a two-wave study. In total, 1499 respondents
agreed to participate in the second wave (65% response rate), and respondents showing inconsistencies
between the first and second wave (for example, radically different ages, change in gender, etc. between
wave 1 and wave 2) were dropped leaving a total of 1382 respondents. Researchers potentially had access
to the log data for 613 of these second-wave respondents because these respondents were customers
of the telecommunication company funding the survey. Four hundred twenty-six of those customers
granted their informed consent for the researchers to access their log data.
     The demographic composition of those granting access to their data is as follows: 50% are female,
58% are married, 23% hold a college degree, 58% are working full time, the mean age is 49 years (SD =
15.00; skewness = −.17), the mean household size is 2.7 individuals (SD = 1.41; skewness = .85). Those
granting access to their log data were similar to the other second wave respondents who did not grant
access to their log data in terms of their sex, household size, and education. However, compared to the
other respondents, those who granted access to their log data were a mean of approximately 8 years
older, more likely to be married (58% married versus 42% married), and more likely to be working
full time (58% working full time versus 48% working full time). A separate analysis of second-wave
respondents indicates that the subset of respondents used in the analysis reported here–i.e. customers
who granted access to their log data–differ from other respondents in our representative sample mainly
because they are customers of a particular telecommunication company and not because of self-selection
in regards to their willingness to share their log data. This bodes well for the generalizability of our
results, although we caution that these results are still somewhat limited insofar as they apply most
directly to individuals who share the demographic traits of our subsample.

Self-Report Measures
The telephone survey data included two types of self-report measures aimed at measuring intensity of
mobile phone use for both voice and SMS messages. The first type of measure is continuous in nature
and asked respondents to report the number of times that they used their mobile phone ‘‘yesterday’’ for
outgoing voice calls and SMS messages. In one question they were asked: ‘‘How many times did you use
a mobile phone yesterday to call others, either for work or privately.’’ In another question respondents
were asked, ‘‘How many text messages did you send yesterday, either for work or privately?’’ We refer
to these types of continuous self-report questions as ‘‘yesterday’’ measures.
     The second type of question is categorical in nature and required respondents to report on how
often they use their mobile phones by choosing the most appropriate response from a set list. In one
question they were asked, ‘‘How often do you use a mobile phone to call others (not texting or SMS)?

More than 10 times a day, 5–10 times a day, 2–4 times a day, at least once a day, 3–6 times a week,
1–2 times a week, Less often, Never.’’ In another question they were asked, ‘‘How often do you use
the mobile phone to send or receive text messages/SMS?’’ and were given the same set of categorical
response options. We refer to these types of categorical self-report questions as ‘‘how often’’ measures. Note
that all self-report measures except for this last categorical measure of SMS messages refer only to
outgoing activity and that all questions have been directly translated from Norwegian.

Log Data
Server log data includes all outgoing voice and SMS events captured during the month in which the
survey took place. Three types of variables are constructed with this data for both voice and SMS events.
The first is simply the number of outgoing events that month. The second is the average number of
outgoing events per day as calculated by dividing the total number of events by the number of days in
that month. This type of variable is used for comparison with the yesterday measures of daily use. The
third is categorical, constructed using the same categories as those used in the how often measures, i.e.
More than 10 times a day, 5–10 times a day, 2–4 times a day, at least once a day, 3–6 times a week, 1–2
times a week, Less often, Never. Here a calculation is used to select the most appropriate category from
these choices using the log data.
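The construction of the three log-derived variables described above can be sketched as follows. This is only an illustration: the paper does not specify the exact cutoffs used to map average daily rates onto the response categories, so the thresholds, function names, and variable names below are all assumptions.

```python
# Sketch of the three log-derived variables for one respondent. The exact
# category cutoffs are not stated in the paper, so the thresholds below are
# plausible assumptions, as are all names.

def monthly_total(events):
    """Total number of outgoing events captured during the survey month."""
    return len(events)

def daily_average(events, days_in_month=31):
    """Average outgoing events per day: monthly total / days in month."""
    return len(events) / days_in_month

def how_often_category(avg_per_day):
    """Map an average daily rate onto the 'how often' response categories."""
    per_week = avg_per_day * 7
    if avg_per_day > 10:
        return "More than 10 times a day"
    if avg_per_day >= 5:
        return "5-10 times a day"
    if avg_per_day >= 2:
        return "2-4 times a day"
    if avg_per_day >= 1:
        return "At least once a day"
    if per_week >= 3:
        return "3-6 times a week"
    if per_week >= 1:
        return "1-2 times a week"
    if per_week > 0:
        return "Less often"
    return "Never"
```

Under these assumed cutoffs, a respondent with 62 logged calls in a 31-day month would average 2 calls per day and fall into the ‘‘2-4 times a day’’ category.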

Conceptualization and Operationalization
To allow for a fair comparison between the self-report and log measures it is important to be clear about
exactly what is being compared. Our comparison concerns the various possible operationalizations
of a concept, rather than the accuracy of the measures themselves. In other words, we are more interested
in the extent to which self-report measures can adequately operationalize a particular concept than in the
extent to which they directly measure the exact nuances of the behavior about which they ask.
For example, we are more interested in knowing whether asking people to report on the number of calls they
made yesterday is a good operationalization of mobile phone use in general than in whether respondents
are able to recall this behavior on a specific day with a high degree of accuracy.
     The broad concept that we are focusing on is intensity of mobile phone use, and the specific
dimension of this concept being examined is frequency of mobile phone use. Among the various
ways of operationalizing this dimension, monthly server log data represents one of the best possible
operationalizations because it is highly accurate. Indeed, it must be accurate because it is the basis
of billing. Moreover, since the data is collected for a sustained period of time it is less affected by
unusual events. Although it may be the case that monthly server log data collected during the holiday
season may be unusually high, the data that we use in this analysis is not collected during this
period of time. Moreover, all the respondents included in our analysis indicated that they are the sole
owner of their mobile phone and that they do not use anyone else’s mobile phone. In addition, the
practice of ‘‘SIM-switching’’ that is often seen in developing countries is not common in Norway.1
Thus, for all intents and purposes, the log data represents the total mobile phone use of a single
individual.
     Given that the server log data represents an ideal way of operationalizing frequency of use, our
analysis focuses on comparing it to the two types of operationalizations that are less ideal–namely,
the two types of self-report measures discussed above. While these measures are clearly aimed at
operationalizing frequency of use, issues of respondent recall, social desirability, and generalizability
give reason to doubt their validity. In the analysis that follows we examine the criterion validity of the

self-report measures by comparing them to the more ideal server log measures to address the following
research questions.
     Research Question 1: How well do the self-report measures compare to server log data?
     Since it is likely that at least some respondents will be more prone to over- or underreport their
usage and identifying such respondents may help researchers to develop better measures and decrease
their negative impact in the analysis, we further ask:
     Research Question 2: What are the demographic characteristics associated with over- and underre-
porting of mobile phone use?
     We focus on demographic characteristics primarily because such characteristics are typically
collected in self-report surveys.

Analysis and Results
To address our first research question, we start by comparing our log data with the self-report measures
using pairwise correlations. The results show that although the self-report measures are significantly
correlated with the log data (p < .001 for all correlations), the correlations are not as strong as could be
expected from measures that operationalize the same concept equally well.
     The correlations between the yesterday self-report questions and the corresponding log measures are .55, p
< .001, for outgoing voice calls, and .58, p < .001, for outgoing SMS messages. In order to examine the
possibility that the presence of outliers adversely affected these correlations, we performed logarithmic
transformations on these variables. Our results show lower correlations with these transformed variables,
at .23, p < .001, for outgoing voice calls and .41, p < .001, for outgoing SMS messages.
     The correlations between the how often self-report questions and the corresponding log data are .48,
p < .001, for voice calls and .35, p < .001, for SMS messages. Correlations with the transformed log
variables are .66, p < .001, for voice calls and .74, p < .001, for SMS messages. Given that the transformed
variables better controlled for the influence of outliers, these results indicate that the correlations
between self-report and log data for the how often measures were stronger than the correlations for the
yesterday measures.
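The correlation analysis above can be sketched as follows. The exact transform is not reported; log(x + 1) is a common choice for zero-inflated count data and is assumed here, as are the variable names.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def log_transform(values):
    """log(x + 1) transform to dampen the influence of heavy users."""
    return [math.log(v + 1) for v in values]

# Compare raw and transformed correlations, e.g. (hypothetical data):
# r_raw = pearson_r(self_report_calls, logged_calls)
# r_log = pearson_r(log_transform(self_report_calls), log_transform(logged_calls))
```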
     In sum, although these correlations may be high for variables that measure two different yet
associated things–for example, mobile phone use and life satisfaction–they are not high for variables
that are supposed to be measuring exactly the same thing. Overall, correlating self-report measures to
log data generally indicates that the self-report measures are only somewhat similar to the superior
log measures. The most promising self-report measure was the categorical how often measure of SMS
messaging. We note that this finding is unexpected given that this measure focused on both outgoing
and incoming messages, while the log data captured only outgoing SMS messages, though there is often
symmetry between incoming and outgoing calls. The other self-report measures that show less correlation
with the log data asked about outgoing voice and SMS events only, making them theoretically more
comparable to the outgoing log data.
     Next, we compare the self-report measures directly to the log data to understand exactly how the
self-report measures differ from the log data, and to confirm that these differences are indeed significant.
For this set of analyses we use log variables that were recoded to make them directly comparable to
the self-report measures. We start by describing the difference between yesterday self-report and log
measures, as presented in Table 1. We find that respondents report making more voice calls yesterday
than the log data shows that they make during an average day. The mean number of reported calls
yesterday is 3.88, while the mean number of calls logged on an average day is 40 percent lower than
that number (2.38). The median number of calls reported yesterday is 3, while the median number of

Table 1 Central tendency comparison of the ‘‘yesterday’’ self-report measure and average daily call log
activity

                                    Voice                                                      SMS
               Self-report           Log            Difference           Self-report            Log           Difference

Mean               3.88              2.38              1.50                  6.19              3.95              2.24
Median             3.00              1.43              1.57                  3.00              1.87              1.13
SD                 4.80              2.99              1.81                 13.98              8.81              5.17

calls logged on an average day is less than half that number (1.43). We also find a much larger standard
deviation for the self-report yesterday measures than the voice log measure (4.8 versus 2.99, respectively),
indicating a much greater range of variability in the responses given through this self-report measure
than we would expect based on the voice log measure. As with the voice self-report yesterday measures,
the SMS message self-report measures were also higher than the log SMS measures. Respondents report
a mean of 6.19 SMS messages yesterday, while the mean number of SMS messages logged in an average
day is 3.95. The median number of self-report SMS messages is 3.00, while the median number of
logged SMS messages is 1.87. The standard deviation for the self-report SMS measure is 13.98, while
the standard deviation of the log SMS measure is 8.81.
     A one-tailed t test comparing the self-report and log measures of voice calls occurring yesterday
confirms that the mean of the self-report measure is significantly higher than the mean of the log
measure, t(425) = 7.72, p < .001. A one-tailed t test comparing the self-report and log measures for the
number of SMS messages sent yesterday also shows that the mean of the self-report measure is significantly
higher than the mean of the log measure, t(425) = 4.07, p < .001.
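The reported degrees of freedom (425 with 426 respondents) are consistent with a paired design. A minimal standard-library sketch of such a test statistic (in practice a routine such as scipy.stats.ttest_rel would be used) is:

```python
import math

def paired_t(self_report, log_measure):
    """Paired t statistic for the mean difference between two measures."""
    diffs = [s - l for s, l in zip(self_report, log_measure)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)
    return mean_d / se, n - 1  # t statistic and degrees of freedom
```

A positive t statistic here corresponds to overreporting: the self-report values exceed the logged values on average.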
     We also compared the how often self-report measure to recoded log variables. Table 2 shows the
percentage of respondents in each category of these self-report and log measures. The results generally
show that a greater percentage of respondents falls into the higher-use categories of the self-report measures
than into the corresponding categories of the log measures. The main exception is that respondents whose log data show an extremely
low amount of usage do not tend to report this low usage in the self-report measures. The log data
showed that 17% of the respondents made less than one voice call per week; however, only 3% of the

Table 2 Percent comparison of the ‘‘how often’’ self-report measure and categorized call log activity

                                             Voice                                               SMS
                          Self-report         Log         Difference         Self-report          Log         Difference

11+ times a day                 9               3               6                 11                7             4
5-10 times a day               15              10               5                 24               15             9
2-4 times a day                32              28               4                 28               27             1
At least once a day            14              18              −3                 10               13            −3
3-6 times a week               19              15               3                 14               10             4
1-2 times a week                8               9              −1                  8                6             1
Less often                      3              17             −14                  5               23           −18
Total percent                 100             100                                100              100

respondents self-reported having this low level of calling activity. This finding may point to respondents’
reluctance to report low levels of socialization in the questionnaire. The log data also showed that
23% of the respondents sent less than one text message per week, yet only 5% of the respondents
self-reported exchanging less than one text message per week.
     To confirm that the self-report and log measures for the categorical how often variables differed
significantly according to more stringent statistical criteria, we conducted Chi-square goodness-of-fit
tests using the csgof program, installed as an add-on command in Stata. Using this program we
were able to compare the frequency of respondents in each category of the self-report measures with
the frequencies that we would expect based on the log data. The results of this test showed that the
frequencies of respondents in the categories of the voice self-report measure differed significantly from
the frequencies that could be expected based on the voice log measure, χ2(6) = 121.49, p < .001.
Similarly, results showed that the frequencies of respondents in the categories of the SMS self-report
measure differed significantly from the frequencies that could be expected based on the SMS log
measure, χ2(6) = 124.71, p < .001.
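The comparison performed by csgof amounts to a standard goodness-of-fit statistic: observed counts from the self-report categories against expected counts derived from the log-data category proportions. A sketch of that calculation (not the csgof implementation itself):

```python
def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit statistic and degrees of freedom."""
    assert abs(sum(observed) - sum(expected)) < 1e-9, "totals must match"
    chisq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chisq, len(observed) - 1
```

With seven response categories (Never is empty here, leaving seven rows in Table 2), the test has 6 degrees of freedom, matching the reported values.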
     In sum, regarding our first research question, our results show that self-report measures of voice
calling and SMS messaging generally do not compare favorably to log measures. This is true for both
the self-report measures that require respondents to report on these behaviors yesterday, and for the
self-report measures that require respondents to report how often these behaviors occur by selecting
from a number of predetermined categories. In general, respondents are more likely to overreport than
to underreport voice calling and SMS messaging.
     To address our second research question, we conducted an exploratory multivariate analysis to
examine the extent to which demographic characteristics can be used to predict under- and overreporting
of voice calling and SMS messaging. We control for whether respondents pay their mobile phone bills
completely on their own since this likely affects the self-monitoring behavior that would assist in recall
and is also related to factors such as employment status, age, and education.
     We first examine under- and overreporting of the yesterday measures. Accordingly, two dependent
variables are used: one indicating the amount of overreporting that occurs when the self-report measure
is greater than the log measure for an average day, and the other indicating the opposite. The results of this
analysis are presented in Table 3.
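The coding of the two outcome variables can be illustrated as follows (a sketch only; the paper does not give the exact coding, and the names are assumptions):

```python
def reporting_gaps(self_report, log_avg):
    """Split the self-report minus log difference into over- and underreporting."""
    diff = self_report - log_avg
    overreport = diff if diff > 0 else 0.0   # positive gap: overreporting
    underreport = -diff if diff < 0 else 0.0 # negative gap: underreporting
    return overreport, underreport
```

For example, a respondent who reports 5 calls yesterday against a logged daily average of 3 contributes 2 to the overreporting variable and 0 to the underreporting variable.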
     In regard to voice calling, different demographic traits are associated with overreporting than
are associated with underreporting. Being male is positively associated with overreporting while
household size is positively associated with underreporting (p < .001 for both coefficients). Working
full or part time is positively associated with underreporting (p < .05). In regard to SMS messaging, age
is negatively associated with both overreporting and underreporting (p < .001 for both coefficients),
implying that young adults provide less valid reports than those who are older. Being female and being
employed are both positively associated with underreporting of SMS messages (p < .05). Being married
is negatively associated with overreporting of SMS messages (p < .01). The adjusted R2 statistics for
these models are all relatively low, indicating that demographic traits do not explain a great amount of
the variance in under- and overreporting.
     We also used a multivariate analysis to explore demographic factors associated with under- and
overreporting that occurred with the how often self-report variable, using dichotomous variables to
indicate under- and overreporting that occurred when respondents selected a category that differed
from the appropriate category indicated by the log data. The results of this analysis are presented in
Table 4.
Journal of Computer-Mediated Communication 18 (2013) 508–519 © 2013 International Communication Association   515

Table 3 Regression analysis of overreporting and underreporting with the ‘‘yesterday’’ self-report
measure

                             Voice                                SMS
                 Overreporting   Underreporting      Overreporting   Underreporting

Pays Cost            −.438∗          −.200               −.248           −.177
Male                  .551∗∗∗         .064                .290           −.507∗
Age                  −.011           −.008               −.036∗∗∗        −.028∗∗
House size           −.027            .253∗∗∗            −.033            .087
Married              −.056           −.117               −.703∗∗         −.308
College degree       −.021            .114               −.389           −.392
Working               .027            .367∗              −.309            .521∗
Constant             1.426∗∗         −.448               3.403∗∗∗        1.507∗
R²                    .121            .236                .250            .302
Adjusted R²           .095            .202                .224            .273
N                  243             164                 212             174

Note: Scores are unstandardized regression coefficients. ∗p < .05. ∗∗p < .01. ∗∗∗p < .001.

Table 4 Logistic regression analysis of overreporting and underreporting with the ‘‘how often’’
self-report measure

                             Voice                                SMS
                 Overreporting   Underreporting      Overreporting   Underreporting

Pays Cost            −.317            .259               −.180           −.057
Male                  .490           −.704∗               .535∗          −.348
Age                   .018            .002                .025∗           .006
House size           −.085            .142                .042           −.026
Married              −.169           −.078               −.138           −.015
College degree        .281           −.052                .474            .086
Working              −.096            .017                .329           −.372
Pseudo R²             .030            .023                .035            .011
χ²                  15.05            9.51               17.94            3.81
N                  248             219                 260             214

Note: Scores are unstandardized regression weights. ∗p < .05. ∗∗p < .01. ∗∗∗p < .001.

     This multivariate analysis shows that only a few factors are significantly associated with under- and
overreporting. In regard to voice calls, only being male was significantly and negatively associated with
underreporting (p < .05). In regard to SMS messaging, only being male and age were significantly and
positively associated with overreporting (p < .05). That is, males were significantly less likely to underreport
their voice mobile use, while males and older respondents were more likely to overreport their use of
SMS. As with the multivariate analysis of the yesterday self-report measures, the low Pseudo R² statistics
of these multivariate models indicate that demographic traits do not explain a great amount of the
variance in under- and overreporting that occurs with the how often self-report measures.
     In sum, our exploratory analysis links a few demographic traits to under- and overreporting,
especially with regard to the yesterday self-report measure. Nevertheless, these demographic traits do
not explain a large amount of variance in our models, which suggests that it may be difficult to identify
and compensate for under- and overreporting using demographic traits alone.

Limitations
This study faces a few limitations. First, given that it takes place in Norway, it is possible that the results
may not generalize well to other cultures or contexts. While this problem is common to most studies
and there is no obvious reason to think that Norwegians would be more likely to under- or overreport
their mobile phone usage than people in other countries, this point is worth bearing in mind. Second,
our focus has been on how well these measures operationalize the concept of frequency of mobile
phone use, rather than the specific accuracy of these measures in their own right. For example, we are
more interested in how well questions about mobile activity that occurred yesterday generally measure
frequency of mobile phone use than we are interested in how well they accurately reflect activity that
occurred on that specific day. For this reason these results do not apply to studies for which it is
important to have an accurate measure of activity occurring on a specific day. This is not to say that these
‘‘yesterday’’ measures are necessarily accurate, just that we cannot comment on their accuracy based
on these findings. Third, although our data come from a subset of a nationally representative sample,
it is possible that groups who systematically avoid participating in these surveys answer self-report
questions differently than those who participate. However, we find that the demographic composition
of the subsample is mostly accounted for by the fact that its respondents were customers of the
telecommunications company funding this study, and customers of this company tended to have
these demographic traits. Another issue is that this study is founded on behavioral data in addition
to a few demographic variables. The available data did not include motivational questions because the
questionnaire was focused on other issues. This does not mean that there is no interesting link between
motivational variables and actual traffic data, only that such a link was not part of this study.
     Finally, although all respondents who participated in this study report that they are the sole users
of the phones for which we collected the log data, it is possible that family members or friends may
occasionally borrow their phones thereby decreasing the validity of the log data. We have little reason
to believe that this is a common problem in our data, however, such an error may have occurred in
some cases.

Discussion and Conclusion
Our results indicate that while self-report indicators aimed at measuring the frequency of mobile
phone use are not completely out of line with actual behavior, these self-report measures suffer from
low criterion validity. That is to say, they are generally suboptimal ways of operationalizing this
dimension of mobile phone use. These measures only moderately correlate with actual behavior, they
vary more widely than actual behavior, and they are prone to overreporting. The categorical how often
frequency measure that used predetermined response categories fared better than the continuous
yesterday frequency measure that used an open-ended response approach. Perhaps this was because
providing predetermined responses constrained respondents who would otherwise have overreported
more widely, and cued them to an appropriate range of possible responses. The downside of using the
categorical measure is that it can be more difficult to incorporate into certain types of analyses and it
provides less information than the continuous measure.
     The results of this study have strong implications for the many studies that use these types of
measures. As discussed in our analysis of the mobile phone literature, approximately 40 percent
of mobile phone use studies published in scholarly communication journals base their findings
on self-report frequency measures. Following from the notion of W. I. Thomas, our perception of
use may be real in its consequences. That is, if someone thinks that they are a heavy or a light
user of mobile communication, they will likely arrange other contingent activities based on that
perception.
     That said, given the moderate correlations between these self-report measures and log data there
is good reason to be suspicious of studies that find significant correlations between these self-report
measures and other variables. Moreover, the implications for future research are potentially serious
when one considers that access to more accurate trace data, such as mobile phone log or server log data,
is generally difficult, time consuming, and expensive to obtain. Researchers might improve self-report
measures by asking respondents to access the voice and SMS logs of their mobile phones or find old
phone bills during interviews or while completing paper or web questionnaires. We caution that this
approach may result in frustration and additional time to complete surveys, as respondents may not
easily be able to pull this information from their mobile phones or locate old phone bills. The use of
diaries is also a possibility, though this also implies a greater commitment by respondents and more
difficulty in gathering data. It is also possible to consider a type of proactive data collection system, such
as calling people at random times via their mobile phones and asking about the previous mobile-based
interaction. Another approach would be to use mobile applications designed to pull log data from
mobile phones and send it directly to researchers. For a detailed discussion of the merits of such an
approach, see Raento, Oulasvirta, and Eagle (2009).
     To inform current and future studies, we ran exploratory multivariate analyses to examine
the possible correlation between demographic traits and under- and overreporting. We focused on
demographic traits because they are generally present in most surveys and therefore are obvious targets
for identifying and compensating for inaccurate self-reporting behavior. While we did find some
associations between demographic traits and misreporting on the yesterday measures, we did not find
many associations with the categorical how often measure. Moreover, the low R2 statistics in our models
indicate that demographic traits will not go far in helping to identify and compensate for under- and
overreporting in both the yesterday and the how often measures.
     In conclusion, our results point towards the need to improve self-report measures that operationalize
frequency of mobile phone use. Given that these measures are often used in influential mobile phone
studies, more resources should be dedicated to understanding the sources of their weakness and
examining alternative measures that operationalize the same concept and can be easily deployed into
survey questionnaires.
     There are always boundaries in terms of funding, time, the constraints of the questionnaire method,
etc. As a result we often muck along as best we can and we accept the results of others given the
constraints that they work under. This ignores, however, the larger question of the validity of the results.
In a perfect researcher’s world there would be access to valid and reliable data that would not threaten
the privacy of the individual. However, we do not live in that world. We have to think about how
to get the best data given the instruments and the constraints that we currently have and we have to
think about better data collection methods. Finally, we need to think about the way that our results
are reported and the caution needed in pushing the meaning of the results beyond what the data can
support.

Note
1 An individual who practices SIM switching will have SIM cards from several different operators and
  will switch between these during the day in order to take advantage of daily tariff differences
  between the operators.

References
Archambault, J. S. (2011). Breaking up because of the phone and the transformative potential of
    information in Southern Mozambique. New Media & Society, 13(3), 444–456.
Bernard, H. B., & Killworth, P. D. (1977). Informant accuracy in social network data II. Human
    Communication Research, 4(1), 3–18.
Bernard, H. B., Killworth, P. D., & Sailer, L. (1982). Informant accuracy in social network data: An
    experimental attempt to predict actual communication from recall data. Social Science Research,
    11(1), 30–66.
Cohen, A. A., & Lemish, D. (2003). Real time and recall measures of mobile phone use: Some
    methodological concerns and empirical applications. New Media & Society, 5(2), 167–183.
Freeman, L. C., Romney, A. K., & Freeman, S. C. (1987). Cognitive structure and informant accuracy.
    American Anthropologist, 89(2), 310–325.
Jagun, A., Heeks, R., & Whalley, J. (2008). The impact of mobile telephony on developing country
    micro-enterprise: A Nigerian case study. Information Technologies and International Development,
    4(4), 47–65.
Jensen, R. (2007). The digital provide: Information technology, market performance and welfare in the
    South Indian fisheries sector. The Quarterly Journal of Economics, 122(3), 879–924.
Lenhart, A., et al. (2010). Teens and mobile phones. Washington, DC: Pew Research Center.
Parslow, R. C., Hepworth, S. J., & McKinney, P. A. (2003). Recall of past use of mobile phone headsets.
    Radiation Protection Dosimetry, 106(3), 233–240.
Raento, M., Oulasvirta, A., & Eagle, N. (2009). Smartphones: An emerging tool for social scientists.
    Sociological Methods & Research, 37(3), 426–454.
Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of
    cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.

About the Authors
Jeffrey Boase (jeffrey.boase@ryerson.ca) is an assistant professor in the School of Professional Commu-
nication at Ryerson University in Toronto, Canada. His research focuses on the relationship between
communication technology and social networks. Further information and publications can be found
at: http://www.ryerson.ca/~jboase/

Address: School of Professional Communication, Ryerson University, 350 Victoria Street, Toronto,
Ontario M5B 2K3

Rich Ling is a professor at the IT University of Copenhagen in Copenhagen, Denmark. His work has
focused on the social consequences of mobile communication.

Address: IT University of Copenhagen, Rued Langgaards vej 7, 2300 Copenhagen
