8 Common Mobile Analytics Challenges And Pitfalls — Tips On How To Avoid Them

Proba · 10 min read · Nov 30, 2021

According to the State of Unstructured Data Management Report, 45% of IT managers are planning to invest in analytics tools in the next 12 months. This indicates serious demand for advanced and transparent analytics systems among IT companies, including those with mobile products.

Mobile app analytics becomes more and more complex every year. Apple, Google, Facebook, and other tech giants set the trend for user data privacy protection. This leaves marketers, developers, and advertisers with fewer and fewer opportunities to analyze audiences and improve products.

However, data collection is not the only issue here. What other obstacles and pitfalls can we encounter in the realm of mobile analytics? At proba.ai, we are developing a service for A/B testing of mobile apps. Since these A/B tests are fully based on mobile app data, we work a lot with mobile analytics.

If you need to analyze mobile products in your field, you are very likely to have encountered some of the problems we describe below. So, here is our list of the most common mistakes and how to deal with them.

1. App installation data is not consistent

A common case: the app installation data from your analytics system and the console (App Store Connect or Google Play Console) do not match. What causes these discrepancies? Most of the time, the reason is as follows: App Store Connect shows the Installations metric, that is, how many times your app has been downloaded. Analytics systems, on the other hand, show how many times your app has been opened: they start tracking only when the SDK is initialized, i.e., when your app is launched. Not all users who download your app will ever run it, and that’s where the discrepancies begin.

[Image: charts showing installs of the same app in App Store Connect and AppsFlyer]
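
To make the mechanics concrete, here is a minimal Kotlin sketch of where tracking actually begins; AnalyticsSdk is a hypothetical stand-in for whatever SDK you use, not a real API:

```kotlin
import android.app.Application

// Hypothetical analytics SDK: real SDKs differ in names, not in the idea.
object AnalyticsSdk {
    fun init(app: Application, apiKey: String) { /* starts the tracking session */ }
    fun track(event: String) { /* queues the event for sending */ }
}

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Tracking begins here, at launch, not at download time. A user who
        // installs the app but never opens it shows up in the console,
        // yet never reaches the analytics system at all.
        AnalyticsSdk.init(this, "YOUR_API_KEY")
        AnalyticsSdk.track("app_launched")
    }
}
```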

2. App event data is not tracked by your analytics system

Analytics systems might not track certain events you would like them to. There are several reasons for this:

  • The event is configured incorrectly. Because of this, it is not sent to your tracking and analytics system to begin with.
  • The event is not triggered by users. This sometimes happens when your app has a small audience or when the event occurs rarely: users just do not get to it. To avoid this, test your events and make sure they are sent correctly before you release the app. When adding new events to the analytics, it’s important to check that they do in fact arrive in the system and are triggered when they are supposed to. Ask your QA engineer to verify the correctness of the event triggering before the release.
  • Your analytics system sends events in ‘batches’. To avoid server overload, analytics services might send event data in ‘batches’ or on a timeout (e.g., once 10 events have accumulated, or once 10, 30, etc. seconds have passed). Assume we have an app with a sign-up process that produces 5 events. The analytics system will wait for several more events before it sends the whole ‘batch’. It’s possible that the queue never reaches the required size, so the ‘batch’ is never sent.
    How do we address this? One solution is to force the system to send the events: most analytics systems let you force the sending of one or several events (see the sketch after this list). However, we do not recommend using this approach all the time, since it can disrupt the order in which events arrive, and it is not what analytics systems are designed for: by ‘force sending’ every single event in a separate request, you increase the server and network load.
  • Something has broken in the app itself. You might have missed a bug in the latest update: some features might in fact be working incorrectly, and the issue was not discovered during the testing stage. It might be worth checking the app itself. For example, if, due to an error, your app does not open the user’s profile screen, the event of viewing the profile screen will not be sent to the analytics system. An abrupt drop of a certain metric to zero may be a sign of an overlooked bug or another technical issue. The advice here is quite obvious: release a bug fix to address the problem, and quickly.
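
Below is a minimal Kotlin sketch of the batching behavior described above, including a flush() for force-sending. The class and thresholds are illustrative assumptions, not any particular SDK’s API:

```kotlin
// A rough sketch of batched event sending (illustrative, not a real SDK).
class EventBatcher(
    private val batchSize: Int = 10,           // flush once this many events queue up
    private val flushIntervalMs: Long = 30_000 // ...or once this much time has passed
) {
    private val queue = mutableListOf<Map<String, Any>>()
    private var lastFlush = System.currentTimeMillis()

    fun track(name: String, props: Map<String, Any> = emptyMap()) {
        queue += props + ("event" to name)
        // Events wait in the queue until one of the thresholds is reached.
        if (queue.size >= batchSize ||
            System.currentTimeMillis() - lastFlush >= flushIntervalMs
        ) {
            flush()
        }
        // A short funnel (e.g., 5 sign-up events) may never hit the size
        // threshold, so its batch may never be sent unless flush() is forced.
    }

    fun flush() { // force-send everything queued so far
        if (queue.isEmpty()) return
        sendToServer(queue.toList())
        queue.clear()
        lastFlush = System.currentTimeMillis()
    }

    private fun sendToServer(batch: List<Map<String, Any>>) {
        // the actual network call would go here
    }
}
```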

3. Events are not matched with users

Another common problem is when your analytics system does not know which user a certain event is related to. In that case, the system either creates a new user (like Amplitude does) or the event is just ignored (like in AppsFlyer).

The situation is as follows:

A certain user installs your app, and the analytics system labels them ‘user #1’. We then send an event from somewhere else (say, a billing server) informing the system that this user has made a purchase. It’s necessary to specify that this event also belongs to ‘user #1’ so that the analytics system can match them: ‘user #1’ who installed the app and ‘user #1’ who made the purchase are one and the same.

To avoid issues here, we need to use the same user identifier when sending events from different sources. For information on how to implement this, you will need to refer to the documentation of the systems you would like to integrate.
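
A minimal Kotlin sketch of the idea; AnalyticsSdk and httpPost are stand-ins for illustration, not a real SDK or endpoint:

```kotlin
// Stand-ins for illustration: not a real SDK or a real endpoint.
object AnalyticsSdk {
    var userId: String? = null
    fun setUserId(id: String) { userId = id }
    fun track(event: String) = println("client event: $event for user $userId")
}

fun httpPost(url: String, body: Map<String, Any>) = println("POST $url $body")

fun main() {
    // Client side: identify the user once they sign in.
    AnalyticsSdk.setUserId("user-1")
    AnalyticsSdk.track("app_installed")

    // Server side: a purchase confirmed by billing must carry the SAME
    // identifier, or the analytics system will either create a duplicate
    // user (as Amplitude does) or drop the event (as AppsFlyer does).
    httpPost(
        "https://analytics.example.com/events",
        mapOf("user_id" to "user-1", "event" to "purchase", "revenue" to 4.99)
    )
}
```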

4. Event stats duplication

It’s fairly easy to notice when event data is not tracked at all: the stats will show zero. However, determining if a certain event is duplicated is more difficult to do by just glancing at the dashboard. In our work, there have been at least a few cases of the same data about a certain event coming into the analytics system twice.

Event duplication most often happens when configuring events from different systems. This can be a purely organizational mistake when two people configure the same event about a purchase to be sent both from the client and from the billing server. Such an event will then be tracked twice.

If a purchase event comes in together with the income data, then the income will be duplicated too. For example, you will see $10,000 of income against $7,000 of expenses. It seems the ROI is positive and everything is fine. But if every purchase was counted twice, only $5,000 has actually been earned, and the campaign is in the red.

This means you need to compare the data in App Store Connect/Google Play Console with the data in the service you use to track purchases, and those must match.

How to deal with data discrepancies? By testing, comparing the totals, comparing with the other systems, and disabling some of them if necessary.
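
One practical safeguard is to deduplicate purchases on a unique transaction ID before aggregating revenue. A minimal Kotlin sketch, with assumed field names:

```kotlin
// Sketch: collapsing client- and server-sent copies of the same purchase
// by a unique transaction ID. Field names are assumptions.
data class PurchaseEvent(val transactionId: String, val revenue: Double, val source: String)

fun totalRevenue(events: List<PurchaseEvent>): Double =
    events
        .distinctBy { it.transactionId } // duplicates collapse into one event
        .sumOf { it.revenue }

fun main() {
    val events = listOf(
        PurchaseEvent("txn-42", 5000.0, source = "client"),
        PurchaseEvent("txn-42", 5000.0, source = "billing-server") // duplicate
    )
    println(totalRevenue(events)) // 5000.0, not the 10000.0 a naive sum reports
}
```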

5. App event data differ from system to system

Event data in different analytics systems might not match. The most common cause is that the systems run at different times: they are not triggered simultaneously. For instance, a user launches an app, and Amplitude has enough time to start up and send the launch event. A second user’s app launch, however, is not tracked, because in their case the other system did not have enough time to send the information.

Why does this happen? Generally, app developers prioritize their users, not analytics systems. Developers try to load user content first, and only after that do Amplitude, AppsFlyer, and other background processes get their turn. Some users will not even wait until the app loads, so App Store Connect will show a lot more installs than the tracking systems will.

6. Income data are added up in different currencies

The next problem is that your analytics system might add up your income data in different currencies. This issue is difficult to identify without comparing each sum against the console. If you attract traffic from multiple countries, you need to know which currency your analytics system expects when receiving revenue data and which currency each source uses when sending it.

Usually, the income value is a separate field containing a number. The currency is specified in another field. Keep in mind that, hypothetically, the income data from one system might come in a currency different from the currency the receiving system is expecting. Which can lead to data discrepancies.

In most cases, it’s better to feed the analytics the final total in the currency you use to add up all the payments from all the countries you are working with. However, many widely used analytics systems do not provide currency conversion features.

If you want to use a currency other than the available ones, you will need to decide on the exchange rate. For example, you might choose the rate at the moment of transaction, record keeping, or payout.
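A minimal Kotlin sketch of normalizing payments to one base currency before sending the total to analytics; the rates here are made up and assumed to be fixed at transaction time:

```kotlin
// Sketch: converting all payments to one base currency before sending the
// total to analytics. Rates are illustrative, fixed at transaction time.
val usdRates = mapOf("USD" to 1.0, "EUR" to 1.08, "GBP" to 1.27)

fun toUsd(amount: Double, currency: String): Double =
    amount * (usdRates[currency] ?: error("no exchange rate for $currency"))

fun main() {
    // Payments arrive in local currencies...
    val payments = listOf(9.99 to "USD", 8.99 to "EUR", 7.99 to "GBP")
    // ...but analytics receives a single consistent USD total.
    val totalUsd = payments.sumOf { (amount, currency) -> toUsd(amount, currency) }
    println("revenue_usd = %.2f".format(totalUsd))
}
```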

7. Time-delayed conversions and different attribution windows are not taken into account

Yet another situation that might lead to data discrepancies is when different systems have different attribution windows.

An installation attribution window is the maximum number of days allowed between the moment an ad is shown and clicked and the moment of app installation for the install to be credited to that ad. For example, an attribution window of 7 days means that up to 7 days may pass between a user clicking on an ad for your app and installing it. In reality, attribution windows are just conventions agreed upon by the parties involved.

Consequently, different systems may have different attribution window settings. The problems begin when you decide to use, for example, two different systems, one of which has its attribution window set for 3 days, while the other one has it set for 7. Naturally, some users might still install your app on days 4 through 7, which will make the two systems go out of sync. It’s important, therefore, to have your attribution windows identical in all the systems you are using. Moreover, different systems will attribute the same installs to different sources based on the settings used.
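
A tiny Kotlin sketch of how the very same install is judged differently by two windows:

```kotlin
// Sketch: the same install evaluated under two attribution windows.
fun isAttributed(daysFromClickToInstall: Long, windowDays: Long): Boolean =
    daysFromClickToInstall <= windowDays

fun main() {
    val daysFromClickToInstall = 5L
    // System A (3-day window) ignores the install; system B (7-day) counts it.
    println(isAttributed(daysFromClickToInstall, windowDays = 3)) // false
    println(isAttributed(daysFromClickToInstall, windowDays = 7)) // true
}
```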

The next case is when you are trying to find out how many in-app purchases you got from every existing source. To calculate a channel’s effectiveness correctly, it’s important to compare those within the same attribution windows: a month, a week, a day, etc.

In addition, you need to account for time-delayed conversions. Assume you are selling a subscription with a 7-day trial and you need to assess the trial-to-purchase conversion rate. If you look at the purchase stats for the traffic from the last 7 days, you will see 0: nobody could become a paying user within that time, because the free trial is not over yet. So, you need to assess the conversion for traffic from 8 or more days ago. This mistake is quite obvious but a common one nonetheless.
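
A minimal Kotlin sketch of the cohort logic; the data shapes are assumptions for illustration:

```kotlin
import java.time.LocalDate

// Sketch: computing trial-to-paid conversion only for users whose 7-day
// trial could already have ended.
data class TrialUser(val installDate: LocalDate, val converted: Boolean)

fun trialConversionRate(users: List<TrialUser>, today: LocalDate, trialDays: Long = 7): Double {
    // Users who installed fewer than trialDays ago cannot have converted yet;
    // including them would drag the rate toward zero.
    val matured = users.filter { it.installDate <= today.minusDays(trialDays) }
    return if (matured.isEmpty()) 0.0
           else matured.count { it.converted }.toDouble() / matured.size
}

fun main() {
    val today = LocalDate.of(2021, 11, 30)
    val users = listOf(
        TrialUser(today.minusDays(10), converted = true),  // trial over: counts
        TrialUser(today.minusDays(9), converted = false),  // trial over: counts
        TrialUser(today.minusDays(3), converted = false)   // still in trial: excluded
    )
    println(trialConversionRate(users, today)) // 0.5, not a misleading 0.33
}
```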

8. Incorrect data comparison

Different analytics systems may use the same term for different things. The most basic example is the first mistake on this list, about comparing the installation data from the console and our tracker. As we have seen, this comparison is not really valid: we are actually comparing different metrics, installations versus launches.

Another example is comparing incomparable data: some systems track data for all users, while others only track those who have allowed ad tracking. These are stats for so-called LAT on/LAT off users (Limit Ad Tracking): LAT on users have opted out of ad tracking in apps, while LAT off users allow it (this has become even more relevant with the release of iOS 14.5).

We have looked at many common problems we have personally encountered while working with mobile analytics. The question, however, is this: what do you do if something does not add up somewhere?

First of all, we recommend checking the technical side: see whether the developers have configured everything correctly, look at the logs, and trace the sending of your events. If you have checked everything but the discrepancies are still significant (a deviation of 20–30%), then the only solution is to contact the support team of the analytics system you are using. The good news is that big systems tend to respond quite fast. You can also ask for help in related communities on Facebook, Telegram, or Slack.

In some situations, it might be better to accept discrepancies as a given and analyze trends and relative values, i.e., how a certain metric changes over time in each system. If you see the conversion growing in both systems, the probability is high that it is actually growing, and this trend can be trusted.

To be more confident that you are collecting your data correctly, you can set up a secondary analytics or tracking system. If the data in one system differs significantly from the other, it might indicate that the analytics in one of them is faulty. On the one hand, a second system provokes the very discrepancies we have been talking about; on the other, it helps ensure that you are in fact working with correct, valid data.
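
A minimal Kotlin sketch of such a cross-check; the 20–30% threshold echoes the rule of thumb mentioned above and should be tuned to your data:

```kotlin
import kotlin.math.abs

// Sketch: a simple sanity check between two data sources.
fun relativeDeviation(a: Double, b: Double): Double =
    if (maxOf(a, b) == 0.0) 0.0 else abs(a - b) / maxOf(a, b)

fun main() {
    val installsConsole = 1000.0 // e.g., App Store Connect
    val installsTracker = 720.0  // e.g., your attribution tracker
    val deviation = relativeDeviation(installsConsole, installsTracker)
    if (deviation > 0.2) {
        println("deviation = %.0f%%: worth investigating".format(deviation * 100))
    }
}
```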

Have you encountered similar challenges in mobile analytics? Share your interesting cases in the comments, and we’ll work them out together.

Written by Proba

proba.ai is a tool for A/B testing in mobile apps. Carry out experiments faster and at a better price with the mobile app product hypothesis testing tool.
