{"_id":"5846008b63c11b250037969f","category":{"_id":"57e1c88115bf6522002a5e4e","project":"5668fab608f90021008e882f","__v":0,"version":"5668fab608f90021008e8832","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-09-20T23:38:41.155Z","from_sync":false,"order":11,"slug":"metrics","title":"Metrics"},"parentDoc":null,"user":"5668fa9755e4b32100935d41","__v":0,"githubsync":"","project":"5668fab608f90021008e882f","version":{"_id":"5668fab608f90021008e8832","__v":19,"project":"5668fab608f90021008e882f","createdAt":"2015-12-10T04:08:22.769Z","releaseDate":"2015-12-10T04:08:22.769Z","categories":["5668fab708f90021008e8833","569740f124490c3700170a64","569742b58560a60d00e2c25d","569742bd0b09a41900b2446c","569742cd69393517000c82b3","569742f459a6692d003fad8f","569743020b09a41900b2446d","5697430b69393517000c82b5","56a17776470ae00d00c30642","56a2c48a831e2a0d0069b1ad","56b535757bccae0d00e9a1cd","56e1ff6aa49fdc0e005746b5","57e1c88115bf6522002a5e4e","57fa65275ba65a17008b988f","57fbeea34002550e004c032e","58474584889b6c2d00fb86e9","58475dcc64157f0f002f1907","587e7b5158666c2700965d4e","58a349fc30852819007ba083"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.18.0","version":"1.18"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-12-06T00:04:27.423Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"LiftIgniter's customers often ask us what the day-to-day changes in CTR and lift mean for them, and how they can forecast future performance. 
This page goes over sources of measurement error, variation, and trends in performance.\n\nWe review:\n\n* Errors in analytics measurement of the counts of raw metrics (such as widget shown and widget click)\n* Margin of error in CTR and other ratios arising from insufficient sample size\n* Daily, weekly, and annual traffic cycles\n* Autocorrelation in CTR and other ratio graphs, and possible explanations, such as trending item starts and fadeouts\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Errors in analytics measurements of the counts of raw metrics\"\n}\n[/block]\nFor JavaScript integrations, LiftIgniter will fail to measure the following:\n\n* Users who have enabled ad blockers that use the EasyPrivacy list. Among major ad blockers, the relevant one is uBlock. For more, see [Ad blockers](doc:ad-blockers).\n* Users who bounce off the site too quickly, before the JavaScript has loaded.\n* Users on very old browsers. For more, see [Browser compatibility](doc:browser-compatibility).\n* Users who have disabled JavaScript.\n* Users browsing through proxies, such as Opera Mini and Amazon Silk, where pages are partly loaded server-side. This mainly affects users in Africa.\n\nFor API integrations, LiftIgniter receives only the data you choose to send to our API.\n\nThese errors don't affect the accuracy of most of our metrics: in the cases where we are unable to track users for these reasons, we are *also* unable to show recommendations to them. In other words, these users are completely invisible to us.\n\nFor more, see [Debugging analytics discrepancies](doc:debugging-analytics-discrepancies).\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Margin of error for CTR and other ratios due to insufficient sample size\"\n}\n[/block]\nTo understand this source of margin of error, let's assume a rather simple (but wrong!) 
model of user behavior: for each widget, there is a single true probability *p* that a user clicks on a recommendation in the widget. The expected CTR is therefore *p* (written as a percentage). For instance, if the true probability of a click for the right-rail widget is 4.21%, then we expect the CTR to be 4.21% for a large enough sample of data.\n\nWe can't observe the true probability of a click, but we do have the observed CTR. Our job is to infer the true probability from the observed CTR. We can't know this true probability exactly, but we can identify a range of plausible values. Specifically, we report a 95% confidence interval: a range of possibilities for the true probability such that if the true probability were outside that range, the CTR we observed would fall in the tail 5% of CTRs we could observe (i.e., either the top 2.5% or the bottom 2.5%).\n\nFor instance, if we observe a CTR of 7.29% and report a confidence interval of 7.29% ± 0.03%, that means the true probability could be between 7.26% and 7.32%. If the true probability were greater than 7.32%, our observed CTR of 7.29% would be in the bottom 2.5% of possible observed CTRs. If the true probability were less than 7.26%, our observed CTR of 7.29% would be in the top 2.5% of observed CTRs.\n\nIf you want to understand the theory more fully, check out the [Binomial proportion confidence interval Wikipedia article](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval). When we report confidence intervals, we use the normal approximation interval. Specifically, the formula we use is the one described [here](http://onlinestatbook.com/2/estimation/proportion_ci.html).\n\nThere are two key determinants of the margin of error:\n\n* Number of observations (the denominator; widget shown in the case of CTR): The margin of error shrinks in proportion to the inverse square root of the number of observations. 
So, for CTR, the margin of error shrinks in proportion to the inverse square root of the number of widget shown events. We'll generally see margins of error between ± 0.1% and ± 1% at 10,000 widget shown events, and between ± 0.01% and ± 0.1% at a million widget shown events. Note that this also means that the larger the date range you choose, the smaller the margin of error.\n* CTR: Margins of error also tend to be larger the higher the CTR. So with a CTR of 50% you will see a larger margin of error than with a CTR of 5%. The effect of CTR on the margin of error is much less noticeable than the effect of the number of observations.\n\nWhat does this mean? In reality, we know that there is no such thing as a true probability of a click. Every impression is distinctive and has a different probability of the user clicking, and our own choice of recommendations affects that probability. What this margin of error calculation does, however, is provide a *lower bound* on how much CTR fluctuation there should be. In other words, if the margin of error is very large, then we *know* that the CTR will fluctuate for that reason alone. It's pointless to try to decipher hidden meaning in CTR fluctuations that are within the margin of error.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Daily, weekly, and annual traffic cycles\"\n}\n[/block]\nYour website traffic may experience daily, weekly, and annual traffic cycles. You can compare the traffic cycles you see for your site to [known facts about traffic cycles for websites of various kinds](http://www.wikihow.com/Understand-Your-Website-Traffic-Variation-with-Time). But you're probably familiar with how these cycles operate for your website based on past exploration of your site's analytics. 
The part that may be less familiar to you is how these cycles manifest in ratio metrics such as CTR.\n\nWeekly cycles:\n\n* For most professional information sites and B2B sites that people access at work, CTR goes *up* during the weekend, even though overall traffic goes down. The CTR increase can be between 10% and 30%. The weekend increase in CTR happens for reasons similar to the weekend increase in pages/session: people have more time to read as they are in less of a rush, and a lot of shallow traffic is eliminated, so your site traffic is dominated by serious readers.\n* For sites that are heavily recency-driven, CTRs tend to be lower on days when they don't publish new content. Thus, for instance, CTR tends to be lower for news sites on weekends if they don't publish as much new content on weekends.\n* For other kinds of sites (such as general entertainment or news sites) the patterns are less clear. They can vary from site to site and are usually not discernible relative to general trends. Even though traffic levels themselves may follow a clear weekly cycle, CTRs do not.\n\nAnnual cycles: There are clear annual cycles in traffic levels. However, we generally don't see strong annual CTR cycles discernible relative to all the other changes that happen to your site over the course of a year. That said, your site might be an exception, and if you have over a year of data, you might want to look at the annual cycle with care!\n\nDaily cycle: Every website has a distinctive daily traffic shape that reflects the audience's geographic distribution, the type of need (work/home), the type of content, and the website's promotional strategy on social media. CTR is generally a little higher in the evening and at night, for reasons similar to the weekend increase, but the effects are usually fairly small. 
The daily cycle isn't relevant if you are using our [Analytics Panel](doc:analytics-panel), because we do not show data at a granularity finer than daily.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Autocorrelation\"\n}\n[/block]\nFor websites that rely on trending news events or developments in a domain, CTR shows an interesting pattern called [autocorrelation](https://en.wikipedia.org/wiki/Autocorrelation). Essentially, rather than alternating quickly between rising and falling, it generally rises for several days in a row or falls for several days in a row. Sometimes it rises sharply and then gradually falls back to its original level. Sometimes it falls sharply and then gradually rises back. Sometimes both the rise and the fall play out over several days.\n\nThere are a few different but related reasons for autocorrelation:\n\n**Trending item start and fadeout**\n\nExamples include the illness or death of a celebrity, a new gadget, or a new TV show or song. In cases where there is a lot of anticipation leading up to the main event, the trending item builds up gradually and decays quickly. In cases where the event happens suddenly, the trending item builds up suddenly, and the decay is gradual as people catch up with it.\n\nThe effect of trending item starts and fadeouts on CTR is ambiguous, because the effect could happen in two different ways:\n\n* The trending item could cause a huge increase in the denominator (widget shown) if it serves as a landing page. 
In this case, a trending item could hurt CTR by attracting more shallow traffic that is less interested in the rest of the site.\n* The trending item could cause a huge increase in the numerator (widget click) if the widget is a key way for people to discover the trending item or related information.\n\nYou will need to figure out how exactly trending items operate within the framework of your website.\n\n**Social media post virality and exponential decay**\n\nThe circulation of social media posts decays exponentially with time. Thus, if you had a very popular Facebook post, the traffic that post drives to your site will generally decay exponentially as well. How this affects CTR is again unclear: if people who come from that post tend to click around more, CTR will go up due to the post and then decay to its usual level. Otherwise, CTR will go down due to the post and then gradually recover.","excerpt":"","slug":"measurement-error-variation-and-trends","type":"basic","title":"Measurement error, variation, and trends"}
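The normal-approximation interval described in the margin-of-error section can be sketched in a few lines of code. This is a minimal illustration with made-up click and impression counts, not LiftIgniter's actual reporting code:

```python
import math

def ctr_confidence_interval(clicks, impressions, z=1.96):
    """Normal-approximation confidence interval for CTR (z=1.96 gives 95%)."""
    p = clicks / impressions
    margin = z * math.sqrt(p * (1 - p) / impressions)
    return p, margin

# The margin of error shrinks with the inverse square root of impressions:
# same 5% CTR, 100x the impressions -> 10x smaller margin.
p1, m1 = ctr_confidence_interval(500, 10_000)        # roughly ±0.43 percentage points
p2, m2 = ctr_confidence_interval(50_000, 1_000_000)  # roughly ±0.043 percentage points

print(f"10,000 widget shown:    CTR {p1:.2%} ± {m1:.2%}")
print(f"1,000,000 widget shown: CTR {p2:.2%} ± {m2:.3%}")
```

Note how these two hypothetical cases land inside the ± 0.1% to ± 1% and ± 0.01% to ± 0.1% ranges quoted above.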
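The autocorrelation described above can also be quantified. The sketch below uses hypothetical daily CTR values shaped like a trending-item rise and fadeout, and computes the sample lag-1 autocorrelation; a clearly positive value means the series tends to keep moving in the same direction from one day to the next rather than alternating:

```python
def lag1_autocorrelation(series):
    """Sample lag-1 autocorrelation of a time series."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + 1] - mean) for t in range(n - 1))
    return cov / var

# Hypothetical daily CTRs (%): a trending item ramps CTR up, then it fades out.
daily_ctr = [4.1, 4.2, 4.6, 5.3, 5.9, 5.6, 5.1, 4.7, 4.4, 4.2]
print(f"lag-1 autocorrelation: {lag1_autocorrelation(daily_ctr):.2f}")  # clearly positive
```

A series of daily CTRs that fluctuated randomly within the margin of error would instead give a lag-1 autocorrelation near zero.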