How should we measure video quality of experience?

There have been a couple of reports lately that echo what previous studies and our own data have shown:

Video buffering is bad for business

So the question becomes: how do we measure video buffering, and what counts as good or bad?

Average Buffer Time is the wrong way to measure

In a recent Ooyala report, video buffering is measured as:

time spent buffering divided by session time

So if you watch 100 minutes of video and see it buffer for 1 minute, that is a 1% buffer time.

Do you see the problem with this? The ratio of time spent buffering to time spent watching doesn't capture the real pain of the experience. When aggregated, a 1% buffer time could mean that every user sees 1% buffering, which would indicate a systemic problem, or that a single user saw 100% buffering while everyone else saw none.

1 user with 100% buffer time ≠ 100 users with 1% buffer time
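A small sketch makes the problem concrete. The session data below is entirely hypothetical, but it shows how the two scenarios in the inequality above produce an identical average buffer time:

```python
# Hypothetical sessions: (buffer_seconds, session_seconds) per user.
# Scenario A: every one of 100 users buffers for 1% of their session.
scenario_a = [(1, 100)] * 100
# Scenario B: one user buffers the entire session, 99 never buffer.
scenario_b = [(100, 100)] + [(0, 100)] * 99

def average_buffer_time(sessions):
    """Total buffering divided by total session time, as a percentage."""
    total_buffer = sum(buffer for buffer, _ in sessions)
    total_session = sum(session for _, session in sessions)
    return 100 * total_buffer / total_session

# Both scenarios report the same 1% average buffer time,
# even though the user experiences are completely different.
print(average_buffer_time(scenario_a))  # 1.0
print(average_buffer_time(scenario_b))  # 1.0
```

The metric collapses two very different realities into the same number, which is exactly why an average alone can't tell you whether buffering is systemic or isolated.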


The second problem is that users who hit buffering are likely to leave, which understates the average.

Imagine you are watching the latest episode of Game of Thrones and the dreaded buffering wheel pops up. You may give it a second or so, but research shows that half of people will immediately give up when this occurs. That means your 1 second of buffering is what gets reported in the average, but the reality is that you stopped watching the content. If you had let the buffering continue, maybe it would have resolved in 10 seconds or maybe in 10 minutes, but that is not captured in the data.

Meanwhile, users who experience no buffering routinely watch twice as much video or more, so those good minutes further dilute the average and mask the real problem.

A better way to measure

Rebuffer Rate: Percentage of user experiences impacted by buffering

We believe that the most important measurement of QOE is the rate of user experiences impacted by a “quality impacting” buffering event. That means that if 1 in 100 users sees a buffering event, you have a 1% rebuffer rate. This is the more important metric because you can't always control the buffering of an individual user, who may have a poor connection, malware, or other problems; that user most likely has issues with video across the board. But you can measure across your whole customer base to see how many users have a good experience with your content and brand versus a bad one.

1 user buffers out of 100 = 1% Rebuffer Rate

If 1 in 5 of your users has a bad experience, that is catastrophic for your retention and ad revenue. On the flip side, if 1 in 1,000 of your users has a bad video experience, then overall you are doing a good job. There is no way to capture this when looking at an average of buffering time versus watching time.

“Quality Impacting” buffering event

Greater than 1 sec of buffering

We are defining this as any time the user sees the video buffer for more than 1 second. This might not be the best number, but it is a fair choice based on the research into how long a user will wait for video to buffer. If your rebuffer rate is high even with that threshold, then something clearly needs to be done.
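Putting the threshold and the rate together, the metric can be sketched as follows. The session data is hypothetical; each session is just a list of observed buffering-event durations:

```python
# Hypothetical per-session buffering-event durations, in seconds.
sessions = [
    [],          # no buffering
    [0.4],       # brief stall, under the 1-second threshold
    [2.5, 0.8],  # one quality-impacting event
    [],          # no buffering
    [12.0],      # one long stall
]

QUALITY_IMPACTING_SECONDS = 1.0  # threshold from the definition above

def rebuffer_rate(sessions, threshold=QUALITY_IMPACTING_SECONDS):
    """Percentage of sessions with at least one buffering event over the threshold."""
    impacted = sum(
        1 for events in sessions
        if any(duration > threshold for duration in events)
    )
    return 100 * impacted / len(sessions)

print(rebuffer_rate(sessions))  # 40.0 -> 2 of 5 sessions impacted
```

Note that a session counts once no matter how many events it contains: the metric asks how many *experiences* were impacted, not how many seconds were lost.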


This is just our first iteration on a standard metric that can accurately reflect the issues video buffering causes for your audience. We welcome feedback from others in the video QOE community on how we can improve this metric, or on what alternative metrics they feel are important.