Determining meaningful metrics for Adaptive Bit-rate Streaming HTTP video delivery

Bachelor Informatica
Informatica — Universiteit van Amsterdam

Abe Wiersma
Student number: 10433120

15th June 2016

Supervisor(s): Dirk Griffioen & Daniel Romão
Signed: Robert Belleman
Abstract

The video on demand industry has become the largest source of Internet
traffic in the world, and the struggle to deliver the hosting infrastructure
is a headache to system engineers. The capabilities of a server are mostly
guesswork and the amount of clients it can serve are based on nothing more
than the educated guess of the implementing system engineer. This paper
aims to conclude what measurements truly matter in determining a server’s
real world capabilities. For this purpose a load testing tool called Tensor
was created on a Flask Backend and Angular Frontend. The tool combines
resource monitoring and load data in D3 graphs. Load data is generated
with WRK, a HTTP benchmarking tool, requesting high amounts of data
from a video hosting server. As a test of the tool’s performance two video
hosting servers are put side by side, and their capabilities are measured.
Contents

1 Introduction
  1.1 Related work

2 Adaptive Bit-rate Streaming
  2.1 Apple HLS
  2.2 Microsoft HSS
  2.3 Adobe HDS
  2.4 MPEG-DASH
  2.5 Side by side

3 Infrastructure
  3.1 Requirements
  3.2 Set-ups
  3.3 Performance measurements

4 Tensor
  4.1 Requirements
  4.2 Design
  4.3 Implementation
      4.3.1 Backend
      4.3.2 Performance Co-Pilot
      4.3.3 Frontend

5 Experiments
  5.1 Testing setup
      5.1.1 Source
      5.1.2 Video Hosting Servers
      5.1.3 Video & Software
  5.2 Results
      5.2.1 Server: usp.abewiersma.nl
      5.2.2 Server: demo.unified-streaming.com

6 Conclusion
  6.1 Future Work

Appendices

Appendix A Results usp.abewiersma.nl
  A.1 HDS
  A.2 HLS
  A.3 HSS
  A.4 DASH

Appendix B Results demo.unified-streaming.com
  B.1 HDS
  B.2 HLS
  B.3 HSS
  B.4 DASH

Appendix C Glossary
CHAPTER 1

Introduction

Cisco recently unveiled a report showing that by 2019, on-line video will be
responsible for 80% of global Internet traffic[1], with 72% of that content
being delivered by Content Delivery Networks. The mode of transport for
video content used to be progressive download over HTTP, in which a TCP
connection simply transferred a video file to a client as quickly as
possible.

    While TCP currently is the most used underlying protocol, the origin of
streaming media lies with the Real-Time Transport Protocol (RTP) over UDP.
UDP was used because TCP was assumed to hurt video streaming performance as
a result of throughput variations and potentially large retransmission
delays. As UDP is a lot simpler than TCP, this seemed like the obvious way
to go. UDP, in contrast to TCP, does not offer guaranteed arrival of packets
at the endpoint and instead focuses on fast transmission (best-effort
delivery)[2].

    In practice the disadvantages of media streaming over TCP did not
necessarily apply and, contrary to HTTP over TCP, RTP struggles with the
traversal of firewalls and with the filtering done by Network Address
Translation (NAT) on a router[3]. Rate adaptation, which was first done
server-side by push-based RTP, started migrating to HTTP over TCP. In HTTP
Adaptive Bit-rate Streaming (ABS) the rate adaptation is done client-side.
The adoption of rate adaptation gave way to modern Adaptive Bit-rate
Streaming, with a bit-rate for every user's specific needs.

    Because HTTP web servers and infrastructure were already in place to
serve HTML content, this base could be extended for use with ABS. Tools for
HTTP benchmarking are widely available, e.g. WRK, ApacheBench and HTTPerf,
but none are tailored to benchmarking Adaptive Bit-rate Streaming.

    To fill the void left by the lack of benchmarking solutions tailored for
ABS, the load testing tool Tensor was made. Tensor was built on the basis of
a literature study into which measurements should be done to describe the
performance of an origin (an Adaptive Bit-rate Streaming server). As part of
an experiment the Tensor load testing tool is used to benchmark two origins
running the Unified Origin¹. The servers host the same video content, which
is requested over four Adaptive Bit-rate Streaming implementations: HTTP
Smooth Streaming (Microsoft HSS), HTTP Live Streaming (Apple HLS), HTTP
Dynamic Streaming (Adobe HDS) and Dynamic Adaptive Streaming over HTTP
(MPEG-DASH).

1.1 Related work

When ABS over RTP was replaced with ABS over HTTP, performance testing was
done exclusively as a test of the implemented algorithm. Several papers put
the algorithms of HDS, HLS, HSS and, since 2012, also DASH side by side. "An
Experimental Evaluation of Rate-Adaptation Algorithms in Adaptive Streaming
over HTTP"[3] is a paper released before the introduction of DASH that tests
the algorithm performance of the solutions under restricted resources. A
similar paper, "A Comparison of Quality Scheduling in Commercial Adaptive
HTTP Streaming Solutions on a 3G Network"[4], measures how well rate
adaptation algorithms keep a full buffer in a 'real' 3G environment. Video
BenchLab is an open-source tool accompanying the paper "Video BenchLab: an
open platform for realistic benchmarking of streaming media workloads"[5].
The paper first describes the performance of different browsers streaming
the same video to one client. Afterwards the paper describes an experiment
in which 24 of these clients are run concurrently.
¹ http://www.unified-streaming.com/products/unified-origin


That paper's narrative, however, lies in individual client performance and
not server performance. The papers found are examples of client usage of an
Adaptive Bit-rate Streaming implementation; the available literature did not
provide information on the subject of load testing an origin.

CHAPTER 2

Adaptive Bit-rate Streaming

Adaptive Bit-rate Streaming over HTTP describes a method of delivering video
content over the HTTP protocol that leaves the logic of rate adaptation to
the clients[3]. Because HTTP was already the standard for web content
delivery when ABS was introduced, the need for a new protocol for video
content delivery was negated. The server is tasked with providing multiple
profiles of the same video content, encoded for a multitude of resolutions
(e.g. 480p, 720p, 1080p) with bit-rates to match. Video content encoded for
equal resolutions, using different settings on an encoder, can result in a
smaller encoded video size or a better encoded video quality. In Adaptive
Bit-rate Streaming video content is often encoded in H.264, which has
different profiles with a number of settings for application-specific
purposes.

    The video content is stored on a server as partitioned fragments, or
generated just-in-time. These fragments typically have a duration between 2
and 10 seconds[6]. In practice these video fragments are referred to as
video segments. References to the video segments are stored in a separate
file. Most ABS implementations use an XML format, but Apple, for example,
stores them in a 'traditional' play-list file.

    The client is tasked with the retrieval of video and meta-data from
servers and applies the logic of a rate adaptation algorithm. An ABS stream
typically starts with the streaming client requesting the meta-data that
describes the different bit-rate options for the requested video stream.
Fragments of lower bit-rate are usually requested first to quickly fill up
the buffer, which for an ABS client is typically between 5 and 30 seconds
long. The goal of an ABS algorithm is to provide a client with the highest
possible bit-rate fragments; this optimization starts during or after the
filling of the buffer. Based on the network conditions, the bit-rate of the
fragments will converge to a bit-rate that is bottlenecked either by the
server or by the client. A minimal sketch of such a client-side decision
step is shown below.
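
As an illustration only (this is not the algorithm of any particular
player), a naive throughput-based decision step could look like the
following Python sketch; the function name and the low-buffer threshold are
made up:

    # Illustrative sketch, not any player's actual algorithm: pick the
    # highest advertised bit-rate the measured throughput can sustain,
    # and fall back to the lowest profile while the buffer is low.
    def pick_bitrate(bitrates, throughput_bps, buffer_s, low_buffer_s=5.0):
        if buffer_s < low_buffer_s:
            return min(bitrates)
        sustainable = [b for b in bitrates if b <= throughput_bps]
        return max(sustainable) if sustainable else min(bitrates)

    # Profiles of 1, 3 and 5 Mb/s with 4 Mb/s of measured throughput:
    print(pick_bitrate([1000000, 3000000, 5000000], 4000000, 12.0))
    # -> 3000000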

    The client is left in charge of deciding which bit-rate fragments to
request, resulting in an increase in server-side scalability. In HTTP ABS
the client is the only stateful entity. Because segment requests are
independent of each other, this allows the client to switch from one
stateless server to another without additional logic required on the
server. For example, when congestion to one server increases, a client can
decide to start requesting segments from another server instead.

    Currently, Adaptive Bit-rate Streaming is used in many video content
delivery solutions, both commercial and open-source. The most common
Adaptive Bit-rate Streaming formats are HLS, HSS, HDS and MPEG-DASH, an ISO
standard. These formats are described next.

2.1 Apple HLS

Apple HTTP Live Streaming (HLS) is Apple's format of the adaptive bit-rate
streaming technique[7]. Apple packs H.264 video content, encoded with the
Baseline 3.0 profile, together with one of three audio types (MP3, HE-AAC or
AAC-LC) in an MPEG-2 Transport Stream. A segmenter subdivides the transport
stream into 10-second parts. These parts are indexed in a file that keeps
references to the fragmented files. HLS is the only major system that uses
the MPEG-2 TS container instead of an ISO Base Media File Format (ISOBMFF).
The other systems might avoid MPEG-2 TS because, in comparison to an MPEG-4
Part 12 container, it adds approximately 24% of overhead in relation to the
audio/video data[8].
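
To illustrate the play-list format, a minimal HLS master playlist could look
like the following sketch; the bandwidths, resolutions and file names are
made-up values:

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=842x480
    video-480p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
    video-720p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=4800000,RESOLUTION=1920x1080
    video-1080p.m3u8

Each entry points to a media playlist that in turn lists the 10-second .ts
segments for that bit-rate.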


2.2 Microsoft HSS

Microsoft delivers Adaptive Bit-rate Streaming, HTTP Smooth Streaming (HSS),
using the Protected Inter-operable File Format (PIFF), which is based on the
MPEG-4 Part 12 format[9]. The meta-data and references to video segments are
stored in an XML-formatted manifest. The supported video codecs are VC-1
Advanced and H.264 (Baseline, Main and High). The PIFF media container is
audio codec agnostic in theory but only has two supported audio types: WMA
and AAC. The typical PIFF segment length is 2 seconds.

2.3 Adobe HDS

Adobe first used the Real Time Messaging Protocol (RTMP) for its own video
streaming; later it developed HTTP Dynamic Streaming (HDS) to integrate into
its Flash player infrastructure. The HDS protocol delivers video content
using the F4V file format, which is based on the MPEG-4 Part 12 format. The
F4V file finds its origin in the original Adobe file format FLV, which was
extended to comply with the MPEG-4 Part 12 specification. The container
supports H.264 video encoding with either MP3 or AAC audio[10] and has a
default segment length of 4 seconds.

2.4 MPEG-DASH

Dynamic Adaptive Streaming over HTTP (DASH) has been the international
standard for adaptive HTTP streaming since November 2011[11]. The deployment
of the standard was meant to provide a universal solution for the market to
rely on, unifying a landscape divided by vendor-published solutions like
HLS, HSS and HDS. Because DASH aims to provide a protocol implementation
that is as universal as possible, it allows both MPEG-4 Part 12-derived and
MPEG-2 TS containers. DASH's container agnosticism means that DASH is both
video and audio codec agnostic. As Ultra High Definition hardware and video
content gain acceptance, this agnosticism allows DASH to adopt new encodings
that support Ultra High Definition and beyond. DASH should easily be able to
adopt the H.265 encoding, which claims up to 40% bit-rate reduction at the
same quality and resolution as H.264[12] and also offers encoding for higher
resolutions than H.264. As with the other protocol implementations, the
meta-data and the segment URLs for the different qualities and codecs of the
video content are stored in a manifest file, which in DASH's case is the
XML-based Media Presentation Description (MPD), as seen in Figure 2.1. The
DASH specification does not prescribe a segment length; picking one is left
to the implementing party.

Figure 2.1: Graphical representation of the DASH MPD manifest². This shows
the option for a video player on the client side to pick different bit-rate
segments.

² Source: http://www.cablelabs.com/adaptive-bitrate-and-mpeg-dash/
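
To give an impression of the MPD structure, a heavily trimmed manifest could
look like the following sketch; the identifiers, bit-rates and template
values are made up and many mandatory attributes are omitted:

    <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
         mediaPresentationDuration="PT12M14S">
      <Period>
        <AdaptationSet mimeType="video/mp4" codecs="avc1.4d401f">
          <!-- One Representation per bit-rate/resolution profile. -->
          <Representation id="480p" bandwidth="1200000" width="854" height="480"/>
          <Representation id="720p" bandwidth="2400000" width="1280" height="720"/>
          <Representation id="1080p" bandwidth="4800000" width="1920" height="1080"/>
          <SegmentTemplate duration="4"
                           media="video_$RepresentationID$_$Number$.mp4"/>
        </AdaptationSet>
      </Period>
    </MPD>

A player reads the Representation entries and requests segments following
the SegmentTemplate URL pattern at whichever bit-rate its rate adaptation
algorithm selects.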


2.5 Side by side

Company     Name   Year   Audio            Video          Container   Default Length   Manifest
Microsoft   HSS    2008   WMA & AAC        VC-1 & H.264   PIFF*       2 sec            XML
Apple       HLS    2009   MP3 & (HE-)AAC   H.264          MPEG-2 TS   10 sec           M3U8
Adobe       HDS    2010   MP3 & AAC        H.264          F4V*        4 sec            XML
MPEG        DASH   2011   Any              Any            Any         Any              XML

Table 2.1: Several Adaptive Bit-rate Streaming implementation specifications
side by side (*MPEG-4 Part 12 derived container).

    Even though the available technologies are very similar, there are a few
things to consider when choosing which technology to use for hosting
content. Because DASH was developed with the input of the companies that
made the original implementations, updates and support for Adobe's HDS and
Microsoft's HTTP Smooth Streaming seem to be declining. These companies are
now actively involved in the development and adoption process of the ISO/IEC
standard DASH³.

    One possible reason Apple's HLS still has the support of its company and
its users is that HLS, like DASH, does not require side mechanisms and can
be deployed on regular HTTP servers like Apache, NGINX, IIS, etc. For HSS
the segment request URLs are not URI compliant and thus need translation
when running on a regular HTTP server. For example, the URL of an HLS
segment looks like this:

    http://example.net/content.ism/content-audio_eng=-video_eng=-.ts

    This URL can point to a file on the file system of a regular HTTP
server. The URL of an HSS segment, by contrast, looks like this:

    http://example.net/content.ism/QualityLevels()/Fragments(video=)

    The HSS URL has to be translated into a byte range pointing into a
fragmented MP4 file, which is a feature not supported on regular HTTP
servers. To support this translation Microsoft has outfitted its own HTTP
web server implementation, IIS, with the media extension Smooth Streaming.
Adobe has implemented a similar side mechanism for Apache, called HTTP
Origin Module, which likewise translates requests to file byte ranges.

    Apple documented HLS as an Internet-Draft at the IETF and updates the
draft regularly; a push from Apple to have the IETF standardize HLS as an
RFC has been absent, though.

³ http://dashif.org/members/

    Digital Rights Management (DRM), a way of encrypting encoded video
content, is best supported on DASH. The Common Encryption Scheme (CENC)
implemented in DASH specifies standard encryption that is decryptable by one
or more digital rights management systems. This allows DASH to, for example,
serve both Android and iOS players from a single content stream by
encrypting for Microsoft's PlayReady and Verimatrix Encryption, which makes
supporting multiple video players easier.

    DASH might seem like the obvious way to go, but DASH players can be
troublesome, most notably on the desktop. DASH has proprietary players, of
which some already had a client base:

   • Theoplayer, an HTML5-based video streaming player that, since the 25th
     of August 2015, also supports DASH besides the HLS support it already
     had⁴.

   • JWplayer, a multi-platform HTML5 and Flash-based video streaming player
     that supports the DASH standard since the release of version 7.0.0 on
     20 July 2015⁵.

   • BitDASH, a dedicated HTML5-based DASH video streaming player, which on
     first release supported only DASH and later added HLS support⁶.
But there are also open-source alternatives:

   • DASH.js, a reference client implementation for the playback of
     MPEG-DASH content using client-side JavaScript libraries. The
     implementation is the result of an initiative by the DASH Industry
     Forum, a group with the purpose of growing DASH's market share[13]. The
     reference client is still in active development, and a large refactor
     as of October 30th 2015 extended its scalability and modularity. The
     DASH.js project has multiple forks in use, for example:

        – castLabs has an open-source implementation called DASH.as, with
          limited functionality in relation to its professional line of
          video client implementations, DASH Everywhere⁷.

        – Google has the Shaka player, a player optimized for low-bandwidth
          video streaming⁸.

        – Vualto, who implemented the DASH.js reference player into their
          workflow around August 2014⁹.

   • DASHplay (by RTL NL), RTL, Triple IT and Unified Streaming's
     open-source attempt at building the best MPEG-DASH player[14].

All of the client implementations miss parts that would make them the best
choice. Some do not operate well with live video, others do not fully
implement subtitles; none of these clients fully implement every necessary
service.

⁴ https://www.theoplayer.com/
⁵ http://www.jwplayer.com/
⁶ http://www.dash-player.com/

    DASH players for devices from SDKs like Inside Secure¹⁰, VisualON¹¹ or
Nexstreaming¹² behave better, but these tend to be expensive. With the
refactoring of DASH.js to version 2.0, a more complete open-source DASH
player might become available.

⁷ https://github.com/castlabs/dashas
⁸ https://github.com/google/shaka-player
⁹ http://www.vualto.com/i-dont-even-like-mustard/
¹⁰ http://www.insidesecure.com/Products-Technologies/Embedded-Software-Solutions/DRM-Fusion-Agent-for-Downloadable-Deployment
¹¹ http://visualon.com/onstream-mediaplayer/
¹² http://www.nexstreaming.com/products

CHAPTER 3

Infrastructure

3.1     Requirements
A good video streaming experience for users relies on a few important con-
ditions:
   • Avoid buffer underruns, because when the buffer is empty the video
     playback will be interrupted[4].
   • Refrain from oscillating in video quality. This negatively affects the
     perceived quality[15][16].
   • Make use of all the available bandwidth to offer the highest possible
     video bit-rate so the user gets the best possible video quality[4].
When we take these user-focused conditions into consideration in relation to
what a server has to provide, the following server conditions have to be met.
   • A server needs to offer continuity of bandwidth.
   • A server has to provide stable bandwidth.
   • A server’s throughput has to approximate the total user throughput.
     This allows each client to have their throughput approach their band-
     width, ensuring best possible video streaming quality.


    The amount of bandwidth a user needs depends on the encoding of the
media, the bit-rate, and the container in which the media is transported.
Netflix, which implements the MP4 profile of the DASH protocol, recommends
the following bandwidths to its users for the different qualities of
streaming¹:

   • Netflix states 0.5 Mb/s as the minimum speed to be able to stream any
     video at all.

   • 1.5 Mb/s is the recommended speed to stream a minimum quality video.

   • For SD (Standard Definition) 3.0 Mb/s is the required bandwidth. SD
     means the video has a resolution of at least 480p.

   • For HD (High Definition), 5.0 Mb/s is the required bandwidth, HD here
     meaning the HD-ready specification of at least 720p.

   • Netflix started supporting UHD in 2014, with a minimum bandwidth
     of 25 Mb/s. UHD has a resolution of 2160p.

To provide every user with a good video streaming experience, Netflix has
50+ encodings per video, one for each specific platform and bandwidth.

¹ https://help.netflix.com/en/node/306
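
These per-quality figures give a quick upper bound on the number of clients
one origin can serve. As a back-of-the-envelope sketch (assuming the Netflix
HD figure above and a 1 Gbit/s network interface, as on the test servers
used later):

    # Back-of-the-envelope estimate: how many HD clients can a single
    # origin serve if each client needs 5 Mb/s for HD video?
    link_mbit = 1000        # 1 Gbit/s network interface
    hd_client_mbit = 5.0    # recommended bandwidth for HD streaming
    print(link_mbit / hd_client_mbit)  # -> 200.0 concurrent HD clients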

3.2 Set-ups

The server-side functionality of Adaptive Bit-rate Streaming is provided by
an origin. An origin either creates the content just-in-time or offloads
pre-packaged content. The simplest set-up consists of a single origin server
serving clients.

        Clients  --- HTTP GET requests --->  Origin
                 <------- ABS data ---------

Figure 3.1: A diagram showing a simple origin set-up.

    The set-up described in Figure 3.1 could never serve a large number of
clients, because the single server would easily be overwhelmed. This is why
a set-up usually consists of multiple origins over which clients are
distributed by a load-balancer. Next to increasing the number of origins,
caches can be used to decrease traffic to the origins. Alternatively, a
Content Delivery Network (CDN) can be put in front of an origin to decrease
the traffic to it and so scale the set-up.

Figure 3.2: Diagram illustrating a set-up using two CDNs to serve clients.

    CDNs can be as small as a single edge node or as large as the CDNs of
companies like Akamai, whose CDNs provide 15 to 30% of the world's Internet
content[17].

    An example of how video content is hosted is the way Netflix uses CDNs
from several companies to distribute its video content. When new content has
to be distributed, a single upload into one CDN is enough to distribute the
content over the global Netflix CDN infrastructure. Except for the video
content, almost everything is hosted on the Amazon Cloud: content ingestion,
log recording/analysis, DRM, CDN routing, user sign-in and mobile device
support are all done there. The Amazon Cloud is responsible for generating a
list of CDNs for a client's location. A manifest containing the best CDNs
and the available video/audio bit-rates for the video is generated. The
player, which is either provided by the Amazon Cloud or already installed on
the client's device, then plays the video using the adaptive bit-rate
protocol implementation[18].

    A CDN typically has a load balancer and a number of edges, servers over
which the requests for content are divided. A request is redirected to an
edge based on the distance from the edge to the client, in hops or in time,
or on the historic/current performance of the edge. An optional web-cache is
often implemented to decrease loading times for recently/most requested
content[18].

3.3 Performance measurements

Because a CDN as a collective is very hard to fully stress, a meaningful and
easier way to test performance is to stress and measure the capabilities of
individual edges and so estimate the capabilities of the CDN. To make sure
the measured performance is purely the result of the edge's effort, it is
important to have no web caching active. Web caching is a popular mechanism
in CDNs to cost-effectively handle a large amount of load on a small amount
of content.

    Network performance tests consist of a number of possible measurements,
from which a test can pick the ones relevant to the question at hand.
Popular options for these tests are:

   • Bandwidth; the maximum rate at which data can be transferred.

   • Throughput; the actual rate at which data is transferred.

   • Latency; the delay between the sender sending a packet and the re-
     ceiver decoding it. Latency is mainly the result of a signal’s travel time
     combined with processing times at the nodes.

   • Jitter; the variation in latency at the receiving end of the data.

   • Error rate; the number of corrupted bits expressed as a percentage or
     fraction of the total sent.

Jitter can be used as an indicator of server stress, in the sense that a
high amount of jitter indicates that a server is not handling requests
consistently. In ABS, though, the buffer of the ABS video player stops
jitter from being a problem for playback. A common way to define the
measurement is sketched below.
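
For example, the RTP specification (RFC 3550) estimates interarrival jitter
incrementally. With $S_i$ the sender timestamp and $R_i$ the receiver
arrival time of packet $i$, the transit-time difference and the smoothed
jitter estimate are:

    D(i,j) = (R_j - R_i) - (S_j - S_i)
    J(i)   = J(i-1) + ( |D(i-1,i)| - J(i-1) ) / 16

The division by 16 is RFC 3550's noise-reduction gain; other tools may
instead simply report the standard deviation of the measured latencies.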

    While many papers research single-client implementations, little is
known about the number of clients that can be serviced from one server.
Single-client performance research consists of measurements of bandwidth in
relation to the throughput that is utilized by the client[4][3]. Server
performance tests should therefore at least include the total throughput to
the server as one of the measurements.

    With the introduction of the HTTP/1.1 protocol, the performance of the
updated protocol was measured with HTML page loads versus HTTP/1.0 page
loads[19]. The data measured for this test consisted of transmitted bytes,
sent packets and relative overhead, put against the running time of the page
load. HTTP/2.0, whose specification was published in RFC 7540[20], is likely
to see similar papers with the expected adoption of the new protocol.

CHAPTER 4

Tensor

The theory behind Adaptive Bit-rate Streaming and network performance
measurements was used to build a tool that helps system engineers determine
the capabilities of a streaming set-up. The resulting tool is named Tensor,
an ABS-specific load testing tool.

4.1 Requirements

Tensor requires a number of things to be a satisfactory ABS-specific load
testing tool, namely:

   • An ABS specific load generating component.

   • An interface which collects an origin’s performance metrics.

   • A user interface which collects and displays measurements.

The Tensor project is open-source, so any tool or framework that is used in
building the tool has to be open-source as well.


4.2 Design

While looking for a load generator, the decision was made to look for one
with as little overhead as possible, to ensure the load generator itself
would never be the bottleneck when generating load against a server.

                 Light   GUI   Multiple URLs
  ApacheBench    Yes     No    No
  HTTPerf        Yes     No    No
  Jmeter         No      Yes   Yes
  WRK            Yes     No    Yes

Table 4.1: List of open-source HTTP load generators, with relevant features.

    ApacheBench, HTTPerf and WRK are programs that can exclusively be run
from the command-line; they were all built in C. Jmeter, however, is built
using Java and has a GUI. While ApacheBench, HTTPerf and WRK focus on
generating as many requests as possible, Jmeter tries to more closely
simulate actual user interactions. Because of WRK's Lua API, WRK also allows
for a degree of virtual user simulation. Jmeter was ultimately discarded
because of its overhead relative to the C-based load generators. Because WRK
is the only C-based load generator that supports multiple URLs, it was
chosen as the load generating part of Tensor. WRK is able to generate high
load and many requests from a single source, which is useful because Tensor
is hosted from a single server. WRK is designed as a modern HTTP
benchmarking tool, combining a multi-threaded design with scalable event
notification systems like epoll[21] and kqueue[22].

    To make WRK accessible to the frontend, WRK was attached to the Python
Flask web-framework as an API. Flask was chosen for its ability to easily
and quickly deploy an API. The open-source Flask micro framework is written
in Python and is based on Werkzeug, Jinja2 and what they call good
intentions[23]. The code is published under the BSD license. Apart from
these and the Python Standard Library, the framework has no further
dependencies. Flask has the "micro framework" classification because it does
not implement the full data-storage-to-frontend pipeline and aims to keep
the core small and extensible. Services offered by the framework include a
built-in HTTP server, support for unit testing, and RESTful web
services[24]. Features that are missing can be added using extensions. For
example, the Flask-SQLAlchemy module extends the Flask framework with the
Python SQLAlchemy toolkit. Flask implements only a small number of extra
modules and gives the opportunity to swap out the modules that are
implemented; the Jinja2 templating engine, for example, can be switched for
an engine more to an engineer's preference.

    There is a large number of network monitoring systems that fulfil the
requirements of being open-source and reachable externally. One of them,
however, is used by the open-source Netflix Vector framework, which was
released in August 2015. Vector collects the measurements from the network
monitoring system Performance Co-Pilot (PCP) in a Javascript frontend. The
framework was built to be easily extended to incorporate measurements not
only from PCP but also from other sources. Because of its extensibility and
its integration with a monitoring system, Netflix Vector was chosen as the
frontend framework of Tensor, and so PCP was chosen as the network
monitoring system.

    Angular, the framework Vector is built on, is an open-source Javascript
framework and part of the MEAN full-stack framework as its frontend
module¹. Angular's aim is to help the development of single-page web
applications, providing a simplified design and testing platform.

4.3 Implementation

In Tensor, the Flask framework mainly serves as an API for the frontend, but
it also acts as the access point for a client to retrieve the frontend. The
tool displays measurements of CPU, HDD and memory usage queried from an
origin running the Performance Co-Pilot framework. WRK is used to acquire
measurements of throughput, latency and segments per second against an
origin.

¹ https://github.com/linnovate/mean

    Tensor (Flask)
        REST API:  Init (WRK + PING)    Load (WRK)

    View (Javascript Angular web-app)
        Angular UI, DASHBoard, Charts, Datamodels, Metrics, Services

    Video Hosting Server
        HTTP Server (Apache:80, Origin:80)    PCP API (:44323)

Figure 4.1: Diagram showing the design of Tensor and how it interfaces with
a video hosting server.

4.3.1 Backend

View

The view is the simplest part of the whole project. No server-side
templating is required, because all templating logic is done in the Angular
frontend. Once a blueprint has been defined, providing an access point in
Flask takes three lines. The root of the web-application is defined as the
function home(), which returns index.html, a static HTML page that is
generated during the web-application's start-up. The complete view code can
be seen in main.py below.

     ##main.py##
     from flask import render_template, Blueprint

     main_blueprint = Blueprint('main', __name__, url_prefix='')

     @main_blueprint.route('/', methods=['GET'])
     def home():
         return render_template('index.html', data={})

REST API

The API provides the frontend with metrics of the load put on the media
hosting server by Tensor, as shown by the edges on the right of Figure 4.1.
The tool that is used as the load generating part of the backend is WRK[25].

    To run, WRK requires a URL, a duration and the number of simultaneous
connections. WRK then sends requests to the URL for the specified duration
with the stated number of connections, and afterwards outputs statistics
about the network performance. WRK has a Lua scripting interface, LuaJIT,
which allows additional logic for sending and receiving requests. The Lua
scripting interface is used by the Tensor server to send requests for
multiple segments of the video(s) stored on the video hosting server.
Requests are constructed from the contents of a file that contains URLs for
the different segments. A sketch of such an API endpoint is shown below.
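
As a hypothetical sketch of how an endpoint can wrap WRK (the route,
parameter names and Lua script path below are illustrative, not Tensor's
actual code):

    # Hypothetical sketch of a Flask endpoint wrapping WRK; the route
    # and parameter names are illustrative, not Tensor's actual code.
    import subprocess
    from flask import Blueprint, jsonify, request

    load_blueprint = Blueprint('load', __name__, url_prefix='/api')

    @load_blueprint.route('/load', methods=['GET'])
    def run_wrk():
        url = request.args['url']
        duration = int(request.args.get('duration', 5))
        connections = request.args.get('connections', '1')
        # '-s segments.lua' would replay segment URLs read from a file,
        # mimicking clients requesting many different fragments.
        result = subprocess.run(
            ['wrk', '-d', '%ds' % duration, '-c', connections,
             '-s', 'segments.lua', url],
            capture_output=True, text=True, timeout=duration + 10)
        return jsonify({'raw': result.stdout})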

    Initial data is gathered using two tools: ping and WRK. Ping is run for
5 seconds after the API call and the average is returned to the Angular app.
Because of TCP 'slow start', WRK cannot be run against small files: an HTML
page, for example, would produce lots of requests, resulting in too much
overhead to reach a high throughput. The solution is to load test against a
50MB zip-file on the HTTP web-server (e.g. Apache). This throughput data
serves as a baseline for the later load testing against the ABS-implementing
module on the server.

4.3.2 Performance Co-Pilot

PCP is run on the origin and has an API listening on port 44323, shown in
the video hosting server part of Figure 4.1.

Figure 4.2: Diagram illustrating the way Performance Co-Pilot distributes
host measurements to the Vector framework³.

    PCP’s Performance Metrics Co-ordinating Daemons (PMCDs) sit between
monitoring clients and Performance Metric Domain Agents (PMDAs) seen
in Figure 4.2. The PMDAs are interfaces with functions that collect meas-
urements from the different sources; e.g. An Application, mailq, a database
or the kernel. Tensor connects to an integrated web API that communic-
ates with a Web daemon monitoring client. The Web Daemon client in
turn is connected to a Host’s PMCD. Clients first request a context using
a URI combined with /pmapi/context to access the bindings. The context
is maintained as long as the clients regularly reconnect to request PMAPI
operations.
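
From Python, this context flow could look like the following sketch; the
metric names are examples only, and the exact endpoints and parameters may
differ per PCP version:

    # Illustrative sketch of the PCP web API flow described above; the
    # metric names and parameters are examples, not Tensor's code.
    import requests

    BASE = 'http://origin.example.net:44323/pmapi'

    # 1. Request a context for the host whose PMCD we want to query.
    ctx = requests.get(BASE + '/context',
                       params={'hostname': 'localhost'}).json()['context']

    # 2. Fetch metric values through that context; polling regularly
    #    also keeps the context alive on the server.
    resp = requests.get('%s/%d/_fetch' % (BASE, ctx),
                        params={'names': 'kernel.all.load,mem.util.used'})
    print(resp.json())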

³ http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html


4.3.3 Frontend

Frontend development was based on the open-source Netflix Vector framework
(HTML/CSS/JS)[26], which makes use of the open-source Angular Javascript
framework. The Vector framework was extended to shift its focus towards load
metrics and away from system metrics.

    In Tensor the frontend is used to display the metrics retrieved from the
PCP API on the origin or from the Tensor API. The frontend extends the
Angular app with both third-party libraries and the project's source code.

    The files in the dashboard folder render the structure of the HTML page
and load the configured charts into the dashboard layout. The dashboard
controller starts a number of services as soon as a URL is entered into the
hostname input. The dashboard service retrieves the PCP context through the
pmapi service. If the context is valid, the dashboard service starts an
interval that retrieves measurements from the host every 5 seconds.
Measurements are retrieved through the pmapi service and the wrkapi service,
which update the metrics of PCP and WRK. The measurements traverse one of
the abstraction layers in the metrics folder before they are passed on to a
datamodel. The abstraction layer converts raw metrics from PCP into the
metrics shown in the charts. All data shown in the charts is directly linked
to its respective datamodel. The way charts are loaded into the dashboard is
very modular and allows different metrics to use the same chart template.


Figure 4.3: Screenshot showing Tensor's frontend while running a load test,
with sources of data overlaid.

    When Tensor is started as a web server, the Javascript source code is
compiled into one file using Gulp. Gulp is a toolkit that handles automation
in development environments; it automatically loads third-party libraries
into Tensor's Javascript library. Figure 4.3 shows the Tensor frontend
running in a Firefox browser.

CHAPTER 5

Experiments

As part of Tensor's testing, the load testing tool was run against two
origin servers. Both servers had all of their supported implementations
tested with a single run of Tensor.

    A run in the experiment consists of five seconds of pinging the server,
followed by five seconds of load testing against a fifty MB zip file.
ABS-specific load is then applied at the click of a button, for a range of
one to two hundred concurrent connections. A run is canceled if the server
crashes as a result of the load.

5.1 Testing setup

5.1.1 Source

The source is the server on which Tensor is hosted and from which the load
is generated. Initially two source servers were made ready for the
experiments, but one source was virtualized on the same machine as
demo.unified-streaming.com and thus discarded. A high-bandwidth Amazon EC2
instance remained as the sole source of load testing for the experiments.


ec2-XX-XX-XXX-XXX.eu-central-1.compute.amazonaws.com

First a default Ubuntu AMI (Amazon Machine Image) was initialized on an
entry-level EC2 instance, on which the software required for hosting Tensor
was installed. Then a snapshot was made of the EC2 instance state to make a
custom AMI containing the Tensor tool. This AMI was then transferred from
AMI storage to an EC2 instance with the required network interface speed.
The address shown in the section title is the URL, in which the X's identify
the virtual machine.

Source                    ec2-XX-XX-XXX-XXX.eu-central-1.compute.amazonaws.com
IP                        52.29.156.202
Location                  Frankfurt, Germany
OS                        Ubuntu 14.04.2 x64
Kernel                    3.13.0-48 (12-03-2015)
CPU                       36 vCPUs @ 2.9 GHz (Intel Xeon E5-2666 v3)
RAM                       60 GB
Network Interface Speed   10 Gbit/s

5.1.2 Video Hosting Servers

usp.abewiersma.nl

usp.abewiersma.nl is hosted from an entry-level virtual machine hosted by
DigitalOcean at their Amsterdam2 server location. The DigitalOcean hosting
service is maintained/provided by the Telecity group.

Origin                    usp.abewiersma.nl
IP                        188.226.172.129
Location                  Amsterdam, Netherlands
OS                        Ubuntu 12.04.3 x64
Kernel                    3.8.0-29 (14-08-2013)
CPU                       1 vCore @ 2.4 GHz (Intel Xeon E5-2630L v2)
RAM                       512 MB
Storage                   20 GB SSD
Network Interface Speed   1 Gbit/s
Server Software           Apache/2.4.7
ABS Implementation        Unified Streaming Package 1.7.16


demo.unified-streaming.com

demo.unified-streaming.com is hosted from a virtual machine on a privately
owned dedicated computer. The Unified Streaming team provided the
specifications of the server hardware and a document with the segment URLs.

Origin                    demo.unified-streaming.com
IP                        46.23.86.207
Location                  Kockengen, Netherlands
CPU                       4 vCores @ 2.3 GHz
RAM                       4 GB
Network Interface Speed   1 Gbit/s
Server Software           Apache/2.2.22
ABS Implementation        Unified Streaming Package 1.7.17

5.1.3     Video & Software
Test Video
The testing video from which fragments are requested is the open-source
animated movie Tears of Steel. The movie was made to showcase the new
enhancements and visual capabilities of Blender.

The Unified Streaming Package

The Unified Streaming Package supports the following ABS implementations
from one source:

   • HDS

   • HLS

   • HSS (also referred to as MSS)

   • DASH

The Unified Origin package is developed by Unified Streaming and is under
active development.


5.2 Results

Tables 5.1 through 5.8 show single-run-per-implementation results from the
Tensor load testing tool. In each run the number of concurrent connections
increases by one every 5 seconds.

5.2.1    Server: usp.abewiersma.nl

Table 5.1: Summary of HDS load testing on usp.abewiersma.nl. For images of
HDS load testing running see Appendix A.1.

HDS                  Init         20:30:35 at 25 connections   Failure
Ping                 6.90 ms      -                            -
Connections          1            25                           80-90
Throughput           47.69 MB/s   110 MB/s                     0 MB/s
Segments/s           -            155                          0
CPU Utilization      0%           50%                          100%
Disk Utilization     0%           2-5%                         35-40%
Memory Utilization   140 MB       240 MB                       512 MB

Table 5.2: Summary of HLS load testing on usp.abewiersma.nl. For images of
HLS load testing running see Appendix A.2.

HLS                  Init         20:47:05 at 25 connections   Failure
Ping                 6.86 ms      -                            -
Connections          1            25                           70
Throughput           50.45 MB/s   95 MB/s                      0 MB/s
Segments/s           -            300                          0
CPU Utilization      0%           60%                          100%
Disk Utilization     0%           0%                           30%
Memory Utilization   120 MB       300 MB                       512 MB


Table 5.3: Summary of HSS load testing on usp.abewiersma.nl. For images of
HSS load testing running see Appendix A.3.

HSS                  Init         22:09:40 at 25 connections   Failure
Ping                 6.86 ms      -                            -
Connections          1            25                           80
Throughput           51.95 MB/s   110 MB/s                     0 MB/s
Segments/s           -            310                          0
CPU Utilization      0%           50%                          100%
Disk Utilization     0%           2-5%                         10%
Memory Utilization   190 MB       250 MB                       512 MB

Table 5.4: Summary of DASH load testing on usp.abewiersma.nl. For images of
DASH load testing running see Appendix A.4.

DASH                 Init         22:38:35 at 25 connections   Failure
Ping                 7.47 ms      -                            -
Connections          1            25                           90
Throughput           50.64 MB/s   110 MB/s                     0 MB/s
Segments/s           -            300                          0
CPU Utilization      0%           50%                          100%
Disk Utilization     0%           15%                          35-40%
Memory Utilization   130 MB       240 MB                       512 MB

Discussion

Load testing showed unanimous failure at the point where the memory is
completely filled, after which the CPU usage goes to 100%. HLS failed with
both a lower throughput and fewer connections than the other
implementations. Initial throughput measurements consistently came out lower
than the load testing throughput measurements. HLS does not reach a
throughput of 110 MB/s where the other implementations do; 110 MB/s is close
to the network interface speed of the server (1 Gbit/s = 125 MB/s).


5.2.2 Server: demo.unified-streaming.com

Table 5.5: Summary of HDS load testing on demo.unified-streaming.com. For
images of HDS load testing running see Appendix B.1.

HDS                  Init         22:37:00 at 70 connections   End
Ping                 7.63 ms      -                            -
Connections          1            70                           200
Throughput           11.58 MB/s   90 MB/s                      90 MB/s
Segments/s           -            54                           45
CPU Utilization      5%           25%                          25%
Disk Utilization     0%           15%                          20%
Memory Utilization   1000 MB      1250 MB                      1500 MB

    A pronounced 30 errors (timeouts) per second was recorded near the end
of the HDS test.

Table 5.6: Summary of HLS load testing on demo.unified-streaming.com. For
images of HLS load testing running see Appendix B.2.

HLS                  Init         23:20:02 at 70 connections   End
Ping                 7.34 ms      -                            -
Connections          1            70                           200
Throughput           8.90 MB/s    85 MB/s                      70 MB/s
Segments/s           -            325                          280
CPU Utilization      4%           45%                          35%
Disk Utilization     0%           10%                          10%
Memory Utilization   1050 MB      1250 MB                      1500 MB


Table 5.7: Summary of HSS load testing on demo.unified-streaming.com. For
images of HSS load testing running see Appendix B.3.

HSS                  Init         00:31:19 at 70 connections   End
Ping                 7.71 ms      -                            -
Connections          1            70                           200
Throughput           26.57 MB/s   90 MB/s                      75 MB/s
Segments/s           -            220                          180
CPU Utilization      5%           35%                          30%
Disk Utilization     0%           10%                          15%
Memory Utilization   1000 MB      1250 MB                      1400 MB

Table 5.8: Summary of DASH load testing on demo.unified-streaming.com. For
images of DASH load testing running see Appendix B.4.

DASH                 Init         00:51:33 at 70 connections   End
Ping                 7.60 ms      -                            -
Connections          1            70                           200
Throughput           20.02 MB/s   85 MB/s                      75 MB/s
Segments/s           -            205                          180
CPU Utilization      4%           30%                          30%
Disk Utilization     5%           15%                          15%
Memory Utilization   1000 MB      1100 MB                      1400 MB

Discussion

The initial throughput measurements seem random every time the tool is run.
These measurements should be consistent, because each run measures against
the same file on the same server. The Tensor web application only goes up to
200 concurrent connections, which is why every load test ends at 200
connections. During every implementation's load test the throughput goes
down as the number of concurrent connections goes past the steady state.

CHAPTER 6

Conclusion

In this thesis the goal was to find a meaningful way to do Adaptive Bit-rate
Streaming load testing. The tool that was made for this purpose, Tensor,
performs well at load testing. However, for now the baseline testing of the
throughput to the server is not representative and thus non-functional.

    The results show that the demo server, provided by the Unified Streaming
team, does not reach its maximum network interface speed of 1 Gbit/s. On the
other hand, the demo server should be able to support more concurrent users
than the DigitalOcean cloud server due to its higher memory capacity. With
about 200 concurrent users, the average throughput per user over a link of
less than 1 Gbit/s falls below the Netflix-recommended 5.0 Mbit/s for HD
streaming: at exactly 1 Gbit/s, 200 users × 5 Mbit/s would already saturate
the link. A load higher than 200 concurrent HD streaming clients might
therefore stop the clients from receiving their optimal video bit-rate
quality.

    The DigitalOcean cloud server is limited by the relatively small memory
it has available and completely stalls as the number of concurrent
connections reaches about 90. For every connection, Apache spawns a process
to maintain it, and every process uses a small amount of memory. This causes
the memory to fill up to the point that swapping starts and connection
attempts get dropped.


    The Adaptive Bit-rate Streaming protocol implementations perform fairly
similarly, with a few exceptions:

   • The Apple HLS protocol performs worst at achieving the allotted
     1 Gbit/s throughput.

   • The Adobe HDS protocol uses the fewest segments for streaming, and
     therefore suffers from many requests that time out.

6.1 Future Work

As a reference for load testing it is important to finish/tweak the baseline
throughput testing. Without the server specifications that were known
beforehand, estimating the bandwidth to the servers would have been
impossible. Currently, as a way to estimate the bandwidth, a single WRK
connection requests a 50MB file. When testing this set-up, Tensor was run
from a laptop development environment, and in this instance WRK gave similar
results to IPERF. Testing of the set-up should have been more thorough to
assure its accuracy.

The tool still suffers from infancy bugs, like:

   • A bug that makes the tooltips of the last nodes in a graph unreadable,
     because the next graph clips over the tooltip.

   • Drag and drop of the graph widgets is broken.

Bibliography

[1] Cisco. Cisco Visual Networking Index: Forecast and Methodology,
    2014–2019. White paper, 2015.

[2] J. Postel. User Datagram Protocol, August 1980. RFC 768.

[3] Saamer Akhshabi, Ali C Begen, and Constantine Dovrolis. An experi-
    mental evaluation of rate-adaptation algorithms in adaptive streaming
    over HTTP. In Proceedings of the second annual ACM conference on
    Multimedia systems, pages 157–168. ACM, 2011.

[4] Haakon Riiser, Håkon S Bergsaker, Paul Vigmostad, Pål Halvorsen, and
    Carsten Griwodz. A comparison of quality scheduling in commercial
    adaptive HTTP streaming solutions on a 3G network. In Proceedings of
    the 4th Workshop on Mobile Video, pages 25–30. ACM, 2012.

[5] P. Pegus, Emmanuel Cecchet, and Prashant Shenoy. Video BenchLab:
    an open platform for realistic benchmarking of streaming media work-
    loads. In Proc. ACM Multimedia Systems Conference (MMSys), Port-
    land, OR, 2015.

[6] Stefan Lederer, Christopher Müller, and Christian Timmerer. Dynamic
    adaptive streaming over HTTP dataset. In Proceedings of the 3rd Mul-
    timedia Systems Conference, pages 89–94. ACM, 2012.

 [7] Apple Inc. HTTP Live Streaming. IETF draft, November 2015.
     https://tools.ietf.org/pdf/draft-pantos-http-live-streaming-18.pdf.

 [8] Haakon Riiser, Pål Halvorsen, Carsten Griwodz, and Dag Johansen.
     Low overhead container format for adaptive streaming. In Proceedings
     of the first annual ACM SIGMM conference on Multimedia systems,
     pages 193–198. ACM, 2010.

 [9] Microsoft. Smooth Streaming Protocol. [MS-SSTR] - v20150630, June 2015.
     http://download.microsoft.com/download/9/5/E/95EF66AF-9026-4BB0-A41D-A4F81802D92C/[MS-SSTR].pdf.

[10] Adobe Systems Incorporated. HTTP dynamic streaming specification.
     Version 3.0 FINAL, August 2013.
     http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-hds-specification.pdf.

[11] ISO/IEC. Information technology – Dynamic adaptive streaming over
     HTTP (DASH) Part 1. Technical Report ISO/IEC 23009-1:2014, Inter-
     national Organization for Standardization, Geneva, Switzerland, 2014.

[12] Dan Grois, Detlev Marpe, Amit Mulayoff, Benaya Itzhaky, and Ofer
     Hadar. Performance comparison of H.265/MPEG-HEVC, VP9, and
     H.264/MPEG-AVC encoders. In Picture Coding Symposium (PCS), 2013,
     pages 394–397. IEEE, 2013.

[13] DASH-IF. DASH.js.      https://github.com/Dash-Industry-Forum/
     dash.js/wiki.

[14] RTL NL. DASHplay. http://dashplay.org/.

[15] Pengpeng Ni, Alexander Eichhorn, Carsten Griwodz, and Pål Halvorsen.
     Fine-grained scalable streaming from coarse-grained videos. In Proceed-
     ings of the 18th international workshop on Network and operating sys-
     tems support for digital audio and video, pages 103–108. ACM, 2009.

[16] Michael Zink, Oliver Künzel, Jens Schmitt, and Ralf Steinmetz. Sub-
     jective impression of variations in layer encoded videos. In Quality of
     Service—IWQoS 2003, pages 137–154. Springer, 2003.

[17] Anya George Tharakan and Subrat Patnaik. Strong dollar hurts
     Akamai’s profit forecast, shares fall. Reuters, April 2015.

[18] Vijay Kumar Adhikari, Yang Guo, Fang Hao, Matteo Varvello, Volker
     Hilt, Moritz Steiner, and Zhi-Li Zhang. Unreeling Netflix: Understand-
     ing and improving multi-CDN movie delivery. In INFOCOM, 2012 Pro-
     ceedings IEEE, pages 1620–1628. IEEE, 2012.

[19] Henrik Frystyk Nielsen, James Gettys, Anselm Baird-Smith, Eric
     Prud’hommeaux, Håkon Wium Lie, and Chris Lilley. Network perform-
     ance effects of HTTP/1.1, CSS1, and PNG. In ACM SIGCOMM Com-
     puter Communication Review, volume 27, pages 155–166. ACM, 1997.

[20] M. Belshe, R. Peon, and M. Thomson, Ed. Hypertext Transfer Protocol
     Version 2 (HTTP/2). RFC 7540, Internet Engineering Task Force
     (IETF), May 2015.

[21] epoll(7) Linux User’s Manual, May 2015.

[22] Jonathan Lemon. Kqueue: A Generic and Scalable Event Notification
     Facility. In USENIX Annual Technical Conference, FREENIX Track,
     pages 141–153, 2001.

[23] Armin Ronacher. Flask. https://github.com/mitsuhiko/flask,
     http://flask.pocoo.org/docs/, 2015.

[24] Miguel Grinberg. Flask Web Development: Developing Web Applica-
     tions with Python. O’Reilly Media, 2014.

[25] Will Glozer. WRK: a modern HTTP benchmarking tool. https://
     github.com/wg/wrk, 2015.

[26] Netflix Inc. Vector. https://github.com/Netflix/vector, 2015.

[27] Didier J LeGall. MPEG (Moving Pictures Expert Group) video com-
     pression algorithm: a review. In Electronic Imaging’91, San Jose, CA,
     pages 444–457. International Society for Optics and Photonics, 1991.

Appendices

APPENDIX A

Results usp.abewiersma.nl
A.1     HDS

Figure A.1: This figure shows the benchmarking of HDS on usp.abewiersma.nl
reaching 25 connections at the red marker, at which point the throughput to
the server remains approximately the same whilst concurrent connections
keep increasing.

Figure A.2: This figure shows the moment during the benchmarking of HDS on
usp.abewiersma.nl at which failure has occurred.
A.2     HLS

Figure A.3: This figure shows the benchmarking of HLS on usp.abewiersma.nl
reaching 25 connections at the red marker, at which point the throughput to
the server remains approximately the same whilst concurrent connections
keep increasing.

Figure A.4: This figure shows the moment during the benchmarking of HLS on
usp.abewiersma.nl at which failure has occurred.
A.3     HSS

Figure A.5: This figure shows the benchmarking of HSS on usp.abewiersma.nl
reaching 25 connections at the red marker, at which point the throughput to
the server remains approximately the same whilst concurrent connections
keep increasing.

Figure A.6: This figure shows the moment during the benchmarking of HSS on
usp.abewiersma.nl at which failure has occurred.
A.4     DASH

Figure A.7: This figure shows the benchmarking of DASH on usp.abewiersma.nl
reaching 25 connections at the red marker, at which point the throughput to
the server remains approximately the same whilst concurrent connections
keep increasing.

Figure A.8: This figure shows the moment during the benchmarking of DASH on
usp.abewiersma.nl at which failure has occurred.
APPENDIX B

Results demo.unified-streaming.com
B.1      HDS

Figure B.1: This figure shows the benchmarking of HDS on
demo.unified-streaming.com reaching 70 connections at the red marker, at
which point the throughput to the server remains approximately the same
whilst concurrent connections keep increasing.

Figure B.2: This figure shows the end of the benchmarking of HDS on
demo.unified-streaming.com, the point at which 200 concurrent connections
have been run.
B.2      HLS

Figure B.3: This figure shows the benchmarking of HLS on
demo.unified-streaming.com reaching 70 connections at the red marker, at
which point the throughput to the server remains approximately the same
whilst concurrent connections keep increasing.

Figure B.4: This figure shows the end of the benchmarking of HLS on
demo.unified-streaming.com, the point at which 200 concurrent connections
have been run.
B.3      HSS

Figure B.5: This figure shows the benchmarking of HSS on
demo.unified-streaming.com reaching 70 connections at the red marker, at
which point the throughput to the server remains approximately the same
whilst concurrent connections keep increasing.

Figure B.6: This figure shows the end of the benchmarking of HSS on
demo.unified-streaming.com, the point at which 200 concurrent connections
have been run.
B.4     DASH

Figure B.7: This figure shows the benchmarking of DASH on
demo.unified-streaming.com reaching 70 connections at the red marker, at
which point the throughput to the server remains approximately the same
whilst concurrent connections keep increasing.

Figure B.8: This figure shows the end of the benchmarking of DASH on
demo.unified-streaming.com, the point at which 200 concurrent connections
have been run. Note that the dip in throughput is not considered a failure
because the server recovers.
APPENDIX C

Glossary

MPEG stands for Moving Pictures Expert Group, a group that sets standards
for audio/video compression and transmission [27].

IETF stands for Internet Engineering Task Force, a group whose goal is to
make the Internet work better. The group publishes documents called RFCs
(Request for Comments), which describe parts of the Internet and give
opinions on what the best practices are.