In our last blog, we discussed the “groundbreaking media delivery platform” that FLAME will bring to smart city infrastructures throughout Europe. Here we look at how FLAME will allow media service providers to develop and deploy novel user experiences.
Faster response, better engagement
FLAME can deploy services at the edge of the network (e.g. in a street cabinet). As a consequence, compute capabilities may be located just one hop away (at best) from the users. Deployment planning tools and reactive systems can place compute services at the edge, providing low-latency access to those services while distributing the compute workload across the network. Together with emerging radio access technologies, such as 5G radio or gigabit Wi-Fi, we expect to ultimately see service-level latencies of 5 ms or less.
What this means is that service providers can offer a far richer experience to users, especially when dealing with user-uploaded video. Local processing could use computer vision techniques to quickly identify interesting content and promote it to other users on the network, rather than relying on centralized processing, which requires transferring the content to a central location (and consuming the corresponding bandwidth) before its ‘value’ can even be assessed.
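To make the idea concrete, here is a minimal sketch of the kind of lightweight analysis that could run at an edge node. It uses simple frame differencing as a stand-in for a real computer vision model; the file path, the threshold and the notion of “interesting” as motion are illustrative assumptions, not anything provided by FLAME.

```python
# Minimal sketch: flag "interesting" segments of an uploaded clip at the edge
# using simple frame differencing (a stand-in for a richer computer-vision model).
# The file path and threshold are illustrative assumptions, not FLAME APIs.
import cv2

def interesting_segments(path, motion_threshold=30.0):
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    timestamps = []
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        # Mean absolute pixel difference between consecutive frames as a crude
        # "activity" score; a real deployment would use a trained model instead.
        diff = cv2.absdiff(frame, prev).mean()
        if diff > motion_threshold:
            timestamps.append(cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0)
        prev = frame
    cap.release()
    return timestamps  # seconds into the clip worth promoting for review

print(interesting_segments("upload.mp4"))
```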
By improving the responsiveness of the system to user desires, FLAME ultimately leads to a far more personalized and localized user experience, which in turn increases the content’s impact and the user’s engagement.
Improved service request routing
In a conventional network, an instance of a (media) service is found using the DNS: the user’s client queries a DNS server to map a domain name to an IP address. The subsequent service request is then routed through the network between the client and the service. If the content is served by a CDN, the DNS service maintains a list of IP addresses that can be returned for a given domain name and tries to return the address of a node geographically close to the client.
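As a concrete reminder of that conventional flow, the snippet below resolves a hypothetical service name before any HTTP request is sent; which instance the client ends up talking to is decided entirely by the address the DNS happens to return.

```python
# Conventional service location: the client resolves a domain name to an IP
# address first, then sends its HTTP request to whichever address the DNS
# (and, for CDNs, its geo-aware policy) returns.
# "media.example.com" is a hypothetical service name used for illustration.
import socket

infos = socket.getaddrinfo("media.example.com", 443, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in infos:
    print(family.name, sockaddr[0])  # the instance(s) the client will talk to
```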
Should the (media) service provider want to replace an instance of their service with another at a different IP address (and potentially a different physical location), the DNS tables must be updated. This is done by updating the authoritative DNS server, which then propagates the new mapping to DNS servers across the world.
DNS propagation can take up to 48 hours, so fast, dynamic switching from one service instance to another is not possible in conventional networks. Moreover, there is no general practice for propagating these changes to the client. This matters because clients cache previously received DNS responses, a practice that is widespread in browsers and even at the operating system level. If the mapping from DNS name to IP address changes (and eventually propagates through the DNS system), the client might not learn of it until it flushes its local DNS cache, which further delays any reaction to the change.
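That caching behaviour is governed by the time-to-live (TTL) on each DNS answer. The short sketch below inspects it; it assumes the third-party dnspython package and the same illustrative domain name as above. Until the TTL expires, resolvers and clients may keep serving the old mapping.

```python
# Sketch: inspect the TTL on a DNS answer to see how long resolvers and
# clients may keep serving a stale mapping after the provider changes it.
# Requires the third-party dnspython package; the domain is illustrative.
import dns.resolver

answer = dns.resolver.resolve("media.example.com", "A")
print("addresses:", [r.address for r in answer])
print("may be cached for up to", answer.rrset.ttl, "seconds")
```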
In contrast, the FLAME platform provides fast switching (between 10 and 20 ms) from one service instance to another by not relying on the DNS for service location. Furthermore, the FLAME platform does not rely on the mobility management approaches typically found in IP networks, which usually lead to inefficient ‘triangular’ routing of requests through a common ‘anchor’ point. Instead, FLAME’s fast and dynamic service routing provides so-called direct path mobility, where the path between the requester and the responding service can be chosen optimally (e.g., the shortest or direct path to a selected instance), avoiding anchor points altogether.
Multicast delivery of HTTP responses
The underlying properties of the FLAME network enable multicast-based delivery of HTTP responses to service requests. This is done transparently to the (otherwise unicast) semantics of HTTP transactions. The platform provides this capability at the level of each individual service request for constantly changing user groups. Scenarios in which individual users leave the multicast delivery, e.g., when a user watching a video together with many other users pauses playback, are therefore supported without any particular penalty to the operation of the platform beyond the obvious additional unicast transmission of the responses.
With this, media services that create a semi-synchronous request pattern across a number of users (e.g., HTTP-level streaming over a popular catalogue of videos, or the synchronization of HTTP-level resources across a number of clients) will likely see a significant reduction in cost thanks to the multicast delivery realized by the platform, without needing to adapt the media services to the specifics of that delivery. The potential for such cost reduction has been showcased at recent events such as Mobile World Congress 2016 and 2017 for HTTP-level media streaming scenarios. The reduced cost and improved network utilization will also have a significant impact on quality of experience (QoE) for media service users.
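FLAME performs this transparently inside the network, so no example of the platform mechanism itself can be given here. The sketch below is only an application-level analogue of the benefit, assuming a hypothetical fetch function: semi-synchronous requests for the same resource share a single expensive fetch rather than each triggering one.

```python
# Application-level analogue of the benefit: coalesce semi-synchronous requests
# for the same resource so the expensive fetch happens once. FLAME achieves the
# equivalent in the network via multicast delivery; this sketch only illustrates
# the request pattern, not the platform mechanism. Error handling is omitted.
import threading

class SingleFlight:
    def __init__(self, fetch):
        self._fetch = fetch            # e.g. pulls a video segment from origin
        self._lock = threading.Lock()
        self._inflight = {}            # key -> (completion event, result box)

    def get(self, key):
        with self._lock:
            entry = self._inflight.get(key)
            if entry is None:
                entry = (threading.Event(), {})
                self._inflight[key] = entry
                leader = True          # first requester performs the fetch
            else:
                leader = False         # later requesters wait for the result
        event, box = entry
        if leader:
            box["value"] = self._fetch(key)   # one fetch serves all waiters
            with self._lock:
                del self._inflight[key]
            event.set()
        else:
            event.wait()
        return box["value"]
```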
Net-level indirection
When a service relies on many surrogate service endpoints in the network, including content delivery nodes, a given resource may well be available at one surrogate instance but not at another. State synchronization across all surrogate instances therefore becomes a vital issue. As an alternative to such state synchronization at the media service or application level, the platform also provides the capability to indirect (redirect) service requests at the network level. When a service request sent to one surrogate instance results in a 404 or 5xx error response, the platform can be configured to redirect the original request to an alternative surrogate. Nesting these operations effectively produces a network-level ‘search’ among all available surrogate instances until either the search is exhausted (with a negative result) or the resource is found.
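The logic is easiest to see at the application level. The hedged sketch below tries a list of hypothetical surrogate URLs in turn and falls back on a 404 or 5xx response; FLAME performs the equivalent redirection inside the network, without any client involvement.

```python
# Sketch of the indirection idea at application level: try surrogates in turn
# and fall back on a 404 or 5xx response. FLAME performs this redirection
# inside the network; the surrogate URLs and path here are hypothetical.
import requests

def fetch_with_fallback(path, surrogates):
    for base in surrogates:
        resp = requests.get(base + path, timeout=2)
        if resp.status_code == 404 or resp.status_code >= 500:
            continue  # resource missing or surrogate failing: try the next one
        return resp
    return None       # search exhausted with a negative result

resp = fetch_with_fallback(
    "/media/clip-42.mp4",
    ["https://edge1.example.net", "https://edge2.example.net"],
)
```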
We expect this capability to play a significant role in distributed media services that create local content or state, where access to that state is not limited to the local service, i.e., it may be requested from other places in the network. User-generated content is one such example. Such net-level indirection could significantly reduce the traffic that would otherwise be needed for (possibly unnecessary) state synchronization. This in turn leads to cost reductions that directly benefit media service providers and users alike.
Less chance of insecure direct object references
In many use cases, such as the distribution of social media resources (e.g., photos, videos), the use of CDNs usually leads to the leakage of insecure object references. This is because the original service request, possibly issued within the secure transaction context of the service, is redirected (via the DNS canonical name entry) to what is typically the CDN cache node closest to the user. As a result, the direct link to the media being retrieved moves from the secure service context to the CDN retrieval context, which is likely insecure since it is usually unaware of the original security context.
A good example is what could happen if someone shared a photo on a social networking site with just her friends. If the photo ended up distributed via a CDN, her friends would be given the CDN address of the image in their newsfeeds and could then access the image because they were authenticated by the social network. However, her friends could also copy the address of the image in the CDN and share it with people not authenticated by the social network (and not friends of the poster), breaking her privacy.
The problem arises because the CDN is usually not aware of the authentication required to view or use the media. Sending the (CDN) link to somebody without proper authentication enables that person to simply access media that was originally protected through the authentication defined by the social media service. The underlying network technology in FLAME enables the use of surrogates instead. With this, CDNs morph into surrogate service endpoints that can also hold the necessary security context when serving the desired content. This simplifies the security model and benefits the media service developer. The benefit to the media service user is that there is less possibility of mistakenly or wilfully sharing content not intended for sharing.
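As an illustration of what ‘holding the security context’ could look like at a surrogate, the sketch below shows a hypothetical endpoint that re-checks an authorisation token before serving the media, so that a copied link alone is not enough. Flask, the token check and the media store path are all assumptions made for the example, not part of the FLAME platform.

```python
# Sketch: a surrogate endpoint that re-checks the originating service's
# authorisation before serving the media, so a copied link alone is useless.
# Flask, the token set and the media store path are illustrative assumptions.
from flask import Flask, request, abort, send_file

app = Flask(__name__)
AUTHORISED_TOKENS = {"token-for-alices-friends"}   # stand-in for the real check

@app.route("/media/<media_id>")
def serve_media(media_id):
    token = request.headers.get("Authorization", "")
    if token not in AUTHORISED_TOKENS:
        abort(403)                       # no valid security context: refuse
    return send_file(f"/var/media/{media_id}.jpg")  # serve from the surrogate
```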
Secure end-to-end access to content
In conventional content delivery networks there are various difficulties in serving HTTPS content. When a user accesses a website over HTTPS they expect that their connection is to the website itself and that it is encrypted end to end. If a conventional CDN is used as the primary host for the site then, in a sense, the CDN acts as a “man in the middle”: it presents a certificate claiming to be the site in question, yet potentially retrieves data from the origin site (as well as serving cached static content). In some configurations the connection from the CDN to the origin site is unencrypted, breaking the contract with the end user once again.
By lifting content delivery onto the level of surrogates, the FLAME platform exposes CDNs as properly secured endpoints. The necessary certificate sharing between content provider and CDN provider then allows content delivery to be secured (again) according to the originally intended end-user-facing contract, making it more secure for provider and consumer alike.
At some level, all providers want better engagement with their users, and FLAME will help achieve that. By offering a faster, more responsive and more secure content delivery system, service providers improve their trust, their engagement, and ultimately their competitiveness in a changing media world.
Post from Dirk Trossen, InterDigital Europe