
It used to be hours of tedious work, combined with expensive proprietary licenses for single-server software desperately straining a CDN. If it wasn't the client CPU speed and bad codecs, it was the ISP bandwidth, or the server capacity. If you want to spin up a service like Netflix or Youtube today, it's easier because of the technologies available, but it's tougher because of the scale. To be honest, it's never been easy; the problem has merely moved location: from the pipe, to the server farm.

We all know the high-availability architecture for web applications using load balancers. We all know how Peering and CDNs work. Let's say we need to create 10,000 video streaming servers which can read static files from their origin (with unlimited bandwidth), but also host live services which record to disk. It's an arbitrary number, as usually only 5% of your users are on simultaneously. How do we mutate to that from what we know about an HTTP web application?

First, some streaming history. In the old 90s days of POTS modems, RealNetworks pioneered the Real-time Streaming Protocol (RTSP) as a specialist alternative to HTTP, then produced RealServer, which morphed into Helix Server. The theory was fairly simple: it worked as a stateful protocol on port 554 over TCP, and was a control layer on top of the Real-time Transport Protocol (RTP) and the Real-time Transport Control Protocol (RTCP). An example PLAY request looked like this: C->S: PLAY rtsp:///media.mp4 RTSP/1.0

It wasn't long before the funky animation plugin transformed into a suite of products which offered functionality on the backend. 2002 brought along Macromedia's Real-time Messaging Protocol (RTMP) for audio, video, and data. It was slightly similar to MPEG transport, with different channels to handle different data types, such as Flash Video and Action Message Format (AMF) for binary data. It eventually mutated into an SSL version (RTMPS), an encrypted DRM version (RTMPE), a tunnelled version for NAT traversal (RTMPT), and a UDP variant (Real-time Media Flow Protocol, RTMFP).

Around the same time, competitors and innovators sprung up. With the increase in available bandwidth, and increasingly effective compression algorithms (codecs), HTTP became viable again as a transport protocol for extremely large files. However, not all connections were equal, and HTTP is stateless. 3 gorillas developed different technologies based on the idea of Adaptive Bitrate Streaming:

- Microsoft Smooth Streaming (Silverlight, 2008)
- Adobe HTTP Dynamic Streaming (HDS, 2010-ish)
- MPEG Dynamic Adaptive Streaming over HTTP (DASH, 2011)

The idea is to split large video files into smaller segments which play in sequence. It's also pretty handy in stopping people downloading entire files.

Little needs to be said about the genius of BitTorrent, or of inverting the bandwidth problem into a positive by making the stream faster according to the demand for it. Peer-to-Peer had been waiting since the Napster/eMule days for an answer, and when it got it, they were out of business. BitTorrent always had a problem though: it was slow to get going. Then came WebTorrent, and Peerflix (which used Torrent Stream), which eventually produced Hollywood's worst nightmare: Popcorn Time.

Underlying all these has come to be the Web Real-Time Communication (WebRTC) standard, proposed by Ericsson/Google around 2011. It provides Peer-to-Peer connectivity inside the browser for audio, video, and data. Used in conjunction with other emerging standards, it's clear the future of video streaming is right back in the browser, be it on the desktop, phone, or an embedded/headless variant.

All in all, the history looks a bit like this: RTSP (90s), RTMP (2002), Smooth Streaming (2008), HDS (2010-ish), DASH (2011), WebRTC (2011), and the WebTorrent/Popcorn Time wave after that.

Scaling Up: Containerising Absolutely Everything

The latest fad in DevOps came from the Cloud practice of subdividing server hardware into potentially hundreds of Virtual servers. It's a clever business move: instead of one client per server, you can simply provide Virtual Machines on the same box, rather than Virtual Hosts in Apache. There were 3 early players in the Virtualisation Management space. The trouble with VMs is they reserve an enormous amount of RAM and CPU for themselves. Enter Docker, from a hackathon around 2013: Docker introduced the idea of Containers.
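
The 10,000-server figure and the 5% concurrency ratio from the capacity discussion can be turned into a quick back-of-envelope check. This is only a sketch: the per-server stream capacity and the resulting user count are hypothetical numbers, not from the text.

```python
# Back-of-envelope capacity check for the 10,000-server scenario.
# Only the server count and the ~5% concurrency ratio come from the text;
# streams_per_server is a hypothetical figure for illustration.

servers = 10_000
concurrency = 0.05           # ~5% of users online at any one time
streams_per_server = 1_000   # hypothetical capacity of one streaming server

concurrent_streams = servers * streams_per_server
total_users_supported = concurrent_streams / concurrency

print(f"{concurrent_streams:,} concurrent streams "
      f"-> roughly {total_users_supported:,.0f} registered users")
```

Running the arithmetic the other way (start from your user base, apply the concurrency ratio, divide by per-server capacity) is how you justify a number like 10,000 in the first place.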
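
The statefulness that distinguishes RTSP from plain HTTP can be sketched in a few lines: every request carries an incrementing CSeq, and once a session is established, a server-issued Session id. The URL and headers below are illustrative, and a real client would also parse responses and negotiate RTP/RTCP transports.

```python
# Minimal sketch of building RTSP/1.0 requests, per the protocol described
# above: a stateful, text-based control channel (normally TCP port 554)
# sitting on top of RTP/RTCP, which carry the actual media.
# "example.com" and the User-Agent value are illustrative assumptions.

def rtsp_request(method, url, cseq, session=None):
    """Build one RTSP/1.0 request. CSeq increments per request; every
    request after SETUP carries the server-issued Session id -- this is
    what makes RTSP stateful, unlike plain HTTP."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    if session is not None:
        lines.append(f"Session: {session}")
    lines.append("User-Agent: example-client")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# A typical control exchange: DESCRIBE before setup, PLAY once a session exists.
print(rtsp_request("DESCRIBE", "rtsp://example.com/media.mp4", 1).decode())
print(rtsp_request("PLAY", "rtsp://example.com/media.mp4", 3, session="12345678").decode())
```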
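
The core Adaptive Bitrate idea shared by Smooth Streaming, HDS, and DASH — pre-encode the video at several bitrates, cut it into short segments, and let the client pick a rendition per segment based on measured throughput — can be sketched like this. The bitrate ladder and safety margin are hypothetical values, not taken from any particular player.

```python
# Sketch of per-segment rendition selection in Adaptive Bitrate Streaming.
# The encoding ladder and 0.8 safety factor are illustrative assumptions.

RENDITIONS_KBPS = [235, 750, 1750, 4300]  # hypothetical encoding ladder

def pick_rendition(measured_kbps, ladder=RENDITIONS_KBPS, safety=0.8):
    """Choose the highest bitrate that fits within a safety margin of the
    measured bandwidth; fall back to the lowest rendition otherwise."""
    budget = measured_kbps * safety
    candidates = [r for r in ladder if r <= budget]
    return max(candidates) if candidates else min(ladder)

# Simulate a connection that degrades mid-stream: the client steps down
# (and back up) segment by segment instead of stalling -- the whole point
# of splitting the file into small segments that play in sequence.
for throughput in (6000, 2500, 900, 2500):
    print(throughput, "kbps measured ->", pick_rendition(throughput), "kbps rendition")
```

Real players smooth the throughput estimate over several segments and account for buffer level, but the decision loop is essentially this.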
