# 🍠 Tuber

*Serve Your Media Without Limits From a "Potato" Computer Hosted in Mom's Basement: Take our Beloved "Series of Tubes" to Full Power*

> Tubers are enlarged structures used as storage organs for nutrients [...] used for regrowth energy during the next season, and as a means of asexual reproduction. — Wikipedia


Tuber is a stand-alone HTTP "middleware" or "reverse-proxy" server designed to enable a P2P CDN (peer-to-peer content delivery network) on top of ANY* existing web server or web application, without requiring modifications to the application itself.

Tuber's client-side code is a progressive enhancement. It requires JavaScript and WebRTC support in order to function; however, it won't break your site for users whose web browsers do not support those features or have them turned off.

* Tuber uses a ServiceWorker to make this possible, so it may not work properly on advanced web applications which already register a ServiceWorker.
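
To illustrate the progressive-enhancement point, a minimal TypeScript sketch of the feature detection involved (the `/tuber.sw.js` path is a placeholder, not Tuber's actual filename):

```typescript
// Only activate Tuber's P2P path when the browser supports both
// ServiceWorker and WebRTC. The "/tuber.sw.js" path is a placeholder.
async function maybeEnableTuber(): Promise<void> {
  const hasWebRTC = typeof RTCPeerConnection !== "undefined";
  if (!("serviceWorker" in navigator) || !hasWebRTC) {
    // Unsupported browser: do nothing. Requests fall through to the
    // origin server over plain HTTP and the site keeps working.
    return;
  }
  await navigator.serviceWorker.register("/tuber.sw.js");
}

maybeEnableTuber().catch((err) => console.warn("tuber p2p disabled:", err));
```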

## Use Cases

### 1. NAT Punchthrough for Tunneled Self-hosted Servers

aka: "how 2 run home server without router login n without payin' $$$ for VPN"

*Diagram: a greenhouse self-hosting setup optimized with Tuber. First, the web browser connects to greenhouse. Greenhouse forwards the connection to a self-hosted server in your living room. The self-hosted server is running Tuber, so Tuber handles the request and injects tuber.js into the response body. Next, inside your web browser, tuber.js connects to the Tuber server through the original greenhouse connection and establishes a direct WebRTC UDP connection between itself and the Tuber server. Finally, elements in the web page like videos and links to large file downloads will have their HTTP requests handled by a ServiceWorker in the web browser. The ServiceWorker will forward the requests to tuber.js, which will in turn fulfill the requests directly with the Tuber server through the WebRTC DataChannel. This allows large files like videos to be transferred without incurring bandwidth costs on your greenhouse account, or on any other cloud provider you might use.*

The original post where I described this idea was published in 2017: https://sequentialread.com/serviceworker-webrtc-the-p2p-web-solution-i-have-been-looking-for

The idea was to utilize a cloud-based service running something like threshold for initial connectivity, but then bootstrap a WebRTC connection and serve the content directly from the self-hosted server, thus sidestepping the bandwidth costs associated with greenhouse or any other "cloud" service.

This way, one could easily self-host a server with Greenhouse and serve gigabytes' worth of files to hundreds of people without incurring a hefty Greenhouse bill at the end of the month.
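
A trickle-ICE bootstrap from the browser side might look something like the sketch below, assuming a hypothetical `/tuber/signal` WebSocket endpoint reachable through the tunneled connection; the message shapes are illustrative, not Tuber's actual protocol:

```typescript
// Trickle-ICE bootstrap sketch, assuming a hypothetical "/tuber/signal"
// WebSocket endpoint reachable through the existing tunneled connection.
// Message shapes here are illustrative, not Tuber's actual protocol.
const signal = new WebSocket(`wss://${location.host}/tuber/signal`);
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
const channel = pc.createDataChannel("tuber");

pc.onicecandidate = (e) => {
  // Trickle each ICE candidate to the server as soon as it's discovered.
  if (e.candidate) signal.send(JSON.stringify({ candidate: e.candidate }));
};

signal.onmessage = async (e) => {
  const msg = JSON.parse(e.data);
  if (msg.answer) await pc.setRemoteDescription(msg.answer);
  if (msg.candidate) await pc.addIceCandidate(msg.candidate);
};

signal.onopen = async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signal.send(JSON.stringify({ offer }));
};

channel.onopen = () => {
  // From here on, large transfers flow peer-to-peer over UDP,
  // bypassing the metered tunnel entirely.
};
```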

### 2. Bandwidth Scalability for Self-hosted Media (Especially Live Streams)

aka: "how 2 self-host twitch.tv on potato internet"

*Diagram: a peer-to-peer (p2p) video streaming network. The data starts on the streaming PC in your house, and from there, Tuber distributes it to a Tuber Booster running in your friend's house. The two Tubers running on home servers distribute the video stream directly to viewers who are on LTE/5G connections, because LTE/5G is incompatible with the WebRTC features we need for P2P distribution. The Tubers running on home servers also directly distribute the video to the viewers who have the fastest internet connections. Then the viewers with fast internet connections distribute the video to the viewers who have slower internet connections. Finally, any request for video that cannot be handled in time by a viewer is handled directly by the Tubers running on the home servers.*

The Novage/p2p-media-loader and WebTorrent projects got me excited about P2P content distribution, where the origin server may not even need to handle all the content requests from all the web surfers any more.

If a large portion of your website's bandwidth is related to the same file (or set of files), and multiple website visitors are typically accessing that content at the same time, then Tuber should be able to expand your ability to distribute content well beyond your home server's upload speed limit.

Live video streaming is a perfect example of this. Live video is usually delivered to web browsers in chunks via formats like HLS. When you're streaming, every time your streaming PC produces the next chunk of video, each viewer will need a copy of it ASAP. If your stream gets popular, then trying to upload all those video chunks to each viewer individually will saturate your internet connection and eventually your viewers will see the stream start cutting out or go down entirely.

But with Tuber, each viewer can also help upload video chunks to other viewers, similar to how BitTorrent works. Technically, Tuber's protocol has more in common with scalable distributed streaming à la Apache Kafka, but the end result should be the same: the video chunks propagate outwards from the publisher, and as soon as a viewer gets one small piece of the file, they can start forwarding that small piece to everyone else who needs it. If your viewers' average upload speed is significantly faster than your stream's bit rate, you may never have to pay for any additional servers as your viewer count grows.
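
A sketch of that fan-out idea, with hypothetical peer bookkeeping (`downstream`, `have`); the real protocol would decide who forwards what to whom, where this sketch just broadcasts:

```typescript
// Fan-out sketch: as soon as a viewer receives a video chunk over one
// DataChannel, it forwards that chunk to downstream peers that still
// need it. The peer bookkeeping ("downstream", "have") is hypothetical.
type ChunkId = string;

const downstream: RTCDataChannel[] = []; // peers this viewer uploads to
const have = new Set<ChunkId>(); // chunk ids already seen

function onChunk(id: ChunkId, data: ArrayBuffer): void {
  if (have.has(id)) return; // drop duplicates so chunks don't loop forever
  have.add(id);
  for (const peer of downstream) {
    if (peer.readyState === "open") peer.send(data);
  }
}
```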

## Tuber Implementations

- Permanent/App
  - Written in Golang, Pion WebRTC
- Ephemeral/Web
  - Written in TypeScript, Web Browser Only

## Tuber Operation Modes

- Central Planner
  - Maintain Content and Peer Database
    - Assign peer-to-peer connection candidates to Peers
    - Assign content partitions to Peers
    - Give upload/download priority instructions to Peers
  - Collate and Compare metrics
    - rank peers by longevity and upload bandwidth
    - detect and mitigate lying peers?
- Content Origin
  - HTTP reverse proxy server
    - inject the ServiceWorker into the page
    - track content that should be p2p-distributed
    - "which content will be requested next?" prediction/mapping per page..?
    - auto-detect or configure for HLS or other "stream-like" behavior
    - webhooks for adding new content
- Globally Dialable Gateway
  - STUN server..?
  - If it's also a Content Peer, maybe it can serve files directly over HTTP?
- Content Peer
  - Can be registered as "Trusted" with Central Planner
  - Partition / File Store
  - WebRTC DataChannel connections to other peers
- ServiceWorker (see the sketch after this list)
  - Intercepts HTTP requests from the web browser
    - Can handle requests via direct passthrough to Content Origin
    - Can handle requests via the Tuber protocol (through connections to peers)
    - Can handle requests via HTTP request to a Globally Dialable Gateway?
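
A sketch of those three handling paths from inside the ServiceWorker; `fetchViaPeers()` and the `/tuber/` URL prefix are illustrative stand-ins, not Tuber's real names:

```typescript
// Runs inside the ServiceWorker. "fetchViaPeers" and the "/tuber/"
// prefix are illustrative stand-ins, not Tuber's real names.
declare function fetchViaPeers(req: Request): Promise<Response | null>;

self.addEventListener("fetch", (event) => {
  const fetchEvent = event as FetchEvent;
  const url = new URL(fetchEvent.request.url);
  if (!url.pathname.startsWith("/tuber/")) {
    return; // path 1: direct passthrough to the Content Origin
  }
  fetchEvent.respondWith(
    (async () => {
      // path 2: try the Tuber protocol (DataChannels to peers) first...
      const fromPeers = await fetchViaPeers(fetchEvent.request);
      if (fromPeers) return fromPeers;
      // path 3: ...fall back to plain HTTP (origin or dialable gateway).
      return fetch(fetchEvent.request);
    })(),
  );
});
```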

## Tuber Architecture for Scalability

Tuber is like "State Communism": a central planner orchestrates/enforces the transfer of data according to:

> "from each according to their means, to each according to their needs."

https://github.com/owncast/owncast/issues/112#issuecomment-1007597971

## Notes

https://github.com/onedss/webrtc-demo

https://github.com/gortc/gortcd

https://github.com/coturn/coturn

```javascript
// this allows us to see if the web page is served via http2 or not:
// window.performance.getEntriesByType('navigation')[0].nextHopProtocol
// but unfortunately caniuse says it's at 70% globally.. it's not in Safari at all, as usual.
// at least in Firefox, for me, with the greenhouse caddy server, this WebSocket is using http1,
// so it actually makes another socket instead of using the existing one. (this is bad for greenhouse.)

// ideally in the future we could use HTTP with Server-Sent Events (SSE) for signalling under HTTP2,
// and WebSockets for signalling under HTTP1.1
```
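
That future transport choice might look something like this sketch, keyed off `nextHopProtocol` (with the same hypothetical `/tuber/signal` endpoint as above):

```typescript
// Transport-selection sketch: use SSE when the page arrived over HTTP/2
// or HTTP/3 (the signalling stream multiplexes onto the existing
// connection), otherwise fall back to a WebSocket. The "/tuber/signal"
// endpoint is hypothetical.
function openSignalling(): EventSource | WebSocket {
  const nav = performance.getEntriesByType("navigation")[0] as
    | PerformanceNavigationTiming
    | undefined;
  const proto = nav?.nextHopProtocol ?? ""; // "" where unsupported (e.g. Safari)
  if (proto === "h2" || proto === "h3") {
    return new EventSource("/tuber/signal");
  }
  return new WebSocket(`wss://${location.host}/tuber/signal`);
}
```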

https://stackoverflow.com/questions/35529455/keeping-webrtc-streams-connections-between-webpages

There is the IceRestart constraint, which might help. Apparently you can save the SDP info to local storage, reuse the negotiated SDP, then call an IceRestart to quickly reconnect.

> As described in section 3, the nominated ICE candidate pair is exchanged during an SDP offer/answer procedure, which is maintained by the JavaScript. The JavaScript can save the SDP information on the application server or in browser local storage. When a page reload has happened, a new JavaScript will be reloaded, which will create a new PeerConnection and retrieve the saved SDP information including the previous nominated candidate pair. Then the JavaScript can request the previous resource by sending a setLocalDescription(), which includes the saved SDP information. Instead of restarting an ICE procedure without additional action hints, the new JavaScript SHALL send an updateIce() which indicates that it has happened because of a page reload. If the ICE agent can then allocate the previous resource for the new JavaScript, it will use the previous nominated candidate pair for the first connectivity check, and if it succeeds the ICE agent will keep it marked as selected. The ICE agent can now send media using this candidate pair, even if it is running in Regular Nomination mode.
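
Worth noting: the `updateIce()` call in that quote comes from an older draft of the spec and never shipped in browsers; the piece that did standardize is the `iceRestart` offer option. A sketch of that half (the localStorage SDP reuse is browser-dependent and omitted here):

```typescript
// Sketch of the standardized half of the idea above: when the ICE
// connection fails (network change, tab resumed, etc.), create a new
// offer with { iceRestart: true } so fresh candidates are gathered and
// connectivity checks run again without tearing down the session.
function watchAndRestartIce(
  pc: RTCPeerConnection,
  sendOffer: (sdp: RTCSessionDescriptionInit) => void,
): void {
  pc.oniceconnectionstatechange = async () => {
    if (pc.iceConnectionState === "failed") {
      const offer = await pc.createOffer({ iceRestart: true });
      await pc.setLocalDescription(offer);
      sendOffer(offer); // deliver over the signalling channel as usual
    }
  };
}
```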

https://stackoverflow.com/questions/40363606/how-to-keep-webrtc-datachannel-open-in-phone-browser-inactive-tab

### DataChannel message size limits

https://lgrahl.de/articles/demystifying-webrtc-dc-size-limit.html

https://viblast.com/blog/2015/2/5/webrtc-data-channel-message-size/
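
A common workaround those articles converge on is to chunk application data to 16 KiB or less per message and use `bufferedAmount` for backpressure; a sketch:

```typescript
// Chunked-send sketch based on the articles above: stay at or below
// 16 KiB per message (the commonly cited safe cross-browser limit) and
// use bufferedAmount as backpressure so the SCTP send buffer can drain.
const CHUNK_SIZE = 16 * 1024;

async function sendLargeBuffer(
  channel: RTCDataChannel,
  data: ArrayBuffer,
): Promise<void> {
  channel.bufferedAmountLowThreshold = CHUNK_SIZE * 8;
  for (let offset = 0; offset < data.byteLength; offset += CHUNK_SIZE) {
    if (channel.bufferedAmount > CHUNK_SIZE * 64) {
      // Pause until the channel signals its buffer has drained.
      await new Promise<void>((resolve) => {
        channel.onbufferedamountlow = () => resolve();
      });
    }
    channel.send(data.slice(offset, offset + CHUNK_SIZE));
  }
}
```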