**EDIT:** Just to be clear, everything in this readme is currently an _idea_, the software to do this doesn't exist yet. I may never write it. But I just really wanted to write down the idea because I thought it was cool.
### Serve Your Media Without Limits From a "Potato" Computer Hosted in Mom's Basement:
#### Take our Beloved "Series of Tubes" to Full Power
> _Tubers are enlarged structures used as storage organs for nutrients [...] used for regrowth energy during the next season, and as a means of asexual reproduction._ [Wikipedia](https://en.wikipedia.org/wiki/Tuber)
Tuber is a stand-alone HTTP "middleware" or "reverse-proxy" server designed to enable a P2P CDN (peer-to-peer content delivery network) on top of ANY\* existing web server or web application, **without requiring modifications** to the application itself.
Tuber's client-side code is a progressive enhancement. It requires JavaScript and WebRTC support in order to function; however, it won't break your site for users whose web browsers do not support those features or have them turned off.
\* Tuber uses a [ServiceWorker](https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorker) to make this possible, so it may not work properly on advanced web applications which already register a ServiceWorker.
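The progressive-enhancement check described above could look roughly like the sketch below. This is a hypothetical illustration, not a real Tuber API; the function name and the idea of passing the global object in (so the logic can be exercised outside a browser) are my assumptions.

```javascript
// Hypothetical sketch: how tuber.js might detect the features it needs
// before activating. If this returns false, tuber.js would do nothing
// and the site keeps working normally.
function tuberSupported(global) {
  // A ServiceWorker is needed to intercept the page's HTTP requests.
  const hasServiceWorker =
    global.navigator && 'serviceWorker' in global.navigator;
  // RTCPeerConnection (with DataChannel) is needed for the P2P transport.
  const hasWebRTC = typeof global.RTCPeerConnection === 'function';
  return Boolean(hasServiceWorker && hasWebRTC);
}
```

In a real browser you would call `tuberSupported(window)` once at startup and bail out silently when it returns false.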
## Use Cases
### 1. NAT Punchthrough for Tunneled Self-hosted Servers
#### aka: "how 2 run home server without router login n without payin' $$$ for VPN"
![diagram showing a greenhouse self-hosting setup optimized with tuber. first, the web browser connects to greenhouse. greenhouse forwards the connection to a self-hosted server in your living room. The self-hosted server is running Tuber, so Tuber handles the request and injects tuber.js into the response body. Next, inside your web browser, tuber.js connects to the tuber server through the original greenhouse connection and establishes a direct WebRTC UDP connection between itself and the tuber server. Finally, elements in the web page like videos and links to large file downloads will have their HTTP requests handled by a ServiceWorker in the web browser. The ServiceWorker will forward the requests to tuber.js, which will in turn fulfill the request directly with the Tuber server through the WebRTC DataChannel. This allows large files like videos to be transferred without incurring bandwidth costs on your greenhouse account, or on any other cloud provider you might use.](readme/greenhouse-reverse-tunnel.jpg)
The original post where I described this idea was published in 2017: https://sequentialread.com/serviceworker-webrtc-the-p2p-web-solution-i-have-been-looking-for
The idea was to utilize a cloud-based service running something like [threshold](https://git.sequentialread.com/forest/threshold) for initial connectivity, but then bootstrap a WebRTC connection and serve the content directly from the self-hosted server, thus sidestepping the bandwidth costs associated with [greenhouse](https://greenhouse.server.garden/) or any other "cloud" service.
This way, one could easily self-host a server with Greenhouse and even serve gigabytes worth of files to hundreds of people without incurring a hefty Greenhouse bill at the end of the month.
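The bootstrap step described above (signaling over the existing tunneled HTTP connection, then switching to WebRTC) could look roughly like this sketch. The `/tuber/signal` endpoint and the `postJson` helper are assumptions for illustration, not a documented Tuber API; `postJson` stands in for `fetch` so the logic is testable anywhere.

```javascript
// Hypothetical sketch of the signaling step: tuber.js sends its SDP offer
// to the Tuber server over the existing tunneled HTTP connection, and the
// server replies with its SDP answer.
async function signalOverTunnel(offerSdp, postJson) {
  const reply = await postJson('/tuber/signal', { type: 'offer', sdp: offerSdp });
  if (reply.type !== 'answer') {
    throw new Error('expected an SDP answer, got: ' + reply.type);
  }
  // In a real browser, this return value would be passed to
  // pc.setRemoteDescription({ type: 'answer', sdp: ... }).
  return reply.sdp;
}
```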
### 2. P2P Live Video Streaming

![diagram showing a peer-to-peer (p2p) video streaming network. The data starts on the streaming PC in your house, and from there, Tuber distributes it to a Tuber Booster running in your friend's house. The two tubers running on home servers distribute the video stream directly to viewers who are on LTE/5G connections, because LTE/5G is incompatible with the WebRTC features we need for P2P distribution. The tubers running on home servers also directly distribute the video to the viewers who have the fastest internet connections. Then the viewers with fast internet connections distribute the video to the viewers who have slower internet connections. Finally, any request for video that cannot be handled in time by a viewer is handled directly by the tubers running on the home servers.](readme/p2p-streaming.jpg)
The [Novage/p2p-media-loader](https://github.com/Novage/p2p-media-loader) and [WebTorrent](https://webtorrent.io) projects got me excited about P2P content distribution, where the origin server may not even need to handle all the content requests from all the web surfers any more.
If a large portion of your website's bandwidth is related to the same file (or set of files), and multiple website visitors are typically accessing that content at the same time, then Tuber should be able to expand your ability to distribute content well beyond your home server's upload speed limit.
Live video streaming is a perfect example of this. Live video is usually delivered to web browsers in chunks via formats like [HLS](https://en.wikipedia.org/wiki/HTTP_Live_Streaming). When you're streaming, every time your streaming PC produces the next chunk of video, each viewer will need a copy of it ASAP. If your stream gets popular, then trying to upload all those video chunks to each viewer individually will saturate your internet connection and eventually your viewers will see the stream start cutting out or go down entirely.
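The saturation problem described above is simple arithmetic: uploading every chunk to every viewer yourself requires your stream's bit rate multiplied by the viewer count. A quick back-of-the-envelope check (function name is illustrative):

```javascript
// Without P2P distribution, the publisher's required upstream bandwidth
// grows linearly with viewer count: bitrate x viewers.
function requiredUploadMbps(streamBitrateMbps, viewerCount) {
  return streamBitrateMbps * viewerCount;
}

// e.g. a 6 Mbps stream with 20 viewers already needs 120 Mbps of upload,
// beyond most residential internet connections.
```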
But with Tuber, each viewer can also help upload video chunks to other viewers, similar to how BitTorrent works. Technically, Tuber's protocol has more in common with scalable distributed streaming à la [Apache Kafka](https://kafka.apache.org), but the end result should be the same: the video chunks propagate outwards from the publisher, and as soon as a viewer gets one small piece of the file, they can start forwarding that small piece to everyone else who needs it. If your viewers' average upload speed is significantly faster than your stream's bit rate, you may never have to pay for any additional servers as your viewer count grows.
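The sustainability claim in the last sentence can be sketched as a quick capacity check: the swarm keeps up as long as the publisher's upload plus the viewers' combined upload covers total demand. The numbers and function name below are illustrative, not part of any Tuber protocol:

```javascript
// Sketch of the capacity argument above, all figures in Mbps.
// Every viewer needs one copy of the stream (demand); the publisher
// plus all re-uploading viewers supply the bandwidth.
function swarmIsSustainable(streamBitrateMbps, publisherUploadMbps, viewerUploadsMbps) {
  const demand = streamBitrateMbps * viewerUploadsMbps.length;
  const supply =
    publisherUploadMbps + viewerUploadsMbps.reduce((sum, mbps) => sum + mbps, 0);
  return supply >= demand;
}
```

For example, a 6 Mbps stream from a 20 Mbps connection can serve four viewers who each re-upload at 8 Mbps, but not five viewers who contribute almost nothing.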
### Notes: reconnecting quickly after a page reload

There is the `IceRestart` constraint, which might help. Apparently you can save the SDP info to local storage, reuse the negotiated SDP, then trigger an ICE restart to quickly reconnect:
> As described in section 3, the nominated ICE candidate pair is exchanged during an SDP offer/answer procedure, which is maintained by the JavaScript. The JavaScript can save the SDP information on the application server or in browser local storage. When a page reload has happened, a new JavaScript will be reloaded, which will create a new PeerConnection and retrieve the saved SDP information, including the previously nominated candidate pair. Then the JavaScript can request the previous resource by sending a setLocalDescription(), which includes the saved SDP information. Instead of restarting an ICE procedure without additional action hints, the new JavaScript SHALL send an updateIce() which indicates that it has happened because of a page reload. If the ICE agent can then allocate the previous resource for the new JavaScript, it will use the previously nominated candidate pair for the first connectivity check, and if it succeeds the ICE agent will keep it marked as selected. The ICE agent can now send media using this candidate pair, even if it is running in Regular Nomination mode.
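The save-and-restore half of that idea is plain persistence, sketched below. The storage key and function names are my assumptions; the storage object is abstracted (it would be `window.localStorage` in a browser) so the logic is testable anywhere.

```javascript
// Hypothetical sketch: persist the negotiated SDP across a page reload
// so the page can attempt a fast ICE restart instead of a full handshake.
const SDP_KEY = 'tuber.savedSdp';

function saveSession(storage, localSdp, remoteSdp) {
  storage.setItem(SDP_KEY, JSON.stringify({ localSdp, remoteSdp }));
}

function loadSession(storage) {
  const raw = storage.getItem(SDP_KEY);
  return raw ? JSON.parse(raw) : null; // null means no saved session
}
```

After a reload, the page would call `loadSession(window.localStorage)`, re-apply the saved descriptions to a fresh PeerConnection, and then trigger an ICE restart rather than redoing the whole negotiation from scratch.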
> **Filip Weiss:** hi! noob question: what is the proper way to reconnect a browser client? like is there a preferred way (like saving the remote descriptor or something) or should i just do the handshake again?
>
> **David Zhao:** @Filip Weiss you'd want to perform an ICE restart from the side that is creating the offer. [Here's docs](https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/restartIce) on initiating this on the browser side, and [here's initiating from pion](https://github.com/pion/webrtc/blob/master/examples/ice-restart/main.go).
>
> **Bang He:** how to set the timeout for waiting `<-gatherComplete`?
>
> **Filip Weiss:** thanks. so is it correct that i have to do signaling again when the browser wants to reconnect?
>
> **Juliusz Chroboczek:** @Bang He You don't need to use GatheringPromise if you're doing Trickle ICE, which is recommended. If you're not doing Trickle ICE, you need to wait until GatheringPromise has triggered and then generate a new SDP. You don't need a timeout, there should be enough timeouts in the Pion code already.
>
> **forest johnson:** @David Zhao re: "want to perform an ICE restart from the side that is creating the offer." What happens if the answering side performs the ICE restart? it won't work? I guess I can just try it :smile:
>
> **David Zhao:** I don't think you can. it needs to be initiated in the offer.
From the abstract of the IETF specification of SDP Offer/Answer procedures for ICE:

> This document describes Session Description Protocol (SDP) Offer/Answer procedures for carrying out Interactive Connectivity Establishment (ICE) between the agents. This document obsoletes RFC 5245.
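The takeaway from the chat above (the peer that created the original offer performs the ICE restart) could look roughly like this on the browser side. `pc` is assumed to be an `RTCPeerConnection` and `sendToPeer` is an assumed signaling helper, not a real API:

```javascript
// Sketch: reconnect by restarting ICE from the offering side.
// pc.restartIce() marks the next createOffer() as an ICE restart.
async function reconnect(pc, sendToPeer) {
  pc.restartIce();
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Re-signal the fresh offer to the answering peer.
  sendToPeer({ type: 'offer', sdp: pc.localDescription.sdp });
}
```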
### Make sure to send the offer/answer before ICE candidates:
> **John Selbie:** In the trickle-ice example, does a race condition exist such that the callback for OnIceCandidate might send a Candidate message back before the initial answer gets sent? Or do the WebRTC stacks in the browser allow peerConnection.addIceCandidate to be invoked before peerConnection.setRemoteDescription and do the right thing?
>
> **Sean DuBois:** @John Selbie Yea, it's a sharp case of the WebRTC API in a multi-threaded environment
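A common workaround for the race discussed above is to buffer incoming candidates on the receiving side until the remote description has been applied, since calling `addIceCandidate` before `setRemoteDescription` is an error in browsers. A minimal sketch (the class name and `addCandidate` callback are illustrative, not any real library's API):

```javascript
// Buffer remote ICE candidates that arrive before the SDP answer/offer
// has been applied, then replay them in order.
class CandidateBuffer {
  constructor() {
    this.pending = [];
    this.ready = false;
  }
  // Call once setRemoteDescription has succeeded; replays buffered
  // candidates in arrival order via the provided addCandidate callback.
  async flush(addCandidate) {
    this.ready = true;
    for (const c of this.pending) await addCandidate(c);
    this.pending = [];
  }
  // Call for every candidate arriving from the signaling channel.
  async push(candidate, addCandidate) {
    if (this.ready) await addCandidate(candidate);
    else this.pending.push(candidate);
  }
}
```

In a real browser, `addCandidate` would be `(c) => pc.addIceCandidate(c)`, and `flush` would be called right after `pc.setRemoteDescription(...)` resolves.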