> Tubers are enlarged structures used as storage organs for nutrients [...] used for regrowth energy during the next season, and as a means of asexual reproduction. — Wikipedia
Tuber is a stand-alone HTTP "middleware" or "reverse-proxy" server designed to enable a P2P CDN (peer-to-peer content delivery network) on top of ANY* existing web server or web application, without requiring modifications to the application itself.
* Tuber uses a ServiceWorker to make this possible, so it may not work properly on advanced web applications which already register a ServiceWorker.
The original post where I described this idea was published in 2017: https://sequentialread.com/serviceworker-webrtc-the-p2p-web-solution-i-have-been-looking-for
The idea was to utilize a cloud-based service running something like threshold for initial connectivity, but then bootstrap a WebRTC connection and serve the content directly from the self-hosted server, thus sidestepping the bandwidth costs associated with Greenhouse or any other "cloud" service.
This way, one could easily self-host a server with Greenhouse and even serve gigabytes worth of files to hundreds of people without incurring a hefty Greenhouse bill at the end of the month.
The Novage/p2p-media-loader and WebTorrent projects got me excited about P2P content distribution, where the origin server may not even need to handle all the content requests from all the web surfers any more.
If a large portion of your website's bandwidth is related to the same file (or set of files), and multiple website visitors are typically accessing that content at the same time, then Tuber should be able to expand your ability to distribute content well beyond your home server's upload speed limit.
Live video streaming is a perfect example of this. Live video is usually delivered to web browsers in chunks via formats like HLS. When you're streaming, every time your streaming PC produces the next chunk of video, each viewer will need a copy of it ASAP. If your stream gets popular, then trying to upload all those video chunks to each viewer individually will saturate your internet connection and eventually your viewers will see the stream start cutting out or go down entirely.
But with Tuber, each viewer can also help upload video chunks to other viewers, similar to how BitTorrent works. Technically, Tuber's protocol has more in common with scalable distributed streaming à la Apache Kafka, but the end result should be the same: the video chunks propagate outwards from the publisher, and as soon as a viewer gets one small piece of the file, they can start forwarding that small piece to everyone else who needs it. If your viewers' average upload speed is significantly faster than your stream's bit rate, you may never have to pay for additional servers as your viewer count grows.
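The fan-out effect described above can be sketched with a toy model (this is an illustration, not Tuber's actual protocol): if, in each round-trip, every peer that already holds a chunk uploads one copy to a peer that still needs it, the number of holders roughly doubles per round, so a chunk reaches N viewers in about log2(N) rounds instead of N sequential uploads from the origin.

```javascript
// Toy model of P2P chunk propagation (hypothetical, for intuition only):
// each round, every peer that already has the chunk forwards it to one
// peer that still needs it, so the holder count doubles per round.
function roundsToReachAll(viewerCount) {
  let holders = 1; // the publisher starts out with the chunk
  let rounds = 0;
  while (holders < viewerCount + 1) {
    holders += holders; // every holder uploads one copy per round
    rounds += 1;
  }
  return rounds;
}
```

With 1000 viewers, this takes only 10 rounds; the origin server uploaded the chunk exactly once.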
Globally Dialable Gateway
Tuber is like "State Communism": a central planner orchestrates and enforces the transfer of data according to:

> "from each according to their means, to each according to their needs."
This allows us to see whether the web page was served via HTTP/2 or not: `window.performance.getEntriesByType('navigation')[0].nextHopProtocol`. Unfortunately, caniuse says it's only at about 70% global support, and it's not in Safari at all, as usual. At least in Firefox, for me, with the Greenhouse Caddy server, this WebSocket is using HTTP/1.1, so it actually makes another socket instead of using the existing one (this is bad for Greenhouse). Ideally, in the future we could use HTTP with Server-Sent Events (SSE) for signalling under HTTP/2, and WebSockets for signalling under HTTP/1.1.
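That decision could be factored into a small helper like the following (a hypothetical sketch, not code from this repo; the function name and the SSE-vs-WebSocket policy are assumptions based on the note above). Under HTTP/2 an SSE stream rides on the existing multiplexed connection, while under HTTP/1.1 a WebSocket avoids tying up one of the browser's few per-host connections with a long-lived response:

```javascript
// Hypothetical helper: choose a signalling transport based on the
// protocol the page itself was loaded over. The input would come from:
//   performance.getEntriesByType('navigation')[0].nextHopProtocol
// which reports 'h2' for HTTP/2, 'h3' for HTTP/3, 'http/1.1' for
// HTTP/1.1, or an empty string when the browser doesn't support it.
function chooseSignallingTransport(nextHopProtocol) {
  if (nextHopProtocol === 'h2' || nextHopProtocol === 'h3') {
    return 'sse'; // multiplexed onto the existing connection
  }
  return 'websocket'; // fall back when HTTP/2 isn't available/detectable
}
```

Falling back to WebSockets on an empty string is the conservative choice, since the absence of `nextHopProtocol` support (e.g. in Safari) tells us nothing about the negotiated protocol.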
There is the `iceRestart` option to `createOffer()` which might help. Apparently you can save the SDP info to local storage, reuse the negotiated SDP, and then trigger an ICE restart to quickly reconnect.
datachannel message size limits