Building Retonet and Meshworks - paying users for spare bandwidth
A residential proxy network with 3,000 globally connected nodes, a 3-tier self-sorting protocol, and an ethical take on a shady industry.
In 2017 I was working at a fintech that scraped a lot of the public internet - LinkedIn, Play Store, App Store, Glassdoor. At any
serious scale, that means proxies. And the more I dug into the proxy ecosystem to solve our own problem, the more I noticed something
unsettling.
The highest-quality proxies weren’t datacenter IPs. They were residential - real user devices routing someone else’s traffic through their home connection. And a lot of them were getting there through “free” VPN apps that quietly signed users up as exit nodes in the background. The user had no idea they were sharing their bandwidth.
That was the gap I wanted to build in.
Retonet - return on internet
The premise was simple: you have spare mobile data every month. You paid for it. If you’re fine with someone else using it, you should be able to sell it. No hiding behind a free VPN, no dark patterns - an app that told you exactly what it was doing and paid you a share of what your bandwidth earned.
Retonet was the user-facing side. Install it, opt in, and your device became a node in a residential proxy network. Every byte that
went through you translated into revenue share.
Meshworks - the client side
Meshworks was the other half. Companies that needed high-quality residential proxies would come in through Meshworks, submit their
requests, and the backend would route them through the Retonet network. The bandwidth bill got split with the users whose devices
actually served the traffic.
The economics only work if the network itself is reliable. Residential nodes are inherently unreliable - phones go into tunnels, Wi-Fi drops, apps get backgrounded. A proxy network built on mobile devices can’t promise uptime on any individual node.
So I built the reliability into the routing, not the nodes.
The 3-tier self-sorting protocol
On the backend, I kept every node connected via a persistent socket. The server always knew who was online and who wasn’t. But a node being online when a request arrived didn’t mean it would still be online 800ms later when the response needed to come back.
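Here’s roughly what that layer looks like - a minimal sketch in TypeScript, assuming a WebSocket transport. The names and the `ws` usage are illustrative, not the actual Retonet code:

```ts
import { WebSocketServer, WebSocket } from "ws";
import { randomUUID } from "node:crypto";

type Tier = "low" | "med" | "high";

interface Node {
  id: string;
  socket: WebSocket;
  tier: Tier;
}

// Every currently-connected node, keyed by id. Because each node
// holds a persistent socket, membership in this map *is* liveness.
const nodes = new Map<string, Node>();

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  const id = randomUUID();
  // Every node starts in the low tier and earns its way up.
  nodes.set(id, { id, socket, tier: "low" });
  // The moment the socket closes, the node leaves the pool -
  // no separate health checker needed for basic membership.
  socket.on("close", () => nodes.delete(id));
});
```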
The trick was to stop betting on single nodes.
Every incoming request was fanned out to 3 nodes in parallel. Whichever one responded fastest won - its response went back to the client, and the other two were discarded. That alone solved the mid-request drop problem: all three nodes would have to fail simultaneously for the request to fail. Even if each node independently dropped, say, 5% of requests mid-flight, three simultaneous failures work out to 0.05³ - roughly one request in eight thousand - and with ~3,000 connected nodes to draw from, in practice it basically never happened.
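A sketch of the racing step, continuing the types above. `Promise.any` happens to have exactly the right semantics here - it fulfills with the first attempt that succeeds and only rejects if all of them fail. `send` is a stand-in for the real socket round-trip, which I’m not reproducing:

```ts
// Fan one request out to three candidates; first response back wins.
async function race<Req, Res>(
  candidates: Node[],
  request: Req,
  send: (node: Node, req: Req, signal: AbortSignal) => Promise<Res>,
): Promise<{ winner: Node; losers: Node[]; response: Res }> {
  const controller = new AbortController();

  const attempts = candidates.map((node) =>
    send(node, request, controller.signal).then((response) => ({ node, response })),
  );

  // First fulfilled attempt wins; rejections (a node dropping
  // mid-request) are absorbed unless all three attempts fail.
  const { node: winner, response } = await Promise.any(attempts);
  controller.abort(); // discard the losing in-flight attempts

  return { winner, losers: candidates.filter((n) => n !== winner), response };
}
```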
Then I layered a tiering system on top. Three tiers - low, med, high. Every node started in low. After each request:
- The fastest of the 3 got promoted one tier
- The other two got demoted one tier
Over time this self-sorted. Reliable nodes with good connections floated up to the high tier. Flaky nodes sank to low. Clients
needing premium routing could request high-tier only and pay more; clients doing bulk scraping could take low-tier at a fraction of
the cost. The network sorted itself without me having to score nodes manually.
request -> 3 nodes (parallel) -> fastest wins, +1 tier -> losers, -1 tier
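In code, the whole loop is small. Again a sketch building on the registry above - the promote/demote rule is straight from the description, while the random picker within a tier is my assumption about selection:

```ts
const TIERS: Tier[] = ["low", "med", "high"];

// Winner climbs one tier, losers drop one, clamped to the ladder's ends.
function updateTiers(winner: Node, losers: Node[]): void {
  winner.tier = TIERS[Math.min(TIERS.indexOf(winner.tier) + 1, TIERS.length - 1)];
  for (const loser of losers) {
    loser.tier = TIERS[Math.max(TIERS.indexOf(loser.tier) - 1, 0)];
  }
}

// Pick three candidates at the tier the client is paying for.
// Naive shuffle - fine for a sketch, not how you'd sample at scale.
function pickCandidates(tier: Tier, count = 3): Node[] {
  const pool = [...nodes.values()].filter((n) => n.tier === tier);
  return pool.sort(() => Math.random() - 0.5).slice(0, count);
}
```

The asymmetry does the work: a genuinely fast node wins most of its races, so its promotions outnumber its demotions and it drifts upward, while a flaky node does the opposite and sinks.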
Distribution
Getting to 3,000 nodes as a solo side project was the hard part. The Retonet app alone wasn’t going to cut it.
So I also shipped an SDK. Any app developer could drop it into their own app, give their users the opt-in, and suddenly that app had a new revenue stream on top of ads. The incentive alignment was clean - developer gets paid, user gets paid, Meshworks has more supply.
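For a sense of the integration surface, here’s a hypothetical shape such an SDK might expose - these names are my illustration, not the shipped API:

```ts
// Hypothetical SDK surface - illustrative names, not the shipped API.
interface RetonetSdk {
  // No traffic flows until the user has explicitly agreed.
  requestOptIn(explanation: string): Promise<boolean>;
  optOut(): void;
  // Earnings attributed to this install, for display in the host app.
  earningsToDate(): Promise<{ bytesServed: number; usdEarned: number }>;
}

// How a host app might wire it up.
async function enableSharing(sdk: RetonetSdk): Promise<void> {
  const agreed = await sdk.requestOptIn(
    "Sell your unused bandwidth and keep a share of what it earns.",
  );
  if (agreed) {
    const { usdEarned } = await sdk.earningsToDate();
    console.log(`Earned so far: $${usdEarned.toFixed(2)}`);
  }
}
```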
At the POC peak I had around 3,000 nodes consistently connected across the globe. Requests were being served through the 3-tier
system in real time. The unit economics worked on paper.
Why I didn’t push it further
This stayed a POC. Running a residential proxy business - even the ethical version - puts you adjacent to an industry with a lot of sharp edges: compliance, KYC on the clients requesting the proxies, liability for what gets routed through user devices. Doing it right would have needed a team and a legal posture I didn’t have at the time.
What stayed with me was the protocol design. The 3-tier self-sorting pattern is the part I still think about. You can’t make
individual nodes reliable in an unreliable substrate - phones, home routers, anything consumer - but you can make the network
reliable by racing them and letting the winners bubble up. The same idea shows up anywhere you’re coordinating unreliable workers:
distributed compute, edge inference, crowdsourced data collection.
Reliability doesn’t have to live inside the node. Sometimes it’s cheaper to build it in the routing layer and let the nodes be as
flaky as they want to be.