Bifrost — Private Peer-to-Peer File Sharing
Your files stay on your machine, always.
The problem
Sharing files over the internet today means uploading them to someone else's server. Google Drive, Dropbox, WeTransfer — they all work, but they all rely on the same trade: you give a third party a copy of your data, and you trust them to keep it safe, private, and available.
That trade isn't always acceptable. Sensitive client work, personal media, large datasets, files you'd rather not have a copy of sitting in a cloud bucket — for these, the existing tools are the wrong shape.
Bifrost is what I built when I got tired of working around that. Files never leave the sender's machine. The recipient browses and downloads directly. When the sender closes the app, the files are gone from the network. There is no central storage, no subscription, no reason for the data to ever exist anywhere except on the two devices that are talking to each other.
What it actually does
```
Your PC ── encrypted P2P tunnel ── Friend's app / browser
 (host)                                 (guest)
```
Files never touch a server.
The host runs a small Rust daemon, picks a folder to share, and grants a peer (identified by a cryptographic key) Read or ReadWrite access. The peer connects from any of the supported clients — desktop, mobile, browser — sees the folder, browses its contents, downloads what they want. Transfers resume across restarts. Bandwidth is whatever the underlying internet connection supports. Nothing is uploaded to a server first.
Architecture
Three layers, each with one job:
```
┌──────────────── CLIENT LAYER ─────────────────────────────┐
│ Tauri Desktop       Flutter Mobile      Browser Extension │
│ (Windows)           (Android)           (Chrome / Edge)   │
└───────────────────────┬───────────────────────────────────┘
                        │ localhost HTTP + WebSocket
┌───────────────── RUST CORE DAEMON ────────────────────────┐
│ libp2p networking · file engine · permission engine       │
│ HTTP API on localhost:7777 · SQLite local state           │
└───────────────────────┬───────────────────────────────────┘
                        │ P2P (direct or relay-mediated)
┌───────────────── RELAY INFRASTRUCTURE ────────────────────┐
│ Forward encrypted bytes only — never sees file content    │
└───────────────────────────────────────────────────────────┘
```
Every client — desktop, mobile, browser extension — is a thin shell over the same Rust core. The desktop app spawns the daemon in-process via Tauri. The mobile app links the same Rust crate as a static library through C FFI. The extension talks to the daemon over the localhost API. One brain, three faces.
The networking stack
The core is built on libp2p. It provides a lot for free — the interesting decisions were in how to compose the pieces:
- Transports: TCP for direct connections, WebSocket and WSS for traversing networks that block raw TCP. WSS specifically is what lets the home-server relay work behind a residential router with no port forwarding — Cloudflare Tunnel exposes the WSS endpoint and routes it back through the tunnel.
- Kademlia DHT for peer discovery. Every peer joins one shared DHT, so two people on different relay servers can still find each other.
- mDNS for zero-config LAN discovery — peers on the same network find each other instantly without touching a relay.
- Circuit Relay v2 as the NAT-traversal fallback. The relay only forwards encrypted bytes; it never sees what's in the connection.
- DCUTR (Direct Connection Upgrade through Relay). After the initial relay-mediated connection, both peers attempt a coordinated hole-punch to upgrade to a direct TCP path. When it works, the relay drops out of the data path entirely.
- Noise protocol authenticates and encrypts every connection end-to-end via the Noise XX handshake. Even when traffic flows through a relay, the relay sees ciphertext only.
Most home users sit behind some flavor of NAT. The combination above means Bifrost works for almost everyone: direct when possible, relayed when not, and the user experience is the same either way.
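To make the composition concrete, here is a hypothetical config sketch showing how the transports, discovery mechanisms, and NAT-traversal layers stack. The field names and multiaddr strings are illustrative, not Bifrost's actual schema:

```toml
# Hypothetical daemon config — field names are illustrative, not Bifrost's real schema.
[transports]
tcp = true          # direct connections when NAT allows
websocket = true    # plain WS for networks that block raw TCP
wss = true          # TLS WebSocket; what Cloudflare Tunnel exposes for the home relay

[discovery]
kademlia = true     # one shared DHT across all relays
mdns = true         # zero-config LAN discovery

[nat]
relay_v2 = true     # circuit-relay fallback (forwards ciphertext only)
dcutr = true        # hole-punch upgrade after the relayed connection

[bootstrap]
# Illustrative multiaddr formatting for the two relays described below.
relays = [
  "/ip4/20.80.80.104/tcp/9001",
  "/dns4/projectplatypus.site/tcp/443/wss",
]
```

Every layer here is additive: disabling one degrades reachability for some network topologies without breaking the others.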
The file engine
Files are served as content-addressed chunks — each ~256 KB, hashed with SHA-256, compressed per-chunk with zstd. Three properties fall out of this design:
- Resumable transfers are free. If a transfer drops, the receiver knows exactly which chunks it already has and asks only for the rest.
- Deduplication is automatic. Two files that share content share chunks on the wire and on disk.
- Parallel downloads are easy. The receiver fans out chunk requests across whatever bandwidth is available.
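The chunk scheme and the resume logic that falls out of it can be sketched in a few lines. This is a dependency-free illustration, not the real engine: std's `DefaultHasher` stands in for SHA-256, and per-chunk zstd compression is omitted.

```rust
// Sketch of the content-addressed chunk scheme (256 KB fixed chunks).
// DefaultHasher is a stand-in for SHA-256 so the example needs no crates.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

const CHUNK_SIZE: usize = 256 * 1024;

/// Split a byte stream into fixed-size chunks, addressing each by its hash.
fn chunk_ids(data: &[u8]) -> Vec<u64> {
    data.chunks(CHUNK_SIZE)
        .map(|c| {
            let mut h = DefaultHasher::new();
            c.hash(&mut h);
            h.finish()
        })
        .collect()
}

/// Resume logic: given the sender's manifest and the chunks the receiver
/// already holds, request only what's missing.
fn missing_chunks(manifest: &[u64], have: &HashSet<u64>) -> Vec<u64> {
    manifest.iter().copied().filter(|id| !have.contains(id)).collect()
}

fn main() {
    // A file whose first and third chunks are identical.
    let mut file = vec![7u8; CHUNK_SIZE];
    file.extend(vec![8u8; CHUNK_SIZE]);
    file.extend(vec![7u8; CHUNK_SIZE]);

    let manifest = chunk_ids(&file);
    assert_eq!(manifest.len(), 3);
    assert_eq!(manifest[0], manifest[2]); // dedup: same content, same id

    // Interrupted transfer: receiver already has the first chunk, which
    // also covers the identical third chunk — only chunk 1 is re-requested.
    let have: HashSet<u64> = HashSet::from([manifest[0]]);
    assert_eq!(missing_chunks(&manifest, &have), vec![manifest[1]]);
}
```

Because chunk identity is derived from content alone, resume and dedup need no extra bookkeeping protocol — both peers can compute the same answers independently.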
A manifest (LRU-cached, 256 entries) maps the original folder structure to its chunk graph. The receiver sees the folder layout immediately and decides what to actually download.
Identity, permissions, and the relay user directory
Identity is a local Ed25519 keypair generated on first launch. The peer ID is derived from the public key via libp2p multihash encoding. Nothing about identity ever requires a third-party signup.
For social features (find your friend by username, register a handle, etc.), the relay also runs a small SQLite-backed user directory:
- `POST /api/v1/users/register` — peers self-assert a username
- `GET /api/v1/users/lookup?username=…` — friend resolution
- `GET /api/v1/users/search?q=…` — prefix search
- `GET /api/v1/presence/:peer_id` — live online state from the in-memory connected set
Default display names are mythological: a deterministic `{Adjective} {Creature} {Location}` triple derived from `sha256(peer_id)` (1 million unique combinations), with a 40-variant generated SVG avatar. Real personality, zero account creation.
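A minimal sketch of the deterministic-name idea. The word lists here are invented for illustration (the real ones are 100 entries each, hence 100³ = 1,000,000 combinations), and std's `DefaultHasher` stands in for SHA-256:

```rust
// Deterministic display-name sketch. Word lists and hash are stand-ins;
// 100-entry lists with SHA-256 give the real 1M combination space.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const ADJECTIVES: &[&str] = &["Gilded", "Thundering", "Silent", "Emerald"];
const CREATURES: &[&str] = &["Kraken", "Sphinx", "Valkyrie", "Basilisk"];
const LOCATIONS: &[&str] = &["of Asgard", "of Delphi", "of Avalon", "of Ys"];

fn display_name(peer_id: &str) -> String {
    let mut h = DefaultHasher::new();
    peer_id.hash(&mut h);
    let d = h.finish();
    // Carve three independent indices out of one digest, so the same
    // peer_id always maps to the same triple — no registration needed.
    let a = (d & 0xff) as usize % ADJECTIVES.len();
    let c = ((d >> 8) & 0xff) as usize % CREATURES.len();
    let l = ((d >> 16) & 0xff) as usize % LOCATIONS.len();
    format!("{} {} {}", ADJECTIVES[a], CREATURES[c], LOCATIONS[l])
}

fn main() {
    // Deterministic: every peer who hashes this id derives the same name.
    assert_eq!(display_name("12D3KooWExample"), display_name("12D3KooWExample"));
    println!("{}", display_name("12D3KooWExample"));
}
```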
Folder access is gated by a per-folder peer whitelist with Read or ReadWrite levels. Default is deny-all. Unlisted peers get a permission-denied response with no folder metadata leaked.
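The deny-all gate is simple enough to sketch directly. Types and names below are illustrative, not the daemon's actual API:

```rust
// Sketch of the per-folder whitelist with default-deny semantics.
use std::collections::HashMap;

// Declaration order gives Read < ReadWrite under the derived ordering.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum Access { Read, ReadWrite }

/// Per-folder whitelist: peer id -> granted level.
struct FolderAcl {
    grants: HashMap<String, Access>,
}

impl FolderAcl {
    /// Default deny: a peer absent from the whitelist gets nothing —
    /// the caller returns permission-denied without leaking metadata.
    fn check(&self, peer: &str, wanted: Access) -> bool {
        match self.grants.get(peer) {
            Some(granted) => *granted >= wanted,
            None => false,
        }
    }
}

fn main() {
    let acl = FolderAcl {
        grants: HashMap::from([("alice".to_string(), Access::Read)]),
    };
    assert!(acl.check("alice", Access::Read));
    assert!(!acl.check("alice", Access::ReadWrite)); // Read does not imply write
    assert!(!acl.check("mallory", Access::Read));    // unlisted peer: deny-all
}
```

The important property is the `None => false` arm: absence of a grant is indistinguishable from an explicit denial, so unlisted peers learn nothing about the folder.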
Relay infrastructure
Two relays are live in production:
- Azure (US): `20.80.80.104:9001` — fresh Ubuntu VM, libp2p relay binary, stable peer ID persisted across restarts.
- Dell home server (LAN + WSS through Cloudflare Tunnel): a relay box on my home network whose WSS endpoint is exposed globally via `projectplatypus.site` — no port forwarding, no static IP needed.
Both run the same `bifrost-relay` binary. The daemon dials all configured relays at startup and reconnects any that drop on a 30-second ticker. Bootstrap config is hot-reloadable: `POST /api/v1/config/bootstrap` updates the relay list live, no daemon restart required.
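The per-tick decision is just set difference — given the configured bootstrap list and the currently connected set, redial whatever is missing. A sketch with illustrative names (not the daemon's actual API):

```rust
// Sketch of the 30-second relay-maintenance tick's core decision.
use std::collections::HashSet;

/// Relays that are configured but not currently connected get redialed.
fn relays_to_redial<'a>(
    configured: &'a [String],
    connected: &HashSet<String>,
) -> Vec<&'a String> {
    configured.iter().filter(|r| !connected.contains(*r)).collect()
}

fn main() {
    // Illustrative multiaddr formatting for the two production relays.
    let configured = vec![
        "/ip4/20.80.80.104/tcp/9001".to_string(),
        "/dns4/projectplatypus.site/tcp/443/wss".to_string(),
    ];
    // Suppose the home relay dropped: only it should be redialed.
    let connected: HashSet<String> = HashSet::from([configured[0].clone()]);
    assert_eq!(relays_to_redial(&configured, &connected), vec![&configured[1]]);
    // Hot reload just swaps `configured` for the next tick — no restart,
    // because the loop re-reads the list every time it fires.
}
```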
Multi-platform delivery from one core
The same Rust crate compiles to three deliverables:
- Tauri desktop (Windows) — the daemon runs in-process; the React/TypeScript UI talks to it via Tauri commands.
- Flutter mobile (Android) — the daemon links as a static library via `bifrost-ffi`; Flutter calls into it through Dart FFI bindings.
- Browser extension (Chrome / Edge) — the popup UI talks to the locally-running daemon over its HTTP/WebSocket API.
This was one of the load-bearing architectural choices early on. Maintaining three implementations of the networking and file logic would have been a death sentence. Maintaining three thin UIs on top of one Rust core is genuinely tractable.
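The glue that makes the mobile path work is a C-compatible surface on the Rust crate. The function name and signature below are invented for illustration — they are not `bifrost-ffi`'s actual exports — but the `#[no_mangle] extern "C"` shape is the mechanism:

```rust
// Sketch of a C FFI export a mobile shell could bind to via Dart FFI.
// The name and signature are illustrative, not bifrost-ffi's real API.
use std::os::raw::c_int;

/// Start the daemon on the given localhost port; returns 0 on success,
/// -1 on an invalid port. `#[no_mangle] extern "C"` keeps the symbol
/// un-mangled and C-callable from the static library.
#[no_mangle]
pub extern "C" fn bifrost_start(port: c_int) -> c_int {
    if port <= 0 || port > 65535 {
        return -1;
    }
    // A real implementation would spawn the daemon's async runtime here.
    0
}

fn main() {
    assert_eq!(bifrost_start(7777), 0);
    assert_eq!(bifrost_start(-1), -1);
}
```

Keeping the FFI surface to flat C types (ints, byte buffers, C strings) is what lets the same crate serve Tauri in-process, Dart over FFI, and the HTTP API without three divergent bindings.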
Where it stands
| Phase | Status |
|---|---|
| Core daemon — P2P networking, file serving, resumable transfers, CLI | ✅ Done |
| Desktop app — Tauri, system tray, folder UI, peer management | ✅ Done |
| Browser extension — popup, peer browser, file access | ✅ Done |
| Mobile — Flutter / Android, Rust FFI, transfers on device | ✅ Done |
| Infrastructure — relay network, monitoring, public beta | 🔧 In progress |
| Advanced — versioning, FUSE mount, video streaming, spaces | 📋 Planned |
There are 57 automated tests (unit, integration, two-peer transfer, relay-circuit), all green at the moment.
What I've learned building this
- libp2p is a remarkable piece of engineering, and a remarkable amount of work to use correctly. Every primitive — the DHT, the relay, DCUTR, AutoNAT, the transport stack — works beautifully on its own. Composing them into a stable real-world networking stack involves a lot of careful sequencing and configuration that the documentation only gestures at.
- NAT traversal is mostly a coordination problem, not a punching problem. The hole-punch itself is cheap; getting both peers to attempt it at the right moment with the right addresses is the hard part. DCUTR handles this well, but tracing through a failed punch attempt is humbling.
- A Rust core + thin platform shells beats parallel implementations every time. The first week of mobile work cost more than the next month combined, because everything I'd already built on desktop just kept working.
- Operational reality is its own discipline. Running a relay server on a home network behind a Cloudflare Tunnel — and watching it actually work, and stay working — taught me more about distributed systems in two months than years of reading taught me.
What's next
- EU and APAC relay nodes so geographically-spread peers don't pay a cross-Atlantic round-trip on every NAT-traversal attempt.
- Friends list synced via the relay (currently designed, not yet wired).
- At-rest file encryption on the host side.
- Versioning and FUSE-mounted spaces — letting a peer's shared folder appear as a real filesystem on the receiver's machine.
The README, full feature matrix, NAT-traversal walkthrough, and architecture deep-dive live in the project's `docs/` directory.