Version: 0.2.1 (workspace)
Status: Alpha / Feature Incomplete
Purpose: Post-quantum secure, peer-to-peer gossip overlay for friend-of-a-friend networks, decentralized website hosting, and privacy-preserving social applications
- Vision and Philosophy
- Core Capabilities
- Architecture Overview
- Identity and Cryptography
- Transport Layer
- Membership and Topology
- Message Dissemination
- Discovery Mechanisms
- Saorsa Sites: Decentralized Website Hosting
- Presence and User Discovery
- Data Synchronization
- Communitas Integration
- Network Topologies and Use Cases
Saorsa Gossip is a post-quantum secure, DNS-free, DHT-free peer-to-peer gossip overlay designed to enable:
- Friend-of-a-Friend (FOAF) networking: Organic network growth based on trust relationships
- Decentralized website publishing: Host and discover websites without central servers
- Privacy-preserving presence: Find people without revealing your social graph
- Local-first data: Own your data, sync peer-to-peer
- NAT traversal: Work anywhere, including behind restrictive firewalls
- Post-quantum security: Future-proof against quantum attacks
- No Central Infrastructure: No DNS, no DHT, no central discovery servers
- Pure P2P: All communication happens directly between peers or through volunteer relays
- PQC-Only: ML-KEM for key exchange, ML-DSA for signatures, ChaCha20-Poly1305 for encryption
- Local-First: Data lives on your device, syncs via CRDTs when online
- Privacy-Preserving: Bounded queries, capability tokens, peer scoring
- Partition-Tolerant: Works in isolated networks, reconnects seamlessly
- Measure, Don't Trust: Capability claims are hints; peers validate reachability and behavior before relying on them
- QUIC-based peer-to-peer connections via `ant-quic`
- Post-quantum handshakes (ML-KEM-768)
- NAT traversal with hole punching
- Automatic connection establishment to bootstrap coordinators
- Path migration for mobile/roaming scenarios
- HyParView partial views for scalable connectivity
- SWIM failure detection with indirect probes
- Self-healing topology via active/passive views
- Peer scoring to maintain mesh quality
- Plumtree eager-push tree with lazy digests
- O(N) message propagation instead of O(N²)
- Anti-entropy reconciliation with IBLT
- Backpressure and rate limiting
- Coordinator Adverts: Public nodes advertise themselves via gossip
- Rendezvous Shards: 65,536 content-addressed shards for global findability
- FOAF Queries: Bounded random-walk over friend graph (TTL=3)
- Peer Cache: Persistent storage of known peers with NAT hints
- Coordinator roles are treated as hints; peers test and score nodes before selecting them
- Content-addressed sites with BLAKE3 merkle trees
- ML-DSA signed manifests
- Efficient block-level sync via IBLT
- Optional private sites with MLS group encryption
- No hosting provider needed
- Encrypted presence beacons scoped to MLS groups
- TTL-based expiration (10-15 minutes)
- Privacy-preserving "find user" queries
- IBLT summaries for efficient membership tests
- Delta-CRDTs (OR-Set, LWW-Register, RGA)
- IBLT reconciliation for large sets
- Anti-entropy repairs on reconnection
- Local-first with eventual consistency
- Bluetooth Mesh bridging for offline scenarios
- Presence beacons and summaries over BLE
- Short messages (≤120 bytes) with FEC
- Gateway nodes translate Mesh ↔ QUIC
```
┌───────────────────────────────────────────────────────────────┐
│                       Application Layer                       │
│        (Communitas, Saorsa Sites Browser, Custom Apps)        │
└───────────────────────────────┬───────────────────────────────┘
                                │
┌───────────────────────────────┴───────────────────────────────┐
│                       Saorsa Gossip API                       │
│  ┌──────────┬──────────┬──────────┬─────────────────────┐     │
│  │ Identity │  PubSub  │ Presence │      CRDT Sync      │     │
│  │  Groups  │  Topics  │ Beacons  │        Sites        │     │
│  └──────────┴──────────┴──────────┴─────────────────────┘     │
└───────────────────────────────┬───────────────────────────────┘
                                │
┌───────────────────────────────┴───────────────────────────────┐
│                  Membership & Dissemination                   │
│  ┌──────────────┬──────────────┬─────────────────────────┐    │
│  │  HyParView   │   Plumtree   │      Peer Scoring       │    │
│  │     SWIM     │   Anti-Ent   │       Mesh Gating       │    │
│  └──────────────┴──────────────┴─────────────────────────┘    │
└───────────────────────────────┬───────────────────────────────┘
                                │
┌───────────────────────────────┴───────────────────────────────┐
│                     Transport (ant-quic)                      │
│  ┌──────────────┬──────────────┬─────────────────────────┐    │
│  │   QUIC P2P   │ NAT Traverse │    Address Discovery    │    │
│  │  3 Streams   │  Hole Punch  │     Path Migration      │    │
│  │  (mship/     │    Relay     │     PQC Handshake       │    │
│  │   pubsub/    │              │                         │    │
│  │   bulk)      │              │                         │    │
│  └──────────────┴──────────────┴─────────────────────────┘    │
└───────────────────────────────┬───────────────────────────────┘
                                │
┌───────────────────────────────┴───────────────────────────────┐
│                   Cryptography (saorsa-pqc)                   │
│   ML-KEM-768   │   ML-DSA-65   │   ChaCha20-Poly1305          │
│   (or -1024)   │   (or SLH)    │       (AEAD)                 │
└───────────────────────────────────────────────────────────────┘
```

The three QUIC streams:

- `mship`: HyParView shuffles, SWIM probes, membership deltas
- `pubsub`: Plumtree EAGER/IHAVE/IWANT control messages
- `bulk`: actual payloads, CRDT deltas, SITE_SYNC blocks
Every peer has a long-term ML-DSA keypair:
- Public Key: Used for signatures, advertised in the network
- PeerId: `BLAKE3(ml_dsa_pubkey)[0..32]` - 32-byte unique identifier
- Alias: Optional human-readable name (app layer, not in protocol)
All cryptography uses saorsa-pqc v0.3.14+:
- Key Exchange: ML-KEM-768 (or ML-KEM-1024 for higher security)
  - Quantum-resistant key encapsulation
  - Used in the QUIC handshake via `ant-quic`
- Signatures: ML-DSA-65 (default) or SLH-DSA (optional)
  - ML-DSA: Fast, FIPS 204 standard
  - SLH-DSA: Hash-based, long-term security, 12 parameter sets available
- Symmetric Encryption: ChaCha20-Poly1305 AEAD
  - 256-bit keys derived via BLAKE3 KDF
  - Used for all group messages, CRDT state, and private sites
No classical crypto: No Ed25519, no X25519, no AES-GCM
- saorsa-mls: Provides group keys for encrypted topics
- Derives per-epoch secrets for presence beacons and CRDT encryption
- Forward secrecy and post-compromise security
ant-quic provides the P2P QUIC transport with:
- QUIC Multiplexing: Multiple streams over single connection
- 0-RTT Resumption: Fast reconnection to known peers
- Path Migration: Seamless transition between networks (WiFi ↔ cellular)
- Connection Migration: Survives IP address changes
Coordinator-Assisted Hole Punching:
1. Client connects to a bootstrap coordinator (public node)
2. Coordinator observes the client's reflexive address (via `OBSERVED_ADDRESS`)
3. Coordinator facilitates candidate exchange between clients behind NAT
4. Clients attempt simultaneous outbound packets (hole punching)
5. QUIC connection migrates to the direct path on success
Relay Fallback:
- If hole punching fails, coordinator may act as relay
- Rate-limited to prevent abuse
- Only used as last resort
- Bootstrap: Public coordinator, accepts incoming connections
- Client: Normal peer, connects to bootstraps, may accept after NAT punch
- Relay: Optional volunteer relay for clients with symmetric NAT
Current Implementation (v0.1.6):
- ✅ Automatic connection establishment to coordinators
- ✅ NAT traversal capability negotiation
- ✅ QUIC connection multiplexing
- ✅ Address observation
- 🚧 Full hole punching (in progress in `ant-quic`)
- 🚧 Relay services (scaffolded in the coordinator binary)
Purpose: Maintain a robust, self-healing overlay without requiring all-to-all connectivity.
- Active View (degree 8-12)
  - Direct QUIC connections
  - Used for message routing
  - Probed frequently via SWIM
- Passive View (degree 64-128)
  - Candidate peers (not connected)
  - Used for recovery when an active peer fails
  - Populated via the shuffle protocol
Shuffle (every 30 seconds):
- Exchange random subsets of passive view
- Discover new peers transitively
- Prevent network partitioning
Promotion:
- When active peer fails, promote from passive
- Establish new QUIC connection
- Update routing state
Neighbor Selection:
- Prioritize low-latency, high-reliability peers
- Use peer scoring metrics
- Maintain diversity (different ASes, geos)
Purpose: Quickly detect and disseminate peer failures.
Status: Fully implemented with configurable fanout, direct and indirect probes, and suspect/dead state transitions.
- K-Random-Peer Probing (every 1 second)
  - Each interval selects `SWIM_PROBE_FANOUT` (K=3) random alive peers
  - Sends `SwimMessage::Ping` to each selected peer
  - Expects `SwimMessage::Ack` within `SWIM_ACK_TIMEOUT_MS` (500 ms)
- Indirect Probes (on direct probe timeout)
  - Selects `SWIM_INDIRECT_PROBE_FANOUT` (K=3) random alive peers as intermediaries
  - Sends `SwimMessage::PingReq { target, requester }` to each intermediary
  - Intermediaries probe the target and forward `SwimMessage::AckResponse { target, requester }` back
  - If any intermediary receives an Ack, the suspect is cleared
- State Transitions
  - Alive -> Suspect (failed direct + indirect probes within ack timeout)
  - Suspect -> Dead (suspect timeout expires, default 3 s)
  - Dead -> removed from the active view; a replacement is promoted from the passive view
  - Any state + Ack/AckResponse -> Alive (clears suspicion immediately)
- Background Timeout Detection
  - A background task checks pending probes every 500 ms
  - Probes that exceed the ack timeout trigger indirect probing or suspect transitions
- Message Dispatch
  - A unified `MembershipProtocolMessage` envelope routes both HyParView and SWIM messages
  - All SWIM messages are serialized via CBOR and sent over the membership stream
| Constant | Value | Description |
|---|---|---|
| `SWIM_PROBE_FANOUT` | 3 | Peers probed per interval |
| `SWIM_INDIRECT_PROBE_FANOUT` | 3 | Intermediaries for indirect probes |
| `SWIM_ACK_TIMEOUT_MS` | 500 | Milliseconds before marking a probe as failed |
| `SWIM_PROBE_INTERVAL_SECS` | 1 | Seconds between probe rounds |
| `SWIM_SUSPECT_TIMEOUT_SECS` | 3 | Seconds before a suspect becomes dead |
- `SwimMessage::Ping` - Direct probe
- `SwimMessage::Ack` - Response to Ping
- `SwimMessage::PingReq { target, requester }` - Indirect probe request
- `SwimMessage::AckResponse { target, requester }` - Forwarded ack from an indirect probe
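The transitions above can be condensed into a small state machine. The following is an illustrative sketch, not the crate's actual types; `PeerState`, `SwimEvent`, and `transition` are names assumed for this example:

```rust
// Sketch of the SWIM failure-detection state machine described above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum PeerState {
    Alive,
    Suspect,
    Dead,
}

#[derive(Debug, Clone, Copy)]
enum SwimEvent {
    AckReceived,    // direct Ack or forwarded AckResponse
    ProbesFailed,   // direct + indirect probes timed out
    SuspectTimeout, // suspect timer (default 3 s) expired
}

fn transition(state: PeerState, event: SwimEvent) -> PeerState {
    use {PeerState::*, SwimEvent::*};
    match (state, event) {
        // Any Ack clears suspicion immediately, from any state.
        (_, AckReceived) => Alive,
        (Alive, ProbesFailed) => Suspect,
        (Suspect, SuspectTimeout) => Dead,
        // Other combinations leave the state unchanged.
        (s, _) => s,
    }
}
```

Dead peers are then removed from the active view and a replacement is promoted from the passive view, as described above.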
Goal: Broadcast messages to all subscribed peers with O(N) overhead instead of flooding's O(NΒ²).
- Eager Push (Tree Links)
  - Maintain a spanning tree per topic
  - Forward messages eagerly along tree links
  - Guarantees delivery as long as the tree is connected
- Lazy Pull (Non-Tree Links)
  - Send an IHAVE digest to non-tree neighbors
  - If a neighbor hasn't seen the message, it sends IWANT
  - The sender replies with the full payload
- Tree Repair
  - If an IHAVE reveals a gap, send IWANT
  - Temporarily graft the sender into the eager tree
  - Prune poor-quality links from the tree
- Anti-Entropy (every 30 seconds)
  - Exchange an IBLT/Bloom filter of recent message IDs
  - Identify and fetch missing messages
  - Repairs partitions and late joins
Metrics:
- Delivery Latency: Time from publish to receive
- IWANT Responsiveness: How quickly peer sends requested messages
- Invalid Messages: Malformed or badly signed messages
- Duplicate Floods: Excessive retransmissions
Actions:
- Prune poor performers from eager tree
- Graft high-quality peers into tree
- Blacklist peers with persistent misbehavior
Traditional P2P systems rely on:
- DNS: For initial coordinator discovery → ❌ single point of failure, censorship
- DHT: For global key-value lookups → ❌ Sybil attacks, poor locality
Saorsa Gossip uses a multi-layered approach:
Persistent local storage of known peers:
```rust
struct CachedPeer {
    peer_id: PeerId,
    addr_hints: Vec<SocketAddr>, // IPv4/IPv6 addresses
    last_success: u64,           // Unix timestamp
    nat_class: NatClass,         // EIM, EDM, Symmetric, Unknown
    roles: CoordinatorRoles,     // Coordinator, Reflector, Relay, Rendezvous
    score: i32,                  // Local quality metric
}
```

On startup:
- Read peer cache from disk (~/.saorsa-gossip/peers.db)
- Sort by score and last_success
- Attempt connection to top 3-5 peers
- If any connect, join overlay via membership protocol
Self-Elected Coordinators:
Any public node can become a coordinator by:
- Generating a signed Coordinator Advert
- Gossiping it on the well-known Coordinator Topic
- Other peers cache it with TTL and score
Advert Format (CBOR):
```
{
  "v": 1,
  "peer": "<PeerId (32 bytes)>",
  "roles": {
    "coordinator": true,  // Accepts bootstrap connections
    "reflector": true,    // Provides address observation
    "rendezvous": false,  // Coordinates rendezvous shards
    "relay": false        // Relays for symmetric NAT peers
  },
  "addr_hints": [
    "203.0.113.42:7000",  // IPv4
    "[2001:db8::1]:7000"  // IPv6
  ],
  "nat_class": "eim",           // Endpoint-Independent Mapping
  "not_before": 1735000000000,  // Unix ms
  "not_after": 1735086400000,   // Unix ms (TTL ~24h)
  "score": 100,                 // Local scoring, not signed
  "sig": "<ML-DSA signature>"
}
```

Caching Strategy:
- LRU cache with max size (e.g., 1000 adverts)
- Evict expired (not_after < now) or low-score entries
- Re-score based on connectivity success rate
Benefits:
- No hardcoded bootstrap IPs
- Censorship-resistant (any node can be coordinator)
- Automatic failover (many coordinators available)
Friend-of-a-Friend Queries:
If peer cache is empty (cold start), ask friends:
Protocol:
1. Send a `FIND_COORDINATOR` query to all known peers
2. Each peer forwards to 3 random friends (fanout=3)
3. TTL=3 hops max (exponential fan-out: 9 → 27 → 81 peers reached)
4. Replies carry Coordinator Adverts
5. Cache received adverts and connect
Privacy:
- Query is signed by requester (rate limit by PeerId)
- No full graph disclosure (bounded fanout & TTL)
- Capability tokens prevent spam
Global Findability Without Directories:
Problem: How do you find content (website, user) without a global index?
Solution: Content-addressed sharding.
- K = 16 bits → 65,536 shards
- Shard ID for target T: `shard(T) = BLAKE3("saorsa-rendezvous" || T) & 0xFFFF`
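As a sketch of the derivation, the following uses a stand-in 64-bit mixer (`mix64`) where the real implementation would BLAKE3-hash `"saorsa-rendezvous" || T` (e.g., via the blake3 crate) before masking to 16 bits:

```rust
// Stand-in for BLAKE3("saorsa-rendezvous" || T); illustrative only.
fn mix64(target: &[u8; 32]) -> u64 {
    let mut h: u64 = 0x53414F_5253415F; // arbitrary seed
    for chunk in target.chunks(8) {
        let mut buf = [0u8; 8];
        buf[..chunk.len()].copy_from_slice(chunk);
        h ^= u64::from_le_bytes(buf);
        h = h.wrapping_mul(0x9E37_79B9_7F4A_7C15); // SplitMix64-style mix
        h ^= h >> 32;
    }
    h
}

/// shard(T) = hash(T) & 0xFFFF → one of 65,536 shards.
fn rendezvous_shard(target: &[u8; 32]) -> u16 {
    (mix64(target) & 0xFFFF) as u16
}
```

The key property is determinism: every peer derives the same shard for the same target, so publishers and seekers meet on the same topic without any directory.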
Publishers of content (e.g., a Saorsa Site) gossip Provider Summaries to their target's shard:
```
{
  "v": 1,
  "target": "<SiteId or PeerId (32 bytes)>",
  "provider": "<PeerId of publisher>",
  "cap": ["SITE", "IDENTITY"],  // What this provider serves
  "have_root": true,            // Has full content vs partial
  "manifest_ver": 42,           // Version number (for sites)
  "summary": {
    "bloom": "<Bloom filter of block CIDs>",
    "iblt": "<IBLT for efficient reconciliation>"
  },
  "exp": 1735100000000,         // Expiration timestamp
  "sig": "<ML-DSA signature>"
}
```

To find content, a seeker:

1. Subscribes to the `SITE_ADVERT:<shard(SID)>` topic
2. Receives Provider Summaries from peers hosting the site
3. Picks top providers by score (latency, bandwidth, reputation)
4. Fetches the manifest and blocks directly via QUIC on the `bulk` stream
Scaling:
- Subscribers only join relevant shards (not all 65k)
- Each shard is a separate Plumtree instance
- Sharding distributes load across overlay
Host and discover websites without DNS, without web hosts. Content is:
- Content-Addressed: BLAKE3 hashes ensure integrity
- Signed: ML-DSA proves authorship
- Versioned: Incremental updates with merkle proofs
- Syncable: IBLT-based block reconciliation
- Private: Optional MLS-encrypted blocks for group-only sites
`SiteId (SID) = BLAKE3(site_signing_pubkey)[0..32]`
- Derived from site owner's ML-DSA public key
- Immutable, self-certifying identifier
- No registration, no centralized namespace
Manifest is the root document, ML-DSA signed:
```
{
  "v": 1,
  "sid": "<SiteId (32 bytes)>",
  "pub": "<Site ML-DSA public key>",
  "version": 42,         // Increment on updates
  "chunk_size": 262144,  // 256 KB blocks
  "root": "<BLAKE3 merkle root of all CIDs>",
  "routes": [
    { "path": "/index.html", "cid": "<BLAKE3(content)>", "mime": "text/html" },
    { "path": "/style.css",  "cid": "<BLAKE3(css)>",     "mime": "text/css" }
  ],
  "assets": [
    { "cid": "<CID>", "len": 12345 },
    { "cid": "<CID>", "len": 67890 }
  ],
  "private": null        // For public sites
}
```

For Private Sites:

```
{
  ...
  "private": {
    "mls_group": "<MLS group ID (32 bytes)>",
    "encrypted_routes": "<ChaCha20-Poly1305(routes)>",
    "key_epoch": 5       // MLS epoch for decryption key
  }
}
```

- Fixed-size chunks (e.g., 256 KB)
- Content-Addressed: `CID = BLAKE3(chunk_data)[0..32]`
- No per-block signatures: Integrity via CID + manifest signature
- Deduplication: Identical blocks across sites share a CID
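A minimal sketch of chunking with CID-keyed deduplication. `DefaultHasher` stands in for BLAKE3 here (the real CID is `BLAKE3(chunk_data)[0..32]`), and the store is a plain in-memory map rather than the actual block store:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

const CHUNK_SIZE: usize = 256 * 1024; // 256 KB blocks

// Stand-in CID: a real implementation hashes the chunk with BLAKE3.
fn cid(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

/// Split `data` into fixed-size chunks and store each under its CID.
/// Identical chunks (within or across sites) are stored only once.
fn store_chunks(data: &[u8], store: &mut HashMap<u64, Vec<u8>>) -> Vec<u64> {
    data.chunks(CHUNK_SIZE)
        .map(|c| {
            let id = cid(c);
            store.entry(id).or_insert_with(|| c.to_vec()); // dedup on insert
            id
        })
        .collect()
}
```

Because the CID is a pure function of the chunk's bytes, re-publishing unchanged content writes nothing new, and two sites embedding the same asset share its blocks automatically.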
Publisher Node:
1. Prepare the site directory (HTML, CSS, JS, images)
2. Chunk large files into 256 KB blocks
3. Hash each block → CID
4. Build the manifest with routes and asset list
5. Sign the manifest with the site ML-DSA key
6. Gossip a Provider Summary to `SITE_ADVERT:<shard(SID)>`
7. Serve the manifest and blocks on demand via the `SITE_SYNC:<SID>` stream
Updates:
1. Increment the `version` number
2. Generate a new manifest (same SID, new merkle root)
3. Gossip an updated Provider Summary
4. Clients detect the version change and re-sync
Browser/Client:
1. Resolve the SID (from a bookmark, link, or search)
2. Subscribe to `SITE_ADVERT:<shard(SID)>`
3. Receive Provider Summaries from multiple publishers
4. Score providers (latency, bandwidth, reputation)
5. Connect to the top provider via QUIC
6. Fetch the manifest via `GET_MANIFEST` on the `SITE_SYNC` stream
7. Verify the ML-DSA signature on the manifest
8. Reconcile the block set:
   - Compare local CIDs (if cached) with the manifest
   - Generate an IBLT of missing blocks
   - Send the IBLT to the provider
   - Provider responds with the delta
9. Fetch missing blocks via `GET_BLOCKS [cid1, cid2, ...]`
10. Verify each block: `CID == BLAKE3(block_data)`
11. Render the site locally
Caching:
- Cache blocks locally by CID
- Cache the manifest with a TTL keyed to `version`
- Opportunistically serve blocks to other clients (become a provider)
Use Case: Company intranet, family photo album, private forum
Encryption:
- MLS Group: Site owner creates an MLS group, adds authorized members
- Exporter Secret: Derive the encryption key from the MLS exporter (`site_aead_key = KDF(exporter, "site-encryption")`)
- Block Encryption: Encrypt each block with ChaCha20-Poly1305
- Manifest: Encrypt sensitive fields (routes, metadata) or leave public (CIDs are opaque)
Access Control:
- Only MLS group members can derive decryption key
- Publisher verifies membership before serving blocks
- Capability tokens can gate SITE_SYNC requests
Problem: How do you know if a friend is online without centralized status servers?
Solution: Periodic presence beacons scoped to MLS groups, cryptographically signed for authenticity.
For each MLS group the user is in:
```
{
  "presence_tag": "<KDF(mls_exporter, user_id || time_slice)>",
  "addr_hints": ["192.0.2.1:9000"],          // Current reflexive addresses
  "since": 1735000000000,                    // When user came online
  "expires": 1735000900000,                  // TTL (15 minutes)
  "seq": 42,                                 // Sequence number for ordering
  "four_words": "ocean-forest-moon-star",    // Optional four-word identity
  "signature": "<ML-DSA-65 signature>",      // Post-quantum signature
  "signer_pubkey": "<ML-DSA-65 public key>"  // Signer's public key
}
```

Cryptographic Security:
- Signatures: All beacons signed with ML-DSA-65 (FIPS 204 post-quantum)
- Verification: Incoming beacons verified on receipt; invalid signatures rejected
- Canonical Serialization: `signable_bytes()` ensures deterministic signing
- Optional Strict Mode: Set `COMMUNITAS_PRESENCE_REQUIRE_SIGNED=1` to reject unsigned beacons
Encryption:
- Encrypted to MLS group with ChaCha20-Poly1305
- Only group members can decrypt
- Gossiped on the group's topic
Timing:
- Beacon every 5-10 minutes while online
- TTL 10-15 minutes (expires if user goes offline)
- `presence_tag` changes every epoch (forward secrecy)
Privacy:
- Only shared groups see presence
- `presence_tag` is pseudonymous (not the raw user_id)
- Address hints reveal IP, but only to group members
Rate Limiting:
- Per-peer token bucket limiting (default: burst of 3, refill 0.2/sec = 12/min)
- Applied to incoming beacons and FOAF queries
- Prevents spam and denial-of-service attacks
- Stale peer entries automatically cleaned up
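The per-peer token bucket can be sketched as follows; the type and clock handling are illustrative, not the crate's actual API, but the parameters match the defaults above (burst of 3, refill 0.2 tokens/sec):

```rust
// Minimal token-bucket rate limiter, one instance per remote peer.
struct TokenBucket {
    tokens: f64,           // currently available tokens
    capacity: f64,         // burst size (default 3)
    refill_per_sec: f64,   // refill rate (default 0.2 = 12/min)
    last_refill_secs: f64, // monotonic clock reading, in seconds
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64, now_secs: f64) -> Self {
        TokenBucket { tokens: capacity, capacity, refill_per_sec, last_refill_secs: now_secs }
    }

    /// Returns true if the incoming beacon or query should be accepted.
    fn try_acquire(&mut self, now_secs: f64) -> bool {
        // Lazily refill based on elapsed time, capped at the burst size.
        let elapsed = (now_secs - self.last_refill_secs).max(0.0);
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill_secs = now_secs;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

A peer that stays silent for long enough regains at most its burst allowance, so a reconnecting friend is not penalized while a flooder is throttled to the steady refill rate.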
Scenario 1: Shared Group
If you share an MLS group with the target user:
- Wait for their presence beacon on that group's topic
- Extract `addr_hints` and connect directly
Scenario 2: No Shared Group (FOAF Query)
If you don't share a group but are socially connected:
1. Send `FIND_USER(target_peer_id)` to all active peers
2. Propagate via FOAF:
   - Each peer checks if it shares a group with the target
   - If yes, reply with encrypted `addr_hints`
   - If no, forward to 3 random friends (fanout=3)
   - TTL=3 hops max
3. Receive replies encrypted to the requester
4. Connect using the received `addr_hints`
Abuse Prevention:
- Per-Peer Rate Limiting: Token bucket (burst of 3, refill 0.2/sec)
  - Applied to all incoming queries and beacons
  - Prevents spam and DoS attacks
  - Automatic cleanup of stale peer entries
- Capability Tokens: Require a signed capability from a mutual friend
- Proximity Check: Only propagate within 2-hop social distance
- Scoring Penalty: Peers who spam queries get a low score
Scenario 3: Public Findability (Rendezvous)
If user opts into global findability:
1. The user gossips a Provider Summary to `rendezvous_shard(peer_id)`
2. Seekers subscribe to that shard
3. Seekers receive the provider info and connect
Privacy Trade-off:
- Rendezvous reveals that user exists and is online
- Does not reveal social graph
- User can disable at any time
Philosophy: Your data lives on your device, not in the cloud.
- Offline-First: Full functionality without network
- Eventual Consistency: Changes propagate when online
- Peer-to-Peer Sync: No central database
- Conflict-Free: CRDTs ensure deterministic merge
OR-Set (Observed-Remove Set):
- Use Case: Group membership, contact lists
- Operations: `add(element)`, `remove(element)`
- Merge: Union of adds; a remove wins only if the add was observed
- Example: Alice and Bob both add Charlie to a group offline, sync later → Charlie is in the group (no conflict)
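A minimal OR-Set sketch showing why concurrent adds merge cleanly and a remove wins only over adds it has observed. The types are illustrative, not the crate's API; each add carries a unique tag supplied by the caller here:

```rust
use std::collections::{HashMap, HashSet};

#[derive(Default, Clone)]
struct OrSet {
    adds: HashMap<String, HashSet<u64>>, // element -> observed add tags
    removed: HashSet<u64>,               // tombstoned tags
}

impl OrSet {
    fn add(&mut self, element: &str, tag: u64) {
        self.adds.entry(element.to_string()).or_default().insert(tag);
    }

    /// Tombstones only the add tags observed locally; a concurrent,
    /// unobserved add survives the remove.
    fn remove(&mut self, element: &str) {
        if let Some(tags) = self.adds.get(element) {
            self.removed.extend(tags.iter().copied());
        }
    }

    fn contains(&self, element: &str) -> bool {
        self.adds
            .get(element)
            .map_or(false, |tags| tags.iter().any(|t| !self.removed.contains(t)))
    }

    /// Merge = union of add tags and union of tombstones (commutative).
    fn merge(&mut self, other: &OrSet) {
        for (el, tags) in &other.adds {
            self.adds.entry(el.clone()).or_default().extend(tags.iter().copied());
        }
        self.removed.extend(other.removed.iter().copied());
    }
}
```

In the Charlie example above, Alice's and Bob's concurrent adds carry distinct tags, so after merging, Charlie is present regardless of the order the replicas sync in.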
LWW-Register (Last-Write-Wins):
- Use Case: Profile fields (name, avatar, bio)
- Operations: `write(value, timestamp)`
- Merge: Keep the value with the highest timestamp
- Example: Alice updates her bio at T=100, Bob's stale update is at T=50 → Alice's wins
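A minimal LWW-Register sketch. The tie-break on equal timestamps (by value ordering) is an assumption made here for determinism; real implementations typically break ties by replica ID:

```rust
#[derive(Debug, Clone, PartialEq)]
struct LwwRegister {
    value: String,
    timestamp: u64,
}

impl LwwRegister {
    /// Accept the write only if it is newer (or wins the tie-break).
    fn write(&mut self, value: &str, timestamp: u64) {
        if timestamp > self.timestamp
            || (timestamp == self.timestamp && value > self.value.as_str())
        {
            self.value = value.to_string();
            self.timestamp = timestamp;
        }
    }

    /// Merging another replica is just another timestamped write,
    /// which makes merge commutative and idempotent.
    fn merge(&mut self, other: &LwwRegister) {
        let (v, t) = (other.value.clone(), other.timestamp);
        self.write(&v, t);
    }
}
```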
RGA (Replicated Growable Array):
- Use Case: Text documents, chat messages
- Operations: `insert(index, char)`, `delete(index)`
- Merge: Convergent ordering based on causal history
- Example: Collaborative text editing without lock-step sync
Efficiency Improvement:
- Instead of sending full state, send deltas (changes since last sync)
- Reduces bandwidth from O(state_size) to O(changes)
- Causal stability tracking via version vectors
Example: Alice adds 10 new contacts. Instead of sending all 500 contacts, send:
```
{
  "delta": {
    "adds": [
      {"id": "peer1", "name": "Charlie", ...},
      {"id": "peer2", "name": "Diana", ...},
      ...
    ]
  },
  "version": {"alice": 42, "bob": 30}
}
```

Problem: After an offline period, you have thousands of updates and your peer has a different set of thousands.
Inefficient: Send all IDs, compare, fetch diffs → O(N) messages
Efficient: Invertible Bloom Lookup Table (IBLT)
How IBLT Works:
- Encode your set of update IDs into fixed-size IBLT (e.g., 512 bytes)
- Send IBLT to peer
- Peer subtracts their IBLT
- Result is symmetric difference (what you have that they don't, vice versa)
- Fetch missing updates
Scaling:
- IBLT size is O(d) where d = difference size
- If d is small (typical case), 512-byte IBLT suffices
- Fallback to full set exchange if IBLT decode fails (too many diffs)
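A toy IBLT illustrating insert, cell-wise subtraction, and peeling. This is a sketch, not the crate's implementation: a production IBLT adds a per-cell hash sum to verify "pure" cells, while this version relies on `count == ±1` alone and, as noted above, a decode failure falls back to a full set exchange:

```rust
#[derive(Clone, Default)]
struct Cell {
    count: i64,
    key_sum: u64, // XOR of keys hashed into this cell
}

struct Iblt {
    cells: Vec<Cell>,
    k: usize, // number of hash functions per key
}

impl Iblt {
    fn new(m: usize, k: usize) -> Self {
        Iblt { cells: vec![Cell::default(); m], k }
    }

    fn indexes(&self, key: u64) -> Vec<usize> {
        (0..self.k as u64)
            .map(|i| {
                // Murmur-style finalizer as a per-seed hash (illustrative).
                let mut x = key ^ i.wrapping_mul(0x9E37_79B9_7F4A_7C15);
                x ^= x >> 33;
                x = x.wrapping_mul(0xFF51_AFD7_ED55_8CCD);
                x ^= x >> 33;
                (x as usize) % self.cells.len()
            })
            .collect()
    }

    fn insert(&mut self, key: u64) {
        for idx in self.indexes(key) {
            self.cells[idx].count += 1;
            self.cells[idx].key_sum ^= key;
        }
    }

    /// Cell-wise subtraction: the result encodes only the symmetric difference.
    fn subtract(&self, other: &Iblt) -> Iblt {
        let mut out = Iblt::new(self.cells.len(), self.k);
        for i in 0..self.cells.len() {
            out.cells[i].count = self.cells[i].count - other.cells[i].count;
            out.cells[i].key_sum = self.cells[i].key_sum ^ other.cells[i].key_sum;
        }
        out
    }

    /// Peel pure cells to recover (ours-only, theirs-only) key sets.
    fn decode(mut self) -> Option<(Vec<u64>, Vec<u64>)> {
        let (mut ours, mut theirs) = (Vec::new(), Vec::new());
        loop {
            let pure = self.cells.iter().position(|c| c.count == 1 || c.count == -1);
            let i = match pure {
                Some(i) => i,
                None => break,
            };
            let key = self.cells[i].key_sum;
            let sign = self.cells[i].count;
            if sign == 1 { ours.push(key) } else { theirs.push(key) }
            for idx in self.indexes(key) {
                self.cells[idx].count -= sign;
                self.cells[idx].key_sum ^= key;
            }
        }
        // Decode succeeds only if every cell is empty afterwards.
        if self.cells.iter().all(|c| c.count == 0 && c.key_sum == 0) {
            Some((ours, theirs))
        } else {
            None // too many differences: fall back to full exchange
        }
    }
}
```

Note that the structure's size depends only on the number of cells, not on the set sizes, which is why a fixed-size summary suffices when the difference is small.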
Periodic Background Sync:
- Every 30-60 seconds, pick random active peer
- Exchange IBLT of recent updates
- Fetch missing deltas
- Repairs network partitions, late joins, dropped messages
Communitas is a privacy-preserving social networking app built on Saorsa Gossip. Here's how it leverages each capability:
```
┌────────────────────────────────────────────────────────────┐
│                   Communitas App (Tauri)                   │
│                                                            │
│  ┌─────────────┐   ┌──────────────┐   ┌────────────────┐   │
│  │  React UI   │   │  Chat View   │   │  Profile View  │   │
│  │ (TypeScript)│   │              │   │                │   │
│  └──────┬──────┘   └──────┬───────┘   └───────┬────────┘   │
│         │                 │                   │            │
│  ┌──────┴─────────────────┴───────────────────┴─────────┐  │
│  │           Communitas Core (Rust Backend)             │  │
│  │  ┌────────────┐   ┌────────────┐   ┌──────────────┐  │  │
│  │  │   Groups   │   │  Contacts  │   │   Profile    │  │  │
│  │  │  Manager   │   │    CRDT    │   │    Store     │  │  │
│  │  └─────┬──────┘   └─────┬──────┘   └──────┬───────┘  │  │
│  └────────┼────────────────┼─────────────────┼──────────┘  │
└───────────┼────────────────┼─────────────────┼─────────────┘
            │                │                 │
┌───────────┼────────────────┼─────────────────┼─────────────┐
│                   Saorsa Gossip Library                    │
│  ┌──────────┐   ┌──────────┐   ┌──────────┐  ┌──────────┐  │
│  │ Identity │   │  PubSub  │   │ Presence │  │   Sync   │  │
│  │  (MLS)   │   │(Plumtree)│   │ (Beacons)│  │ (CRDTs)  │  │
│  └──────────┘   └──────────┘   └──────────┘  └──────────┘  │
└────────────────────────────────────────────────────────────┘
```
Technology Stack:
- MLS Groups: End-to-end encrypted group membership
- Plumtree: Efficient message broadcast to all group members
- CRDTs: Message ordering (RGA), reactions (OR-Set)
User Flow:
1. Create Group: Alice creates "Family Chat"
   - Communitas generates an MLS group with Alice as owner
   - Publishes a group advert to `rendezvous_shard(group_id)`
2. Invite Members: Alice invites Bob and Charlie
   - Sends MLS Welcome messages via a private topic
   - Bob and Charlie accept and join the MLS group
   - All future messages are encrypted with the group key
3. Send Message: Alice types "Hello family!"
   - Communitas encrypts the message with ChaCha20-Poly1305 (key from MLS)
   - Publishes to the group's Plumtree topic
   - Bob and Charlie receive via eager-push
   - CRDT ensures causal ordering (RGA)
4. Offline/Online: Bob is offline when Alice sends the message
   - The message propagates to Charlie
   - When Bob reconnects, anti-entropy fetches the missed messages
   - CRDT merge resolves any conflicts
Implementation Details:
```rust
// Communitas Group Manager (sketch; error handling assumes anyhow::Result)
struct GroupManager {
    gossip: Arc<GossipClient>,
    mls_groups: HashMap<GroupId, MlsGroup>,
    message_store: CrdtMessageLog,
}

impl GroupManager {
    async fn send_message(&self, group_id: &GroupId, text: &str) -> Result<()> {
        // 1. Get the MLS group and derive the epoch key
        let mls_group = self
            .mls_groups
            .get(group_id)
            .ok_or_else(|| anyhow::anyhow!("unknown group"))?;
        let epoch_key = mls_group.exporter_secret("message-encryption")?;

        // 2. Create the message CRDT op
        let msg_id = MessageId::new();
        let timestamp = SystemTime::now();
        let crdt_op = RgaOp::Insert {
            id: msg_id,
            pos: self.message_store.len(),
            content: text.to_string(),
            timestamp,
        };

        // 3. Encrypt with ChaCha20-Poly1305
        let plaintext = postcard::to_stdvec(&crdt_op)?;
        let ciphertext = chacha20poly1305::encrypt(&epoch_key, &plaintext)?;

        // 4. Sign with the user's ML-DSA key
        let signature = self.gossip.identity().sign(&ciphertext)?;

        // 5. Publish to the group topic via Plumtree
        let group_topic = TopicId::from_group(group_id);
        self.gossip.publish(group_topic, Bytes::from(ciphertext)).await?;

        // 6. Apply locally
        self.message_store.apply(crdt_op);
        Ok(())
    }
}
```

Scenario: Alice wants to add Bob as a friend, but they've never connected before.
Solution: FOAF + Rendezvous
User Flow:
1. Search: Alice searches for "Bob Smith"
   - Communitas generates the query `FIND_USER(bob_peer_id)`
   - Sends it to Alice's active peers (friends)
2. FOAF Propagation:
   - Alice's friend Charlie knows Bob (shares a group)
   - Charlie replies with Bob's addr_hints (encrypted to Alice)
   - If Charlie doesn't know Bob, he forwards the query to his friends (TTL=3)
3. Connection:
   - Alice receives Bob's addr_hints
   - Communitas establishes a QUIC connection
   - Alice sends a friend request (MLS Add proposal)
   - Bob accepts; they are now in each other's contact CRDT
4. Ongoing Presence:
   - Both subscribe to a shared "Friends" MLS group
   - Exchange presence beacons (online/offline status)
   - Can message directly or in the group
Implementation:
async fn find_user(&self, target_alias: &str) -> Result<Vec<PeerInfo>> {
// 1. Query local contact CRDT
if let Some(peer) = self.contacts.get_by_alias(target_alias) {
return Ok(vec![peer]);
}
// 2. FOAF query
let query = FindUserQuery {
target: alias_to_peer_id(target_alias)?,
requester: self.gossip.identity().peer_id(),
capability: self.generate_capability_token()?,
ttl: 3,
fanout: 3,
};
// 3. Send to active peers
let results = self.gossip.foaf_query(query).await?;
// 4. If no results, try rendezvous
if results.is_empty() {
let shard_id = rendezvous_shard(&target);
self.gossip.subscribe(TopicId::from_shard(shard_id)).await?;
// Wait for Provider Summaries...
}
Ok(results)
}Scenario: Alice's family wants to share vacation photos privately.
User Flow:
1. Create Album: Alice creates a "Summer 2025" album
   - Communitas packages the photos into a Saorsa Site
   - Generates a site manifest with image routes
   - Derives `SID = BLAKE3(alice_site_key)`
2. Make Private: Alice restricts the album to the family MLS group
   - Encrypts each photo block with the group key (from the MLS exporter)
   - Publishes a Provider Summary to `SITE_ADVERT:<shard(SID)>`
   - Includes the `private.mls_group` field in the manifest
3. Share Link: Alice sends the site link to the family group
   - `saorsa://site/<SID>` (custom URI scheme)
   - Group members click the link
4. View Album: Bob (a family member) clicks the link
   - Communitas subscribes to `SITE_ADVERT:<shard(SID)>`
   - Receives a Provider Summary from Alice's node
   - Fetches the manifest via the `SITE_SYNC` stream
   - Derives the decryption key from the shared MLS group
   - Decrypts and displays the photos
5. Sync Updates: Alice adds new photos
   - Increments the site `version`
   - Gossips an updated Provider Summary
   - Bob's Communitas detects the version change
   - Fetches only the new blocks (IBLT reconciliation)
Implementation:
```rust
struct AlbumManager {
    gossip: Arc<GossipClient>,
    sites: SaorsaSiteClient,
    mls_groups: HashMap<GroupId, MlsGroup>,
}

impl AlbumManager {
    async fn create_private_album(
        &self,
        name: &str,
        photos: Vec<PathBuf>,
        group_id: &GroupId,
    ) -> Result<SiteId> {
        // 1. Get the MLS group key
        let mls_group = self
            .mls_groups
            .get(group_id)
            .ok_or_else(|| anyhow::anyhow!("unknown group"))?;
        let encryption_key = mls_group.exporter_secret("site-encryption")?;

        // 2. Build the site
        let mut site_builder = SiteBuilder::new();
        site_builder.set_title(name);
        for photo_path in photos {
            let content = fs::read(&photo_path).await?;
            let encrypted = chacha20poly1305::encrypt(&encryption_key, &content)?;
            let file_name = photo_path
                .file_name()
                .and_then(|n| n.to_str())
                .ok_or_else(|| anyhow::anyhow!("invalid file name"))?;
            site_builder.add_file(file_name, encrypted, "image/jpeg");
        }

        // 3. Add MLS group metadata
        site_builder.set_private_group(*group_id);

        // 4. Sign and publish
        let site = site_builder.build()?;
        let sid = site.id();
        let manifest_version = site.version();
        self.sites.publish(site).await?;

        // 5. Gossip a Provider Summary
        let summary = ProviderSummary {
            target: sid,
            provider: self.gossip.identity().peer_id(),
            capabilities: vec![Capability::Site],
            manifest_version,
            // ...
        };
        self.gossip.publish_to_shard(rendezvous_shard(&sid), summary).await?;

        Ok(sid)
    }
}
```

Public Profile:
- Each user can publish a public Saorsa Site for their profile
- `SID = BLAKE3(user_ml_dsa_pubkey)` (tied to identity)
- Contains: avatar, bio, public posts, links
- Discoverable via rendezvous shard
- Updatable by incrementing version
Private Details:
- Contact details (phone, email) stored in encrypted CRDT
- Shared only with friends (MLS group)
- Synced via delta-CRDT anti-entropy
Local Timeline:
- Each user maintains an RGA CRDT of their posts
- Posts encrypted to "All Friends" MLS group
- Gossiped via Plumtree to all friends
Aggregated Feed:
- Communitas subscribes to all friends' post topics
- CRDT merge creates unified timeline
- Local scoring/filtering (user preferences)
Reactions:
- OR-Set CRDT for likes/reactions
- Add/remove operations, conflict-free merge
- No central like counter
Scenario: Alice is on a plane (no network) for 6 hours.
Experience:
1. Compose Messages: Alice writes messages to the family group
   - Stored locally in the CRDT message log
   - UI shows a "pending sync" indicator
2. Edit Profile: Alice updates her bio
   - LWW-Register applied locally
   - Timestamped for eventual conflict resolution
3. View Cached Content: Alice browses photos in the shared album
   - All previously fetched blocks are cached locally
   - Full functionality for cached sites
4. Reconnect: The plane lands and Alice gets WiFi
   - Communitas reconnects to the Saorsa Gossip overlay
   - Anti-entropy kicks in:
     - Sends an IBLT of local message IDs
     - Receives an IBLT of missed messages
     - Fetches diffs, applies CRDT merges
   - All pending messages are broadcast
   - UI updates with the merged state
No data loss, deterministic merges, seamless sync.
Use Case: Small community, family, close friends
Topology:
- All peers maintain active connections to all others
- No coordinators needed (everyone is reachable)
- Plumtree degenerates to simple broadcast (efficient for small N)
Advantages:
- Lowest latency (direct peer-to-peer)
- Maximum reliability (no single point of failure)
- Simple membership (OR-Set of all peers)
Example: Family of 12 sharing photos and messages
Use Case: Extended social network, alumni group, local community
Topology:
- HyParView partial views (active=10, passive=100)
- Plumtree spanning trees per topic
- 1-2 public coordinators for cold starts
- Rendezvous shards for content discovery
Advantages:
- Scales to thousands without all-to-all connections
- Social clustering (friends of friends closer in topology)
- Efficient broadcast via tree (not flood)
Example: University alumni network with shared groups, event announcements, job board
Use Case: Large-scale social platform, city-wide mesh, global activism network
Topology:
- Multiple coordinator tiers (regional, global)
- Rendezvous sharding for all content
- SWIM + HyParView for scalable membership
- Bluetooth bridges for offline regions
Advantages:
- No single point of control
- Censorship-resistant (anyone can run coordinator)
- Partition-tolerant (communities can operate offline, resync later)
Example: Decentralized Twitter-like platform
- Users publish to their `<peer_id>` shard
- Followers subscribe to that shard
- Rendezvous ensures global findability
- No central server can ban users
Use Case: Company intranet, university campus, conference venue
Topology:
- LAN gossip beacons (UDP multicast) for zero-config joins
- Local coordinator on LAN for fast bootstrap
- Optional gateway to public Saorsa network
- Bluetooth fallback for auditoriums/basements
Advantages:
- LAN-only mode (no internet required)
- High bandwidth for large file sharing (site sync)
- Private by default, opt-in to public network
Example: Conference app
- Attendees auto-discover on venue WiFi
- Shared schedule, speaker sites, Q&A topics
- Works in basement conference halls (Bluetooth)
- Optionally sync with remote attendees (QUIC)
Saorsa Gossip provides a complete stack for building decentralized, privacy-preserving, post-quantum secure applications:
- ✅ Transport: QUIC with NAT traversal, PQC handshakes
- ✅ Membership: Scalable, self-healing topology (HyParView + SWIM)
- ✅ Dissemination: Efficient broadcast (Plumtree + anti-entropy)
- ✅ Discovery: No DNS/DHT (Coordinator adverts, Rendezvous, FOAF)
- ✅ Content: Decentralized websites (Saorsa Sites)
- ✅ Presence: Privacy-preserving user discovery
- ✅ Sync: Local-first CRDTs with IBLT reconciliation
- ✅ Security: ML-KEM, ML-DSA, MLS, ChaCha20-Poly1305
Communitas demonstrates the full potential by providing:
- End-to-end encrypted group messaging
- Friend-of-a-friend social networking
- Decentralized photo/file sharing
- Offline-first with seamless sync
- No central servers, no surveillance
The future is peer-to-peer, privacy-preserving, and quantum-safe. Saorsa Gossip makes it real.
- SPEC2.md: Original protocol specification (now superseded by this design doc)
- ant-quic: https://github.com/maidsafe/ant-quic
- saorsa-pqc: https://github.com/saorsalabs/saorsa-pqc
- MLS RFC 9420: https://datatracker.ietf.org/doc/rfc9420/
- Plumtree Paper: https://asc.di.fct.unl.pt/~jleitao/pdf/srds07-leitao.pdf
- HyParView Paper: http://asc.di.fct.unl.pt/~jleitao/pdf/dsn07-leitao.pdf
- SWIM Paper: https://www.cs.cornell.edu/projects/Quicksilver/public_pdfs/SWIM.pdf