I ship low-latency services, make code cheaper, and own the tail.
- Roles: systems / backend / performance (Rust, Go, C/C++)
- Strengths: protocol work, observability, P99/P999 control, cost/perf trade-offs
- Scale: millions of requests; deterministic behavior in prod
- Availability: audits, architecture, staff-level IC
- Rust-AI — CPU inference with arena allocators + SIMD. ~3× faster than Python baselines on targeted workloads.
- void-go — minimal web stack with a low-alloc hot path. ≈100k req/s in synthetic tests, stable P99s (pooling sketch below).
- fasthttp/http2 — stream scheduling + perf tuning used in production ecosystems.
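A minimal sketch of the low-alloc hot-path idea behind void-go — the shape of the pattern, not the project's actual code: request-scoped buffers are recycled through a `sync.Pool`, so steady-state request handling allocates almost nothing.

```go
package main

import (
	"bytes"
	"net/http"
	"sync"
)

// bufPool recycles response buffers across requests; Get returns a
// previously used buffer when one is available, New covers cold starts.
var bufPool = sync.Pool{New: func() any { return new(bytes.Buffer) }}

func handler(w http.ResponseWriter, r *http.Request) {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf) // return the buffer for reuse
	buf.Reset()
	buf.WriteString(`{"ok":true}`)
	w.Header().Set("Content-Type", "application/json")
	_, _ = w.Write(buf.Bytes())
}

func main() {
	http.HandleFunc("/", handler)
	_ = http.ListenAndServe(":8080", nil)
}
```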
Deterministic latency > headline throughput. I optimize for tail and jitter.
Rust · Go · C/C++ · epoll/kqueue · io_uring (selective) · SIMD · ring buffers · lock-free (SPSC sketch below)
HTTP/1.1/2 · gRPC · WebSockets · flow control · HPACK/QPACK
PostgreSQL · Redis · MongoDB (used judiciously)
perf/pprof · flamegraphs · bcc/eBPF · tracing · GitHub Actions · Docker/K8s
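To make "ring buffers · lock-free" concrete: a bounded single-producer/single-consumer ring sketched with Go atomics. Illustrative only — a production version would pad head/tail against false sharing and use acquire/release semantics where the platform allows.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// SPSC is a bounded single-producer/single-consumer ring buffer.
type SPSC[T any] struct {
	buf  []T
	mask uint64
	head atomic.Uint64 // next slot the consumer will read
	tail atomic.Uint64 // next slot the producer will write
}

func NewSPSC[T any](capacity uint64) *SPSC[T] {
	if capacity == 0 || capacity&(capacity-1) != 0 {
		panic("capacity must be a power of two")
	}
	return &SPSC[T]{buf: make([]T, capacity), mask: capacity - 1}
}

// Push returns false when the ring is full; the caller decides how to
// apply backpressure (block, shed, or retry).
func (q *SPSC[T]) Push(v T) bool {
	t := q.tail.Load()
	if t-q.head.Load() == uint64(len(q.buf)) {
		return false // full
	}
	q.buf[t&q.mask] = v
	q.tail.Store(t + 1) // publish only after the slot is written
	return true
}

// Pop returns the zero value and false when the ring is empty.
func (q *SPSC[T]) Pop() (T, bool) {
	h := q.head.Load()
	if h == q.tail.Load() {
		var zero T
		return zero, false // empty
	}
	v := q.buf[h&q.mask]
	q.head.Store(h + 1) // free the slot only after the read
	return v, true
}

func main() {
	q := NewSPSC[int](8)
	q.Push(42)
	if v, ok := q.Pop(); ok {
		fmt.Println(v) // 42
	}
}
```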
- Measure first (profiles, traces, budgets — benchmark sketch below)
- Fix the heaviest edge (allocs → copies → locks → syscalls)
- Prove it (A/B, SLO deltas, cost per 10k req)
- Make it boring (observability, blast-radius, rollback)
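What "measure first" looks like in practice — a sketch of the micro-benchmark I'd reach for before touching code. `buildResponse` is a hypothetical hot-path function, not from any of the projects above; the point is the harness, which puts allocs/op (the first "heaviest edge") next to ns/op.

```go
// main_test.go
package main

import (
	"bytes"
	"strconv"
	"testing"
)

// buildResponse is a hypothetical hot-path function used only to give
// the benchmark something to measure.
func buildResponse(buf *bytes.Buffer, body []byte) {
	buf.Reset()
	buf.WriteString("HTTP/1.1 200 OK\r\nContent-Length: ")
	buf.WriteString(strconv.Itoa(len(body)))
	buf.WriteString("\r\n\r\n")
	buf.Write(body)
}

// Run: go test -bench=. -benchmem -cpuprofile=cpu.out
// Then: go tool pprof -http=:8080 cpu.out  (flame graph in the web UI)
func BenchmarkBuildResponse(b *testing.B) {
	var buf bytes.Buffer
	body := []byte(`{"ok":true}`)
	b.ReportAllocs() // report allocs/op alongside ns/op
	for i := 0; i < b.N; i++ {
		buildResponse(&buf, body)
	}
}
```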
- Rust-AI — memory-safe inference, cache-aware data structures
- void-go — zero-allocation hot path, sane defaults
- fasthttp/http2 — correctness & throughput under load
Benchmark notes
Throughput numbers are synthetic (wrk/vegeta) on commodity hardware; I prioritize tail latency and jitter. The real wins came from pooling, scatter/gather I/O, fewer copies, bounded queues, and backpressure (sketched below).
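Since bounded queues and backpressure carry most of the credit above, here's a minimal sketch of the pattern, with hypothetical `Job`/`Worker` names: enqueue fails fast when the buffer is full, so overload surfaces as an explicit signal (e.g. a 429) instead of unbounded queueing that destroys the tail.

```go
package main

import (
	"errors"
	"fmt"
)

var ErrOverloaded = errors.New("queue full: shedding load")

// Job and Worker are hypothetical names for illustration.
type Job struct{ ID int }

type Worker struct{ jobs chan Job }

// NewWorker starts a single consumer behind a bounded queue; depth is
// the explicit queue budget.
func NewWorker(depth int) *Worker {
	w := &Worker{jobs: make(chan Job, depth)}
	go func() {
		for j := range w.jobs {
			_ = j // process j
		}
	}()
	return w
}

// Submit never blocks: a full queue is reported to the caller, which is
// the backpressure signal (map it to 429/retry-after at the edge).
func (w *Worker) Submit(j Job) error {
	select {
	case w.jobs <- j:
		return nil
	default:
		return ErrOverloaded
	}
}

func main() {
	w := NewWorker(2)
	for i := 0; i < 5; i++ {
		if err := w.Submit(Job{ID: i}); err != nil {
			// which jobs are rejected depends on how fast the consumer drains
			fmt.Println("job", i, "->", err)
		}
	}
}
```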
“Make it work. Make it right. Make it fast.”

Email: [email protected]