Function templates for building DSPM connectors. Each template provides the runtime scaffolding — HTTP server, job-mode runner, OpenTelemetry instrumentation, Redis-based stop/pause/resume signals, and batched data ingestion — so connector authors only need to implement their scanning logic.
| Template | Language | Purpose |
|---|---|---|
| `netwrix-python` | Python 3.12 | External source and IAM connectors |
| `netwrix-internal-python` | Python 3.12 | Internal common platform functions |
| `netwrix-csharp` | C# / .NET 8 | External source and IAM connectors |
| `netwrix-internal-csharp` | C# / .NET 8 | Internal common platform functions |
- Docker (for building container images)
- .NET 8 SDK (for C# templates)
- Python 3.12 + uv (for Python templates)
Connector repositories reference these templates in their `stack.yml`. The templates are pulled automatically at build time.
```yaml
functions:
  my-connector:
    lang: netwrix-python
    handler: ./functions/my-connector
    image: my-connector:latest
```

- Use `netwrix-python` or `netwrix-csharp` for connectors that scan external data sources and ingest data into ClickHouse.
- Use `netwrix-internal-python` or `netwrix-internal-csharp` for internal platform functions that do not scan external sources.
All templates support two execution modes controlled by the EXECUTION_MODE environment variable:
- HTTP mode (default): starts a long-running HTTP server (Flask/Waitress for Python, ASP.NET Core for C#).
- Job mode (`EXECUTION_MODE=job`): runs the handler once and exits. Used for Kubernetes Jobs invoked by the connector-api.
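The mode switch can be sketched in a few lines; the function name below is illustrative, not the templates' actual entrypoint code:

```python
import os

def resolve_execution_mode() -> str:
    """Mirror the templates' mode selection: EXECUTION_MODE=job runs the
    handler once and exits, while any other value (or unset) starts the
    long-running HTTP server. Illustrative sketch only."""
    return "job" if os.environ.get("EXECUTION_MODE") == "job" else "http"
```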
The netwrix-python and netwrix-csharp templates include a StateManager that monitors Redis Streams for control signals (STOP, PAUSE, RESUME) sent by the Core API during a running scan.
See docs/STOP_PAUSE_RESUME_GUIDE.md for full implementation guidance.
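Because the real StateManager reads control signals from Redis Streams, the sketch below substitutes an in-memory queue so the typical scan-loop pattern is runnable; all class and function names here are hypothetical, not the template's API:

```python
import time
from collections import deque

class FakeStateManager:
    """In-memory stand-in for the Redis-backed StateManager: real
    connectors receive STOP/PAUSE/RESUME over Redis Streams, but here
    one pending signal is consumed per poll() call."""
    def __init__(self, signals):
        self._signals = deque(signals)
        self.paused = False
        self.stopped = False

    def poll(self):
        if self._signals:
            signal = self._signals.popleft()
            if signal == "STOP":
                self.stopped = True
            elif signal == "PAUSE":
                self.paused = True
            elif signal == "RESUME":
                self.paused = False

def scan(objects, state):
    """Typical scan loop: check for STOP before each unit of work,
    and idle while paused."""
    processed = []
    for obj in objects:
        state.poll()
        while state.paused and not state.stopped:
            time.sleep(0.01)  # real code would back off and re-poll Redis
            state.poll()
        if state.stopped:
            break
        processed.append(obj)
    return processed
```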
Both the netwrix-csharp and netwrix-python templates include a BatchManager that buffers scanned objects in memory and flushes them to the data-ingestion service in batches (flush threshold: 500 KB). In C#, BatchManager instances are created per table via context.GetTable("table_name"). In Python, context.save_object(table, obj) creates a per-table BatchManager internally.
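A rough sketch of that batching behavior, assuming a simple size-threshold flush; the class name, constructor signature, and flush callback are invented for illustration and are not the framework's API:

```python
import json

class MiniBatchManager:
    """Illustrative per-table batch buffer: objects accumulate in memory
    and flush once the serialized payload reaches a byte threshold
    (500 KB in the real templates; tiny values work for demos)."""
    def __init__(self, table, flush_fn, threshold_bytes=500 * 1024):
        self.table = table
        self._flush_fn = flush_fn   # e.g. a POST to the data-ingestion service
        self._threshold = threshold_bytes
        self._buffer = []
        self._size = 0

    def add(self, obj):
        self._buffer.append(obj)
        self._size += len(json.dumps(obj).encode())
        if self._size >= self._threshold:
            self.flush()

    def flush(self):
        # Send whatever is buffered, then reset the buffer and size counter.
        if self._buffer:
            self._flush_fn(self.table, self._buffer)
            self._buffer, self._size = [], 0
```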
All templates export distributed traces, metrics, and logs to an OTLP-compatible collector. Configure the endpoint via OTEL_EXPORTER_OTLP_ENDPOINT (default: http://otel-collector.access-analyzer.svc.cluster.local:4318). Set OTEL_ENABLED=false to disable.
Secrets are loaded from files mounted at /var/secrets/{name}. Access them via context.Secrets["name"] (C#) or context.secrets["name"] (Python).
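A minimal sketch of that file-per-secret convention, assuming each mounted file's name is the secret's key; `load_secrets` is a hypothetical helper, not part of the template API:

```python
from pathlib import Path

def load_secrets(base_dir="/var/secrets"):
    """Read each file under base_dir into a dict keyed by file name,
    mirroring how the templates expose mounted secrets via
    context.secrets (Python) / context.Secrets (C#)."""
    root = Path(base_dir)
    if not root.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in root.iterdir() if p.is_file()}
```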
Python templates do not require a separate build step — dependencies are installed at container build time via uv sync in the Dockerfile.
Build the Python connector image with Docker:

```shell
docker build -t my-connector:latest -f template/netwrix-python/Dockerfile .
```

For the C# templates, build with the .NET SDK:

```shell
cd template/netwrix-csharp
dotnet build ConnectorFramework/ConnectorFramework.csproj
```

Or build the container image directly:

```shell
docker build -t my-connector:latest -f template/netwrix-csharp/Dockerfile .
```

Lint and format the Python templates with ruff:

```shell
cd template/netwrix-python
uv sync
uv run ruff check .
uv run ruff format .
```

Restore and build the C# templates:

```shell
cd template/netwrix-csharp
dotnet restore ConnectorFramework/ConnectorFramework.csproj
dotnet build
```

Run the Python test suite:

```shell
cd template/netwrix-python
uv run pytest
```

The CI pipeline runs `ruff check` and `ruff format --check` on every push/PR to main (see .github/workflows/ruff.yml).

Run the C# tests:

```shell
cd template/netwrix-csharp
dotnet test ConnectorFramework.Tests/ConnectorFramework.Tests.csproj
```

Connector containers are built as multi-stage Docker images and distributed via the Keygen OCI registry (oci.pkg.keygen.sh). Connector repositories reference these templates in their `stack.yml`, and images are built and pushed by CI/CD pipelines. Set `EXECUTION_MODE=job` for Kubernetes Job deployments, or leave it unset for long-running HTTP server mode.
See CONTRIBUTING.md.