diff --git a/k8s-manifests/README.md b/k8s-manifests/README.md
new file mode 100644
index 00000000..bf16418c
--- /dev/null
+++ b/k8s-manifests/README.md
@@ -0,0 +1,258 @@
+# Storedog Kubernetes Deployment
+
+This directory contains all the Kubernetes manifests for deploying the Storedog application.
+
+## Directory Structure
+
+The manifests are split into logical groups and subdirectories as follows:
+
+```
+k8s-manifests/
+├── cluster-setup/
+│   ├── ingress-controller/
+│   ├── provisioner/
+│   └── storage/
+├── datadog/
+└── storedog-app/
+    ├── configmaps/
+    ├── secrets/
+    ├── deployments/
+    ├── statefulsets/
+    └── ingress/
+```
+
+- **`cluster-setup/`**: Manifests for cluster-wide components (storage, provisioner, ingress controller).
+- **`datadog/`**: Datadog Agent manifest for observability.
+- **`storedog-app/`**: All manifests for the Storedog application, organized by resource type (configmaps, secrets, deployments, statefulsets, ingress).
+
+## Cluster Prerequisites
+
+This deployment requires two cluster-level components to function on a non-cloud or local Kubernetes setup: a storage provisioner and an ingress controller. The manifests for both are included in the `cluster-setup/` directory.
+
+### Storage
+
+A storage provisioner is required for the PostgreSQL and Redis `StatefulSet`s. This repository includes manifests for the **Rancher Local Path Provisioner** and a default `StorageClass` that uses it.
+
+### Ingress
+
+An ingress controller is required to expose the application on standard HTTP/S ports. This repository includes the manifest for the standard **NGINX Ingress Controller**, configured to use the host node's network.
+
+## Using a Local Registry
+
+For a standard Kubernetes cluster, you'll need to set up a local registry that your cluster can access:
+
+> [!NOTE]
+> The registry configuration in step 2 is only required on worker nodes because they are the ones that pull and run containers.
+
+1. Start a local Docker registry:
+
+```bash
+docker run -d -p 5000:5000 --restart=always --name registry registry:2
+```
+
+2. Configure worker nodes to trust the insecure registry:
+
+- On each WORKER node only (not needed on the control plane), add the following to `/etc/docker/daemon.json`:
+
+```json
+{
+  "insecure-registries": ["localhost:5000"]
+}
+```
+
+- Restart Docker on each WORKER node:
+
+```bash
+sudo systemctl restart docker
+```
+
+3. Build and push **ALL** images to the local registry:
+
+```bash
+REGISTRY_URL=localhost:5000
+find ./services -name Dockerfile | while read dockerfile; do
+  context_dir=$(dirname "$dockerfile")
+  image_name=$(echo "$context_dir" | sed 's|^\./services/||; s|/|-|g')
+  full_tag="$REGISTRY_URL/storedog-$image_name:latest"
+  echo "Building $full_tag from $context_dir"
+  docker build -t "$full_tag" "$context_dir" && docker push "$full_tag"
+done
+```
+
+4. While testing, you may want to rebuild a single service. Export `REGISTRY_URL` so you don't have to set it for every command:
+
+```bash
+export REGISTRY_URL=localhost:5000
+```
+
+> [!IMPORTANT]
+> Build and push containers to the local registry from the worker node.
+
+```bash
+docker build -t $REGISTRY_URL/storedog-backend:latest ./services/backend && docker push $REGISTRY_URL/storedog-backend:latest
+```
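+
+To confirm that the images landed, you can list them through the registry's catalog endpoint (part of the standard Docker Registry HTTP API v2) — a quick sanity check:
+
+```bash
+curl http://localhost:5000/v2/_catalog
+```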
+
+## Prerequisites
+
+Before deploying, ensure you have the following tools installed:
+
+- **kubectl** (v1.20+ recommended): For interacting with your Kubernetes cluster.
+- **helm** (v3+): For installing the Datadog Operator.
+- **docker**: For building and pushing container images.
+- **envsubst**: For substituting environment variables in manifest files.
+
+You should also have access to a running Kubernetes cluster (local or cloud) and sufficient permissions to create namespaces, deployments, and cluster-wide resources.
+
+## Environment Variables Reference
+
+The deployment process uses several environment variables to template image locations, tags, and configuration. Below is a summary:
+
+| Variable | Description | Example |
+|-------------------------------|---------------------------------------------|---------------------------------|
+| `REGISTRY_URL` | Container registry base URL | `localhost:5000` |
+| `SD_TAG` | Storedog image tag/version | `latest` |
+| `DD_VERSION_ADS` | Version tag for ads service | `1.0.0` |
+| `DD_VERSION_BACKEND` | Version tag for backend & worker services | `1.0.0` |
+| `DD_VERSION_DISCOUNTS` | Version tag for discounts service | `1.0.0` |
+| `DD_VERSION_NGINX` | Version tag for nginx | `1.0.0` |
+| `NEXT_PUBLIC_DD_SERVICE_FRONTEND` | RUM service name for frontend | `store-frontend` |
+| `NEXT_PUBLIC_DD_VERSION_FRONTEND` | Version tag for frontend service | `1.0.0` |
+| `DD_ENV` | Environment name (e.g., development, prod) | `development` |
+| `DD_API_KEY` | Datadog API key (for secret creation) | `` |
+| `DD_APP_KEY` | Datadog App key (for secret creation) | `` |
+
+Set these variables in your shell before running the deployment commands. See the deployment steps below for usage examples.
+
+## Deployment Steps
+
+The Storedog manifest files use two variables to set the container registry URL and the version tag. The defaults are the localhost registry and `latest`. Set these environment variables accordingly when using a different registry location and tag version.
+
+Default values (development):
+
+```bash
+export REGISTRY_URL=localhost:5000
+export SD_TAG=latest
+```
+
+Example values for hosted containers:
+
+```bash
+export REGISTRY_URL="ghcr.io/datadog/storedog"
+export SD_TAG=1.4.0
+```
+
+### Set default environment variables for Storedog
+
+```bash
+export DD_VERSION_ADS=1.0.0
+export DD_VERSION_BACKEND=1.0.0
+export DD_VERSION_DISCOUNTS=1.0.0
+export DD_VERSION_NGINX=1.0.0
+export NEXT_PUBLIC_DD_SERVICE_FRONTEND=store-frontend
+export NEXT_PUBLIC_DD_VERSION_FRONTEND=1.0.0
+export DD_ENV=development
+```
+
+### Deploy the Datadog Operator
+
+1. Install the Datadog Operator with Helm:
+
+```bash
+helm repo add datadog https://helm.datadoghq.com
+helm repo update
+helm install my-datadog-operator datadog/datadog-operator
+```
+
+2. Create a Kubernetes secret with your Datadog API and app keys:
+
+```bash
+kubectl create secret generic datadog-secret --from-literal api-key=$DD_API_KEY --from-literal app-key=$DD_APP_KEY
+```
+
+3. Apply the Datadog Agent definition:
+
+```bash
+kubectl apply -f k8s-manifests/datadog/datadog-agent.yaml
+```
+
+### Deploy Cluster Setup and Storedog
+
+The storedog-app definition files contain variables that need to be set before applying them to the cluster. The commands below use `envsubst` to substitute the variable values into the manifests before piping the rendered output to `kubectl`.
+
+1. **Deploy Cluster Components (one-time setup per cluster):**
+
+This single command installs the storage provisioner and the ingress controller.
+
+```bash
+kubectl apply -R -f k8s-manifests/cluster-setup/
+```
+
+2. **Deploy the Storedog Application:**
+
+This command creates a `storedog` namespace and deploys all application components into it.
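+
+Before running it, you can render a single manifest and do a client-side dry run to verify the substitution (a quick sanity check; any manifest path works here):
+
+```bash
+envsubst < k8s-manifests/storedog-app/deployments/ads.yaml | kubectl apply --dry-run=client -f -
+```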
+
+```bash
+kubectl create namespace storedog
+for file in k8s-manifests/storedog-app/**/*.yaml; do envsubst < "$file" | kubectl apply -n storedog -f -; done
+```
+
+3. **Apply manifest changes to one service:**
+
+While testing, you might change a single manifest file. Rather than reapplying everything, apply just that file:
+
+```bash
+envsubst < k8s-manifests/storedog-app/deployments/backend.yaml | kubectl apply -n storedog -f -
+```
+
+4. **To reset the Storedog deployment:**
+
+You only need to delete the application's namespace. The cluster components can remain installed.
+
+```bash
+kubectl delete namespace storedog
+```
+
+5. **To restart one service:**
+
+After rebuilding a container image, it's faster to restart only the service you need.
+
+```bash
+kubectl rollout restart deployment backend -n storedog
+```
+
+## Troubleshooting
+
+- Check pod status in the namespace:
+
+```bash
+kubectl get pods -n storedog
+```
+
+- Check the logs of a specific pod:
+
+```bash
+kubectl logs <pod-name> -n storedog
+```
+
+- Check service status:
+
+```bash
+kubectl get services -n storedog
+```
+
+- Check ingress status:
+
+```bash
+kubectl get ingress -n storedog
+```
+
+- Check Persistent Volume Claims:
+
+```bash
+kubectl get pvc -n storedog
+```
+
+*The status should be `Bound`.*
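+
+- Inspect a pod's recent events, such as scheduling failures or image pull errors (replace the pod name with one from `kubectl get pods`):
+
+```bash
+kubectl describe pod <pod-name> -n storedog
+```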
+
+Check the logs for cluster components (if issues persist):
+
+```bash
+# Storage Provisioner Logs
+kubectl logs -n local-path-storage -l app=local-path-provisioner
+
+# Ingress Controller Logs
+kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
+```
diff --git a/k8s-manifests/cluster-setup/ingress-controller/nginx-ingress.yaml b/k8s-manifests/cluster-setup/ingress-controller/nginx-ingress.yaml
new file mode 100644
index 00000000..76c97be8
--- /dev/null
+++ b/k8s-manifests/cluster-setup/ingress-controller/nginx-ingress.yaml
@@ -0,0 +1,685 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  labels:
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+  name: ingress-nginx
+---
+apiVersion: v1
+automountServiceAccountToken: true
+kind: ServiceAccount
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx
+  namespace: ingress-nginx
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission
+  namespace: ingress-nginx
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx
+  namespace: ingress-nginx
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - namespaces
+  verbs:
+  - get
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  - pods
+  - secrets
+  - endpoints
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - networking.k8s.io
+  resources:
+  - ingresses
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - networking.k8s.io
+  resources:
+  - ingresses/status
+  verbs:
+  - update
+- apiGroups:
+  - networking.k8s.io
+  resources:
+  - ingressclasses
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - coordination.k8s.io
+  resourceNames:
+  - ingress-nginx-leader
+  resources:
+  - leases
+  verbs:
+  - get
+  - update
+- apiGroups:
+  - coordination.k8s.io
+  resources:
+  - leases
+  verbs:
+  - create
+- apiGroups:
+  - ""
+  resources:
+  - events
+  verbs:
+  - create
+  - patch
+- apiGroups:
+  - discovery.k8s.io
+  resources:
+  - endpointslices
+  verbs:
+  - list
+  - watch
+  - get
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission
+  namespace: ingress-nginx
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - secrets
+  verbs:
+  - get
+  - create
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  labels:
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  - endpoints
+  - nodes
+  - pods
+  - secrets
+  - namespaces
+  verbs:
+  - list
+  - watch
+- apiGroups:
+  - coordination.k8s.io
+  resources:
+  - leases
+  verbs:
+  - list
+  - watch
+- apiGroups:
+  - ""
+  resources:
+  - nodes
+  verbs:
+  - get
+- apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - networking.k8s.io
+  resources:
+  - ingresses
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - ""
+  resources:
+  - events
+  verbs:
+  - create
+  - patch
+- apiGroups:
+  - networking.k8s.io
+  resources:
+  - ingresses/status
+  verbs:
+  - update
+- apiGroups:
+  - networking.k8s.io
+  resources:
+  - ingressclasses
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - discovery.k8s.io
+  resources:
+  - endpointslices
+  verbs:
+  - list
+  - watch
+  - get
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission
+rules:
+- apiGroups:
+  - admissionregistration.k8s.io
+  resources:
+  - validatingwebhookconfigurations
+  verbs:
+  - get
+  - update
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx
+  namespace: ingress-nginx
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: ingress-nginx
+subjects:
+- kind: ServiceAccount
+  name: ingress-nginx
+  namespace: ingress-nginx
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission
+  namespace: ingress-nginx
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: ingress-nginx-admission
+subjects:
+- kind: ServiceAccount
+  name: ingress-nginx-admission
+  namespace: ingress-nginx
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  labels:
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: ingress-nginx
+subjects:
+- kind: ServiceAccount
+  name: ingress-nginx
+  namespace: ingress-nginx
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: ingress-nginx-admission
+subjects:
+- kind: ServiceAccount
+  name: ingress-nginx-admission
+  namespace: ingress-nginx
+---
+apiVersion: v1
+data:
+  allow-snippet-annotations: "false"
+  http-snippet: |
+    server {
+      listen 18080;
+
+      location /nginx_status {
+        allow all;
+        stub_status on;
+      }
+
+      location / {
+        return 404;
+      }
+    }
+  nginx-status-ipv4-whitelist: "192.168.0.0/16"
+kind: ConfigMap
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-controller
+  namespace: ingress-nginx
+---
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-controller
+  namespace: ingress-nginx
+spec:
+  ipFamilies:
+  - IPv4
+  ipFamilyPolicy: SingleStack
+  ports:
+  - appProtocol: http
+    name: http
+    port: 80
+    protocol: TCP
+    targetPort: http
+  - appProtocol: https
+    name: https
+    port: 443
+    protocol: TCP
+    targetPort: https
+  - name: metrics
+    port: 10254
+    protocol: TCP
+    targetPort: metrics
+  selector:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+  type: ClusterIP
+---
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-controller-admission
+  namespace: ingress-nginx
+spec:
+  ports:
+  - appProtocol: https
+    name: https-webhook
+    port: 443
+    targetPort: webhook
+  selector:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+  type: ClusterIP
+---
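+# The controller Deployment below uses hostNetwork, so it binds ports 80 and 443
+# directly on the node it runs on (see "Ingress" in the README).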
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-controller
+  namespace: ingress-nginx
+spec:
+  minReadySeconds: 0
+  revisionHistoryLimit: 10
+  selector:
+    matchLabels:
+      app.kubernetes.io/component: controller
+      app.kubernetes.io/instance: ingress-nginx
+      app.kubernetes.io/name: ingress-nginx
+  strategy:
+    rollingUpdate:
+      maxUnavailable: 1
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/component: controller
+        app.kubernetes.io/instance: ingress-nginx
+        app.kubernetes.io/name: ingress-nginx
+        app.kubernetes.io/part-of: ingress-nginx
+        app.kubernetes.io/version: 1.10.1
+    spec:
+      hostNetwork: true
+      dnsPolicy: ClusterFirstWithHostNet
+      containers:
+      - args:
+        - /nginx-ingress-controller
+        - --election-id=ingress-nginx-leader
+        - --controller-class=k8s.io/ingress-nginx
+        - --ingress-class=nginx
+        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
+        - --validating-webhook=:8443
+        - --validating-webhook-certificate=/usr/local/certificates/cert
+        - --validating-webhook-key=/usr/local/certificates/key
+        - --enable-metrics=true
+        env:
+        - name: POD_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        - name: POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        - name: LD_PRELOAD
+          value: /usr/local/lib/libmimalloc.so
+        image: registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e
+        imagePullPolicy: IfNotPresent
+        lifecycle:
+          preStop:
+            exec:
+              command:
+              - /wait-shutdown
+        livenessProbe:
+          failureThreshold: 5
+          httpGet:
+            path: /healthz
+            port: 10254
+            scheme: HTTP
+          initialDelaySeconds: 10
+          periodSeconds: 10
+          successThreshold: 1
+          timeoutSeconds: 1
+        name: controller
+        ports:
+        - containerPort: 80
+          name: http
+          protocol: TCP
+        - containerPort: 443
+          name: https
+          protocol: TCP
+        - containerPort: 8443
+          name: webhook
+          protocol: TCP
+        - containerPort: 10254
+          name: metrics
+          protocol: TCP
+        readinessProbe:
+          failureThreshold: 3
+          httpGet:
+            path: /healthz
+            port: 10254
+            scheme: HTTP
+          initialDelaySeconds: 10
+          periodSeconds: 10
+          successThreshold: 1
+          timeoutSeconds: 1
+        resources:
+          requests:
+            cpu: 100m
+            memory: 90Mi
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            add:
+            - NET_BIND_SERVICE
+            drop:
+            - ALL
+          readOnlyRootFilesystem: false
+          runAsNonRoot: true
+          runAsUser: 101
+          seccompProfile:
+            type: RuntimeDefault
+        volumeMounts:
+        - mountPath: /usr/local/certificates/
+          name: webhook-cert
+          readOnly: true
+      nodeSelector:
+        kubernetes.io/os: linux
+      serviceAccountName: ingress-nginx
+      terminationGracePeriodSeconds: 300
+      volumes:
+      - name: webhook-cert
+        secret:
+          secretName: ingress-nginx-admission
+---
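+# The two Jobs below use kube-webhook-certgen to create a TLS certificate for the
+# admission webhook and then patch the webhook configuration with its CA bundle.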
+apiVersion: batch/v1
+kind: Job
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission-create
+  namespace: ingress-nginx
+spec:
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/component: admission-webhook
+        app.kubernetes.io/instance: ingress-nginx
+        app.kubernetes.io/name: ingress-nginx
+        app.kubernetes.io/part-of: ingress-nginx
+        app.kubernetes.io/version: 1.10.1
+      name: ingress-nginx-admission-create
+    spec:
+      containers:
+      - args:
+        - create
+        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
+        - --namespace=$(POD_NAMESPACE)
+        - --secret-name=ingress-nginx-admission
+        env:
+        - name: POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366
+        imagePullPolicy: IfNotPresent
+        name: create
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop:
+            - ALL
+          readOnlyRootFilesystem: true
+          runAsNonRoot: true
+          runAsUser: 65532
+          seccompProfile:
+            type: RuntimeDefault
+      nodeSelector:
+        kubernetes.io/os: linux
+      restartPolicy: OnFailure
+      serviceAccountName: ingress-nginx-admission
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission-patch
+  namespace: ingress-nginx
+spec:
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/component: admission-webhook
+        app.kubernetes.io/instance: ingress-nginx
+        app.kubernetes.io/name: ingress-nginx
+        app.kubernetes.io/part-of: ingress-nginx
+        app.kubernetes.io/version: 1.10.1
+      name: ingress-nginx-admission-patch
+    spec:
+      containers:
+      - args:
+        - patch
+        - --webhook-name=ingress-nginx-admission
+        - --namespace=$(POD_NAMESPACE)
+        - --patch-mutating=false
+        - --secret-name=ingress-nginx-admission
+        - --patch-failure-policy=Fail
+        env:
+        - name: POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366
+        imagePullPolicy: IfNotPresent
+        name: patch
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop:
+            - ALL
+          readOnlyRootFilesystem: true
+          runAsNonRoot: true
+          runAsUser: 65532
+          seccompProfile:
+            type: RuntimeDefault
+      nodeSelector:
+        kubernetes.io/os: linux
+      restartPolicy: OnFailure
+      serviceAccountName: ingress-nginx-admission
+---
+apiVersion: networking.k8s.io/v1
+kind: IngressClass
+metadata:
+  labels:
+    app.kubernetes.io/component: controller
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: nginx
+spec:
+  controller: k8s.io/ingress-nginx
+---
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+  labels:
+    app.kubernetes.io/component: admission-webhook
+    app.kubernetes.io/instance: ingress-nginx
+    app.kubernetes.io/name: ingress-nginx
+    app.kubernetes.io/part-of: ingress-nginx
+    app.kubernetes.io/version: 1.10.1
+  name: ingress-nginx-admission
+webhooks:
+- admissionReviewVersions:
+  - v1
+  clientConfig:
+    service:
+      name: ingress-nginx-controller-admission
+      namespace: ingress-nginx
+      path: /networking/v1/ingresses
+  failurePolicy: Fail
+  matchPolicy: Equivalent
+  name: validate.nginx.ingress.kubernetes.io
+  rules:
+  - apiGroups:
+    - networking.k8s.io
+    apiVersions:
+    - v1
+    operations:
+    - CREATE
+    - UPDATE
+    resources:
+    - ingresses
+  sideEffects: None
diff --git a/k8s-manifests/cluster-setup/provisioner/local-path-storage.yaml b/k8s-manifests/cluster-setup/provisioner/local-path-storage.yaml
new file mode 100644
index 00000000..725449ae
--- /dev/null
+++ b/k8s-manifests/cluster-setup/provisioner/local-path-storage.yaml
@@ -0,0 +1,159 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: local-path-storage
+
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: local-path-provisioner-service-account
+  namespace: local-path-storage
+
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: local-path-provisioner-role
+  namespace: local-path-storage
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
+
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: local-path-provisioner-role
+rules:
+  - apiGroups: [""]
+    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["persistentvolumes"]
+    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
+  - apiGroups: [""]
+    resources: ["events"]
+    verbs: ["create", "patch"]
+  - apiGroups: ["storage.k8s.io"]
+    resources: ["storageclasses"]
+    verbs: ["get", "list", "watch"]
+
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: local-path-provisioner-bind
+  namespace: local-path-storage
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: local-path-provisioner-role
+subjects:
+  - kind: ServiceAccount
+    name: local-path-provisioner-service-account
+    namespace: local-path-storage
+
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: local-path-provisioner-bind
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: local-path-provisioner-role
+subjects:
+  - kind: ServiceAccount
+    name: local-path-provisioner-service-account
+    namespace: local-path-storage
+
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: local-path-provisioner
+  namespace: local-path-storage
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: local-path-provisioner
+  template:
+    metadata:
+      labels:
+        app: local-path-provisioner
+    spec:
+      serviceAccountName: local-path-provisioner-service-account
+      containers:
+        - name: local-path-provisioner
+          image: rancher/local-path-provisioner:v0.0.26
+          imagePullPolicy: IfNotPresent
+          command:
+            - local-path-provisioner
+            - --debug
+            - start
+            - --config
+            - /etc/config/config.json
+          volumeMounts:
+            - name: config-volume
+              mountPath: /etc/config/
+          env:
+            - name: POD_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+      volumes:
+        - name: config-volume
+          configMap:
+            name: local-path-config
+
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-path
+provisioner: rancher.io/local-path
+volumeBindingMode: WaitForFirstConsumer
+reclaimPolicy: Delete
+
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: local-path-config
+  namespace: local-path-storage
+data:
+  config.json: |-
+    {
+      "nodePathMap":[
+        {
+          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
+          "paths":["/opt/local-path-provisioner"]
+        }
+      ]
+    }
+  setup: |-
+    #!/bin/sh
+    set -eu
+    mkdir -m 0777 -p "$VOL_DIR"
+  teardown: |-
+    #!/bin/sh
+    set -eu
+    rm -rf "$VOL_DIR"
+  helperPod.yaml: |-
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: helper-pod
+    spec:
+      priorityClassName: system-node-critical
+      tolerations:
+        - key: node.kubernetes.io/disk-pressure
+          operator: Exists
+          effect: NoSchedule
+      containers:
+        - name: helper-pod
+          image: busybox
+          imagePullPolicy: IfNotPresent
diff --git a/k8s-manifests/cluster-setup/storage/storageclass.yaml b/k8s-manifests/cluster-setup/storage/storageclass.yaml
new file mode 100644
index 00000000..d874d3f8
--- /dev/null
+++ b/k8s-manifests/cluster-setup/storage/storageclass.yaml
@@ -0,0 +1,9 @@
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: standard
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: rancher.io/local-path
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
diff --git a/k8s-manifests/datadog/datadog-agent.yaml b/k8s-manifests/datadog/datadog-agent.yaml
new file mode 100644
index 00000000..ee08b87b
--- /dev/null
+++ b/k8s-manifests/datadog/datadog-agent.yaml
@@ -0,0 +1,99 @@
+apiVersion: datadoghq.com/v2alpha1
+kind: DatadogAgent
+metadata:
+  name: datadog
+spec:
+  global:
+    clusterName: storedog-k8s
+    site: datadoghq.com
+    kubelet:
+      tlsVerify: false
+    credentials:
+      apiSecret:
+        secretName: datadog-secret
+        keyName: api-key
+      appSecret:
+        secretName: datadog-secret
+        keyName: app-key
+  features:
+    logCollection: # Logs
+      enabled: true
+      containerCollectAll: true
+    clusterChecks: # Required for integrations
+      enabled: true
+    # Datadog security features
+    # cspm: # Cloud Security Posture Management
+    #   enabled: true
+    #   hostBenchmarks:
+    #     enabled: true
+    # cws: # Cloud Workload Security
+    #   enabled: true
+    # sbom: # Software Bill of Materials
+    #   enabled: true
+    #   containerImage:
+    #     enabled: true
+  override:
+    # Deploy the Datadog agent to the control plane nodes.
+    # Used to monitor pods running on the control plane nodes.
+    # This is required for etcd integration.
+    nodeAgent:
+      tolerations:
+        - key: node-role.kubernetes.io/control-plane
+          operator: Exists
+          effect: NoSchedule
+      containers:
+        agent:
+          volumeMounts:
+            - name: etcd-certs # Used for etcd integration
+              readOnly: true
+              mountPath: /host/etc/kubernetes/pki/etcd
+            - name: disable-etcd-autoconf # Used for etcd integration
+              mountPath: /etc/datadog-agent/conf.d/etcd.d
+            # Used for audit logs
+            # - name: klogpath
+            #   readOnly: true
+            #   mountPath: /var/log/kubernetes
+      volumes:
+        - name: etcd-certs # Used for etcd integration
+          hostPath:
+            path: /etc/kubernetes/pki/etcd
+        - name: disable-etcd-autoconf
+          emptyDir: {}
+        # Used for audit logs
+        # - name: klogpath
+        #   hostPath:
+        #     path: /var/log/kubernetes
+      extraConfd:
+        configDataMap: # Integration configurations
+          etcd.yaml: |- # etcd integration
+            ad_identifiers:
+              - etcd
+            init_config:
+            instances:
+              - prometheus_url: https://%%host%%:2379/metrics
+                tls_ca_cert: /host/etc/kubernetes/pki/etcd/ca.crt
+                tls_cert: /host/etc/kubernetes/pki/etcd/server.crt
+                tls_private_key: /host/etc/kubernetes/pki/etcd/server.key
+          nginx_ingress_controller.yaml: |- # nginx-ingress-controller integration
+            ad_identifiers:
+              - controller
+            init_config:
+            instances:
+              - prometheus_url: http://%%host%%:10254/metrics
+                collect_nginx_histograms: true
+            logs:
+              - service: controller
+                source: nginx-ingress-controller
+          nginx.yaml: |- # nginx integration on the nginx-ingress-controller
+            ad_identifiers:
+              - controller
+            init_config:
+            instances:
+              - nginx_status_url: http://%%host%%:18080/nginx_status
+          # Send audit logs to Datadog
+          # kube_audit.yaml: |-
+          #   logs:
+          #     - type: file
+          #       path: /var/log/kubernetes/apiserver/audit.log
+          #       service: kube-apiserver-audit
+          #       source: kubernetes.audit
diff --git a/k8s-manifests/storedog-app/configmaps/postgres-config.yaml b/k8s-manifests/storedog-app/configmaps/postgres-config.yaml
new file mode 100644
index 00000000..4f1c9501
--- /dev/null
+++ b/k8s-manifests/storedog-app/configmaps/postgres-config.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: postgres-config
+data:
+  postgresql.conf: |
+    listen_addresses = '*'
+    max_connections = 200
+    shared_buffers = 128MB
+    dynamic_shared_memory_type = posix
diff --git a/k8s-manifests/storedog-app/configmaps/postgres-scripts.yaml b/k8s-manifests/storedog-app/configmaps/postgres-scripts.yaml
new file mode 100644
index 00000000..d1537e52
--- /dev/null
+++ b/k8s-manifests/storedog-app/configmaps/postgres-scripts.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: postgres-scripts
+data:
+  start-services.sh: |
+    #!/bin/sh
+
+    # Start cron as root
+    crond -f &
+
+    # Ensure data directory has correct permissions
+    chown -R postgres:postgres /var/lib/postgresql/data
+    chmod -R 0700 /var/lib/postgresql/data
+
+    # Switch to the postgres user and start PostgreSQL using the original entrypoint
+    exec su-exec postgres docker-entrypoint.sh "$@"
diff --git a/k8s-manifests/storedog-app/configmaps/shared-config.yaml b/k8s-manifests/storedog-app/configmaps/shared-config.yaml
new file mode 100644
index 00000000..ff85e299
--- /dev/null
+++ b/k8s-manifests/storedog-app/configmaps/shared-config.yaml
@@ -0,0 +1,20 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: storedog-config
+data:
+  POSTGRES_USER: postgres
+  DB_HOST: postgres
+  DB_PORT: "5432"
+  REDIS_URL: redis://redis:6379/0
+  RAILS_ENV: production
+  DB_POOL: "25"
+  MAX_THREADS: "5"
+  NEXT_PUBLIC_ADS_ROUTE: /services/ads
+  NEXT_PUBLIC_DISCOUNTS_ROUTE: /services/discounts
+  NEXT_PUBLIC_DBM_ROUTE: /services/dbm
+  NEXT_PUBLIC_FRONTEND_API_ROUTE: http://service-proxy:80
+  NEXT_PUBLIC_SPREE_API_HOST: http://service-proxy/services/backend
+  NEXT_PUBLIC_SPREE_CLIENT_HOST: /services/backend
+  NEXT_PUBLIC_SPREE_IMAGE_HOST: /services/backend
+  NEXT_PUBLIC_SPREE_ALLOWED_IMAGE_DOMAIN: service-proxy
diff --git a/k8s-manifests/storedog-app/deployments/ads.yaml b/k8s-manifests/storedog-app/deployments/ads.yaml
new file mode 100644
index 00000000..a65e291c
--- /dev/null
+++ b/k8s-manifests/storedog-app/deployments/ads.yaml
@@ -0,0 +1,64 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: ads
+spec:
+  ports:
+    - port: 3030
+      targetPort: 3030
+      name: http
+  selector:
+    app: ads
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: ads
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: ads
+  template:
+    metadata:
+      labels:
+        app: ads
+      annotations:
+        ad.datadoghq.com/ads.logs: '[{"source": "java"}]'
+    spec:
+      volumes:
+        - name: apmsocketpath
+          hostPath:
+            path: /var/run/datadog/
+      containers:
+        - name: ads
+          image: ${REGISTRY_URL}/storedog-ads-java:${SD_TAG}
+          ports:
+            - containerPort: 3030
+          env:
+            - name: DD_ENV
+              value: ${DD_ENV}
+            - name: DD_SERVICE
+              value: store-ads
+            - name: DD_VERSION
+              value: ${DD_VERSION_ADS}
+            - name: DD_LOGS_INJECTION
+              value: "true"
+            - name: DD_RUNTIME_METRICS_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ALLOCATION_ENABLED
+              value: "true"
+            - name: DD_PROFILING_TIMELINE_ENABLED
+              value: "true"
+          resources:
+            requests:
+              memory: "256Mi"
+              cpu: "200m"
+            limits:
+              memory: "512Mi"
+              cpu: "400m"
+          volumeMounts:
+            - name: apmsocketpath
+              mountPath: /var/run/datadog
diff --git a/k8s-manifests/storedog-app/deployments/backend.yaml b/k8s-manifests/storedog-app/deployments/backend.yaml
new file mode 100644
index 00000000..f1942a2d
--- /dev/null
+++ b/k8s-manifests/storedog-app/deployments/backend.yaml
@@ -0,0 +1,109 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: backend
+spec:
+  ports:
+    - port: 4000
+      targetPort: 4000
+      name: http
+  selector:
+    app: backend
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: backend
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: backend
+  template:
+    metadata:
+      labels:
+        app: backend
+      annotations:
+        ad.datadoghq.com/backend.logs: '[{"source": "ruby", "auto_multi_line_detection": true}]'
+    spec:
+      volumes:
+        - name: apmsocketpath
+          hostPath:
+            path: /var/run/datadog/
+      initContainers:
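+        # Block startup until Postgres accepts connections on port 5432.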
+        - name: wait-for-db
+          image: busybox
+          command: ['sh', '-c', 'until nc -z postgres 5432; do echo waiting for postgres; sleep 2; done;']
+      containers:
+        - name: backend
+          image: ${REGISTRY_URL}/storedog-backend:${SD_TAG}
+          ports:
+            - containerPort: 4000
+          command: ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "-p", "4000"]
+          env:
+            - name: REDIS_URL
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: REDIS_URL
+            - name: DB_HOST
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: DB_HOST
+            - name: DB_PORT
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: DB_PORT
+            - name: POSTGRES_USER
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: POSTGRES_USER
+            - name: POSTGRES_PASSWORD
+              valueFrom:
+                secretKeyRef:
+                  name: storedog-secrets
+                  key: POSTGRES_PASSWORD
+            - name: RAILS_ENV
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: RAILS_ENV
+            - name: DB_POOL
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: DB_POOL
+            - name: MAX_THREADS
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: MAX_THREADS
+            - name: DD_ENV
+              value: ${DD_ENV}
+            - name: DD_SERVICE
+              value: store-backend
+            - name: DD_VERSION
+              value: ${DD_VERSION_BACKEND}
+            - name: DD_LOGS_INJECTION
+              value: "true"
+            - name: DD_RUNTIME_METRICS_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ALLOCATION_ENABLED
+              value: "true"
+            - name: DD_PROFILING_TIMELINE_ENABLED
+              value: "true"
+          resources:
+            requests:
+              memory: "512Mi"
+              cpu: "250m"
+            limits:
+              memory: "1Gi"
+              cpu: "500m"
+          volumeMounts:
+            - name: apmsocketpath
+              mountPath: /var/run/datadog
diff --git a/k8s-manifests/storedog-app/deployments/discounts.yaml b/k8s-manifests/storedog-app/deployments/discounts.yaml
new file mode 100644
index 00000000..eb353238
--- /dev/null
+++ b/k8s-manifests/storedog-app/deployments/discounts.yaml
@@ -0,0 +1,85 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: discounts
+spec:
+  ports:
+    - port: 2814
+      targetPort: 2814
+      name: http
+  selector:
+    app: discounts
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: discounts
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: discounts
+  template:
+    metadata:
+      labels:
+        app: discounts
+      annotations:
+        ad.datadoghq.com/discounts.logs: '[{"source": "python"}]'
+    spec:
+      volumes:
+        - name: apmsocketpath
+          hostPath:
+            path: /var/run/datadog/
+      initContainers:
+        - name: wait-for-db
+          image: busybox
+          command: ['sh', '-c', 'until nc -z postgres 5432; do echo waiting for postgres; sleep 2; done;']
+      containers:
+        - name: discounts
+          image: ${REGISTRY_URL}/storedog-discounts:${SD_TAG}
+          ports:
+            - containerPort: 2814
+          command: ["ddtrace-run", "flask", "run", "--port=2814", "--host=0.0.0.0"]
+          env:
+            - name: FLASK_APP
+              value: "discounts.py"
+            - name: FLASK_DEBUG
+              value: "0"
+            - name: POSTGRES_PASSWORD
+              valueFrom:
+                secretKeyRef:
+                  name: storedog-secrets
+                  key: POSTGRES_PASSWORD
+            - name: POSTGRES_USER
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: POSTGRES_USER
+            - name: POSTGRES_HOST
+              value: postgres
+            - name: DD_ENV
+              value: ${DD_ENV}
+            - name: DD_SERVICE
+              value: store-discounts
+            - name: DD_VERSION
+              value: ${DD_VERSION_DISCOUNTS}
+            - name: DD_LOGS_INJECTION
+              value: "true"
+            - name: DD_RUNTIME_METRICS_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ALLOCATION_ENABLED
+              value: "true"
+            - name: DD_PROFILING_TIMELINE_ENABLED
+              value: "true"
+          resources:
+            requests:
+              memory: "128Mi"
+              cpu: "100m"
+            limits:
+              memory: "256Mi"
+              cpu: "200m"
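+          # Mount the host's Datadog APM socket so the tracer can reach the agent.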
+          volumeMounts:
+            - name: apmsocketpath
+              mountPath: /var/run/datadog
diff --git a/k8s-manifests/storedog-app/deployments/frontend.yaml b/k8s-manifests/storedog-app/deployments/frontend.yaml
new file mode 100644
index 00000000..8810a89b
--- /dev/null
+++ b/k8s-manifests/storedog-app/deployments/frontend.yaml
@@ -0,0 +1,121 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: frontend
+spec:
+  ports:
+    - port: 3000
+      targetPort: 3000
+      name: http
+  selector:
+    app: frontend
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: frontend
+  template:
+    metadata:
+      labels:
+        app: frontend
+      annotations:
+        ad.datadoghq.com/frontend.logs: '[{"source": "nodejs", "auto_multi_line_detection": true}]'
+    spec:
+      volumes:
+        - name: apmsocketpath
+          hostPath:
+            path: /var/run/datadog/
+      initContainers:
+        - name: wait-for-backend
+          image: busybox
+          command: ['sh', '-c', 'until nc -z backend 4000; do echo waiting for backend; sleep 2; done;']
+      containers:
+        - name: frontend
+          image: ${REGISTRY_URL}/storedog-frontend:${SD_TAG}
+          ports:
+            - containerPort: 3000
+          command: ["npm", "run", "dev"]
+          env:
+            - name: NEXT_PUBLIC_ADS_ROUTE
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_ADS_ROUTE
+            - name: NEXT_PUBLIC_DISCOUNTS_ROUTE
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_DISCOUNTS_ROUTE
+            - name: NEXT_PUBLIC_DBM_ROUTE
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_DBM_ROUTE
+            - name: NEXT_PUBLIC_FRONTEND_API_ROUTE
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_FRONTEND_API_ROUTE
+            - name: NEXT_PUBLIC_SPREE_API_HOST
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_SPREE_API_HOST
+            - name: NEXT_PUBLIC_SPREE_CLIENT_HOST
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_SPREE_CLIENT_HOST
+            - name: NEXT_PUBLIC_SPREE_IMAGE_HOST
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_SPREE_IMAGE_HOST
+            - name: NEXT_PUBLIC_SPREE_ALLOWED_IMAGE_DOMAIN
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: NEXT_PUBLIC_SPREE_ALLOWED_IMAGE_DOMAIN
+            - name: DD_ENV
+              value: ${DD_ENV}
+            - name: DD_SERVICE
+              value: ${NEXT_PUBLIC_DD_SERVICE_FRONTEND}
+            - name: DD_VERSION
+              value: ${NEXT_PUBLIC_DD_VERSION_FRONTEND}
+            - name: DD_LOGS_INJECTION
+              value: "true"
+            - name: DD_RUNTIME_METRICS_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ALLOCATION_ENABLED
+              value: "true"
+            - name: DD_PROFILING_TIMELINE_ENABLED
+              value: "true"
+          resources:
+            requests:
+              memory: "256Mi"
+              cpu: "100m"
+            limits:
+              memory: "2Gi"
+          volumeMounts:
+            - name: apmsocketpath
+              mountPath: /var/run/datadog
+          livenessProbe:
+            httpGet:
+              path: /
+              port: 3000
+            initialDelaySeconds: 180
+            periodSeconds: 20
+            failureThreshold: 5
+          readinessProbe:
+            httpGet:
+              path: /
+              port: 3000
+            initialDelaySeconds: 120
+            periodSeconds: 20
diff --git a/k8s-manifests/storedog-app/deployments/nginx.yaml b/k8s-manifests/storedog-app/deployments/nginx.yaml
new file mode 100644
index 00000000..b7ae036a
--- /dev/null
+++ b/k8s-manifests/storedog-app/deployments/nginx.yaml
@@ -0,0 +1,71 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: service-proxy
+spec:
+  type: ClusterIP
+  ports:
+    - port: 80
+      targetPort: 80
+      name: http
+  selector:
+    app: service-proxy
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: service-proxy
+  labels:
+    tags.datadoghq.com/service: service-proxy
+    tags.datadoghq.com/version: 1.28.0
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: service-proxy
+  template:
+    metadata:
+      labels:
+        app: service-proxy
+        tags.datadoghq.com/service: service-proxy
+        tags.datadoghq.com/version: 1.28.0
+      annotations:
+        ad.datadoghq.com/nginx.logs: '[{"source": "nginx"}]'
+        ad.datadoghq.com/nginx.checks: |
+          {
+            "nginx": {
+              "init_config": {},
+              "instances": [{"nginx_status_url": "http://%%host%%:81/nginx_status/"}]
+            }
+          }
+    spec:
+      volumes:
+        - name: apmsocketpath
+          hostPath:
+            path: /var/run/datadog/
+      containers:
+        - name: service-proxy
+          image: ${REGISTRY_URL}/storedog-nginx:${SD_TAG}
+          ports:
+            - containerPort: 80
+          env:
+            - name: NGINX_RESOLVER
+              value: "kube-dns.kube-system.svc.cluster.local"
+            - name: DD_ENV
+              value: ${DD_ENV}
+            - name: DD_SERVICE
+              value: service-proxy
+            - name: DD_VERSION
+              value: ${DD_VERSION_NGINX}
+            - name: DD_TRACE_AGENT_URL
+              value: 'unix:///var/run/datadog/apm.socket'
+          resources:
+            requests:
+              memory: "128Mi"
+              cpu: "100m"
+            limits:
+              memory: "256Mi"
+              cpu: "200m"
+          volumeMounts:
+            - name: apmsocketpath
+              mountPath: /var/run/datadog
diff --git a/k8s-manifests/storedog-app/deployments/puppeteer.yaml b/k8s-manifests/storedog-app/deployments/puppeteer.yaml
new file mode 100644
index 00000000..c14d7ea1
--- /dev/null
+++ b/k8s-manifests/storedog-app/deployments/puppeteer.yaml
@@ -0,0 +1,43 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: puppeteer
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: puppeteer
+  template:
+    metadata:
+      labels:
+        app: puppeteer
+    spec:
+      containers:
+        - name: puppeteer
+          image: ${REGISTRY_URL}/storedog-puppeteer:${SD_TAG}
+          env:
+            - name: STOREDOG_URL
+              value: "http://service-proxy"
+            - name: PUPPETEER_TIMEOUT
+              value: "30000"
+            - name: SKIP_SESSION_CLOSE
+              value: "false"
+          resources:
+            requests:
+              memory: "1Gi"
+              cpu: "250m"
+            limits:
+              memory: "2Gi"
+              cpu: "500m"
+          securityContext:
+            capabilities:
+              add:
+                - SYS_ADMIN # Required for Puppeteer to run in Docker
+          volumeMounts:
+            - name: dshm
+              mountPath: /dev/shm
+      volumes:
+        - name: dshm
+          emptyDir:
+            medium: Memory
+            sizeLimit: 4Gi # Equivalent to shm_size: '4gb' in docker-compose
diff --git a/k8s-manifests/storedog-app/deployments/worker.yaml b/k8s-manifests/storedog-app/deployments/worker.yaml
new file mode 100644
index 00000000..a456af14
--- /dev/null
+++ b/k8s-manifests/storedog-app/deployments/worker.yaml
@@ -0,0 +1,92 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: worker
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: worker
+  template:
+    metadata:
+      labels:
+        app: worker
+      annotations:
+        ad.datadoghq.com/worker.logs: '[{"source": "ruby", "auto_multi_line_detection": true}]'
+    spec:
+      volumes:
+        - name: apmsocketpath
+          hostPath:
+            path: /var/run/datadog/
+      initContainers:
+        - name: wait-for-db
+          image: busybox
+          command: ['sh', '-c', 'until nc -z postgres 5432; do echo waiting for postgres; sleep 2; done;']
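+      # The worker runs Sidekiq from the same application image as the backend service.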
+      containers:
+        - name: worker
+          image: ${REGISTRY_URL}/storedog-backend:${SD_TAG}
+          command: ["bundle", "exec", "sidekiq", "-C", "config/sidekiq.yml"]
+          env:
+            - name: WORKER
+              value: "true"
+            - name: REDIS_URL
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: REDIS_URL
+            - name: DB_HOST
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: DB_HOST
+            - name: DB_PORT
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: DB_PORT
+            - name: POSTGRES_USER
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: POSTGRES_USER
+            - name: POSTGRES_PASSWORD
+              valueFrom:
+                secretKeyRef:
+                  name: storedog-secrets
+                  key: POSTGRES_PASSWORD
+            - name: DB_POOL
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: DB_POOL
+            - name: MAX_THREADS
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: MAX_THREADS
+            - name: DD_ENV
+              value: ${DD_ENV}
+            - name: DD_SERVICE
+              value: store-worker
+            - name: DD_VERSION
+              value: ${DD_VERSION_BACKEND}
+            - name: DD_LOGS_INJECTION
+              value: "true"
+            - name: DD_RUNTIME_METRICS_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ENABLED
+              value: "true"
+            - name: DD_PROFILING_ALLOCATION_ENABLED
+              value: "true"
+            - name: DD_PROFILING_TIMELINE_ENABLED
+              value: "true"
+          resources:
+            requests:
+              memory: "512Mi"
+              cpu: "250m"
+            limits:
+              memory: "1Gi"
+              cpu: "500m"
+          volumeMounts:
+            - name: apmsocketpath
+              mountPath: /var/run/datadog
diff --git a/k8s-manifests/storedog-app/ingress/nginx-ingress.yaml b/k8s-manifests/storedog-app/ingress/nginx-ingress.yaml
new file mode 100644
index 00000000..1f40edaa
--- /dev/null
+++ b/k8s-manifests/storedog-app/ingress/nginx-ingress.yaml
@@ -0,0 +1,27 @@
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: storedog-ingress
+  namespace: storedog
+spec:
+  ingressClassName: nginx
+  # All traffic that doesn't match a more specific rule below will
+  # be sent to our central nginx reverse proxy.
+  defaultBackend:
+    service:
+      name: service-proxy
+      port:
+        number: 80
+  rules:
+    - http:
+        paths:
+          # This rule directs all traffic (path: /) to the nginx service.
+          # The nginx service will then use its internal configuration
+          # to route requests to the appropriate backend service.
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: service-proxy
+                port:
+                  number: 80
diff --git a/k8s-manifests/storedog-app/secrets/shared-secrets.yaml b/k8s-manifests/storedog-app/secrets/shared-secrets.yaml
new file mode 100644
index 00000000..974246f2
--- /dev/null
+++ b/k8s-manifests/storedog-app/secrets/shared-secrets.yaml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: storedog-secrets
+type: Opaque
+stringData:
+  POSTGRES_PASSWORD: postgres # Change this in production
+  DB_PASSWORD: postgres # Change this in production
diff --git a/k8s-manifests/storedog-app/statefulsets/postgres.yaml b/k8s-manifests/storedog-app/statefulsets/postgres.yaml
new file mode 100644
index 00000000..1b993eec
--- /dev/null
+++ b/k8s-manifests/storedog-app/statefulsets/postgres.yaml
@@ -0,0 +1,84 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: postgres
+spec:
+  ports:
+    - port: 5432
+      name: postgres
+  clusterIP: None
+  selector:
+    app: postgres
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: postgres
+  labels:
+    tags.datadoghq.com/service: store-db
+    tags.datadoghq.com/version: "15.0"
+spec:
+  serviceName: postgres
+  replicas: 1
+  selector:
+    matchLabels:
+      app: postgres
+  template:
+    metadata:
+      labels:
+        app: postgres
+        tags.datadoghq.com/service: store-db
+        tags.datadoghq.com/version: "15.0"
+      annotations:
+        ad.datadoghq.com/postgres.logs: |
+          [{
+            "source": "postgresql",
+            "auto_multi_line_detection": true,
+            "path": "/var/log/pg_log/postgresql*.json",
+            "type": "file"
+          }]
+        ad.datadoghq.com/postgres.checks: |
+          {
+            "postgres": {
+              "init_config": {},
+              "instances": [{"host":"%%host%%", "port":5432, "username":"datadog", "password":"datadog"}]
+            }
+          }
+    spec:
+      containers:
+        - name: postgres
+          image: ${REGISTRY_URL}/storedog-postgres:${SD_TAG}
+          ports:
+            - containerPort: 5432
+              name: postgres
+          env:
+            - name: POSTGRES_USER
+              valueFrom:
+                configMapKeyRef:
+                  name: storedog-config
+                  key: POSTGRES_USER
+            - name: POSTGRES_PASSWORD
+              valueFrom:
+                secretKeyRef:
+                  name: storedog-secrets
+                  key: POSTGRES_PASSWORD
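+          # Data and logs are stored on PVCs created from the volumeClaimTemplates below.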
+          volumeMounts:
+            - name: postgres-data
+              mountPath: /var/lib/postgresql/data
+            - name: postgres-logs
+              mountPath: /var/log/pg_log
+  volumeClaimTemplates:
+    - metadata:
+        name: postgres-data
+      spec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 10Gi
+    - metadata:
+        name: postgres-logs
+      spec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 1Gi
diff --git a/k8s-manifests/storedog-app/statefulsets/redis.yaml b/k8s-manifests/storedog-app/statefulsets/redis.yaml
new file mode 100644
index 00000000..08878297
--- /dev/null
+++ b/k8s-manifests/storedog-app/statefulsets/redis.yaml
@@ -0,0 +1,58 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: redis
+spec:
+  ports:
+    - port: 6379
+      name: redis
+  clusterIP: None
+  selector:
+    app: redis
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: redis
+  labels:
+    tags.datadoghq.com/service: redis
+    tags.datadoghq.com/version: "6.2"
+spec:
+  serviceName: redis
+  replicas: 1
+  selector:
+    matchLabels:
+      app: redis
+  template:
+    metadata:
+      labels:
+        app: redis
+        tags.datadoghq.com/service: redis
+        tags.datadoghq.com/version: "6.2"
+      annotations:
+        ad.datadoghq.com/redis.logs: '[{"source": "redis"}]'
+        ad.datadoghq.com/redis.checks: |
+          {
+            "redisdb": {
+              "init_config": {},
+              "instances": [{"host":"%%host%%", "port":6379}]
+            }
+          }
+    spec:
+      containers:
+        - name: redis
+          image: redis:6.2-alpine
+          ports:
+            - containerPort: 6379
+              name: redis
+          volumeMounts:
+            - name: redis-data
+              mountPath: /data
+  volumeClaimTemplates:
+    - metadata:
+        name: redis-data
+      spec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 1Gi