Cluster Requirements
Subsalt provides Terraform templates for each of the major Kubernetes cloud providers to make it easy to set up compatible clusters. The Terraform templates are accessible through Subsalt's CLI tool.
Cluster resource requirements
Subsalt runs on Kubernetes v1.32+, and can be deployed in multiple configurations depending on your needs. Multiple components can be deployed in a single cluster, or across multiple clusters.
We follow the Kubernetes release support lifecycle to determine which Kubernetes versions we aim to support.
Subsalt must have cluster-level permission to add operators at installation time.
The tables below list the minimum node pool requirements for each cloud environment, along with the taints and labels each pool must carry.
Microsoft Azure AKS
| Node pool | Node count | Instance type | Taints | Labels |
| --- | --- | --- | --- | --- |
| common | 2 (fixed) | Standard_E16s_v3 | (none) | subsalt.io/node-purpose=common |
| serving | 0-1 (auto-scaling) | Standard_NV12ads_A10_v5 | nvidia.com/gpu=present:NoSchedule | subsalt.io/node-purpose=serving, subsalt.io/has-gpu=true |
| pipelines_cpu | 0-4 (auto-scaling) | Standard_E16s_v3 | subsalt.io/node-purpose=pipelines:NoSchedule | subsalt.io/node-purpose=pipelines, subsalt.io/has-gpu=false |
| pipelines_gpu | 0-3 (auto-scaling) | Standard_NC8as_T4_v3 | nvidia.com/gpu=present:NoSchedule | subsalt.io/node-purpose=pipelines, subsalt.io/has-gpu=true |
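Since Subsalt's Terraform templates manage these pools for you, the following is only an illustration of how one pool maps onto the azurerm provider. A sketch for the pipelines_gpu pool (the resource name and cluster reference are hypothetical, and attribute names vary slightly between azurerm provider versions):

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "pipelines_gpu" {
  name                  = "pipelinesgpu"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.main.id # hypothetical cluster reference
  vm_size               = "Standard_NC8as_T4_v3"

  # 0-3 nodes, auto-scaling, per the table above
  enable_auto_scaling = true
  min_count           = 0
  max_count           = 3

  node_taints = ["nvidia.com/gpu=present:NoSchedule"]
  node_labels = {
    "subsalt.io/node-purpose" = "pipelines"
    "subsalt.io/has-gpu"      = "true"
  }
}
```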
Amazon Web Services (AWS) EKS
| Node pool | Node count | Instance type | Taints | Labels |
| --- | --- | --- | --- | --- |
| common | 2 (fixed) | r6a.4xlarge | (none) | subsalt.io/node-purpose=common |
| serving | 0-1 (auto-scaling) | g5.xlarge | nvidia.com/gpu=present:NoSchedule | subsalt.io/node-purpose=serving, subsalt.io/has-gpu=true |
| pipelines_cpu | 0-4 (auto-scaling) | r6a.4xlarge | subsalt.io/node-purpose=pipelines:NoSchedule | subsalt.io/node-purpose=pipelines, subsalt.io/has-gpu=false |
| pipelines_gpu | 0-3 (auto-scaling) | g5.4xlarge | nvidia.com/gpu=present:NoSchedule | subsalt.io/node-purpose=pipelines, subsalt.io/has-gpu=true |
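The taints and labels above translate into standard Kubernetes tolerations and node selectors for any workload that needs to land on a specific pool. A minimal sketch for a hypothetical pod targeting the GPU pipelines pool (pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-gpu-pipeline # hypothetical pod name
spec:
  # Select GPU pipeline nodes via the labels from the tables above
  nodeSelector:
    subsalt.io/node-purpose: pipelines
    subsalt.io/has-gpu: "true"
  # Tolerate the GPU taint so the pod can schedule onto the pool
  tolerations:
    - key: nvidia.com/gpu
      operator: Equal
      value: present
      effect: NoSchedule
  containers:
    - name: pipeline
      image: example.com/pipeline:latest # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1
```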
If you're using EC2 Auto Scaling groups for cluster autoscaling, the following tags must also be set on each group (values assume the recommended instance types). CPU and memory values should be set to roughly 85-90% of the instance type's actual resources to leave headroom for Kubernetes system pods.
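For example, a g5.xlarge publishes 4 vCPUs and 16 GiB of memory, which at 85-90% headroom yields the serving pool's template values of 3 CPU and 14G. A quick sketch of that calculation (the 0.875 factor is one illustrative choice within the 85-90% range):

```python
def template_resources(vcpus: int, mem_gib: int, headroom: float = 0.875):
    """Scale an instance type's resources down to ~85-90% of the
    published values, leaving room for Kubernetes system pods."""
    return int(vcpus * headroom), int(mem_gib * headroom)

# g5.xlarge (4 vCPU, 16 GiB) -> the serving pool's 3 CPU / 14G
cpu, mem = template_resources(4, 16)
print(f"cpu={cpu}, memory={mem}G")  # cpu=3, memory=14G
```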
Serving node pool
| Tag | Value |
| --- | --- |
| k8s.io/cluster-autoscaler/node-template/resources/cpu | 3 |
| k8s.io/cluster-autoscaler/node-template/resources/memory | 14G |
| k8s.io/cluster-autoscaler/node-template/resources/nvidia.com/gpu | 1 |
| k8s.io/cluster-autoscaler/node-template/taint/nvidia.com/gpu | present:NoSchedule |
| k8s.io/cluster-autoscaler/node-template/label/subsalt.io/has-gpu | true |
Pipelines (CPU) node pool
| Tag | Value |
| --- | --- |
| k8s.io/cluster-autoscaler/node-template/resources/cpu | 14 |
| k8s.io/cluster-autoscaler/node-template/resources/memory | 116G |
| k8s.io/cluster-autoscaler/node-template/taint/subsalt.io/node-purpose | pipelines:NoSchedule |
| k8s.io/cluster-autoscaler/node-template/label/subsalt.io/node-purpose | pipelines |
| k8s.io/cluster-autoscaler/node-template/label/subsalt.io/has-gpu | false |
Pipelines (GPU) node pool
| Tag | Value |
| --- | --- |
| k8s.io/cluster-autoscaler/node-template/resources/cpu | 14 |
| k8s.io/cluster-autoscaler/node-template/resources/memory | 56G |
| k8s.io/cluster-autoscaler/node-template/resources/nvidia.com/gpu | 1 |
| k8s.io/cluster-autoscaler/node-template/taint/nvidia.com/gpu | present:NoSchedule |
| k8s.io/cluster-autoscaler/node-template/label/subsalt.io/node-purpose | pipelines |
| k8s.io/cluster-autoscaler/node-template/label/subsalt.io/has-gpu | true |
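If the Auto Scaling groups are managed with Terraform, these tags become `tag` blocks on the group, propagated at launch. A sketch for the serving pool (the group's own configuration is elided; tag keys and values come from the table above):

```hcl
resource "aws_autoscaling_group" "serving" {
  # ...group configuration elided...

  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/resources/cpu"
    value               = "3"
    propagate_at_launch = true
  }

  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/resources/memory"
    value               = "14G"
    propagate_at_launch = true
  }

  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/resources/nvidia.com/gpu"
    value               = "1"
    propagate_at_launch = true
  }
}
```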
Networking
Ingress
Subsalt requires that the cluster has an Ingress controller installed to support web access.
There are two hosts to configure, one for the web portal and one for authentication management. They should take the form portal.subsalt.acme.com and auth.subsalt.acme.com.
An A record for each host should be configured in your DNS provider to point at the Ingress IP address.
A certificate (TLS/SSL) should be added to the Ingress resource that accounts for both hosts. The Subsalt Helm Chart comes with cert-manager by default which can be configured to provision the certificates and their secrets automatically.
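A minimal sketch of such an Ingress, covering both hosts with one certificate. The ClusterIssuer name, Service names, and ports here are hypothetical and will differ in your install:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subsalt
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt # hypothetical issuer name
spec:
  tls:
    - hosts:
        - portal.subsalt.acme.com
        - auth.subsalt.acme.com
      secretName: subsalt-tls # cert-manager provisions the certificate here
  rules:
    - host: portal.subsalt.acme.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: subsalt-portal # hypothetical service name
                port:
                  number: 80
    - host: auth.subsalt.acme.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: subsalt-auth # hypothetical service name
                port:
                  number: 80
```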
Load balancer
Subsalt requires support for Services of type LoadBalancer so the query endpoint can serve synthetic data.
An A Record should be configured in your DNS provider to point at this load balancer's IP address.
Other
If your team uses another Kubernetes provider (IBM, DigitalOcean, etc), please reach out and we can find a way to support your deployment.