# CoreWeave Install & Auth

## Overview
Set up access to CoreWeave Kubernetes Service (CKS). CKS runs bare-metal Kubernetes with NVIDIA GPUs, so there is no hypervisor overhead. Access is via a standard kubeconfig with CoreWeave-issued credentials.
## Prerequisites
- CoreWeave account at https://cloud.coreweave.com
- kubectl v1.28+ installed
- Kubernetes namespace provisioned by CoreWeave
## Instructions
### Step 1: Download Kubeconfig
1. Log in to https://cloud.coreweave.com
2. Navigate to API Access > Kubeconfig
3. Download the kubeconfig file
```bash
# Save kubeconfig outside ~/.kube/config so existing clusters are untouched
mkdir -p ~/.kube
cp ~/Downloads/coreweave-kubeconfig.yaml ~/.kube/coreweave

# Set as active context
export KUBECONFIG=~/.kube/coreweave

# Verify connection
kubectl get nodes
kubectl get namespaces
```
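Before pointing `KUBECONFIG` at the downloaded file, a cheap structural sanity check can catch a truncated or wrong download early. The sketch below is illustrative only: it checks for the standard top-level kubeconfig sections as substrings, demonstrated against a hand-made stand-in file rather than the real `~/.kube/coreweave`.

```python
# Sketch: sanity-check a downloaded kubeconfig before using it.
# The sample content below is a stand-in; a real check would target ~/.kube/coreweave.
import tempfile
from pathlib import Path

REQUIRED_KEYS = ("apiVersion:", "clusters:", "contexts:", "users:", "current-context:")

def kubeconfig_looks_valid(path: Path) -> bool:
    """Cheap structural check: all standard top-level kubeconfig sections present."""
    text = path.read_text()
    return all(key in text for key in REQUIRED_KEYS)

sample = """apiVersion: v1
kind: Config
clusters: []
contexts: []
users: []
current-context: coreweave
"""
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(sample)

print(kubeconfig_looks_valid(Path(f.name)))  # True for the sample above
```

A full validation would parse the YAML and verify credentials, but a substring check is enough to distinguish a kubeconfig from an error page saved by the browser.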
### Step 2: Configure API Token
```bash
# CoreWeave API token for programmatic access
export COREWEAVE_API_TOKEN="your-api-token"

# Persist for later shells (keep .env out of version control -- it holds a secret)
echo "COREWEAVE_API_TOKEN=${COREWEAVE_API_TOKEN}" >> .env
echo "KUBECONFIG=~/.kube/coreweave" >> .env
```
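Scripts that consume the `.env` file should fail fast when a variable is missing rather than issuing unauthenticated API calls. A minimal sketch of such a loader, assuming the simple `KEY=VALUE` format written above (no quoting or variable expansion):

```python
# Sketch: parse the .env written above and fail fast if credentials are missing.
# Variable names follow the shell snippet; the parser is deliberately minimal.
def load_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
    return env

env = load_env("COREWEAVE_API_TOKEN=your-api-token\nKUBECONFIG=~/.kube/coreweave\n")
missing = [k for k in ("COREWEAVE_API_TOKEN", "KUBECONFIG") if not env.get(k)]
if missing:
    raise SystemExit(f"Missing required settings: {missing}")

print(env["KUBECONFIG"])  # ~/.kube/coreweave
```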
### Step 3: Verify GPU Access
```bash
# List available GPU nodes
kubectl get nodes -l gpu.nvidia.com/class -o custom-columns=\
NAME:.metadata.name,GPU:.metadata.labels.gpu\.nvidia\.com/class,\
STATUS:.status.conditions[-1].type

# Check GPU allocatable resources
kubectl describe nodes | grep -A5 "Allocatable:" | grep nvidia
```
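For capacity planning it can help to total allocatable GPUs per class from `kubectl get nodes -o json` instead of eyeballing `describe` output. The sketch below runs against a hand-made two-node payload; real input would come from piping kubectl's JSON into the script.

```python
# Sketch: sum allocatable GPUs per class from `kubectl get nodes -o json`.
# The node payload here is a stand-in with the fields the loop actually reads.
import json
from collections import Counter

sample = json.loads("""
{"items": [
  {"metadata": {"name": "node-a", "labels": {"gpu.nvidia.com/class": "A100_PCIE_80GB"}},
   "status": {"allocatable": {"nvidia.com/gpu": "8"}}},
  {"metadata": {"name": "node-b", "labels": {"gpu.nvidia.com/class": "A100_PCIE_80GB"}},
   "status": {"allocatable": {"nvidia.com/gpu": "8"}}}
]}
""")

gpus = Counter()
for node in sample["items"]:
    cls = node["metadata"]["labels"].get("gpu.nvidia.com/class")
    if cls:
        gpus[cls] += int(node["status"]["allocatable"].get("nvidia.com/gpu", 0))

print(dict(gpus))  # {'A100_PCIE_80GB': 16}
```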
### Step 4: Test with a Simple GPU Pod
```yaml
# test-gpu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-test
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.nvidia.com/class
                operator: In
                values: ["A100_PCIE_80GB"]
```
```bash
kubectl apply -f test-gpu.yaml
kubectl logs gpu-test   # should show nvidia-smi output
kubectl delete pod gpu-test
```
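To test different GPU classes without hand-editing the YAML, the same manifest can be generated programmatically (`kubectl apply -f -` also accepts JSON). The helper name and the default image below mirror the manifest above; this is a convenience sketch, not an official tool.

```python
# Sketch: generate the gpu-test pod manifest for any GPU class.
# Mirrors the test-gpu.yaml above; pipe the JSON to `kubectl apply -f -`.
import json

def gpu_test_pod(gpu_class: str,
                 image: str = "nvidia/cuda:12.2.0-base-ubuntu22.04") -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "gpu-test"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "cuda-test",
                "image": image,
                "command": ["nvidia-smi"],
                "resources": {"limits": {"nvidia.com/gpu": 1}},
            }],
            "affinity": {"nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [{"matchExpressions": [{
                        "key": "gpu.nvidia.com/class",
                        "operator": "In",
                        "values": [gpu_class],
                    }]}],
                },
            }},
        },
    }

manifest = gpu_test_pod("A100_PCIE_80GB")
print(json.dumps(manifest, indent=2)[:80])
```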
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `Unable to connect to the server` | Wrong kubeconfig | Verify the `KUBECONFIG` path |
| `Forbidden` | Missing namespace permissions | Contact CoreWeave support |
| No GPU nodes found | Missing or wrong node labels | Check `gpu.nvidia.com/class` labels |
| Pod stuck `Pending` | GPU capacity exhausted | Try a different GPU type or region |
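Automation that wraps kubectl can turn the table above into actionable messages by matching error text against known patterns. A minimal sketch (the pattern strings follow the table; the fallback message is an assumption):

```python
# Sketch: map common kubectl error text to the remediation hints in the table above.
HINTS = {
    "Unable to connect to the server": "Verify the KUBECONFIG path",
    "Forbidden": "Contact CoreWeave support about namespace permissions",
}

def hint_for(stderr: str) -> str:
    """Return the first matching remediation hint, or a generic fallback."""
    for pattern, hint in HINTS.items():
        if pattern in stderr:
            return hint
    return "No known remediation; check cluster status and CoreWeave docs"

print(hint_for('Error from server (Forbidden): pods is forbidden'))
```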
## Next Steps
Proceed to `coreweave-hello-world` to deploy your first inference service.