KubeStellar on AWS EKS

AWS EKS Installation (Kubernetes 1.34)

Last updated: 2025 • Author: Rishi Mondal 

Overview

This guide installs KubeStellar on AWS EKS (Kubernetes 1.34). It covers a host EKS cluster running KubeStellar (ITS + WDS) and optional Workload Execution Clusters (WECs) registered to KubeStellar.

  • Prefer a local/dev install? See Getting Started → Installation.

Visual diagram

(Architecture: host EKS cluster running KubeStellar ITS + WDS, with WECs registered to it.)

Step 0 — Prerequisites

AWS

  • Permissions: EC2, EKS, IAM, VPC, CloudFormation
  • Region: us-east-1 recommended
  • Networking: IPv4 (public or private subnets)
  • Internet egress for images & Helm charts

Minimum quotas:

  • vCPU: 12
  • Elastic IPs: 4
  • Target Groups: 5
  • NLBs: 2
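
To compare these limits against what the account actually has, the Service Quotas API can be queried. A minimal sketch for the On-Demand vCPU limit, assuming the standard quota code L-1216C47A (other codes can be found with aws service-quotas list-service-quotas --service-code ec2):

# current On-Demand standard-instance vCPU quota in us-east-1
aws service-quotas get-service-quota \
  --service-code ec2 \
  --quota-code L-1216C47A \
  --region us-east-1 \
  --query 'Quota.Value'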

Local machine

  • Linux or macOS
  • kubectl (latest)
  • eksctl (≥ 0.197 for Kubernetes 1.34)
  • AWS CLI v2
  • Helm v3
  • kflex (latest)
  • clusteradm (OCM) (latest)

Install tooling

# AWS CLI
curl -sSLO https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
unzip -q awscli-exe-linux-x86_64.zip && sudo ./aws/install

# kubectl (latest)
curl -sSLO "https://dl.k8s.io/release/$(curl -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# eksctl
curl -sSL "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# Helm
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# KubeFlex CLI
curl -fsSL https://github.com/kubestellar/kubeflex/releases/download/v0.7.4/kflex_0.7.4_linux_amd64.tar.gz | tar xz
sudo mv kflex /usr/local/bin/

# clusteradm (OCM)
curl -fsSL https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
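
A quick sanity check that everything is on the PATH (exact flags and output vary slightly between tool versions):

aws --version
kubectl version --client
eksctl version
helm version --short
kflex version
clusteradm version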

Configure AWS

aws configure
# Region: us-east-1, Output: json
aws sts get-caller-identity

Step 1 — Create Host EKS Cluster (Kubernetes 1.34)

cat > kubestellar-host-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: kubestellar-host
  region: us-east-1
  version: "1.34"

kubernetesNetworkConfig:
  ipFamily: IPv4

iam:
  withOIDC: true

managedNodeGroups:
  - name: ng-1
    instanceType: t3.large
    desiredCapacity: 3
    minSize: 2
    maxSize: 4
    volumeSize: 50
    amiFamily: AmazonLinux2023
    privateNetworking: false

addons:
  - name: vpc-cni
    version: latest
  - name: kube-proxy
    version: latest
  - name: coredns
    version: latest
EOF
eksctl create cluster -f kubestellar-host-cluster.yaml
aws eks update-kubeconfig --name kubestellar-host --region us-east-1
kubectl get nodes
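
A quick confirmation that the control plane is on 1.34 and all nodes joined and are Ready:

aws eks describe-cluster --name kubestellar-host --region us-east-1 --query 'cluster.version' --output text
kubectl get nodes -o wide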

Step 2 — Install Ingress (NGINX)

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --version 4.12.1 \
  --set controller.extraArgs.enable-ssl-passthrough="" \
  --set controller.service.type=LoadBalancer \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-nlb-target-type"="instance" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"="internet-facing"
kubectl get svc -n ingress-nginx ingress-nginx-controller
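
Provisioning the NLB usually takes a few minutes. A small sketch to wait for the controller and print the NLB hostname once it is assigned:

kubectl -n ingress-nginx rollout status deployment/ingress-nginx-controller
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'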

Step 3 — Install KubeStellar Core

export KUBESTELLAR_VERSION=0.27.2
helm upgrade --install ks-core \
  oci://ghcr.io/kubestellar/kubestellar/core-chart \
  --version $KUBESTELLAR_VERSION \
  --set-json='ITSes=[{"name":"its1"}]' \
  --set-json='WDSes=[{"name":"wds1"},{"name":"wds2","type":"host"}]' \
  --timeout 24h
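
The chart returns before the control planes are fully up, and Step 5 assumes a kubeconfig context named its1. A minimal sketch, assuming kflex v0.7.x (its ctx subcommand writes and switches to a context for a control plane):

# wait until the its1, wds1 and wds2 control planes report ready
kubectl get controlplanes

# create (and switch to) a kubeconfig context named its1
kflex ctx its1

# switch back to the host EKS cluster context for the remaining host-side commands
kubectl config get-contexts
kubectl config use-context <host-cluster-context>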

Step 4 — Create Workload Execution Clusters (WECs) (optional)

If you already have clusters to use as WECs, skip this step and go directly to Step 5 — Register WECs with KubeStellar.

Create WEC 1 — cluster1

cat > cluster1.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster1
  region: us-east-1
  version: "1.34"

managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
EOF
eksctl create cluster -f cluster1.yaml

Create WEC 2 — cluster2

cat > cluster2.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster2
  region: us-east-1
  version: "1.34"

managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
EOF
eksctl create cluster -f cluster2.yaml
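
Step 5 uses kubeconfig contexts named cluster1 and cluster2, while update-kubeconfig names contexts after the cluster ARN by default. The --alias flag gives them the expected short names:

aws eks update-kubeconfig --name cluster1 --region us-east-1 --alias cluster1
aws eks update-kubeconfig --name cluster2 --region us-east-1 --alias cluster2
kubectl config get-contexts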

Step 5 — Register WECs with KubeStellar

If you skipped Step 4, register your existing clusters here.

Get join token from ITS

joincmd=$(clusteradm --context its1 get token | awk '/clusteradm join/ {print}')
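
It is worth confirming a join command was actually captured before substituting the cluster name:

echo "$joincmd"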

Register cluster1

${joincmd/<cluster_name>/cluster1} \
  --context cluster1 \
  --singleton \
  --force-internal-endpoint-lookup \
  --wait-timeout 240s

Register cluster2

${joincmd/<cluster_name>/cluster2} \
  --context cluster2 \
  --singleton \
  --force-internal-endpoint-lookup \
  --wait-timeout 240s

Accept and label

clusteradm --context its1 accept --clusters cluster1
clusteradm --context its1 accept --clusters cluster2
kubectl --context its1 label managedcluster cluster1 location-group=edge --overwrite
kubectl --context its1 label managedcluster cluster2 location-group=edge --overwrite
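
To confirm both WECs are accepted, available, and carry the location-group=edge label that the BindingPolicy in Step 6 selects on:

kubectl --context its1 get managedclusters --show-labels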

Step 6 — Deploy a Test App via KubeStellar

Create namespace and deployment

Apply these to a WDS. With the kubeconfig context still pointing at the host cluster, the objects land in wds2 (the WDS of type host created in Step 3).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: test-app
EOF

# the Deployment itself carries the app: nginx label so the BindingPolicy objectSelector below matches it
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: test-app
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
EOF

Create BindingPolicy to target WECs

kubectl apply -f - <<'EOF'
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: nginx-test-policy
spec:
  clusterSelectors:
    - matchLabels:
        location-group: edge
  downsync:
    - objectSelectors:
        - matchLabels:
            app: nginx
EOF

Verify

kubectl --context cluster1 get deploy -n test-app
kubectl --context cluster2 get deploy -n test-app
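
If the deployments do not show up, checking the policy in the WDS and the pods on the WECs narrows things down (run the first command with the same context used to apply the BindingPolicy):

kubectl get bindingpolicies
kubectl --context cluster1 get pods -n test-app
kubectl --context cluster2 get pods -n test-app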

Troubleshooting

# Registration
kubectl --context its1 get managedclusters

# Agent issues
kubectl --context cluster1 -n open-cluster-management-agent get pods
kubectl --context cluster1 get csr

# KubeStellar components
kubectl get controlplanes -A
kubectl logs -n kubeflex-system -l app=kubeflex-controller-manager

Cleanup
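
Uninstalling the ingress-nginx release before deleting the host cluster lets Kubernetes deprovision the NLB it created; a leftover NLB and its network interfaces can otherwise block VPC deletion:

# run against the host cluster context
helm uninstall ingress-nginx -n ingress-nginx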

eksctl delete cluster --name cluster1 --region us-east-1
eksctl delete cluster --name cluster2 --region us-east-1
eksctl delete cluster --name kubestellar-host --region us-east-1