[Bug] eksctl create fargate cluster in china region error #7713

@timandy

Description

What were you trying to accomplish?

Create a Fargate cluster in an AWS China region.

What happened?

Cluster creation failed: the control-plane stack was created, but the CreateFargateProfile call was rejected with InvalidParameterException: Misconfigured PodExecutionRole Trust Policy (full log below).

How to reproduce it?

In the China (Ningxia) region (cn-northwest-1), run:

eksctl create cluster --name fargate-eks --region cn-northwest-1 --fargate

Logs

eksctl create cluster --name fargate-eks --region cn-northwest-1 --fargate
2024-04-17 14:34:12 [ℹ]  eksctl version 0.175.0
2024-04-17 14:34:12 [ℹ]  using region cn-northwest-1
2024-04-17 14:34:12 [ℹ]  setting availability zones to [cn-northwest-1c cn-northwest-1a cn-northwest-1b]
2024-04-17 14:34:12 [ℹ]  subnets for cn-northwest-1c - public:192.168.0.0/19 private:192.168.96.0/19
2024-04-17 14:34:12 [ℹ]  subnets for cn-northwest-1a - public:192.168.32.0/19 private:192.168.128.0/19
2024-04-17 14:34:12 [ℹ]  subnets for cn-northwest-1b - public:192.168.64.0/19 private:192.168.160.0/19
2024-04-17 14:34:12 [ℹ]  using Kubernetes version 1.29
2024-04-17 14:34:12 [ℹ]  creating EKS cluster "fargate-eks" in "cn-northwest-1" region with Fargate profile
2024-04-17 14:34:12 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=cn-northwest-1 --cluster=fargate-eks'
2024-04-17 14:34:12 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "fargate-eks" in "cn-northwest-1"
2024-04-17 14:34:12 [ℹ]  CloudWatch logging will not be enabled for cluster "fargate-eks" in "cn-northwest-1"
2024-04-17 14:34:12 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=cn-northwest-1 --cluster=fargate-eks'
2024-04-17 14:34:12 [ℹ]  
2 sequential tasks: { create cluster control plane "fargate-eks", 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        create fargate profiles,
    } 
}
2024-04-17 14:34:12 [ℹ]  building cluster stack "eksctl-fargate-eks-cluster"
2024-04-17 14:34:12 [ℹ]  deploying stack "eksctl-fargate-eks-cluster"
2024-04-17 14:34:42 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:35:12 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:36:12 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:37:12 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:38:12 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:39:12 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:40:12 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:41:12 [ℹ]  waiting for CloudFormation stack "eksctl-fargate-eks-cluster"
2024-04-17 14:43:13 [ℹ]  creating Fargate profile "fp-default" on EKS cluster "fargate-eks"
2024-04-17 14:43:13 [!]  1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2024-04-17 14:43:13 [ℹ]  to cleanup resources, run 'eksctl delete cluster --region=cn-northwest-1 --name=fargate-eks'
2024-04-17 14:43:13 [✖]  failed to create Fargate profile "fp-default" on EKS cluster "fargate-eks": failed to create Fargate profile "fp-default": operation error EKS: CreateFargateProfile, https response error StatusCode: 400, RequestID: 0c2b3a5d-7ac9-4e7d-92c0-ecb9c8007cbb, InvalidParameterException: Misconfigured PodExecutionRole Trust Policy; Please add the eks-fargate-pods.amazonaws.com Service Principal
Error: failed to create cluster "fargate-eks"

Anything else we need to know?

Versions

$ eksctl info
eksctl version: 0.175.0
kubectl version: v1.29.0-eks-5e0fdde
OS: linux

I found that the trust policy eksctl creates for the FargatePodExecutionRole is:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks-fargate-pods.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:eks:cn-northwest-1:<account-id>:fargateprofile/fargate-eks/*"
                }
            }
        }
    ]
}

The Condition.ArnLike value should instead be "aws:SourceArn": "arn:aws-cn:eks:cn-northwest-1:<account-id>:fargateprofile/fargate-eks/*".
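For reference, a sketch of what the corrected trust policy would look like (<account-id> is a placeholder; only the partition in the SourceArn changes):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks-fargate-pods.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws-cn:eks:cn-northwest-1:<account-id>:fargateprofile/fargate-eks/*"
                }
            }
        }
    ]
}
```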

In China regions, ARNs belong to the aws-cn partition, so they must start with arn:aws-cn: rather than arn:aws:.
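The root cause is presumably that the arn:aws: prefix gets hard-coded when the SourceArn condition is built, instead of deriving the partition from the target region. A minimal sketch of such a region-to-partition mapping (illustrative only, not eksctl's actual code; partitionFor is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// partitionFor returns the ARN partition for a given AWS region.
// China regions use "aws-cn" and GovCloud regions use "aws-us-gov";
// everything else falls into the default "aws" partition.
func partitionFor(region string) string {
	switch {
	case strings.HasPrefix(region, "cn-"):
		return "aws-cn"
	case strings.HasPrefix(region, "us-gov-"):
		return "aws-us-gov"
	default:
		return "aws"
	}
}

func main() {
	// Building the SourceArn with the derived partition instead of a literal "aws".
	arn := fmt.Sprintf("arn:%s:eks:%s:<account-id>:fargateprofile/fargate-eks/*",
		partitionFor("cn-northwest-1"), "cn-northwest-1")
	fmt.Println(arn) // prints "arn:aws-cn:eks:cn-northwest-1:<account-id>:fargateprofile/fargate-eks/*"
}
```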
