Last updated: 2025-07-31 (UTC).

# Attach your AKS cluster

To attach a cluster means to connect it to Google Cloud by registering it
with Google Cloud [Fleet management](/anthos/fleet-management/docs) and
installing the GKE attached clusters software on it.

You can attach a cluster using the gcloud CLI or Terraform. To learn
how to create and attach an AKS cluster using Terraform, see the
[GitHub repository of samples for GKE attached clusters](https://github.com/GoogleCloudPlatform/anthos-samples/tree/main/anthos-attached-clusters).

To attach an AKS cluster using gcloud, perform the following steps.

Prerequisites
-------------

Ensure that your cluster meets the
[cluster requirements](/kubernetes-engine/multi-cloud/docs/attached/aks/reference/cluster-prerequisites).

When attaching your cluster, you must specify the following:

- A supported Google Cloud [administrative region](/kubernetes-engine/multi-cloud/docs/attached/aks/reference/supported-regions)
- A platform version

The administrative region is the Google Cloud region from which you administer
your attached cluster. You can choose any supported region, but as a best
practice, choose the region geographically closest to your cluster. No user
data is stored in the administrative region.

The platform version is the version of GKE attached clusters to be installed
on your cluster.
You can list all supported versions by running the following command:

    gcloud container attached get-server-config \
        --location=<var translate="no">GOOGLE_CLOUD_REGION</var>

Replace <var translate="no">GOOGLE_CLOUD_REGION</var> with the name of the
Google Cloud region to administer your cluster from.

Platform version numbering
--------------------------

These documents refer to the GKE attached clusters version as the platform
version, to distinguish it from the Kubernetes version. GKE attached clusters
uses the same version numbering convention as GKE - for example, 1.21.5-gke.1.
When attaching or updating your cluster, you must choose a platform version
whose minor version is the same as, or one level below, the Kubernetes minor
version of your cluster. For example, you can attach a cluster running
Kubernetes v1.22.\* with GKE attached clusters platform version 1.21.\* or
1.22.\*.

This lets you upgrade your cluster to the next Kubernetes minor version before
upgrading GKE attached clusters.

Attach your AKS cluster
-----------------------

| **Note:** The default number of clusters that you can attach per project is 50. To increase this quota, contact [Google Cloud support](/support).

To attach your AKS cluster to Google Cloud
[Fleet management](/anthos/fleet-management/docs),
run the following commands:

1. Ensure that your kubeconfig file has an entry for the cluster you'd like
   to attach:

        az aks get-credentials -n <var translate="no">AKS_CLUSTER_NAME</var> \
            -g <var translate="no">RESOURCE_GROUP</var>

2. Run this command to extract your cluster's kubeconfig context and
   store it in the `KUBECONFIG_CONTEXT` environment variable:

        KUBECONFIG_CONTEXT=$(kubectl config current-context)

3. The command to register your cluster varies slightly depending on whether
   you've configured your cluster with the default private OIDC issuer or the
   experimental public one. Choose the tab that applies to your cluster:

   ### Private OIDC issuer (default)

   Use the
   [`gcloud container attached clusters register` command](/sdk/gcloud/reference/container/attached/clusters/register)
   to register the cluster:

        gcloud container attached clusters register <var translate="no">CLUSTER_NAME</var> \
            --location=<var translate="no">GOOGLE_CLOUD_REGION</var> \
            --fleet-project=<var translate="no">PROJECT_NUMBER</var> \
            --platform-version=<var translate="no">PLATFORM_VERSION</var> \
            --distribution=aks \
            --context=<var translate="no">KUBECONFIG_CONTEXT</var> \
            --has-private-issuer \
            --kubeconfig=<var translate="no">KUBECONFIG_PATH</var>

   Replace the following:
   - <var translate="no">CLUSTER_NAME</var>: the name of your cluster. This
     name can be the same <var translate="no">AKS_CLUSTER_NAME</var> you used
     in step 1.
     The <var translate="no">CLUSTER_NAME</var> must be compliant with the
     [RFC 1123 Label Names standard](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
   - <var translate="no">GOOGLE_CLOUD_REGION</var>: the Google Cloud region to administer your cluster from
   - <var translate="no">PROJECT_NUMBER</var>: the Fleet host project to register the cluster with
   - <var translate="no">PLATFORM_VERSION</var>: the platform version to use for the cluster
   - <var translate="no">KUBECONFIG_CONTEXT</var>: the context in the kubeconfig for accessing the AKS cluster
   - <var translate="no">KUBECONFIG_PATH</var>: the path to your kubeconfig

   ### Public OIDC issuer

   1. Retrieve your cluster's OIDC issuer URL with the following command:

           az aks show -n <var translate="no">CLUSTER_NAME</var> \
               -g <var translate="no">RESOURCE_GROUP</var> \
               --query "oidcIssuerProfile.issuerUrl" -otsv

      Replace <var translate="no">RESOURCE_GROUP</var> with the AKS resource
      group your cluster belongs to.

      The output of this command is the URL of your OIDC issuer. Save this
      value for later use.

   2. Run this command to extract your cluster's kubeconfig context and
      store it in the `KUBECONFIG_CONTEXT` environment variable:

           KUBECONFIG_CONTEXT=$(kubectl config current-context)

   3. Use the
      [`gcloud container attached clusters register` command](/sdk/gcloud/reference/container/attached/clusters/register)
      to register the cluster:

           gcloud container attached clusters register <var translate="no">CLUSTER_NAME</var> \
               --location=<var translate="no">GOOGLE_CLOUD_REGION</var> \
               --fleet-project=<var translate="no">PROJECT_NUMBER</var> \
               --platform-version=<var translate="no">PLATFORM_VERSION</var> \
               --distribution=aks \
               --issuer-url=<var translate="no">ISSUER_URL</var> \
               --context=<var translate="no">KUBECONFIG_CONTEXT</var> \
               --kubeconfig=<var translate="no">KUBECONFIG_PATH</var>

      Replace the following:
      - <var translate="no">CLUSTER_NAME</var>: the name of your cluster. This
        name can be the same <var translate="no">AKS_CLUSTER_NAME</var> you
        used in step 1.
        The <var translate="no">CLUSTER_NAME</var> must be compliant with the
        [RFC 1123 Label Names standard](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
      - <var translate="no">GOOGLE_CLOUD_REGION</var>: the Google Cloud region to administer your cluster from
      - <var translate="no">PROJECT_NUMBER</var>: the Fleet host project to register the cluster with
      - <var translate="no">PLATFORM_VERSION</var>: the GKE attached clusters version to use for the cluster
      - <var translate="no">ISSUER_URL</var>: the issuer URL retrieved earlier
      - <var translate="no">KUBECONFIG_CONTEXT</var>: the context in the kubeconfig for accessing your cluster, as extracted earlier
      - <var translate="no">KUBECONFIG_PATH</var>: the path to your kubeconfig

   | **Note:** If attaching your cluster fails, the system automatically rolls back any changes made to Google Cloud resources related to the cluster, such as the workload identity pool. This means the connection between your cluster and GKE attached clusters isn't established, but your AKS cluster itself remains unaffected. You can try to attach the cluster again after fixing the issue that caused the failure.

Authorize Cloud Logging / Cloud Monitoring
------------------------------------------

| **Note:** Starting with GKE Enterprise version 1.28, manually binding a policy to authorize the `gke-system/gke-telemetry-agent` service account for log and metric collection is no longer necessary. The required permissions are now automatically granted to this service account, so you can disregard this section.

For GKE attached clusters to create and upload system logs and metrics to
Google Cloud, it must be authorized.

To authorize the Kubernetes workload identity `gke-system/gke-telemetry-agent`
to write logs to Google Cloud Logging and metrics to Google Cloud Monitoring,
run this command:

    gcloud projects add-iam-policy-binding <var translate="no">GOOGLE_PROJECT_ID</var> \
        --member="serviceAccount:<var translate="no">GOOGLE_PROJECT_ID</var>.svc.id.goog[gke-system/gke-telemetry-agent]" \
        --role=roles/gkemulticloud.telemetryWriter

Replace <var translate="no">GOOGLE_PROJECT_ID</var> with the cluster's
Google Cloud project ID.

This IAM binding grants all clusters in the Google Cloud project access to
upload logs and metrics. You only need to run it once, after creating the
first cluster in the project.

Adding this IAM binding will fail unless at least one cluster has been created
in your Google Cloud project, because the workload identity pool it refers to
(<var translate="no">GOOGLE_PROJECT_ID</var>`.svc.id.goog`) is not provisioned
until cluster creation.
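The platform version rule described under "Platform version numbering" can be sketched as a small shell check. This helper is illustrative only - it is not part of the gcloud CLI, the function name is hypothetical, and it assumes both version strings begin with `MAJOR.MINOR` (as in `1.22.3` and `1.21.5-gke.1`):

```shell
#!/bin/sh
# Illustrative helper (not a gcloud command): returns success if a
# GKE attached clusters platform version is compatible with a cluster's
# Kubernetes version, i.e. its minor version is the same as, or one
# level below, the Kubernetes minor version, with matching major version.
is_compatible_platform_version() {
  k8s_version="$1"       # e.g. "1.22.3"
  platform_version="$2"  # e.g. "1.21.5-gke.1"
  k8s_major=$(echo "$k8s_version" | cut -d. -f1)
  k8s_minor=$(echo "$k8s_version" | cut -d. -f2)
  plat_major=$(echo "$platform_version" | cut -d. -f1)
  plat_minor=$(echo "$platform_version" | cut -d. -f2)
  # Major versions must match exactly.
  [ "$k8s_major" -eq "$plat_major" ] || return 1
  # Minor version: same level, or one level below the Kubernetes minor.
  [ "$plat_minor" -eq "$k8s_minor" ] || [ "$plat_minor" -eq $((k8s_minor - 1)) ]
}

# One minor version below the cluster's Kubernetes version is allowed:
is_compatible_platform_version "1.22.3" "1.21.5-gke.1" && echo "1.21.5-gke.1 is compatible"
# Two minor versions below is not:
is_compatible_platform_version "1.22.3" "1.20.1-gke.2" || echo "1.20.1-gke.2 is not compatible"
```

In practice, compare the versions reported by `gcloud container attached get-server-config` against your cluster's Kubernetes version rather than hard-coding them.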