Creating Clusters on Huawei DCS
This document provides instructions for creating Kubernetes clusters on the Huawei DCS platform. Clusters can always be created by applying YAML manifests. If Fleet Essentials is installed and Alauda Container Platform DCS Infrastructure Provider is 1.0.13 or later, you can also create clusters through the web UI. If the workflow relies on pool-managed persistent disks, use DCS provider v1.0.16 or later. In v1.0.16, the persistentDisk declaration on DCSIpHostnamePool is available through YAML only and is not exposed in the web UI.
The web UI provides a guided workflow with validation, while YAML offers more automation flexibility.
TOC
- Prerequisites
  - 1. Infrastructure Resources
  - 2. Required Plugin Installation
  - 3. Virtual Machine Template Preparation
  - 4. Network Connectivity
  - 5. LoadBalancer Configuration
  - 6. Public Registry Configuration
- Using the Web UI
  - Creation Workflow
  - Step 1: Basic Info
  - Step 2: Control Plane Node Pool
  - Step 3: Worker Node Pools
  - Step 4: Networking
  - Step 5: Review
- Using YAML
  - Cluster Creation Workflow
  - Configuration Workflow
  - Network Planning and Load Balancer
  - Configure KubeadmControlPlane
  - Configure DCSCluster
  - Configure Cluster
  - Deploying Nodes
- Cluster Verification
  - Using the Console
  - Using kubectl
  - Expected Results
- Appendix
  - Complete KubeadmControlPlane Configuration
- Next Steps

Prerequisites
Before creating clusters, ensure all of the following prerequisites are met:
1. Infrastructure Resources
Configure the following infrastructure resources before creating a cluster:
- Cloud Credential - DCS platform access information
- IP Pool - Network configuration for cluster nodes and any IP-slot persistent disks such as /var/cpaas
- Machine Template - VM specifications for control plane and worker nodes, excluding pool-managed persistent disks
See Infrastructure Resources for Huawei DCS for detailed configuration instructions.
2. Required Plugin Installation
Install the following plugins on the global cluster:
- Alauda Container Platform Kubeadm Provider
- Alauda Container Platform DCS Infrastructure Provider
For detailed installation instructions, refer to the Installation Guide.
3. Virtual Machine Template Preparation
For Kubernetes installation, you must:
- Upload the MicroOS image to the DCS platform
- Create a virtual machine template based on this image
- Ensure the template includes all necessary Kubernetes components
- Use DCS VM templates 4.2.1 or later if you plan to use persistent disks, because safe shutdown and disk detach depend on guest tools
- Use one-by-one replacement for any cluster that will rely on pool-managed persistent disks. Keep maxSurge: 0 on the control plane and on worker node pools.
For details on the Kubernetes components included in each VM image, see OS Support Matrix.
4. Network Connectivity
Ensure that all nodes in the global cluster can access the DCS platform on:
- Port 7443 (DCS API)
- Port 8443 (DCS Web Console)
Requirement: Connectivity to both ports is mandatory for cluster creation and management.
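A quick way to confirm reachability from a global-cluster node is a TCP probe against both ports; `<dcs-host>` below is a placeholder for your DCS platform address:

```shell
# Verify TCP connectivity to the DCS platform from a global-cluster node
nc -vz <dcs-host> 7443   # DCS API
nc -vz <dcs-host> 8443   # DCS Web Console
```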
5. LoadBalancer Configuration
Configure a LoadBalancer for the Kubernetes API Server before creating the cluster. The LoadBalancer distributes API server traffic across control plane nodes to ensure high availability.
6. Public Registry Configuration
Configure the public registry credentials. This includes:
- Registry repository address configuration
- Proper authentication credentials setup
Using the Web UI
Version requirement: This workflow requires Fleet Essentials and Alauda Container Platform DCS Infrastructure Provider 1.0.13 or later. If the provider version is earlier than 1.0.13, use YAML manifests. If you use pool-managed persistent disks, use DCS provider v1.0.16 or later. In v1.0.16, configure DCSIpHostnamePool.spec.pool[].persistentDisk through YAML because the web UI does not expose that field.
If the new cluster will rely on pool-managed persistent disks, create or update the backing DCSIpHostnamePool with YAML and then use the web UI for the rest of the cluster workflow.
Creation Workflow
The cluster creation follows a 5-step wizard:
Navigation: Clusters → Clusters → Create Cluster → Select Huawei DCS
Step 1: Basic Info
Prerequisites Check:
Before creating a cluster, ensure:
- DCS VM Templates exist in the DCS platform, and the MicroOS version matches the Kubernetes version
- A LoadBalancer for the Kubernetes API Server has been set up
Version Constraint: Only the latest Kubernetes version supported by the platform can be created.
Step 2: Control Plane Node Pool
The control plane node pool is fixed at 3 replicas for high availability.
Validation: The associated IP Pool must have sufficient available IP addresses (≥ 3).
Step 3: Worker Node Pools
You can add multiple worker node pools. Each pool has the following configuration:
Validation Rules:
- Pool names must be unique within the cluster
- IP Pool must have sufficient available IP addresses (≥ Replicas)
- maxSurge and maxUnavailable must satisfy the constraint: if maxSurge = 0, then maxUnavailable > 0
- If the cluster will rely on pool-managed persistent disks, keep maxSurge = 0 so nodes are replaced one by one during future upgrades
Tip: Prefix the pool name with the cluster name followed by a hyphen (e.g., mycluster-worker-1) to avoid naming conflicts across different clusters.
Step 4: Networking
Validation: Pods CIDR and Services CIDR must not overlap.
Step 5: Review
Review all configuration settings before creating the cluster:
Basic Info:
- Name, Display Name, Infrastructure Credential
- Distribution Version, Kubernetes Version
- Cluster API Address
Control Plane Node Pool:
- Machine Template with VM Template Name, OS Version, Kubernetes Version
- CPU, Memory, Replicas, SSH Keys
Worker Node Pools (list view):
- Pool Name, Machine Template, Replicas
- Max Surge, Max Unavailable, SSH Keys
If the cluster will rely on pool-managed persistent disks, keep Max Surge set to 0 for worker node pools.
Networking:
- Pods CIDR, Services CIDR, Join CIDR
Click Create to start the cluster creation process.
Using YAML
Cluster Creation Workflow
When using YAML, you create Cluster API resources in the global cluster to provision infrastructure and bootstrap a functional Kubernetes cluster.
Important Namespace Requirement
To ensure proper integration as business clusters, all resources must be deployed in the cpaas-system namespace. Deploying resources in other namespaces may result in integration issues.
Configuration Workflow
Follow these steps in order:
- Configure KubeadmControlPlane
- Configure DCSCluster
- Create the Cluster resource
Note: Infrastructure resources (Secret, DCSIpHostnamePool, DCSMachineTemplate) should be configured separately. See Infrastructure Resources for Huawei DCS for instructions.
If you need any disk to survive rolling replacement, declare it in the matching DCSIpHostnamePool.spec.pool[].persistentDisk entry. This includes the platform-required /var/cpaas disk.
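As an illustrative sketch only: the `spec.pool[].persistentDisk` path comes from this document, but the surrounding field names (`apiVersion`, `ip`, `hostname`, and the shape of each persistentDisk entry) are assumptions; consult the installed provider's CRD schema for the authoritative structure:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # assumed group/version for this vendor CRD
kind: DCSIpHostnamePool
metadata:
  name: <pool-name>
  namespace: cpaas-system
spec:
  pool:
    - ip: <node-ip>                  # assumed per-slot fields
      hostname: <node-hostname>
      persistentDisk:                # disks that must survive node replacement
        - mountPath: /var/cpaas      # platform-required persistent disk
```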
Network Planning and Load Balancer
Before creating control plane resources, plan the network architecture and deploy a load balancer for high availability.
Requirements:
- Network segmentation: Plan IP address ranges for control plane nodes
- Load balancer: Deploy and configure access to the API server
- API server address: Prepare a stable VIP or load balancer address for the Kubernetes API Server
- Connectivity: Ensure network connectivity between all components
Configure KubeadmControlPlane
The KubeadmControlPlane resource defines the control plane configuration including Kubernetes version, node specifications, and bootstrap settings.
Full Configuration Reference
The example below truncates long configuration files for readability. For the complete configuration (including default audit policies, admission controls, and file contents), refer to the Complete KubeadmControlPlane Configuration in the Appendix.
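As a heavily truncated sketch of the resource's shape (angle-bracket values are placeholders, the `kubeadmConfigSpec` details are elided, and the DCSMachineTemplate `apiVersion` is an assumption; the Appendix has the complete configuration):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: <cluster-name>-control-plane
  namespace: cpaas-system              # all resources must live in cpaas-system
spec:
  replicas: 3                          # control plane is fixed at 3 replicas
  version: <kubernetes-version>        # must match the VM template; see OS Support Matrix
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DCSMachineTemplate
      name: <control-plane-machine-template>
  kubeadmConfigSpec:
    clusterConfiguration: {}
    # audit policy, admission configuration, and files entries elided for brevity
```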
Parameter Descriptions:
For component versions (e.g., <dns-image-tag>, <etcd-image-tag>), refer to OS Support Matrix.
Configure DCSCluster
DCSCluster is the infrastructure cluster declaration that references the load balancer and DCS platform credentials.
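As an assumed sketch: `DCSCluster` is a vendor CRD, so apart from the kind itself, the field names below (`controlPlaneEndpoint`, `identityRef`) follow common Cluster API infrastructure-provider conventions and may differ from the installed CRD schema:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # assumed group/version
kind: DCSCluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  controlPlaneEndpoint:
    host: <lb-address>          # stable VIP or LoadBalancer address prepared earlier
    port: 6443
  identityRef:                  # assumed field: reference to the DCS credential Secret
    name: <dcs-credential-secret>
```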
Parameter Descriptions:
Configure Cluster
The Cluster resource declares the cluster and references the control plane and infrastructure resources.
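A minimal sketch, following the standard Cluster API `Cluster` shape (the DCSCluster `apiVersion` is an assumption; CIDR values are placeholders):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["<pods-cidr>"]       # must not overlap with the Services CIDR
    services:
      cidrBlocks: ["<services-cidr>"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: <cluster-name>-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DCSCluster
    name: <cluster-name>
```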
Parameter Descriptions:
Deploying Nodes
Refer to Managing Nodes on Huawei DCS for instructions on deploying worker nodes.
Cluster Verification
After deploying all cluster resources, verify that the cluster has been created successfully and is operational.
Using the Console
- Navigate to Clusters → Clusters
- Locate your newly created cluster in the cluster list
- Verify that the cluster status shows as Running
- Check that all control plane and worker nodes are Ready
Using kubectl
Alternatively, verify the cluster using kubectl commands:
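For example, run these against the global cluster, where the Cluster API resources live (`<cluster-name>` and `<new-cluster-kubeconfig>` are placeholders):

```shell
# Cluster should report Provisioned (or Running)
kubectl get cluster -n cpaas-system <cluster-name>

# Control plane and worker machines should all be Running
kubectl get machines -n cpaas-system -l cluster.x-k8s.io/cluster-name=<cluster-name>

# Control plane should show the expected ready/initialized replica counts
kubectl get kubeadmcontrolplane -n cpaas-system

# Against the new cluster itself: all nodes should be Ready
kubectl --kubeconfig <new-cluster-kubeconfig> get nodes
```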
Expected Results
A successfully created cluster should show:
- Cluster status: Running or Provisioned
- All control plane machines: Running
- All worker nodes (if deployed): Running
- Kubernetes nodes: Ready
- Cluster Module Status: Completed
Appendix
Complete KubeadmControlPlane Configuration
Below is the complete KubeadmControlPlane configuration, including all default audit policies, admission controls, and file contents.
Alternative: reference a centrally managed Secret instead of inline content
The Alauda Container Platform DCS Infrastructure Provider plugin ships a Secret named dcs-kubernetes-<kubernetes-major-minor>-files in the cpaas-system namespace (for example, dcs-kubernetes-1.33-files for Kubernetes 1.33). It contains the canonical content of psa-config.yaml, control-plane-kubelet-patch.json, and audit-policy.yaml, and is updated together with each release.
When that Secret is present, you can replace the three inline files entries with contentFrom.secret references. Inline and Secret-referenced forms are functionally equivalent; using the Secret keeps file content aligned with the installed plugin version and avoids manual updates on cluster upgrades.
encryption-provider.conf is not provided by the Secret. You can either keep it inline as shown above (and supply your own <base64-encoded-secret>), or omit the inline file entirely and rely on the version that the DCS VM template image already bakes in — both are valid; the latter is simpler when the VM template's default key is acceptable for your environment.
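For example, a Secret-referenced `files` entry in `kubeadmConfigSpec` uses `contentFrom.secret` in place of inline `content`; the `path` and `key` values below are illustrative assumptions (the key names presumably match the file names listed above):

```yaml
# In the kubeadmConfigSpec of the KubeadmControlPlane:
files:
  - path: /etc/kubernetes/audit-policy.yaml   # assumed target path
    contentFrom:
      secret:
        name: dcs-kubernetes-1.33-files       # plugin-shipped Secret (Kubernetes 1.33 example)
        key: audit-policy.yaml                # assumed key matching the file name
```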
Next Steps
After creating a cluster: