Author: Qiao Zhongpei (Yiling)

Background

As Internet of Everything scenarios become more common, the computing power of edge devices keeps growing. How to leverage the strengths of cloud computing to serve complex and diverse edge application scenarios, and extend cloud-native technology to the edge, has become a new technical challenge. "Cloud-edge collaboration" is gradually becoming a new technology focus. This article focuses on two CNCF open source projects, KubeVela and OpenYurt, and walks through a cloud-edge collaboration solution with a real Helm application delivery scenario.

OpenYurt focuses on extending Kubernetes to edge computing in a non-intrusive manner. Building on the container orchestration and scheduling capabilities of native Kubernetes, OpenYurt brings edge computing power into the Kubernetes infrastructure for unified management, and provides capabilities such as edge autonomy, efficient operation and maintenance channels, edge unit management, edge traffic topology, secure containers, edge Serverless/FaaS, and heterogeneous resource support. In short, OpenYurt builds a unified infrastructure for cloud-edge collaboration in a Kubernetes-native way.

KubeVela grew out of the OAM (Open Application Model) and focuses on helping enterprises build unified application delivery and management capabilities. It shields business developers from the complexity of the underlying infrastructure, offers flexible extensibility, and provides out-of-the-box capabilities such as microservice container management, cloud resource management, versioning and canary release, scaling, observability, resource dependency orchestration and data delivery, multi-cluster delivery, CI integration, and GitOps. It maximizes developers' efficiency in self-service application management while meeting the extensibility requirements of a platform's long-term evolution.

What problems can the combination of OpenYurt and KubeVela solve?

As mentioned above, OpenYurt handles the on-boarding of edge nodes, allowing users to manage edge nodes by operating native Kubernetes. Edge nodes usually represent computing resources closer to end users, such as virtual machines or physical servers in a nearby data center. After joining through OpenYurt, these edge nodes become Nodes that can be used in Kubernetes. OpenYurt uses node pools (NodePool) to describe a group of edge nodes in the same region. Beyond basic resource management, we usually have the following core requirements for orchestrating and deploying applications to different node pools in a cluster:

1. Unified configuration: Manually modifying every resource to be distributed requires a lot of manual intervention and is highly prone to errors and omissions. We need a unified way to configure parameters, which not only makes batch management operations easier but can also be hooked into security and auditing to meet enterprise risk control and compliance needs.

2. Differential deployment: Workloads deployed to different node pools share most of their properties, yet there are always some configuration differences. The key is how to set the parameters related to node selection, such as a NodeSelector that instructs the Kubernetes scheduler to place the workload into a particular node pool (see the sketch after this list).

3. Scalability: The cloud-native ecosystem is flourishing, and both workload types and operational capabilities keep growing. To meet business needs more flexibly, the overall application architecture needs to be extensible so that we can fully benefit from the cloud-native ecosystem.
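
As a minimal sketch of the node-selection parameters mentioned in point 2, a plain Deployment could be pinned to the beijing node pool roughly as follows. The nodeSelector label and the toleration correspond to the node pool and taint created later in this article; the workload itself (demo-app, nginx image) is purely illustrative, and KubeVela will generate this kind of configuration for us automatically in the rest of the walkthrough.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical workload, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      nodeSelector:
        apps.openyurt.io/nodepool: beijing    # schedule only onto nodes in the beijing node pool
      tolerations:
        - key: apps.openyurt.io/example       # tolerate the taint placed on the pool's nodes
          operator: Equal
          value: beijing
          effect: NoSchedule
      containers:
        - name: demo
          image: nginx:1.25
EOF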

KubeVela and OpenYurt complement each other well at the application layer to meet the above three core requirements. The following hands-on walkthrough demonstrates these capabilities.

Deploy applications to the edge

We will use the Ingress controller as an example to show how to deploy applications to the edge with KubeVela. In this case, we want to deploy the Nginx Ingress Controller to multiple node pools, so that the services provided by a given node pool are accessed through its edge Ingress, and an Ingress is only handled by the Ingress controller of the node pool it resides in.

The cluster in the diagram has two node pools, Beijing and Shanghai, whose networks are not interconnected. We want to deploy an Nginx Ingress Controller in each node pool to act as the traffic entry point of that pool. A client near Beijing can reach the services provided by the Beijing node pool through the Beijing node pool's Ingress Controller, without touching the services provided by the Shanghai node pool.

[1.png: Beijing and Shanghai node pools, each running its own Nginx Ingress Controller as the traffic entry point for nearby clients]

Demo environment

We will simulate an edge scenario using a Kubernetes cluster. The cluster has 3 nodes whose roles are:

  • Node 1: master node (cloud node)
  • Node 2: worker node, edge node, in the beijing node pool
  • Node 3: worker node, edge node, in the shanghai node pool

Preparation

1. Install YurtAppManager

YurtAppManager is the core component of OpenYurt. It provides node pool CRDs and controllers. There are other components in OpenYurt, but for this tutorial we only need YurtAppManager.

git clone https://github.com/openyurtio/yurt-app-manager
cd yurt-app-manager && helm install yurt-app-manager -n kube-system ./charts/yurt-app-manager/
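
Before moving on, you may want to confirm that YurtAppManager is up. A quick sanity check (pod names depend on the chart version, and the CRD name below assumes the standard NodePool CRD shipped by yurt-app-manager):

kubectl get pods -n kube-system | grep yurt-app-manager
kubectl get crd nodepools.apps.openyurt.io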

2. Install KubeVela and enable the FluxCD addon.

Install the Vela command line tool and install KubeVela in the cluster.

curl -fsSl https://kubevela.net/script/install.sh | bash
vela install

In this case, we use a Helm-type component to install the Nginx Ingress Controller so that we can reuse the mature Helm Chart provided by the community. In KubeVela's microkernel design, Helm-type components are provided by the FluxCD addon, which is enabled below [1].

vela addon enable fluxcd
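
You can verify that KubeVela and the addon are ready with the following commands (the exact output format varies between versions):

vela version
vela addon list | grep fluxcd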

3. Prepare the node pools

Create two node pools: a Beijing node pool and a Shanghai node pool. In real edge scenarios, dividing node pools by region is a common pattern. Nodes in different pools often have clear isolation attributes such as disconnected networks, unshared resources, resource heterogeneity, and application independence; this is also where the node pool concept comes from. In OpenYurt, features such as node pools and service topology help users handle these problems. In this example we use node pools to describe and manage nodes.

kubectl apply -f - <<EOF
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: beijing
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-beijing
  taints:
    - key: apps.openyurt.io/example
      value: beijing
      effect: NoSchedule
---
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: shanghai
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-shanghai
  taints:
    - key: apps.openyurt.io/example
      value: shanghai
      effect: NoSchedule
EOF

Add the edge nodes to their respective node pools. For how to join edge nodes to the cluster, refer to the OpenYurt documentation on joining nodes.

kubectl label node <node2> apps.openyurt.io/desired-nodepool=beijing
kubectl label node <node3> apps.openyurt.io/desired-nodepool=shanghai
kubectl get nodepool

Expected output:

NAME       TYPE   READYNODES   NOTREADYNODES   AGE
beijing    Edge   1            0               6m2s
shanghai   Edge   1            0               6m1s
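
You can also confirm that each edge node has been admitted into its pool by checking the apps.openyurt.io/nodepool label that the node pool controller sets on the node (node names will of course differ in your cluster):

kubectl get nodes -L apps.openyurt.io/nodepool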

Deploy edge applications in batches

Before diving into the details, let's look at how KubeVela describes an application deployed to the edge. With the following Application, we can deploy multiple Nginx Ingress Controllers to their respective edge node pools. Using the same Application to configure the Nginx Ingress instances in a unified way eliminates duplication, reduces the management burden, and makes subsequent common operations such as releasing and maintaining the components in the cluster easier.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: edge-ingress
spec:
  components:
    - name: ingress-nginx
      type: helm
      properties:
        chart: ingress-nginx
        url: https://kubernetes.github.io/ingress-nginx
        repoType: helm
        version: 4.3.0
        values:
          controller:
            service:
              type: NodePort
            admissionWebhooks:
              enabled: false
      traits:
        - type: edge-nginx
  policies:
    - name: replication
      type: replication
      properties:
        selector: [ "ingress-nginx" ]
        keys: [ "beijing","shanghai" ]
  workflow:
    steps:
      - name: deploy
        type: deploy
        properties:
          policies: ["replication"]

A KubeVela Application has 3 parts:

  1. A helm-type component. It describes the Helm chart (and its version) that we want to install into the cluster. In addition, we attach an operational trait, edge-nginx, to this component. We will show the details of this trait later; for now, you can think of it as a patch that carries the properties specific to each node pool.

  2. A replication (component splitting) policy. It describes how to replicate components to different node pools. The selector field selects the components that need to be replicated, and the keys field turns one component into two components with different keys ("beijing" and "shanghai").

  3. A deploy workflow step. It describes how to deploy the application; it specifies the replication policy so that the component splitting is carried out during deployment.

Note:

  1. For this application to work properly, the edge-nginx trait definition must first be applied to the cluster (its CUE definition appears in the "Differentiated deployment" section below; a sketch of applying it follows this note).
  2. deploy is a built-in KubeVela workflow step. In multi-cluster scenarios [2] it can also be used together with the override and topology policies.
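
As a minimal sketch of step 1: save the edge-nginx trait definition shown in the "Differentiated deployment" section below to a file (the file name edge-nginx.cue is just an example) and apply it with the vela CLI:

vela def apply edge-nginx.cue
vela def list | grep edge-nginx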

Now, we can deploy the application to the cluster.

vela up -f app.yaml

Check application status and resources created by KubeVela.

vela status edge-ingress --tree --detail

Expected output:

CLUSTER       NAMESPACE     RESOURCE                           STATUS    APPLY_TIME          DETAIL
local  ─── default─┬─ HelmRelease/ingress-nginx-beijing  updated   2022-11-02 12:00:24 Ready: True  Status: Release reconciliation succeeded  Age: 153m
                   ├─ HelmRelease/ingress-nginx-shanghai updated   2022-11-02 12:00:24 Ready: True  Status: Release reconciliation succeeded  Age: 153m
                   └─ HelmRepository/ingress-nginx       updated   2022-11-02 12:00:24 URL: https://kubernetes.github.io/ingress-nginx  Age: 153m
                                                                                         Ready: True
                                                                                         Status: stored artifact for revision '7bce426c58aee962d479ca84e5c
                                                                                         fc6931c19c8995e31638668cb958d4a3486c2'

The Vela CLI not only aggregates and displays the health status of applications at a high level, but can also drill down into the underlying workloads when needed, providing rich observation and debugging capabilities. For example, you can print application logs with vela logs, forward ports of the deployed application to your local machine with vela port-forward, or run shell commands inside the edge containers with vela exec to troubleshoot problems (see the sketch below).
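
For example (a sketch; component and port selection are interactive or configurable depending on your KubeVela version):

# Print the logs of the application's workloads
vela logs edge-ingress

# Forward a port of the deployed application to your local machine
vela port-forward edge-ingress

# Open a shell inside one of the application's containers to troubleshoot
vela exec edge-ingress -- /bin/sh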

For a more intuitive view of the application, KubeVela also officially provides a web console addon, VelaUX. After enabling the VelaUX addon [3], you can view a more detailed resource topology.

vela addon enable velaux

Visit VelaUX’s resource topology page.

[2.png: VelaUX resource topology of the edge-ingress application]

As you can see, KubeVela creates two HelmRelease resources that deliver the Nginx Ingress Controller Helm Chart to the two node pools. The HelmRelease resources are processed by the FluxCD addon mentioned above, which installs Nginx Ingress separately in each of the two node pools. Run the following commands to check whether the Ingress Controller Pod has been created in the Beijing node pool; the Shanghai node pool can be checked in the same way.

$ kubectl get node -l  apps.openyurt.io/nodepool=beijing                               
NAME                      STATUS   ROLES    AGE   VERSION
iz0xi0r2pe51he3z8pz1ekz   Ready    <none>   23h   v1.24.7+k3s1

$ kubectl get pod ingress-nginx-beijing-controller-c4c7cbf64-xthlp -oyaml|grep iz0xi0r2pe51he3z8pz1ekz
  nodeName: iz0xi0r2pe51he3z8pz1ekz
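
To check end to end that an Ingress is only handled by the controller of its own node pool, you can create an Ingress that references the Beijing controller's ingressClass, which the edge-nginx trait below names nginx-beijing. This sketch assumes a backend Service named demo-svc serving port 80 already exists in the Beijing node pool:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx-beijing     # only the controller in the beijing node pool handles this Ingress
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc        # hypothetical backend Service
                port:
                  number: 80
EOF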

Differentiated deployment

How does KubeVela achieve differentiated deployment of the same component during application delivery? Let's take a closer look at the Trait (operational trait) and Policy (application policy) behind the application. As mentioned above, we used KubeVela's built-in component-splitting (replication) policy in the workflow and attached a custom edge-nginx trait to the ingress-nginx component.

  • The component-splitting policy splits the component into two components with different context.replicaKey values.

  • The edge-nginx trait uses the different context.replicaKey values to deliver Helm Charts with different configuration values to the cluster, so that the two Nginx Ingress Controllers run in different node pools and watch Ingress resources with different ingressClasses. Concretely, it patches the Values of the Helm Chart, modifying the fields related to node selection, affinity, and ingressClass.

  • When patching different fields, different patch strategies [4] (PatchStrategy) are used. For example, the retainKeys strategy overrides the original value, while the jsonMergePatch strategy merges with the original value.

"edge-nginx": {
  type: "trait"
  annotations: {}
  attributes: {
    podDisruptive: true
    appliesToWorkloads: ["helm"]
  }
}
template: {
  patch: {
    // +patchStrategy=retainKeys
    metadata: {
      name: "(context.name)-(context.replicaKey)"
    }
    // +patchStrategy=jsonMergePatch
    spec: values: {
      ingressClassByName: true
      controller: {
        ingressClassResource: {
          name:            "nginx-" + context.replicaKey
          controllerValue: "openyurt.io/" + context.replicaKey
        }
        _selector
      }
      defaultBackend: {
        _selector
      }
    }
  }
  _selector: {
    tolerations: [
      {
        key:      "apps.openyurt.io/example"
        operator: "Equal"
        value:    context.replicaKey
      },
    ]
    nodeSelector: {
      "apps.openyurt.io/nodepool": context.replicaKey
    }
  }
  parameter: null
}

Bring more types of applications to the edge

As you can see, to deploy Nginx Ingress to different node pools we only needed a custom trait of about forty lines, making full use of KubeVela's built-in capabilities. As the cloud-native ecosystem flourishes and cloud-edge collaboration advances, more applications are likely to move toward edge deployment. When a new scenario requires a new application to be deployed to the edge node pools, there is no need to worry: with KubeVela, it is easy to extend a new edge application deployment trait following this pattern, without writing any code.

For example, suppose we also want to deploy an implementation of the Gateway API [5], a recent hotspot of the Kubernetes community, to the edge, using the Gateway API to improve the expressiveness and extensibility of the services exposed by edge node pools and to provide role-based network APIs on the edge nodes. For this scenario, we can easily complete the deployment based on the extension approach described above; we only need to define a new trait as follows.

"gateway-nginx": {
  type: "trait"
  annotations: {}
  attributes: {
    podDisruptive: true
    appliesToWorkloads: ["helm"]
  }
}

template: {
  patch: {
    // +patchStrategy=retainKeys
    metadata: {
      name: "(context.name)-(context.replicaKey)"
    }
    // +patchStrategy=jsonMergePatch
    spec: values: {
      _selector
      fullnameOverride: "nginx-gateway-nginx-" + context.replicaKey
      gatewayClass: {
        name:           "nginx" + context.replicaKey
        controllerName: "k8s-gateway-nginx.nginx.org/nginx-gateway-nginx-controller-" + context.replicaKey
      }
    }
  }
  _selector: {
    tolerations: [
      {
        key:      "apps.openyurt.io/example"
        operator: "Equal"
        value:    context.replicaKey
      },
    ]
    nodeSelector: {
      "apps.openyurt.io/nodepool": context.replicaKey
    }
  }
  parameter: null
}

This trait is very similar to the one used to deploy Nginx Ingress above. It applies similar patches to the Values of the Nginx Gateway Chart, covering node selection, affinity, and resource names. The difference is that this trait specifies a gatewayClass instead of an ingressClass. The trait and application files for this case are available in the GitHub repository [6]. By customizing such a trait, we extend the cluster's ability to deploy a new type of application to the edge.
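
As a rough sketch of how this trait would be used, the Application looks much like the edge-ingress example above, with the gateway-nginx trait attached to a helm component. The chart name, repository URL, and version below are placeholders; refer to the linked repository [6] for the actual application file.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: edge-gateway
spec:
  components:
    - name: nginx-gateway
      type: helm
      properties:
        chart: <gateway-chart-name>       # placeholder: Helm chart of a Gateway API implementation
        url: <helm-repository-url>        # placeholder: its Helm repository
        repoType: helm
        version: <chart-version>          # placeholder
      traits:
        - type: gateway-nginx
  policies:
    - name: replication
      type: replication
      properties:
        selector: [ "nginx-gateway" ]
        keys: [ "beijing","shanghai" ]
  workflow:
    steps:
      - name: deploy
        type: deploy
        properties:
          policies: ["replication"]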

Even if we cannot predict all of the application deployment requirements that the development of edge computing will bring in the future, we can at least keep adapting to new scenarios through this simple extension method.

How KubeVela Solved the Edge Deployment Challenge

Let's review how KubeVela addresses the core requirements raised at the beginning of this article.

1. Unified configuration: A single component describes the common properties of the ingress-nginx Helm Chart to be deployed, such as the Helm repository, chart name, version, and other shared configuration.

2. Differential deployment: A user-defined trait describes the differences in the Helm configuration delivered to different node pools, and this trait can be reused when deploying the same Helm Chart to additional node pools.

3. Scalability: KubeVela can be extended programmatically for common workload types (such as Deployment/StatefulSet) or other packaging methods (such as Helm/Kustomize/...), so that a new application can be brought to the edge scenario with a small, declarative extension.

Thanks to its powerful application delivery and management capabilities, KubeVela not only solves application definition, delivery, operation and maintenance, and observability problems within a single cluster, but also natively supports application publishing and management in multi-cluster mode. There is currently no single fixed Kubernetes deployment pattern for edge computing scenarios; whether you run a single cluster plus edge node pools or multiple edge clusters, KubeVela can handle the application management tasks.

With OpenYurt and KubeVela working together, cloud and edge applications are deployed in a unified way, sharing the same abstractions, operations, and observability capabilities, and avoiding fragmented experiences across scenarios. Both cloud and edge applications can benefit from KubeVela's cloud-native ecosystem best practices, which are continuously integrated in the form of addons. In the future, the KubeVela community will continue to enrich its out-of-the-box system addons and keep delivering better, easier-to-use application delivery and management capabilities.

To learn more about application deployment and management capabilities, read the KubeVela official documentation [7]. To follow the latest developments, you are welcome to join the KubeVela community [8] (DingTalk group 23310022) and participate in the discussion! If you are interested in OpenYurt, you are also welcome to join the OpenYurt community (DingTalk group 31993519).

You can also learn more about KubeVela and the details of the OAM project through the following materials:

  • Project repository: https://github.com/kubevela/kubevela (Star/Watch/Fork welcome!)
  • Project homepage and documentation: kubevela.io. Since version 1.1, documentation has been available in both Chinese and English, and developers are welcome to translate it into more languages.
  • Project DingTalk Group: 23310022; Slack: CNCF #kubevela Channel
  • Join the WeChat group: please first add the maintainer WeChat account below and mention that you would like to join the KubeVela user group:

[3.png: maintainer WeChat QR code]


Related Links

[1] FluxCD plugin

https://kubevela.net/zh/docs/reference/addons/fluxcd

[2] Multi-cluster scenario

https://kubevela.net/docs/case-studies/multi-cluster

[3] Enable the VelaUX plugin

https://kubevela.net/zh/docs/reference/addons/velaux

[4] Patch strategy

https://kubevela.net/zh/docs/platform-engineers/traits/patch-trait#patch-strategy

[5] Gateway APIs

https://gateway-api.sigs.k8s.io/

[6] GitHub repository

https://github.com/chivalryq/yurt-vela-example/tree/main/gateway-nginx

[7] KubeVela Official Documentation

https://kubevela.net/

[8] KubeVela Community

https://github.com/kubevela/community

