
Revamped ShardingSphere-On-Cloud: What’s New in Version 0.2.0 with CRD ComputeNode


Apache ShardingSphere-On-Cloud recently released version 0.2.0, which includes a new CRD ComputeNode for ShardingSphere Operator. This new feature enables users to define computing nodes fully within the ShardingSphere architecture.

Introduction to ComputeNode

In the classic architecture of Apache ShardingSphere, computing nodes, storage nodes, and governance nodes are the primary components.

The computing node refers to the ShardingSphere Proxy, which acts as the entry point for all data traffic and is responsible for data governance capabilities such as distribution and balancing.

The storage node refers to the underlying database that stores the actual data. The governance node is the environment that stores ShardingSphere metadata, such as sharding rules, encryption rules, and read-write splitting rules; governance components include ZooKeeper, Etcd, etc.

In version 0.1.x of ShardingSphere Operator, two CRDs, Proxy and ProxyServerConfig, were introduced to describe the deployment and configuration of ShardingSphere Proxy.

These components enable basic maintenance and deployment capabilities for ShardingSphere Proxy, which are sufficient for Proof of Concept (PoC) environments.

However, for the Operator to be useful in production environments, it must be able to manage various scenarios and problems. These scenarios include cross-version upgrades, smooth session shutdowns, horizontal elastic scaling with multiple metrics, location-aware traffic scheduling, configuration security, cluster-level high availability, and more.

To deliver these management capabilities, ShardingSphere-On-Cloud introduces a new group of objects that covers these functions. The first of these objects is ComputeNode.

Compared with Proxy and ProxyServerConfig, ComputeNode adds capabilities such as cross-version upgrades, horizontal elastic scaling, and configuration security. ComputeNode is still at the v1alpha1 stage and must be enabled through a feature gate.
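A rough sketch of how the gate might be switched on when installing the operator chart is shown below; the featureGates.computeNode key is an assumption rather than a documented option, so check the chart's values.yaml (and the operator's command-line flags) for the actual name:

helm install shardingsphere-on-cloud/shardingsphere-operator --version 0.2.0 --generate-name \
  --set featureGates.computeNode=true   # assumed key name, not confirmed by the release notes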

ComputeNode Practice

Quick Installation of ShardingSphere Operator

To quickly set up a ShardingSphere Proxy cluster using ComputeNode, execute the following Helm commands:

helm repo add shardingsphere-on-cloud https://charts.shardingsphere.io
helm install shardingsphere-on-cloud/shardingsphere-operator --version 0.2.0 --generate-name
The deployment status of the ShardingSphere Proxy cluster can be checked using kubectl get pod:
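For example, in the namespace where the chart was installed (Pod names vary per release):

kubectl get pod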

Now, a complete cluster managed by ShardingSphere Operator has been deployed.

Checking the ShardingSphere Proxy cluster status using kubectl get

The ShardingSphere Proxy cluster's status can be checked using kubectl get computenode. The ComputeNode status includes READYINSTANCES, PHASE, CLUSTER-IP, SERVICEPORTS, and AGE.

READYINSTANCES represents the number of ShardingSphere Proxy Pods in the Ready state, PHASE represents the current cluster status, CLUSTER-IP represents the ClusterIP of the current cluster Service, SERVICEPORTS represents the port list of the current cluster Service, and AGE represents the creation time of the current cluster.

kubectl get computenode
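For illustration only, the listing has one row per ComputeNode with the columns described above; the resource name and values shown here are hypothetical:

NAME   READYINSTANCES   PHASE   CLUSTER-IP     SERVICEPORTS   AGE
foo    1                Ready   10.96.88.136   3307           2m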

Quickly Scale the ShardingSphere Proxy Cluster Using kubectl scale

ComputeNode supports the Scale subresource, which lets you scale the cluster manually using the kubectl scale command with the --replicas parameter.
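For example, assuming a ComputeNode named foo (the name used in the customization example below), the following command scales it to three replicas:

kubectl scale computenode foo --replicas=3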


Customizing ComputeNode configuration

If the ComputeNode installed by the operator's default charts cannot meet your usage scenario, you can write a ComputeNode YAML file yourself and submit it to Kubernetes for deployment:

apiVersion: shardingsphere.apache.org/v1alpha1
kind: ComputeNode
metadata:
  labels:
    app: foo
  name: foo
spec:
  storageNodeConnector:
    type: mysql
    version: 5.1.47
  serverVersion: 5.3.1
  replicas: 1
  selector:
    matchLabels:
      app: foo
  portBindings:
    - name: server
      containerPort: 3307
      servicePort: 3307
      protocol: TCP
  serviceType: ClusterIP
  bootstrap:
    serverConfig:
      authority:
        privilege:
          type: ALL_PERMITTED
        users:
          - user: root%
            password: root
      mode:
        type: Cluster
        repository:
          type: ZooKeeper
          props:
            timeToLiveSeconds: "600"
            server-lists: shardingsphere-operator-zookeeper.default:2181
            retryIntervalMilliseconds: "500"
            operationTimeoutMilliseconds: "5000"
            namespace: governance_ds
            maxRetries: "3"
      props:
        proxy-frontend-database-protocol-type: MySQL

Save the above configuration as foo.yml and execute the following command to create it:

kubectl apply -f foo.yml
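To confirm that the resource was created and its Pods come up, you can query it by name and by the app=foo label taken from the example's selector:

kubectl get computenode foo
kubectl get pod -l app=foo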

The above example can be found directly in our GitHub repository.

Other Improvements

Other improvements in version 0.2.0 include support for rolling-upgrade parameters in the ShardingSphereProxy CRD's annotations, fixes for issues with readyNodes and Conditions in the ShardingSphereProxy Status field in certain scenarios, and more:

  • Introduced the scale subresource to ComputeNode to support kubectl scale #189
  • Separated the construction and update logic of ComputeNode and ShardingSphereProxy #182
  • Wrote NodePort back to ComputeNode definition #187
  • Fixed NullPointerException caused by non-MySQL configurations #179
  • Refactored Manager configuration logic and separated command line configuration #192
  • Fixed Docker build process in CI #173

Wrap Up

In conclusion, the new CRD ComputeNode for ShardingSphere Operator in version 0.2.0 provides various management capabilities that are essential in production environments.

With ComputeNode, users can define computing nodes fully within the ShardingSphere architecture and manage various scenarios and problems, including cross-version upgrades, smooth session shutdowns, horizontal elastic scaling, location-aware traffic scheduling, configuration security, and cluster-level high availability.

Community Contribution

This ShardingSphere-On-Cloud 0.2.0 release is the result of 22 merged PRs, made by 2 contributors. Thank you for your love & passion for open source!

GitHub IDs:

  • mlycore
  • xuanyuan300



Written by Apache ShardingSphere

Distributed SQL transaction & query engine for data sharding, scaling, encryption, and more - on any database. https://linktr.ee/ApacheShardingSphere
