diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5afd75bc..a0a1a3b4 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,10 +1,10 @@
 # Submitting your patch

-Thanks for taking the time to contribute to `clickhouse-operator`!
+Thanks for taking the time to contribute to `radondb-clickhouse-operator`!

 ## Intro

-`clickhouse-operator` contribution process is built around standard git _Pull Requests_.
+`radondb-clickhouse-operator` contribution process is built around standard git _Pull Requests_.

 ## How to make PR

@@ -19,9 +19,9 @@ In case you'd like to introduce several features, make several PRs, please.
 ## Sign Your Work

 Every PR has to be signed. The sign-off is a text line at the end of the commit's text description.
-Your signature certifies that you wrote the patch or otherwise have the right to contribute it to `clickhouse-operator`.
+Your signature certifies that you wrote the patch or otherwise have the right to contribute it to `radondb-clickhouse-operator`.

 Developer Certificate of Origin is available at [developercertificate.org](https://developercertificate.org/):

 ```text
 Version 1.1

@@ -72,7 +72,7 @@ If you set your `user.name` and `user.email` git configs, you can sign your commit automatically with `git commit -s`.

 Your `git log` information for your commit should look something like this:

-```
+```text
 Author: John Doe <john.doe@example.com>
 Date: Mon Jan 24 12:34:56 2020 +0200

diff --git a/README.md b/README.md
index edb0a820..a0cd36a1 100644
--- a/README.md
+++ b/README.md
@@ -1,87 +1,67 @@
-# ClickHouse Operator
+# ![LOGO](docs/_images/logo_radondb.png)

-ClickHouse Operator creates, configures and manages ClickHouse clusters running on Kubernetes.
+> English | [简体中文](README_zh.md)

-[![GitHub release](https://img.shields.io/github/v/release/altinity/clickhouse-operator?include_prereleases)](https://img.shields.io/github/v/release/altinity/clickhouse-operator?include_prereleases)
-[![CircleCI](https://circleci.com/gh/Altinity/clickhouse-operator.svg?style=svg)](https://circleci.com/gh/Altinity/clickhouse-operator)
-[![Docker Pulls](https://img.shields.io/docker/pulls/altinity/clickhouse-operator.svg)](https://hub.docker.com/r/altinity/clickhouse-operator)
-[![Go Report Card](https://goreportcard.com/badge/github.com/altinity/clickhouse-operator)](https://goreportcard.com/report/github.com/altinity/clickhouse-operator)
-[![Go version](https://img.shields.io/github/go-mod/go-version/altinity/clickhouse-operator)](https://img.shields.io/github/go-mod/go-version/altinity/clickhouse-operator)
-[![issues](https://img.shields.io/github/issues/altinity/clickhouse-operator.svg)](https://github.com/altinity/clickhouse-operator/issues)
-[![tags](https://img.shields.io/github/tag/altinity/clickhouse-operator.svg)](https://github.com/altinity/clickhouse-operator/tags)
+## What is RadonDB ClickHouse

-## Features
+ClickHouse is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).

-The ClickHouse Operator for Kubernetes currently provides the following:
+RadonDB ClickHouse is an open-source, cloud-native, high-availability cluster solution based on [ClickHouse](https://clickhouse.tech/) and [ClickHouse Operator](https://github.com/Altinity/clickhouse-operator).
+
+[RadonDB ClickHouse Operator](https://github.com/radondb/radondb-clickhouse-operator) makes it quick and easy to create ClickHouse clusters on Kubernetes.
+RadonDB ClickHouse Operator supports [Kubernetes 1.15.11+](https://kubernetes.io), [KubeSphere 3.1.x](https://kubesphere.com.cn), [Rancher](https://rancher.com/), [OpenShift](https://www.redhat.com/en) and other container platforms for deploying, configuring and managing RadonDB ClickHouse clusters.
+
+## Architecture
+
+![Architecture](docs/_images/arch.png)
+
+## Main Features
+
+The RadonDB ClickHouse Operator for Kubernetes currently provides the following:

-- Creates ClickHouse clusters based on Custom Resource [specification][chi_max_yaml] provided
-- Customized storage provisioning (VolumeClaim templates)
-- Customized pod templates
-- Customized service templates for endpoints
 - ClickHouse configuration and settings (including Zookeeper integration)
-- Flexible templating
 - ClickHouse cluster scaling including automatic schema propagation
 - ClickHouse version upgrades
 - Exporting ClickHouse metrics to Prometheus
+- Multiple customization and custom configuration templates available

-## Requirements
-
- * Kubernetes 1.15.11+
-
-## Documentation
-
-[Quick Start Guide][quick_start_guide]
-
-**Advanced setups**
- * [Detailed Operator Installation Instructions][detailed_installation_instructions]
 * [Operator Configuration][operator_configuration]
 * [Setup ClickHouse cluster with replication][replication_setup]
 * [Setting up Zookeeper][zookeeper_setup]
 * [Persistent Storage Configuration][storage_configuration]
 * [ClickHouse Installation Custom Resource specification][crd_explained]
-
-**Maintenance tasks**
- * [Add replication to an existing ClickHouse cluster][update_cluster_add_replication]
 * [Schema maintenance][schema_migration]
 * [Update ClickHouse version][update_clickhouse_version]
 * [Update Operator version][update_operator]
-
-**Monitoring**
- * [Setup Monitoring][monitoring_setup]
 * [Prometheus & clickhouse-operator integration][prometheus_setup]
 * [Grafana & Prometheus integration][grafana_setup]
-
-**How to contribute**
- * [How to contribute/submit a patch][contributing_manual]
-
----
-**All docs**
- * [All available docs list][all_docs_list]
----
-
-## License
+ - Creates ClickHouse clusters based on the provided Custom Resource [specification](docs/chi-examples/99-clickhouseinstallation-max.yaml) (see the sketch below)
+ - Customized storage provisioning (VolumeClaim templates)
+ - Customized pod templates
+ - Customized service templates for endpoints
+ - Flexible templating
+
+## Quick Start

-Copyright (c) 2019-2219, Altinity Ltd and/or its affiliates. All rights reserved.
+> On Kubernetes, we recommend installing ClickHouse clusters through RadonDB ClickHouse Operator.

-`clickhouse-operator` is licensed under the Apache License 2.0.
+- [Quick Start](docs/quick_start.md)
+- [Installing RadonDB ClickHouse on KubeSphere](docs/en-us/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md)
+- [More Detailed Guidance](docs/README.md)
+
+## License
+
+RadonDB ClickHouse Operator is published under the Apache License 2.0. See [LICENSE](./LICENSE) for more details.
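For a concrete taste of the first customization feature above, here is a minimal, hedged sketch of creating a cluster from a `ClickHouseInstallation` custom resource. The `demo` namespace is illustrative; the resource body matches the `simple-01` example shipped in `docs/chi-examples` and shown later in this patch.

```bash
# Minimal sketch: create a 1-replica RadonDB ClickHouse cluster from a
# ClickHouseInstallation custom resource. The "demo" namespace is illustrative;
# the resource body matches the simple-01 example in docs/chi-examples.
kubectl create namespace demo
kubectl apply -n demo -f - <<EOF
apiVersion: "clickhouse.radondb.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "simple-01"
EOF
```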
-
-[chi_max_yaml]: ./docs/chi-examples/99-clickhouseinstallation-max.yaml
-[intro]: ./docs/introduction.md
-[quick_start_guide]: ./docs/quick_start.md
-[detailed_installation_instructions]: ./docs/operator_installation_details.md
-[replication_setup]: ./docs/replication_setup.md
-[crd_explained]: ./docs/custom_resource_explained.md
-[zookeeper_setup]: ./docs/zookeeper_setup.md
-[monitoring_setup]: ./docs/monitoring_setup.md
-[prometheus_setup]: ./docs/prometheus_setup.md
-[grafana_setup]: ./docs/grafana_setup.md
-[storage_configuration]: ./docs/storage.md
-[update_cluster_add_replication]: ./docs/chi_update_add_replication.md
-[update_clickhouse_version]: ./docs/chi_update_clickhouse_version.md
-[update_operator]: ./docs/operator_upgrade.md
-[schema_migration]: ./docs/schema_migration.md
-[operator_configuration]: ./docs/operator_configuration.md
-[all_docs_list]: ./docs/README.md
-[contributing_manual]: ./CONTRIBUTING.md
+
+## Discussion and Community
+
+- Contribution
+
+  We welcome code contributions of any kind; PR requirements can be found in [How to contribute/submit a patch](./CONTRIBUTING.md).
+
+- Forum
+
+  The RadonDB ClickHouse topic is in the [KubeSphere Community](https://kubesphere.com.cn/forum/t/radondb).
+
+- Follow our official WeChat account.
+
+  ![WeChat QR code](docs/_images/vx_code_258.jpg)
+
+---
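Since the contribution flow referenced above requires DCO-signed commits (see the CONTRIBUTING.md changes at the top of this patch), here is a quick sketch of the sign-off workflow; the name and email are placeholders.

```bash
# One-time git identity setup; placeholder name/email.
git config user.name "John Doe"
git config user.email "john.doe@example.com"

# -s/--signoff appends a "Signed-off-by: John Doe <john.doe@example.com>"
# trailer built from the identity configured above.
git commit -s -m "Describe your change here"

# Confirm the trailer is present before opening the PR.
git log -1 --format='%B'
```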
+Please submit any RadonDB ClickHouse Operator bugs, issues, and feature requests to [GitHub Issues](https://github.com/radondb/radondb-clickhouse-operator/issues).
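The feature list above also mentions exporting ClickHouse metrics to Prometheus. As a hedged sketch only — the service name, namespace, and port below are assumptions carried over from the upstream operator's install bundle, not confirmed by this patch — the quickest way to eyeball the exporter is a port-forward:

```bash
# ASSUMPTIONS: service name, namespace, and port come from the upstream
# clickhouse-operator install bundle and may differ in this fork.
# Check `kubectl get svc -n kube-system` for the actual exporter service.
kubectl port-forward -n kube-system service/clickhouse-operator-metrics 8888:8888 &
curl -s http://localhost:8888/metrics | head
```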
diff --git a/README_zh.md b/README_zh.md new file mode 100644 index 00000000..30566ca4 --- /dev/null +++ b/README_zh.md @@ -0,0 +1,65 @@ +# ![LOGO](docs/_images/logo_radondb.png) + +> [English](README.md) | 简体中文 + +---- + +## 什么是 RadonDB ClickHouse + +ClickHouse 是一个用于联机分析(OLAP)的列式数据库管理系统(DBMS)。 + +RadonDB ClickHouse 是一款基于 [ClickHouse](https://clickhouse.tech/) 和 [ClickHouse Operator](https://github.com/Altinity/clickhouse-operator) 的开源、高可用、云原生集群解决方案。[RadonDB ClickHouse Operator](https://github.com/radondb/radondb-clickhouse-operator) 致力于在 Kubernetes 上轻便快速创建 ClickHouse 集群。 + +RadonDB ClickHouse Operator 支持在 [Kubernetes 1.15.11+](https://kubernetes.io) 、[KubeSphere 3.1.x](https://kubesphere.com.cn) 、[Rancher](https://rancher.com/) 、[OpenShift](https://www.redhat.com/en) 等容器平台部署、配置和管理 RadonDB ClickHouse 集群。 + +## 架构图 + +![架构图](docs/_images/arch.png) + +## 特性功能 + +- 兼容 ClickHouse 配置,集成 ZooKeeper 组件 +- 支持集群自动扩容 +- 支持 ClickHouse 内核版本升级 +- 满足 Prometheus 监控指标标准,支持第三方平台监控服务 +- 提供多种定制和自定义配置模版 + + - 定制的资源配置模版 + - 定制的存储配置(VolumeClaim)模版 + - 定制的 Pod 配置模版 + - 定制的终端服务配置模版 + - 灵活的自定义配置模版 + +## 快速入门 + +> 当使用 Kubernetes 时,推荐您通过 RadonDB ClickHouse Operator 部署 ClickHouse 集群。 + +- [快速入门](docs/zh-cn/quick_start.md) +- [在 KubeSphere 上部署 RadonDB ClickHouse](docs/zh-cn/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md) +- [更多文档](docs/README.md) + +## 协议 + +RadonDB ClickHouse 基于 Apache 2.0 协议,详见 [LICENSE](./LICENSE)。 + +## 欢迎加入社区话题互动 + +- 贡献 + + 我们欢迎任何形式代码贡献,一些提交 PR 要求请参见 [How to contribute/submit a patch](./CONTRIBUTING.md)。 + +- 论坛 + + 请加入[KubeSphere 开发者社区](https://kubesphere.com.cn/forum/t/radondb) RadonDB ClickHouse 话题专区。 + +- 欢迎关注微信公众号 + + ![](docs/_images/vx_code_258.jpg) + + +
+如有任何关于 RadonDB ClickHouse Operator 的问题或建议，请在 GitHub 提交 Issue 反馈。
diff --git a/docs/README.md b/docs/README.md
index 66b87c59..939c87df 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,37 +1,44 @@
 # Table of Contents

-## Quick Start:
-1. [quick_start.md](./quick_start.md) - quick start
-1. [introduction.md](./introduction.md) - general introduction
+## Introduction
+1. [general introduction](./introduction.md)
+2. [architecture overview](./architecture.md)
+3. [operator configuration in detail](./operator_configuration.md)
+4. [Custom Resource Definition in detail](./custom_resource_explained.md)
+5. [storage explained](./storage.md)

-## ClickHouse Operator:
-1. [operator_installation_details.md](./operator_installation_details.md) - how to install operator in details
-1. [operator_upgrade.md](./operator_upgrade.md) - how to upgrade operator to the different version
-1. [operator_configuration.md](./operator_configuration.md) - operator configuration in details
-1. [operator_build_from_sources.md](./operator_build_from_sources.md) - how to build operator from sources
-1. [custom_resource_explained.md](./custom_resource_explained.md) - explain Custom Resource Definition in details
-1. [clickhouse_config_errors_handling.md](./clickhouse_config_errors_handling.md) - how operator handles ClickHouse's config errors
-1. [architecture.md](./architecture.md) - architecture overview
-1. [schema_migration.md](./schema_migration.md) - how operator migrates schema during cluster resize
+## Quick Start
+1. [quick start](./quick_start.md)

-## ClickHouse Installation:
-1. [zookeeper_setup.md](./zookeeper_setup.md) - how to set up zookeeper
-1. [replication_setup.md](./replication_setup.md) - how to set up replication
-1. [chi_update_add_replication.md](./chi_update_add_replication.md) - how to add replication
-1. [chi_update_clickhouse_version.md](./chi_update_clickhouse_version.md) - how to update version
-1. [clickhouse_backup_and_restore.md](./clickhouse_backup_and_restore.md) - how to do backup and restore
-1. [storage.md](./storage.md) - storage explained
+## ClickHouse Operator
+1. [how to install operator in detail](./operator_installation_details.md)
+2. [how to upgrade operator to a different version](./operator_upgrade.md)
+3. [how to build operator from sources](./operator_build_from_sources.md)
+4. [how operator handles ClickHouse's config errors](./clickhouse_config_errors_handling.md)
+5. [how operator migrates schema during cluster resize](./schema_migration.md)

-## ClickHouse Monitor:
-1. [monitoring_setup.md](./monitoring_setup.md) - how to set up monitoring
-1. [prometheus_setup.md](./prometheus_setup.md) - how to set up Prometheus
-1. [grafana_setup.md](./grafana_setup.md) - how to set up Grafana
+## ClickHouse Cluster

-## ClickHouse Backup:
-1. [clickhouse_backup_and_restore.md](./clickhouse_backup_and_restore.md) - how to backup / restore clickhouse cluster
+1. [how to set up zookeeper](./zookeeper_setup.md)
+2. [how to set up replication](./replication_setup.md)
+3. [how to add replication](./chi_update_add_replication.md)
+4. [how to update version](./chi_update_clickhouse_version.md)
+5. [how to do backup and restore](./clickhouse_backup_and_restore.md)

-## Others:
-1. [k8s_cluster_access.md](./k8s_cluster_access.md) - how to set up cluster access
\ No newline at end of file
+
+## ClickHouse Monitor
+
+1. [how to set up monitoring](./monitoring_setup.md)
+2. [how to set up Prometheus](./prometheus_setup.md)
+3. [how to set up Grafana](./grafana_setup.md)
+
+## ClickHouse Backup
+
+1. 
[how to backup / restore clickhouse cluster](./clickhouse_backup_and_restore.md) + +## Others + +1. [how to set up cluster access](./k8s_cluster_access.md) diff --git a/docs/_images/arch.png b/docs/_images/arch.png new file mode 100644 index 00000000..54f57863 Binary files /dev/null and b/docs/_images/arch.png differ diff --git a/docs/_images/logo_radondb.png b/docs/_images/logo_radondb.png new file mode 100644 index 00000000..deba0b89 Binary files /dev/null and b/docs/_images/logo_radondb.png differ diff --git a/docs/_images/logo_radondb.svg b/docs/_images/logo_radondb.svg new file mode 100644 index 00000000..affc2e9a --- /dev/null +++ b/docs/_images/logo_radondb.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/docs/_images/vx_code_1280.jpg b/docs/_images/vx_code_1280.jpg new file mode 100644 index 00000000..e4b804dc Binary files /dev/null and b/docs/_images/vx_code_1280.jpg differ diff --git a/docs/_images/vx_code_258.jpg b/docs/_images/vx_code_258.jpg new file mode 100644 index 00000000..32f75245 Binary files /dev/null and b/docs/_images/vx_code_258.jpg differ diff --git a/docs/_images/vx_code_430.jpg b/docs/_images/vx_code_430.jpg new file mode 100644 index 00000000..99288a77 Binary files /dev/null and b/docs/_images/vx_code_430.jpg differ diff --git a/docs/en-us/_images/add-clickhouse.png b/docs/en-us/_images/add-clickhouse.png new file mode 100644 index 00000000..e9c1d1fb Binary files /dev/null and b/docs/en-us/_images/add-clickhouse.png differ diff --git a/docs/en-us/_images/add-repo.png b/docs/en-us/_images/add-repo.png new file mode 100644 index 00000000..9758f614 Binary files /dev/null and b/docs/en-us/_images/add-repo.png differ diff --git a/docs/en-us/_images/app-running.png b/docs/en-us/_images/app-running.png new file mode 100644 index 00000000..a5977282 Binary files /dev/null and b/docs/en-us/_images/app-running.png differ diff --git a/docs/en-us/_images/basic-info.png b/docs/en-us/_images/basic-info.png new file mode 100644 index 00000000..44ae19ae Binary files /dev/null and b/docs/en-us/_images/basic-info.png differ diff --git a/docs/en-us/_images/change-nodeport.png b/docs/en-us/_images/change-nodeport.png new file mode 100644 index 00000000..adb6d10c Binary files /dev/null and b/docs/en-us/_images/change-nodeport.png differ diff --git a/docs/en-us/_images/chart-tab.png b/docs/en-us/_images/chart-tab.png new file mode 100644 index 00000000..b6e550cb Binary files /dev/null and b/docs/en-us/_images/chart-tab.png differ diff --git a/docs/en-us/_images/click-deploy-new-app.png b/docs/en-us/_images/click-deploy-new-app.png new file mode 100644 index 00000000..5208b1b4 Binary files /dev/null and b/docs/en-us/_images/click-deploy-new-app.png differ diff --git a/docs/en-us/_images/click-deploy.png b/docs/en-us/_images/click-deploy.png new file mode 100644 index 00000000..7b1c242d Binary files /dev/null and b/docs/en-us/_images/click-deploy.png differ diff --git a/docs/en-us/_images/clickhouse-cluster.png b/docs/en-us/_images/clickhouse-cluster.png new file mode 100644 index 00000000..804d7147 Binary files /dev/null and b/docs/en-us/_images/clickhouse-cluster.png differ diff --git a/docs/en-us/_images/clickhouse-service.png b/docs/en-us/_images/clickhouse-service.png new file mode 100644 index 00000000..246c1814 Binary files /dev/null and b/docs/en-us/_images/clickhouse-service.png differ diff --git a/docs/en-us/_images/from-app-templates.png b/docs/en-us/_images/from-app-templates.png new file mode 100644 index 00000000..06c85be5 Binary files /dev/null and 
b/docs/en-us/_images/from-app-templates.png differ
diff --git a/docs/en-us/_images/get-username-password.png b/docs/en-us/_images/get-username-password.png
new file mode 100644
index 00000000..f45886e9
Binary files /dev/null and b/docs/en-us/_images/get-username-password.png differ
diff --git a/docs/en-us/_images/pods-running.png b/docs/en-us/_images/pods-running.png
new file mode 100644
index 00000000..a1e9a40f
Binary files /dev/null and b/docs/en-us/_images/pods-running.png differ
diff --git a/docs/en-us/_images/project-overview.png b/docs/en-us/_images/project-overview.png
new file mode 100644
index 00000000..58e5a98f
Binary files /dev/null and b/docs/en-us/_images/project-overview.png differ
diff --git a/docs/en-us/_images/repo-added.png b/docs/en-us/_images/repo-added.png
new file mode 100644
index 00000000..15006d7d
Binary files /dev/null and b/docs/en-us/_images/repo-added.png differ
diff --git a/docs/en-us/_images/statefulset-monitoring.png b/docs/en-us/_images/statefulset-monitoring.png
new file mode 100644
index 00000000..1c1a068b
Binary files /dev/null and b/docs/en-us/_images/statefulset-monitoring.png differ
diff --git a/docs/en-us/_images/statefulsets-running.png b/docs/en-us/_images/statefulsets-running.png
new file mode 100644
index 00000000..37986e08
Binary files /dev/null and b/docs/en-us/_images/statefulsets-running.png differ
diff --git a/docs/en-us/_images/use-clickhouse.png b/docs/en-us/_images/use-clickhouse.png
new file mode 100644
index 00000000..17e047f3
Binary files /dev/null and b/docs/en-us/_images/use-clickhouse.png differ
diff --git a/docs/en-us/_images/volume-status.png b/docs/en-us/_images/volume-status.png
new file mode 100644
index 00000000..3eb4e94f
Binary files /dev/null and b/docs/en-us/_images/volume-status.png differ
diff --git a/docs/en-us/_images/volumes.png b/docs/en-us/_images/volumes.png
new file mode 100644
index 00000000..f9d96664
Binary files /dev/null and b/docs/en-us/_images/volumes.png differ
diff --git a/docs/en-us/deploy_radondb-clickhouse_on_kubernetes.md b/docs/en-us/deploy_radondb-clickhouse_on_kubernetes.md
new file mode 100644
index 00000000..04dc9014
--- /dev/null
+++ b/docs/en-us/deploy_radondb-clickhouse_on_kubernetes.md
@@ -0,0 +1,201 @@
+Contents
+=================
+
+- [Contents](#contents)
+- [Deploy RadonDB ClickHouse on Kubernetes](#deploy-radondb-clickhouse-on-kubernetes)
+  - [Introduction](#introduction)
+  - [Prerequisites](#prerequisites)
+  - [Procedure](#procedure)
+    - [Step 1 : Add Helm Repository](#step-1--add-helm-repository)
+    - [Step 2 : Install to Kubernetes](#step-2--install-to-kubernetes)
+    - [Step 3 : Verification](#step-3--verification)
+      - [Check the Pod](#check-the-pod)
+      - [Check the Status of Pod](#check-the-status-of-pod)
+  - [Access RadonDB ClickHouse](#access-radondb-clickhouse)
+    - [Use Pod](#use-pod)
+    - [Use Service](#use-service)
+  - [Persistence](#persistence)
+  - [Custom Configuration](#custom-configuration)
+
+# Deploy RadonDB ClickHouse on Kubernetes
+
+> English | [简体中文](../zh-cn/deploy_radondb-clickhouse_on_kubernetes.md)
+
+## Introduction
+
+RadonDB ClickHouse is an open-source, cloud-native, high-availability cluster solution based on [ClickHouse](https://clickhouse.tech/). It provides features such as high availability, PB-scale storage, real-time analytics, architectural stability and scalability.
+
+This tutorial demonstrates how to deploy RadonDB ClickHouse on Kubernetes.
+
+## Prerequisites
+
+- You have created a Kubernetes cluster.
+- You have installed Helm.
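Before moving on to the procedure, a short sketch for double-checking both prerequisites; these are plain `kubectl`/`helm` commands, nothing specific to this chart.

```bash
# Confirm the Kubernetes cluster is reachable and nodes are Ready.
kubectl cluster-info
kubectl get nodes

# Confirm Helm is installed (Helm v3+ needs no in-cluster Tiller component).
helm version
```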
+
+## Procedure
+
+### Step 1 : Add Helm Repository
+
+Add and update the Helm repository.
+
+```bash
+$ helm repo add <repoName> https://radondb.github.io/radondb-clickhouse-kubernetes/
+$ helm repo update
+```
+
+**Expected output**
+
+```bash
+$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
+"ck" has been added to your repositories
+
+$ helm repo update
+Hang tight while we grab the latest from your chart repositories...
+...Successfully got an update from the "ck" chart repository
+Update Complete. ⎈Happy Helming!⎈
+```
+
+### Step 2 : Install to Kubernetes
+
+> ZooKeeper stores ClickHouse's metadata, so you can install ZooKeeper and the ClickHouse cluster at the same time.
+
+```bash
+$ helm install --generate-name <repoName>/clickhouse-cluster -n <namespace> --version v1.0
+```
+
+- For more configurable options and variables, see [values.yaml](../clickhouse-cluster-helm/values.yaml).
+- If you need to customize cluster parameters, you can modify the `values.yaml` file. For details, see [Custom Configuration](#custom-configuration).
+
+**Expected output:**
+
+```bash
+$ helm install clickhouse ck/clickhouse-cluster -n test --version v1.0
+NAME: clickhouse
+LAST DEPLOYED: Thu Jun 17 07:55:42 2021
+NAMESPACE: test
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+$ helm list -n test
+NAME        NAMESPACE  REVISION  UPDATED                                  STATUS    CHART            APP VERSION
+clickhouse  test       1         2021-06-07 07:55:42.860240764 +0000 UTC  deployed  clickhouse-v1.0  21.1
+```
+
+### Step 3 : Verification
+
+#### Check the Pod
+
+```bash
+kubectl get all --selector app.kubernetes.io/instance=<release-name> -n <namespace>
+```
+
+**Expected output:**
+
+```bash
+$ kubectl get all --selector app.kubernetes.io/instance=clickhouse -n test
+NAME                     READY   STATUS    RESTARTS   AGE
+pod/clickhouse-s0-r0-0   1/1     Running   0          72s
+pod/clickhouse-s0-r1-0   1/1     Running   0          72s
+
+NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+service/clickhouse         ClusterIP   10.96.230.92    <none>        9000/TCP,8123/TCP   72s
+service/clickhouse-s0-r0   ClusterIP   10.96.83.41     <none>        9000/TCP,8123/TCP   72s
+service/clickhouse-s0-r1   ClusterIP   10.96.240.111   <none>        9000/TCP,8123/TCP   72s
+
+NAME                                READY   AGE
+statefulset.apps/clickhouse-s0-r0   1/1     72s
+statefulset.apps/clickhouse-s0-r1   1/1     72s
+```
+
+#### Check the Status of Pod
+
+Wait a while, then check the `Events` output. When it consistently returns `Started`, RadonDB ClickHouse is up and running.
+
+```bash
+kubectl describe pod <pod-name> -n <namespace>
+```
+
+**Expected output:**
+
+```bash
+$ kubectl describe pod clickhouse-s0-r0-0 -n test
+...
+Events:
+  Type     Reason                  Age                    From                     Message
+  ----     ------                  ----                   ----                     -------
+  Warning  FailedScheduling        7m30s (x3 over 7m42s)  default-scheduler        error while running "VolumeBinding" filter plugin for pod "clickhouse-s0-r0-0": pod has unbound immediate PersistentVolumeClaims
+  Normal   Scheduled               7m28s                  default-scheduler        Successfully assigned default/clickhouse-s0-r0-0 to worker-p004
+  Normal   SuccessfulAttachVolume  7m6s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-21c5de1f-c396-4743-a31b-2b094ecaf79b"
+  Warning  Unhealthy               5m4s (x3 over 6m4s)    kubelet, worker-p004     Liveness probe failed: Code: 210. DB::NetException: Connection refused (localhost:9000)
+  Normal   Killing                 5m4s                   kubelet, worker-p004     Container clickhouse failed liveness probe, will be restarted
+  Normal   Pulled                  4m34s (x2 over 6m50s)  kubelet, worker-p004     Container image "tceason/clickhouse-server:v21.1.3.32-stable" already present on machine
+  Normal   Created                 4m34s (x2 over 6m50s)  kubelet, worker-p004     Created container clickhouse
+  Normal   Started                 4m33s (x2 over 6m48s)  kubelet, worker-p004     Started container clickhouse
+```
+
+## Access RadonDB ClickHouse
+
+### Use Pod
+
+You can connect to a ClickHouse Pod directly with `kubectl`.
+
+```bash
+$ kubectl exec -it <pod-name> -n <namespace> -- clickhouse-client --user=<username> --password=<password>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get pods -n test | grep clickhouse
+clickhouse-s0-r0-0   1/1     Running   0          8m50s
+clickhouse-s0-r1-0   1/1     Running   0          8m50s

+$ kubectl exec -it clickhouse-s0-r0-0 -n test -- clickhouse-client -u default --password=C1ickh0use --query='select hostName()'
+clickhouse-s0-r0-0
+```
+
+### Use Service
+
+```bash
+$ echo '<query>' | curl 'http://<username>:<password>@<service-ip>:<port>/' --data-binary @-
+```
+
+**Expected output**
+
+```bash
+$ kubectl get service -n test | grep clickhouse
+clickhouse         ClusterIP   10.96.71.193   <none>   9000/TCP,8123/TCP   12m
+clickhouse-s0-r0   ClusterIP   10.96.40.207   <none>   9000/TCP,8123/TCP   12m
+clickhouse-s0-r1   ClusterIP   10.96.63.179   <none>   9000/TCP,8123/TCP   12m
+
+$ echo 'select hostname()' | curl 'http://default:C1ickh0use@10.96.71.193:8123/' --data-binary @-
+clickhouse-s0-r1-0
+$ echo 'select hostname()' | curl 'http://default:C1ickh0use@10.96.71.193:8123/' --data-binary @-
+clickhouse-s0-r0-0
+```
+
+## Persistence
+
+You can configure a Pod to use a PersistentVolumeClaim (PVC) for storage.
+By default, the PVC is mounted on the `/var/lib/clickhouse` directory.
+
+1. Create a PVC that is automatically bound to a suitable PersistentVolume (PV).
+
+2. Create a Pod that uses the above PVC for storage.
+
+> **Note**
+> A PVC can be bound to different PVs, and different PVs may provide different performance characteristics.
+
+## Custom Configuration
+
+If you need to customize many parameters, you can modify [values.yaml](../clickhouse-cluster-helm/values.yaml).
+
+1. Download the `values.yaml` file.
+2. Modify the parameter values in `values.yaml`.
+3. Run the following command to deploy the cluster.
+
+```bash
+$ helm install --generate-name <repoName>/clickhouse-cluster -n <namespace> \
+  -f /<path>/to/values.yaml
+```
\ No newline at end of file
diff --git a/docs/en-us/deploy_radondb-clickhouse_with_operator_on_kubernetes.md b/docs/en-us/deploy_radondb-clickhouse_with_operator_on_kubernetes.md
new file mode 100644
index 00000000..0456cf0c
--- /dev/null
+++ b/docs/en-us/deploy_radondb-clickhouse_with_operator_on_kubernetes.md
@@ -0,0 +1,237 @@
+Contents
+=================
+- [Contents](#contents)
+- [Deploy RadonDB ClickHouse on Kubernetes](#deploy-radondb-clickhouse-on-kubernetes)
+  - [Introduction](#introduction)
+  - [Prerequisites](#prerequisites)
+  - [Procedure](#procedure)
+    - [Step 1 : Add Helm Repository](#step-1--add-helm-repository)
+    - [Step 2 : Install RadonDB ClickHouse Operator](#step-2--install-radondb-clickhouse-operator)
+    - [Step 3 : Install RadonDB ClickHouse Cluster](#step-3--install-radondb-clickhouse-cluster)
+    - [Step 4 : Verification](#step-4--verification)
+      - [Check the Status of Pod](#check-the-status-of-pod)
+      - [Check the Status of SVC](#check-the-status-of-svc)
+  - [Access RadonDB ClickHouse](#access-radondb-clickhouse)
+    - [Use Pod](#use-pod)
+    - [Use Service](#use-service)
+  - [Persistence](#persistence)
+  - [Configuration](#configuration)
+  - [Custom Configuration](#custom-configuration)
+
+# Deploy RadonDB ClickHouse on Kubernetes
+
+> English | [简体中文](../zh-cn/deploy_radondb-clickhouse_with_operator_on_kubernetes.md)
+
+## Introduction
+
+RadonDB ClickHouse is an open-source, cloud-native, high-availability cluster solution based on [ClickHouse](https://clickhouse.tech/). It provides features such as high availability, PB-scale storage, real-time analytics, architectural stability and scalability.
+
+This tutorial demonstrates how to deploy RadonDB ClickHouse on Kubernetes.
+
+## Prerequisites
+
+- You have created a Kubernetes cluster.
+- You have installed Helm.
+
+## Procedure
+
+### Step 1 : Add Helm Repository
+
+Add and update the Helm repository.
+
+```bash
+$ helm repo add <repoName> https://radondb.github.io/radondb-clickhouse-kubernetes/
+$ helm repo update
+```
+
+**Expected output**
+
+```bash
+$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
+"ck" has been added to your repositories
+
+$ helm repo update
+Hang tight while we grab the latest from your chart repositories...
+...Successfully got an update from the "ck" chart repository
+Update Complete. ⎈Happy Helming!⎈
+```
+
+### Step 2 : Install RadonDB ClickHouse Operator
+
+```bash
+$ helm install --generate-name <repoName>/clickhouse-operator -n <namespace>
+```
+
+**Expected output**
+
+```bash
+$ helm install clickhouse-operator ck/clickhouse-operator -n kube-system
+NAME: clickhouse-operator
+LAST DEPLOYED: Wed Aug 17 14:43:44 2021
+NAMESPACE: kube-system
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+```
+
+> **Notice**
+>
+> This command will install ClickHouse Operator in the namespace `kube-system`. Therefore, ClickHouse Operator only needs to be installed once in a Kubernetes cluster.
+
+### Step 3 : Install RadonDB ClickHouse Cluster
+
+```bash
+$ helm install --generate-name <repoName>/clickhouse-cluster -n <namespace> \
+  --set <parameter>=<value>
+```
+
+- For more information about cluster parameters, see [Configuration](#configuration).
+- If you need to customize many parameters, you can modify the `values.yaml` file. For details, see [Custom Configuration](#custom-configuration).
+
+**Expected output**
+
+```bash
+$ helm install clickhouse ck/clickhouse-cluster -n test
+NAME: clickhouse
+LAST DEPLOYED: Wed Aug 17 14:48:12 2021
+NAMESPACE: test
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+```
+
+### Step 4 : Verification
+
+#### Check the Status of Pod
+
+```bash
+$ kubectl get pods -n <namespace>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get pods -n test
+NAME                                READY   STATUS    RESTARTS   AGE
+pod/chi-ClickHouse-replicas-0-0-0   2/2     Running   0          3m13s
+pod/chi-ClickHouse-replicas-0-1-0   2/2     Running   0          2m51s
+pod/zk-clickhouse-cluster-0         1/1     Running   0          3m13s
+pod/zk-clickhouse-cluster-1         1/1     Running   0          3m13s
+pod/zk-clickhouse-cluster-2         1/1     Running   0          3m13s
+```
+
+#### Check the Status of SVC
+
+```bash
+$ kubectl get service -n <namespace>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get service -n test
+NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
+service/chi-ClickHouse-replicas-0-0    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m53s
+service/chi-ClickHouse-replicas-0-1    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m36s
+service/clickhouse-ClickHouse          ClusterIP   10.96.137.152   <none>        8123/TCP,9000/TCP            3m14s
+service/zk-client-clickhouse-cluster   ClusterIP   10.107.33.51    <none>        2181/TCP,7000/TCP            3m13s
+service/zk-server-clickhouse-cluster   ClusterIP   None            <none>        2888/TCP,3888/TCP            3m13s
+```
+
+## Access RadonDB ClickHouse
+
+### Use Pod
+
+You can connect to a ClickHouse Pod directly with `kubectl`.
+
+```bash
+$ kubectl exec -it <pod-name> -n <namespace> -- clickhouse-client --user=<username> --password=<password>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get pods | grep clickhouse
+chi-ClickHouse-replicas-0-0-0   1/1     Running   0          8m50s
+chi-ClickHouse-replicas-0-1-0   1/1     Running   0          8m50s
+
+$ kubectl exec -it chi-ClickHouse-replicas-0-0-0 -- clickhouse-client -u clickhouse --password=c1ickh0use0perator --query='select hostName()'
+chi-ClickHouse-replicas-0-0-0
+```
+
+### Use Service
+
+```bash
+$ echo '<query>' | curl 'http://<username>:<password>@<service-ip>:<port>/' --data-binary @-
+```
+
+**Expected output**
+
+```bash
+$ kubectl get service | grep clickhouse
+clickhouse-ClickHouse         ClusterIP   10.96.137.152   <none>   9000/TCP,8123/TCP   12m
+chi-ClickHouse-replicas-0-0   ClusterIP   None            <none>   9000/TCP,8123/TCP   12m
+chi-ClickHouse-replicas-0-1   ClusterIP   None            <none>   9000/TCP,8123/TCP   12m
+
+$ echo 'select hostname()' | curl 'http://clickhouse:c1ickh0use0perator@10.96.137.152:8123/' --data-binary @-
+chi-ClickHouse-replicas-0-1-0
+$ echo 'select hostname()' | curl 'http://clickhouse:c1ickh0use0perator@10.96.137.152:8123/' --data-binary @-
+chi-ClickHouse-replicas-0-0-0
+```
+
+## Persistence
+
+You can configure a Pod to use a PersistentVolumeClaim (PVC) for storage.
+By default, the PVC is mounted on the `/var/lib/clickhouse` directory.
+
+1. Create a PVC that is automatically bound to a suitable PersistentVolume (PV).
+
+2. Create a Pod that uses the above PVC for storage.
+
+> **Notice**
+>
+> A PVC can be bound to different PVs, and different PVs may provide different performance characteristics.
+
+## Configuration
+
+| Parameter | Description | Default Value |
+|:----|:----|:----|
+| **ClickHouse** | | |
+| `clickhouse.clusterName` | ClickHouse cluster name. | all-nodes |
+| `clickhouse.shardscount` | Shards count. Once confirmed, it cannot be reduced. | 1 |
+| `clickhouse.replicascount` | Replicas count. Once confirmed, it cannot be modified. | 2 |
+| `clickhouse.image` | ClickHouse image name; modifying it is not recommended. | radondb/clickhouse-server:v21.1.3.32-stable |
+| `clickhouse.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | IfNotPresent |
+| `clickhouse.resources.memory` | K8s memory resources requested by a single Pod. | 1Gi |
+| `clickhouse.resources.cpu` | K8s CPU resources requested by a single Pod. | 0.5 |
+| `clickhouse.resources.storage` | K8s storage resources requested by a single Pod. | 10Gi |
+| `clickhouse.user` | ClickHouse user array. Each user needs to contain a username, password and networks array. | [{"username": "clickhouse", "password": "c1ickh0use0perator", "networks": ["127.0.0.1", "::/0"]}] |
+| `clickhouse.port.tcp` | Port for the native interface. | 9000 |
+| `clickhouse.port.http` | Port for the HTTP/REST interface. | 8123 |
+| `clickhouse.svc.type` | K8s service type. The value can be ClusterIP/NodePort/LoadBalancer. | ClusterIP |
+| `clickhouse.svc.qceip` | If the type is LoadBalancer, you need to configure a load balancer provided by a third-party platform. | nil |
+| **BusyBox** | | |
+| `busybox.image` | BusyBox image name; modifying it is not recommended. | busybox |
+| `busybox.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | Always |
+| **ZooKeeper** | | |
+| `zookeeper.install` | Whether to create ZooKeeper by operator. | true |
+| `zookeeper.port` | ZooKeeper service port. | 2181 |
+| `zookeeper.replicas` | ZooKeeper cluster replicas count. | 3 |
+| `zookeeper.image` | ZooKeeper image name; modifying it is not recommended. | radondb/zookeeper:3.6.2 |
+| `zookeeper.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | Always |
+| `zookeeper.resources.memory` | K8s memory resources requested by a single Pod. | Deprecated, if install = true |
+| `zookeeper.resources.cpu` | K8s CPU resources requested by a single Pod. | Deprecated, if install = true |
+| `zookeeper.resources.storage` | K8s storage resources requested by a single Pod. | Deprecated, if install = true |
+
+## Custom Configuration
+
+If you need to customize many parameters, you can modify [values.yaml](../clickhouse-cluster/values.yaml).
+
+1. Download the `values.yaml` file.
+2. Modify the parameter values in `values.yaml`.
+3. Run the following command to deploy the cluster.
+
+```bash
+$ helm install --generate-name <repoName>/clickhouse-cluster -n <namespace> \
+  -f /<path>/to/values.yaml
+```
diff --git a/docs/en-us/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md b/docs/en-us/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md
new file mode 100644
index 00000000..874fa0e7
--- /dev/null
+++ b/docs/en-us/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md
@@ -0,0 +1,139 @@
+Contents
+=================
+
+- [Contents](#contents)
+- [Deploy RadonDB ClickHouse on KubeSphere](#deploy-radondb-clickhouse-on-kubesphere)
+  - [Introduction](#introduction)
+  - [Prerequisites](#prerequisites)
+  - [Procedure](#procedure)
+    - [Step 1 : Deploy ClickHouse Operator](#step-1--deploy-clickhouse-operator)
+    - [Step 2 : Add an app repository](#step-2---add-an-app-repository)
+    - [Step 3 : Deploy a ClickHouse Cluster](#step-3---deploy-a-clickhouse-cluster)
+    - [Step 4 : Verification](#step-4---verification)
+  - [Access RadonDB ClickHouse](#access-radondb-clickhouse)
+
+# Deploy RadonDB ClickHouse on KubeSphere
+
+> English | [简体中文](../zh-cn/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md)
+
+## Introduction
+
+RadonDB ClickHouse is an open-source, cloud-native, high-availability cluster solution based on [ClickHouse](https://clickhouse.tech/).
+It provides features such as high availability, PB-scale storage, real-time analytics, architectural stability and scalability.
+
+This tutorial demonstrates how to deploy ClickHouse Operator and a ClickHouse Cluster on KubeSphere.
+
+## Prerequisites
+
+- You have created a KubeSphere cluster.
+- You need to enable [the OpenPitrix system](https://kubesphere.io/docs/pluggable-components/app-store/) in KubeSphere.
+- You need to [Create Workspaces, Projects, Accounts and Roles](https://kubesphere.io/docs/quick-start/create-workspace-and-project/) in KubeSphere.
+- You need to enable the gateway in your project to provide [external access](https://kubesphere.io/docs/project-administration/project-gateway/).
+
+## Procedure
+
+### Step 1 : Deploy ClickHouse Operator
+
+Log in to the KubeSphere Web console as `admin`, and use **Kubectl** from the **Toolbox** in the bottom-right corner to run the following command to install ClickHouse Operator. It is recommended that you have at least two worker nodes available in your cluster.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml
+```
+
+> **Notice**
+>
+> This command will install ClickHouse Operator in the namespace `kube-system`. Therefore, ClickHouse Operator only needs to be installed once in a KubeSphere cluster.
+
+### Step 2 : Add an app repository
+
+1. Log out of KubeSphere and log back in as `ws-admin`. In `demo-workspace`, go to **App Repositories** under **App Management**, and then click **Add**.
+
+   ![add-repo](_images/add-repo.png)
+
+2. In the dialog that appears, enter `clickhouse` for the app repository name and `https://radondb.github.io/radondb-clickhouse-operator/` for the repository URL. Click **Validate** to verify the URL and you will see a green check mark next to the URL if it is available. Click **OK** to continue.
+
+3. Your repository appears in the list after it is successfully imported to KubeSphere.
+
+### Step 3 : Deploy a ClickHouse Cluster
+
+1. Log out of KubeSphere and log back in as `project-regular`. In `demo-project`, go to **Apps** under **Application Workloads** and click **Deploy New App**.
+
+   ![click-deploy-new-app](_images/click-deploy-new-app.png)
+
+2. In the dialog that appears, select **From App Templates**.
+
+   ![from-app-templates](_images/from-app-templates.png)
+
+3. On the new page that appears, select **clickhouse** from the drop-down list and then click **clickhouse-cluster**.
+
+   ![clickhouse-cluster](_images/clickhouse-cluster.png)
+
+4. On the **Chart Files** tab, you can view the configuration and download the `values.yaml` file. Click **Deploy** to continue.
+
+   ![chart-tab](_images/chart-tab.png)
+
+5. On the **Basic Information** page, confirm the app name, app version, and deployment location. Click **Next** to continue.
+
+   ![basic-info](_images/basic-info.png)
+
+6. On the **App Configurations** tab, you can change the YAML file to customize configurations. In this tutorial, click **Deploy** to use the default configurations.
+
+   ![click-deploy](_images/click-deploy.png)
+
+7. After a while, you can see the app status shown as **Running**.
+
+   ![app-running](_images/app-running.png)
+
+### Step 4 : Verification
+
+1. In **Workloads** under **Application Workloads**, click the **StatefulSets** tab and you can see the StatefulSets are up and running.
+
+   ![statefulsets-running](_images/statefulsets-running.png)
+
+2. Click a single StatefulSet to go to its detail page. You can see the metrics in line charts over a period of time under the **Monitoring** tab.
+
+   ![statefulset-monitoring](_images/statefulset-monitoring.png)
+
+3. In **Pods** under **Application Workloads**, you can see all the Pods are up and running.
+
+   ![pods-running](_images/pods-running.png)
+
+4. In **Volumes** under **Storage**, you can see the ClickHouse Cluster components are using persistent volumes.
+
+   ![volumes](_images/volumes.png)
+
+5. Volume usage is also monitored. Click a volume item to go to its detail page. Here is an example of one of the data nodes.
+
+   ![volume-status](_images/volume-status.png)
+
+6. On the **Overview** page of the project, you can see a list of resource usage in the current project.
+
+   ![project-overview](_images/project-overview.png)
+
+## Access RadonDB ClickHouse
+
+1. Log out of KubeSphere and log back in as `admin`. Hover your cursor over the hammer icon in the bottom-right corner and then select **Kubectl**.
+
+2. In the window that appears, run the following command and then find the username and password of the ClickHouse cluster.
+
+   ```bash
+   kubectl edit chi clickho-749j8s -n demo-project
+   ```
+
+   ![get-username-password](_images/get-username-password.png)
+
+   > **Notice**
+   >
+   > In the above command, `clickho-749j8s` is the ClickHouse application name and `demo-project` is the project name. Make sure you use your own application name and project name.
+
+3. Run the following command to access the ClickHouse cluster, and then you can use commands like `SHOW DATABASES` to interact with it.
+
+   ```bash
+   kubectl exec -it chi-clickho-749j8s-all-nodes-0-0-0 -n demo-project -- clickhouse-client --user=clickhouse --password=c1ickh0use0perator
+   ```
+
+   ![use-clickhouse](_images/use-clickhouse.png)
+
+> **Notice**
+>
+> In the above command, `chi-clickho-749j8s-all-nodes-0-0-0` is the Pod name and you can find it in **Pods** under **Application Workloads**. Make sure you use your own Pod name, project name, username and password.
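To go one step beyond `SHOW DATABASES`, here is a short sketch of running an ad-hoc query through the same `kubectl exec` path; the pod name, project, user, and password simply reuse the example values from the tutorial above.

```bash
# Reuses the example pod/project/credentials from this tutorial;
# substitute your own values as noted in the notices above.
kubectl exec -it chi-clickho-749j8s-all-nodes-0-0-0 -n demo-project -- \
  clickhouse-client --user=clickhouse --password=c1ickh0use0perator \
  --query='SELECT hostName(), version()'
```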
diff --git a/docs/quick_start.md b/docs/quick_start.md
index 7e7dffd1..f8c0628b 100644
--- a/docs/quick_start.md
+++ b/docs/quick_start.md
@@ -1,63 +1,85 @@
+English | [简体中文](./zh-cn/quick_start.md)
+
+---
+
+- [Quick Start Guides](#quick-start-guides)
+  - [Prerequisites](#prerequisites)
+  - [RadonDB ClickHouse Operator Installation](#radondb-clickhouse-operator-installation)
+    - [Case 1: Install operator into `kube-system` namespace](#case-1-install-operator-into-kube-system-namespace)
+    - [Case 2: Install operator on kubernetes version prior `1.17` in `kube-system` namespace](#case-2-install-operator-on-kubernetes-version-prior-117-in-kube-system-namespace)
+    - [Case 3: Customize installation parameters](#case-3-customize-installation-parameters)
+    - [Case 4: Not run scripts from internet in your protected environment](#case-4-not-run-scripts-from-internet-in-your-protected-environment)
+    - [Operator installation process](#operator-installation-process)
+    - [Building ClickHouse Operator from Sources](#building-clickhouse-operator-from-sources)
+  - [RadonDB ClickHouse Cluster Installation](#radondb-clickhouse-cluster-installation)
+    - [Create Custom Namespace](#create-custom-namespace)
+    - [Example 1: Trivial](#example-1-trivial)
+    - [Example 2: Simple Persistent Volume](#example-2-simple-persistent-volume)
+    - [Example 3: Custom Deployment with Pod and VolumeClaim](#example-3-custom-deployment-with-pod-and-volumeclaim)
+    - [Example 4: Custom Deployment with Specific ClickHouse Configuration](#example-4-custom-deployment-with-specific-clickhouse-configuration)
+    - [Validate cluster deployment](#validate-cluster-deployment)
+  - [Access to RadonDB ClickHouse](#access-to-radondb-clickhouse)
+    - [Via EXTERNAL-IP](#via-external-ip)
+    - [Via pod-NAME](#via-pod-name)
+
 # Quick Start Guides

-# Table of Contents
-* [ClickHouse Operator Installation](#clickhouse-operator-installation)
-* [Building ClickHouse Operator from Sources](#building-clickhouse-operator-from-sources)
-* [Examples](#examples)
-  * [Trivial Example](#trivial-example)
-  * [Connect to ClickHouse Database](#connect-to-clickhouse-database)
-  * [Simple Persistent Volume Example](#simple-persistent-volume-example)
-  * [Custom Deployment with Pod and VolumeClaim Templates](#custom-deployment-with-pod-and-volumeclaim-templates)
-  * [Custom Deployment with Specific ClickHouse Configuration](#custom-deployment-with-specific-clickhouse-configuration)
+## Prerequisites

-# Prerequisites
-1. Operational Kubernetes instance
-1. Properly configured `kubectl`
-1. `curl`
+- Operational Kubernetes instance
+- Properly configured `kubectl`
+- `curl`

-# ClickHouse Operator Installation
+## RadonDB ClickHouse Operator Installation

 Apply `clickhouse-operator` installation manifest. The simplest way is directly from GitHub.
-## **In case you are convenient to install operator into `kube-system` namespace**
+### Case 1: Install operator into `kube-system` namespace

 just run:
+
 ```bash
 kubectl apply -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml
 ```
-## **If you want to install operator on kubernetes version prior `1.17` in `kube-system` namespace**
+
+### Case 2: Install operator on kubernetes version prior `1.17` in `kube-system` namespace

 just run:
+
 ```bash
 kubectl apply -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle-v1beta1.yaml
 ```

-## **In case you'd like to customize installation parameters**,
+### Case 3: Customize installation parameters

 such as the namespace to install the operator into or the operator's image, use the special installer script.
+
 ```bash
 curl -s https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator-web-installer/clickhouse-operator-install.sh | OPERATOR_NAMESPACE=test-clickhouse-operator bash
 ```
-Take into account explicitly specified namespace
-```bash
-OPERATOR_NAMESPACE=test-clickhouse-operator
-```
+
+Take into account the explicitly specified namespace.
+
 This namespace will be created and used to install `clickhouse-operator` into. The install script will download some `.yaml` and `.xml` files and install `clickhouse-operator` into the specified namespace.
+
 After installation **clickhouse-operator** will watch custom resources such as `kind: ClickhouseInstallation` only in the `test-clickhouse-operator` namespace. If no `OPERATOR_NAMESPACE` is specified, as in:
+
 ```bash
 cd ~
 curl -s https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator-web-installer/clickhouse-operator-install.sh | bash
 ```
-installer will install **clickhouse-operator** into `kube-system` namespace and will watch custom resources like a `kind: ClickhouseInstallation` in all available namespaces.
+the installer will install **clickhouse-operator** into the `kube-system` namespace and will watch custom resources such as `kind: ClickhouseInstallation` in all available namespaces.
+
+### Case 4: Not run scripts from internet in your protected environment

-## **In case you can not run scripts from internet in your protected environment**,
+You can manually download [this template file](../deploy/operator/clickhouse-operator-install-template.yaml) and edit it according to your needs. After that, apply it with `kubectl`.

-you can download manually [this template file][clickhouse-operator-install-template.yaml]
-and edit it according to your choice. After that apply it with `kubectl`.
 Or you can use this snippet instead:
+
 ```bash
 # Namespace to install operator into
 OPERATOR_NAMESPACE="${OPERATOR_NAMESPACE:-test-clickhouse-operator}"
@@ -80,7 +102,8 @@ kubectl apply --namespace="${OPERATOR_NAMESPACE}" -f <( \
 )
 ```

-## Operator installation process
+### Operator installation process
+
 ```text
 Setup ClickHouse Operator into test-clickhouse-operator namespace
 namespace/test-clickhouse-operator created
@@ -97,23 +120,22 @@ deployment.apps/clickhouse-operator created
 ```

 Check that `clickhouse-operator` is running:
+
 ```bash
-kubectl get pods -n test-clickhouse-operator
-```
-```text
+$ kubectl get pods -n test-clickhouse-operator
 NAME                                   READY   STATUS    RESTARTS   AGE
 clickhouse-operator-5ddc6d858f-drppt   1/1     Running   0          1m
 ```

-## Building ClickHouse Operator from Sources
+### Building ClickHouse Operator from Sources

-Complete instructions on how to build ClickHouse operator from sources as well as how to build a docker image and use it inside `kubernetes` described [here][build_from_sources].
+Complete instructions on how to build ClickHouse operator from sources, as well as how to build a docker image and use it inside `kubernetes`, are described [here](./operator_build_from_sources.md).

-# Examples
+## RadonDB ClickHouse Cluster Installation

-There are several ready-to-use [ClickHouseInstallation examples][chi-examples]. Below are few ones to start with.
+There are several ready-to-use [ClickHouseInstallation examples](./chi-examples/). Below are a few to start with.

-## Create Custom Namespace
+### Create Custom Namespace

 It is a good practice to have all components run in dedicated namespaces. Let's run examples in the `test-clickhouse-operator` namespace:
 ```bash
 kubectl create namespace test-clickhouse-operator
@@ -122,20 +144,21 @@
 namespace/test-clickhouse-operator created
 ```

-## Trivial example
+### Example 1: Trivial

-This is the trivial [1 shard 1 replica][01-simple-layout-01-1shard-1repl.yaml] example.
+This is the trivial [1 shard 1 replica](./chi-examples/01-simple-layout-01-1shard-1repl.yaml) example.

-**WARNING**: Do not use it for anything other than 'Hello, world!', it does not have persistent storage!
+> **WARNING**
+>
+> Do not use it for anything other than 'Hello, world!'; it does not have persistent storage!

 ```bash
-kubectl apply -n test-clickhouse-operator -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/docs/chi-examples/01-simple-layout-01-1shard-1repl.yaml
-```
-```text
+$ kubectl apply -n test-clickhouse-operator -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/docs/chi-examples/01-simple-layout-01-1shard-1repl.yaml
 clickhouseinstallation.clickhouse.radondb.com/simple-01 created
 ```

 Installation specification is straightforward and defines a 1-replica cluster:
+
 ```yaml
 apiVersion: "clickhouse.radondb.com/v1"
 kind: "ClickHouseInstallation"
@@ -143,56 +166,11 @@ metadata:
   name: "simple-01"
 ```

-Once cluster is created, there are two checks to be made.
-
-```bash
-kubectl get pods -n test-clickhouse-operator
-```
-```text
-NAME                    READY   STATUS    RESTARTS   AGE
-chi-b3d29f-a242-0-0-0   1/1     Running   0          10m
-```
-
-Watch out for 'Running' status. Also check services created by an operator:
-
-```bash
-kubectl get service -n test-clickhouse-operator
-```
-```text
-NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                          PORT(S)                         AGE
-chi-b3d29f-a242-0-0    ClusterIP      None             <none>                               8123/TCP,9000/TCP,9009/TCP      11m
-clickhouse-example-01  LoadBalancer   100.64.167.170   abc-123.us-east-1.elb.amazonaws.com  8123:30954/TCP,9000:32697/TCP   11m
-```
-
-ClickHouse is up and running!
-
-## Connect to ClickHouse Database
+### Example 2: Simple Persistent Volume

-There are two ways to connect to ClickHouse database
-
-1. In case previous command `kubectl get service -n test-clickhouse-operator` reported **EXTERNAL-IP** (abc-123.us-east-1.elb.amazonaws.com in our case) we can directly access ClickHouse with:
-```bash
-clickhouse-client -h abc-123.us-east-1.elb.amazonaws.com -u clickhouse_operator --password clickhouse_operator_password
-```
-```text
-ClickHouse client version 18.14.12.
-Connecting to abc-123.us-east-1.elb.amazonaws.com:9000.
-Connected to ClickHouse server version 19.4.3 revision 54416.
-```
-1. In case there is not **EXTERNAL-IP** available, we can access ClickHouse from inside Kubernetes cluster
-```bash
-kubectl -n test-clickhouse-operator exec -it chi-b3d29f-a242-0-0-0 -- clickhouse-client
-```
-```text
-ClickHouse client version 19.4.3.11.
-Connecting to localhost:9000 as user default.
-Connected to ClickHouse server version 19.4.3 revision 54416.
-```
-
-## Simple Persistent Volume Example
+In case of having Dynamic Volume Provisioning available, we are able to use PersistentVolumeClaims.

-In case of having Dynamic Volume Provisioning available - ex.: running on AWS - we are able to use PersistentVolumeClaims
-Manifest is [available in examples][03-persistent-volume-01-default-volume.yaml]
+Manifest is [available in examples](./chi-examples/03-persistent-volume-01-default-volume.yaml):

 ```yaml
 apiVersion: "clickhouse.radondb.com/v1"
@@ -228,14 +206,15 @@ spec:
           storage: 123Mi
 ```

-## Custom Deployment with Pod and VolumeClaim Templates
+### Example 3: Custom Deployment with Pod and VolumeClaim

 Let's install a more complex example with:
-1. Deployment specified
-1. Pod template
-1. VolumeClaim template
-Manifest is [available in examples][03-persistent-volume-02-pod-template.yaml]
+
+- Deployment specified
+- Pod template
+- VolumeClaim template
+
+Manifest is [available in examples](./chi-examples/03-persistent-volume-02-pod-template.yaml):

 ```yaml
 apiVersion: "clickhouse.radondb.com/v1"
@@ -290,9 +269,9 @@ spec:
           storage: 2Gi
 ```

-## Custom Deployment with Specific ClickHouse Configuration
+### Example 4: Custom Deployment with Specific ClickHouse Configuration

-You can tell operator to configure your ClickHouse, as shown in the example below ([link to the manifest][05-settings-01-overview.yaml]):
+You can tell operator to configure your ClickHouse, as shown in the example below ([link to the manifest](./chi-examples/05-settings-01-overview.yaml)):

 ```yaml
 apiVersion: "clickhouse.radondb.com/v1"
@@ -346,10 +325,47 @@ spec:
     replicasCount: 1
 ```

-[build_from_sources]: ./operator_build_from_sources.md
-[clickhouse-operator-install-template.yaml]: ../deploy/operator/clickhouse-operator-install-template.yaml
-[chi-examples]: ./chi-examples/
-[01-simple-layout-01-1shard-1repl.yaml]: ./chi-examples/01-simple-layout-01-1shard-1repl.yaml
-[03-persistent-volume-01-default-volume.yaml]: ./chi-examples/03-persistent-volume-01-default-volume.yaml
-[03-persistent-volume-02-pod-template.yaml]: ./chi-examples/03-persistent-volume-02-pod-template.yaml
-[05-settings-01-overview.yaml]: ./chi-examples/05-settings-01-overview.yaml
+### Validate cluster deployment
+
+Once cluster is created, there are two checks to be made.
+
+```bash
+$ kubectl get pods -n test-clickhouse-operator
+NAME                    READY   STATUS    RESTARTS   AGE
+chi-b3d29f-a242-0-0-0   1/1     Running   0          10m
+```
+
+Watch out for 'Running' status. Also check services created by an operator:
+
+```bash
+$ kubectl get service -n test-clickhouse-operator
+NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                          PORT(S)                         AGE
+chi-b3d29f-a242-0-0    ClusterIP      None             <none>                               8123/TCP,9000/TCP,9009/TCP      11m
+clickhouse-example-01  LoadBalancer   100.64.167.170   abc-123.us-east-1.elb.amazonaws.com  8123:30954/TCP,9000:32697/TCP   11m
+```
+
+ClickHouse is up and running!
+
+## Access to RadonDB ClickHouse
+
+### Via EXTERNAL-IP
+
+In case the previous command `kubectl get service -n test-clickhouse-operator` reported **EXTERNAL-IP**, we can directly access ClickHouse with:
+
+```bash
+$ clickhouse-client -h abc-123.us-east-1.elb.amazonaws.com -u clickhouse_operator --password clickhouse_operator_password
+ClickHouse client version 18.14.12.
+Connecting to abc-123.us-east-1.elb.amazonaws.com:9000.
+Connected to ClickHouse server version 19.4.3 revision 54416.
+```
+
+### Via pod-NAME
+
+In case there is no **EXTERNAL-IP** available, we can access ClickHouse from inside the Kubernetes cluster.
+
+```bash
+$ kubectl -n test-clickhouse-operator exec -it chi-b3d29f-a242-0-0-0 -- clickhouse-client
+ClickHouse client version 19.4.3.11.
+Connecting to localhost:9000 as user default.
+Connected to ClickHouse server version 19.4.3 revision 54416.
+``` diff --git a/docs/zh-cn/_images/add-clickhouse.png b/docs/zh-cn/_images/add-clickhouse.png new file mode 100644 index 00000000..7d19d93e Binary files /dev/null and b/docs/zh-cn/_images/add-clickhouse.png differ diff --git a/docs/zh-cn/_images/add-repo.png b/docs/zh-cn/_images/add-repo.png new file mode 100644 index 00000000..24f4be62 Binary files /dev/null and b/docs/zh-cn/_images/add-repo.png differ diff --git a/docs/zh-cn/_images/app-running.png b/docs/zh-cn/_images/app-running.png new file mode 100644 index 00000000..3a42aab2 Binary files /dev/null and b/docs/zh-cn/_images/app-running.png differ diff --git a/docs/zh-cn/_images/basic-info.png b/docs/zh-cn/_images/basic-info.png new file mode 100644 index 00000000..026366d9 Binary files /dev/null and b/docs/zh-cn/_images/basic-info.png differ diff --git a/docs/zh-cn/_images/chart-tab.png b/docs/zh-cn/_images/chart-tab.png new file mode 100644 index 00000000..7fd405fd Binary files /dev/null and b/docs/zh-cn/_images/chart-tab.png differ diff --git a/docs/zh-cn/_images/click-deploy-new-app.png b/docs/zh-cn/_images/click-deploy-new-app.png new file mode 100644 index 00000000..2df9ccaf Binary files /dev/null and b/docs/zh-cn/_images/click-deploy-new-app.png differ diff --git a/docs/zh-cn/_images/click-deploy.png b/docs/zh-cn/_images/click-deploy.png new file mode 100644 index 00000000..7a4f6e2a Binary files /dev/null and b/docs/zh-cn/_images/click-deploy.png differ diff --git a/docs/zh-cn/_images/clickhouse-cluster.png b/docs/zh-cn/_images/clickhouse-cluster.png new file mode 100644 index 00000000..a967e4da Binary files /dev/null and b/docs/zh-cn/_images/clickhouse-cluster.png differ diff --git a/docs/zh-cn/_images/from-app-templates.png b/docs/zh-cn/_images/from-app-templates.png new file mode 100644 index 00000000..a93da8ea Binary files /dev/null and b/docs/zh-cn/_images/from-app-templates.png differ diff --git a/docs/zh-cn/_images/get-username-password.png b/docs/zh-cn/_images/get-username-password.png new file mode 100644 index 00000000..28e8b38e Binary files /dev/null and b/docs/zh-cn/_images/get-username-password.png differ diff --git a/docs/zh-cn/_images/logo_radondb.png b/docs/zh-cn/_images/logo_radondb.png new file mode 100644 index 00000000..deba0b89 Binary files /dev/null and b/docs/zh-cn/_images/logo_radondb.png differ diff --git a/docs/zh-cn/_images/logo_radondb.svg b/docs/zh-cn/_images/logo_radondb.svg new file mode 100644 index 00000000..affc2e9a --- /dev/null +++ b/docs/zh-cn/_images/logo_radondb.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/docs/zh-cn/_images/pods-running.png b/docs/zh-cn/_images/pods-running.png new file mode 100644 index 00000000..fa5a9947 Binary files /dev/null and b/docs/zh-cn/_images/pods-running.png differ diff --git a/docs/zh-cn/_images/project-overview.png b/docs/zh-cn/_images/project-overview.png new file mode 100644 index 00000000..8de986b6 Binary files /dev/null and b/docs/zh-cn/_images/project-overview.png differ diff --git a/docs/zh-cn/_images/repo-added.png b/docs/zh-cn/_images/repo-added.png new file mode 100644 index 00000000..99cd30f3 Binary files /dev/null and b/docs/zh-cn/_images/repo-added.png differ diff --git a/docs/zh-cn/_images/statefulset-monitoring.png b/docs/zh-cn/_images/statefulset-monitoring.png new file mode 100644 index 00000000..acbb81a0 Binary files /dev/null and b/docs/zh-cn/_images/statefulset-monitoring.png differ diff --git a/docs/zh-cn/_images/statefulsets-running.png b/docs/zh-cn/_images/statefulsets-running.png new file mode 100644 index 
00000000..6602c9ca
Binary files /dev/null and b/docs/zh-cn/_images/statefulsets-running.png differ
diff --git a/docs/zh-cn/_images/use-clickhouse.png b/docs/zh-cn/_images/use-clickhouse.png
new file mode 100644
index 00000000..cec6ec4c
Binary files /dev/null and b/docs/zh-cn/_images/use-clickhouse.png differ
diff --git a/docs/zh-cn/_images/volume-status.png b/docs/zh-cn/_images/volume-status.png
new file mode 100644
index 00000000..da96f8a6
Binary files /dev/null and b/docs/zh-cn/_images/volume-status.png differ
diff --git a/docs/zh-cn/_images/volumes.png b/docs/zh-cn/_images/volumes.png
new file mode 100644
index 00000000..0f251227
Binary files /dev/null and b/docs/zh-cn/_images/volumes.png differ
diff --git a/docs/zh-cn/deploy_radondb-clickhouse_on_kubernetes.md b/docs/zh-cn/deploy_radondb-clickhouse_on_kubernetes.md
new file mode 100644
index 00000000..f97e7357
--- /dev/null
+++ b/docs/zh-cn/deploy_radondb-clickhouse_on_kubernetes.md
@@ -0,0 +1,204 @@
+Contents
+=================
+
+- [Contents](#contents)
+- [Deploy RadonDB ClickHouse on Kubernetes](#deploy-radondb-clickhouse-on-kubernetes)
+  - [Introduction](#introduction)
+  - [Prerequisites](#prerequisites)
+  - [Deployment Steps](#deployment-steps)
+    - [Step 1: Add the Helm Repository](#step-1-add-the-helm-repository)
+    - [Step 2: Deploy](#step-2-deploy)
+    - [Step 3: Verify the Deployment](#step-3-verify-the-deployment)
+      - [Check the Cluster Pods](#check-the-cluster-pods)
+      - [Check the Pod Status](#check-the-pod-status)
+  - [Access RadonDB ClickHouse](#access-radondb-clickhouse)
+    - [Via Pod](#via-pod)
+    - [Via Service](#via-service)
+  - [Persistence](#persistence)
+  - [Custom Configuration](#custom-configuration)
+
+# Deploy RadonDB ClickHouse on Kubernetes
+
+> [English](../en-us/deploy_radondb-clickhouse_on_kubernetes.md) | 简体中文
+
+## Introduction
+
+RadonDB ClickHouse is an open-source, highly available, cloud-native cluster solution based on [ClickHouse](https://clickhouse.tech/). It offers high availability, PB-scale data storage, real-time data analytics, and a stable, scalable architecture.
+
+This tutorial shows how to deploy RadonDB ClickHouse on Kubernetes from the command line.
+
+## Prerequisites
+
+- A Kubernetes cluster has been deployed and is running.
+- The Helm package manager is installed.
+
+## Deployment Steps
+
+### Step 1: Add the Helm Repository
+
+Add and update the Helm repository.
+
+```bash
+$ helm repo add <repo-name> https://radondb.github.io/radondb-clickhouse-kubernetes/
+$ helm repo update
+```
+
+**Expected output**
+
+```bash
+$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
+"ck" has been added to your repositories
+
+$ helm repo update
+Hang tight while we grab the latest from your chart repositories...
+...Successfully got an update from the "ck" chart repository
+Update Complete. ⎈Happy Helming!⎈
+```
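+
+To confirm the charts are visible before installing, a quick search of the freshly added repository helps (a minimal check; the `ck` alias follows the expected output above):
+
+```bash
+# List the charts published in the repository we just added
+$ helm search repo ck
+```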
+
+### Step 2: Deploy
+
+> Because ZooKeeper stores the ClickHouse metadata, you can install the ZooKeeper components and the ClickHouse cluster in one step.
+
+```bash
+$ helm install --generate-name <repo-name>/clickhouse-cluster -n <namespace> --version v1.0
+```
+
+- For more parameter descriptions, see [values.yaml](../../clickhouse-cluster-helm/values.yaml).
+- To customize more parameters, edit the configuration in the cluster's `values.yaml` file; see [Custom Configuration](#custom-configuration) for details.
+
+**Expected output**
+
+```bash
+$ helm install clickhouse ck/clickhouse-cluster -n test --version v1.0
+NAME: clickhouse
+LAST DEPLOYED: Thu June 17 07:55:42 2021
+NAMESPACE: test
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+$ helm list -n test
+NAME         NAMESPACE   REVISION   UPDATED                                  STATUS     CHART             APP VERSION
+clickhouse   test        1          2021-06-07 07:55:42.860240764 +0000 UTC  deployed   clickhouse-v1.0   21.1
+```
+
+### Step 3: Verify the Deployment
+
+#### Check the Cluster Pods
+
+Run the following command to inspect the cluster you created.
+
+```bash
+kubectl get all --selector app.kubernetes.io/instance=<app-name> -n <namespace>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get all --selector app.kubernetes.io/instance=clickhouse -n test
+NAME                     READY   STATUS    RESTARTS   AGE
+pod/clickhouse-s0-r0-0   1/1     Running   0          72s
+pod/clickhouse-s0-r1-0   1/1     Running   0          72s
+
+NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+service/clickhouse         ClusterIP   10.96.230.92    <none>        9000/TCP,8123/TCP   72s
+service/clickhouse-s0-r0   ClusterIP   10.96.83.41     <none>        9000/TCP,8123/TCP   72s
+service/clickhouse-s0-r1   ClusterIP   10.96.240.111   <none>        9000/TCP,8123/TCP   72s
+
+NAME                                READY   AGE
+statefulset.apps/clickhouse-s0-r0   1/1     72s
+statefulset.apps/clickhouse-s0-r1   1/1     72s
+```
+
+#### Check the Pod Status
+
+Run the following command and watch the `Events` section of the output. Once the latest events report `Started` and stay stable, the ClickHouse cluster is ready to access.
+
+```bash
+kubectl describe pod <pod-name> -n <namespace>
+```
+
+**Expected output**
+
+```bash
+$ kubectl describe pod clickhouse-s0-r0-0 -n test
+...
+Events:
+  Type     Reason                  Age                    From                     Message
+  ----     ------                  ----                   ----                     -------
+  Warning  FailedScheduling        7m30s (x3 over 7m42s)  default-scheduler        error while running "VolumeBinding" filter plugin for pod "clickhouse-s0-r0-0": pod has unbound immediate PersistentVolumeClaims
+  Normal   Scheduled               7m28s                  default-scheduler        Successfully assigned default/clickhouse-s0-r0-0 to worker-p004
+  Normal   SuccessfulAttachVolume  7m6s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-21c5de1f-c396-4743-a31b-2b094ecaf79b"
+  Warning  Unhealthy               5m4s (x3 over 6m4s)    kubelet, worker-p004     Liveness probe failed: Code: 210. DB::NetException: Connection refused (localhost:9000)
+  Normal   Killing                 5m4s                   kubelet, worker-p004     Container clickhouse failed liveness probe, will be restarted
+  Normal   Pulled                  4m34s (x2 over 6m50s)  kubelet, worker-p004     Container image "tceason/clickhouse-server:v21.1.3.32-stable" already present on machine
+  Normal   Created                 4m34s (x2 over 6m50s)  kubelet, worker-p004     Created container clickhouse
+  Normal   Started                 4m33s (x2 over 6m48s)  kubelet, worker-p004     Started container clickhouse
+```
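+
+Instead of polling `kubectl describe`, you can also block until the Pods report ready (a sketch, reusing the `clickhouse` release and `test` namespace from the expected output above):
+
+```bash
+# Wait up to 5 minutes for every Pod of the release to become Ready
+$ kubectl wait pod --selector app.kubernetes.io/instance=clickhouse \
+    --for=condition=Ready --timeout=300s -n test
+```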
+
+## Access RadonDB ClickHouse
+
+### Via Pod
+
+Access a ClickHouse Pod directly with the `kubectl` tool, as in the following example.
+
+```bash
+$ kubectl exec -it <pod-name> -n <namespace> -- clickhouse-client --user=<username> --password=<password>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get pods -n test |grep clickhouse
+clickhouse-s0-r0-0   1/1   Running   0   8m50s
+clickhouse-s0-r1-0   1/1   Running   0   8m50s
+
+$ kubectl exec -it clickhouse-s0-r0-0 -n test -- clickhouse-client -u default --password=C1ickh0use --query='select hostName()'
+clickhouse-s0-r0-0
+```
+
+### Via Service
+
+```bash
+$ echo <query> | curl 'http://<username>:<password>@<service-ip>:<http-port>/' --data-binary @-
+```
+
+**Expected output**
+
+```bash
+$ kubectl get service -n test | grep clickhouse
+clickhouse         ClusterIP   10.96.71.193   <none>   9000/TCP,8123/TCP   12m
+clickhouse-s0-r0   ClusterIP   10.96.40.207   <none>   9000/TCP,8123/TCP   12m
+clickhouse-s0-r1   ClusterIP   10.96.63.179   <none>   9000/TCP,8123/TCP   12m
+
+$ echo 'select hostname()' | curl 'http://default:C1ickh0use@10.96.71.193:8123/' --data-binary @-
+clickhouse-s0-r1-0
+$ echo 'select hostname()' | curl 'http://default:C1ickh0use@10.96.71.193:8123/' --data-binary @-
+clickhouse-s0-r0-0
+```
+
+## Persistence
+
+Configure Pods to use PersistentVolumeClaims as storage to make ClickHouse data persistent.
+
+By default, each Pod creates one PVC and mounts it at `/var/lib/clickhouse`:
+
+1. Create a Pod that uses a PVC as its storage.
+2. Create a PVC that automatically binds to a suitable PersistentVolume.
+
+> **Note**
+> A PersistentVolumeClaim can request PersistentVolumes with different characteristics.
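+
+To see the claims the chart created and the volumes they are bound to, a quick listing is enough (assuming the `test` namespace from the examples above):
+
+```bash
+# Each ClickHouse Pod should own one Bound claim mounted at /var/lib/clickhouse
+$ kubectl get pvc -n test
+```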
+
+## Custom Configuration
+
+To customize more parameters, edit the configuration in the cluster's [values.yaml](../../clickhouse-cluster-helm/values.yaml).
+
+1. Download the `values.yaml` file.
+2. Modify the parameter values in `values.yaml`.
+3. Run the following command to deploy the cluster.
+
+```bash
+$ helm install --generate-name <repo-name>/clickhouse-cluster -n <namespace> \
+    -f <path>/to/values.yaml
+```
\ No newline at end of file
diff --git a/docs/zh-cn/deploy_radondb-clickhouse_with_operator_on_kubernetes.md b/docs/zh-cn/deploy_radondb-clickhouse_with_operator_on_kubernetes.md
new file mode 100644
index 00000000..33525b3f
--- /dev/null
+++ b/docs/zh-cn/deploy_radondb-clickhouse_with_operator_on_kubernetes.md
@@ -0,0 +1,241 @@
+Contents
+=================
+- [Contents](#contents)
+- [Deploy RadonDB ClickHouse on Kubernetes](#deploy-radondb-clickhouse-on-kubernetes)
+  - [Introduction](#introduction)
+  - [Prerequisites](#prerequisites)
+  - [Deployment Steps](#deployment-steps)
+    - [Step 1: Add the Helm Repository](#step-1-add-the-helm-repository)
+    - [Step 2: Deploy the RadonDB ClickHouse Operator](#step-2-deploy-the-radondb-clickhouse-operator)
+    - [Step 3: Deploy the RadonDB ClickHouse Cluster](#step-3-deploy-the-radondb-clickhouse-cluster)
+    - [Step 4: Verify the Deployment](#step-4-verify-the-deployment)
+      - [Check the Pod Status](#check-the-pod-status)
+      - [Check the Service Status](#check-the-service-status)
+  - [Access RadonDB ClickHouse](#access-radondb-clickhouse)
+    - [Via Pod](#via-pod)
+    - [Via Service](#via-service)
+  - [Persistence](#persistence)
+  - [Configuration](#configuration)
+  - [Custom Configuration](#custom-configuration)
+
+# Deploy RadonDB ClickHouse on Kubernetes
+
+> [English](../en-us/deploy_radondb-clickhouse_with_operator_on_kubernetes.md) | 简体中文
+
+## Introduction
+
+RadonDB ClickHouse is an open-source, highly available, cloud-native cluster solution based on [ClickHouse](https://clickhouse.tech/). It offers high availability, PB-scale data storage, real-time data analytics, and a stable, scalable architecture.
+
+This tutorial shows how to deploy RadonDB ClickHouse on Kubernetes from the command line.
+
+## Prerequisites
+
+- A Kubernetes cluster has been deployed and is running.
+- The Helm package manager is installed.
+
+## Deployment Steps
+
+### Step 1: Add the Helm Repository
+
+Add and update the Helm repository.
+
+```bash
+$ helm repo add <repo-name> https://radondb.github.io/radondb-clickhouse-kubernetes/
+$ helm repo update
+```
+
+**Expected output**
+
+```bash
+$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
+"ck" has been added to your repositories
+
+$ helm repo update
+Hang tight while we grab the latest from your chart repositories...
+...Successfully got an update from the "ck" chart repository
+Update Complete. ⎈Happy Helming!⎈
+```
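+
+To double-check which repositories are registered before moving on, you can list them (a quick sanity check; the output depends on your local Helm setup):
+
+```bash
+# The repository added above should appear in this list
+$ helm repo list
+```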
+
+### Step 2: Deploy the RadonDB ClickHouse Operator
+
+```bash
+$ helm install --generate-name -n <namespace> <repo-name>/clickhouse-operator
+```
+
+**Expected output**
+
+```bash
+$ helm install clickhouse-operator ck/clickhouse-operator -n kube-system
+NAME: clickhouse-operator
+LAST DEPLOYED: Wed Aug 17 14:43:44 2021
+NAMESPACE: kube-system
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+```
+
+> **Note**
+>
+> In the example above, the ClickHouse Operator is installed into the `kube-system` namespace, so a Kubernetes cluster only needs the ClickHouse Operator installed once.
+
+### Step 3: Deploy the RadonDB ClickHouse Cluster
+
+```bash
+$ helm install --generate-name <repo-name>/clickhouse-cluster -n <namespace> \
+    --set <parameter>=<value>
+```
+
+- For more parameter descriptions, see [Configuration](#configuration).
+- To customize more parameters, edit the configuration in the cluster's `values.yaml` file; see [Custom Configuration](#custom-configuration) for details.
+
+**Expected output**
+
+```bash
+$ helm install clickhouse ck/clickhouse-cluster -n test
+NAME: clickhouse
+LAST DEPLOYED: Wed Aug 17 14:48:12 2021
+NAMESPACE: test
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+```
+
+### Step 4: Verify the Deployment
+
+#### Check the Pod Status
+
+Run the following command to check the status of the cluster Pods you created.
+
+```bash
+$ kubectl get pods -n <namespace>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get pods -n test
+NAME                            READY   STATUS    RESTARTS   AGE
+chi-ClickHouse-replicas-0-0-0   2/2     Running   0          3m13s
+chi-ClickHouse-replicas-0-1-0   2/2     Running   0          2m51s
+zk-clickhouse-cluster-0         1/1     Running   0          3m13s
+zk-clickhouse-cluster-1         1/1     Running   0          3m13s
+zk-clickhouse-cluster-2         1/1     Running   0          3m13s
+```
+
+#### Check the Service Status
+
+Run the following command to check the status of the cluster Services.
+
+```bash
+$ kubectl get service -n <namespace>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get service -n test
+NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
+chi-ClickHouse-replicas-0-0    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m53s
+chi-ClickHouse-replicas-0-1    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m36s
+clickhouse-ClickHouse          ClusterIP   10.96.137.152   <none>        8123/TCP,9000/TCP            3m14s
+zk-client-clickhouse-cluster   ClusterIP   10.107.33.51    <none>        2181/TCP,7000/TCP            3m13s
+zk-server-clickhouse-cluster   ClusterIP   None            <none>        2888/TCP,3888/TCP            3m13s
+```
+
+## Access RadonDB ClickHouse
+
+### Via Pod
+
+Use the `kubectl` tool to access a ClickHouse Pod directly.
+
+```bash
+$ kubectl exec -it <pod-name> -n <namespace> -- clickhouse-client --user=<username> --password=<password>
+```
+
+**Expected output**
+
+```bash
+$ kubectl get pods |grep clickhouse
+chi-ClickHouse-replicas-0-0-0   1/1   Running   0   8m50s
+chi-ClickHouse-replicas-0-1-0   1/1   Running   0   8m50s
+
+$ kubectl exec -it chi-ClickHouse-replicas-0-0-0 -- clickhouse-client -u clickhouse --password=c1ickh0use0perator --query='select hostName()'
+chi-ClickHouse-replicas-0-0-0
+```
+
+### Via Service
+
+```bash
+$ echo '<query>' | curl 'http://<username>:<password>@<service-ip>:<http-port>/' --data-binary @-
+```
+
+**Expected output**
+
+```bash
+$ kubectl get service |grep clickhouse
+clickhouse-ClickHouse         ClusterIP   10.96.137.152   <none>   9000/TCP,8123/TCP   12m
+chi-ClickHouse-replicas-0-0   ClusterIP   None            <none>   9000/TCP,8123/TCP   12m
+chi-ClickHouse-replicas-0-1   ClusterIP   None            <none>   9000/TCP,8123/TCP   12m
+
+$ echo 'select hostname()' | curl 'http://clickhouse:c1ickh0use0perator@10.96.137.152:8123/' --data-binary @-
+chi-ClickHouse-replicas-0-1-0
+$ echo 'select hostname()' | curl 'http://clickhouse:c1ickh0use0perator@10.96.137.152:8123/' --data-binary @-
+chi-ClickHouse-replicas-0-0-0
+```
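+
+If you would rather test from a workstation outside the cluster, port-forwarding the service is an option (a sketch, assuming the `clickhouse-ClickHouse` service and credentials from the expected output above):
+
+```bash
+# Forward the HTTP interface to localhost; leave this running,
+# then issue the curl query from a second shell
+$ kubectl port-forward service/clickhouse-ClickHouse 8123:8123 -n test
+$ echo 'select 1' | curl 'http://clickhouse:c1ickh0use0perator@127.0.0.1:8123/' --data-binary @-
+```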
+
+## Persistence
+
+Configure Pods to use PersistentVolumeClaims as storage to make ClickHouse data persistent.
+
+By default, each Pod creates one PVC and mounts it at `/var/lib/clickhouse`:
+
+1. Create a Pod that uses a PVC as its storage.
+2. Create a PVC that automatically binds to a suitable PersistentVolume.
+
+> **Note**
+>
+> A PersistentVolumeClaim can request PersistentVolumes with different characteristics.
+
+## Configuration
+
+| Parameter | Description | Default |
+|:----|:----|:----|
+| **ClickHouse** | | |
+| `clickhouse.clusterName` | ClickHouse cluster name. | all-nodes |
+| `clickhouse.shardscount` | Shard count. It cannot be reduced once set. | 1 |
+| `clickhouse.replicascount` | Replica count. It cannot be modified once set. | 2 |
+| `clickhouse.image` | ClickHouse image name; modifying it is not recommended. | radondb/clickhouse-server:v21.1.3.32-stable |
+| `clickhouse.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | IfNotPresent |
+| `clickhouse.resources.memory` | K8s memory resources requested by a single Pod. | 1Gi |
+| `clickhouse.resources.cpu` | K8s CPU resources requested by a single Pod. | 0.5 |
+| `clickhouse.resources.storage` | K8s storage resources requested by a single Pod. | 10Gi |
+| `clickhouse.user` | ClickHouse user array. Each user needs a username, password, and networks array. | [{"username": "clickhouse", "password": "c1ickh0use0perator", "networks": ["127.0.0.1", "::/0"]}] |
+| `clickhouse.port.tcp` | Port for the native interface. | 9000 |
+| `clickhouse.port.http` | Port for the HTTP/REST interface. | 8123 |
+| `clickhouse.svc.type` | K8s service type. The value can be ClusterIP/NodePort/LoadBalancer. | ClusterIP |
+| `clickhouse.svc.qceip` | If the service type is LoadBalancer, you need to configure a load balancer provided by a third-party platform. | nil |
+| **BusyBox** | | |
+| `busybox.image` | BusyBox image name; modifying it is not recommended. | busybox |
+| `busybox.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | Always |
+| **ZooKeeper** | | |
+| `zookeeper.install` | Whether the operator creates ZooKeeper. | true |
+| `zookeeper.port` | ZooKeeper service port. | 2181 |
+| `zookeeper.replicas` | ZooKeeper cluster replica count. | 3 |
+| `zookeeper.image` | ZooKeeper image name; modifying it is not recommended. | radondb/zookeeper:3.6.2 |
+| `zookeeper.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | Always |
+| `zookeeper.resources.memory` | K8s memory resources requested by a single Pod. | Deprecated if install = true |
+| `zookeeper.resources.cpu` | K8s CPU resources requested by a single Pod. | Deprecated if install = true |
+| `zookeeper.resources.storage` | K8s storage resources requested by a single Pod. | Deprecated if install = true |
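+
+Any of the parameters above can also be overridden at install time instead of editing `values.yaml`; for example (an illustrative sketch, reusing the `ck` repository alias from the earlier steps):
+
+```bash
+# Install with two shards and a NodePort service instead of the defaults
+$ helm install clickhouse ck/clickhouse-cluster -n test \
+    --set clickhouse.shardscount=2 \
+    --set clickhouse.svc.type=NodePort
+```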
+
+## Custom Configuration
+
+To customize more parameters, edit the configuration in the cluster's [values.yaml](../../clickhouse-cluster/values.yaml).
+
+1. Download the `values.yaml` file.
+2. Modify the parameter values in `values.yaml`.
+3. Run the following command to deploy the cluster.
+
+```bash
+$ helm install --generate-name <repo-name>/clickhouse-cluster -n <namespace> \
+    -f <path>/to/values.yaml
+```
diff --git a/docs/zh-cn/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md b/docs/zh-cn/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md
new file mode 100644
index 00000000..ee3482e5
--- /dev/null
+++ b/docs/zh-cn/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md
@@ -0,0 +1,142 @@
+Contents
+=================
+
+- [Contents](#contents)
+- [Deploy RadonDB ClickHouse on KubeSphere](#deploy-radondb-clickhouse-on-kubesphere)
+  - [Introduction](#introduction)
+  - [Prerequisites](#prerequisites)
+  - [Deployment Steps](#deployment-steps)
+    - [Step 1: Deploy the RadonDB ClickHouse Operator](#step-1-deploy-the-radondb-clickhouse-operator)
+    - [Step 2: Add an App Repository](#step-2-add-an-app-repository)
+    - [Step 3: Deploy the ClickHouse Cluster](#step-3-deploy-the-clickhouse-cluster)
+    - [Step 4: Verify the Deployment](#step-4-verify-the-deployment)
+  - [Access RadonDB ClickHouse](#access-radondb-clickhouse)
+
+# Deploy RadonDB ClickHouse on KubeSphere
+
+> [English](../en-us/deploy_radondb-clickhouse_with_operator_on_kubesphere_appstore.md) | 简体中文
+
+## Introduction
+
+RadonDB ClickHouse is an open-source, highly available, cloud-native cluster solution based on [ClickHouse](https://clickhouse.tech/). It offers high availability, PB-scale data storage, real-time data analytics, and a stable, scalable architecture.
+
+This tutorial shows how to deploy the ClickHouse Operator and a ClickHouse cluster on KubeSphere.
+
+## Prerequisites
+
+- A KubeSphere cluster has been deployed, with the [OpenPitrix system enabled](https://kubesphere.io/zh/docs/pluggable-components/app-store/) and a [workspace, project, account, and role created](https://kubesphere.io/zh/docs/quick-start/create-workspace-and-project/).
+- [External access to KubeSphere has been enabled](https://kubesphere.io/zh/docs/project-administration/project-gateway/).
+
+## Deployment Steps
+
+### Step 1: Deploy the RadonDB ClickHouse Operator
+
+Log in to the KubeSphere web console as `admin`, open **Kubectl** from the **Toolbox**, and run the following command to install the ClickHouse Operator. At least two available cluster nodes are recommended.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml
+```
+
+> **Note**
+>
+> The RadonDB ClickHouse Operator is installed into the `kube-system` namespace, so a KubeSphere cluster only needs the ClickHouse Operator installed once.
+
+
+### Step 2: Add an App Repository
+
+1. Log in to the KubeSphere web console as `ws-admin`. In your workspace, go to **App Repositories** under **App Management** and click **Add Repository**.
+
+   ![add-repo](_images/add-repo.png)
+
+2. In the dialog that appears, enter `clickhouse` as the app repository name and `https://radondb.github.io/radondb-clickhouse-operator/` as the repository URL. Click **Validate** to verify the URL; a green check mark appears next to the URL once it passes. Click **OK** to continue.
+
+3. After the repository is imported into KubeSphere, the ClickHouse repository appears in the list.
+
+
+### Step 3: Deploy the ClickHouse Cluster
+
+1. Log in to the KubeSphere web console as `project-regular`. In the `demo-project` project, go to **Apps** under **Application Workloads** and click **Deploy New App**.
+
+   ![click-deploy-new-app](_images/click-deploy-new-app.png)
+
+2. In the dialog, select **From App Templates**.
+
+   ![from-app-templates](_images/from-app-templates.png)
+
+3. Select the `clickhouse` app repository from the drop-down menu, then click **clickhouse-cluster**.
+
+   ![clickhouse-cluster](_images/clickhouse-cluster.png)
+
+4. On the **Chart Files** tab, you can view the configuration directly in the console or download the default `values.yaml` file to review it. Under **Versions**, select a version number and click **Deploy** to continue.
+
+   ![chart-tab](_images/chart-tab.png)
+
+5. On the **Basic Information** page, confirm the app name, app version, and deployment location. Click **Next** to continue.
+
+   ![basic-info](_images/basic-info.png)
+
+6. On the **App Configurations** page, either edit the `values.yaml` file or click **Deploy** directly to use the default configuration.
+
+   ![click-deploy](_images/click-deploy.png)
+
+7. Wait for the ClickHouse cluster to come up and run normally. You can view the deployed app on the **Apps** page under **Application Workloads**.
+
+   ![app-running](_images/app-running.png)
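+
+If you also have kubectl access from the **Toolbox**, you can watch the installation object the operator manages come up (a sketch; the resource name follows the app name you chose):
+
+```bash
+# One ClickHouseInstallation object is created per deployed app
+$ kubectl get clickhouseinstallations -n demo-project
+```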
+
+### Step 4: Verify the Deployment
+
+1. Log in to the KubeSphere web console as `project-regular`.
+
+2. Go to **Workloads** under **Application Workloads** and click **StatefulSets** to check the cluster status.
+
+   ![statefulsets-running](_images/statefulsets-running.png)
+
+   Open the detail page of a StatefulSet and click the **Monitoring** tab to view cluster metrics over a time range.
+
+   ![statefulset-monitoring](_images/statefulset-monitoring.png)
+
+3. Go to **Pods** under **Application Workloads** to view Pods in all states.
+
+   ![pods-running](_images/pods-running.png)
+
+4. Go to **Volumes** under **Storage Management** to view the persistent volumes; all components use persistent storage.
+
+   ![volumes](_images/volumes.png)
+
+   Check a volume's usage. Taking one of the data nodes as an example, you can see monitoring data such as the storage capacity currently in use and the remaining capacity.
+
+   ![volume-status](_images/volume-status.png)
+
+5. On the project **Overview** page, you can view the resource usage of the current project.
+
+   ![project-overview](_images/project-overview.png)
+
+## Access RadonDB ClickHouse
+
+1. Log in to the KubeSphere web console as `admin`, hover over the hammer icon in the lower-right corner, and select **Kubectl**.
+
+2. In the terminal window, run the following command, then look up the ClickHouse cluster's username and password in the output.
+
+   ```bash
+   kubectl edit chi <app-name> -n <project-name>
+   ```
+
+   > **Note**
+   >
+   > In the following example, **app name** is `clickhouse-app` and **project name** is `demo-project`.
+
+   ![get-username-password](_images/get-username-password.png)
+
+3. Run the following command to access the ClickHouse cluster; you can then list the databases with `show databases`.
+
+   ```bash
+   kubectl exec -it <pod-name> -n <project-name> -- clickhouse-client --user=<username> --password=<password>
+   ```
+
+   > **Note**
+   >
+   > - In the following example, **pod name** is `chi-clickhouse-app-all-nodes-0-1-0`, **project name** is `demo-project`, **user name** is `clickhouse`, and **password** is `clickh0use0perator`.
+   >
+   > - You can find the **pod name** under **Pods** in **Application Workloads**.
+
+   ![use-clickhouse](_images/use-clickhouse.png)
diff --git a/docs/zh-cn/quick_start.md b/docs/zh-cn/quick_start.md
new file mode 100644
index 00000000..1f552ca7
--- /dev/null
+++ b/docs/zh-cn/quick_start.md
@@ -0,0 +1,377 @@
+> [English](../quick_start.md) | 简体中文
+
+---
+
+- [Quick Start](#quick-start)
+  - [Prerequisites](#prerequisites)
+  - [Deploy the RadonDB ClickHouse Operator](#deploy-the-radondb-clickhouse-operator)
+    - [Case 1: Deploy into the `kube-system` namespace](#case-1-deploy-into-the-kube-system-namespace)
+    - [Case 2: Deploy into `kube-system` on Kubernetes 1.17 and below](#case-2-deploy-into-kube-system-on-kubernetes-117-and-below)
+    - [Case 3: Deploy into a custom namespace](#case-3-deploy-into-a-custom-namespace)
+    - [Case 4: Deploy offline](#case-4-deploy-offline)
+    - [Verify the Operator Deployment](#verify-the-operator-deployment)
+    - [Build the Operator from Source](#build-the-operator-from-source)
+  - [Deploy a RadonDB ClickHouse Cluster](#deploy-a-radondb-clickhouse-cluster)
+    - [Create a Custom Namespace](#create-a-custom-namespace)
+    - [Example 1: A Test Cluster](#example-1-a-test-cluster)
+    - [Example 2: Default Persistent Volumes](#example-2-default-persistent-volumes)
+    - [Example 3: Custom Pod and VolumeClaim Templates](#example-3-custom-pod-and-volumeclaim-templates)
+    - [Example 4: Custom ClickHouse Configuration](#example-4-custom-clickhouse-configuration)
+    - [Validate the Cluster Deployment](#validate-the-cluster-deployment)
+  - [Access RadonDB ClickHouse](#access-radondb-clickhouse)
+    - [Via EXTERNAL-IP](#via-external-ip)
+    - [Via pod-NAME](#via-pod-name)
+
+# Quick Start
+
+## Prerequisites
+
+- A Kubernetes cluster is up and running.
+- The `kubectl` tool is installed and correctly configured.
+- The `curl` tool is available.
+
+## Deploy the RadonDB ClickHouse Operator
+
+The easiest way to deploy `radondb-clickhouse-operator` is to apply the deployment example straight from `github`.
+
+### Case 1: Deploy into the `kube-system` namespace
+
+Run the following command against the `kube-system` namespace:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml
+```
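+
+After applying the bundle, the operator Pod should appear shortly (a quick check against the `kube-system` install above):
+
+```bash
+# The operator deployment runs in kube-system in this case
+$ kubectl get pods -n kube-system | grep clickhouse-operator
+```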
+
+### Case 2: Deploy into `kube-system` on Kubernetes 1.17 and below
+
+Run the following command against the `kube-system` namespace:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle-v1beta1.yaml
+```
+
+### Case 3: Deploy into a custom namespace
+
+Use the following install script to choose where the Operator and its images are deployed.
+
+```bash
+curl -s https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator-web-installer/clickhouse-operator-install.sh | OPERATOR_NAMESPACE=test-clickhouse-operator bash
+```
+
+The command above creates a new namespace and installs `radondb-clickhouse-operator` into it. The installation downloads the required `.yaml` and `.xml` files. On success, it reports that the custom resource `kind: ClickhouseInstallation` was deployed into the `test-clickhouse-operator` namespace.
+
+If `OPERATOR_NAMESPACE` is not set, the operator is deployed into the default `kube-system` namespace, and on success it reports that the custom resource `kind: ClickhouseInstallation` is served for all namespaces.
+
+```bash
+cd ~
+curl -s https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator-web-installer/clickhouse-operator-install.sh | bash
+```
+
+### Case 4: Deploy offline
+
+Download the [radondb-clickhouse-operator install template](../deploy/operator/clickhouse-operator-install-template.yaml), customize its settings, and apply the template with `kubectl`.
+
+```bash
+kubectl apply -f <path>/clickhouse-operator-install-template.yaml
+```
+
+Alternatively, run the following script directly with `kubectl`.
+
+```bash
+# Namespace to install operator into
+OPERATOR_NAMESPACE="${OPERATOR_NAMESPACE:-test-clickhouse-operator}"
+# Namespace to install metrics-exporter into
+METRICS_EXPORTER_NAMESPACE="${OPERATOR_NAMESPACE}"
+
+# Operator's docker image
+OPERATOR_IMAGE="${OPERATOR_IMAGE:-radondb/chronus-operator:latest}"
+# Metrics exporter's docker image
+METRICS_EXPORTER_IMAGE="${METRICS_EXPORTER_IMAGE:-radondb/chronus-metrics-operator:latest}"
+
+# Setup clickhouse-operator into specified namespace
+kubectl apply --namespace="${OPERATOR_NAMESPACE}" -f <( \
+  curl -s https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-template.yaml | \
+    OPERATOR_IMAGE="${OPERATOR_IMAGE}" \
+    OPERATOR_NAMESPACE="${OPERATOR_NAMESPACE}" \
+    METRICS_EXPORTER_IMAGE="${METRICS_EXPORTER_IMAGE}" \
+    METRICS_EXPORTER_NAMESPACE="${METRICS_EXPORTER_NAMESPACE}" \
+    envsubst \
+)
+```
+
+### Verify the Operator Deployment
+
+Following the examples above, a successful RadonDB ClickHouse Operator deployment echoes information like the following.
+
+```text
+Setup ClickHouse Operator into test-clickhouse-operator namespace
+namespace/test-clickhouse-operator created
+customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.radondb.com configured
+serviceaccount/clickhouse-operator created
+clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator configured
+service/clickhouse-operator-metrics created
+configmap/etc-clickhouse-operator-files created
+configmap/etc-clickhouse-operator-confd-files created
+configmap/etc-clickhouse-operator-configd-files created
+configmap/etc-clickhouse-operator-templatesd-files created
+configmap/etc-clickhouse-operator-usersd-files created
+deployment.apps/clickhouse-operator created
+```
+
+Run the following command to confirm that `radondb-clickhouse-operator` is running properly.
+
+```bash
+$ kubectl get pods -n test-clickhouse-operator
+NAME                                   READY   STATUS    RESTARTS   AGE
+clickhouse-operator-5ddc6d858f-drppt   1/1     Running   0          1m
+```
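+
+You can also confirm that the custom resource definition registered correctly (the CRD name matches the `configured` line in the output above):
+
+```bash
+$ kubectl get crd clickhouseinstallations.clickhouse.radondb.com
+```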
+
+### Build the Operator from Source
+
+For how to build the RadonDB ClickHouse Operator from source, and how to build a docker image and use it inside `kubernetes`, see [Build the Operator from Source](../operator_build_from_sources.md).
+
+## Deploy a RadonDB ClickHouse Cluster
+
+The `radondb-clickhouse-operator` project provides multiple [cluster deployment examples](../chi-examples/); a few of them are described below.
+
+### Create a Custom Namespace
+
+To make a RadonDB ClickHouse cluster easy to manage and efficient to run, we recommend placing all cluster components in the same namespace. The following creates a `test-clickhouse-operator` namespace as an example.
+
+```bash
+$ kubectl create namespace test-clickhouse-operator
+namespace/test-clickhouse-operator created
+```
+
+### Example 1: A Test Cluster
+
+The following creates a [1 shard 1 replica](../chi-examples/01-simple-layout-01-1shard-1repl.yaml) test cluster as an example.
+
+> **Note**
+>
+> This cluster has no persistent storage and is intended for deployment verification only.
+
+```bash
+$ kubectl apply -n test-clickhouse-operator -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/docs/chi-examples/01-simple-layout-01-1shard-1repl.yaml
+clickhouseinstallation.clickhouse.radondb.com/simple-01 created
+```
+
+The manifest defines a single-replica RadonDB ClickHouse cluster.
+
+```yaml
+apiVersion: "clickhouse.radondb.com/v1"
+kind: "ClickHouseInstallation"
+metadata:
+  name: "simple-01"
+```
+
+### Example 2: Default Persistent Volumes
+
+The [default persistent volume example](../chi-examples/03-persistent-volume-01-default-volume.yaml) defines a RadonDB ClickHouse cluster with dynamically provisioned persistent volumes.
+
+```bash
+$ kubectl apply -n test-clickhouse-operator -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/docs/chi-examples/03-persistent-volume-01-default-volume.yaml
+clickhouseinstallation.clickhouse.radondb.com/pv-simple created
+```
+
+```yaml
+apiVersion: "clickhouse.radondb.com/v1"
+kind: "ClickHouseInstallation"
+metadata:
+  name: "pv-simple"
+spec:
+  defaults:
+    templates:
+      dataVolumeClaimTemplate: volume-template
+      logVolumeClaimTemplate: volume-template
+  configuration:
+    clusters:
+      - name: "simple"
+        layout:
+          shardsCount: 1
+          replicasCount: 1
+      - name: "replicas"
+        layout:
+          shardsCount: 1
+          replicasCount: 2
+      - name: "shards"
+        layout:
+          shardsCount: 2
+  templates:
+    volumeClaimTemplates:
+      - name: volume-template
+        spec:
+          accessModes:
+            - ReadWriteOnce
+          resources:
+            requests:
+              storage: 123Mi
+```
+
+### Example 3: Custom Pod and VolumeClaim Templates
+
+The [custom Pod and VolumeClaim template example](../chi-examples/03-persistent-volume-02-volume.yaml) defines the following configuration, applied with the command shown below:
+
+- An explicitly specified deployment
+- A Pod template
+- A VolumeClaim template
+
+```bash
+$ kubectl apply -n test-clickhouse-operator -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/docs/chi-examples/03-persistent-volume-02-volume.yaml
+clickhouseinstallation.clickhouse.radondb.com/pv-log created
+```
+
+```yaml
+apiVersion: "clickhouse.radondb.com/v1"
+kind: "ClickHouseInstallation"
+metadata:
+  name: "pv-log"
+spec:
+  configuration:
+    clusters:
+      - name: "deployment-pv"
+        # Templates are specified for this cluster explicitly
+        templates:
+          podTemplate: pod-template-with-volumes
+        layout:
+          shardsCount: 1
+          replicasCount: 1
+
+  templates:
+    podTemplates:
+      - name: pod-template-with-volumes
+        spec:
+          containers:
+            - name: clickhouse
+              image: yandex/clickhouse-server:19.3.7
+              ports:
+                - name: http
+                  containerPort: 8123
+                - name: client
+                  containerPort: 9000
+                - name: interserver
+                  containerPort: 9009
+              volumeMounts:
+                - name: data-storage-vc-template
+                  mountPath: /var/lib/clickhouse
+                - name: log-storage-vc-template
+                  mountPath: /var/log/clickhouse-server
+
+    volumeClaimTemplates:
+      - name: data-storage-vc-template
+        spec:
+          accessModes:
+            - ReadWriteOnce
+          resources:
+            requests:
+              storage: 3Gi
+      - name: log-storage-vc-template
+        spec:
+          accessModes:
+            - ReadWriteOnce
+          resources:
+            requests:
+              storage: 2Gi
+```
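+
+At this point you can list every installation the operator manages; `chi` is the short name of the `ClickHouseInstallation` resource, as used elsewhere in these docs:
+
+```bash
+# simple-01, pv-simple, and pv-log should all be listed once applied
+$ kubectl get chi -n test-clickhouse-operator
+```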
+
+### Example 4: Custom ClickHouse Configuration
+
+The [custom ClickHouse configuration example](../chi-examples/05-settings-01-overview.yaml) shows how the Operator can configure the RadonDB ClickHouse cluster itself.
+
+```bash
+$ kubectl apply -n test-clickhouse-operator -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/docs/chi-examples/05-settings-01-overview.yaml
+clickhouseinstallation.clickhouse.radondb.com/settings-01 created
+```
+
+```yaml
+apiVersion: "clickhouse.radondb.com/v1"
+kind: "ClickHouseInstallation"
+metadata:
+  name: "settings-01"
+spec:
+  configuration:
+    users:
+      # test user has 'password' specified, while admin user has 'password_sha256_hex' specified
+      test/password: qwerty
+      test/networks/ip:
+        - "127.0.0.1/32"
+        - "192.168.74.1/24"
+      test/profile: test_profile
+      test/quota: test_quota
+      test/allow_databases/database:
+        - "dbname1"
+        - "dbname2"
+        - "dbname3"
+      # admin user has 'password_sha256_hex' so the actual password value is not published
+      admin/password_sha256_hex: 8bd66e4932b4968ec111da24d7e42d399a05cb90bf96f587c3fa191c56c401f8
+      admin/networks/ip: "127.0.0.1/32"
+      admin/profile: default
+      admin/quota: default
+      # readonly user has 'password' field specified, not 'password_sha256_hex' as admin user above
+      readonly/password: readonly_password
+      readonly/profile: readonly
+      readonly/quota: default
+    profiles:
+      test_profile/max_memory_usage: "1000000000"
+      test_profile/readonly: "1"
+      readonly/readonly: "1"
+    quotas:
+      test_quota/interval/duration: "3600"
+    settings:
+      compression/case/method: zstd
+      disable_internal_dns_cache: 1
+    files:
+      dict1.xml: |
+
+
+
+      source1.csv: |
+        a1,b1,c1,d1
+        a2,b2,c2,d2
+    clusters:
+      - name: "standard"
+        layout:
+          shardsCount: 1
+          replicasCount: 1
+```
+
+### Validate the Cluster Deployment
+
+After the cluster is deployed, you can verify that it is running and inspect its services with the following commands.
+
+```bash
+$ kubectl get pods -n test-clickhouse-operator
+NAME                    READY   STATUS    RESTARTS   AGE
+chi-b3d29f-a242-0-0-0   1/1     Running   0          10m
+```
+
+```bash
+$ kubectl get service -n test-clickhouse-operator
+NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                           PORT(S)                         AGE
+chi-b3d29f-a242-0-0     ClusterIP      None             <none>                                8123/TCP,9000/TCP,9009/TCP      11m
+clickhouse-example-01   LoadBalancer   100.64.167.170   abc-123.us-east-1.elb.amazonaws.com   8123:30954/TCP,9000:32697/TCP   11m
+```
+
+## Access RadonDB ClickHouse
+
+### Via EXTERNAL-IP
+
+If the `kubectl get service -n <namespace>` command reported an **EXTERNAL-IP**, you can access the database directly.
+
+```bash
+$ clickhouse-client -h abc-123.us-east-1.elb.amazonaws.com -u clickhouse_operator --password clickhouse_operator_password
+ClickHouse client version 18.14.12.
+Connecting to abc-123.us-east-1.elb.amazonaws.com:9000.
+Connected to ClickHouse server version 19.4.3 revision 54416.
+```
+
+### Via pod-NAME
+
+If no **EXTERNAL-IP** is available, you can access the database from inside the Kubernetes cluster via the **pod-NAME**.
+
+```bash
+$ kubectl -n test-clickhouse-operator exec -it chi-b3d29f-a242-0-0-0 -- clickhouse-client
+ClickHouse client version 19.4.3.11.
+Connecting to localhost:9000 as user default.
+Connected to ClickHouse server version 19.4.3 revision 54416.
+```
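+
+Once connected, a simple query confirms the cluster responds end to end (any lightweight query works; `hostName()` is the same function used in the examples above):
+
+```bash
+$ kubectl -n test-clickhouse-operator exec -it chi-b3d29f-a242-0-0-0 -- \
+    clickhouse-client --query='SELECT hostName(), version()'
+```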