[ICN-24] Integrating Rook plugin with KuD Created: 31/Jul/19  Updated: 20/Aug/19  Resolved: 15/Aug/19

Status: Done
Project: Integrated Cloud Native NFV
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Medium
Reporter: Kuralamudhan Ramakrishnan Assignee: Tingjie Chen
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Epic Link: KuD V2.0 plugin-addons for ICN
Sprint: ICN Sprint 1, ICN Sprint 2
Story Points: 4

 Description   

Prerequisite: KUD installation steps and documentation

Resources: Bare-metal server, storage device (/dev/sdb, etc.)

Components Version: OS Ubuntu 18.04, KUD version (Kubespray 2.9.0, Kubernetes 1.13.5), Rook 1.0.4, Ceph 13.2.2 (Mimic)

Task Description: Integrate the Rook daemonset with KUD: bring up the Rook operator and a Ceph cluster across the K8s nodes, configure Ceph-monitor and Ceph-osd with the appropriate placement policy, and provide storage provisioning to K8s applications.
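For reference, a minimal sketch of the bring-up flow; the manifest file names below are assumptions based on the stock Rook 1.0 examples under cluster/examples/kubernetes/ceph/ and should be adjusted to the actual launch scripts:

----------------------------------------------------------

# Sketch: deploy the Rook operator, then the Ceph cluster (file names assumed).
kubectl create -f common.yaml               # namespace, CRDs, RBAC
kubectl create -f operator-with-csi.yaml    # operator with the experimental CSI driver
kubectl create -f cluster.yaml              # CephCluster CR: mons, mgr, OSDs
kubectl create -f toolbox.yaml              # rook-ceph-tools pod for the ceph CLI
kubectl -n rook-ceph get pod -w             # wait for the pods to settle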

Testing: Check the Rook operator and Ceph cluster status and the scheduling mechanism, and verify the CSI volume provisioning function.
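One quick way to exercise the CSI path is to submit a small PVC and watch it bind. This is only a sketch; the StorageClass name csi-rbd is an assumption and must match whatever class the launch scripts create:

----------------------------------------------------------

# pvc-test.yaml (hypothetical smoke test for the RBD CSI driver)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd   # assumption: class created by the Rook launch scripts

# Apply and expect STATUS to reach Bound:
#   kubectl apply -f pvc-test.yaml && kubectl get pvc rbd-pvc-test -w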

Expected Deliverables: Daemonset YAML, install package, configuration and launch scripts, along with the KUD integration.

Gerrit Patch: https://gerrit.akraino.org/r/#/c/icn/+/1282/

 Comments   
Comment by Tingjie Chen [ 12/Aug/19 ]

Rook operator and Ceph cluster deployment information; by default the Ceph version is Mimic v13.2.2:

----------------------------------------------------------

$ kubectl get pod -n rook-ceph
NAME                                    READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-provisioner-0          2/2     Running     0          10m
csi-cephfsplugin-z8r2t                  2/2     Running     0          10m
csi-rbdplugin-977dn                     2/2     Running     0          10m
csi-rbdplugin-provisioner-0             4/4     Running     0          10m
rook-ceph-agent-gctv8                   1/1     Running     0          10m
rook-ceph-mgr-a-7d5766c65d-bx49r        1/1     Running     0          9m17s
rook-ceph-mon-a-94665555f-zr7pl         1/1     Running     0          9m54s
rook-ceph-mon-b-54d5d78659-rqzkb        1/1     Running     0          9m45s
rook-ceph-mon-c-67dbf65685-rjnlf        1/1     Running     0          9m31s
rook-ceph-operator-948f8f84c-749zb      1/1     Running     0          11m
rook-ceph-osd-0-6ff8fb6d6b-hcrlx        1/1     Running     0          8m57s
rook-ceph-osd-prepare-localhost-pk2tc   0/2     Completed   0          9m3s
rook-ceph-tools-8b46fc6f-lqn5q          1/1     Running     0          11m
rook-discover-wnbqj                     1/1     Running     0          10m
marvin@localhost:~/rook/rook_yaml$ kubectl exec -ti rook-ceph-operator-948f8f84c-749zb -n rook-ceph -- bash
[root@rook-ceph-operator-948f8f84c-749zb /]# ceph -s
  cluster:
    id:     81afbf78-e3d4-4651-a65e-194ddc28bad0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum b,a,c
    mgr: a(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   116 GiB used, 1.6 TiB / 1.7 TiB avail
    pgs:

[root@rook-ceph-operator-948f8f84c-749zb /]# ceph version
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
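The Mimic default above comes from the CephCluster CR. To pin or change it, the relevant excerpt would look roughly like this (a sketch; field names per the Rook 1.0 CephCluster CRD, values mirroring the output above):

----------------------------------------------------------

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.2   # Mimic, the default reported above
  mon:
    count: 3                   # matches the three mons in the listing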

 

Comment by Tingjie Chen [ 12/Aug/19 ]

r.kuralamudhan akhilakishore

I have upgraded to K8s 1.13.5 and deployed Rook with CSI successfully; you can refer to the changes in the KUD code base.

If the kubectl server version is still wrong, you can upgrade it manually with the command: kubeadm upgrade apply v1.13.5
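For context, a fuller manual upgrade sequence might look like the following (a sketch; it assumes a kubeadm-managed control plane on Ubuntu 18.04 with the Kubernetes apt repository configured):

----------------------------------------------------------

sudo apt-get install -y kubeadm=1.13.5-00   # upgrade kubeadm first
sudo kubeadm upgrade plan                   # sanity-check the target version
sudo kubeadm upgrade apply v1.13.5          # upgrade the control plane
sudo apt-get install -y kubelet=1.13.5-00 kubectl=1.13.5-00
sudo systemctl restart kubelet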

After the upgrade, the version information is as follows:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

 

The code changes in KUD are as follows:

--------------------------------------------------------------------------------------------------

diff --git a/kud/deployment_infra/playbooks/kud-vars.yml b/kud/deployment_infra/playbooks/kud-vars.yml
index b11672f..eb9b24c 100644
--- a/kud/deployment_infra/playbooks/kud-vars.yml
+++ b/kud/deployment_infra/playbooks/kud-vars.yml
@@ -57,8 +57,8 @@ ovn4nfv_source_type: "source"
 ovn4nfv_version: aa14577f6bc672bc8622edada8a487825fdebce1
 ovn4nfv_url: "https://git.opnfv.org/ovn4nfv-k8s-plugin/"
 
-go_version: '1.12.4'
-kubespray_version: 2.8.2
-helm_client_version: 2.9.1
+go_version: '1.12.5'
+kubespray_version: 2.9.0
+helm_client_version: 2.13.1
 # kud playbooks not compatible with 2.8.0 - see MULTICLOUD-634
 ansible_version: 2.7.10

diff --git a/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml b/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
index 9966ba8..cacb4b3 100644
--- a/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
+++ b/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
@@ -48,7 +48,7 @@ local_volumes_enabled: true
 local_volume_provisioner_enabled: true
 
 # Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.12.3
+kube_version: v1.13.5
 
 # Helm deployment
 helm_enabled: true

Comment by Kuralamudhan Ramakrishnan [ 08/Aug/19 ]

tingjiec Thanks for bringing this up; it is really good that we identified these issues here. akhilakishore, could you please advise: is it possible for us to update the K8s version by Aug 15th, or are you afraid that it will break the current KuD offline operation?

Comment by Tingjie Chen [ 08/Aug/19 ]

r.kuralamudhan May I ask which version of K8s is in the KUD deployment environment? In Rook we now have CSI support, which can provide storage service provisioning for K8s applications.

And CSI 1.0 requires a K8s version >= 1.13; currently in our KUD environment the K8s version is 1.12.3, so the CSI plugins will not come up and the functionality is unavailable.
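A quick pre-flight check before deploying the CSI manifests (a sketch; the jq parse of the minor version is naive):

kubectl version --short                                  # expect Server Version >= v1.13
kubectl version -o json | jq -r '.serverVersion.minor'   # expect >= 13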

Comment by Tingjie Chen [ 08/Aug/19 ]

I have submitted a Gerrit patch for the Rook deployment: https://gerrit.akraino.org/r/c/icn/+/1282

It includes Ceph storage (OSD); by default I set it to store its database and journal in the folder /rook/storage-dir, 10 GB each, so please reserve enough space on the local disk.
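The relevant CephCluster storage settings would look roughly like this (a sketch; field names per the Rook 1.0 CRD, with the 10 GB sizes from the comment above):

----------------------------------------------------------

spec:
  dataDirHostPath: /var/lib/rook     # operator and cluster configuration on the host
  storage:
    useAllNodes: true
    useAllDevices: false
    directories:
    - path: /rook/storage-dir        # OSD data directory on each node
    config:
      databaseSizeMB: "10240"        # 10 GB database
      journalSizeMB: "10240"         # 10 GB journal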

Comment by Akhila Kishore [ 02/Aug/19 ]

Just sent out the setup instructions to them

Comment by Kuralamudhan Ramakrishnan [ 02/Aug/19 ]

akhilakishore ritu.sood@intel.com Please provide the KUD installation instructions to tingjiec hle2 - https://github.com/onap/multicloud-k8s/tree/master/kud & https://github.com/onap/multicloud-k8s/tree/master/kud/deployment_infra/playbooks

Comment by Kuralamudhan Ramakrishnan [ 02/Aug/19 ]

AR for r.kuralamudhan: check the Kubespray K8s version.
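The pinned versions can be read straight out of the KUD tree (paths as in the diff above):

grep kube_version kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
grep kubespray_version kud/deployment_infra/playbooks/kud-vars.yml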

Comment by Kuralamudhan Ramakrishnan [ 02/Aug/19 ]

Tingjie - AR: Can you send me the details of the storage device?
