Basic Functional Test (REC-31)

[REC-39] Storage check Created: 26/Aug/19  Updated: 22/Oct/19

Status: In Progress
Project: Radio Edge Cloud
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Sub-task Priority: Medium
Reporter: Deepak Kataria Assignee: Naga Sugguna
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Testing Resiliency of PVC using Ceph RBD PV

Create a Ceph RBD PV
Bind that PV to a POD with the PVC
Verify the POD is consuming a block storage Ceph RBD PV
Kill the POD
Reattach the Ceph RBD PV to the newly recreated POD with the PVC
Delete the PV (ensuring that the underlying RBD image goes away)
Kill and restart the POD and check for the resiliency of the PVC
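
A minimal shell sketch of the bind/kill/recreate steps above, assuming a PVC named ceph-claim (as in the manual steps further down); the pod name, image, and mount path are hypothetical placeholders:

    $ cat pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-rbd-test-pod
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: ceph-claim
    $ kubectl apply -f pod.yaml                                            # bind the PVC (and its RBD-backed PV) to the pod
    $ kubectl exec ceph-rbd-test-pod -- sh -c 'echo probe > /data/probe'   # write test data onto the RBD volume
    $ kubectl delete pod ceph-rbd-test-pod                                 # kill the pod
    $ kubectl apply -f pod.yaml                                            # recreate the pod; the same PVC/PV is reattached
    $ kubectl exec ceph-rbd-test-pod -- cat /data/probe                    # data survives, demonstrating PVC resiliency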

REC-34 is testing the Ceph Object Store back-end by

1. Storing Docker image

2. Retrieving Docker image

3. Deleting Docker image
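
A rough shell sketch of those three REC-34 steps against a Docker registry; the registry address and image name are placeholders, manifest deletion must be enabled on the registry, and (per the later comments) the registry back-end is swift rather than Ceph:

    $ docker tag busybox:latest <registry>:5000/rec34/busybox:test        # 1. store: push an image into the registry
    $ docker push <registry>:5000/rec34/busybox:test
    $ docker pull <registry>:5000/rec34/busybox:test                      # 2. retrieve the image back
    $ DIGEST=$(curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
          http://<registry>:5000/v2/rec34/busybox/manifests/test \
          | awk -F': ' 'tolower($1)=="docker-content-digest" {print $2}' | tr -d '\r')
    $ curl -X DELETE http://<registry>:5000/v2/rec34/busybox/manifests/$DIGEST   # 3. delete via the Registry HTTP API v2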

 

 



 Comments   
Comment by Naga Sugguna [ 22/Oct/19 ]

https://gerrit.akraino.org/r/c/ta/cloudtaf/+/1809/2/testcases/basic_func_tests/tc_008_storage_check.py

Comment by Deepak Kataria [ 18/Oct/19 ]

Naga is waiting for review.

Comment by Naga Sugguna [ 17/Oct/19 ]

https://gerrit.akraino.org/r/c/ta/cloudtaf/+/1800

Comment by Naga Sugguna [ 17/Oct/19 ]

Can I assume Ceph is already installed?
Can I assume RBD is also installed as part of Ceph?
Can I assume Retain/Delete StorageClass(es) provisioned with RBD already exist? If the test case has to create them, how does it get the Ceph credentials and pool details?
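
For the last question, a small sketch of how an existing StorageClass could be inspected instead of creating one; the StorageClass and secret names are assumptions that depend on the deployment:

    $ kubectl get storageclass                                 # list the provisioned StorageClasses
    $ kubectl get storageclass <name> -o yaml                  # pool, monitors and secret names appear under parameters, plus reclaimPolicy
    $ kubectl get secret <userSecretName> -o yaml              # the Ceph key referenced by the StorageClass (in its configured namespace)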

Comment by Deepak Kataria [ 16/Oct/19 ]

Naga is working on the patch. He will commit when it is completed.

Comment by Deepak Kataria [ 15/Oct/19 ]

Naga is working on this today.

Comment by Deepak Kataria [ 14/Oct/19 ]

Naga did not have time to work on this; he will work on test automation to complete this item.

Comment by Deepak Kataria [ 09/Oct/19 ]

Naga is working to automate these test cases.

Comment by Naga Sugguna [ 08/Oct/19 ]
  1. These are the manual steps to test the ask.
  2. The following YAML can be used to create a PVC.
    $ cat pvc.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
  3. Get the persistent volume name.
    $ kubectl get pvc ceph-claim | awk 'NR==2 {print $3}'
  4. Describe the PV to get the Reclaim Policy, RBDImage, and RBDPool, then look up the image in Ceph.
    $ kubectl describe pv pvc-e6aad6b5-ffb6-422c-ae9b-0d547a9a4685
    $ sudo rbd list -p <RBDPool> | grep <RBDImage>
  5. In case of Reclaim Policy: Retain, rbd list should return the image.

This was tested on an environment where Ceph is installed from its packages (not a Docker container or k8s pods).
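
For automating the same lookups, the Reclaim Policy, RBDPool, and RBDImage can also be read with jsonpath instead of awk/describe; a sketch, assuming the PVC is named ceph-claim and the PV uses the in-tree RBD volume source:

    $ PV=$(kubectl get pvc ceph-claim -o jsonpath='{.spec.volumeName}')               # bound PV name
    $ RBD_POOL=$(kubectl get pv "$PV" -o jsonpath='{.spec.rbd.pool}')                 # RBDPool
    $ RBD_IMAGE=$(kubectl get pv "$PV" -o jsonpath='{.spec.rbd.image}')               # RBDImage
    $ kubectl get pv "$PV" -o jsonpath='{.spec.persistentVolumeReclaimPolicy}{"\n"}'  # Reclaim Policy
    $ sudo rbd list -p "$RBD_POOL" | grep -w "$RBD_IMAGE"                             # the backing image in Ceph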

Comment by Deepak Kataria [ 08/Oct/19 ]

Naga is working to manually test RBD installation and RBD+K8s integration. He will finalize the manual steps today.

Comment by Deepak Kataria [ 27/Sep/19 ]

In this case, we can modify the test case as follows:

  1. Delete the PV
  2. If the RBD image still exists:
        a. If the StorageClass mandates retention, pass the TC
        b. If the StorageClass does not mandate retention, fail the TC
  3. Alternatively, if the RBD image does not exist:
        a. If the StorageClass mandates retention, fail the TC
        b. If the StorageClass does not mandate retention, pass the TC
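
A shell sketch of that pass/fail logic, reusing the $PV, $RBD_POOL, and $RBD_IMAGE variables from the jsonpath sketch in the 08/Oct comment above; note that for a dynamically provisioned volume the PV is released by deleting the PVC (deleting a still-bound PV directly only leaves it Terminating):

    $ SC=$(kubectl get pv "$PV" -o jsonpath='{.spec.storageClassName}')
    $ POLICY=$(kubectl get storageclass "$SC" -o jsonpath='{.reclaimPolicy}')         # Retain or Delete
    $ kubectl delete pvc ceph-claim                # releases the PV; with Delete the provisioner removes the PV and its image
    $ sleep 10                                     # allow the provisioner a moment before checking
    $ if sudo rbd list -p "$RBD_POOL" | grep -qw "$RBD_IMAGE"; then
          [ "$POLICY" = "Retain" ] && echo "PASS: image retained as mandated" || echo "FAIL: image should have been deleted"
      else
          [ "$POLICY" = "Retain" ] && echo "FAIL: image should have been retained" || echo "PASS: image deleted as mandated"
      fi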

cc: Levovar

Comment by Krisztián Lengyel [ 27/Sep/19 ]

By default, requested PVs are retained in REC. This is controlled by the StorageClass object: https://gerrit.akraino.org/r/gitweb?p=ta/caas-kubernetes.git;a=blob;f=ansible/roles/kubernetes_ceph/templates/ceph-storageclass.yaml.j2;h=d6988730ca27b7267cd17b0e4e2a5a4853c5645c;hb=HEAD#l35 So this TC will surely fail.
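
A quick way to confirm what the deployed StorageClasses mandate (the column names are just for readability):

    $ kubectl get storageclass -o custom-columns='NAME:.metadata.name,RECLAIM:.reclaimPolicy'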

cc: Levovar

Comment by Deepak Kataria [ 20/Sep/19 ]

Thanks for your feedback. Please see clarifications below.

Delete the PV (ensuring that the underlying RBD image goes away) - The idea of this test is that when you delete a Ceph-backed PV, the deletion should cascade to the Ceph RBD image that implements the PV in Ceph. This is an important test because if RBD images do not go away on PV deletion, we may end up with slowly accumulating wasted space in the Ceph instance.
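
One way to watch for that slow accumulation is to compare the image count and pool usage before and after the PV deletions; a sketch using standard Ceph tooling, with the pool name as a placeholder:

    $ sudo rbd list -p <RBDPool> | wc -l           # number of RBD images; should drop when Ceph-backed PVs are deleted
    $ sudo rbd du -p <RBDPool>                     # provisioned/used size per image; orphaned images show up here
    $ sudo ceph df                                 # overall pool usage; leaked images appear as space that never shrinks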

 

REC-34 is not related to Ceph; the docker registry uses a different storage back-end (swift) - Thank you very much. We will provide a swift test for the docker registry.

Comment by Krisztián Lengyel [ 19/Sep/19 ]

Some points about the test case:

  • About

    Delete the PV (ensuring that the underlying RBD image goes away)

    I am not 100% sure this will work; I think deletion of a bound PV won't trigger a new PV provisioning.

  • REC-34 is not related to Ceph; the docker registry uses a different storage back-end (swift).

 
