[REC-80] RIC port to AArch64 Created: 20/Jan/20 Updated: 15/Jun/20 Resolved: 05/Mar/20 |
|
| Status: | Done |
| Project: | Radio Edge Cloud |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Task | Priority: | Medium |
| Reporter: | Alex Antone | Assignee: | Alex Antone |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Attachments: |
|
| Description |
|
See attached RIC Components.xlsx
|
| Comments |
| Comment by Alex Antone [ 20/Feb/20 ] |
|
RIC cluster seems to be deploying successfully.
|
| Comment by Alex Antone [ 04/Feb/20 ] |
|
WIP RIC integration scripts at: |
| Comment by Alex Antone [ 04/Feb/20 ] |
|
https://wiki.akraino.org/display/AK/REC+Project+Minutes+2020.01.23 :
https://wiki.akraino.org/display/AK/REC+Project+Minutes+2020.01.31 :
|
| Comment by Alex Antone [ 03/Feb/20 ] |
|
Regarding RIC integration, based on the CI/CD logs, there is one more step, "Install_CAAS_Ingress_on_OE1", before installing RIC, which does not have logs pushed to Nexus:

13:54:55 Started by upstream project "Install_RIC_on_OpenEdge1" build number 206
13:54:55 originally caused by:
13:54:55 Started by upstream project "Install_CAAS_Ingress_on_OE1" build number 38
13:54:55 originally caused by:
13:54:55 Started by upstream project "Install_REC_on_OpenEdge1" build number 412
13:54:55 originally caused by:
13:54:55 Started by upstream project "Synchronize_REC_ISO_cache" build number 3900

Install_REC_on_OpenEdge1 build number 412 -> https://nexus.akraino.org/content/sites/logs/att/job/Install_REC_on_OpenEdge1/412/ (Jan 27 2020) |
| Comment by Alex Antone [ 03/Feb/20 ] |
|
danmnet network config from attached job-Install_RIC_on_OpenEdge1-206-console-timestamp.log:

+ cd ./REC_integration/danm/Middletown_OE1
+ ls danmnet40-e2adapter.yaml danmnet40-portal.yaml danmnet40-ricplatform.yaml
+ xargs -n 1 kubectl apply -f
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusternetwork.danm.k8s.io/default configured
clusternetwork.danm.k8s.io/e2agent created
clusternetwork.danm.k8s.io/e2adapter created
clusternetwork.danm.k8s.io/default unchanged
clusternetwork.danm.k8s.io/ext created
clusternetwork.danm.k8s.io/ext2 created
clusternetwork.danm.k8s.io/ew created
clusternetwork.danm.k8s.io/default unchanged
clusternetwork.danm.k8s.io/ran created
clusternetwork.danm.k8s.io/ext1 created
clusternetwork.danm.k8s.io/ew configured |
| Comment by Alex Antone [ 31/Jan/20 ] |
|
Found some more charts under ric-infra/kong (updated RIC Components.xlsx):

kong: kong, kong-ingress-controller, busybox
kong/charts/cassandra: cassandra, nuvo/cain, criteord/cassandra_exporter
kong/charts/postgresql: bitnami/postgresql, bitnami/minideb, wrouesnel/postgres_exporter
xapp-tiller: kubernetes-helm/tiller |
| Comment by Alex Antone [ 31/Jan/20 ] |
|
Ported: |
| Comment by Alex Antone [ 29/Jan/20 ] |
|
Ported: |
| Comment by Alex Antone [ 29/Jan/20 ] |
|
Also, the fact that some docker images clone and build from the latest master of external repositories might be a big problem. For ric-plt-e2: |
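A minimal sketch of the kind of pinning that would address this, assuming a hypothetical helper; `pin_clone`, the `v1.0` ref, and the target repo are illustrative (cmake-modules is one dependency the e2 build log shows being cloned from master), not the project's actual fix:

```shell
# Hypothetical helper: emit a reproducible clone command that pins an
# external dependency to a fixed tag/commit instead of tracking master.
# Note: git clone --branch accepts tags as well as branch names.
pin_clone() {
  repo="$1" ref="$2" dest="$3"
  printf 'git clone --depth 1 --branch %s %s %s\n' "$ref" "$repo" "$dest"
}

# Example invocation (placeholder ref):
pin_clone https://github.com/bilke/cmake-modules.git v1.0 cmake-modules
```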
| Comment by Alex Antone [ 29/Jan/20 ] |
|
ric-plt-e2 fails to build on AArch64 due to x86-specific assembler code:

Scanning dependencies of target e2
[ 99%] Building CXX object CMakeFiles/e2.dir/RIC-E2-TERMINATION/sctpThread.cpp.o
In file included from /opt/e2/RIC-E2-TERMINATION/sctpThread.cpp:22:0:
/opt/e2/RIC-E2-TERMINATION/sctpThread.h: In function 'double approx_CPU_MHz(unsigned int)':
/opt/e2/RIC-E2-TERMINATION/sctpThread.h:430:71: error: impossible constraint in 'asm'
asm volatile ("rdtscp\n" : "=a" (rax), "=d" (rdx), "=c" (aux) : :);
^
/opt/e2/RIC-E2-TERMINATION/sctpThread.h:430:71: error: impossible constraint in 'asm'
asm volatile ("rdtscp\n" : "=a" (rax), "=d" (rdx), "=c" (aux) : :);
^
CMakeFiles/e2.dir/build.make:62: recipe for target 'CMakeFiles/e2.dir/RIC-E2-TERMINATION/sctpThread.cpp.o' failed
make[2]: *** [CMakeFiles/e2.dir/RIC-E2-TERMINATION/sctpThread.cpp.o] Error 1
CMakeFiles/Makefile2:149: recipe for target 'CMakeFiles/e2.dir/all' failed
Makefile:129: recipe for target 'all' failed
make[1]: *** [CMakeFiles/e2.dir/all] Error 2
make: *** [all] Error 2
The command '/bin/sh -c cd /opt/e2/ && git clone https://github.com/bilke/cmake-modules.git && cd /opt/e2/ && /usr/local/bin/cmake -D CMAKE_BUILD_TYPE=$BUILD_TYPE . && make' returned a non-zero code: 2
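A hedged sketch of one way to unblock the cross-build: wrap the raw `rdtscp` asm in a preprocessor architecture guard so non-x86 targets compile with a stub. The sed pattern and the zeroed fallback are illustrative assumptions, not the project's actual fix; a real port would add a proper AArch64 timing path (e.g. `clock_gettime`) to sctpThread.h.

```shell
# Illustrative only: guard the x86-only rdtscp asm so the header still
# compiles on AArch64. The zeroed fallback is a placeholder, not a timer.
guard_rdtscp() {
  hdr="$1"
  sed -i.bak \
    's|^\([[:space:]]*\)asm volatile ("rdtscp.*$|\1#if defined(__x86_64__)\n&\n\1#else\n\1rax = rdx = aux = 0; /* TODO: portable timer for AArch64 */\n\1#endif|' \
    "$hdr"
}
```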
|
| Comment by Alex Antone [ 29/Jan/20 ] |
|
Also, it seems the way docker image tags are selected is based on the example recipes. For ric-plt-e2mgr:

But in the ric.tar archive:
ric/ric_image/ric-plt-e2mgr:2.0.4
ric/ric_image/ric-plt-e2mgr:2.0.7 |
| Comment by Alex Antone [ 29/Jan/20 ] |
|
Ported: Unfortunately, these images built from the master branch might be too new, and we might actually need to build from the recently pulled Amber (frozen?) release branch. |
| Comment by Jimmy Lafontaine [ 27/Jan/20 ] |
|
Successfully built ric-plt-lib-rmr, which some other containers depend on. To build and export the artifacts, run:

docker build -t lib-rmr -f ci/Dockerfile .
docker run -v `pwd`/build:/export lib-rmr

I also published the artifacts to http://artifacts.cachengo.com/ric-deps/ |
| Comment by Jimmy Lafontaine [ 27/Jan/20 ] |
|
Successfully built the infra images, which are the base of some of the other images in ric-plt. Building them required some changes: https://github.com/cachengo/oran-ci-management/commit/a976e3939d4c0cf52c4c6c28535190fd81116e55 . Some of these containers also pull in a Java app whose AArch64 support I'm not sure about. |
| Comment by Alex Antone [ 27/Jan/20 ] |
|
Successfully built the ric-plt-a1 image on the Ampere POD1 Jumphost:

git clone "https://gerrit.o-ran-sc.org/r/ric-plt/a1" ric-plt-a1 && cd ric-plt-a1
docker build --network=host -f Dockerfile .
| Comment by Alex Antone [ 27/Jan/20 ] |
|
Build arguments for most of the o-ran-sc projects can be inferred from the jjb files in the https://gerrit.o-ran-sc.org/r/admin/repos/ci-management repo:

jjb/ric-plt-a1/ric-plt-a1.yaml
jjb/ric-plt-appmgr/ric-plt-appmgr.yaml
jjb/ric-plt-asn1/ric-plt-asn1-documents.yaml
jjb/ric-plt/dbaas/hiredis-vip/info-ric-plt-dbaas-hiredis-vip.yaml
jjb/ric-plt-dbaas/ric-plt-dbaas.yaml
jjb/ric-plt/demo1/info-ric-plt-demo1.yaml
jjb/ric-plt-e2mgr/ric-plt-e2mgr.yaml
jjb/ric-plt-e2/ric-plt-e2.yaml
jjb/ric-plt/jaegeradapter/info-ric-plt-jaegeradapter.yaml
jjb/ric-plt-lib-rmr/ric-plt-lib-rmr.yaml
jjb/ric-plt/nodeb-rnib/info-ric-plt-nodeb-rnib.yaml
jjb/ric-plt-resource-status-manager/ric-plt-resource-status-manager.yaml
jjb/ric-plt/resource-status-processor/info-ric-plt-resource-status-processor.yaml
jjb/ric-plt/ric-dep/info-ric-plt-ric-dep.yaml
jjb/ric-plt/ric-test/info-ric-plt-ric-test.yaml
jjb/ric-plt-rtmgr/ric-plt-rtmgr.yaml
jjb/ric-plt-sdlgo/ric-plt-sdlgo.yaml
jjb/ric-plt-sdlpy/ric-plt-sdlpy.yaml
jjb/ric-plt-sdl/ric-plt-sdl.yaml
jjb/ric-plt/streaming-protobufs/info-ric-plt-streaming-protobufs.yaml
jjb/ric-plt-submgr/ric-plt-submgr.yaml
jjb/ric-plt-tracelibcpp/ric-plt-tracelibcpp.yaml
jjb/ric-plt-tracelibgo/ric-plt-tracelibgo.yaml
jjb/ric-plt/utils/info-ric-plt-utils.yaml
jjb/ric-plt-vespamgr/ric-plt-vespamgr.yaml
jjb/ric-plt-xapp-frame/ric-plt-xapp-frame.yaml

For example, in jjb/ric-plt-a1/ric-plt-a1.yaml:

- a1_common: &a1_common
# values apply to all A1 projects
name: a1-common
# git repo
project: ric-plt/a1
# jenkins job name prefix
project-name: ric-plt-a1
# maven settings file has docker credentials
mvn-settings: ric-plt-a1-settings
- project:
<<: *a1_common
name: ric-plt-a1
# image name
docker-name: 'o-ran-sc/{name}'
# source of docker tag
container-tag-method: yaml-file
# use host network
docker-build-args: '--network=host'
build-node: ubuntu1804-docker-4c-4g
stream:
- master:
branch: master
jobs:
- '{project-name}-gerrit-docker-jobs'
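Read as a sketch, the fields above imply a build command roughly along these lines. The helper name and the tag value are illustrative assumptions (the tag normally comes from a YAML file, per `container-tag-method: yaml-file`), not the actual CI template logic:

```shell
# Illustrative mapping from the jjb fields to a docker build invocation:
# docker-name 'o-ran-sc/{name}' and docker-build-args '--network=host'.
jjb_docker_cmd() {
  name="$1" build_args="$2" tag="$3"
  printf 'docker build %s -t o-ran-sc/%s:%s .\n' "$build_args" "$name" "$tag"
}

# e.g. for the a1 project (placeholder tag):
jjb_docker_cmd ric-plt-a1 --network=host 1.0.0
```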
|
| Comment by Alex Antone [ 23/Jan/20 ] |
|
Compiled docs from the it/dep repo can be found here:
Installation procedure: |
| Comment by Alex Antone [ 23/Jan/20 ] |
|
As mentioned in https://wiki.akraino.org/display/AK/REC+Project+Minutes+2020.01.23 , there will be a new RIC install procedure, as O-RAN-SC's RIC has a new release (Bronze).
|
| Comment by Alex Antone [ 22/Jan/20 ] |
|
The REC blueprint & workflow repo https://gerrit.akraino.org/r/admin/repos/rec also has some RIC references and some steps similar to those in REC_integration/INSTALL_RIC.sh from https://gerrit.mtlab.att-akraino.org/r/rec_integration :

# Docker images referenced
xapp-manager:latest
e2mgr:1.0.0
e2:1.0.0
rtmgr:0.0.2
redis-standalone:latest

# Namespace creation
kubectl create namespace ricplatform

# Run install script from it/dep/generated/ricplt/
# (source repo: https://gerrit.o-ran-sc.org/r/it/dep )
bash -x ./ric_install.sh

11: kubectl create namespace rictest
12: kubectl create namespace ricxa
|
| Comment by Alex Antone [ 21/Jan/20 ] |
|
Contents of the final ric.tar archive, broken into sections: ric/
|
| Comment by Alex Antone [ 21/Jan/20 ] |
|
Looking at the logs from https://nexus.akraino.org/content/sites/logs/att/job/Install_RIC_on_OpenEdge1/203/ the following steps are performed:

1. Updating and repackaging of the ric.tar file

# Cloning repository ssh://attjenkins@gerrit.mtlab.att-akraino.org:29418/rec_integration -> REC_integration
> git init /home/jenkins/workspace/Install_RIC_on_OpenEdge1/REC_integration
> git fetch --tags --progress -- ssh://attjenkins@gerrit.mtlab.att-akraino.org:29418/rec_integration

# Cloning repository https://gerrit.o-ran-sc.org/r/it/dep -> dep
> git init /home/jenkins/workspace/Install_RIC_on_OpenEdge1/dep
> git fetch --tags --progress -- https://gerrit.o-ran-sc.org/r/it/dep

# Fetch ric.tar from http://www.mtlab.att-akraino.org/RIC/latest and rebuild tarfile
+ curl --output ./ric.tar http://www.mtlab.att-akraino.org/RIC/latest
+ tar xf ric.tar
+ rm -fr ric/dep ric/REC_integration ric.tar
+ rm -fr dep/.git REC_integration/.git
+ mv dep REC_integration ric
+ tar cf jenkins-Install_RIC_on_OpenEdge1-203/ric.tar ric
+ rm -fr ric

# Push tarfile to 172.28.16.201

2. Install RIC

# Executing /tmp/INSTALL_RIC.sh /tmp/ric.tar Middletown_OE1
+ cd /opt
+ sudo tar xfv /tmp/ric.tar
Extracted files: RIC_tarfile_contents.txt
+ sudo chown -R cloudadmin:cloudadmin ric
+ rm -f /tmp/ric.tar

# Preload images
+ cd /opt/ric
+ bash ./dep/bin/preload-images -p /opt/ric/ric_image -d registry.kube-system.svc.rec.io:5555/ric

# Deploying DANM Networks
+ cd ./REC_integration/danm/Middletown_OE1
+ ls danmnet40-e2adapter.yaml danmnet40-portal.yaml danmnet40-ricplatform.yaml | xargs -n 1 kubectl apply -f

# Creating RIC K8S Namespaces
+ cd /opt/ric
+ echo ricinfra ricplt ricaux ricxapp | xargs -n 1 kubectl create ns
+ kubectl apply -f /opt/ric/REC_integration/ricplt-role.yaml
+ kubectl apply -f /opt/ric/REC_integration/rbac.yaml
+ kubectl apply -f /opt/ric/REC_integration/rbac2.yaml

# Deploying RIC Infrastructure
+ bash -e ./dep/bin/deploy-ric-infra /opt/ric/REC_integration/recipe/Middletown_OE1/infra.yaml
Deploying RIC infra components [kong]
Deploying RIC infra components [credential]
Deploying RIC infra components [xapp-tiller]

# Deploying RIC Platform
+ bash -e ./dep/bin/deploy-ric-platform /opt/ric/REC_integration/recipe/Middletown_OE1/platform.yaml
Deploying RIC infra components [appmgr rtmgr dbaas1 e2mgr e2term a1mediator submgr vespamgr rsm jaegeradapter]
Deploying RIC infra components [extsvcplt]

# Finished installation
+ exit 0
|