As per the Prerequisites, a Kubernetes cluster should already be configured.
Run the following command on the master node to check that the cluster and worker nodes are running:
$ kubectl get nodes,pods -n kube-system
The output will look similar to the following:
NAME                STATUS   ROLES                  AGE   VERSION
node/worker-node1   Ready    <none>                 17h   v1.20.10
node/worker-node2   Ready    <none>                 17h   v1.20.10
node/master-node    Ready    control-plane,master   23h   v1.20.10

NAME                                      READY   STATUS    RESTARTS   AGE
pod/coredns-66bff467f8-slxdq              1/1     Running   1          67d
pod/coredns-66bff467f8-v77qt              1/1     Running   1          67d
pod/etcd-10.89.73.42                      1/1     Running   1          67d
pod/kube-apiserver-10.89.73.42            1/1     Running   1          67d
pod/kube-controller-manager-10.89.73.42   1/1     Running   27         67d
pod/kube-flannel-ds-amd64-r2m8r           1/1     Running   2          48d
pod/kube-flannel-ds-amd64-rdhrf           1/1     Running   2          6d1h
pod/kube-flannel-ds-amd64-vpcbj           1/1     Running   3          66d
pod/kube-proxy-jtcxm                      1/1     Running   1          67d
pod/kube-proxy-swfmm                      1/1     Running   1          66d
pod/kube-proxy-w6x6t                      1/1     Running   1          66d
pod/kube-scheduler-10.89.73.42            1/1     Running   29         67d
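A quick way to confirm every node is schedulable is to filter the STATUS column of the output above. This helper is a sketch, not part of the official guide; on the master node you would pipe the real `kubectl get nodes --no-headers` output through it.

```shell
# Print any node whose STATUS column is not "Ready"; exit non-zero if one is found.
check_ready() {
  awk '$2 != "Ready" { print $1; bad=1 } END { exit bad }'
}

# Illustrated here against sample lines; in practice run:
#   kubectl get nodes --no-headers | check_ready && echo "All nodes Ready"
printf '%s\n' \
  'node/worker-node1 Ready <none> 17h v1.20.10' \
  'node/worker-node2 Ready <none> 17h v1.20.10' \
  'node/master-node Ready control-plane,master 23h v1.20.10' \
  | check_ready && echo "All nodes Ready"
```

If any node reports NotReady, resolve that (for example, a stopped kubelet or network plugin issue) before continuing.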
The OUD Kubernetes deployment requires access to an OUD container image. The image can be obtained in the following ways:
The latest prebuilt OUD container image can be downloaded from Oracle Container Registry. This image is prebuilt by Oracle and includes Oracle Unified Directory 12.2.1.4.0 and the latest PSU.
Note: Before using this image you must log in to Oracle Container Registry, navigate to oud_cpu, and accept the license agreement.
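Once the license has been accepted in the Oracle Container Registry web UI, the pull is a standard registry login and pull. This is a sketch: the image tag shown is an example only, so check Oracle Container Registry for the current tag before pulling.

```shell
# Hypothetical pull of the prebuilt OUD image from Oracle Container Registry.
# Assumes you have accepted the oud_cpu license in the OCR web UI and run:
#   docker login container-registry.oracle.com
# The tag below is an example only; substitute the current tag from OCR.
IMAGE=container-registry.oracle.com/middleware/oud_cpu:12.2.1.4-jdk8-ol7-latest
docker pull "$IMAGE" 2>/dev/null || echo "Could not pull (login/license needed?): $IMAGE"
```

The same command works with podman in place of docker on hosts that use it.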
Alternatively, the same image can be downloaded from My Oracle Support by referring to document ID 2723908.1.
You can use this image in the following ways:
You can build your own OUD container image using the WebLogic Image Tool. This is recommended if you need to apply one-off patches to a prebuilt OUD container image. For more information about building your own container image with the WebLogic Image Tool, see Create or update image.
You can use an image built with WebLogic Image Tool in the following ways:
Note: This documentation does not explain how to pull or push the above images into a private container registry, or how to stage them on the master and worker nodes. For details, see the Enterprise Deployment Guide.
As referenced in Prerequisites, the nodes in the Kubernetes cluster must have access to a persistent volume such as a Network File System (NFS) mount or a shared file system.
Make sure the persistent volume path has full access permissions and that the folder is empty. In this example, /scratch/shared/ is a shared directory accessible from all nodes.
On the master node, run the following commands to create a directory on the persistent volume for the OUD user projects:
$ cd <persistent_volume>
$ mkdir oud_user_projects
$ chmod 777 oud_user_projects
For example:

$ cd /scratch/shared
$ mkdir oud_user_projects
$ chmod 777 oud_user_projects
On the master node, run the following commands to ensure it is possible to read and write to the persistent volume:
$ cd <persistent_volume>/oud_user_projects
$ touch filemaster.txt
$ ls filemaster.txt
For example:

$ cd /scratch/shared/oud_user_projects
$ touch filemaster.txt
$ ls filemaster.txt
On the first worker node, run the following commands to ensure it is possible to read and write to the persistent volume:
$ cd /scratch/shared/oud_user_projects
$ ls filemaster.txt
$ touch fileworker1.txt
$ ls fileworker1.txt
Repeat the above on any other worker nodes, creating for example fileworker2.txt, and so on. Once you have confirmed that each node can read and write to the persistent volume, delete the files created.
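The per-node check above can be sketched as a small helper run on each node in turn. This is an illustrative sketch only: PV_DIR defaults here to a temporary path so the snippet is self-contained, and in practice you would set it to your persistent volume path.

```shell
# Sketch of the per-node read/write check. PV_DIR is an assumption:
# point it at your persistent volume, e.g. /scratch/shared/oud_user_projects.
PV_DIR="${PV_DIR:-/tmp/oud_pv_check}"
mkdir -p "$PV_DIR"
f="$PV_DIR/file$(hostname).txt"
if touch "$f" && [ -f "$f" ]; then
  echo "read/write OK on $(hostname)"
fi
rm -f "$f"    # clean up once the node has been verified
```

Running this on each node (against the real shared path) both proves access and removes the temporary file, matching the manual steps above.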
Oracle Unified Directory deployment on Kubernetes uses deployment scripts and Helm charts provided by Oracle to create Oracle Unified Directory containers. To deploy Oracle Unified Directory on Kubernetes, set up the deployment scripts on the persistent volume as follows:
Note: The work directory must be created on the persistent volume because access to the Helm charts is required by a cron job created during the OUD deployment.
Create a working directory on the persistent volume in which to set up the source code:
$ mkdir <persistent_volume>/<workdir>
For example:

$ mkdir /scratch/shared/OUDContainer
Download the latest OUD deployment scripts from the OUD repository:
$ cd <persistent_volume>/<workdir>
$ git clone https://github.com/oracle/fmw-kubernetes.git --branch release/22.2.1
For example:

$ cd /scratch/shared/OUDContainer
$ git clone https://github.com/oracle/fmw-kubernetes.git --branch release/22.2.1
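After cloning, a quick sanity check confirms the chart directory is present before proceeding. This is a sketch: the path assumes the example work directory used above and that the OUD Helm charts live under kubernetes/helm in the cloned repository.

```shell
# Hypothetical post-clone check; the path assumes the example work directory
# and the kubernetes/helm layout of the fmw-kubernetes repository.
chart_dir=/scratch/shared/OUDContainer/fmw-kubernetes/OracleUnifiedDirectory/kubernetes/helm
if [ -d "$chart_dir" ]; then
  echo "charts found: $chart_dir"
else
  echo "charts missing: $chart_dir"
fi
```

If the directory is missing, re-check the clone step and the branch name before continuing.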
Set the $WORKDIR environment variable as follows:
$ export WORKDIR=<persistent_volume>/<workdir>/fmw-kubernetes/OracleUnifiedDirectory
For example:

$ export WORKDIR=/scratch/shared/OUDContainer/fmw-kubernetes/OracleUnifiedDirectory
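Because an exported variable only lasts for the current shell session, you may want to persist it. This is an optional sketch, not part of the official steps; the path is the example value from this guide, so adjust it to your own work directory.

```shell
# Optional: persist WORKDIR across sessions by appending it to ~/.bashrc.
# The path below is the example value from this guide; adjust as needed.
WORKDIR=/scratch/shared/OUDContainer/fmw-kubernetes/OracleUnifiedDirectory
grep -qs "export WORKDIR=$WORKDIR" "$HOME/.bashrc" || \
  echo "export WORKDIR=$WORKDIR" >> "$HOME/.bashrc"
echo "WORKDIR set to $WORKDIR"
```

New shell sessions will then have $WORKDIR available without re-exporting it.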
You are now ready to create the OUD deployment as per Create OUD instances.