This chapter demonstrates how to deploy Oracle Unified Directory Services Manager (OUDSM) 12c instance(s) using the Helm package manager for Kubernetes.
Based on the configuration, the oudsm Helm chart deploys a number of objects in the specified namespace of a Kubernetes cluster; these objects are detailed in the table later in this chapter.
Create a Kubernetes namespace for the OUDSM deployment by running the following command:
$ kubectl create namespace <namespace>
For example:
$ kubectl create namespace oudsmns
The output will look similar to the following:
namespace/oudsmns created
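If required, you can confirm the namespace exists before continuing; a quick check, assuming kubectl is configured against your cluster:

$ kubectl get namespace oudsmns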
Create a Kubernetes secret that stores the credentials for the container registry where the OUDSM image is stored. This step is required if you are using Oracle Container Registry or your own private container registry. If you are not using a container registry and have loaded the images on each of the master and worker nodes, you can skip this step.
Run the following command to create the secret:
$ kubectl create secret docker-registry "orclcred" --docker-server=<CONTAINER_REGISTRY> \
--docker-username="<USER_NAME>" \
--docker-password=<PASSWORD> --docker-email=<EMAIL_ID> \
--namespace=<domain_namespace>
For example, if using Oracle Container Registry:
$ kubectl create secret docker-registry "orclcred" --docker-server=container-registry.oracle.com \
--docker-username="user@example.com" \
--docker-password=password --docker-email=user@example.com \
--namespace=oudsmns
Replace <USER_NAME> and <PASSWORD> with the credentials for the registry, with the following caveats:

If using Oracle Container Registry to pull the OUDSM container image, this is the username and password used to login to Oracle Container Registry. Before you can use this image you must login to Oracle Container Registry, navigate to Middleware > oudsm_cpu, and accept the license agreement.

If using your own container registry to store the OUDSM container image, this is the username and password (or token) for your container registry.
The output will look similar to the following:
secret/orclcred created
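To confirm the secret was created, you can optionally list it; the namespace below assumes the oudsmns example used above:

$ kubectl get secret orclcred --namespace oudsmns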
As referenced in Prerequisites, the nodes in the Kubernetes cluster must have access to a persistent volume such as a Network File System (NFS) mount or a shared file system.

In this example /scratch/shared/ is a shared directory accessible from all nodes.

On the master node, run the following commands to create the oudsm_user_projects directory:
$ cd <persistent_volume>
$ mkdir oudsm_user_projects
$ sudo chown -R 1000:0 oudsm_user_projects
For example:
$ cd /scratch/shared
$ mkdir oudsm_user_projects
$ sudo chown -R 1000:0 oudsm_user_projects
On the master node, run the following to ensure it is possible to read and write to the persistent volume:

$ cd <persistent_volume>/oudsm_user_projects
$ touch filemaster.txt
$ ls filemaster.txt
For example:
$ cd /scratch/shared/oudsm_user_projects
$ touch filemaster.txt
$ ls filemaster.txt
On the first worker node run the following to ensure it is possible to read and write to the persistent volume:
$ cd /scratch/shared/oudsm_user_projects
$ ls filemaster.txt
$ touch fileworker1.txt
$ ls fileworker1.txt
Repeat the above for any other worker nodes, e.g. fileworker2.txt, etc. Once it is proven that it's possible to read and write from each node to the persistent volume, delete the files created, as shown in the example below.
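For example, a minimal cleanup run from any node with access to the shared directory, assuming the file names used above (adjust for the number of worker nodes you tested):

$ cd /scratch/shared/oudsm_user_projects
$ rm filemaster.txt fileworker1.txt fileworker2.txt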
The oudsm Helm chart allows you to create or deploy Oracle Unified Directory Services Manager instances along with Kubernetes objects in a specified namespace.

The deployment can be initiated by running the following Helm command with reference to the oudsm Helm chart, along with configuration parameters according to your environment.
$ cd $WORKDIR/kubernetes/helm
$ helm install --namespace <namespace> \
<Configuration Parameters> \
<deployment/release name> \
<Helm Chart Path/Name>
Configuration parameters (overriding values in the chart) can be passed with --set arguments on the command line and/or with -f/--values arguments when referring to files.
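For example, both styles can be combined in a single command; when they are, Helm gives --set values precedence over entries in values files. The values file name here is illustrative only:

$ helm install --namespace oudsmns \
  --values my-overrides.yaml \
  --set image.tag=<image_tag> \
  oudsm oudsm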
Note: The examples in Create OUDSM instances below provide values which allow the user to override the default values provided by the Helm chart. A full list of configuration parameters and their default values is shown in Appendix: Configuration parameters.
For more details about the helm command and parameters, please execute helm --help and helm install --help.
You can create OUDSM instances using one of the following methods:
Using a YAML file

Navigate to the $WORKDIR/kubernetes/helm directory:

$ cd $WORKDIR/kubernetes/helm

Create an oudsm-values-override.yaml file as follows:
image:
  repository: <image_location>
  tag: <image_tag>
  pullPolicy: IfNotPresent
imagePullSecrets:
  - name: orclcred
oudsm:
  adminUser: weblogic
  adminPass: <password>
persistence:
  type: filesystem
  filesystem:
    hostPath:
      path: <persistent_volume>/oudsm_user_projects
For example:
image:
  repository: container-registry.oracle.com/middleware/oudsm_cpu
  tag: 12.2.1.4-jdk8-ol8-<July'24>
  pullPolicy: IfNotPresent
imagePullSecrets:
  - name: orclcred
oudsm:
  adminUser: weblogic
  adminPass: <password>
persistence:
  type: filesystem
  filesystem:
    hostPath:
      path: /scratch/shared/oudsm_user_projects
The following caveats exist:

Replace <password> with the relevant password.
If you are not using Oracle Container Registry or your own container registry for your OUDSM container image, then you can remove the following:

imagePullSecrets:
  - name: orclcred
If using NFS for your persistent volume, then change the persistence section as follows:

persistence:
  type: networkstorage
  networkstorage:
    nfs:
      path: <persistent_volume>/oudsm_user_projects
      server: <NFS IP address>
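Before deploying, you can optionally validate the chart together with your overrides using Helm's standard lint subcommand; this is a quick sanity check, not a required step:

$ cd $WORKDIR/kubernetes/helm
$ helm lint oudsm --values oudsm-values-override.yaml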
Run the following command to deploy OUDSM:
$ helm install --namespace <namespace> \
--values oudsm-values-override.yaml \
<release_name> oudsm

For example:
$ helm install --namespace oudsmns \
--values oudsm-values-override.yaml \
oudsm oudsm
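If you want to see the manifests the chart would generate without creating anything, Helm's standard --dry-run flag can be added to the same command; this is optional:

$ helm install --namespace oudsmns \
  --values oudsm-values-override.yaml \
  --dry-run \
  oudsm oudsm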
Check the OUDSM deployment as per Verify the OUDSM deployment.
Using --set argument

Navigate to the $WORKDIR/kubernetes/helm directory:
$ cd $WORKDIR/kubernetes/helm
Run the following command to create an OUDSM instance:
$ helm install --namespace oudsmns \
--set oudsm.adminUser=weblogic,oudsm.adminPass=<password>,persistence.filesystem.hostPath.path=<persistent_volume>/oudsm_user_projects,image.repository=<image_location>,image.tag=<image_tag> \
--set imagePullSecrets[0].name="orclcred" \
<release_name> oudsm
For example:
$ helm install --namespace oudsmns \
--set oudsm.adminUser=weblogic,oudsm.adminPass=<password>,persistence.filesystem.hostPath.path=/scratch/shared/oudsm_user_projects,image.repository=container-registry.oracle.com/middleware/oudsm_cpu,image.tag=12.2.1.4-jdk8-ol8-<July'24> \
--set imagePullSecrets[0].name="orclcred" \
oudsm oudsm
The following caveats exist:

Replace <password> with the relevant password.

If you are not using Oracle Container Registry or your own container registry for your OUDSM container image, then you can remove --set imagePullSecrets[0].name="orclcred".

If using NFS for your persistent volume, then use persistence.networkstorage.nfs.path=<persistent_volume>/oudsm_user_projects,persistence.networkstorage.nfs.server=<NFS IP address> in place of the persistence.filesystem.hostPath.path parameter. A full command sketch follows this list.

Check the OUDSM deployment as per Verify the OUDSM deployment.
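For completeness, a sketch of the --set form of the command when using NFS, assuming the same release name and namespace as above. The persistence.type=networkstorage setting is included here on the assumption that it must match the YAML-file example earlier; verify against the chart's defaults:

$ helm install --namespace oudsmns \
  --set oudsm.adminUser=weblogic,oudsm.adminPass=<password> \
  --set persistence.type=networkstorage \
  --set persistence.networkstorage.nfs.path=<persistent_volume>/oudsm_user_projects \
  --set persistence.networkstorage.nfs.server=<NFS IP address> \
  --set image.repository=<image_location>,image.tag=<image_tag> \
  --set imagePullSecrets[0].name="orclcred" \
  oudsm oudsm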
In all the examples above, the following output is shown following a successful execution of the helm install command.
NAME: oudsm
LAST DEPLOYED: <DATE>
NAMESPACE: oudsmns
STATUS: deployed
REVISION: 1
TEST SUITE: None
Run the following command to verify the OUDSM deployment:
$ kubectl --namespace <namespace> get pod,service,secret,pv,pvc,ingress -o wide
For example:
$ kubectl --namespace oudsmns get pod,service,secret,pv,pvc,ingress -o wide
The output will look similar to the following:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/oudsm-1 1/1 Running 0 73m 10.244.0.19 <worker-node> <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/oudsm-1 ClusterIP 10.96.108.200 <none> 7001/TCP,7002/TCP 73m app.kubernetes.io/instance=oudsm,app.kubernetes.io/name=oudsm,oudsm/instance=oudsm-1
service/oudsm-lbr ClusterIP 10.96.41.201 <none> 7001/TCP,7002/TCP 73m app.kubernetes.io/instance=oudsm,app.kubernetes.io/name=oudsm
NAME TYPE DATA AGE
secret/orclcred kubernetes.io/dockerconfigjson 1 3h13m
secret/oudsm-creds opaque 2 73m
secret/oudsm-token-ksr4g kubernetes.io/service-account-token 3 73m
secret/sh.helm.release.v1.oudsm.v1 helm.sh/release.v1 1 73m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/oudsm-pv 30Gi RWX Retain Bound oudsmns/oudsm-pvc manual 73m Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/oudsm-pvc Bound oudsm-pv 30Gi RWX manual 73m Filesystem
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/oudsm-ingress-nginx oudsm-1,oudsm-2,oudsm + 1 more... 100.102.51.230 80 73m
Note: It will take several minutes before all the services listed above show. While the oudsm pods have a READY status of 0/1, the pod is started but the OUDSM server associated with it is currently starting. While the pod is starting you can check the startup status in the pod logs, by running the following command:
$ kubectl logs oudsm-1 -n oudsmns
Note: If the OUDSM deployment fails, additionally refer to Troubleshooting for instructions on how to describe the failing pod(s). Once the problem is identified, follow Undeploy an OUDSM deployment to remove the deployment before deploying again.
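For reference, describing a pod uses the standard kubectl syntax; the pod name below assumes the example deployment above:

$ kubectl describe pod oudsm-1 -n oudsmns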
Kubernetes objects created by the Helm chart are detailed in the table below:
Type | Name | Example Name | Purpose |
---|---|---|---|
Service Account | <deployment/release name> | oudsm | Kubernetes Service Account for the Helm Chart deployment |
Secret | <deployment/release name>-creds | oudsm-creds | Secret object for Oracle Unified Directory Services Manager related critical values like passwords |
Persistent Volume | <deployment/release name>-pv | oudsm-pv | Persistent Volume for user_projects mount. |
Persistent Volume Claim | <deployment/release name>-pvc | oudsm-pvc | Persistent Volume Claim for user_projects mount. |
Pod | <deployment/release name>-N | oudsm-1, oudsm-2, … | Pod(s)/Container(s) for Oracle Unified Directory Services Manager Instances |
Service | <deployment/release name>-N | oudsm-1, oudsm-2, … | Service(s) for HTTP and HTTPS interfaces from Oracle Unified Directory Services Manager instance <deployment/release name>-N |
Ingress | <deployment/release name>-ingress-nginx | oudsm-ingress-nginx | Ingress Rules for HTTP and HTTPS interfaces. |
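Because the chart labels its objects consistently (see the service selectors in the output earlier), most of them can be listed with a single label-filtered query; a sketch, assuming the release name oudsm:

$ kubectl --namespace oudsmns get all -l app.kubernetes.io/instance=oudsm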
With an OUDSM instance now deployed, you are ready to configure an ingress controller to direct traffic to OUDSM, as per Configure an ingress for an OUDSM.
Find the deployment release name:
$ helm --namespace <namespace> list
For example:
$ helm --namespace oudsmns list
The output will look similar to the following:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
oudsm oudsmns 2 <DATE> deployed oudsm-0.1 12.2.1.4.0
Delete the deployment using the following command:
$ helm uninstall --namespace <namespace> <release>
For example:
$ helm uninstall --namespace oudsmns oudsm

The output will look similar to the following:

release "oudsm" uninstalled
Delete the contents of the oudsm_user_projects directory in the persistent volume:
$ cd <persistent_volume>/oudsm_user_projects
$ rm -rf *
For example:
$ cd /scratch/shared/oudsm_user_projects
$ rm -rf *
The following table lists the configurable parameters of the ‘oudsm’ chart and their default values.
Parameter | Description | Default Value |
---|---|---|
replicaCount | Number of Oracle Unified Directory Services Manager instances/pods/services to be created | 1 |
restartPolicyName | restartPolicy to be configured for each pod containing an Oracle Unified Directory Services Manager instance | OnFailure |
image.repository | Oracle Unified Directory Services Manager Image Registry/Repository and name. Based on this, image parameter would be configured for Oracle Unified Directory Services Manager pods/containers | oracle/oudsm |
image.tag | Oracle Unified Directory Services Manager Image Tag. Based on this, image parameter would be configured for Oracle Unified Directory Services Manager pods/containers | 12.2.1.4.0 |
image.pullPolicy | Policy to pull the image | IfNotPresent |
imagePullSecrets.name | name of Secret resource containing private registry credentials | regcred |
nameOverride | override the fullname with this name | |
fullnameOverride | Overrides the fullname with the provided string | |
serviceAccount.create | Specifies whether a service account should be created | true |
serviceAccount.name | If not set and create is true, a name is generated using the fullname template | oudsm-< fullname >-token-< randomalphanum > |
podSecurityContext | Security context policies to add to the controller pod | |
securityContext | Security context policies to add by default | |
service.type | type of controller service to create | ClusterIP |
nodeSelector | node labels for pod assignment | |
tolerations | node taints to tolerate | |
affinity | node/pod affinities | |
ingress.enabled | Specifies whether ingress rules should be created | true |
ingress.type | Supported value: nginx | nginx |
ingress.host | Hostname to be used with Ingress Rules. If not set, hostname would be configured according to fullname. Hosts would be configured as < fullname >-http.< domain >, < fullname >-http-0.< domain >, < fullname >-http-1.< domain >, etc. | |
ingress.domain | Domain name to be used with Ingress Rules. In ingress rules, hosts would be configured as < host >.< domain >, < host >-0.< domain >, < host >-1.< domain >, etc. | |
ingress.backendPort | Backend port to be used in the ingress rules | http |
ingress.nginxAnnotations | Annotations to be added to the nginx ingress | { kubernetes.io/ingress.class: "nginx", nginx.ingress.kubernetes.io/affinity-mode: "persistent", nginx.ingress.kubernetes.io/affinity: "cookie" } |
ingress.tlsSecret | Secret name to use an already created TLS Secret. If such secret is not provided, one would be created with name < fullname >-tls-cert. If the TLS Secret is in a different namespace, the name can be specified as < namespace >/< tlsSecretName > | |
ingress.certCN | Subject’s common name (cn) for SelfSigned Cert. | < fullname > |
ingress.certValidityDays | Validity of Self-Signed Cert in days | 365 |
secret.enabled | If enabled it will use the secret created with base64 encoding. If the value is false, the secret would not be used and input values (through --set, --values, etc.) would be used while creating pods. | true |
secret.name | secret name to use an already created Secret | oudsm-< fullname >-creds |
secret.type | Specifies the type of the secret | Opaque |
persistence.enabled | If enabled, it will use the persistent volume. if value is false, PV and PVC would not be used and pods would be using the default emptyDir mount volume. | true |
persistence.pvname | pvname to use an already created Persistent Volume. If blank, will use the default name | oudsm-< fullname >-pv |
persistence.pvcname | pvcname to use an already created Persistent Volume Claim. If blank, will use the default name | oudsm-< fullname >-pvc |
persistence.type | supported values: either filesystem or networkstorage or custom | filesystem |
persistence.filesystem.hostPath.path | The path location mentioned should be created and accessible from the local host provided with necessary privileges for the user. | /scratch/shared/oudsm_user_projects |
persistence.networkstorage.nfs.path | Path of NFS Share location | /scratch/shared/oudsm_user_projects |
persistence.networkstorage.nfs.server | IP or hostname of NFS Server | 0.0.0.0 |
persistence.custom.* | Based on values/data, YAML content would be included in PersistenceVolume Object | |
persistence.accessMode | Specifies the access mode of the location provided | ReadWriteMany |
persistence.size | Specifies the size of the storage | 10Gi |
persistence.storageClass | Specifies the storageclass of the persistence volume. | empty |
persistence.annotations | specifies any annotations that will be used | { } |
oudsm.adminUser | WebLogic Administration User | weblogic |
oudsm.adminPass | Password for the WebLogic Administration User | |
oudsm.startupTime | Expected startup time. After specified seconds readinessProbe would start | 900 |
oudsm.livenessProbeInitialDelay | Parameter to decide livenessProbe initialDelaySeconds | 1200 |
elk.logStashImage | The version of logstash you want to install | logstash:8.3.1 |
elk.sslenabled | If SSL is enabled for ELK set the value to true, or if NON-SSL set to false. This value must be lowercase | true |
elk.eshosts | The URL for sending logs to Elasticsearch. HTTP if NON-SSL is used | https://elasticsearch.example.com:9200 |
elk.esuser | The name of the user for logstash to access Elasticsearch | logstash_internal |
elk.espassword | The password for ELK_USER | password |
elk.esapikey | The API key details | apikey |
elk.esindex | The log name | oudsmlogs-00001 |
elk.imagePullSecrets | secret to be used for pulling logstash image | dockercred |
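As an illustration of overriding these defaults on an existing release, the following sketch scales the deployment to two OUDSM instances using the replicaCount parameter from the table above; --reuse-values keeps the overrides supplied at install time:

$ cd $WORKDIR/kubernetes/helm
$ helm upgrade --namespace oudsmns \
  --set replicaCount=2 \
  --reuse-values \
  oudsm oudsm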