Use this Quick Start to create an Oracle SOA Suite domain deployment in a Kubernetes cluster (on-premise environments) with the WebLogic Kubernetes Operator. Note that this walkthrough is for demonstration purposes only, not for use in production. These instructions assume that you are already familiar with Kubernetes. If you need more detailed instructions, refer to the Install Guide.
Deploying and running Oracle SOA Suite domains with the operator is supported on Oracle Linux 8 and Red Hat Enterprise Linux 8. Refer to the prerequisites for more details.
For this exercise, the minimum hardware requirements to create a single-node Kubernetes cluster and then deploy the soaosb (Oracle SOA Suite, Oracle Service Bus, and Enterprise Scheduler (ESS)) domain type, with one Managed Server for Oracle SOA Suite and one for the Oracle Service Bus cluster, along with an Oracle Database running as a container, are:
| Hardware | Size |
|---|---|
| RAM | 32 GB |
| Disk space | 250 GB+ |
| CPU cores | 6 |
See here for resource sizing information for Oracle SOA Suite domains set up on a Kubernetes cluster.
Use the steps in this topic to create a single-instance on-premise Kubernetes cluster and then create an Oracle SOA Suite soaosb domain type, which deploys a domain with Oracle SOA Suite, Oracle Service Bus, and Oracle Enterprise Scheduler (ESS).
For illustration purposes, these instructions are for Oracle Linux 8. If you are using a different flavor of Linux, you will need to adjust the steps accordingly.
These steps must be run with the root user, unless specified otherwise. Any time you see YOUR_USERID in a command, replace it with your actual userid.
Choose the directories where your Kubernetes files will be stored. The Kubernetes directory is used for the /var/lib/kubelet
file system and persistent volume storage.
$ export kubelet_dir=/u01/kubelet
$ mkdir -p $kubelet_dir
$ ln -s $kubelet_dir /var/lib/kubelet
Verify that IPv4 forwarding is enabled on your host.
Note: Replace eth0 with the ethernet interface name of your compute resource if it is different.
$ /sbin/sysctl -a 2>&1|grep -s 'net.ipv4.conf.eth0.forwarding'
$ /sbin/sysctl -a 2>&1|grep -s 'net.ipv4.conf.lo.forwarding'
$ /sbin/sysctl -a 2>&1|grep -s 'net.ipv4.ip_nonlocal_bind'
For example, verify that all are set to 1:
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
net.ipv4.ip_nonlocal_bind = 1
If any value is not 1, set all values to 1 immediately:
$ /sbin/sysctl net.ipv4.conf.eth0.forwarding=1
$ /sbin/sysctl net.ipv4.conf.lo.forwarding=1
$ /sbin/sysctl net.ipv4.ip_nonlocal_bind=1
To preserve the settings permanently, update these values to 1 in the configuration files under /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/ (see the example below).
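One way to persist the values, assuming a hypothetical drop-in file named 98-k8s-forwarding.conf under /etc/sysctl.d/ (use any file name appropriate for your host):
$ cat <<EOF > /etc/sysctl.d/98-k8s-forwarding.conf
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
net.ipv4.ip_nonlocal_bind = 1
EOF
$ /sbin/sysctl --system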
Verify the iptables rule for forwarding.
Kubernetes uses iptables to handle many networking and port forwarding rules. A standard container installation may create a firewall rule that prevents forwarding.
Verify if the iptables rule to accept forwarding traffic is set:
$ /sbin/iptables -L -n | awk '/Chain FORWARD / {print $4}' | tr -d ")"
If the output is “DROP”, then run the following command:
$ /sbin/iptables -P FORWARD ACCEPT
Verify if the iptables rule is properly set to “ACCEPT”:
$ /sbin/iptables -L -n | awk '/Chain FORWARD / {print $4}' | tr -d ")"
Disable and stop firewalld:
$ systemctl disable firewalld
$ systemctl stop firewalld
Note: If you have already configured CRI-O and Podman, continue to Install and configure Kubernetes.
Make sure that you have the right operating system version:
$ uname -a
$ more /etc/oracle-release
Example output:
Linux xxxxxx 5.15.0-100.96.32.el8uek.x86_64 #2 SMP Tue Feb 27 18:08:15 PDT 2024 x86_64 x86_64 x86_64 GNU/Linux
Oracle Linux Server release 8.6
Installing CRI-O:
### Add the OLCNE (Oracle Cloud Native Environment) repository to dnf config-manager. This allows dnf to install the additional packages required for the CRI-O installation.
$ dnf config-manager --add-repo https://yum.oracle.com/repo/OracleLinux/OL9/olcne18/x86_64
### Install CRI-O
$ dnf install -y cri-o
Note: To install a different version of CRI-O or on a different operating system, see CRI-O Installation Instructions.
Start the CRI-O service:
Set up Kernel Modules and Proxies
### Enable kernel modules overlay and br_netfilter which are required for Kubernetes Container Network Interface (CNI) plugins
$ modprobe overlay
$ modprobe br_netfilter
### To automatically load these modules at system startup, create the following config file
$ cat <<EOF > /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
$ sysctl --system
### Set the environment variable CONTAINER_RUNTIME_ENDPOINT to crio.sock to use CRI-O as the container runtime
$ export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock
### Set up the proxy for the CRI-O service
$ cat <<EOF > /etc/sysconfig/crio
http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock
NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock
EOF
Set the runtime for CRI-O
### Setting the runtime for crio
## Update crio.conf
$ vi /etc/crio/crio.conf
## Append following under [crio.runtime]
conmon_cgroup = "kubepods.slice"
cgroup_manager = "systemd"
## Uncomment following under [crio.network]
network_dir="/etc/cni/net.d"
plugin_dirs=[
"/opt/cni/bin",
"/usr/libexec/cni",
]
Start the CRI-O Service
## Restart crio service
$ systemctl restart crio.service
$ systemctl enable --now crio
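Optionally, confirm that the CRI-O service is active before proceeding:
$ systemctl status crio --no-pager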
Installing Podman:
On Oracle Linux 8, if Podman is not available, install Podman and related tools with the following command:
$ sudo dnf module install container-tools:ol8
On Oracle Linux 9, if Podman is not available, install Podman and related tools with the following command:
$ sudo dnf install container-tools
Since this setup uses docker CLI commands, on Oracle Linux 8/9 install the podman-docker package (if it is not already installed), which effectively aliases the docker command to podman:
$ sudo dnf install podman-docker
Configure Podman rootless:
To use Podman with your user ID (a rootless environment), Podman requires the user running it to have a range of UIDs listed in the files /etc/subuid and /etc/subgid. Rather than updating these files directly, use the usermod program to assign UIDs and GIDs to the user with the following commands:
$ sudo /sbin/usermod --add-subuids 100000-165535 --add-subgids 100000-165535 <REPLACE_USER_ID>
$ podman system migrate
Note: The podman system migrate command above must be executed as your user ID, not as root.
Verify the user ID addition:
$ cat /etc/subuid
$ cat /etc/subgid
Expected output, similar to:
opc:100000:65536
<user-id>:100000:65536
Add the external Kubernetes repository:
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
Set SELinux in permissive mode (effectively disabling it):
$ export PATH=/sbin:$PATH
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Export proxy settings and enable kubelet:
### Get the nslookup IP address of the master node to use as the apiserver-advertise-address when setting up the Kubernetes master,
### because the host may have a different internal IP (hostname -i) than the one returned by nslookup $HOSTNAME
$ ip_addr=`nslookup $(hostname -f) | grep -m2 Address | tail -n1| awk -F: '{print $2}'| tr -d " "`
$ echo $ip_addr
### Set the proxies
$ export NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock,$ip_addr,.svc
$ export no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock,$ip_addr,.svc
$ export http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
$ export https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
$ export HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
$ export HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
### Install the kubernetes components and enable the kubelet service so that it automatically restarts on reboot
$ dnf install -y kubeadm kubelet kubectl
$ systemctl enable --now kubelet
Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration to avoid traffic routing issues:
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
$ sysctl --system
Disable swap check:
$ sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--fail-swap-on=false"/' /etc/sysconfig/kubelet
$ cat /etc/sysconfig/kubelet
### Reload and restart kubelet
$ systemctl daemon-reload
$ systemctl restart kubelet
Pull the images using CRI-O:
$ kubeadm config images pull --cri-socket unix:///var/run/crio/crio.sock
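If crictl is available on the host (it is typically provided by the cri-tools package), you can list the pulled images to confirm:
$ crictl images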
Install Helm v3.10.2+.
a. Download Helm from https://github.com/helm/helm/releases.
For example, to download Helm v3.10.2:
$ wget https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
b. Unpack the tar.gz file:
$ tar -zxvf helm-v3.10.2-linux-amd64.tar.gz
c. Find the Helm binary in the unpacked directory, and move it to its desired destination:
$ mv linux-amd64/helm /usr/bin/helm
Run helm version to verify the installation:
$ helm version
version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}
Notes:
- These steps must be run with the root user, unless specified otherwise.
- If you choose to use a different CIDR block (that is, other than 10.244.0.0/16 for the --pod-network-cidr= in the kubeadm init command), then also update NO_PROXY and no_proxy with the appropriate value. Also make sure to update kube-flannel.yml with the new value before deploying.
- Replace the following with appropriate values:
  - ADD-YOUR-INTERNAL-NO-PROXY-LIST
  - REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
Create a shell script that sets up the necessary environment variables. You can append this to the user’s .bashrc
so that it will run at login. You must also configure your proxy settings here if you are behind an HTTP proxy:
## grab my IP address to pass into kubeadm init, and to add to no_proxy vars
ip_addr=`nslookup $(hostname -f) | grep -m2 Address | tail -n1| awk -F: '{print $2}'| tr -d " "`
export pod_network_cidr="10.244.0.0/16"
export service_cidr="10.96.0.0/12"
export PATH=$PATH:/sbin:/usr/sbin
### Set the proxies
export NO_PROXY=localhost,.svc,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock,$ip_addr,$pod_network_cidr,$service_cidr
export no_proxy=localhost,.svc,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock,$ip_addr,$pod_network_cidr,$service_cidr
export http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
export https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
export HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
export HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
Source the script to set up your environment variables:
$ . ~/.bashrc
To implement command completion, add the following to the script:
$ [ -f /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion
$ source <(kubectl completion bash)
Run kubeadm init to create the master node:
$ kubeadm init \
--pod-network-cidr=$pod_network_cidr \
--apiserver-advertise-address=$ip_addr \
--ignore-preflight-errors=Swap > /tmp/kubeadm-init.out 2>&1
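The command output is redirected to /tmp/kubeadm-init.out. If initialization succeeds, the end of that file contains the "Your Kubernetes control-plane has initialized successfully!" message along with a kubeadm join command:
$ tail -n 20 /tmp/kubeadm-init.out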
Log in to the terminal with YOUR_USERID:YOUR_GROUP. Then set up the ~/.bashrc similar to steps 1 to 3 with YOUR_USERID:YOUR_GROUP.
Note that from now on we will be using YOUR_USERID:YOUR_GROUP to execute any kubectl commands, not root.
Set up YOUR_USERID:YOUR_GROUP to access the Kubernetes cluster:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify that YOUR_USERID:YOUR_GROUP is set up to access the Kubernetes cluster using the kubectl command:
$ kubectl get nodes
Note: At this step, the node is not in ready state as we have not yet installed the pod network add-on. After the next step, the node will show status as Ready.
Install a pod network add-on (flannel) so that your pods can communicate with each other.
Note: If you are using a different CIDR block than 10.244.0.0/16, then download and update kube-flannel.yml with the correct CIDR address before deploying into the cluster:
$ wget https://github.com/flannel-io/flannel/releases/download/v0.25.1/kube-flannel.yml
$ ### Update the CIDR address if you are using a CIDR block other than the default 10.244.0.0/16
$ kubectl apply -f kube-flannel.yml
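For example, if you chose 10.250.0.0/16 as the pod network CIDR (the value here is only illustrative), a sed replacement such as the following, run before kubectl apply and assuming the default Network value in the manifest's net-conf.json, would update the manifest:
$ sed -i 's|10.244.0.0/16|10.250.0.0/16|g' kube-flannel.yml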
Verify that the master node is in Ready status:
$ kubectl get nodes
Sample output:
NAME STATUS ROLES AGE VERSION
mymasternode Ready control-plane 12h v1.27.2
or:
$ kubectl get pods -n kube-system
Sample output:
NAME READY STATUS RESTARTS AGE
pod/coredns-86c58d9df4-58p9f 1/1 Running 0 3m59s
pod/coredns-86c58d9df4-mzrr5 1/1 Running 0 3m59s
pod/etcd-mymasternode 1/1 Running 0 3m4s
pod/kube-apiserver-node 1/1 Running 0 3m21s
pod/kube-controller-manager-mymasternode 1/1 Running 0 3m25s
pod/kube-flannel-ds-6npx4 1/1 Running 0 49s
pod/kube-proxy-4vsgm 1/1 Running 0 3m59s
pod/kube-scheduler-mymasternode 1/1 Running 0 2m58s
To allow pods to be scheduled on the master node, remove the control-plane taint from the node:
$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
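You can confirm that the taint was removed; the node should report Taints: <none>:
$ kubectl describe nodes | grep -i taint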
Congratulations! Your Kubernetes cluster environment is ready to deploy your Oracle SOA Suite domain.
Refer to the official documentation to set up a Kubernetes cluster.
Create a working directory to set up the source code:
$ mkdir $HOME/soa_24.3.2
$ cd $HOME/soa_24.3.2
Download the WebLogic Kubernetes Operator source code and Oracle SOA Suite Kubernetes deployment scripts from the SOA repository. Required artifacts are available at OracleSOASuite/kubernetes.
$ git clone https://github.com/oracle/fmw-kubernetes.git
$ export WORKDIR=$HOME/soa_24.3.2/fmw-kubernetes/OracleSOASuite/kubernetes
Pull the WebLogic Kubernetes Operator image:
$ podman pull ghcr.io/oracle/weblogic-kubernetes-operator:4.2.4
Obtain the Oracle Database image and Oracle SOA Suite Docker image from the Oracle Container Registry:
a. For first time users, to pull an image from the Oracle Container Registry, navigate to https://container-registry.oracle.com and log in using the Oracle Single Sign-On (SSO) authentication service. If you do not already have SSO credentials, you can create an Oracle Account using: https://profile.oracle.com/myprofile/account/create-account.jspx.
Use the web interface to accept the Oracle Standard Terms and Restrictions for the Oracle software images that you intend to deploy. Your acceptance of these terms is stored in a database that links the software images to your Oracle Single Sign-On login credentials.
To obtain the image, log in to the Oracle Container Registry:
$ podman login container-registry.oracle.com
b. Find and then pull the Oracle Database image for 12.2.0.1:
$ podman pull container-registry.oracle.com/database/enterprise:12.2.0.1-slim
c. Find and then pull the prebuilt Oracle SOA Suite 12.2.1.4 image:
$ podman pull container-registry.oracle.com/middleware/soasuite:12.2.1.4
Note: This image does not contain any Oracle SOA Suite product patches and can only be used for test and development purposes.
Create a namespace opns for the operator:
$ kubectl create namespace opns
Create a service account op-sa for the operator in the operator's namespace:
$ kubectl create serviceaccount -n opns op-sa
Set up Helm with the location of the WebLogic Operator Helm Chart:
$ helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update
Use Helm to install and start the operator:
$ helm install weblogic-kubernetes-operator weblogic-operator/weblogic-operator \
--namespace opns \
--set version=4.2.4 \
--set serviceAccount=op-sa \
--wait
Verify that the operator’s pod is running by listing the pods in the operator’s namespace. You should see one for the operator:
$ kubectl get pods -n opns
Verify that the operator is up and running by viewing the operator pod’s logs:
$ kubectl logs -n opns -c weblogic-operator deployments/weblogic-operator
The WebLogic Kubernetes Operator v4.2.4 has been installed. Continue with the load balancer and Oracle SOA Suite domain setup.
The WebLogic Kubernetes Operator supports these load balancers: Traefik, NGINX, and Apache. Samples are provided in the documentation.
This Quick Start demonstrates how to install the Traefik ingress controller to provide load balancing for an Oracle SOA Suite domain.
Create a namespace for Traefik:
$ kubectl create namespace traefik
Set up Helm for 3rd party services:
$ helm repo add traefik https://helm.traefik.io/traefik --force-update
Install the Traefik operator in the traefik namespace with the provided sample values:
$ cd ${WORKDIR}
$ helm install traefik traefik/traefik \
--namespace traefik \
--values charts/traefik/values.yaml \
--set "kubernetes.namespaces={traefik}" \
--set "service.type=NodePort" \
--wait
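Optionally, verify that the Traefik pod is running in the traefik namespace:
$ kubectl get pods -n traefik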
Create a namespace that can host Oracle SOA Suite domains. Label the namespace with weblogic-operator=enabled to manage the domain.
$ kubectl create namespace soans
$ kubectl label namespace soans weblogic-operator=enabled
Create Kubernetes secrets.
a. Create a Kubernetes secret for the domain in the same Kubernetes namespace as the domain. In this example, the username is weblogic, the password is Welcome1, and the namespace is soans:
$ cd ${WORKDIR}/create-weblogic-domain-credentials
$ ./create-weblogic-credentials.sh \
-u weblogic \
-p Welcome1 \
-n soans \
-d soainfra \
-s soainfra-domain-credentials
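If you want to confirm that the secret was created (its name matches the -s value passed above), you can list it in the domain namespace:
$ kubectl get secret soainfra-domain-credentials -n soans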
b. Create a Kubernetes secret for the RCU in the same Kubernetes namespace as the domain. In this example, the schema user is SOA1, the schema password is Oradoc_db1, the database SYS password is Oradoc_db1, the domain UID is soainfra, the namespace is soans, and the secret name is soainfra-rcu-credentials:
$ cd ${WORKDIR}/create-rcu-credentials
$ ./create-rcu-credentials.sh \
-u SOA1 \
-p Oradoc_db1 \
-a sys \
-q Oradoc_db1 \
-d soainfra \
-n soans \
-s soainfra-rcu-credentials
Create the Kubernetes persistent volume (PV) and persistent volume claim (PVC).
a. Create the Oracle SOA Suite domain home directory.
Determine if a user already exists on your host system with uid:gid of 1000:0:
$ sudo getent passwd 1000
If this command returns a username (the first field), you can skip the following useradd command. If not, create the oracle user with useradd:
$ sudo useradd -u 1000 -g 0 oracle
Create the directory that will be used for the Oracle SOA Suite domain home:
$ sudo mkdir /scratch/k8s_dir
$ sudo chown -R 1000:0 /scratch/k8s_dir
b. The create-pv-pvc-inputs.yaml file has the following values by default:
- baseName: domain
- domainUID: soainfra
- namespace: soans
- weblogicDomainStoragePath: /scratch/k8s_dir

Review these values and update them if any changes are required.
$ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
$ vim create-pv-pvc-inputs.yaml
c. Run the create-pv-pvc.sh script to create the PV and PVC configuration files:
$ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
$ ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o output
d. Create the PV and PVC using the configuration files created in the previous step:
$ kubectl create -f output/pv-pvcs/soainfra-domain-pv.yaml
$ kubectl create -f output/pv-pvcs/soainfra-domain-pvc.yaml
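Optionally, check that the PVC is bound to the PV. The resource names below assume the default soainfra-domain-pv and soainfra-domain-pvc names generated by the script:
$ kubectl get pv soainfra-domain-pv
$ kubectl get pvc soainfra-domain-pvc -n soans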
Install and configure the database for the Oracle SOA Suite domain.
This step is required only when a standalone database is not already set up and you want to use the database in a container.
The Oracle Database Docker images are supported only for non-production use. For more details, see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1). For production, it is suggested to use a standalone database. This example provides steps to create the database in a container.
a. Create a secret with your desired password (for example, Oradoc_db1):
$ kubectl create secret generic oracle-db-secret --from-literal='password=Oradoc_db1'
b. Create a database in a container:
$ cd ${WORKDIR}/create-oracle-db-service
$ ./start-db-service.sh -a oracle-db-secret -i container-registry.oracle.com/database/enterprise:12.2.0.1-slim -p none
Once the database is successfully created, you can use the database connection string oracle-db.default.svc.cluster.local:1521/devpdb.k8s as the rcuDatabaseURL parameter in the create-domain-inputs.yaml file.
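To confirm that the database pod and service are up (they are created in the default namespace, as the connection string above indicates), you can run something like:
$ kubectl get pods,svc -n default | grep oracle-db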
c. Create Oracle SOA Suite schemas for the domain type (for example, soaosb).
Create a secret that contains the database’s SYSDBA username and password.
$ kubectl -n default create secret generic oracle-rcu-secret \
--from-literal='sys_username=sys' \
--from-literal='sys_password=Oradoc_db1' \
--from-literal='password=Oradoc_db1'
To install the Oracle SOA Suite schemas, run the create-rcu-schema.sh script with the following inputs:
-s <RCU PREFIX>
-t <SOA domain type>
-d <Oracle Database URL>
-i <SOASuite image>
-n <Namespace>
-c <Name of credentials secret containing SYSDBA username and password and RCU schema owner password>
-r <Comma-separated variables>
-l <Timeout limit in seconds (optional; default: 300)>
For example:
$ cd ${WORKDIR}/create-rcu-schema
$ ./create-rcu-schema.sh \
-s SOA1 \
-t soaosb \
-d oracle-db.default.svc.cluster.local:1521/devpdb.k8s \
-i container-registry.oracle.com/middleware/soasuite:12.2.1.4 \
-n default \
-c oracle-rcu-secret \
-r SOA_PROFILE_TYPE=SMALL,HEALTHCARE_INTEGRATION=NO
Now the environment is ready to start the Oracle SOA Suite domain creation.
The sample scripts for Oracle SOA Suite domain deployment are available at ${WORKDIR}/create-soa-domain/domain-home-on-pv. You must edit create-domain-inputs.yaml (or a copy of it) to provide the details for your domain.
Update create-domain-inputs.yaml with the following values for domain creation:
- domainType: soaosb
- initialManagedServerReplicas: 1
$ cd ${WORKDIR}/create-soa-domain/domain-home-on-pv/
$ cp create-domain-inputs.yaml create-domain-inputs.yaml.orig
$ sed -i -e "s:domainType\: soa:domainType\: soaosb:g" create-domain-inputs.yaml
$ sed -i -e "s:initialManagedServerReplicas\: 2:initialManagedServerReplicas\: 1:g" create-domain-inputs.yaml
$ sed -i -e "s:image\: soasuite\:12.2.1.4:image\: container-registry.oracle.com/middleware/soasuite\:12.2.1.4:g" create-domain-inputs.yaml
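As a quick sanity check of the edited values (a simple grep; adjust the file name if you edited a copy):
$ grep -E "^(domainType|initialManagedServerReplicas|image):" create-domain-inputs.yaml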
Run the create-domain.sh script to create a domain:
$ cd ${WORKDIR}/create-soa-domain/domain-home-on-pv/
$ ./create-domain.sh -i create-domain-inputs.yaml -o output
Create a Kubernetes domain object:
Once create-domain.sh completes successfully, it generates output/weblogic-domains/soainfra/domain.yaml, which you can use to create the Kubernetes domain resource and start the domain and servers:
$ cd ${WORKDIR}/create-soa-domain/domain-home-on-pv
$ kubectl create -f output/weblogic-domains/soainfra/domain.yaml
Verify that the Kubernetes domain object named soainfra is created:
$ kubectl get domain -n soans
NAME AGE
soainfra 3m18s
Once you create the domain, the introspect pod is created. It inspects the domain home and then starts the soainfra-adminserver pod. Once the soainfra-adminserver pod starts successfully, the Managed Server pods are started in parallel.
Watch the soans namespace for the status of domain creation:
$ kubectl get pods -n soans -w
Verify that the Oracle SOA Suite domain server pods and services are created and in Ready state:
$ kubectl get all -n soans
Configure Traefik to manage ingresses created in the Oracle SOA Suite domain namespace (soans):
$ helm upgrade traefik traefik/traefik \
--reuse-values \
--namespace traefik \
--set "kubernetes.namespaces={traefik,soans}" \
--wait
Create an ingress for the domain in the domain namespace by using the sample Helm chart:
$ cd ${WORKDIR}
$ export LOADBALANCER_HOSTNAME=$(hostname -f)
$ helm install soa-traefik-ingress charts/ingress-per-domain \
--namespace soans \
--values charts/ingress-per-domain/values.yaml \
--set "traefik.hostname=${LOADBALANCER_HOSTNAME}" \
--set domainType=soaosb
Verify the details of the ingress created for the domain:
$ kubectl describe ingress soainfra-traefik -n soans
Get the LOADBALANCER_HOSTNAME for your environment:
$ export LOADBALANCER_HOSTNAME=$(hostname -f)
Verify that the following URLs are available for Oracle SOA Suite domains of domain type soaosb:
Credentials:
username: weblogic
password: Welcome1
http://${LOADBALANCER_HOSTNAME}:30305/console
http://${LOADBALANCER_HOSTNAME}:30305/em
http://${LOADBALANCER_HOSTNAME}:30305/servicebus
http://${LOADBALANCER_HOSTNAME}:30305/soa-infra
http://${LOADBALANCER_HOSTNAME}:30305/soa/composer
http://${LOADBALANCER_HOSTNAME}:30305/integration/worklistapp
http://${LOADBALANCER_HOSTNAME}:30305/ess
http://${LOADBALANCER_HOSTNAME}:30305/EssHealthCheck
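As a quick smoke test (assuming the Traefik NodePort 30305 shown above), you can check that the soa-infra URL responds with an HTTP 200 status code:
$ curl -s -o /dev/null -w "%{http_code}\n" -u weblogic:Welcome1 http://${LOADBALANCER_HOSTNAME}:30305/soa-infra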