This sample demonstrates how to use the WebLogic Kubernetes Operator (hereafter “the operator”) to set up a WebLogic Server (WLS) cluster on the Tanzu Kubernetes Grid (TKG). After performing the sample steps, your WLS domain with a Model in Image domain source type runs on a TKG Kubernetes cluster instance. After the domain has been provisioned, you can monitor it using the WebLogic Server Administration console.
TKG is a managed Kubernetes Service that lets you quickly deploy and manage Kubernetes clusters. To learn more, see the Tanzu Kubernetes Grid (TKG) overview page.
This sample assumes the following prerequisite environment setup:
- Run git --version to test if git works. This document was tested with version 2.17.1.
- Run tkg version to test if TKG works. This document was tested with version v1.1.3.
- Run kubectl version to test if kubectl works. This document was tested with version v1.18.6.
- Run helm version to check the helm version. This document was tested with version v3.2.1.
- See Supported environments for general operator prerequisites and operator support limitations that are specific to Tanzu.
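For example, you can run the checks from a shell; the versions reported in your environment will differ:

# Confirm each required CLI tool is installed and on the PATH.
$ git --version
$ tkg version
$ kubectl version
$ helm version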
Create the Kubernetes cluster using the TKG CLI. See the Tanzu documentation to set up your Kubernetes cluster.
After your Kubernetes cluster is up and running, run the following commands to make sure kubectl
can access the Kubernetes cluster:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-cluster-101-control-plane-8nj7t NotReady master 2d20h v1.18.6+vmware.1 192.168.100.147 192.168.100.147 VMware Photon OS/Linux 4.19.132-1.ph3 containerd://1.3.4
k8s-cluster-101-md-0-577b7dc766-552hn Ready <none> 2d20h v1.18.6+vmware.1 192.168.100.148 192.168.100.148 VMware Photon OS/Linux 4.19.132-1.ph3 containerd://1.3.4
k8s-cluster-101-md-0-577b7dc766-m8wrc Ready <none> 2d20h v1.18.6+vmware.1 192.168.100.149 192.168.100.149 VMware Photon OS/Linux 4.19.132-1.ph3 containerd://1.3.4
k8s-cluster-101-md-0-577b7dc766-p2gkz Ready <none> 2d20h v1.18.6+vmware.1 192.168.100.150 192.168.100.150 VMware Photon OS/Linux 4.19.132-1.ph3 containerd://1.3.4
You will need an Oracle Container Registry account. The following steps will direct you to accept the Oracle Standard Terms and Restrictions to pull the WebLogic Server images. Make note of your Oracle Account password and email. This sample pertains to 12.2.1.4, but other versions may work as well.
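For example, after accepting the terms in the registry web interface, you can typically log in and pull the image ahead of time; the commands below assume Docker is your local container tool:

# Log in to the Oracle Container Registry using your Oracle Account email and password.
$ docker login container-registry.oracle.com
# Pull the WebLogic Server 12.2.1.4 image used later in this sample.
$ docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.4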
The WebLogic Kubernetes Operator is an adapter to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WLS instances. The operator runs as a Kubernetes Pod and stands ready to perform actions related to running WLS on Kubernetes.
Clone the WebLogic Kubernetes Operator repository to your machine. We will use several scripts in this repository to create a WebLogic domain.
Kubernetes Operators use Helm to manage Kubernetes applications. The operator’s Helm chart is located in the kubernetes/charts/weblogic-operator
directory. Install the operator by running the following commands.
Clone the repository.
$ git clone --branch v4.2.13 https://github.com/oracle/weblogic-kubernetes-operator.git
$ cd weblogic-kubernetes-operator
Grant the Helm service account the cluster-admin role.
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
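If you want to confirm that the binding exists, you can list it by name:

$ kubectl get clusterrolebinding helm-user-cluster-admin-role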
Create a namespace and service account for the operator.
$ kubectl create namespace sample-weblogic-operator-ns
namespace/sample-weblogic-operator-ns created
$ kubectl create serviceaccount -n sample-weblogic-operator-ns sample-weblogic-operator-sa
serviceaccount/sample-weblogic-operator-sa created
Install the operator.
$ helm install weblogic-operator kubernetes/charts/weblogic-operator \
--namespace sample-weblogic-operator-ns \
--set serviceAccount=sample-weblogic-operator-sa \
--wait
NAME: weblogic-operator
LAST DEPLOYED: Tue Nov 17 09:33:58 2020
NAMESPACE: sample-weblogic-operator-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
Verify the operator with the following commands; the operator pod status should be Running.
$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
weblogic-operator sample-weblogic-operator-ns 1 2020-11-17 09:33:58.584239273 -0700 PDT deployed weblogic-operator-4.2.13 4.2.13
$ kubectl get pods -n sample-weblogic-operator-ns
NAME READY STATUS RESTARTS AGE
weblogic-operator-775b668c8f-nwwnn 1/1 Running 0 32s
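Optionally, check the operator log to confirm it started cleanly; the deployment name below is created by the Helm chart installed above:

$ kubectl logs -n sample-weblogic-operator-ns deployment/weblogic-operator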
The JAVA_HOME environment variable must be set and must reference a valid JDK 8 or 11 installation.

Copy the Model in Image sample to a new directory; for example, use the directory /tmp/mii-sample.

$ mkdir /tmp/mii-sample
$ cp -r /root/weblogic-kubernetes-operator/kubernetes/samples/scripts/create-weblogic-domain/model-in-image/* /tmp/mii-sample

NOTE: We will refer to this working copy of the sample as /tmp/mii-sample; however, you can use a different location.
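If JAVA_HOME is not already set, point it at a local JDK before running the image tooling; the installation path below is only a placeholder and will differ on your machine:

# Example only: adjust the path to your JDK 8 or 11 installation.
$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
$ $JAVA_HOME/bin/java -version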
Download the latest WebLogic Deploy Tooling (WDT) and WebLogic Image Tool (WIT) installer ZIP files to your /tmp/mii-sample/model-images
directory. Both WDT and WIT are required to create your Model in Image container images.
$ cd /tmp/mii-sample/model-images
$ curl -m 120 -fL https://github.com/oracle/weblogic-deploy-tooling/releases/latest/download/weblogic-deploy.zip \
-o /tmp/mii-sample/model-images/weblogic-deploy.zip
$ curl -m 120 -fL https://github.com/oracle/weblogic-image-tool/releases/latest/download/imagetool.zip \
-o /tmp/mii-sample/model-images/imagetool.zip
To set up the WebLogic Image Tool, run the following commands:
$ cd /tmp/mii-sample/model-images
$ unzip imagetool.zip
$ ./imagetool/bin/imagetool.sh cache addInstaller \
--type wdt \
--version latest \
--path /tmp/mii-sample/model-images/weblogic-deploy.zip
These steps will install WIT to the /tmp/mii-sample/model-images/imagetool
directory, plus put a wdt_latest
entry in the tool’s cache which points to the WDT ZIP file installer. You will use WIT later in the sample for creating model images.
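You can confirm the installer was registered by listing the cache contents; the wdt_latest entry should point at the ZIP file you downloaded:

$ ./imagetool/bin/imagetool.sh cache listItems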
The goal of image creation is to demonstrate using the WebLogic Image Tool to create an image named model-in-image:WLS-v1 from files that you will stage to /tmp/mii-sample/model-images/model-in-image__WLS-v1/.
The staged files will contain a web application in a WDT archive, and WDT model configuration for a WebLogic Administration Server called admin-server and a WebLogic cluster called cluster-1.
Overall, a Model in Image image must contain a WebLogic Server installation and a WebLogic Deploy Tooling installation in its /u01/wdt/weblogic-deploy directory.

In addition, if you have WDT model archive files, then the image must also contain these files in its /u01/wdt/models directory.

Finally, an image optionally may also contain your WDT model YAML file and properties files in the same /u01/wdt/models directory.

If you do not specify a WDT model YAML file in your /u01/wdt/models directory, then the model YAML file must be supplied dynamically using a Kubernetes ConfigMap that is referenced by your Domain spec.model.configMap field. We provide an example of using a model ConfigMap later in this sample.

The following sections contain the steps for creating the image model-in-image:WLS-v1.
The sample includes a predefined archive directory in /tmp/mii-sample/archives/archive-v1
that you will use to create an archive ZIP file for the image.
The archive top directory, named wlsdeploy, contains a directory named applications, which includes an ‘exploded’ sample JSP web application in the directory myapp-v1. Three useful aspects to remember about WDT archives are:

- A model image can contain multiple WDT archives.
- WDT archives can contain multiple applications, libraries, and other components.
- WDT archives have a well-defined directory structure, which always includes wlsdeploy as the top directory.

The application displays important details about the WebLogic Server instance that it’s running on: namely, its domain name, cluster name, and server name, as well as the names of any data sources that are targeted to the server.
When you create the image, you will use the files in the staging directory, /tmp/mii-sample/model-images/model-in-image__WLS-v1. In preparation, you need it to contain a ZIP file of the WDT application archive.
Run the following commands to create your application archive ZIP file and put it in the expected directory:
# Delete existing archive.zip in case we have an old leftover version
$ rm -f /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip
# Move to the directory which contains the source files for our archive
$ cd /tmp/mii-sample/archives/archive-v1
# Zip the archive to the location we will later use when we run the WebLogic Image Tool
$ zip -r /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip wlsdeploy
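If you want to sanity check the result, list the ZIP contents; every entry should sit under the wlsdeploy top directory:

$ unzip -l /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip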
In this step, you explore the staged WDT model YAML file and properties in the /tmp/mii-sample/model-images/model-in-image__WLS-v1 directory. The model in this directory references the web application in your archive, configures a WebLogic Server Administration Server, and configures a WebLogic cluster. It consists of only two files: model.10.properties, a file with a single property, and model.10.yaml, a YAML file with your WebLogic configuration.

Here is the model.10.properties file:

CLUSTER_SIZE=5

Here is the WLS model.10.yaml:
domainInfo:
    AdminUserName: '@@SECRET:__weblogic-credentials__:username@@'
    AdminPassword: '@@SECRET:__weblogic-credentials__:password@@'
    ServerStartMode: 'prod'

topology:
    Name: '@@ENV:CUSTOM_DOMAIN_NAME@@'
    AdminServerName: 'admin-server'
    Cluster:
        'cluster-1':
            DynamicServers:
                ServerTemplate: 'cluster-1-template'
                ServerNamePrefix: 'managed-server'
                DynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
                MaxDynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
                MinDynamicClusterSize: '0'
                CalculatedListenPorts: false
    Server:
        'admin-server':
            ListenPort: 7001
    ServerTemplate:
        'cluster-1-template':
            Cluster: 'cluster-1'
            ListenPort: 8001

appDeployments:
    Application:
        myapp:
            SourcePath: 'wlsdeploy/applications/myapp-v1'
            ModuleType: ear
            Target: 'cluster-1'
The model files:

Define a WebLogic domain with:

- A cluster named cluster-1.
- An Administration Server named admin-server.
- A cluster-1 targeted EAR application that’s located in the WDT archive ZIP file at wlsdeploy/applications/myapp-v1.

Leverage macros to inject external values:

- The CLUSTER_SIZE property is referenced in the model YAML file DynamicClusterSize and MaxDynamicClusterSize fields using a PROP macro.
- The domain name is injected using a custom environment variable named CUSTOM_DOMAIN_NAME using an ENV macro.
- The WebLogic administrator user name and password are set using a weblogic-credentials secret macro reference to the WebLogic credential secret. This secret is in turn referenced using the webLogicCredentialsSecret field in the Domain. The weblogic-credentials name is a reserved name that always dereferences to the owning Domain's actual WebLogic credentials secret name.

A Model in Image image can contain multiple properties files, archive ZIP files, and YAML files, but in this sample you use just one of each. For a complete description of Model in Image model file naming conventions, file loading order, and macro syntax, see Model files in the Model in Image user documentation.
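As a concrete illustration of the macro syntax (these lines are taken from the model above, not an additional file), each @@...@@ placeholder is resolved at runtime from a property, an environment variable, or a Kubernetes Secret:

# PROP macro: resolved from model.10.properties, so this becomes 5.
DynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
# ENV macro: resolved from the CUSTOM_DOMAIN_NAME environment variable set on the domain's server pods.
Name: '@@ENV:CUSTOM_DOMAIN_NAME@@'
# SECRET macro: resolved from the 'username' key of the domain's WebLogic credentials secret.
AdminUserName: '@@SECRET:__weblogic-credentials__:username@@'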
At this point, you have staged all of the files needed for the image model-in-image:WLS-v1
; they include:
/tmp/mii-sample/model-images/weblogic-deploy.zip
/tmp/mii-sample/model-images/model-in-image__WLS-v1/model.10.yaml
/tmp/mii-sample/model-images/model-in-image__WLS-v1/model.10.properties
/tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip
If you don’t see the weblogic-deploy.zip
file, then you missed a step in the prerequisites.
Now, you use the Image Tool to create an image named model-in-image:WLS-v1
that’s layered on a base WebLogic image. You’ve already set up this tool during the prerequisite steps.
Run the following commands to create the model image and verify that it worked:
$ cd /tmp/mii-sample/model-images
$ ./imagetool/bin/imagetool.sh update \
--tag model-in-image:WLS-v1 \
--fromImage container-registry.oracle.com/middleware/weblogic:12.2.1.4 \
--wdtModel ./model-in-image__WLS-v1/model.10.yaml \
--wdtVariables ./model-in-image__WLS-v1/model.10.properties \
--wdtArchive ./model-in-image__WLS-v1/archive.zip \
--wdtModelOnly \
--wdtDomainType WLS \
--chown oracle:root
If you don’t see the imagetool
directory, then you missed a step in the prerequisites.
This command runs the WebLogic Image Tool in its Model in Image mode, and does the following:

- Builds the final container image as a layer on the container-registry.oracle.com/middleware/weblogic:12.2.1.4 base image.
- Copies the WDT ZIP file that's referenced in the WIT cache into the image. Note that you registered WDT in the cache under the keyword latest when you set up the cache during the sample prerequisites steps, so you do not need to pass a --wdtVersion flag.
- Copies the specified WDT model, properties, and archive files to the image location /u01/wdt/models.

When the command succeeds, it should end with output like the following:
[INFO ] Build successful. Build time=36s. Image tag=model-in-image:WLS-v1
Also, if you run the docker images
command, then you will see an image named model-in-image:WLS-v1
.
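For example (the image ID, created time, and size will differ in your environment):

$ docker images model-in-image:WLS-v1
REPOSITORY       TAG      IMAGE ID       CREATED              SIZE
model-in-image   WLS-v1   <image-id>     About a minute ago   ...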
NOTE: If you have Kubernetes cluster worker nodes that are remote to your local machine, then you need to put the image in a location that these nodes can access. See Ensuring your Kubernetes cluster can access images.
This sample uses General Availability (GA) images. GA images are suitable for demonstration and development purposes only where the environments are not available from the public Internet; they are not acceptable for production use. In production, you should always use CPU (patched) images from OCR or create your images using the WebLogic Image Tool (WIT) with the --recommendedPatches
option. For more guidance, see Apply the Latest Patches and Updates in Securing a Production Environment for Oracle WebLogic Server.
In this section, you will deploy the new image to the namespace sample-domain1-ns, including the following steps:

- Create a namespace that can host one or more domains.
- Create a Secret containing your WebLogic administrator user name and password.
- Create a Secret containing your Model in Image runtime encryption password value.
- Create a Domain YAML file that references the new image and apply it.
- Wait for the WebLogic Server pods to start and reach their ready state.

Create a namespace that can host one or more domains:
$ kubectl create namespace sample-domain1-ns
## label the domain namespace so that the operator can autodetect and create WebLogic Server pods.
$ kubectl label namespace sample-domain1-ns weblogic-operator=enabled
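You can confirm that the namespace now carries the label the operator looks for:

$ kubectl get namespace sample-domain1-ns --show-labels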
First, create the secrets needed by the WLS type model domain. In this case, you have two secrets.
Run the following kubectl
commands to deploy the required secrets:
$ kubectl -n sample-domain1-ns create secret generic \
sample-domain1-weblogic-credentials \
--from-literal=username=<wl admin username> \
--from-literal=password=<wl admin password>
$ kubectl -n sample-domain1-ns label secret \
sample-domain1-weblogic-credentials \
weblogic.domainUID=sample-domain1
$ kubectl -n sample-domain1-ns create secret generic \
sample-domain1-runtime-encryption-secret \
--from-literal=password=<mii runtime encryption pass>
$ kubectl -n sample-domain1-ns label secret \
sample-domain1-runtime-encryption-secret \
weblogic.domainUID=sample-domain1
Some important details about these secrets:

Choosing passwords and usernames:

- Replace <wl admin username> and <wl admin password> with a username and password of your choice. The password should be at least eight characters long and include at least one digit. Remember what you specified; these credentials may be needed again later.
- Replace <mii runtime encryption pass> with a password of your choice.

The WebLogic credentials secret:

- It must contain username and password fields.
- It is referenced by the spec.webLogicCredentialsSecret field in your Domain.
- It is also referenced by the domainInfo.AdminUserName and domainInfo.AdminPassWord fields in your model YAML file.

The Model WDT runtime secret:

- It must contain a password field.
- It is referenced by the spec.model.runtimeEncryptionSecret field in its Domain.

Deleting and recreating the secrets:

- Delete a secret before creating it, otherwise the create will fail if the secret already exists. This allows you to change the secret when using the kubectl create secret command.

You name and label secrets using their associated domain UID for two reasons:

- To make it obvious which secrets belong to which domains.
- To make it easier to clean up a domain; typical cleanup scripts use the weblogic.domainUID label as a convenience for finding all resources associated with a domain.
Now, you create a Domain YAML file. A Domain is the key resource that tells the operator how to deploy a WebLogic domain.

Use the file /tmp/mii-sample/domain-resources/WLS/mii-initial-d1-WLS-v1.yaml that is included in the sample source, or copy it to a file called /tmp/mii-sample/mii-initial.yaml (or similar) and edit the copy.
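For orientation only, the abbreviated sketch below shows the kind of fields such a Domain typically contains, assuming the operator 4.x weblogic.oracle/v9 schema; it is not a copy of the bundled file, so prefer the mii-initial-d1-WLS-v1.yaml that ships with the sample:

# Abbreviated illustration only; use the sample's mii-initial-d1-WLS-v1.yaml as the authoritative file.
apiVersion: "weblogic.oracle/v9"
kind: Domain
metadata:
  name: sample-domain1
  namespace: sample-domain1-ns
  labels:
    weblogic.domainUID: sample-domain1
spec:
  # The domain home is generated at runtime from the model baked into the image.
  domainHomeSourceType: FromModel
  image: "model-in-image:WLS-v1"
  imagePullPolicy: IfNotPresent
  webLogicCredentialsSecret:
    name: sample-domain1-weblogic-credentials
  serverPod:
    env:
    - name: CUSTOM_DOMAIN_NAME
      value: "domain1"
  configuration:
    model:
      domainType: WLS
      runtimeEncryptionSecret: sample-domain1-runtime-encryption-secret
  # Cluster references, replica counts, and other settings are defined in the full sample file.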
NOTE: Before you deploy the domain custom resource, determine if you have Kubernetes cluster worker nodes that are remote to your local machine. If so, you need to put the Domain’s image in a location that these nodes can access and you may also need to modify your Domain YAML file to reference the new location. See Ensuring your Kubernetes cluster can access images.
Run the following command to create the domain custom resource:
$ kubectl apply -f /tmp/mii-sample/domain-resources/WLS/mii-initial-d1-WLS-v1.yaml
NOTE: If you are choosing not to use the predefined Domain YAML file and instead created your own Domain YAML file earlier, then substitute your custom file name in the previously listed command. Previously, we suggested naming it /tmp/mii-sample/mii-initial.yaml
.
Verify the WebLogic Server pods are all running:
$ kubectl get all -n sample-domain1-ns
NAME READY STATUS RESTARTS AGE
pod/sample-domain1-admin-server 1/1 Running 0 41m
pod/sample-domain1-managed-server1 1/1 Running 0 40m
pod/sample-domain1-managed-server2 1/1 Running 0 40m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/sample-domain1-admin-server ClusterIP None <none> 7001/TCP 41m
service/sample-domain1-cluster-cluster-1 ClusterIP 100.66.99.27 <none> 8001/TCP 40m
service/sample-domain1-managed-server1 ClusterIP None <none> 8001/TCP 40m
service/sample-domain1-managed-server2 ClusterIP None <none> 8001/TCP 40m
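If the pods are still starting, you can watch them come up or inspect the Domain resource that the operator is reconciling:

# Watch the domain's server pods until they are all Running and ready.
$ kubectl get pods -n sample-domain1-ns -l weblogic.domainUID=sample-domain1 --watch
# Review the status the operator reports on the Domain resource.
$ kubectl describe domain sample-domain1 -n sample-domain1-ns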
Create a load balancer to access the WebLogic Server Administration Console and applications deployed in the cluster. Tanzu supports the MetalLB load balancer and NGINX ingress for routing.
Install the MetalLB load balancer by running the following commands:
## create namespace metallb-system
$ kubectl create ns metallb-system
## deploy MetalLB load balancer
$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.2/manifests/metallb.yaml -n metallb-system
## create secret
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
$ cat metallb-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.100.50-192.168.100.65
$ kubectl apply -f metallb-configmap.yaml
configmap/config created
$ kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-684f5d9b49-jkzfk 1/1 Running 0 2m14s
pod/speaker-b457r 1/1 Running 0 2m14s
pod/speaker-bzmmj 1/1 Running 0 2m14s
pod/speaker-gphh5 1/1 Running 0 2m14s
pod/speaker-lktgc 1/1 Running 0 2m14s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 4 4 4 4 4 beta.kubernetes.io/os=linux 2m14s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 2m14s
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-684f5d9b49 1 1 1 2m14s
Install NGINX.
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx --force-update
$ helm install ingress-nginx ingress-nginx/ingress-nginx
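Once the controller pod is running, MetalLB should assign it an external IP from the address pool configured earlier; assuming the default chart values and the release name used above, you can check with:

$ kubectl get service ingress-nginx-controller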
Create an ingress for accessing the application deployed in the cluster and the Administration Console.
$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-nginx-ingress-pathrouting
  namespace: sample-domain1-ns
spec:
  ingressClassName: nginx
  rules:
  - host:
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sample-domain1-cluster-cluster-1
            port:
              number: 8001
      - path: /console
        pathType: Prefix
        backend:
          service:
            name: sample-domain1-admin-server
            port:
              number: 7001
$ kubectl apply -f ingress.yaml
Verify ingress is running.
$ kubectl get ingresses -n sample-domain1-ns
NAME CLASS HOSTS ADDRESS PORTS AGE
sample-nginx-ingress-pathrouting <none> * 192.168.100.50 80 7m18s
Access the Administration Console using the load balancer IP address, http://192.168.100.50/console
.
The console login screen expects the WebLogic administration credentials that you specified in the Secrets.
Access the sample application.
# Access the sample application using the load balancer IP (192.168.100.50)
$ curl http://192.168.100.50/myapp_war/index.jsp
<html><body><pre>
*****************************************************************
Hello World! This is version 'v1' of the mii-sample JSP web-app.
Welcome to WebLogic Server 'managed-server1'!
domain UID = 'sample-domain1'
domain name = 'domain1'
Found 1 local cluster runtime:
Cluster 'cluster-1'
Found 0 local data sources:
*****************************************************************
</pre></body></html>
$ curl http://192.168.100.50/myapp_war/index.jsp
<html><body><pre>
*****************************************************************
Hello World! This is version 'v1' of the mii-sample JSP web-app.
Welcome to WebLogic Server 'managed-server2'!
domain UID = 'sample-domain1'
domain name = 'domain1'
Found 1 local cluster runtime:
Cluster 'cluster-1'
Found 0 local data sources:
*****************************************************************
</pre></body></html>