Domain home on a PV

This sample demonstrates how to use the WebLogic Kubernetes Operator (hereafter “the operator”) to set up a WebLogic Server (WLS) cluster on the Azure Kubernetes Service (AKS) using the domain on PV approach. After going through the steps, your WLS domain runs on an AKS cluster instance and you can manage your WLS domain by accessing the WebLogic Server Administration Console.

Prerequisites

This sample assumes the following prerequisite environment.

  • If you don’t have an Azure subscription, create a free account before you begin.
    • It’s strongly recommended that the Azure identity you use to sign in and complete this article has either the Owner role in the current subscription or the Contributor and User Access Administrator roles in the current subscription.
    • If your identity has very limited role assignments, ensure you have the following role assignments in the AKS resource group and AKS node resource group.
      • Contributor role and User Access Administrator role in the resource group that runs the AKS cluster. This requires asking a privileged user to assign the roles before creating resources in the resource group.
      • Contributor role in the AKS node resource group, whose name starts with “MC_”. This requires asking a privileged user to assign the role after the AKS instance is created.
  • Operating System: GNU/Linux, macOS or WSL2 for Windows 10.
    • Note: the Docker image creation steps will not work on a Mac with Apple Silicon.
  • Git; use git --version to test if git works. This document was tested with version 2.25.1.
  • Azure CLI; use az --version to test if az works. This document was tested with version 2.58.0.
  • Docker Desktop. This document was tested with Docker version 20.10.7.
  • kubectl; use kubectl version to test if kubectl works. This document was tested with version v1.21.2.
  • Helm, version 3.1 and later; use helm version to check the helm version. This document was tested with version v3.6.2.
  • A Java JDK, Version 8 or 11. Azure recommends Microsoft Build of OpenJDK. Ensure that your JAVA_HOME environment variable is set correctly in the shells in which you run the commands.
  • Ensure that you have the zip/unzip utility installed; use zip/unzip -v to test if zip/unzip works.
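The checks above can be scripted in one pass; a minimal sketch that only verifies each command is on the PATH (it does not check versions):

```shell
# Print one line per prerequisite tool, without aborting on the first miss,
# so everything that is missing shows up in a single run.
for tool in git az docker kubectl helm java zip unzip; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Resolve every `MISSING` line before continuing; the later steps assume all of these tools are available.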
Oracle Container Registry

You will need an Oracle account. The following steps will direct you to accept the license agreement for WebLogic Server. Make note of your Oracle Account password and email. This sample pertains to 12.2.1.4, but other versions may work as well.

  • In a web browser, navigate to https://container-registry.oracle.com and log in using the Oracle Single Sign-On authentication service. If you do not already have SSO credentials, at the top of the page, click the Sign In link to create them.
  • The Oracle Container Registry provides a WebLogic 12.2.1.4 General Availability (GA) installation image that is used in this sample.
  • NOTE: General Availability (GA) images are suitable for demonstration and development purposes only where the environments are not available from the public Internet; they are not acceptable for production use. In production, you should always use CPU (patched) images from the OCR or create your images using the WebLogic Image Tool (WIT) with the --recommendedPatches option. For more guidance, see Apply the Latest Patches and Updates in Securing a Production Environment for Oracle WebLogic Server.
  • Ensure that Docker is running. Find and then pull the WebLogic 12.2.1.4 installation image:
    $ docker login container-registry.oracle.com -u ${ORACLE_SSO_EMAIL} -p ${ORACLE_SSO_PASSWORD}
    $ docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.4
    

If you have problems accessing the Oracle Container Registry, you can build your own images from the Oracle GitHub repository.

Sign in with Azure CLI

The steps in this section show you how to sign in to the Azure CLI.

  1. Open a Bash shell.

  2. Sign out and delete some authentication files to remove any lingering credentials.

    $ az logout
    $ rm ~/.azure/accessTokens.json
    $ rm ~/.azure/azureProfile.json
    
  3. Sign in to your Azure CLI.

    $ az login
    
  4. Set the subscription ID. Be sure to replace the placeholder with the appropriate value.

    $ export SUBSCRIPTION_ID=<your-subscription-id>
    $ az account set -s $SUBSCRIPTION_ID
    

The following sections of the sample instructions will guide you, step-by-step, through the process of setting up a WebLogic cluster on AKS - remaining as close as possible to a native Kubernetes experience. This lets you understand and customize each step. If you wish to have a more automated experience that abstracts some lower level details, you can skip to the Automation section.

Prepare parameters

# Change these parameters as needed for your own environment
export ORACLE_SSO_EMAIL=<replace with your oracle account email>
export ORACLE_SSO_PASSWORD=<replace with your oracle password>

# Specify a prefix to name resources, only allow lowercase letters and numbers, between 1 and 7 characters
export BASE_DIR=~
export NAME_PREFIX=wls
export WEBLOGIC_USERNAME=weblogic
export WEBLOGIC_PASSWORD=Secret123456
export domainUID=domain1
# Used to generate resource names.
export TIMESTAMP=`date +%s`
export AKS_CLUSTER_NAME="${NAME_PREFIX}aks${TIMESTAMP}"
export AKS_PERS_RESOURCE_GROUP="${NAME_PREFIX}resourcegroup${TIMESTAMP}"
export AKS_PERS_LOCATION=eastus
export AKS_PERS_STORAGE_ACCOUNT_NAME="${NAME_PREFIX}storage${TIMESTAMP}"
export AKS_PERS_SHARE_NAME="${NAME_PREFIX}-weblogic-${TIMESTAMP}"
export SECRET_NAME_DOCKER="${NAME_PREFIX}regcred"
export ACR_NAME="${NAME_PREFIX}acr${TIMESTAMP}"
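Before creating resources, you can check the derived names against Azure's documented limits; a sketch using example values (the 3-24 lowercase-alphanumeric rule for storage accounts is why NAME_PREFIX is capped at 7 characters):

```shell
NAME_PREFIX=wls                 # example value; 1-7 lowercase letters and digits
TIMESTAMP=$(date +%s)
STORAGE_NAME="${NAME_PREFIX}storage${TIMESTAMP}"

errors=0
case "$NAME_PREFIX" in
  ""|*[!a-z0-9]*) echo "prefix: lowercase letters and digits only"; errors=1 ;;
esac
[ "${#NAME_PREFIX}" -le 7 ] || { echo "prefix: at most 7 characters"; errors=1; }
# Storage account names must be 3-24 lowercase alphanumeric characters.
{ [ "${#STORAGE_NAME}" -ge 3 ] && [ "${#STORAGE_NAME}" -le 24 ]; } \
  || { echo "storage account name too long: $STORAGE_NAME"; errors=1; }
[ "$errors" -eq 0 ] && echo "names OK: $STORAGE_NAME"
```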

Download the WebLogic Kubernetes Operator sample

Download the WebLogic Kubernetes Operator sample ZIP file. We will use several scripts in this ZIP file to create a WebLogic domain. This sample was tested with v4.1.8, but should work with the latest release.

$ cd $BASE_DIR
$ mkdir sample-scripts
$ curl -m 120 -fL https://github.com/oracle/weblogic-kubernetes-operator/releases/download/v4.1.8/sample-scripts.zip \
      -o ${BASE_DIR}/sample-scripts/sample-scripts.zip
$ unzip ${BASE_DIR}/sample-scripts/sample-scripts.zip -d ${BASE_DIR}/sample-scripts

Create Resource Group

$ az extension add --name resource-graph
$ az group create --name $AKS_PERS_RESOURCE_GROUP --location $AKS_PERS_LOCATION

Create the AKS cluster

This sample doesn’t enable application routing. If you want to enable application routing, follow Managed nginx Ingress with the application routing add-on in AKS.

Run the following commands to create the AKS cluster instance.

$ az aks create \
   --resource-group $AKS_PERS_RESOURCE_GROUP \
   --name $AKS_CLUSTER_NAME \
   --node-count 2 \
   --generate-ssh-keys \
   --nodepool-name nodepool1 \
   --node-vm-size Standard_DS2_v2 \
   --location $AKS_PERS_LOCATION \
   --enable-managed-identity

Successful output will be a JSON object with the entry "type": "Microsoft.ContainerService/ManagedClusters".

After the deployment finishes, run the following command to connect to the AKS cluster. This command updates your local ~/.kube/config so that subsequent kubectl commands interact with the named AKS cluster.

$ az aks get-credentials --resource-group $AKS_PERS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME

Successful output will look similar to:

Merged "wlsaks1596087429" as current context in /home/username/.kube/config

After your Kubernetes cluster is up and running, run the following commands to make sure kubectl can access the Kubernetes cluster:

$ kubectl get nodes -o wide

Successful output will look like the following.

NAME                                STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-15679926-vmss000000   Ready    agent   118s   v1.25.6   10.224.0.4    <none>        Ubuntu 22.04.2 LTS   5.15.0-1041-azure   containerd://1.7.1+azure-1
aks-nodepool1-15679926-vmss000001   Ready    agent   2m8s   v1.25.6   10.224.0.5    <none>        Ubuntu 22.04.2 LTS   5.15.0-1041-azure   containerd://1.7.1+azure-1

NOTE: If you run into VM size failure, see Troubleshooting - Virtual Machine size is not supported.

Create storage

Our usage pattern for the operator involves creating Kubernetes “persistent volumes” to allow the WebLogic Server to persist its configuration and data separately from the Kubernetes Pods that run WebLogic Server workloads.

You will create an external data volume to access and persist data. There are several options for data sharing as described in Storage options for applications in Azure Kubernetes Service (AKS).

You will dynamically create and use a persistent volume with Azure Files NFS share. For details about this full featured cloud storage solution, see the Azure Files Documentation.

Create an Azure Storage account and NFS share
  1. Create an Azure Storage Account.

    Create a storage account using the Azure CLI. Make sure the following values are specified:

    • name: $AKS_PERS_STORAGE_ACCOUNT_NAME. The storage account name can contain only lowercase letters and numbers, and must be between 3 and 24 characters in length.
    • sku: Premium_LRS. Only Premium_LRS and Premium_ZRS work for NFS shares; see the Azure Files NFS Share Documentation.
    • https-only: false. You can’t mount an NFS file share unless you disable secure transfer.
    • default-action: Deny. For security, we suggest that you deny access by default and allow access only from the AKS cluster network.
    
    $ az storage account create \
        --resource-group $AKS_PERS_RESOURCE_GROUP \
        --name $AKS_PERS_STORAGE_ACCOUNT_NAME \
        --location $AKS_PERS_LOCATION \
        --sku Premium_LRS \
        --kind FileStorage \
        --https-only false \
        --default-action Deny
    

    Successful output will be a JSON object with the entry "type": "Microsoft.Storage/storageAccounts".

  2. Create an NFS share.

    We strongly recommend NFS over SMB. NFS originated on UNIX and is native to UNIX-like systems such as GNU/Linux, so concurrent reads and file locking are less likely to cause problems when it is used with container technologies such as Docker.

    Please be sure to enable NFS v4.1. Versions lower than v4.1 will have problems.

    To create the file share, you must use NoRootSquash to allow the operator to change the ownership of the directory in the NFS share.

    Otherwise, you will get an error like chown: changing ownership of '/shared': Operation not permitted.

    The following command creates an NFS share with 100GiB:

    
    # Create NFS file share
    $ az storage share-rm create \
        --resource-group $AKS_PERS_RESOURCE_GROUP \
        --storage-account $AKS_PERS_STORAGE_ACCOUNT_NAME \
        --name ${AKS_PERS_SHARE_NAME} \
        --enabled-protocol NFS \
        --root-squash NoRootSquash \
        --quota 100
    

    The command provisions an NFS file share with NFS 4.1 or above.

  3. Assign the AKS cluster the Contributor role to access the storage account.

    You must configure a role assignment that allows the AKS cluster to access the storage account.

    Get the objectId of the AKS cluster with the following command and save it in the variable AKS_OBJECT_ID:

    $ AKS_OBJECT_ID=$(az aks show --name ${AKS_CLUSTER_NAME} --resource-group ${AKS_PERS_RESOURCE_GROUP} --query "identity.principalId" -o tsv)
    

    Get the ID of the storage account with the following command:

    $ STORAGE_ACCOUNT_ID=$(az storage account show --name ${AKS_PERS_STORAGE_ACCOUNT_NAME} --resource-group ${AKS_PERS_RESOURCE_GROUP} --query "id" -o tsv)
    

    Now you can create a role assignment that grants the AKS cluster the Contributor role scoped to the storage account, so that the AKS cluster can access the file share.

    $ az role assignment create \
      --assignee-object-id "${AKS_OBJECT_ID}" \
      --assignee-principal-type "ServicePrincipal" \
      --role "Contributor" \
      --scope "${STORAGE_ACCOUNT_ID}"
    

    Successful output will be a JSON object similar to the following:

    {
      "condition": null,
      "conditionVersion": null,
      "createdBy": "d6fe7d09-3330-45b6-ae32-4dd5e3310835",
      "createdOn": "2023-05-11T04:13:04.922943+00:00",
      "delegatedManagedIdentityResourceId": null,
      "description": null,
      "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/wlsresourcegroup1683777168/providers/Microsoft.Storage/storageAccounts/wlsstorage1683777168/providers/Microsoft.Authorization/roleAssignments/93dae12d-21c8-4844-99cd-e8b088356af6",
      "name": "93dae12d-21c8-4844-99cd-e8b088356af6",
      "principalId": "95202c6f-2073-403c-b9a7-7d2f1cbb4541",
      "principalName": "3640cbf2-4db7-43b8-bcf6-1e51d3e90478",
      "principalType": "ServicePrincipal",
      "resourceGroup": "wlsresourcegroup1683777168",
      "roleDefinitionId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
      "roleDefinitionName": "Contributor",
      "scope": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/wlsresourcegroup1683777168/providers/Microsoft.Storage/storageAccounts/wlsstorage1683777168",
      "type": "Microsoft.Authorization/roleAssignments",
      "updatedBy": "d6fe7d09-3330-45b6-ae32-4dd5e3310835",
      "updatedOn": "2023-05-11T04:13:04.922943+00:00"
    }
    
  4. Configure network security.

    You must configure network security to allow access from the AKS cluster to the file share.

    First, you must get the virtual network name and the subnet name of the AKS cluster.

    Run the following commands to get network information:

    # get the resource group name of the AKS managed resources
    $ aksMCRGName=$(az aks show --name $AKS_CLUSTER_NAME --resource-group $AKS_PERS_RESOURCE_GROUP -o tsv --query "nodeResourceGroup")
    $ echo ${aksMCRGName}
    
    # get network name of AKS cluster
    $ aksNetworkName=$(az graph query -q "Resources \
        | where type =~ 'Microsoft.Network/virtualNetworks' \
        | where resourceGroup  =~ '${aksMCRGName}' \
        | project name = name" --query "data[0].name"  -o tsv)
    $ echo ${aksNetworkName}
    
    # get subnet name of AKS agent pool
    $ aksSubnetName=$(az network vnet subnet list --resource-group ${aksMCRGName} --vnet-name ${aksNetworkName} -o tsv --query "[*].name")
    $ echo ${aksSubnetName}
    
    # get subnet id of the AKS agent pool
    $ aksSubnetId=$(az network vnet subnet list --resource-group ${aksMCRGName} --vnet-name ${aksNetworkName} -o tsv --query "[*].id")
    $ echo ${aksSubnetId}
    

    You must enable the service endpoint Microsoft.Storage for the subnet using the following command:

    $ az network vnet subnet update \
        --resource-group $aksMCRGName \
        --name ${aksSubnetName} \
        --vnet-name ${aksNetworkName} \
        --service-endpoints Microsoft.Storage
    

    It takes several minutes to enable the service endpoint. Successful output will include a JSON fragment similar to the following:

    "serviceEndpoints": [
      {
        "locations": [
          "eastus",
          "westus"
        ],
        "provisioningState": "Succeeded",
        "service": "Microsoft.Storage"
      }
    ]
    

    Now you must create a network rule to allow access from the AKS cluster. The following command enables access from the AKS subnet to the storage account:

    $ az storage account network-rule add \
      --resource-group $AKS_PERS_RESOURCE_GROUP \
      --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME \
      --subnet ${aksSubnetId}
    

    Successful output will include a virtualNetworkRules entry similar to the following:

    "virtualNetworkRules": [
      {
        "action": "Allow",
        "state": "Succeeded",
        "virtualNetworkResourceId": "${aksSubnetId}"
      }
    ]
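Steps 3 and 4 each end with JSON output to eyeball; in automation you can assert on those outputs instead. A self-contained sketch that checks the relevant fields with grep against a trimmed copy of the role-assignment output above (in practice, capture the real output of the `az` command):

```shell
# Trimmed sample of the role-assignment JSON; in a real script, capture the
# actual output: OUT=$(az role assignment create ...)
OUT='{
  "principalType": "ServicePrincipal",
  "roleDefinitionName": "Contributor",
  "scope": "/subscriptions/xxxx/resourceGroups/rg/providers/Microsoft.Storage/storageAccounts/sa"
}'

# Assert the role and the scope are what we expect before moving on.
echo "$OUT" | grep -q '"roleDefinitionName": "Contributor"' && echo "role OK"
echo "$OUT" | grep -q 'Microsoft.Storage/storageAccounts'   && echo "scope OK"
```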
    
Create SC and PVC
Generate configuration files

Use the following commands to generate the StorageClass and PersistentVolumeClaim configuration files.

cat >azure-csi-nfs-${TIMESTAMP}.yaml <<EOF
# Copyright (c) 2018, 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-nfs
provisioner: file.csi.azure.com
parameters:
  protocol: nfs
  resourceGroup: ${AKS_PERS_RESOURCE_GROUP}
  storageAccount: ${AKS_PERS_STORAGE_ACCOUNT_NAME}
  shareName: ${AKS_PERS_SHARE_NAME}
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true

EOF

cat >pvc-${TIMESTAMP}.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wls-azurefile-${TIMESTAMP}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-nfs
  resources:
    requests:
      storage: 5Gi

EOF

Use kubectl to create the Storage Class and the persistent volume claim; the PVC is created in the default namespace.

$ kubectl apply -f azure-csi-nfs-${TIMESTAMP}.yaml
$ kubectl apply -f pvc-${TIMESTAMP}.yaml

Use the following command to verify:

$ kubectl get sc

Example of kubectl get sc output:

$ kubectl get sc
NAME                    PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azurefile               file.csi.azure.com   Delete          Immediate              true                   30m
azurefile-csi           file.csi.azure.com   Delete          Immediate              true                   30m
azurefile-csi-nfs       file.csi.azure.com   Delete          Immediate              true                   24m
azurefile-csi-premium   file.csi.azure.com   Delete          Immediate              true                   30m
azurefile-premium       file.csi.azure.com   Delete          Immediate              true                   30m
...
$ kubectl get pvc

Example of kubectl get pvc output:

$ kubectl get pvc
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
wls-azurefile-1693900684   Bound    pvc-1f615766-0f21-4c88-80e1-93c9bdabb3eb   5Gi        RWX            azurefile-csi-nfs   46s

Create a domain creation image

This sample requires a domain creation image. For more information, see Domain on Persistent Volume.

Image creation prerequisites
  • The JAVA_HOME environment variable must be set and must reference a valid JDK 8 or 11 installation.

  • Copy the sample to a new directory; for example, use the directory /tmp/dpv-sample. In the directory name, dpv is short for “domain on pv”. Domain on PV is one of three domain home source types supported by the operator. To learn more, see Choose a domain home source type.

    $ rm /tmp/dpv-sample -f -r
    $ mkdir /tmp/dpv-sample
    
    $ cp -r $BASE_DIR/sample-scripts/create-weblogic-domain/domain-on-pv/* /tmp/dpv-sample
    

    NOTE: We will refer to this working copy of the sample as /tmp/dpv-sample; however, you can use a different location.

  • Copy the contents of the wdt-artifacts directory of the sample into /tmp/dpv-sample:

    $ cp -r $BASE_DIR/sample-scripts/create-weblogic-domain/wdt-artifacts/* /tmp/dpv-sample
    
    $ export WDT_MODEL_FILES_PATH=/tmp/dpv-sample/wdt-model-files
    
  • Download the latest WebLogic Deploy Tooling (WDT) and WebLogic Image Tool (WIT) installer ZIP files to your ${WDT_MODEL_FILES_PATH} directory. Both WDT and WIT are required to create your domain creation image.

    $ curl -m 120 -fL https://github.com/oracle/weblogic-deploy-tooling/releases/latest/download/weblogic-deploy.zip \
     -o ${WDT_MODEL_FILES_PATH}/weblogic-deploy.zip
    
    $ curl -m 120 -fL https://github.com/oracle/weblogic-image-tool/releases/latest/download/imagetool.zip \
      -o ${WDT_MODEL_FILES_PATH}/imagetool.zip
    
  • To set up the WebLogic Image Tool, run the following commands:

    $ unzip ${WDT_MODEL_FILES_PATH}/imagetool.zip -d ${WDT_MODEL_FILES_PATH}
    
    $ ${WDT_MODEL_FILES_PATH}/imagetool/bin/imagetool.sh cache deleteEntry --key wdt_latest
    
    $ ${WDT_MODEL_FILES_PATH}/imagetool/bin/imagetool.sh cache addInstaller \
      --type wdt \
      --version latest \
      --path ${WDT_MODEL_FILES_PATH}/weblogic-deploy.zip
    

    These steps will install WIT to the ${WDT_MODEL_FILES_PATH}/imagetool directory, plus put a wdt_latest entry in the tool’s cache which points to the WDT ZIP file installer. You will use WIT later in the sample for creating model images.

Image creation - Introduction

The goal of image creation is to demonstrate using the WebLogic Image Tool to create an image tagged as wdt-domain-image:WLS-v1 from files that you will stage to ${WDT_MODEL_FILES_PATH}/WLS-v1. The resulting domain creation image contains:

  • The directory where the WebLogic Deploy Tooling software is installed (also known as WDT Home), expected in an image’s /auxiliary/weblogic-deploy directory, by default.
  • WDT model YAML (model), WDT variable (property), and WDT archive ZIP (archive) files, expected in directory /auxiliary/models, by default.
Understanding your first archive

See Understanding your first archive.

Staging a ZIP file of the archive

Delete existing archive.zip in case we have an old leftover version.

$ rm -f ${WDT_MODEL_FILES_PATH}/WLS-v1/archive.zip

Create a ZIP file of the archive in the location that we will use when we run the WebLogic Image Tool.

$ cd /tmp/dpv-sample/archives/archive-v1
$ zip -r ${WDT_MODEL_FILES_PATH}/WLS-v1/archive.zip wlsdeploy
Staging model files

In this step, you explore the staged WDT model YAML file and properties in the ${WDT_MODEL_FILES_PATH}/WLS-v1 directory. The model in this directory references the web application in your archive, configures a WebLogic Server Administration Server, and configures a WebLogic cluster. It consists of only two files: model.10.properties, a file with a single property, and model.10.yaml, a YAML file with your WebLogic configuration.

Here is the WLS model.10.properties:

CLUSTER_SIZE=5

Here is the WLS model.10.yaml:

domainInfo:
    AdminUserName: '@@SECRET:__weblogic-credentials__:username@@'
    AdminPassword: '@@SECRET:__weblogic-credentials__:password@@'
    ServerStartMode: 'prod'

topology:
    Name: '@@ENV:CUSTOM_DOMAIN_NAME@@'
    AdminServerName: 'admin-server'
    Cluster:
        'cluster-1':
            DynamicServers:
                ServerTemplate:  'cluster-1-template'
                ServerNamePrefix: 'managed-server'
                DynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
                MaxDynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
                MinDynamicClusterSize: '0'
                CalculatedListenPorts: false
    Server:
        'admin-server':
            ListenPort: 7001
    ServerTemplate:
        'cluster-1-template':
            Cluster: 'cluster-1'
            ListenPort: 8001

appDeployments:
    Application:
        myapp:
            SourcePath: 'wlsdeploy/applications/myapp-v1'
            ModuleType: ear
            Target: 'cluster-1'

The model file:

  • Defines a WebLogic domain with:

    • Cluster cluster-1
    • Administration Server admin-server
    • An EAR application, targeted to cluster-1, located in the WDT archive ZIP file at wlsdeploy/applications/myapp-v1
  • Leverages macros to inject external values:

    • The property file CLUSTER_SIZE property is referenced in the model YAML file DynamicClusterSize and MaxDynamicClusterSize fields using a PROP macro.
    • The model file domain name is injected using a custom environment variable named CUSTOM_DOMAIN_NAME using an ENV macro.
      • You set this environment variable later in this sample using an env field in its Domain.
      • This conveniently provides a simple way to deploy multiple differently named domains using the same model image.
    • The model file administrator user name and password are set using a weblogic-credentials secret macro reference to the WebLogic credential secret.
      • This secret is in turn referenced using the webLogicCredentialsSecret field in the Domain.
      • The weblogic-credentials is a reserved name that always dereferences to the owning Domain’s actual WebLogic credentials secret name.

An image can contain multiple properties files, archive ZIP files, and model YAML files but in this sample you use just one of each. For a complete description of WDT model file naming conventions, file loading order, and macro syntax, see Model files in the user documentation.
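To make the PROP macro concrete, here is a hedged illustration of the substitution using plain sed; this is not WDT's actual resolver (WDT resolves macros itself at domain creation time), just a sketch of the effect:

```shell
workdir=$(mktemp -d)

# Fragment of the model and the property file from the sample.
cat > "$workdir/model-fragment.yaml" <<'EOF'
DynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
MaxDynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
EOF
echo "CLUSTER_SIZE=5" > "$workdir/model.properties"

# Read the property value, then substitute it for the macro.
CLUSTER_SIZE=$(sed -n 's/^CLUSTER_SIZE=//p' "$workdir/model.properties")
sed "s/@@PROP:CLUSTER_SIZE@@/${CLUSTER_SIZE}/g" "$workdir/model-fragment.yaml"
```

Both cluster-size fields resolve to 5, matching the CLUSTER_SIZE value in model.10.properties.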

Creating the image with WIT

At this point, you have all of the files needed for image wdt-domain-image:WLS-v1 staged; they include:

  • ${WDT_MODEL_FILES_PATH}/WLS-v1/model.10.yaml
  • ${WDT_MODEL_FILES_PATH}/WLS-v1/model.10.properties
  • ${WDT_MODEL_FILES_PATH}/WLS-v1/archive.zip

Now, you use the Image Tool to create an image named wdt-domain-image:WLS-v1. You’ve already set up this tool during the prerequisite steps.

Run the following commands to create the image and verify that it worked. Note that imagetool.sh is not supported on macOS with Apple Silicon. See Troubleshooting - exec format error.

$ ${WDT_MODEL_FILES_PATH}/imagetool/bin/imagetool.sh createAuxImage \
  --tag wdt-domain-image:WLS-v1 \
  --wdtModel ${WDT_MODEL_FILES_PATH}/WLS-v1/model.10.yaml \
  --wdtVariables ${WDT_MODEL_FILES_PATH}/WLS-v1/model.10.properties \
  --wdtArchive ${WDT_MODEL_FILES_PATH}/WLS-v1/archive.zip

This command runs the WebLogic Image Tool to create the domain creation image and does the following:

  • Builds the final container image as a layer on a small busybox base image.
  • Copies the WDT ZIP file that’s referenced in the WIT cache into the image.
    • Note that you cached WDT in WIT using the keyword latest when you set up the cache during the sample prerequisites steps.
    • This lets WIT implicitly assume it’s the desired WDT version and removes the need to pass a --wdtVersion flag.
  • Copies the specified WDT model, properties, and application archives to image location /auxiliary/models.

When the command succeeds, it should end with output like the following:

[INFO   ] Build successful. Build time=70s. Image tag=wdt-domain-image:WLS-v1

Verify the image is available in the local Docker server with the following command.

$ docker images | grep WLS-v1
wdt-domain-image          WLS-v1   012d3bfa3536   5 days ago      1.13GB

If Docker BuildKit is enabled, you may run into a Dockerfile parsing error; see Troubleshooting - WebLogic Image Tool failure.

Pushing the image to Azure Container Registry

AKS can pull images from any container registry, but the easiest integration is to use Azure Container Registry (ACR). In addition to simplicity, using ACR simplifies high availability and disaster recovery with features such as geo-replication. For more information, see Geo-replication in Azure Container Registry. In this section, we will create a new Azure Container Registry, connect it to our pre-existing AKS cluster and push the image built in the preceding section to it. For complete details, see Azure Container Registry documentation.

Let’s create an instance of ACR in the same resource group we used for AKS, reusing the environment variables set in the steps above. The registry name comes from the ACR_NAME variable defined earlier.

$ az acr create --resource-group $AKS_PERS_RESOURCE_GROUP --name $ACR_NAME --sku Basic --admin-enabled true

Closely examine the JSON output from this command. Save the value of the loginServer property aside. It will look something like the following.

"loginServer": "contosoresourcegroup1610068510.azurecr.io",

Use this value to sign in to the ACR instance. Note that because you are signing in with the Azure CLI, you do not need a password; your identity was already established when you ran az login.

$ export LOGIN_SERVER=$(az acr show -n $ACR_NAME --resource-group $AKS_PERS_RESOURCE_GROUP --query "loginServer" -o tsv)
$ az acr login --name $LOGIN_SERVER

Successful output will include Login Succeeded.

Push the wdt-domain-image:WLS-v1 image created in the preceding section to this registry.

$ docker tag wdt-domain-image:WLS-v1 $LOGIN_SERVER/wdt-domain-image:WLS-v1
$ docker push ${LOGIN_SERVER}/wdt-domain-image:WLS-v1
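Kubernetes resolves the image by its fully qualified reference, which is why the tag embeds the login server. A sketch that splits such a reference into its parts, using the sample login server from above:

```shell
IMAGE="contosoresourcegroup1610068510.azurecr.io/wdt-domain-image:WLS-v1"  # example value

registry=${IMAGE%%/*}       # registry host: everything before the first slash
repo_and_tag=${IMAGE#*/}    # repository plus tag
repo=${repo_and_tag%%:*}
tag=${repo_and_tag##*:}
echo "registry=$registry repository=$repo tag=$tag"
```

An image reference without a registry host (like plain wdt-domain-image:WLS-v1) would be resolved against the default registry, which is why the re-tagging step above is required.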

Finally, connect AKS to the ACR. For more details on connecting ACR to an existing AKS, see Configure ACR integration for existing AKS clusters.

$ export ACR_ID=$(az acr show -n $ACR_NAME --resource-group $AKS_PERS_RESOURCE_GROUP --query "id" -o tsv)
$ az aks update --name $AKS_CLUSTER_NAME --resource-group $AKS_PERS_RESOURCE_GROUP --attach-acr $ACR_ID

Successful output will be a JSON object with the entry "type": "Microsoft.ContainerService/ManagedClusters".

If you see an error that seems related to you not being an Owner on this subscription, please refer to the troubleshooting section Cannot attach ACR due to not being Owner of subscription.

Install WebLogic Kubernetes Operator into the AKS cluster

The WebLogic Kubernetes Operator is an adapter to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WLS instances. The operator runs as a Kubernetes Pod and stands ready to perform actions related to running WLS on Kubernetes.

The operator is installed and managed with Helm. Install the operator from its Helm chart repository by running the following commands.

$ helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update
$ helm repo update
$ helm install weblogic-operator weblogic-operator/weblogic-operator

The output will show something similar to the following:

$ helm install weblogic-operator weblogic-operator/weblogic-operator
NAME: weblogic-operator
LAST DEPLOYED: Tue Jan 18 17:07:56 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Verify the operator with the following command; the STATUS must be Running and READY must be 1/1.

$ kubectl get pods -w
NAME                                         READY   STATUS    RESTARTS   AGE
weblogic-operator-69794f8df7-bmvj9           1/1     Running   0          86s
weblogic-operator-webhook-868db5875b-55v7r   1/1     Running   0          86s

You will have to press Ctrl-C to exit this command due to the -w flag.
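In automation you may prefer a polling loop over the interactive -w flag. The sketch below uses a stub in place of the real status query so it runs anywhere; against a real cluster, replace the stub with something like status=$(kubectl get pod <operator-pod> -o jsonpath='{.status.phase}') and a longer sleep interval:

```shell
attempts=0
status=Pending
until [ "$status" = "Running" ]; do
  attempts=$((attempts + 1))
  # Stub: pretend the pod reaches Running on the third poll. Against a real
  # cluster, query the pod phase with kubectl here instead.
  if [ "$attempts" -ge 3 ]; then status=Running; fi
  if [ "$attempts" -ge 30 ]; then echo "timed out"; break; fi
  # sleep 10   # poll interval for a real cluster
done
echo "status=$status after $attempts polls"
```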

Create WebLogic domain

Now that you have created the AKS cluster, installed the operator, and verified that the operator is ready to go, you can ask the operator to create a WLS domain.

Create secrets

You will use the $BASE_DIR/sample-scripts/create-weblogic-domain-credentials/create-weblogic-credentials.sh script to create the domain WebLogic administrator credentials as a Kubernetes secret. Please run:

$ cd $BASE_DIR/sample-scripts/create-weblogic-domain-credentials
$ ./create-weblogic-credentials.sh -u ${WEBLOGIC_USERNAME} -p ${WEBLOGIC_PASSWORD} -d domain1
secret/domain1-weblogic-credentials created
secret/domain1-weblogic-credentials labeled
The secret domain1-weblogic-credentials has been successfully created in the default namespace.

You will use the $BASE_DIR/sample-scripts/create-kubernetes-secrets/create-docker-credentials-secret.sh script to create the Docker credentials as a Kubernetes secret. Please run:

$ cd $BASE_DIR/sample-scripts/create-kubernetes-secrets
$ ./create-docker-credentials-secret.sh -s ${SECRET_NAME_DOCKER} -e ${ORACLE_SSO_EMAIL} -p ${ORACLE_SSO_PASSWORD} -u ${ORACLE_SSO_EMAIL}
secret/wlsregcred created
The secret wlsregcred has been successfully created in the default namespace.
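Under the hood, the script wraps kubectl create secret docker-registry, which stores a .dockerconfigjson payload. A sketch of that payload built with placeholder credentials (never hard-code real passwords; the field layout is the standard Docker config format):

```shell
REGISTRY=container-registry.oracle.com
USERNAME=user@example.com       # placeholder credential
PASSWORD=example-password       # placeholder credential

# Docker clients and the kubelet read the base64 "auth" field (user:password).
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)
DOCKERCFG=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}' \
  "$REGISTRY" "$USERNAME" "$PASSWORD" "$AUTH")
echo "$DOCKERCFG"
```

Kubernetes stores this JSON base64-encoded in the secret's .dockerconfigjson data field, and the kubelet presents it when pulling images referenced by pods that use the secret as an imagePullSecret.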

Verify secrets with the following command:

$ kubectl get secret
NAME                                      TYPE                             DATA   AGE
domain1-weblogic-credentials              Opaque                           2      2m32s
sh.helm.release.v1.weblogic-operator.v1   helm.sh/release.v1               1      5m32s
weblogic-operator-secrets                 Opaque                           1      5m31s
weblogic-webhook-secrets                  Opaque                           2      5m31s
wlsregcred                                kubernetes.io/dockerconfigjson   1      38s

NOTE: If the NAME column in your output is missing any of the values shown above, review the preceding steps in this sample to ensure that you followed all of them correctly.
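A quick scripted check can confirm that the secrets this sample depends on exist before proceeding. The following is a minimal sketch; the `check_secrets` helper name is ours:

```shell
# Report whether each secret this sample depends on appears in the
# list of secret names read from stdin.
check_secrets() {
  names=$(cat)
  for s in domain1-weblogic-credentials wlsregcred; do
    if printf '%s\n' "$names" | grep -q "$s"; then
      echo "found: $s"
    else
      echo "MISSING: $s"
    fi
  done
}

# Demo with the names from the sample output above:
printf '%s\n' domain1-weblogic-credentials wlsregcred | check_secrets
```

Against a live cluster, pipe `kubectl get secret -o name` into `check_secrets`.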

Enable the WebLogic operator

Run the following command to enable the operator to monitor the namespace.

kubectl label namespace default weblogic-operator=enabled

Create WebLogic Domain

Now, you deploy a domain resource and an associated cluster resource using a single YAML file that defines both. The domain resource and cluster resource tell the operator how to deploy a WebLogic domain. They do not replace the traditional WebLogic configuration files; instead, they cooperate with those files to describe the Kubernetes artifacts of the corresponding domain.

  • Run the following command to generate resource files.

    Export Domain_Creation_Image_tag, which is referenced in create-domain-on-aks-generate-yaml.sh.

    export Domain_Creation_Image_tag=${LOGIN_SERVER}/wdt-domain-image:WLS-v1
    
    cd $BASE_DIR/sample-scripts/create-weblogic-domain-on-azure-kubernetes-service  
    
    bash create-domain-on-aks-generate-yaml.sh
    

After running the above commands, you will have three files: domain-resource.yaml, admin-lb.yaml, and cluster-lb.yaml.
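You can quickly confirm that the generator produced all three files; a minimal sketch, to be run in the same directory as the generator:

```shell
# Report which of the expected generated files are present in the
# current directory.
for f in domain-resource.yaml admin-lb.yaml cluster-lb.yaml; do
  if [ -f "$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
done
```

Any "missing" line means the generator step did not complete and should be re-run before applying the resources.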

The domain resource references the cluster resource, a WebLogic Server installation image, the secrets you defined, PV and PVC configuration details, and a sample domain creation image, which contains a traditional WebLogic configuration and a WebLogic application. For detailed information, see Domain and cluster resources.

  • Run the following command to apply the domain and cluster resources defined in domain-resource.yaml.

    $ kubectl apply -f domain-resource.yaml
    
  • Create the load balancer services using the following commands:

    $ kubectl apply -f admin-lb.yaml
    
    service/domain1-admin-server-external-lb created
    
    $ kubectl apply -f cluster-lb.yaml
    
    service/domain1-cluster-1-lb created
    

    After a short time, you will see the Administration Server and Managed Servers running.

    Use the following command to check server pod status:

    $ kubectl get pods --watch
    

    It may take up to 20 minutes for all pods to be deployed; please wait until everything is ready.

    You can tail the logs of the Administration Server with this command:

    kubectl logs -f domain1-admin-server
    

    When all pods are ready, the output looks like the following:

    $ kubectl get pods 
    
    NAME                                        READY   STATUS    RESTARTS   AGE
    domain1-admin-server                        1/1     Running   0          12m
    domain1-managed-server1                     1/1     Running   0          10m
    domain1-managed-server2                     1/1     Running   0          10m
    weblogic-operator-7796bc7b8-qmhzw           1/1     Running   0          48m
    weblogic-operator-webhook-b5b586bc5-ksfg9   1/1     Running   0          48m
    

    If Kubernetes reports a WebLogic pod as Running, you can be assured that WebLogic Server is actually running, because the operator wires the Kubernetes health checks to the WebLogic health check mechanism.

    Get the addresses of the Administration Server and Managed Servers (please wait for the external IP addresses to be assigned):

    $ kubectl get svc --watch
    

    When the external IP addresses are assigned, the output looks like the following:

    $ kubectl get svc --watch
    
      NAME                               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)             AGE
      domain1-admin-server               ClusterIP      None           <none>          7001/TCP            13m
      domain1-admin-server-external-lb   LoadBalancer   10.0.30.252    4.157.147.131   7001:31878/TCP      37m
      domain1-cluster-1-lb               LoadBalancer   10.0.26.96     4.157.147.212   8001:32318/TCP      37m
      domain1-cluster-cluster-1          ClusterIP      10.0.157.174   <none>          8001/TCP            10m
      domain1-managed-server1            ClusterIP      None           <none>          8001/TCP            10m
      domain1-managed-server2            ClusterIP      None           <none>          8001/TCP            10m
      kubernetes                         ClusterIP      10.0.0.1       <none>          443/TCP             60m
      weblogic-operator-webhook-svc      ClusterIP      10.0.41.121    <none>          8083/TCP,8084/TCP   49m
    

    In the example, the URL to access the Administration Server is: http://4.157.147.131:7001/console. The user name and password that you enter for the Administration Console must match the ones you specified for the domain1-weblogic-credentials secret in the Create secrets step.

    If the WLS Administration Console is still not available, troubleshoot by listing recent cluster events:

    $ kubectl get events --sort-by='.metadata.creationTimestamp'
    

To access the sample application on WLS, you may skip to the section Access sample application. The next section includes a script that automates all of the preceding steps.

Automation

If you want to automate the above steps of creating the AKS cluster and the WLS domain, use the script ${BASE_DIR}/sample-scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks.sh.

The sample script will create a WLS domain home on the AKS cluster, including:

  • Creating a new Azure resource group, with a new Azure Storage Account and Azure Files share, to allow WebLogic to persist its configuration and data separately from the Kubernetes pods that run WLS workloads.
  • Creating the WLS domain home.
  • Generating the domain resource YAML files, which can be used to restart the Kubernetes artifacts of the corresponding domain.

For input values, you can edit ${BASE_DIR}/sample-scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks-inputs.sh directly. The following values must be specified:

Name in YAML file         Example value        Notes
dockerEmail               yourDockerEmail      Oracle Single Sign-On (SSO) account email, used to pull the WebLogic Server Docker image.
dockerPassword            yourDockerPassword   Password for the Oracle SSO account, used to pull the WebLogic Server Docker image, in clear text.
weblogicUserName          weblogic             Username for the WebLogic user account.
weblogicAccountPassword   Secret123456         Password for the WebLogic user account.

Then run the script:

$ cd ${BASE_DIR}/sample-scripts/create-weblogic-domain-on-azure-kubernetes-service
$ ./create-domain-on-aks.sh

The script will print the Administration Server address after a successful deployment. To interact with the cluster using kubectl, use az aks get-credentials as shown in the script output.

You have now created an AKS cluster with an Azure Files NFS share to contain the WLS domain configuration files. Using those artifacts, you have used the operator to create a WLS domain.

Access sample application

Access the Administration Console using the admin load balancer IP address.

$ ADMIN_SERVER_IP=$(kubectl get svc domain1-admin-server-external-lb -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "Administration Console Address: http://${ADMIN_SERVER_IP}:7001/console/"

Access the sample application using the cluster load balancer IP address.

$ CLUSTER_IP=$(kubectl get svc domain1-cluster-1-lb -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://${CLUSTER_IP}:8001/myapp_war/index.jsp
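Because the load balancer distributes requests across the cluster, repeated calls should rotate between the managed servers. A small helper can extract the responding server name from the page; a minimal sketch (the `app_server_name` name is ours):

```shell
# Pull the server name out of the sample app's "Welcome to WebLogic
# Server '<name>'!" line, reading the HTML from stdin.
app_server_name() {
  sed -n "s/.*Welcome to WebLogic Server '\([^']*\)'.*/\1/p"
}

# Demo on a line from the sample response:
echo "Welcome to WebLogic Server 'managed-server1'!" | app_server_name
```

Against the live application, a loop such as `for i in 1 2 3 4; do curl -s http://${CLUSTER_IP}:8001/myapp_war/index.jsp | app_server_name; done` should show the responses alternating between managed servers.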

The sample application lists the name of the responding server in its output, like the following:

<html><body><pre>
*****************************************************************

Hello World! This is version 'v1' of the sample JSP web-app.

Welcome to WebLogic Server 'managed-server1'!

  domain UID  = 'domain1'
  domain name = 'domain1'

Found 1 local cluster runtime:
  Cluster 'cluster-1'

Found min threads constraint runtime named 'SampleMinThreads' with configured count: 1

Found max threads constraint runtime named 'SampleMaxThreads' with configured count: 10

Found 0 local data sources:

*****************************************************************
</pre></body></html>

Validate NFS volume

There are several approaches to validate the NFS volume:

Use kubectl exec to enter the admin server pod to check file system status:

kubectl exec -it domain1-admin-server -- df -h

You will find output like the following, with filesystem ${AKS_PERS_STORAGE_ACCOUNT_NAME}.file.core.windows.net:/${AKS_PERS_STORAGE_ACCOUNT_NAME}/${AKS_PERS_SHARE_NAME}, size 100G, and mounted on /shared:

Filesystem                                                                                Size  Used Avail Use% Mounted on
...
wlsstorage1612795811.file.core.windows.net:/wlsstorage1612795811/wls-weblogic-1612795811  100G   76M  100G   1% /shared
...
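You can also script this check. The following is a minimal sketch; the `shared_mounted` name is ours, and it parses `df -h`-style output:

```shell
# Succeed only if an Azure Files export is mounted at /shared,
# reading `df -h` output from stdin.
shared_mounted() {
  awk '$NF == "/shared" && $1 ~ /file\.core\.windows\.net/ { found = 1 }
       END { exit !found }'
}

# Demo on the sample line above (hypothetical storage account name):
if echo "wlsstorage1612795811.file.core.windows.net:/wlsstorage1612795811/wls-weblogic-1612795811  100G   76M  100G   1% /shared" | shared_mounted; then
  echo "NFS share is mounted at /shared"
fi
```

Against the live pod: `kubectl exec domain1-admin-server -- df -h | shared_mounted && echo OK`.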

Clean up resources

The output from the create-domain-on-aks.sh script includes a statement about the Azure resources created by the script. To delete the cluster and free all related resources, simply delete the resource groups. The output lists the resource groups, for example:

The following Azure Resources have been created:
  Resource groups: wlsresourcegroup6091605169, MC_wlsresourcegroup6091605169_wlsakscluster6091605169_eastus

Given the above output, the following Azure CLI command deletes the resource groups. Deleting the resource group that contains the AKS cluster also removes the managed MC_ node resource group.

$ az group delete --yes --no-wait --name wlsresourcegroup6091605169

If you created the AKS cluster step by step, run the following commands to clean up resources.

$ az group delete --yes --no-wait --name $AKS_PERS_RESOURCE_GROUP

Troubleshooting

For troubleshooting advice, see Troubleshooting.