d. Logging and visualization

After the OIG domain is set up you can publish operator and WebLogic Server logs into Elasticsearch and interact with them in Kibana.

Install Elasticsearch and Kibana

If you do not already have a centralized Elasticsearch (ELK) stack, you must configure one first. For details on how to configure the ELK stack, follow Installing Elasticsearch (ELK) Stack and Kibana.

Create the logstash pod

Variables used in this chapter

In order to create the logstash pod, you must create several files. These files contain variables that you must substitute with values applicable to your environment.

Most of the values for the variables will be based on your ELK deployment as per Installing Elasticsearch (ELK) Stack and Kibana.

The table below outlines the variables and values you must set:

Variable        Sample Value                               Description
<ELK_VER>       8.3.1                                      The version of logstash you want to install.
<ELK_SSL>       true                                       If SSL is enabled for ELK set the value to true, or if NON-SSL set to false. This value must be lowercase.
<ELK_HOSTS>     https://elasticsearch.example.com:9200     The URL for sending logs to Elasticsearch. HTTP if NON-SSL is used.
<ELKNS>         oigns                                      The domain namespace.
<ELK_USER>      logstash_internal                          The name of the user for logstash to access Elasticsearch.
<ELK_PASSWORD>  password                                   The password for ELK_USER.
<ELK_APIKEY>    apikey                                     The API key details.

You will also need the BASE64 version of the Certificate Authority (CA) certificate(s) that signed the certificate of the Elasticsearch server. If a self-signed certificate is used, this is the self-signed certificate of the Elasticsearch server itself. See Copying the Elasticsearch Certificate for details on how to obtain the correct certificate. In the examples below the certificate is called elk.crt.
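
If you need a quick way to capture this certificate, the following sketch uses openssl to write the certificate presented by the Elasticsearch server to elk.crt. It assumes a self-signed certificate (where the server certificate is its own CA) and the sample host from the table above; for CA-signed deployments, obtain the CA certificate(s) as per Copying the Elasticsearch Certificate:

    $ openssl s_client -connect elasticsearch.example.com:9200 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > elk.crt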

Create Kubernetes secrets

  1. Create a Kubernetes secret for Elasticsearch using the API Key or Password.

    a) If ELK uses an API Key for authentication:

    $ kubectl create secret generic elasticsearch-pw-elastic -n <domain_namespace> --from-literal password=<ELK_APIKEY>
    

    For example:

    $ kubectl create secret generic elasticsearch-pw-elastic -n oigns --from-literal password=<ELK_APIKEY>
    

    The output will look similar to the following:

    secret/elasticsearch-pw-elastic created
    

    b) If ELK uses a password for authentication:

    $ kubectl create secret generic elasticsearch-pw-elastic -n <domain_namespace> --from-literal password=<ELK_PASSWORD>
    

    For example:

    $ kubectl create secret generic elasticsearch-pw-elastic -n oigns --from-literal password=<ELK_PASSWORD>
    

    The output will look similar to the following:

    secret/elasticsearch-pw-elastic created
    

    Note: It is recommended that the ELK Stack is created with authentication enabled. If authentication is not enabled, you may still create the secret using the values above.
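
    If you need to verify what was stored, for example when troubleshooting authentication failures later, the secret can be decoded with a standard kubectl query:

    $ kubectl get secret elasticsearch-pw-elastic -n oigns -o jsonpath='{.data.password}' | base64 --decode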

  2. Create a Kubernetes secret to access the required images on hub.docker.com:

    Note: Before executing the command below, you must first have a user account on hub.docker.com.

    $ kubectl create secret docker-registry "dockercred" --docker-server="https://index.docker.io/v1/" \
    --docker-username="<DOCKER_USER_NAME>" \
    --docker-password=<DOCKER_PASSWORD> --docker-email=<DOCKER_EMAIL_ID> \
    --namespace=<domain_namespace>
    

    For example:

    $ kubectl create secret docker-registry "dockercred" --docker-server="https://index.docker.io/v1/" \
    --docker-username="user@example.com" \
    --docker-password=password --docker-email=user@example.com \
    --namespace=oigns
    

    The output will look similar to the following:

    secret/dockercred created
    
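    As an optional check, confirm the secret exists in the domain namespace; kubectl create secret docker-registry stores it as type kubernetes.io/dockerconfigjson:

    $ kubectl get secret dockercred -n oigns
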

Find the mountPath details

  1. Run the following command to get the mountPath of your domain:

    $ kubectl describe domains <domain_uid> -n <domain_namespace> | grep "Mount Path"
    

    For example:

    $ kubectl describe domains governancedomain -n oigns | grep "Mount Path"
    

    If you deployed OIG using WLST, the output will look similar to the following:

    Mount Path:  /u01/oracle/user_projects/domains
    

    If you deployed OIG using WDT, the output will look similar to the following:

    Mount Path:  /u01/oracle/user_projects
    
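    If you prefer to read the value directly from the domain resource rather than grep the describe output, a JSONPath query such as the following can be used. The field path is an assumption based on the WebLogic Kubernetes Operator domain schema, where the persistent volume mount is the first entry under spec.serverPod.volumeMounts:

    $ kubectl get domain governancedomain -n oigns -o jsonpath='{.spec.serverPod.volumeMounts[0].mountPath}'
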

Find the Domain Home and Log Home details

  1. Run the following command to get the Domain Home and Log Home of your domain:

     $ kubectl describe domains <domain_uid> -n <domain_namespace> | egrep "Domain Home: | Log Home:"
    


    For example:

    $ kubectl describe domains governancedomain -n oigns  | egrep "Domain Home: | Log Home:"
    


    The output will look similar to the following:

    Domain Home:                  /u01/oracle/user_projects/domains/governancedomain
    Http Access Log In Log Home:  true
    Log Home:                     /u01/oracle/user_projects/domains/logs/governancedomain
    
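    The same values can also be read directly from the domain resource. This sketch assumes the spec.domainHome and spec.logHome fields of the WebLogic Kubernetes Operator domain schema:

    $ kubectl get domain governancedomain -n oigns -o jsonpath='{.spec.domainHome}{"\n"}{.spec.logHome}{"\n"}'
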

Find the persistentVolumeClaim details

  1. Run the following command to get the OIG domain persistent volume details:

    $ kubectl get pv -n <domain_namespace>
    

    For example:

    $ kubectl get pv -n oigns
    

    The output will look similar to the following:

    NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS                         REASON   AGE
    governancedomain-domain-pv   10Gi       RWX            Retain           Bound    oigns/governancedomain-domain-pvc   governancedomain-oim-storage-class            28h
    

    Make a note of the CLAIM value; in this case it is governancedomain-domain-pvc.
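
    Because persistent volume claims, unlike persistent volumes, are namespaced, you can also list the claim directly:

    $ kubectl get pvc -n oigns

    The NAME column shows the value to use later for claimName.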

Create the Configmap

  1. Copy the elk.crt file to the $WORKDIR/kubernetes/elasticsearch-and-kibana directory.

  2. Navigate to the $WORKDIR/kubernetes/elasticsearch-and-kibana directory and run the following:

    $ kubectl create configmap elk-cert --from-file=elk.crt -n <namespace>
    

    For example:

    $ kubectl create configmap elk-cert --from-file=elk.crt -n oigns
    

    The output will look similar to the following:

    configmap/elk-cert created
    
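    As an optional check that the certificate was captured intact, view the configmap; the output should include the full BEGIN CERTIFICATE/END CERTIFICATE block from elk.crt:

    $ kubectl describe configmap elk-cert -n oigns
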
  3. Create a logstash_cm.yaml file in the $WORKDIR/kubernetes/elasticsearch-and-kibana directory as follows:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: oig-logstash-configmap
      namespace: <ELKNS>
    data:
      logstash.yml: |
        #http.host: "0.0.0.0"
      logstash-config.conf: |
        input {
          file {
            path => "<Log Home>/servers/AdminServer/logs/AdminServer*.log*"
            tags => "Adminserver_log"
            start_position => beginning
          }
          file {
            path => "<Log Home>/**/logs/soa_server*.log*"
            tags => "soaserver_log"
            start_position => beginning
          }
          file {
            path => "<Log Home>/**/logs/oim_server*.log*"
            tags => "Oimserver_log"
            start_position => beginning
          }
          file {
            path => "<Domain Home>/servers/AdminServer/logs/AdminServer-diagnostic.log*"
            tags => "Adminserver_diagnostic"
            start_position => beginning
          }
          file {
            path => "<Domain Home>/servers/**/logs/soa_server*-diagnostic.log*"
            tags => "Soa_diagnostic"
            start_position => beginning
          }
          file {
            path => "<Domain Home>/servers/**/logs/oim_server*-diagnostic.log*"
            tags => "Oimserver_diagnostic"
            start_position => beginning
          }
          file {
            path => "<Domain Home>/servers/**/logs/access*.log*"
            tags => "Access_logs"
            start_position => beginning
          }
        }
        filter {
          grok {
            match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}    > <%{DATA:log_number}> <%{DATA:log_message}>" ]
          }
          if "_grokparsefailure" in [tags] {
            mutate {
              remove_tag => [ "_grokparsefailure" ]
            }
          }
        }
        output {
          elasticsearch {
            hosts => ["<ELK_HOSTS>"]
            cacert => '/usr/share/logstash/config/certs/elk.crt'
            index => "oiglogs-000001"
            ssl => <ELK_SSL>
            ssl_certificate_verification => false
            user => "<ELK_USER>"
            password => "${ELASTICSEARCH_PASSWORD}"
            api_key => "${ELASTICSEARCH_PASSWORD}"
          }
        }
    

    Change the values in the above file as follows:

    • Change the <ELKNS>, <ELK_HOSTS>, <ELK_SSL>, and <ELK_USER> to match the values for your environment.
    • Change <Log Home> and <Domain Home> to match the Log Home and Domain Home returned earlier.
    • If your domainUID is anything other than governancedomain, change each instance of governancedomain to your domainUID.
    • If using an API key for your ELK authentication, delete the user and password lines.
    • If using a password for ELK authentication, delete the api_key line.
    • If no authentication is used for ELK, delete the user, password, and api_key lines.

    For example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: oig-logstash-configmap
      namespace: oigns
    data:
      logstash.yml: |
        #http.host: "0.0.0.0"
      logstash-config.conf: |
        input {
          file {
            path => "/u01/oracle/user_projects/domains/logs/governancedomain/servers/AdminServer/logs/AdminServer*.log*"
            tags => "Adminserver_log"
            start_position => beginning
          }
          file {
            path => "/u01/oracle/user_projects/domains/logs/governancedomain/**/logs/soa_server*.log*"
            tags => "soaserver_log"
            start_position => beginning
          }
          file {
            path => "/u01/oracle/user_projects/domains/logs/governancedomain/**/logs/oim_server*.log*"
            tags => "Oimserver_log"
            start_position => beginning
          }
          file {
            path => "/u01/oracle/user_projects/domains/governancedomain/servers/AdminServer/logs/AdminServer-diagnostic.log*"
            tags => "Adminserver_diagnostic"
            start_position => beginning
          }
          file {
            path => "/u01/oracle/user_projects/domains/governancedomain/servers/**/logs/soa_server*-diagnostic.log*"
            tags => "Soa_diagnostic"
            start_position => beginning
          }
          file {
            path => "/u01/oracle/user_projects/domains/governancedomain/servers/**/logs/oim_server*-diagnostic.log*"
            tags => "Oimserver_diagnostic"
            start_position => beginning
          }
          file {
            path => "/u01/oracle/user_projects/domains/governancedomain/servers/**/logs/access*.log*"
            tags => "Access_logs"
            start_position => beginning
          }
        }
        filter {
          grok {
            match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}    > <%{DATA:log_number}> <%{DATA:log_message}>" ]
          }
          if "_grokparsefailure" in [tags] {
            mutate {
              remove_tag => [ "_grokparsefailure" ]
            }
          }
        }
        output {
          elasticsearch {
            hosts => ["https://elasticsearch.example.com:9200"]
            cacert => '/usr/share/logstash/config/certs/elk.crt'
            index => "oiglogs-000001"
            ssl => true
            ssl_certificate_verification => false
            user => "logstash_internal"
            password => "${ELASTICSEARCH_PASSWORD}"
          }
        }
    
  4. Run the following command to create the configmap:

    $ kubectl apply -f logstash_cm.yaml
    

    The output will look similar to the following:

    configmap/oig-logstash-configmap created
    
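    If you later need to edit this file, a client-side dry run can catch YAML mistakes before they reach the cluster; --dry-run=client is a standard kubectl option:

    $ kubectl apply --dry-run=client -f logstash_cm.yaml
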

Deploy the logstash pod

  1. Navigate to the $WORKDIR/kubernetes/elasticsearch-and-kibana directory and create a logstash.yaml file as follows:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oig-logstash
      namespace: <ELKNS>
    spec:
      selector:
        matchLabels:
          k8s-app: logstash
      template: # create pods using pod definition in this template
        metadata:
          labels:
            k8s-app: logstash
        spec:
          imagePullSecrets:
          - name: dockercred
          containers:
          - command:
            - logstash
            image: logstash:<ELK_VER>
            imagePullPolicy: IfNotPresent
            name: oig-logstash
            env:
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-pw-elastic
                  key: password
            resources:
            ports:
            - containerPort: 5044
              name: logstash
            volumeMounts:
            - mountPath: <mountPath>
              name: weblogic-domain-storage-volume
            - name: shared-logs
              mountPath: /shared-logs
            - mountPath: /usr/share/logstash/pipeline/
              name: oig-logstash-pipeline
            - mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
              name: config-volume
            - mountPath: /usr/share/logstash/config/certs
              name: elk-cert
          volumes:
          - configMap:
              defaultMode: 420
              items:
              - key: elk.crt
                path: elk.crt
              name: elk-cert
            name: elk-cert
          - configMap:
              defaultMode: 420
              items:
              - key: logstash-config.conf
                path: logstash-config.conf
              name: oig-logstash-configmap
            name: oig-logstash-pipeline
          - configMap:
              defaultMode: 420
              items:
              - key: logstash.yml
                path: logstash.yml
              name: oig-logstash-configmap
            name: config-volume
          - name: weblogic-domain-storage-volume
            persistentVolumeClaim:
              claimName: governancedomain-domain-pvc
          - name: shared-logs
            emptyDir: {}
    
    • Change <ELKNS> and <ELK_VER> to match the values for your environment.
    • Change <mountPath> to match the mountPath returned earlier.
    • Change the claimName value to match the claimName returned earlier.
    • If your Kubernetes environment does not allow access to the internet to pull the logstash image, you must load the logstash image into your own container registry and change image: logstash:<ELK_VER> to the location of the image in your container registry, for example: container-registry.example.com/logstash:8.3.1 (see the mirroring sketch after the example below).

    For example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oig-logstash
      namespace: oigns
    spec:
      selector:
        matchLabels:
          k8s-app: logstash
      template: # create pods using pod definition in this template
        metadata:
          labels:
            k8s-app: logstash
        spec:
          imagePullSecrets:
          - name: dockercred
          containers:
          - command:
            - logstash
            image: logstash:8.3.1
            imagePullPolicy: IfNotPresent
            name: oig-logstash
            env:
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-pw-elastic
                  key: password
            resources:
            ports:
            - containerPort: 5044
              name: logstash
            volumeMounts:
            - mountPath: /u01/oracle/user_projects/domains
              name: weblogic-domain-storage-volume
            - name: shared-logs
              mountPath: /shared-logs
            - mountPath: /usr/share/logstash/pipeline/
              name: oig-logstash-pipeline
            - mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
              name: config-volume
            - mountPath: /usr/share/logstash/config/certs
              name: elk-cert
          volumes:
          - configMap:
              defaultMode: 420
              items:
              - key: elk.crt
                path: elk.crt
              name: elk-cert
            name: elk-cert
          - configMap:
              defaultMode: 420
              items:
              - key: logstash-config.conf
                path: logstash-config.conf
              name: oig-logstash-configmap
            name: oig-logstash-pipeline
          - configMap:
              defaultMode: 420
              items:
              - key: logstash.yml
                path: logstash.yml
              name: oig-logstash-configmap
            name: config-volume
          - name: weblogic-domain-storage-volume
            persistentVolumeClaim:
              claimName: governancedomain-domain-pvc
          - name: shared-logs
            emptyDir: {}
    
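    If you need to mirror the logstash image into a private registry, as described in the notes above, a typical sequence is as follows; container-registry.example.com is a placeholder for your own registry:

    $ docker pull logstash:8.3.1
    $ docker tag logstash:8.3.1 container-registry.example.com/logstash:8.3.1
    $ docker push container-registry.example.com/logstash:8.3.1
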
  2. Deploy the logstash pod by executing the following command:

    $ kubectl create -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash.yaml 
    

    The output will look similar to the following:

    deployment.apps/oig-logstash created
    
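    You can optionally wait for the deployment to become available before checking individual pods; this is a standard kubectl command:

    $ kubectl rollout status deployment/oig-logstash -n oigns
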
  3. Run the following command to check that the logstash pod has been created correctly:

    $ kubectl get pods -n <namespace>
    

    For example:

    $ kubectl get pods -n oigns
    

    The output should look similar to the following:

    NAME                                                        READY   STATUS      RESTARTS   AGE
    governancedomain-adminserver                                1/1     Running     0          90m
    governancedomain-create-fmw-infra-sample-domain-job-fqgnr   0/1     Completed   0          2d19h
    governancedomain-oim-server1                                1/1     Running     0          88m
    governancedomain-soa-server1                                1/1     Running     0          88m
    helper                                                      1/1     Running     0          2d20h
    oig-logstash-77fbbc66f8-lsvcw                               1/1     Running     0          3m25s
    

    Note: Wait a couple of minutes to make sure the pod has not had any failures or restarts. If the pod fails you can view the pod log using:

    $ kubectl logs -f oig-logstash-<pod> -n oigns
    

    Most errors occur due to misconfiguration of the logstash_cm.yaml or logstash.yaml files. This is usually caused by an incorrect value being set, or by the certificate not being pasted with the correct indentation.
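
    If the pod does not reach Running status at all, for example ImagePullBackOff, the pod events usually pinpoint the cause:

    $ kubectl describe pod oig-logstash-<pod> -n oigns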

    If the pod has errors, delete the pod and configmap as follows:

    $ kubectl delete -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash.yaml
    $ kubectl delete -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash_cm.yaml
    

    Once you have resolved the issue in the YAML files, run the commands outlined earlier to recreate the configmap and logstash pod.

Verify and access the Kibana console

To access the Kibana console you will need the Kibana URL as per Installing Elasticsearch (ELK) Stack and Kibana.
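
Before logging in, you can optionally confirm that logstash is shipping documents into the oiglogs index by querying the Elasticsearch count API directly. This sketch assumes the SSL deployment and the credentials used earlier in this chapter; a non-zero count in the JSON response indicates logs are arriving:

    $ curl -sk -u logstash_internal:<ELK_PASSWORD> "https://elasticsearch.example.com:9200/oiglogs*/_count"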

For Kibana 7.7.x and below:

  1. Access the Kibana console with http://<hostname>:<port>/app/kibana and log in with your username and password.

  2. From the Navigation menu, navigate to Management > Kibana > Index Patterns.

  3. In the Create Index Pattern page enter oiglogs* for the Index pattern and click Next Step.

  4. In the Configure settings page, from the Time Filter field name drop down menu select @timestamp and click Create index pattern.

  5. Once the index pattern is created click on Discover in the navigation menu to view the OIG logs.

For Kibana version 7.8.x and above:

  1. Access the Kibana console with http://<hostname>:<port>/app/kibana and log in with your username and password.

  2. From the Navigation menu, navigate to Management > Stack Management.

  3. Click Data Views in the Kibana section.

  4. Click Create Data View and enter the following information:

    • Name: oiglogs*
    • Timestamp: @timestamp
  5. Click Create Data View.

  6. From the Navigation menu, click Discover to view the log file entries.

  7. From the drop down menu, select oiglogs* to view the log file entries.
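
Once in Discover, you can narrow the results to a single log type using the tags set in the logstash pipeline, for example with the KQL query below (Oimserver_diagnostic is one of the tags defined in logstash_cm.yaml):

    tags : "Oimserver_diagnostic"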