Monitor a domain and publish logs

After the Oracle SOA Suite domain is set up, you can:

Monitor the Oracle SOA Suite instance using Prometheus and Grafana

Using the WebLogic Monitoring Exporter, you can scrape runtime metrics from a running Oracle SOA Suite instance and monitor them using Prometheus and Grafana.

Set up monitoring

Follow these steps to set up monitoring for an Oracle SOA Suite instance. For more details on the WebLogic Monitoring Exporter, see here.
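
A minimal sketch of a Prometheus scrape job for the exporter is shown below. The target service, port, and credentials are placeholders for illustration (they assume the exporter is deployed to the SOA managed servers and that a cluster service such as soainfra-cluster-soa-cluster exposes them on port 8001 in the soans namespace); adjust them to match your domain. The exporter serves metrics at /wls-exporter/metrics and requires WebLogic credentials.

    # Sample Prometheus scrape job for the WebLogic Monitoring Exporter.
    # The target service, port, and credentials below are examples; replace them with your own values.
    scrape_configs:
      - job_name: 'soa-weblogic-exporter'
        metrics_path: /wls-exporter/metrics   # endpoint exposed by the exporter web application
        basic_auth:
          username: weblogic
          password: <weblogic-password>
        static_configs:
          - targets: ['soainfra-cluster-soa-cluster.soans.svc.cluster.local:8001']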

Publish WebLogic Server logs into Elasticsearch

You can publish the WebLogic Server logs to Elasticsearch using the WebLogic Logging Exporter and interact with them in Kibana. See Publish logs to Elasticsearch.

WebLogic Server logs can also be published to Elasticsearch using Fluentd. See Fluentd configuration steps.

Publish SOA server diagnostics logs into Elasticsearch

This section shows you how to publish diagnostics logs to Elasticsearch and view them in Kibana. For publishing operator logs, see this sample.

Prerequisites

If you have not already set up Elasticsearch and Kibana for log collection, refer to this document and complete the setup.

Publish to Elasticsearch

Diagnostics logs, or other logs, can be pushed to the Elasticsearch server using a Logstash pod. The Logstash pod must have access to the shared domain home or to the log location. For the Oracle SOA Suite domain, the domain home persistent volume can be mounted in the Logstash pod. To create the Logstash pod, follow these steps:

  1. Get the domain home persistent volume claim details of the Oracle SOA Suite domain. The following command lists the persistent volume claims in the namespace soans. In the example below, the persistent volume claim is soainfra-domain-pvc:

    $ kubectl get pvc -n soans   
    

    Sample output:

    NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
    soainfra-domain-pvc   Bound    soainfra-domain-pv   10Gi       RWX            soainfra-domain-storage-class   xxd
    
  2. Create a Logstash configuration file (logstash.conf). Below is a sample Logstash configuration to push the diagnostic logs of all servers available at DOMAIN_HOME/servers/<server_name>/logs/<server_name>-diagnostic.log:

    input {                                                                                                                
      file {                                                                                                               
        path => "/u01/oracle/user_projects/domains/soainfra/servers/**/logs/*-diagnostic.log"                                          
        start_position => beginning                                                                                        
      }                                                                                                                    
    }                                                                                                                         
    filter {                                                                                                               
      grok {                                                                                                               
        match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}> <%{DATA:log_number}> <%{DATA:log_message}>" ]                                                                                        
      }                                                                                                                    
    }                                                                                                                         
    output {                                                                                                               
      elasticsearch {                                                                                                      
        hosts => ["elasticsearch.default.svc.cluster.local:9200"]                                                          
      }                                                                                                                    
    }
    
  3. Copy logstash.conf to the domain home persistent volume, for example to /u01/oracle/user_projects/domains, so that it can be used by the Logstash deployment. Use the Administration Server pod (for example, the soainfra-adminserver pod in the namespace soans) to copy the file:

    $ kubectl cp logstash.conf  soans/soainfra-adminserver:/u01/oracle/user_projects/domains --namespace soans
    
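    Optionally, verify that the file is now available on the persistent volume; the path below assumes the copy location used in the previous command:

    $ kubectl exec -n soans soainfra-adminserver -- ls -l /u01/oracle/user_projects/domains/logstash.conf
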
  4. Create a deployment YAML (logstash.yaml) for the Logstash pod using the domain home persistent volume claim. Make sure it references the correct location of the Logstash configuration file (for example, /u01/oracle/user_projects/domains/logstash.conf, as copied in the previous step) and the correct domain home persistent volume claim. Below is a sample Logstash deployment YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: logstash-soa
      namespace: soans
    spec:
      selector:
        matchLabels:
          app: logstash-soa
      template: # create pods using pod definition in this template
        metadata:
          labels:
            app: logstash-soa
        spec:
          volumes:
          - name: soainfra-domain-storage-volume
            persistentVolumeClaim:
              claimName: soainfra-domain-pvc
          - name: shared-logs
            emptyDir: {}
          containers:
          - name: logstash
            image: logstash:6.6.0
            command: ["/bin/sh"]
            args: ["/usr/share/logstash/bin/logstash", "-f", "/u01/oracle/user_projects/domains/logstash.conf"]
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - mountPath: /u01/oracle/user_projects
              name: soainfra-domain-storage-volume
            - name: shared-logs
              mountPath: /shared-logs
            ports:
            - containerPort: 5044
              name: logstash
    
  5. Deploy Logstash to start publishing logs to Elasticsearch:

    $ kubectl create -f  logstash.yaml
    
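    To confirm that the Logstash pod is running, check the pods with the label used in the sample deployment (app: logstash-soa):

    $ kubectl get pods -n soans -l app=logstash-soa
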
  6. Now you can view the diagnostics logs in Kibana with the index pattern "logstash-*".
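
    If the index pattern is not listed, you can check whether Logstash has created any logstash-* indices by calling the Elasticsearch _cat/indices API from any pod in the cluster that has curl and network access to the Elasticsearch service. The address below assumes the Elasticsearch host used in the sample logstash.conf:

    $ curl "http://elasticsearch.default.svc.cluster.local:9200/_cat/indices/logstash-*?v"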