This FAQ is for remote EJB and JMS clients - not JTA clients. The operator does not currently support external WebLogic JTA access to a WebLogic cluster, as (A) external JTA access requires each server in the cluster to be individually addressable by the client, but this conflicts with (B) the current operator requirement that a network channel in a cluster have the same port across all servers in the cluster.
The Load Balancer Tunneling approach for giving external WebLogic EJB or JMS clients access to a Kubernetes hosted WebLogic cluster involves configuring a network channel on the desired WebLogic cluster that accepts T3 protocol traffic tunneled over HTTP, deploying a load balancer that redirects external HTTP network traffic to that WebLogic network channel, and ensuring that EJB and JMS clients specify a URL that resolves to the load balancer’s network address.
Here are the specific steps:
Configure a custom channel for the T3 protocol in WebLogic that (A) enables HTTP Tunneling, and (B) specifies an external address and port that correspond to the address and port remote clients will use to access the load balancer. See Adding a WebLogic Custom Channel for samples and details.
Set up a load balancer that redirects HTTP traffic to the custom channel. For a discussion of load balancers, see Ingress. If you’re using OKE/OCI to host your Kubernetes cluster, also see Using an OCI Load Balancer.
Important: Ensure that the load balancer configures the HTTP flow to be ‘sticky’ - for example, a Traefik load balancer has a sticky sessions option. This ensures that all of the packets of a tunneling client connection flow to the same pod; otherwise, the connection will stall when its packets are load balanced to a different pod.
Remote clients can then access the custom channel using an http:// URL instead of a t3:// URL; see the example client sketch after these steps.
Review the Security Notes below.
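For illustration, here is a minimal sketch of a standalone JMS client that connects through such a tunneling channel. It assumes the load balancer is reachable at some.public.address.com:30999, that a connection factory and queue are registered under the hypothetical JNDI names jms/myCF and jms/myQueue, and that a WebLogic client JAR (for example, wlthint3client.jar) is on the classpath; adjust these to match your environment.

```java
import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class TunnelingJmsClient {
  public static void main(String[] args) throws Exception {
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    // An http:// URL (not t3://) because the channel tunnels T3 over HTTP through the load balancer.
    env.put(Context.PROVIDER_URL, "http://some.public.address.com:30999");

    Context ctx = new InitialContext(env);
    try {
      // Hypothetical JNDI names; use the names configured in your JMS module.
      ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/myCF");
      Destination queue = (Destination) ctx.lookup("jms/myQueue");

      Connection connection = cf.createConnection();
      try {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("hello from an external client"));
      } finally {
        connection.close();
      }
    } finally {
      ctx.close();
    }
  }
}
```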
The Kubernetes NodePorts approach for giving external WebLogic EJB or JMS clients access to a Kubernetes hosted WebLogic cluster involves configuring a network channel on the desired WebLogic cluster that accepts T3 protocol traffic, and deploying a Kubernetes NodePort that redirects external network traffic on the Kubernetes nodes to the network channel; a sketch of a client that connects through such a NodePort follows the steps below.
Here are the specific steps:
Configure a custom channel for the T3 protocol in WebLogic that specifies an external address and port that are suitable for remote client use. See Adding a WebLogic Custom Channel.
Deploy a Kubernetes NodePort to publicly expose the WebLogic ports. See Setting up a NodePort.
Review the Security Notes below.
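For example, a minimal sketch of a remote client lookup through a NodePort-exposed T3 channel might look like the following, where the node address my.k8s.node.example.com, the node port 30999, and the JNDI name ejb/MyRemoteBean are placeholder assumptions.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class NodePortLookupClient {
  public static void main(String[] args) throws Exception {
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    // t3://<address of any Kubernetes worker node>:<nodePort of the Service>
    env.put(Context.PROVIDER_URL, "t3://my.k8s.node.example.com:30999");

    Context ctx = new InitialContext(env);
    try {
      Object bean = ctx.lookup("ejb/MyRemoteBean"); // placeholder EJB JNDI name
      System.out.println("Looked up: " + bean);
    } finally {
      ctx.close();
    }
  }
}
```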
WebLogic implicitly creates a multi-protocol default channel that spans the Listen Address and Port fields specified on each server in the cluster, but this channel is usually unsuitable for external network traffic from EJB and JMS clients. Instead, you may need to configure an additional dedicated WebLogic custom channel to handle remote EJB or JMS client network traffic.
A custom channel, unlike a default channel, provides a way to configure an external listen address and port for use by external clients. External listen address and/or port configuration is needed when a channel’s configured listen address and/or port would not work if used to form a URL in the remote client. This is because remote EJB and JMS clients internally use the configured network information of the channel they connect through to reconnect to WebLogic when needed. (The EJB and JMS clients do not always use the initial URL specified in the client’s JNDI context.)
A custom channel can be locked down using two-way SSL as a way to prevent access by unauthorized external JMS and EJB clients, accepts only the protocols that are explicitly enabled for it, and can be configured to be the only channel that accepts EJB/JMS clients that tunnel over HTTP. A default channel, by contrast, may often be deliberately unencrypted for convenient internal use, or, if used externally, may be intended only for web traffic (not tunneling traffic). In addition, a default channel supports several protocols, but it’s a best practice to limit the protocols that external clients can access. Finally, external clients may require HTTP tunneling in order to make connections, but it’s often inadvisable to enable tunneling on an unsecured default channel that’s already servicing external HTTP traffic, because doing so could give unauthorized external JMS and EJB clients unsecured access to the WebLogic cluster through the same HTTP path.
The basic requirements for configuring a custom channel for remote EJB and JMS access are:
Configure a T3 protocol network-access-point (NAP) with the same name and port on each server (the operator will set the listen address for you).
Configure the external listen address and port on each NAP to match the address and port component of a URL your clients can use. For example, if you are providing access to remote clients using a load balancer then these should match the address and port of the load balancer.
If you want WebLogic T3 clients to tunnel through HTTP, then enable HTTP tunneling on each NAP. This is often necessary for load balancers.
Do NOT set outbound-enabled to true on the network-access-point (the default is false), as this may cause internal network traffic to stall in an attempt to route through the network-access-point.
Ensure you haven’t enabled calculated-listen-ports for WebLogic dynamic cluster servers. The operator requires that a channel have the same port on each server in a cluster, but calculated-listen-ports causes the port to be different on each server.
For example, here is a snippet of a WebLogic domain’s config.xml for a channel named MyChannel defined for a WebLogic dynamic cluster named cluster-1:
```xml
<server-template>
  <name>cluster-1-template</name>
  <listen-port>8001</listen-port>
  <cluster>cluster-1</cluster>
  <network-access-point>
    <name>MyChannel</name>
    <protocol>t3</protocol>
    <public-address>some.public.address.com</public-address>
    <listen-port>7999</listen-port>
    <public-port>30999</public-port>
    <http-enabled-for-this-protocol>true</http-enabled-for-this-protocol>
    <tunneling-enabled>true</tunneling-enabled>
    <outbound-enabled>false</outbound-enabled>
    <enabled>true</enabled>
    <two-way-ssl-enabled>false</two-way-ssl-enabled>
    <client-certificate-enforced>false</client-certificate-enforced>
  </network-access-point>
</server-template>
<cluster>
  <name>cluster-1</name>
  <cluster-messaging-mode>unicast</cluster-messaging-mode>
  <dynamic-servers>
    <name>cluster-1</name>
    <server-template>cluster-1-template</server-template>
    <maximum-dynamic-server-count>5</maximum-dynamic-server-count>
    <calculated-listen-ports>false</calculated-listen-ports>
    <server-name-prefix>managed-server</server-name-prefix>
    <dynamic-cluster-size>5</dynamic-cluster-size>
    <max-dynamic-cluster-size>5</max-dynamic-cluster-size>
  </dynamic-servers>
</cluster>
```
And here is a snippet of offline WLST code that corresponds to the above config.xml snippet:
```python
templateName = "cluster-1-template"
cd('/ServerTemplates/%s' % templateName)
templateChannelName = "MyChannel"
create(templateChannelName, 'NetworkAccessPoint')
cd('NetworkAccessPoints/%s' % templateChannelName)
set('Protocol', 't3')
set('ListenPort', 7999)
set('PublicPort', 30999)
set('HttpEnabledForThisProtocol', true)
set('TunnelingEnabled', true)
set('OutboundEnabled', false)
set('Enabled', true)
set('TwoWaySslEnabled', false)
set('ClientCertificateEnforced', false)
```
In this example:
WebLogic binds the custom network channel to port 7999 and the default network channel to port 8001.
The operator will automatically create a Kubernetes service named DOMAIN_UID-cluster-cluster-1 for both the custom and default channel.
Internal clients running in the same Kubernetes cluster as the channel can access the cluster using t3://DOMAIN_UID-cluster-cluster-1:8001 (see the sketch following this list).
External clients would be expected to access the cluster through the custom channel using URLs like t3://some.public.address.com:30999 or, if using tunneling, http://some.public.address.com:30999.
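For comparison, here is a minimal sketch of an internal client, meaning one that runs in a pod in the same Kubernetes cluster and namespace as the WebLogic cluster, connecting through the operator-generated service and the default channel port shown above; the JNDI name and the DOMAIN_UID placeholder are assumptions.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class InClusterClient {
  public static void main(String[] args) throws Exception {
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    // The operator-generated cluster Service resolves by name within the same namespace;
    // 8001 is the default channel port from the config.xml snippet above.
    env.put(Context.PROVIDER_URL, "t3://DOMAIN_UID-cluster-cluster-1:8001");

    Context ctx = new InitialContext(env);
    try {
      Object cf = ctx.lookup("jms/myCF"); // placeholder JNDI name
      System.out.println("Looked up: " + cf);
    } finally {
      ctx.close();
    }
  }
}
```

A client pod in a different namespace would typically qualify the service name with the namespace (for example, DOMAIN_UID-cluster-cluster-1.MY_NAMESPACE).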
Channel configuration for a configured cluster requires configuring the same network-access-point on each server. The operator currently doesn’t test or support network channels that have a different configuration on each server in the cluster.
Additional steps are required for external clients beyond configuring the custom channel - see Approaches.
A Kubernetes NodePort exposes a port on each machine that hosts the Kubernetes cluster; this port is accessible from outside of the Kubernetes cluster and redirects network traffic that arrives on it to pods within the cluster. Setting up a Kubernetes NodePort is one approach for giving external WebLogic clients access to JMS or EJBs.
If an EJB or JMS service is running on an Administration Server, then you can skip the rest of this section and use the spec.adminServer.adminService.channels domain resource attribute to have the operator create a NodePort for you. See Reference - Domain resource. Otherwise, if the EJB or JMS service is running in a WebLogic cluster or standalone WebLogic Managed Server, and you want to provide access to the service using a NodePort, then the NodePort must be deployed ‘manually’ - see the following sample and table.
Setting up a NodePort usually also requires setting up a custom network channel. See Adding a WebLogic Custom Channel above.
The following NodePort YAML deploys an external node port of 30999 and internal port 7999 for a domain UID of DOMAIN_UID, a domain name of DOMAIN_NAME, and a cluster name of CLUSTER_NAME. It assumes that 7999 corresponds to a T3 protocol port of a channel that’s configured on your WebLogic cluster.
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: DOMAIN_UID-cluster-CLUSTER_NAME-ext
  labels:
    weblogic.domainUID: DOMAIN_UID
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  sessionAffinity: ClientIP
  selector:
    weblogic.domainUID: DOMAIN_UID
    weblogic.clusterName: CLUSTER_NAME
  ports:
    - name: myclustert3channel
      nodePort: 30999
      port: 7999
      protocol: TCP
      targetPort: 7999
```
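After substituting DOMAIN_UID and CLUSTER_NAME, and adjusting the namespace and ports to match your configuration, the Service can be deployed with kubectl apply -f and the name of the file you saved it in.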
| Attribute | Description |
| --- | --- |
| metadata.name | For this particular use case, the NodePort name can be arbitrary as long as it is DNS compatible. But, as a convention, it’s recommended to use DOMAIN_UID-cluster-CLUSTER_NAME-ext. |
| metadata.namespace | Must match the namespace of your WebLogic cluster. |
| metadata.labels | Optional. It’s helpful to set a weblogic.domainUID label so that the service is easily associated with its WebLogic domain. |
| spec.ports.name | This name is arbitrary. |
| spec.ports.nodePort | The external port that clients will use. This must match the external port that’s configured on the WebLogic configured channels/network-access-points. By default, Kubernetes requires that this value range from 30000 to 32767. |
| spec.ports.port and spec.ports.targetPort | These must match the port that’s configured on the WebLogic configured channel/network-access-point(s). |
With some cloud providers, a load balancer or NodePort may implicitly expose a port to the public Internet.
If such a port supports a protocol suitable for WebLogic clients, note that WebLogic allows access to JNDI entries, EJB/RMI applications, and JMS by anonymous users by default.
You can configure a custom channel with a secure protocol and two-way SSL to help prevent external access by unwanted clients. See When is a WebLogic Custom Channel needed?.
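In addition to locking down the channel, clients can be required to authenticate rather than connect anonymously, in combination with WebLogic security policies on the relevant JNDI, EJB, and JMS resources. The following is a minimal sketch only; the user name, password source, URL, and JNDI name are placeholder assumptions, and authentication alone does not substitute for securing the channel.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class AuthenticatedClient {
  public static void main(String[] args) throws Exception {
    // Password supplied out of band; the environment variable name is a placeholder.
    String password = System.getenv("WL_CLIENT_PASSWORD");
    if (password == null) {
      throw new IllegalStateException("Set the WL_CLIENT_PASSWORD environment variable");
    }

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, "t3://some.public.address.com:30999");
    // Authenticate as a named WebLogic user rather than connecting anonymously.
    env.put(Context.SECURITY_PRINCIPAL, "myClientUser"); // placeholder user
    env.put(Context.SECURITY_CREDENTIALS, password);

    Context ctx = new InitialContext(env);
    try {
      Object resource = ctx.lookup("jms/myCF"); // placeholder; access is governed by your security policies
      System.out.println("Looked up: " + resource);
    } finally {
      ctx.close();
    }
  }
}
```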
See Run Standalone WebLogic JMS Clients on Kubernetes for sample JMS client code and JMS configuration.
See T3 RMI Communication for WebLogic Server Running on Kubernetes for a deep-level discussion of using T3 in combination with port mapping.