Wednesday, October 25, 2017

Threads in Managed Environments. Why Our Work Managers Need Some Tuning

First of all, I need to say that the standard ('default') Work Manager is perfectly adequate in most cases: a separate Work Manager with the default configuration is created during server startup for every deployed application. An additional Work Manager should be defined only in the following cases:

  • By default, all threads have the same priority; if this behaviour isn't suitable, the Fair Share parameter must be set.

  • A response time goal is assigned to the server; the Response Time parameter must be set.

  • A deadlock (e.g., during server-to-server communication) might occur; a Minimum Threads Constraint should be created and assigned to the Work Manager.

  • Applications share a common JDBC connection pool; the maximum number of available threads (Maximum Threads Constraint) for the applications must be limited by the pool capacity.
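Such a Work Manager can be defined, for example, in an application's WEB-INF/weblogic.xml. The fragment below is a sketch: the names (MyWorkManager, MyFairShare, MyMinThreads) and the values are placeholders to be adjusted to your application.

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <!-- A dedicated Work Manager for this web application -->
    <work-manager>
        <name>MyWorkManager</name>
        <!-- Relative share of threads compared to other Work Managers -->
        <fair-share-request-class>
            <name>MyFairShare</name>
            <fair-share>80</fair-share>
        </fair-share-request-class>
        <!-- Guaranteed threads, e.g. to avoid server-to-server deadlocks -->
        <min-threads-constraint>
            <name>MyMinThreads</name>
            <count>5</count>
        </min-threads-constraint>
    </work-manager>
    <!-- Dispatch this application's requests via the Work Manager above -->
    <wl-dispatch-policy>MyWorkManager</wl-dispatch-policy>
</weblogic-web-app>
```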

Separately, if Oracle Service Bus is deployed to Oracle WebLogic and the Service Callout action is used, each Proxy and Business service invoked via a Service Callout should have its own Work Manager. More information can be found in Antony Reynolds' article Following the Thread in OSB.

Friday, October 13, 2017

ESB vs EAI: "Universal Service", What is Wrong with This Pattern

Some technical people understand the Enterprise Service Bus (ESB) concept as a universal channel designed merely to transmit XML messages, encoded as plain strings, among enterprise applications. The channel should provide no validation/enrichment/monitoring capabilities; it is considered only a dumb message router that also transforms messages into a format accessible to the enterprise applications. A powerful and expensive integration middleware, like Oracle Service Bus, Oracle SOA Suite, IBM Integration Bus, or SAP PI/XI, is chosen as a platform for the integration solution. Usually, it's required that the IT team be able to configure new or existing routes just by editing a few records in the configuration database.

The developers of such a "universal solution" believe that a new application can be connected to the solution just by designing an appropriate adapter and inserting a few records into the configuration database.

In fact, the developers have to implement a number of integration patterns and, optionally, a canonical data model using a small subset of the capabilities provided by the integration platform.

The focus of the article is to explain why the above approach is not effective and why developers should leverage as many capabilities of their preferred middleware platform as possible.

Tuesday, October 3, 2017

Threads in Managed Environments. Work Managers

We pay for modern application servers since they provide a managed environment for our applications. An application server implements some APIs, for example Java EE 7 or Java EE 8, as well as provides some capabilities such as application life-cycle management, transaction management, resource access and thread management.

Thread pool

An application server uses a thread pool to provide the thread management capability. While an application deployed on the server is running, a thread isn't created when a new request is accepted; instead, it is taken from the pool. This approach protects the server from creating too many threads and overwhelming the operating system with the duty of scheduling them. When no threads are available in the pool, accepted requests are queued until a thread becomes free.
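The pooling behaviour can be illustrated with a plain java.util.concurrent sketch (not WebLogic's actual self-tuning implementation): a bounded pool executes requests with a fixed number of worker threads, and surplus requests wait in a queue instead of spawning new threads.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // At most 4 worker threads; further accepted "requests" wait in the
        // bounded queue instead of causing new threads to be created.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4,                         // core and maximum pool size
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100)); // accepted requests queue here

        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < 20; i++) {
            pool.execute(processed::incrementAndGet); // a "request"
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("processed=" + processed.get()); // processed=20
    }
}
```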

The IT team can specify the following parameters of the thread pool:

  • thread priority - ranks threads created by different pools by priority, so that, for example, a user request to a business-critical application is processed ahead of other threads in the system.

  • number of threads - limits the number of concurrent threads executing requests. Modern application servers, for example Oracle WebLogic, let us set up the limit not only as a constant value but also as a reference to a data source, so the maximum number of threads equals the capacity of the connection pool related to that data source: each thread gets a connection to the database.
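In WebLogic terms, tying the thread limit to a connection pool is done with a Maximum Threads Constraint that references a data source instead of a fixed count. The fragment below is a sketch of the domain configuration (config.xml); MyDataSource and the constraint name are placeholders.

```xml
<!-- Sketch of a domain configuration (config.xml) fragment -->
<max-threads-constraint>
    <name>MyMaxThreads</name>
    <!-- The constraint tracks the capacity of the named data source's
         connection pool rather than a hard-coded number of threads -->
    <pool-name>MyDataSource</pool-name>
</max-threads-constraint>
<work-manager>
    <name>MyWorkManager</name>
    <max-threads-constraint>MyMaxThreads</max-threads-constraint>
</work-manager>
```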

The application server combines the above parameters with its own internal optimizations, analyzing the current workload, the number of available processors and the amount of free memory.

Friday, September 8, 2017

Exposing Servlet- and JAX-RS-based WebSphere Liberty REST APIs with Swagger

An amazing article, Developing a Swagger-enabled REST API using WebSphere Developer Tools, demonstrates how to expose an ordinary servlet as a REST API using a new WebSphere Liberty feature called apiDiscovery-1.0.

I've slightly rewritten the servlet code to use the JSR 353/JSON-P API and eliminated all WebSphere-specific code, so the demonstration project can be built with Apache Maven: just add the 'javax.json:javax.json-api:jar' dependency to your pom.xml.

Including a swagger.json or swagger.yaml file inside the corresponding META-INF folder is the easiest way to expose the documentation of web modules, but not the only one. If the web application does not provide a swagger.json or swagger.yaml file and contains JAX-RS-annotated resources, the Swagger document will be generated automatically. As mentioned in the official documentation, the server configuration must include the apiDiscovery-1.0 feature and the jaxrs-1.1 or jaxrs-2.0 feature; for example:
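A corresponding server.xml fragment could look like this (feature names as documented for WebSphere Liberty; the description is illustrative):

```xml
<server description="Liberty server exposing Swagger documentation">
    <featureManager>
        <!-- Generates and serves swagger.json / swagger.yaml -->
        <feature>apiDiscovery-1.0</feature>
        <!-- JAX-RS runtime whose annotated resources are scanned -->
        <feature>jaxrs-2.0</feature>
    </featureManager>
</server>
```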

The product scans all classes in the web application for JAX-RS and Swagger annotations, searching for classes with @Path, @Api, and @SwaggerDefinition annotations. The apiDiscovery-1.0 feature automatically generates a corresponding Swagger document and makes it available at the following URIs: http://host:port/context-root/swagger.json and http://host:port/context-root/swagger.yaml.

For example, if the following JAX-RS resource is deployed on the server:
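A minimal resource that would be picked up by the scan might look as follows. This is a sketch: the class name and path are illustrative, and the class needs a JAX-RS 2.0 runtime (such as Liberty itself) to be deployed and run.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// The apiDiscovery-1.0 scan finds this class via its @Path annotation
// and documents the GET operation in the generated Swagger document.
@Path("/greeting")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello from Liberty";
    }
}
```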

Thursday, August 31, 2017

Oracle SOA Suite Performance Monitoring

Oracle Enterprise Manager Fusion Middleware Control Console (EM) ensures runtime governance through composite application modelling and monitoring, as well as comprehensive service and infrastructure management functionality, helping organizations maximize their return on investment. Let's consider the performance management capabilities provided by this instrument.

Monitoring performance of the Oracle SOA Suite runtime

The Request Processing tab uses three grid views to present performance information. The tab is available under the Monitoring -> Request Processing item of the SOA -> soa-infra context menu. The displayed information is layered by:

  • service engine (BPEL, BPMN, Mediator, Human Workflow, Business Rule, Spring):
    • average request processing time - synchronous
    • average request processing time - asynchronous
    • active request count
    • processed request count
    • fault count
  • the summary about service infrastructure:
    • average request processing time - synchronous
    • average request processing time - asynchronous
    • active request count
    • processed request count
    • fault count
  • binding components:
    • web-service (WS) inbound
    • web-service (WS) outbound
    • Java EE Connector Architecture (J2CA) inbound
    • Java EE Connector Architecture (J2CA) outbound
    The following metrics are available:
    • average request processing time
    • processed request count
    • error count

Wednesday, August 23, 2017

Oracle WebLogic Cluster Causes Network Storm When a Problem Happens

Not every system administrator is aware of such a WebLogic Server capability as Message Forwarding to Domain Logs. In addition to writing messages to the server log file, each server instance forwards a subset of its messages to a domain-wide log file. This domain-wide log file certainly helps the system administrator understand the situation on a large domain; for instance, when several dozen servers belong to the domain, it is very helpful to have all server logs in one place. But the convenience comes at a price.

If there are problems caused by applications deployed on the server, the server log tends to get flooded with diagnostic messages and large stack traces. The problem is that these messages and traces are written not only to the server log file but are also forwarded to the administration server via the network, causing a network storm. The administration server, as a result, might become inaccessible.

For a medium domain (4 - 8 WebLogic Server instances), the idea of simply disabling the message forwarding capability can be considered. The Domain log broadcaster: Severity level property on the Environment -> Servers -> SERVER -> Logging -> General, Advanced page must be set to Critical or higher for every managed server.
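The same change can be scripted with WLST instead of clicking through the console. The sketch below assumes placeholder credentials, host and port, and uses the DomainLogBroadcastSeverity attribute of the server LogMBean; it runs under WebLogic's wlst.sh, not plain Python.

```python
# WLST sketch: raise the severity threshold for messages broadcast
# to the domain log on every managed server.
connect('weblogic', 'password', 't3://adminhost:7001')  # placeholders
edit()
startEdit()
servers = cmo.getServers()
for server in servers:
    name = server.getName()
    if name != 'AdminServer':  # leave the administration server untouched
        cd('/Servers/%s/Log/%s' % (name, name))
        cmo.setDomainLogBroadcastSeverity('Critical')
activate()
disconnect()
```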

Thursday, August 17, 2017

DevOps Strategy for Oracle Fusion Middleware

Let me share the main points of a successful DevOps strategy for such a large monolith as the Oracle Fusion Middleware platform.

I see the following principles for the DevOps strategy:

1. The reference configuration must be specified and stabilized. The DevOps world offers a number of Infrastructure as Code instruments such as Ansible, Chef, Puppet, Terraform, etc. Some utilities call themselves "orchestrators", but in any case, any environment should be created from the reference configuration only. Also, the reference configuration must be covered by tests. The tests must ensure the configuration stays up to date and has no leaks.

2. There is a negative phenomenon in the world of large middleware - Configuration Drift. The drift may take different forms, from mismatched sets of patches installed on the test and production environments to slightly different configurations of JDBC connection pools or JTA transaction timeouts. It's almost impossible to eliminate configuration drift entirely, but with the right DevOps strategy we can minimize it and keep it under control.

3. The basic step to controlling configuration drift is to prohibit manual changes to the configuration. Sometimes urgent changes appear without which the platform just doesn't work at all, but these changes must be immediately reflected in the reference configuration.

4. In order to bring additional reliability to the infrastructure, a rollback mechanism should be introduced. If a change moves the production environment to a broken state, the operations team should have a reliable mechanism to return to the previous state as quickly as possible. Another way is to check out a stable version of the infrastructure from a source code management tool, since the infrastructure is itself code, but for Oracle Fusion Middleware based platforms such as SOA Suite or WebCenter Portal this process takes an excessive amount of time.

5. The next step could be the automation of diagnostic information gathering; automation utilities such as Chef, Puppet, Ansible and others can really help here.

Your comments are welcome!

Would you like to give a 'Like'? Please follow me on Twitter!