Archive for category Tutorials

Apache Tomcat SPNEGO authentication configuration

This is a step-by-step HOW-TO for configuring an AD server and an Apache Tomcat server to achieve NTLM single sign-on. We will try to use Tomcat's built-in SPNEGO support without 3rd party libraries. The advantage is that it works out of the box. The disadvantage is that there's no fallback to BASIC authentication if the client doesn't support SPNEGO authentication.

Environmental parameters

  • domain: eu-central-1.compute.internal – the domain is derived from AWS internal DNS names, easier to resolve
  • realm: EU-CENTRAL-1.COMPUTE.INTERNAL – capital letters of the domain name

AD Server installation

Follow this AD step-by-step tutorial:

  1. Name the server (e.g. dc01)
  2. Add the server role – Active Directory Domain Services
  3. Promote server to Active Directory controller
  4. Add new AD Forest eu-central-1.compute.internal
  5. Create users:
    • tomcat – this user will be used to run the application server service
    • client – this will be used to test the service (or – we can use the administrator user to test)
  6. Add the client user (or administrator if you wish) to the Users group. We need at least one group which will be treated as a user role for accessing the test application. The Users group is already there, so we will reuse it. Please note that users are not automatically in the group; they need to be added manually.
  7. Download and install JRE
  8. Download and install the Apache Tomcat server
  9. Open port 8080 on the firewall (we won’t use any HTTP proxy for this configuration)
  10. Run the Tomcat service as the domain tomcat user. The user should be granted the Log on as a service right.

Kerberos configuration

  • register SPN (Service Principal Name) – binding between service and account
    • setspn -A HTTP/ tomcat
    • each service may be bound only to one account
  • create a keytab file
    • ktpass /out c:\tomcat.keytab /mapuser tomcat@eu-central-1.compute.internal /princ HTTP/ /pass <tomcat_user_password> /kvno 0 /pType KRB5_NT_PRINCIPAL
    • move C:\tomcat.keytab C:\programs\Tomcat8\conf
  • initialize a Kerberos key (i.e. test the keytab settings) with the kinit tool
    • java -D””=true -k -t C:\programs\Tomcat8\conf\tomcat.keytab HTTP/
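Putting the steps above together, the commands might look like the sketch below. The host name dc01 and the full principal are assumptions based on this setup (the server named in the AD installation steps); adjust them to your environment:

```bat
rem register the SPN - bind the HTTP service on the host to the tomcat account
rem (dc01.eu-central-1.compute.internal is an assumed host name)
setspn -A HTTP/dc01.eu-central-1.compute.internal tomcat

rem create the keytab for that principal
ktpass /out c:\tomcat.keytab /mapuser tomcat@eu-central-1.compute.internal /princ HTTP/dc01.eu-central-1.compute.internal@EU-CENTRAL-1.COMPUTE.INTERNAL /pass <tomcat_user_password> /kvno 0 /pType KRB5_NT_PRINCIPAL

rem verify the keytab with the kinit tool shipped in the JRE bin directory
"%JAVA_HOME%\bin\kinit" -k -t C:\programs\Tomcat8\conf\tomcat.keytab HTTP/dc01.eu-central-1.compute.internal@EU-CENTRAL-1.COMPUTE.INTERNAL
```

If kinit succeeds without prompting for a password, the keytab and the SPN registration are consistent.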

Tomcat SPNEGO configuration

For simplicity we will run the Apache Tomcat server on the AD domain controller (so we have only a single Windows server instance running); in production, however, Tomcat will run on a different server instance (Windows or Linux).

First we will define the user realm, so authenticated users are recognized and their roles loaded. Let's define the user realm in conf/server.xml:

roleBase="cn=Builtin,dc=eu-central-1,dc=compute,dc=internal" />
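A minimal JNDIRealm sketch for such an AD setup might look like the following; the connection URL, bind account and the user/role search attributes are assumptions here – verify them against your directory (e.g. with JXplorer):

```xml
<Realm className="org.apache.catalina.realm.JNDIRealm"
       connectionURL="ldap://dc01.eu-central-1.compute.internal:389"
       connectionName="tomcat@eu-central-1.compute.internal"
       connectionPassword="changeit"
       userBase="cn=Users,dc=eu-central-1,dc=compute,dc=internal"
       userSearch="(sAMAccountName={0})"
       userSubtree="true"
       roleName="cn"
       roleSearch="(member={0})"
       roleSubtree="true"
       roleBase="cn=Builtin,dc=eu-central-1,dc=compute,dc=internal" />
```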

Validate the AD settings using the Active Directory Service management console or the simple JXplorer tool to see the exact AD LDAP parameters. After this step you should be able to log in to the web application using an Active Directory username and password (using BASIC or FORM authentication so far).

Create ${CATALINA_HOME}/conf/krb5.ini file:

[libdefaults]
default_keytab_name = FILE:c:\programs\Tomcat8\conf\tomcat.keytab
default_tkt_enctypes = rc4-hmac,aes256-cts-hmac-sha1-96,aes128-cts-hmac-sha1-96
default_tgs_enctypes = rc4-hmac,aes256-cts-hmac-sha1-96,aes128-cts-hmac-sha1-96

[realms]
EU-CENTRAL-1.COMPUTE.INTERNAL = {
    kdc =
}

[domain_realm]
eu-central-1.compute.internal = EU-CENTRAL-1.COMPUTE.INTERNAL
.eu-central-1.compute.internal = EU-CENTRAL-1.COMPUTE.INTERNAL


  • on a Linux JRE the default file name is krb5.conf
  • a custom file path or name can be set in the system property
  • it seems the capital letters in the REALM name are important

Security consideration:

In most of the examples and configuration samples we can find rc4-hmac in default_tgs_enctypes and/or default_tkt_enctypes. Do not use it if you can avoid it: RC4-HMAC is considered weak and is deprecated. Prefer the AES encryption types; however, in that case you may need to download the “Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files“. Check that you are downloading the policy files for your current JRE version. Correctly, support for RC4-HMAC should be disabled on the AD server, but that is out of scope of this blog.

This encryption only protects the Negotiate HTTP headers, so I am OK with using it until proper HTTPS is in place. Still, it is safer than falling back to BASIC authentication and passing a cleartext password (even if encoded).

Create ${CATALINA_HOME}/conf/jaas.conf file:

com.sun.security.jgss.krb5.initiate { required

com.sun.security.jgss.krb5.accept { required


  • a custom jaas.conf name or path can be defined in the system property

Web application

As a test application we can create a simple WAR web application with the following files:


<%@page contentType="text/html" pageEncoding="UTF-8"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>SPNEGO test</title>
</head>
<body>
<h1>Hello World!</h1>
<p>auth type: <%=request.getAuthType()%> </p>
<p>remote user: <%=request.getRemoteUser() %> </p>
<p>principal: <%=request.getUserPrincipal() %></p>
<p>name: <%= (request.getUserPrincipal()!=null)?request.getUserPrincipal().getName():"NO PRINCIPAL" %></p>
</body>
</html>


<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="" xmlns:xsi="" xsi:schemaLocation="">

    <description>App user role</description>


  • we assume the client user is in the Users group (which is treated as the user's role by the Realm definition)
  • we can test it first with the BASIC authentication method, to verify that the Realm settings are correct
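The relevant security parts of web.xml might look like the sketch below; the resource name and URL pattern are assumptions, while the Users role reflects the group discussed above:

```xml
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected area</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>Users</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <!-- switch to BASIC temporarily to verify the realm settings -->
    <auth-method>SPNEGO</auth-method>
</login-config>
<security-role>
    <description>App user role</description>
    <role-name>Users</role-name>
</security-role>
```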


<?xml version="1.0" encoding="UTF-8"?>
<Context antiJARLocking="true" path="/spnego">
  <!-- valve will be explicitly created when SPNEGO is used,
   this is to declare additional attributes -->
  <Valve className="org.apache.catalina.authenticator.SpnegoAuthenticator"
         alwaysUseSession="true" cache="true" />
</Context>

I believe the SpnegoAuthenticator valve is added to the context automatically when the SPNEGO authentication method is used; in this definition we only specify additional settings.

Application Client

By default (according to our experience), NTLM is directly supported by Internet Explorer and Google Chrome; Mozilla Firefox needs additional configuration.

To achieve SSO we need following configuration:

  • The client workstation must be in the AD domain and the user must be logged in with a domain account
  • The client workstation must be a different Windows instance than the server, so it won't work when we run the application server locally. According to the Tomcat documentation, in that case the unsupported NTLM protocol would be used.
  • The application domain URL must be the same as defined in the krb5 configuration file.

Security consideration

NTLM is basically a challenge-response authentication with data sent over HTTP headers. As a best practice we advise keeping the headers confidential using the HTTPS protocol, which can mitigate a potential pass-the-hash attack.




Trip to the client side


Talend ESB includes SAM (Service Activity Monitoring) – a feature gathering and storing web service messages along with some metadata. To display the stored event data, Talend offers a monitoring UI as part of their commercial offering (subscription).

The SAM bundles include a REST service enabling simple access to the stored events. For real production monitoring one needs much more (search, filters, views, ..), so for heavy-duty usage clients should use the Talend Administration or full-blown monitoring solutions (Hyperic HQ, Nagios, ELK (ElasticSearch/LogStash/Kibana), etc). I found these basic SAM services particularly useful for simple logging and monitoring during development or in an early production phase.

I consider myself a server-side expert: quite competent in the system integration domain as well as in server-side development. When creating a user interface, I rely mostly on JSF (ICEfaces). So as a part of our integration solution I built a nice JSF web application with a paginated table and views displaying event details.

And then I got an idea. There's a lot of buzz about HTML5 and client-side applications; however, I have a proven distrust of anything based on JavaScript. A few years back one needed to test any JS functionality on every browser, with no warranty it would still work in a few weeks. But maybe it's time to give it another chance.
Let's try to build a simple client over the services providing the logged data. It will be a nice exercise as well as an opportunity to learn something new. Another advantage is that such a light application could run from the ESB itself without the need for an external web server (although in real-life scenarios, there's a web/app server present anyway).

Client application

Let's start simple. My idea was to have a table with server-side pagination.

Apparently, many people start learning new UI technologies with such an example, as it seems very practical and useful. However, it is not as simple as it looks: to have a paginated table, we need multiple components playing together. This is far from a good step-by-step tutorial, and I assume the reader knows many things, so I skip what I consider granted and clear. As I can be pretty stubborn sometimes, I will go on with this idea.

So – we will

  • create a table
  • load data from an external service
  • paginate the data on the server-side

Another point – I am really no expert in the JS / HTML5 / CSS domain. If you find any way to improve it or can give constructive advice, you are really welcome. I took this task as an opportunity to learn, and if you are willing to share too, it will be much appreciated.

Client services

This blog is not about the Talend ESB, nor about SAM itself, but if you want to start using it, here is a small hint:
– download and install the Talend ESB
– install the tesb-sam-* features

What I particularly like about this solution is that the monitoring agent is implemented as a CXF feature. It means that it takes almost no effort to enable monitoring on existing services and it doesn't touch the framework libraries.

To get the SAM data, the user can call the list REST service
http://server:port/services/sam/list with the optional parameters offset and limit.

For simplified testing, I’ve stored a sample output from the service so we can play locally too.
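The sample output follows the shape the client code below expects – an aggregated list of events plus a count element; the field values here are made up for illustration:

```json
{
  "count": 42,
  "aggregated": [
    {
      "timestamp": 1369502462000,
      "operation": "getOrder",
      "port": "{}OrderServicePort",
      "elapsed": 15,
      "types": "REQ,RESP",
      "flowID": "a9f0ed64-afaa-4e3c-9c5d-72c94424cbc3",
      "providerIP": "",
      "providerHost": "esb01"
    }
  ]
}
```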

YUI Library

YUI is a free, open source JavaScript and CSS library for building rich interactive web applications.
Now, many of you may ask why I chose the YUI Library. People use other frameworks too (jQuery, AngularJS, ..). Well, there is no apparent reason; simply, I like the default skin design. It reminds me how I felt when my mother chose her car based on its color. I simply picked one of many options, but the principles stay more or less the same.



YUI implementation

For a start we will build only the main activity table; we won't go into master / detail views, etc.
This is the whole implementation, with comments:

<html>
    <head>
        <title>YUI test</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <script type="text/javascript" src=""></script>
        <link rel="stylesheet" href="apogado-css.css" />
    </head>
    <body class="yui3-skin-sam">
        <div id="messages"></div>
        <div id="mainTable"></div>
        <div id="mainTablePaginator"></div>
        <script type="text/javascript">
            YUI().use(
                // modules to work with datatable,
                // data source and json datasource
                'datatable-base', 'datatype-date',
                'datasource-io', 'datasource-jsonschema',
                'datatable-datasource',
                // gallery paginator
                'gallery-datatable-paginator', 'gallery-paginator-view',
                // node for displaying messages
                'node',
                function(Y) {
                    // function to display response status messages
                    function displayMessage(msg) {
                        var msgElement ='#messages');
                        msgElement.setHTML(msg);
                    }

                    var mainTable = new Y.DataTable({
                        columns: [
                            {
                                key: "flowID",
                                formatter: function(o) {
                                    return '<img src="./arrow-next.gif"/>';
                                },
                                allowHTML: true
                            },
                            {   // convert long to readable
                                // datetime format
                                key: "timestamp",
                                formatter: function(o) {
                                    return Y.Date.format(new Date(o.value), {format: '%Y-%m-%d %H:%M:%S'});
                                }
                            },
                            'operation', 'port', 'elapsed', 'types',
                            'providerIP', 'providerHost'],
                        paginator: new Y.PaginatorView({
                            model: new Y.PaginatorModel(
                                    {page: 1, itemsPerPage: 10}),
                            container: '#mainTablePaginator',
                            // display only reasonable amount
                            // of links
                            maxPageLinks: 5
                        }),
                        // use server side pagination
                        paginationSource: 'remote',
                        // request template for each page
                        requestStringTemplate: '?page={page}&limit={itemsPerPage}'
                    });

                    // define datasource
                    var dataSource = new Y.DataSource.IO({
                        source: './camel/sam/list'
                    });

                    // event handler, in this case we display
                    // status code and status message
                    dataSource.on('response', function(e) {
                        displayMessage( + ' ' +;
                    });

                    // datasource produces JSON data
                    // we need to map the result to columns
                    dataSource.plug(Y.Plugin.DataSourceJSONSchema, {
                        schema: {
                            resultListLocator: 'aggregated',
                            resultFields: [
                                'timestamp', 'operation', 'port',
                                'elapsed', 'types', 'flowID',
                                'providerIP', 'providerHost'
                            ],
                            // necessary for the server-side
                            // pagination to work
                            // NOTE: It is ESSENTIAL that your response includes meta data
                            // for totalItems, otherwise the paginator won't render.
                            metaFields: {
                                // the 'count' parameter reflects the count element in the
                                // returned data
                                totalItems: 'count'
                            }
                        }
                    });

                    // tell the table to use the datasource
                    mainTable.plug(Y.Plugin.DataTableDataSource, {
                        datasource: dataSource,
                        // if initialRequest is defined,
                        // a request is issued as soon
                        // as the DataTable datasource is initialized
                        initialRequest: ''
                    });

                    mainTable.render('#mainTable');
                });
        </script>
    </body>
</html>


Parameter consideration

The SAM resource REST service URL looks as follows
http://server:port/services/sam/list [?offset=…&[limit=…]]

and the paginated DataTable sends requests in the format ?page={page}&limit={itemsPerPage} (the requestStringTemplate above), so we need to transform the page into an offset: offset = (page - 1) * limit.

I didn't find a reasonable way to do it at the UI level. In retrospect, the paginator works on pages, not offsets, so we won't force the tool where it is not appropriate. As a result I created a Camel route transforming the page parameter to the offset. There are two advantages to this approach: first, I could do it quickly; second, we can treat (and secure) the service endpoint as a part of the application.
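The page-to-offset mapping itself is trivial; the Camel route only needs to apply this formula to the incoming query parameters (the class and method names here are illustrative, not from the original project):

```java
public class PageToOffset {

    // translate the paginator's 1-based page + limit
    // into the offset expected by the SAM list service
    public static int offset(int page, int limit) {
        if (page < 1) {
            page = 1; // be defensive about bad input
        }
        return (page - 1) * limit;
    }

    public static void main(String[] args) {
        System.out.println(offset(1, 10)); // first page -> offset 0
        System.out.println(offset(3, 10)); // third page -> offset 20
    }
}
```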

Follow up

I’ve created a client application with detail views, see

There are two build profiles – run-on-karaf and run-on-tomcat; I hope they are self-explanatory. To run on Karaf, one must install the camel-cxf and spring-web features too.


We have a light and quick client application which even looks nice. It didn't take a lot of time to learn and, mainly, it didn't hurt 🙂


Author is a senior consultant at Apogado


Authentication and authorization of a web application in Apache ServiceMix using JAAS

Why Apache ServiceMix?

First I had to answer for myself why to deploy a web application on Apache ServiceMix at all. Simply because we had it in place: we use it as a runtime container for web services (many popular ESBs are based on ServiceMix, such as Fuse ESB, Talend ESB, ..). I find Apache ServiceMix very lightweight, with a small footprint even under workload. So when creating a web application accessing the exposed services (Web Services or OSGi based), it makes sense to deploy the web app directly in ServiceMix instead of spinning up an extra web server just for one or two simple apps.

There’s a great resource by David Valeri about deploying a spring-mvc web application in the OSGi environment (and much more..).

In this exercise Talend ESB 5.2.0, based on Apache Karaf 2.2.9, is used – and a lot of web searching and trying.

Configuring the authentication and authorization

Once I had my web app running: coming from the J2EE world, I am used to setting up role-based security using the built-in JAAS. It proved to be not so straightforward with the default Apache ServiceMix setup. ServiceMix already uses the JAAS realm 'karaf' by default, with users and groups defined in the ./etc/ file; for a start I am happy with that. There are ways to set up JDBC- or LDAP-based realms, but that is not the goal here; all I wanted was at least simple role-based security around my web app without the weight of a whole spring-security configuration.

1. enable the jetty.xml configuration in etc/org.ops4j.pax.web.cfg (the org.ops4j.pax.web.config.file property)


2. define users and groups

in the ./etc/ file I defined a user with the role user (e.g. webuser=<password>,user)

3. define the Spring configuration for the web app in META-INF/spring/jetty-security.xml. This configuration duplicates the web.xml security constraint definition, but it was the only way I found working. It works with Jetty 7.6 shipped within Talend ESB; note that other Jetty versions may have slightly different packaging of the classes.

<?xml version="1.0" encoding="UTF-8"?>
<beans    xmlns=""     xmlns:xsi=""    xsi:schemaLocation="">
  <bean id="loginService" class="">       
    <property name="name" value="karaf" />
    <property name="loginModuleName" value="karaf" />    
 <bean id="constraint" class="">        
   <property name="name" value="BASIC"/>       
   <property name="roles" value="user"/>        
   <property name="authenticate" value="true"/>   
<bean id="constraintMapping" class="">        
  <property name="constraint" ref="constraint"/>        
  <property name="pathSpec" value="/*"/>    
 <bean id="securityHandler" class="">       
   <property name="authenticator">          
     <bean class=""/>     
   <property name="constraintMappings">          
     <ref bean="constraintMapping"/>     
  <property name="loginService" ref="loginService" />      
  <property name="strict" value="false" />   
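For reference, a complete jetty-security.xml under Jetty 7.6 might look like the sketch below; the class names are the usual Jetty 7.6 ones and should be verified against the Jetty version actually shipped with your container:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="">
  <bean id="loginService" class="org.eclipse.jetty.plus.jaas.JAASLoginService">
    <property name="name" value="karaf" />
    <property name="loginModuleName" value="karaf" />
  </bean>
  <bean id="constraint" class="">
    <property name="name" value="BASIC"/>
    <property name="roles" value="user"/>
    <property name="authenticate" value="true"/>
  </bean>
  <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping">
    <property name="constraint" ref="constraint"/>
    <property name="pathSpec" value="/*"/>
  </bean>
  <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
    <property name="authenticator">
      <bean class=""/>
    </property>
    <property name="constraintMappings">
      <list>
        <ref bean="constraintMapping"/>
      </list>
    </property>
    <property name="loginService" ref="loginService" />
    <property name="strict" value="false" />
  </bean>
</beans>
```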

4. I set up a default web.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="" xmlns:xsi="" xsi:schemaLocation="">
            <web-resource-name>All files</web-resource-name>

5. update pom.xml to import the configured packages

                Enable support for non-bundle packaging types
                                <DynamicImport-Package>javax.*, org.xml.sax, org.xml.sax.*, org.w3c.*</DynamicImport-Package>

6. this is an optional step. Well.. it depends. It seems the Jetty shipped with ServiceMix has a bug manifesting on the Windows OS: it does not release resources when using the NIO transport. Having a look into the ./etc/jetty.xml file, you can find that blocking channels are used by default (org.eclipse.jetty.server.nio.BlockingChannelConnector).

I believe that in a *NIX environment we can happily set up the NIO transport channel instead of the blocking connectors.

<Call name="addConnector">
    <Arg>
        <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
            <Set name="host">
                <SystemProperty name="" />
            </Set>
            <Set name="port">
                <SystemProperty name="jetty.port" default="8040"/>
            </Set>
            <Set name="maxIdleTime">30000</Set>
            <Set name="Acceptors">2</Set>
            <Set name="statsOn">false</Set>
            <Set name="confidentialPort">8443</Set>
            <Set name="lowResourcesConnections">5000</Set>
            <Set name="lowResourcesMaxIdleTime">5000</Set>
            <Set name="requestHeaderSize">8192</Set>
            <Set name="responseHeaderSize">8192</Set>
            <Set name="useDirectBuffers">false</Set>
        </New>
    </Arg>
</Call>

7. deploy the web app. I'm shipping the bundles as features or a KAR archive, but for a simple test I use direct mvn deployment from the command line:

features:install war
install -s war:mvn:<group-id>/<artifact-id>/<version>/war


Simple working JPA project in OSGi Environment


The goal: create a very simple working JPA persistence project that can be understood and built into a more complex solution.
This article mainly concerns the JPA client bundle configuration; we assume knowledge of OSGi platforms and the JPA specification.


  • We will keep the example simple (a single entity class, single service), but keeping OSGi modularization in mind.
  • In this example we will use a non-jta data source and we let the implementation bean manage its local transactions.
  • We will hardcode values in the blueprint configuration, no OSGi ConfigAdmin (or other external configuration) used

Note: this blog serves as my notepad, don't take anything for granted..

Used components

Apache Karaf (Apache ServiceMix 4.4.1-fuse-08-15)
The OSGi platform was originally intended to be a very lightweight runtime container where a user could deploy and configure modules as needed. On the other hand, complex functionality such as providing secure web services, data persistence or enterprise integration requires a set of modules playing together. So we advise using an off-the-shelf bundled OSGi container with already prepared and tested features, such as FuseSource, Talend Runtime, Apache ServiceMix, …

Apache Derby
A small and lightweight database, very good to start with.

OpenJPA Persistence
Apache OpenJPA is a Java persistence project at The Apache Software Foundation that can be used as a stand-alone POJO persistence layer or integrated into any Java EE compliant container and many other lightweight frameworks. We chose this framework for the example as it requires somewhat less configuration compared to EclipseLink (of course, there's a price in robustness and configuration options).

There is a nice overview of JPA framework options: tutorial-using-jpa-in-an-osgi-environment

Generally, there are 3 main frameworks for implementing object persistence; in this article we will create a sample for Apache Aries JPA with the OpenJPA persistence manager:

  • SpringDM ORM (jpa-hibernate OSGi feature)
  • Apache Aries (jpa feature)
    Mainstream persistence managers for Apache Aries JPA
    – Eclipse Gemini
    – Hibernate JPA
  • NoSQL custom projects (such as OrientDB Object)

There’s a rule to remember – DON’T MIX THEM!!!

In this example we will create 4 modules:
blogjpa project modules:

  • blogjpa-commons – entity classes and data access service interface
  • blogjpa-datasource – exposes javax.sql.DataSource service
  • blogjpa-store – data access service implementation with a test method
  • blogjpa-feature – OSGi features configuration, so we can install all at once. As well this file shows dependencies in the runtime environment

Note: the modules are named blogjpa because the example was created for this blog


blogjpa-commons
A module with the entity classes, the service interface and the persistence unit definition.
Person – an example entity

package com.apogado.blogjpa.commons;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

/**
 * example entity object
 * @author Gabriel Vince
 */
@Entity(name = "Person")
@NamedQuery(name = "Person.findById", query = "SELECT p FROM Person p WHERE = :id")
public class Person implements Serializable {

    public static final String QUERY_FIND_BY_ID = "Person.findById";

    private Integer id;
    private String name;
    private String address;

    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    public Integer getId() {
        return id;
    }

    public void setId(Integer id) { = id;
    }

    @Column(name = "personname")
    public String getName() {
        return name;
    }

    public void setName(String name) { = name;
    }

    public String getAddress() {
        return address;
    }

    public void setAddress(String address) {
        this.address = address;
    }
}

PersonService – an example service interface
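The original listing of the interface was lost here; a plausible minimal shape, assuming methods matching the Person entity above, would be:

```java
package com.apogado.blogjpa.commons;

// hypothetical shape - the original listing was not preserved
public interface PersonService {

    Person persist(Person person);

    Person findById(Integer id);
}
```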

Persistence unit definition:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0">
  <persistence-unit name="blogjpa" transaction-type="RESOURCE_LOCAL">
    <properties>
      <property name="openjpa.Log" value="DefaultLevel=INFO, Tool=INFO"/>
      <property name="openjpa.jdbc.DBDictionary" value="derby"/>
      <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
    </properties>
  </persistence-unit>
</persistence>


  • using the JNDI feature allows OpenJPA to find an OSGi service by JNDI lookup with the notation osgi:service/<interface>/<filter>. A data source service is exposed by the blogjpa-datasource module.
  • OpenJPA requires the entity classes to be enhanced, so META-INF/persistence.xml must come along with the entity classes. Personally I don't like the idea that data source references and database specific configuration (such as the dialect) should come with the data classes (in this case the Person entity class).
  • From the architectural point of view I still suggest creating a separate module containing the data classes and interfaces. I consider it very heavy to bundle everything into a single module, and the other way – splitting entity classes into definition and implementation modules – is too cumbersome, mainly for a very small project.
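So the persistence unit would reference the data source roughly like this sketch (the service filter is an assumption – it has to match however the blogjpa-datasource module registers its service):

```xml
<non-jta-data-source>
    osgi:service/javax.sql.DataSource/(
</non-jta-data-source>
```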


  • OpenJPA needs to enhance the entity classes. It can be done at build time as well as dynamically when the classes are loaded; from my experience it is more reliable to do it at build time.
  • By declaring the Meta-Persistence header in the manifest, Aries JPA exposes an EntityManagerFactory from this module.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="" 




                                <path id="enhance.path.ref">
                                    <fileset dir="${}">
                                        <include name="Person.class" />
                                    </fileset>
                                </path>
                                <pathconvert property="enhance.files" refid="enhance.path.ref" pathsep=" " />
                                <java classname="org.apache.openjpa.enhance.PCEnhancer">
                                    <arg line="-p persistence.xml" />
                                    <arg line="${enhance.files}" />
                                    <classpath>
                                        <path refid="maven.dependency.classpath" />
                                        <path refid="maven.compile.classpath" />
                                    </classpath>
                                </java>


blogjpa-store
Implementation of the service interface. Simple and straightforward:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="" xmlns:jpa="">
    <bean id="personServiceBean" class="" init-method="test">
        <jpa:unit property="entityManagerFactory" unitname="blogjpa" />
    </bean>
    <service interface="com.apogado.blogjpa.commons.PersonService" ref="personServiceBean"/>
</blueprint>


jpa:unit injects an EntityManagerFactory
jpa:context injects an EntityManager

Example Service implementation
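The implementation listing itself was not preserved in this post; a sketch consistent with the blueprint above and the log output below might look like this (the method body is an assumption, and the remaining PersonService methods are omitted):

```java
package com.apogado.blogjpa.store;

import java.util.UUID;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import com.apogado.blogjpa.commons.Person;
import com.apogado.blogjpa.commons.PersonService;

public class PersonServiceImpl implements PersonService {

    // injected by the <jpa:unit> element in the blueprint
    private EntityManagerFactory entityManagerFactory;

    public void setEntityManagerFactory(EntityManagerFactory entityManagerFactory) {
        this.entityManagerFactory = entityManagerFactory;
    }

    // init-method: persist one Person and read it back,
    // managing the local transaction ourselves (RESOURCE_LOCAL)
    public void test() {
        EntityManager em = entityManagerFactory.createEntityManager();
        try {
            em.getTransaction().begin();
            Person p = new Person();
            p.setName(UUID.randomUUID().toString());
            em.persist(p);
            em.getTransaction().commit();

            Person found = em.find(Person.class, p.getId());
        } finally {
            em.close();
        }
    }

    // ... PersonService methods omitted ...
}
```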


blogjpa-feature
This module defines a Karaf feature – the modules and their dependencies. Here I may deeply disappoint people doing things “lightly”, but as I wrote before, it is faster to use configured and tested features than to fight with dependencies.

In the resources there's a link to the exact dependency configuration, as well as configurations for other persistence managers.

<?xml version="1.0" encoding="utf-8"?>
<features xmlns="" name="ocm-features">
    <feature name="blogjpa" version="1.0.0-SNAPSHOT">

<!-- feature url: mvn:com.apogado.blogjpa/blogjpa-feature/1.0.0-SNAPSHOT/xml/features -->

        <!-- feature dependencies -->

        <!-- openjpa library dependencies -->

        <!-- install openjpa -->

        <!-- install db client -->

        <!-- application bundles -->

    </feature>
</features>

and pom.xml


Build the project – here we assume a common Maven repository between the build Maven and Apache Karaf. If the repositories are not the same, you need to configure the Maven repository in the Karaf environment (org.ops4j.pax.url.mvn.cfg).
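If the repositories differ, the local repository used by Karaf can be pointed at the build repository; the path below is just an example:

```properties
# etc/org.ops4j.pax.url.mvn.cfg
org.ops4j.pax.url.mvn.localRepository=/home/build/.m2/repository
```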

  1. start the database server (Derby's startNetworkServer) and create a database
  2. add a feature configuration
    features:addurl mvn:com.apogado.blogjpa/blogjpa-feature/1.0.0-SNAPSHOT/xml/features
  3. install feature
    features:install -c -v blogjpa
  4. check the log, it should say something like
    15:38:47,831 | INFO  | rint Extender: 1 | PersonServiceImpl                | 90 - - 1.0.0.SNAPSHOT | running
    15:38:47,831 | INFO  | rint Extender: 1 | PersonServiceImpl                | 90 - - 1.0.0.SNAPSHOT | entity manager factory available
    15:38:57,417 | INFO  | rint Extender: 1 | PersonServiceImpl                | 90 - - 1.0.0.SNAPSHOT | Created Person id 1
    15:38:57,447 | INFO  | rint Extender: 1 | PersonServiceImpl                | 90 - - 1.0.0.SNAPSHOT | Person persisted with id: 1
    15:38:57,597 | INFO  | rint Extender: 1 | PersonServiceImpl                | 90 - - 1.0.0.SNAPSHOT | Person found with name: a9f0ed64-afaa-4e3c-9c5d-72c94424cbc3
    15:38:57,597 | INFO  | rint Extender: 1 | PersonServiceImpl                | 90 - - 1.0.0.SNAPSHOT | Person found with name: a9f0ed64-afaa-4e3c-9c5d-72c94424cbc3


    example project:

Author works as a senior consultant for Apogado. Apogado is a vendor-independent consultancy company focusing on sustainable, future-proof ICT architecture and enterprise architecture.



Web Service CXF Interceptor replacing request data

The goal of this exercise is to build a web service replacing a value in the request with an arbitrary value. This can be handy to copy e.g. SOAP header or HTTP header data into the message payload and make the data accessible in the service bean. It is intended to be used with declarative web services (OSGi blueprint / Spring configuration).

Prerequisites – knowledge of OSGi blueprint, CXF web services

Using Karaf 2.2.2-fuse-08-15, CXF 2.4.2

Web Service invocation chain

CXF invocation chain


The main idea is to use an interceptor to intercept the incoming message and replace a value in the payload. For ease of use we will deserialize the request objects and try to use the POJO approach. This approach may have some overhead, but we consider it a reasonable price to pay for easy usage.


<?xml version="1.0" encoding="UTF-8"?>
<!-- namespaces and the jaxws:server attributes were reconstructed;
     the endpoint address and the serviceBean bean id are illustrative -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
        xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

    <bean id="serviceBean"
        class="com.apogado.test.interceptortest.InterceptorTestService" />
    <bean id="interceptor"
        class="com.apogado.test.interceptortest.TestInterceptor" />

    <jaxws:server id="InterceptorTestService"
            serviceBean="#serviceBean" address="/InterceptorTest">
        <jaxws:inInterceptors>
            <ref component-id="interceptor" />
        </jaxws:inInterceptors>
    </jaxws:server>

</blueprint>

package com.apogado.test.interceptortest;

import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.http.HttpServletRequest;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

/**
 * a test interceptor to alter payload data
 * @author Gabriel Vince
 */
public class TestInterceptor extends AbstractPhaseInterceptor<Message> {

    private static final Logger logger = Logger.getLogger(TestInterceptor.class.getName());

    public TestInterceptor() {
        // the PRE_INVOKE phase allows us to access the payload as objects and we
        // don't need to worry that some other interceptor will mess it up;
        // security should be already checked
        super(Phase.PRE_INVOKE);
    }

    /**
     * get the request object and alter data there
     * @param t the intercepted message
     * @throws Fault
     */
    public void handleMessage(Message t) throws Fault {
        try {
            HttpServletRequest req = (HttpServletRequest) t.get("HTTP.REQUEST");
            String queryString = req.getQueryString();
            if (queryString != null
                    && (queryString.contains("wsdl") || queryString.contains("WSDL"))
                    && req.getMethod().equals("GET")) {
                return; // let the schema request pass
            }
            logger.info("Test interceptor - processing data");

            // Content formats available: org.w3c.dom.Node, java.util.List, ...
            // it seems the only way to reach the payload as objects is the List class
            List list = t.getContent(List.class); // message parts
            // TDataRequest is a request object generated from WSDL and XSD
            // and used as a request message payload part
            TDataRequest requestData = (TDataRequest) list.get(0);
            requestData.setValue("An arbitrary value");
        } catch (Exception ex) {
            logger.log(Level.SEVERE, "test interceptor", ex);
        }
    }
}
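To illustrate the effect, a client request along these lines (element and namespace names are hypothetical, standing in for the schema from which TDataRequest was generated) would reach the service bean with the value element already overwritten:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:t="http://apogado.com/test/interceptortest">
    <soap:Body>
        <t:dataRequest>
            <!-- the client sends this... -->
            <t:value>original client value</t:value>
            <!-- ...but after PRE_INVOKE the bean sees "An arbitrary value" -->
        </t:dataRequest>
    </soap:Body>
</soap:Envelope>
```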

Project link:



Extracting text from email messages with JavaMail

This blog is focused on email processing – mostly how to extract the clear text from an email message. There is a lot of buzz (and possibly good use) around unstructured data processing – often referred to as BigData processing. In my case we have a pilot project aimed at automatic classification of documents and emails – in the abstract, of any textual content. For that purpose we need our server to "read" the received emails.

I am not very aware of historical protocols and formats, but today almost all emails are delivered as MIME messages – see MIME on Wikipedia. From a practical point of view, all delivered content has metadata describing what it is and how it should be read.

Just as an introduction – this is how to read email messages inside a J2EE container. We will ignore message counts, windowing, etc., and assume the reader is skilled in Java and J2EE:

Context ctx = new InitialContext();
Session mailSession = (javax.mail.Session) ctx.lookup("java:comp/env/mail/Session");
Store store = mailSession.getStore(IMAP_STORE);
store.connect();
Folder inbox = store.getFolder(INBOX_FOLDER);
inbox.open(Folder.READ_ONLY);

Message[] messages = inbox.getMessages();
// do something with the messages
for (Message m : messages) {
    this.content = EmailUtils.extractClearText(m, null);
}

About MIME parts – every message is either a multipart container or holds its content directly. Most current email clients send a multipart message, where one part is an HTML message and the other a plain text message for email clients which do not support (or have disabled) HTML messages. It means that a MimeMessage may (and will) contain several parts (BodyPart objects). In this blog we won't discuss all message types. For the purposes we have already stated (extracting the clear text from a human-created email sent by a default email client) we will assume there is a plain text part and alternatively an HTML part; other parts can be happily ignored. Let's find and read the text.
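To make this concrete, a multipart/alternative message as delivered looks roughly like this (headers abbreviated; addresses and the boundary value are made up):

```
From: alice@example.com
Subject: Hello
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="boundary42"

--boundary42
Content-Type: text/plain; charset="UTF-8"

Hello, this is the plain text version.

--boundary42
Content-Type: text/html; charset="UTF-8"

<html><body><p>Hello, this is the <b>HTML</b> version.</p></body></html>

--boundary42--
```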

Most messages are instances of the Multipart class. A Multipart message contains parts – BodyPart objects. To check what a part is, we will use the isMimeType() method.

Generally – if we find a text/plain part in the email message, we will use it; otherwise we will use the HTML MIME part. If the email was sent as a simple text message, we will use the text directly.

        if (message instanceof MimeMessage) {
            MimeMessage m = (MimeMessage) message;
            Object contentObject = m.getContent();
            if (contentObject instanceof Multipart) {
                BodyPart clearTextPart = null;
                BodyPart htmlTextPart = null;
                Multipart content = (Multipart) contentObject;
                int count = content.getCount();
                for (int i = 0; i < count; i++) {
                    BodyPart part = content.getBodyPart(i);
                    if (part.isMimeType("text/plain")) {
                        clearTextPart = part;
                    } else if (part.isMimeType("text/html")) {
                        htmlTextPart = part;
                    }
                }

                if (clearTextPart != null) {
                    result = (String) clearTextPart.getContent();
                } else if (htmlTextPart != null) {
                    String html = (String) htmlTextPart.getContent();
                    result = Jsoup.parse(html).text();
                }

            } else if (contentObject instanceof String) { // a simple text message
                result = (String) contentObject;
            } else { // not a mime message
                logger.log(Level.WARNING, "not a mime part or multipart {0}", message.toString());
                result = null;
            }
        }

As you see, for parsing the HTML we've used the Jsoup project – a simple and effective way to parse HTML and get rid of all the tags.
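The Jsoup call is essentially a one-liner. A minimal sketch (assuming the jsoup library is on the classpath; the sample HTML is made up) of turning an HTML part into clear text:

```java
import org.jsoup.Jsoup;

public class HtmlToText {
    public static void main(String[] args) {
        String html = "<html><body><p>Dear customer, your order has <b>shipped</b>.</p></body></html>";
        // parse() builds a Document from the HTML string; text() returns the
        // combined, whitespace-normalized text of all elements, tags removed
        String text = Jsoup.parse(html).text();
        System.out.println(text); // prints "Dear customer, your order has shipped."
    }
}
```

The same call handles malformed real-world HTML gracefully, which is exactly what you want when the input comes from arbitrary email clients.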

This blog serves as my personal notepad, but if anybody finds it useful, feel free to share, link or comment.

Author is a senior consultant at Apogado.
