
WSO2 AppFactory - Using ESB Apptype

The next version of WSO2 AppFactory (2.2.0) introduces a new apptype: the ESB apptype. With it, users can develop WSO2 ESB Capps (CAR files) within WSO2 AppFactory. In this article I will give you a set of guidelines to follow when developing a Capp using WSO2 AppFactory.

The ESB apptype is the first multi-module apptype supported by WSO2 AppFactory. The sample project it generates contains four modules, listed below; a rough sketch of the corresponding parent POM follows the list.

  1. Resources module
  2. Resources CAR module
  3. Synapse Config (Proxy Service) module
  4. Main Car module
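
To make the multi-module layout concrete, here is a minimal sketch of how the parent pom.xml of such a project could list the four modules. The application ID (fooapp), group ID, and module names are hypothetical and are not necessarily the exact names AppFactory generates.

<!-- Hypothetical parent pom.xml for an application with ID "fooapp".
     All names are illustrative only. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example.fooapp</groupId>
  <artifactId>fooapp-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <modules>
    <module>fooapp-resources</module>      <!-- Resources module (registry resources, endpoints) -->
    <module>fooapp-resources-car</module>  <!-- Resources CAR module -->
    <module>fooapp-synapse-config</module> <!-- Synapse config (proxy service) module -->
    <module>fooapp</module>                <!-- Main CAR module; artifact ID matches the application ID -->
  </modules>
</project>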

Development

There are several rules that developers should follow when developing an ESB type application:

  • Developers can add any number of modules to the project, but there should always be exactly two CAR modules: the resources CAR and the main CAR, which contains all the synapse configs.
  • All synapse config names should contain the version number, for example foosequence-1.0.0, and the synapse config file name should contain the version number in the same manner (see the naming sketch after this list).
  • All module names should start with the application ID. This rule prevents artifact conflicts between applications: if two applications contained artifacts with the same name, they could clash.
  • The main CAR module's artifact ID should be the same as the application ID.
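
As a hedged illustration of the naming rules, assuming an application ID of fooapp, a versioned sequence and the file that holds it could look like the following. The module path, sequence name, and endpoint key are all illustrative.

<!-- File: fooapp-synapse-config/src/main/synapse-config/sequences/foosequence-1.0.0.xml
     Both the artifact name and the file name carry the version, and the module
     containing it starts with the application ID (fooapp). Names are hypothetical. -->
<sequence xmlns="http://ws.apache.org/ns/synapse" name="foosequence-1.0.0">
  <log level="full"/>
  <send>
    <endpoint key="fooapp-endpoint-1.0.0"/>
  </send>
</sequence>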

It is recommended to use WSO2 Developer Studio to develop a WSO2 AppFactory ESB type application. Developer Studio validates the project structure and helps the developer follow the rules above. If someone edits the project by some other method, the required structure is checked before the commit is accepted, and the commit is accepted or rejected accordingly.

Lifecycle Management and Resources Management

When the ESB application is promoted, it will still use the development endpoints and resources defined in the Resources CAR module. The users in the next stage (QA / DevOps) therefore need to update this Resources CAR, so there is a UI to upload a Resources CAR for ESB applications. QAs and DevOps check out the code from the source code location shown on the application home page, edit the registry resources and endpoints to match their own environment, build the project, and upload their Resources CAR. Once it is deployed, the main CAR switches to the new endpoints. A minimal sketch of such an endpoint resource is shown below.
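
As a rough sketch, reusing the hypothetical names from earlier, an endpoint definition kept in the Resources module could look like this. A QA or DevOps user would point the address at their own stage's backend before rebuilding the Resources CAR and uploading it through the UI.

<!-- File: fooapp-resources/src/main/resources/endpoints/fooapp-endpoint-1.0.0.xml
     Hypothetical endpoint resource; the QA or DevOps user edits the address
     below, rebuilds the Resources CAR, and uploads it for their stage. -->
<endpoint xmlns="http://ws.apache.org/ns/synapse" name="fooapp-endpoint-1.0.0">
  <address uri="http://dev.backend.example.com/services/FooService"/>
</endpoint>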

