
WSO2 Products - How User Stores work

Today we keep our users and profiles in several forms: sometimes in an LDAP server, sometimes in Active Directory (AD), sometimes in databases, and so on. WSO2 products are written so that any of these formats can be supported. If someone has their own way of storing users, they can easily plug it into WSO2 products by writing a custom user store. In this post I will explain how these user stores work and the other components connected to them.

When we discuss user management in the WSO2 world, there are several key components:
  1. User Store Manager
  2. Authorization Manager
  3. Tenant Manager
In simple user management we need to authorize a user for some action / permission. Normally we group these actions / permissions and assign the resulting groups / roles to users. So there are two kinds of mappings to consider:
  1. User to Role Mapping
  2. Role to Permission Mapping
User to Role mapping is managed by the user store implementation, and Role to Permission mapping is managed by the authorization manager implementation. Both are configured in the configuration file at [1].
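Conceptually, the two mappings above can be sketched with a tiny in-memory model. This is purely illustrative: the class and method names below are invented for the sketch and are not WSO2 APIs.

```java
import java.util.*;

// Illustrative sketch of the two mappings WSO2 keeps:
// user -> roles (user store manager) and role -> permissions (authorization manager).
public class RealmSketch {
    private final Map<String, Set<String>> userRoles = new HashMap<>();
    private final Map<String, Set<String>> rolePermissions = new HashMap<>();

    public void addUserToRole(String user, String role) {
        userRoles.computeIfAbsent(user, k -> new HashSet<>()).add(role);
    }

    public void addRolePermission(String role, String permission) {
        rolePermissions.computeIfAbsent(role, k -> new HashSet<>()).add(permission);
    }

    // A user is authorized if any of the user's roles carries the permission.
    public boolean isAuthorized(String user, String permission) {
        for (String role : userRoles.getOrDefault(user, Collections.emptySet())) {
            if (rolePermissions.getOrDefault(role, Collections.emptySet()).contains(permission)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        RealmSketch realm = new RealmSketch();
        realm.addUserToRole("alice", "admin");
        realm.addRolePermission("admin", "/permission/admin/manage");
        System.out.println(realm.isAuthorized("alice", "/permission/admin/manage"));
        System.out.println(realm.isAuthorized("bob", "/permission/admin/manage"));
    }
}
```

The point of the split is that the two maps can live in different stores: in a real deployment the first map might sit in LDAP while the second sits in a relational database.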

The Tenant Manager comes into play when multi-tenancy is considered. It is configured under [2]. Let's discuss this later.

User Management


By default, WSO2 products (except WSO2 Identity Server) store everything in a database, using [3] as the user store manager implementation. WSO2 Identity Server ships with an internal LDAP, and users are stored in that LDAP; there it uses [4] as the User Store Manager implementation. By default, all WSO2 servers use the database to store the Role to Permission mapping, with [5] as the authorization manager implementation.
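For example, in WSO2 Identity Server the defaults in user-mgt.xml [1] look roughly like this. This is a trimmed, illustrative fragment; the exact property names and values vary by product and version:

```xml
<UserManager>
    <Realm>
        <!-- Default user store manager in Identity Server: the embedded LDAP [4] -->
        <UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager">
            <Property name="ConnectionURL">ldap://localhost:10389</Property>
            <!-- ... further LDAP connection and schema properties ... -->
        </UserStoreManager>
        <!-- Role-to-permission mapping kept in the database [5] -->
        <AuthorizationManager class="org.wso2.carbon.user.core.authorization.JDBCAuthorizationManager"/>
    </Realm>
</UserManager>
```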

I will explain the WSO2 Identity Server case since it contains most of the elements. Since it has an LDAP user store, all users, all roles, and all the user-to-role mappings in the system are saved in the LDAP. The User Store Manager implementation is based on the interface [6]. WSO2 products also contain an abstract user store implementation [7], in which most of the common logic is already done and extension points are provided for plugging in external implementations. It is always recommended to use [7] as the base when writing a user store manager. All users, roles, and user-to-role mappings are managed through [4], which keeps them in LDAP, and all Role to Permission mappings are persisted in the database and handled via [5].
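Once you have written a custom user store manager extending [7], plugging it in is a matter of pointing the class attribute in user-mgt.xml [1] at your implementation. The class name and property below are hypothetical placeholders for your own code:

```xml
<!-- Swap the default implementation for your own custom user store manager -->
<UserStoreManager class="com.example.CustomUserStoreManager">
    <!-- Properties your implementation reads at startup -->
    <Property name="ReadOnly">false</Property>
</UserStoreManager>
```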

Figure 1 : User Stores and Permission Stores

Figure 1 shows the relationships among users, roles, and permissions, where each is stored, and which component handles them.

Multi tenancy

Let's get into multi-tenancy now. Some of you may already know what multi-tenancy is; for the others: in multi-tenancy we create a space (a tenant) which is isolated from everything else, so that nobody outside the tenant even knows of its existence.

WSO2 products support completely isolated tenants out of the box. Each tenant has its own artifact space, registry space, and user store (we call this the user realm). Figure 2 illustrates this.

Figure 2 : Each Tenant having their own user store

Since there can be many tenants in the system, WSO2 products do not load all tenants into memory. A tenant is loaded into memory when it becomes active, and when it has been idle for some time it gets unloaded. When a tenant is loaded, the registry, user realm, and artifacts belonging to it are loaded as well.

The user realm contains all the users, roles, permissions, and their mappings. The realm gets loaded via the implementation of [8], which is mentioned in [2]. So by providing your own implementation you can plug your tenant structure into WSO2 products.
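The wiring in tenant-mgt.xml [2] looks roughly like this. This is an illustrative fragment only: the exact schema and the tenant manager class depend on the product and version, and the realm config builder class shown is a placeholder for your own implementation of [8]:

```xml
<Tenant>
    <TenantManager class="org.wso2.carbon.user.core.tenant.JDBCTenantManager"/>
    <!-- Plug your own implementation of [8] here to control how each
         tenant's realm configuration is built -->
    <MultiTenantRealmConfigBuilder class="com.example.CustomRealmConfigBuilder"/>
</Tenant>
```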

In the next post I will explain how to plug a custom LDAP structure into WSO2 products.

[1] $CARBON_HOME/repository/conf/user-mgt.xml
[2] $CARBON_HOME/repository/conf/tenant-mgt.xml
[3] org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager
[4] org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager
[5] org.wso2.carbon.user.core.authorization.JDBCAuthorizationManager
[6] UserStoreManager.java
[7] AbstractUserStoreManager.java
[8] org.wso2.carbon.user.core.config.multitenancy.MultiTenantRealmConfigBuilder

