This site contains the documentation that is relevant to older WSO2 product versions and offerings.
For the latest WSO2 documentation, visit https://wso2.com/documentation/.
API Manager Clustering Deployment Patterns
WSO2 API Manager includes four main components: the Publisher, Store, Gateway, and Key Manager. In a stand-alone APIM setup, these components are deployed in a single server. However, in a typical production setup, they need to be deployed on separate servers for better performance. Installing and configuring each component, or a selected subset of components, on different servers is called a distributed setup.
Note: It is recommended to separate the worker and manager nodes in scenarios where you have multiple Gateway nodes. See Clustering the Gateway for more information on how to configure this. Also see here for information on why it is advantageous to separate the worker and manager nodes.
The following topics introduce the main components of API Manager and the different ways to set them up on separate servers:
Main components of a distributed setup
The following diagram illustrates the main components of an API Manager distributed deployment.
Figure: Main components of an APIM distributed setup
Let's take a look at each component in the above diagram. For more information on these components, see Architecture.
In a clustered environment, you use Session Affinity to ensure that requests from the same client are always routed to the same server.
It is mandatory to set up Session Affinity in the load balancers that front the Publisher and Store clusters. It is optional in the load balancer (if any) that fronts a Key Manager cluster, because the Key Manager first tries to authenticate a request via the session ID and, if that fails, falls back to basic authentication.
Note, however, that authentication via the session ID fails when Session Affinity is disabled in the load balancer.
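As an illustration, Session Affinity in an NGINX load balancer fronting a two-node Publisher cluster might be sketched as follows. The host names and ports are hypothetical; open-source NGINX provides the `ip_hash` directive shown here, while NGINX Plus additionally offers cookie-based stickiness via the `sticky` directive.

```
# Hypothetical upstream for a two-node Publisher cluster.
upstream publisher_cluster {
    # ip_hash routes requests from the same client IP
    # to the same back-end node (session affinity).
    ip_hash;
    server publisher1.example.com:9443;
    server publisher2.example.com:9443;
}

server {
    listen 443 ssl;
    server_name publisher.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    location / {
        proxy_pass https://publisher_cluster;
    }
}
```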
Component | Description |
---|---|
API Gateway | This component is responsible for securing, protecting, managing, and scaling API calls. The API Gateway is a simple API proxy that intercepts API requests and applies policies such as throttling and security checks. It is also instrumental in gathering API usage statistics. The Gateway uses a set of handlers for security validation and throttling purposes. Upon validation, it passes Web service calls to the actual back end. If the request is a token request, the Gateway passes it directly to the Key Manager Server to handle. |
API Store | This component provides a space for consumers to self-register, discover API functionality, subscribe to APIs, evaluate them, and interact with API publishers. Users can view existing APIs and create their own applications by bundling multiple APIs together into one application. |
API Publisher | This component enables API providers to easily publish their APIs, share documentation, provision API keys, and gather feedback on API features, quality, and usage. You can create new APIs by pointing to the actual back-end service and also define the rate-limiting policies available for the API. |
API Key Manager Server | This component is responsible for all security and key-related operations. When an API call is sent to the Gateway, the Gateway calls the Key Manager Server and verifies the validity of the token provided with the API call. If the Gateway receives a call requesting a fresh access token, it forwards the username, password, consumer key, and consumer secret obtained when originally subscribing to the API. All tokens used for validation are based on the OAuth 2.0 protocol, and all secure authorization of APIs is provided using the OAuth 2.0 standard for key management. The API Gateway supports API authentication with OAuth 2.0, and it enables IT organizations to enforce rate limits and throttling policies for APIs by consumer. The Login/Token API in the Gateway node should point to the token endpoint of the Key Manager node. By default, it is given as https://localhost:9443/oauth2endpoints/token. In a distributed setup, it should be changed to point to the token endpoint of the Key Manager node. |
LB (load balancers) | The distributed deployment setup depicted above requires two load balancers. The first load balancer, an instance of NGINX Plus, is set up internally to manage the cluster. The second load balancer is set up externally to handle the requests sent to the clustered server nodes, and to provide failover and autoscaling. As the second load balancer, you can use an instance of NGINX Plus or any other third-party product. |
RDBMS (shared databases) | The distributed deployment setup depicted above shares a number of databases among the APIM components set up in separate server nodes. |
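For example, the external load balancer's configuration for a clustered Gateway might look like the following NGINX sketch. The host names are hypothetical, and the `max_fails`/`fail_timeout` parameters provide simple passive failover by marking an unresponsive node as unavailable.

```
upstream gateway_workers {
    # Hypothetical Gateway worker nodes; a node is considered
    # unavailable after 3 failed attempts within 30 seconds.
    server gw1.example.com:8243 max_fails=3 fail_timeout=30s;
    server gw2.example.com:8243 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    server_name api.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    location / {
        proxy_pass https://gateway_workers;
        proxy_set_header Host $host;
    }
}
```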
Message flows
The three main use cases of API Manager are API publishing, subscribing, and invoking. The message flow in each of these use cases is described below.
Publishing APIs
A user assigned the publisher role has the capability to publish APIs. This is done via the Publisher server. When an API is published in the API Publisher, the API Gateway must be updated with this API so that users can invoke it. Because we are using a clustered Gateway, all Gateway server nodes in the cluster are updated with the published API details, enabling any of these Gateway nodes to serve API calls that are received. When an API is published, it is also pushed to the registry database, so that it can be made available on the Store via the shared database.

Subscribing to APIs
A user with the subscriber role logs into the API Store and subscribes to an API. The user must then generate an access token to be able to invoke the API. When the subscriber requests to generate the token, a request is sent to the Key Manager Server cluster. The token is then generated, and the access token details are displayed to the subscriber via the Store.

Invoking APIs
Subscribed users can invoke an API to which they have subscribed. When the API is invoked, the request is sent to the API Gateway server cluster. The Gateway server then forwards the request to the Key Manager server cluster for verification. Once the request is verified, the Gateway connects to the back-end implementation and obtains the response, which is sent back to the subscriber via the Gateway server.
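The token generation and invocation steps above can be sketched with curl as follows. All host names, credentials, and the API context are hypothetical placeholders, so the commands are printed rather than executed; the sketch assumes the OAuth 2.0 password grant type.

```shell
# --- Step 1: request an access token from the Key Manager ---
# Hypothetical consumer key/secret obtained when subscribing to the API.
CONSUMER_KEY="myConsumerKey"
CONSUMER_SECRET="myConsumerSecret"
TOKEN_ENDPOINT="https://km.example.com:9443/oauth2endpoints/token"

# OAuth 2.0 clients authenticate with HTTP Basic auth: base64(key:secret).
BASIC_AUTH=$(printf '%s:%s' "$CONSUMER_KEY" "$CONSUMER_SECRET" | base64)
echo "curl -k -d 'grant_type=password&username=subscriber&password=secret'" \
     "-H 'Authorization: Basic $BASIC_AUTH' $TOKEN_ENDPOINT"

# --- Step 2: invoke the API through the Gateway with the token ---
# Hypothetical access token and Gateway URL (API context and version).
ACCESS_TOKEN="a1b2c3d4e5"
GATEWAY_URL="https://gw.example.com:8243/pizzashack/1.0.0/menu"
echo "curl -k -H 'Authorization: Bearer $ACCESS_TOKEN' $GATEWAY_URL"
```

The Gateway validates the Bearer token against the Key Manager cluster before forwarding the request to the back end, as described above.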
The diagram below summarizes these message flows.
Distributed deployment patterns
We can identify several distributed deployment patterns involving the API Manager components discussed above. The most commonly used of these patterns are as follows.
Store and Publisher components in a single server node
The diagram below depicts a minimum deployment of API Manager component nodes. This pattern is for an internal store, so the Store and Publisher components are deployed on a single server node.
Figure: Store and Publisher components in a single server node
Minimum cluster with an external store
This pattern is similar to the previous one, except that the Store component node is deployed on an external network. The diagram below depicts a minimum deployment of the store.
Figure: Store component on an external network
All components in a clustered, internal setup
This pattern extends the individual component nodes by deploying each of them in a cluster for high availability. Each component can be scaled out to any number of server nodes based on the deployment load. This pattern is based on a complete internal deployment, where all API Manager component nodes are deployed internally. The diagram below depicts a minimum deployment of the components.
Figure: All components in an internal cluster
All components in a clustered setup with an external store
This pattern is similar to the one above and also provides high availability, but the Store component is deployed on an external network. The diagram below depicts a minimum deployment of the components.
Figure: All components except the Store in an internal cluster
See Clustering the API Manager for information on how to configure these deployment patterns.