WSO2 API Manager includes five main components: the Publisher, Store, Gateway, Traffic Manager, and Key Manager. In a stand-alone APIM setup, these components are deployed in a single server. However, in a typical production setup, they need to be deployed on separate servers for better performance. Installing and configuring each component (or a selected subset of components) on different servers is called a distributed setup.
Note: It is recommended to separate the worker and manager nodes in scenarios where you have multiple Gateway nodes. See Separating the Worker and Manager Nodes for information on why it is advantageous to separate the worker and manager nodes.
This topic includes the following sections.
Main components of a distributed setup
The following diagram illustrates the main components of an API Manager distributed deployment.
Figure: Main components of an APIM distributed setup
Let's take a look at each component in the above diagram. For more information on these components, see Architecture.
Component | Description |
---|---|
API Gateway | This component is responsible for securing, protecting, managing, and scaling API calls. The API Gateway is a simple API proxy that intercepts API requests and applies policies such as throttling and security checks. It is also instrumental in gathering API usage statistics. The Gateway uses a set of handlers for security validation and throttling purposes. Upon validation, it passes web service calls to the actual backend. If the request is a token request, the Gateway passes it directly to the Key Manager server to handle. |
API Store | This component provides a space for consumers to self-register, discover API functionality, subscribe to APIs, evaluate them, and interact with API publishers. Users can view existing APIs and create their own application by bundling multiple APIs together into one application. |
API Publisher | This component enables API providers to easily publish their APIs, share documentation, provision API keys, and gather feedback on API features, quality, and usage. You can create new APIs by pointing to the actual back end service and also define rate-limiting policies available for the API. |
API Key Manager Server | This component is responsible for all security and key-related operations. When an API call is sent to the Gateway, it calls the Key Manager server and verifies the validity of the token provided with the API call. If the Gateway receives a call requesting a fresh access token, it forwards the username, password, consumer key, and consumer secret obtained when originally subscribing to the API. All tokens used for validation are based on the OAuth 2.0 protocol, and all secure authorization of APIs is provided using the OAuth 2.0 standard for key management. The API Gateway supports API authentication with OAuth 2.0, and it enables IT organizations to enforce rate limits and throttling policies for APIs by consumer. The Login/Token API in the Gateway node should point to the token endpoint of the Key Manager node. In a stand-alone setup, the Key Manager endpoint is given as https://localhost:9443/oauth2endpoints/token; in a distributed setup, it should point to the Key Manager node instead. In a clustered environment, you use session affinity to ensure that requests from the same client always get routed to the same server. Session affinity is not mandatory when configuring a Key Manager cluster with a load balancer; however, authentication via session ID fails when session affinity is disabled in the load balancer. The Key Manager first tries to authenticate the request via the session ID, and if that fails, it falls back to basic authentication. |
API Traffic Manager | The Traffic Manager helps users to regulate API traffic, make APIs and applications available to consumers at different service levels, and secure APIs against security attacks. The Traffic Manager features a dynamic throttling engine to process throttling policies in real-time, including rate limiting of API requests. |
LB (load balancers) | The distributed deployment setup depicted above requires two load balancers. We set up the first load balancer, which is an instance of NGINX Plus, internally to manage the cluster. The second load balancer is set up externally to handle the requests sent to the clustered server nodes, and to provide failover and autoscaling. As the second load balancer, you can use an instance of Nginx Plus or any other third-party product. |
RDBMS (shared databases) | The distributed deployment setup depicted above shares a set of databases among the APIM components set up in separate server nodes. |
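As an illustration of how the Gateway is pointed at the Key Manager cluster rather than localhost, the following is a minimal sketch of the relevant Gateway-side configuration, assuming an APIM 2.x-style `repository/conf/api-manager.xml`; the hostname `km.example.com` and the admin credentials are placeholders, not values from this document:

```xml
<!-- repository/conf/api-manager.xml on the Gateway node (sketch only) -->
<APIKeyValidator>
    <!-- Point key validation at the Key Manager cluster / its load balancer -->
    <ServerURL>https://km.example.com:9443/services/</ServerURL>
    <Username>admin</Username>
    <Password>admin</Password>
</APIKeyValidator>
```

In a real deployment the URL would typically be the load balancer fronting the Key Manager nodes, and the credentials would be replaced with non-default ones.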
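To illustrate the load-balancer role described above, here is a sketch of an NGINX configuration fronting two clustered Key Manager nodes. The hostnames are placeholders; `ip_hash` provides session affinity in open-source NGINX, while NGINX Plus additionally offers the cookie-based `sticky` directive:

```nginx
# Sketch: load balancer for a two-node Key Manager cluster (hostnames are placeholders)
upstream keymanager {
    ip_hash;                          # session affinity: same client -> same node
    server km1.example.com:9443;
    server km2.example.com:9443;
}

server {
    listen 443 ssl;
    server_name km.example.com;
    ssl_certificate     /etc/nginx/ssl/km.crt;
    ssl_certificate_key /etc/nginx/ssl/km.key;

    location / {
        proxy_pass https://keymanager;
        proxy_set_header Host $host;
    }
}
```

A similar upstream block would be defined for the Gateway worker nodes behind the external load balancer.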
Message flows
The three main use cases of API Manager are API publishing, subscribing, and invoking. The message flow in each of these use cases is described below.
Publishing APIs
A user assigned the publisher role can publish APIs. This is done via the Publisher server. When an API is published in the API Publisher, it becomes available in the API Store. The API Gateway must also be updated with this API so that users can invoke it. Because we are using a clustered Gateway, all Gateway server nodes in the cluster are updated with the published API details, enabling any of these Gateway nodes to serve API calls that are received. When an API is published, it is also pushed to the registry database, so that it can be made available in the Store via the shared database.
Subscribing to APIs
A user with the subscriber role logs into the API Store and subscribes to an API. The user must then generate an access token to be able to invoke the API. When the subscriber requests to generate the token, a request is sent to the Key Manager server cluster. The token is then generated, and the access token details are displayed to the subscriber via the Store.
Invoking APIs
Subscribed users can invoke an API to which they have subscribed. When the API is invoked, the request is sent to the API Gateway server cluster. The Gateway server then forwards the request to the Key Manager server cluster for verification. Once the request is verified, the Gateway connects to the back-end implementation and obtains the response, which is sent back to the subscriber via the Gateway server.
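The token-generation and invocation flows above can be sketched from the client's side. The consumer key and secret below are placeholders standing in for the values obtained when subscribing; the example shows how the HTTP Basic authorization header for the token request is built from them, with the actual calls shown as comments (hosts and paths are placeholders too):

```shell
# Placeholder consumer key/secret obtained when subscribing to the API
CONSUMER_KEY="myKey"
CONSUMER_SECRET="mySecret"

# The token request authenticates with HTTP Basic auth built from
# base64("<consumer-key>:<consumer-secret>")
AUTH=$(printf '%s:%s' "$CONSUMER_KEY" "$CONSUMER_SECRET" | base64)
echo "Authorization: Basic $AUTH"

# A password-grant token request and a subsequent API invocation would then
# look like this (not executed here; hosts and paths are placeholders):
#   curl -k -d "grant_type=password&username=<user>&password=<pass>" \
#        -H "Authorization: Basic $AUTH" https://<gateway-host>:8243/token
#   curl -k -H "Authorization: Bearer <access-token>" \
#        https://<gateway-host>:8243/<api-context>/<version>/<resource>
```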
The diagram below summarizes these message flows where the Publisher is referred to as the publishing portal and the Store is referred to as the developer portal.
Figure: WSO2 API Manager message flows
WSO2 API Manager deployment patterns
Note the following:
- The following 5 patterns are the latest deployment patterns for WSO2 API Manager. We have NOT yet released the Puppet modules for these 5 new patterns. These deployment patterns were designed by reviewing and refining the 7 older API-M deployment patterns, in order to provide a simplified set of patterns that covers the most common production use cases.
- If you want to try out API-M deployment with Puppet, you need to use one of the older API-M deployment patterns. For more information, see Using Puppet Modules to Set up WSO2 API-M.
Latest deployment patterns
Pattern 1
Single node (all-in-one) deployment.
You can use this pattern when you are working with low throughput.
Pattern 2
Deployment with a separate Gateway and separate Key Manager.
You can use this pattern in high-throughput scenarios that require a shorter token lifespan.
Pattern 3
Fully distributed setup.
You can use this pattern to scale each layer independently and gain greater flexibility at each component.
Pattern 4
Internal and external (on-premise) API Management.
You can use this pattern when you require separate internal and external API management with separate Gateway instances.
Pattern 5
Internal and external (public and private cloud) API Management.
You can use this pattern when you wish to maintain a cloud deployment as an external API Gateway layer.
Older deployment patterns
Pattern 0
Single node deployment.
Figure: All-in-one instance
This pattern consists of a stand-alone API-M setup with a single node deployment. This pattern uses the embedded H2 databases.
Pattern 1
Single node deployment.
Figure: All-in-one instance
This pattern consists of a stand-alone WSO2 API-M setup with a single node deployment, configured to use an external RDBMS (e.g., MySQL). The only difference between pattern 0 and pattern 1 is that pattern 0 uses embedded H2 databases while pattern 1 is configured to use an external RDBMS.
Pattern 2
Single node deployment, which has all WSO2 API-M components in one instance, with Analytics.
Figure: All-in-one instance with analytics
This pattern consists of a stand-alone WSO2 API-M setup with a single node deployment and a single wso2am-analytics server instance. This pattern uses external MySQL databases.
Pattern 3
Gateway worker/manager separation.
This pattern consists of a fully distributed WSO2 API-M setup (including a Gateway cluster of one manager and one worker) with a single wso2am-analytics server instance. This pattern uses external MySQL databases.
Pattern 4
This pattern consists of a fully distributed API-M setup including two Gateway clusters, each with one manager and one worker, and a single wso2am-analytics server instance. You can host the Gateway environments in any preferred environment (e.g., a local-area network (LAN) or a demilitarized zone (DMZ)).
Pattern 5
Gateway worker/manager separation. Gateway worker and Key Manager in the same node.
This pattern consists of a distributed WSO2 API-M setup including a Gateway cluster of one manager and one worker, where the Gateway worker is merged with the Key Manager. It also includes a single wso2am-analytics server instance and uses external MySQL databases.
Pattern 6
Gateway worker/manager separation. Store in the same node as the Publisher.
This pattern consists of a distributed WSO2 API-M setup (including a Gateway cluster of one manager and one worker) in which the Publisher is merged with the Store. It also includes a single wso2am-analytics server instance and uses external MySQL databases.
Pattern 7
WSO2 Identity Server acts as a Key Manager node for the WSO2 API Manager.
This pattern consists of a stand-alone WSO2 APIM setup with a single node deployment. The pattern uses external MySQL databases. The only difference between this pattern and pattern 1 is that this pattern uses WSO2 Identity Server as the Key Manager.
Clustering Gateways and Key Managers with key caching
For key validation, the Gateway can usually handle 3,000 transactions per second, whereas the Key Manager can only handle 500 transactions per second. To improve performance, the key cache is enabled on the Gateway by default, which allows the system to handle 3,000 transactions per second. However, if you need better security, you can enable the cache on the Key Manager instead. Note the following about clustering with key caching:
- When the cache is enabled at the Gateway, you can have two Gateways per Key Manager.
- When the cache is enabled at the Key Manager and disabled at the Gateway, you can have only one Gateway per Key Manager.
- If both caches are disabled (not recommended), even with only one Gateway per Key Manager, the system may not be able to handle the load, as the Key Manager will only be able to handle 500 transactions per second.
For more information, see Key cache in the WSO2 API Manager documentation.
Traffic Manager scalable deployment patterns
See the article on Scalable Traffic Manager deployment patterns part 1 and part 2.