Profiles
Depending on your environment, you can choose one of the profiles in {starter_name}. There are two profiles, as follows:
eureka Profile
If your application is deployed in the Eureka environment, use the eureka profile when deploying the agent application.
Under the eureka profile, the agent nodes can be scaled horizontally as required.
An agent application can act as the master or as a slave.
If there are multiple agent nodes in a region, one node should be deployed as the master and the other nodes should be deployed as slaves.
The master node is responsible for updating the token ranges based on the number of running nodes.
Agent-service as Leader and Follower
As mentioned above, after adding {starter_name} as a dependency, the application should be configured as a leader or as a follower.
Let’s see what the configuration looks like for each.
-
Master Instance configuration:
stacksaga.agent.eureka.instance-type=master (1)
eureka.instance.instance-id=order-service-agent-us-east-master (2)

(1) Set the instance-type as master to run the node as the master.
(2) Set the Eureka instance ID as a fixed (static) one. It is recommended to use the following format for the leader instance ID.

Format: ${service-name}-agent-${region}-leader
Using the service name in the leader instance ID helps to avoid collisions if you use the same event-store for multiple services, because the followers identify the leader instance in the database by the leader instance ID. Adding the region to the leader instance ID guarantees region-based uniqueness.
-
Slave Instance configuration:
stacksaga.agent.eureka.instance-type=slave (1)
stacksaga.agent.eureka.follower.leader-id=order-service-agent-us-east-master (2)
eureka.instance.instance-id=${spring.application.name}:${random.uuid} (3)

(1) Set the instance-type as slave.
(2) Set the master’s static ID. This value must exactly match the master’s instance ID configured on the leader node in the same region.
(3) Set the instance-id to a random ID.
Token range allocation for nodes
All agent applications are registered with the Eureka server in the eureka environment, so the leader service has the details of all the other agent instances through the Eureka server. The leader periodically checks for instance changes based on the local Eureka service-registry cache and updates the database with the relevant token range for each instance. The position of each instance is sorted based on the instance’s start time. For instance, if there are five StackSaga-agent instances in the cluster, the token range is divided with the help of the Murmur3 partition algorithm as follows:
Steps:
| 1 | The leader node uses the Eureka client’s cache to get the list of all instances in the region. (It can be a single Eureka server or peers.) |
| 2 | The leader node periodically calculates the range for each instance based on their timestamps, and the updated ranges are sent to each node. |
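The range calculation in the steps above can be sketched as follows. This is an illustrative sketch only, not StackSaga’s actual implementation: it assumes the full Murmur3 token range (-2^63 to 2^63 - 1, as used by Murmur3-based partitioners) is divided into contiguous, equal slices, one per instance, in start-time order.

```python
# Hypothetical sketch: evenly divide the Murmur3 token range among N agent
# instances. The function name and range-splitting strategy are assumptions.

MIN_TOKEN = -2**63
MAX_TOKEN = 2**63 - 1

def split_token_range(instance_count):
    """Return a list of (start, end) token ranges, one per instance."""
    total = MAX_TOKEN - MIN_TOKEN + 1        # 2**64 tokens in total
    size = total // instance_count
    ranges = []
    start = MIN_TOKEN
    for i in range(instance_count):
        # The last instance absorbs any remainder from the integer division.
        end = MAX_TOKEN if i == instance_count - 1 else start + size - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# Example: five agent instances, as in the text above.
for idx, (start, end) in enumerate(split_token_range(5)):
    print(f"instance {idx}: {start} .. {end}")
```

The ranges are contiguous and cover the whole token space, so every transaction token maps to exactly one agent instance.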
k8s Profile
When the StackSaga agent is deployed in the Kubernetes environment, the deployment architecture is slightly different from the Eureka environment. In Kubernetes, the nodes are deployed as a StatefulSet. The reason for using a StatefulSet is that each node calculates its own token range based on its position (the node’s ordinal index) and the total number of nodes. All nodes continuously monitor their StatefulSet’s changes in real time. If an instance goes down or a new one is added, all nodes are notified of the update in real time, and each node then updates its token range accordingly.
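The self-calculation described above can be sketched like this. It is an assumed illustration, not StackSaga internals: a StatefulSet pod’s name ends with its ordinal index (e.g. `order-agent-2`), so each pod can derive its own slice of the token range from that index and the current replica count.

```python
# Hypothetical sketch: a pod computes its own token range from its
# StatefulSet ordinal index. Function names and hostnames are assumptions.

MIN_TOKEN = -2**63
MAX_TOKEN = 2**63 - 1

def ordinal_from_hostname(hostname):
    # StatefulSet pod names have the form "<statefulset-name>-<ordinal>".
    return int(hostname.rsplit("-", 1)[1])

def my_token_range(hostname, replicas):
    idx = ordinal_from_hostname(hostname)
    total = MAX_TOKEN - MIN_TOKEN + 1
    size = total // replicas
    start = MIN_TOKEN + idx * size
    # The highest-ordinal pod absorbs the division remainder.
    end = MAX_TOKEN if idx == replicas - 1 else start + size - 1
    return start, end

# Example: pod "order-agent-1" in a 3-replica StatefulSet.
print(my_token_range("order-agent-1", 3))
```

Because every pod applies the same deterministic formula, no leader election is needed in this environment: a scale event only changes `replicas`, and every pod recomputes its slice independently yet consistently.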
Deploy stacksaga-agent-{starter_name} in the Kubernetes environment.
First, you have to create a service account, because the agent service accesses the Kubernetes API under the k8s profile.
Then create a role and bind it to the created service account as follows.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: stacksaga-agent-service-account # the name of the service account
  namespace: default # the namespace the application is deployed in
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stacksaga-agent-access
rules:
  # Grant read access to pods
  - apiGroups: [ "" ]
    resources: [ "pods" ]
    verbs: [ "get", "list", "watch" ]
  # Grant access to watch StatefulSets
  - apiGroups: [ "apps" ]
    resources: [ "statefulsets" ]
    verbs: [ "watch", "get", "list" ]
  # Grant access to nodes
  - apiGroups: [ "" ]
    resources: [ "nodes" ]
    verbs: [ "get", "list" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: stacksaga-agent-access-binding
subjects:
  - kind: ServiceAccount
    name: stacksaga-agent-service-account
    namespace: default
roleRef:
  kind: ClusterRole
  name: stacksaga-agent-access
  apiGroup: rbac.authorization.k8s.io
Create a StatefulSet, together with its headless Service, to deploy the agent service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: your-app
spec:
  serviceName: "your-app"
  replicas: 3
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      serviceAccountName: stacksaga-agent-service-account # assign the service account
      containers:
        - name: your-app-container
          image: your-app-image:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: your-app
spec:
  clusterIP: None # headless service, as required by the StatefulSet
  selector:
    app: your-app
  ports:
    - port: 8080
      name: http