Creek uses different types of descriptors to allow users to define metadata for their system components and resources.
Creek defines two types of component: aggregates and services. The former is an abstraction, with a defined public API, around one or more of the latter. Both share a common base type.
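The shared base type can be pictured as a small interface hierarchy. The sketch below is illustrative only: the type and descriptor names are simplified stand-ins, and Creek's real descriptor interfaces carry more methods than shown here.

```java
// Illustrative sketch only: simplified stand-ins for Creek's descriptor
// interfaces, which carry more methods than shown here.
interface ComponentDescriptor {          // common base type
    String name();
}

interface AggregateDescriptor extends ComponentDescriptor {}  // public API of a group of services

interface ServiceDescriptor extends ComponentDescriptor {}    // a single deployable service

final class InventoryAggregateDescriptor implements AggregateDescriptor {
    @Override
    public String name() { return "inventory"; }
}

final class StockServiceDescriptor implements ServiceDescriptor {
    @Override
    public String name() { return "stock-service"; }
}

class DescriptorDemo {
    public static void main(final String[] args) {
        // Both kinds of descriptor can be handled uniformly via the base type:
        for (final ComponentDescriptor c : new ComponentDescriptor[] {
                new InventoryAggregateDescriptor(), new StockServiceDescriptor()}) {
            System.out.println(c.name());
        }
    }
}
```

Because both descriptor kinds extend the same base, tooling such as the system tests can discover and process them uniformly.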
Both kinds of component descriptor need registering so that they are discoverable by Creek’s system tests. The aggregate template repository handles this registration for you. If you are not using the template repository, the Kafka Streams aggregate API tutorial covers how to register the descriptors.
A resource descriptor exposed by a service may also be exposed by the service’s aggregate descriptor, in which case it describes a public resource exposed as part of the aggregate’s API. Alternatively, it may be defined inline by the service, in which case it describes a resource internal to the aggregate.
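The distinction can be sketched as follows, using hypothetical simplified types rather than the real Creek API: a resource shared with the aggregate descriptor is public, while one the service defines inline is internal.

```java
// Illustrative sketch: hypothetical simplified types, not the real Creek API.
interface ResourceDescriptor {
    String id();
}

record Topic(String id) implements ResourceDescriptor {}

// A resource defined on the aggregate descriptor is part of the
// aggregate's public API:
final class OrdersAggregateDescriptor {
    static final Topic ORDER_EVENTS = new Topic("orders.order-events");
}

final class OrderServiceDescriptor {
    // Also exposed by the aggregate descriptor => a public resource:
    static final Topic OUTPUT = OrdersAggregateDescriptor.ORDER_EVENTS;

    // Defined inline by the service => internal to the aggregate:
    static final Topic CHANGELOG = new Topic("orders.order-service.changelog");
}
```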
ProTip: The best way to get your head around component and resource descriptors is an example. Check out the basic Kafka Streams tutorial to see them in action.
When a microservice starts up, it initialises Creek by passing its service descriptor to Creek, receiving a CreekContext in return.
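Schematically, start-up looks like the following. This is a self-contained sketch: the CreekServices, CreekContext and descriptor types here are simplified stand-ins for the real Creek classes, showing only the shape of the call.

```java
// Simplified stand-ins for Creek's real types: shapes only, not the real API.
interface ServiceDescriptor {
    String name();
}

final class MyServiceDescriptor implements ServiceDescriptor {
    @Override
    public String name() { return "my-service"; }
}

final class CreekContext {
    private final ServiceDescriptor descriptor;

    CreekContext(final ServiceDescriptor descriptor) { this.descriptor = descriptor; }

    ServiceDescriptor descriptor() { return descriptor; }
}

final class CreekServices {
    // Descriptor in, initialised context out:
    static CreekContext context(final ServiceDescriptor descriptor) {
        return new CreekContext(descriptor);
    }
}

class ServiceMain {
    public static void main(final String[] args) {
        // On start-up the service hands Creek its descriptor:
        final CreekContext ctx = CreekServices.context(new MyServiceDescriptor());
        System.out.println("Creek initialised for " + ctx.descriptor().name());
    }
}
```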
The Creek system tests discover component descriptors on the class path, allowing them to start services, determine which third-party services are needed, such as Kafka, and work with external resources, such as Kafka topics.
A Creek ‘aggregate’ is simply the public API of a logical grouping of services that together provide some business function, e.g. inventory tracking, customer data, etc. The aggregate provides a higher-level abstraction, encapsulating the components it contains.
In Domain-Driven Design nomenclature this would be known as a Bounded Context.
The aggregate descriptor defines the aggregate’s name and its public API. The public API is defined as the set of resource descriptors detailing the resources that services in other aggregates are allowed to access, e.g. an aggregate may define output topics that others can consume.
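Conceptually, an aggregate descriptor boils down to a name plus a set of resource descriptors. A minimal sketch, again using hypothetical simplified types rather than the real Creek API:

```java
import java.util.List;

// Illustrative sketch with hypothetical simplified types.
interface ResourceDescriptor {
    String id();
}

record OutputTopic(String id) implements ResourceDescriptor {}

// The aggregate's name, plus the resources services in other aggregates
// are allowed to access:
final class CustomerDataAggregateDescriptor {
    String name() { return "customer-data"; }

    // e.g. an output topic that other aggregates can consume:
    List<ResourceDescriptor> resources() {
        return List.of(new OutputTopic("customer-data.customer-updated"));
    }
}
```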
ProTip: The Kafka Streams aggregate api tutorial covers defining and using an aggregate’s API.
Normally, an aggregate contains services and those services are developed in the aggregate’s own code repository. However, it is also possible to use aggregates to group multiple other aggregates together, allowing for multiple levels of abstraction and encapsulation.
Resource descriptors are provided by Creek extensions, such as the Creek Kafka extension, which provides descriptors to define input, internal and output topics.
Resource descriptors also capture the concept of ownership. A resource, e.g. a Kafka topic, is almost always conceptually owned by a single service, and hence by an engineering team. Ownership determines who is responsible for the lifecycle of the resource and the data within it.
Using the example of Kafka topics, services often own their output topics, as these contain the data the service is responsible for maintaining. It is less common for a service to own an input topic. However, this can occur: for example, an alerts service might own its input topic, even though other services produce alerts to the topic.
Owned resources are managed by the service that owns them. For example, services with owned Kafka topics will ensure the topics exist, and any schemas are registered, when the service starts up and Creek is initialised.
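The owned/unowned split at start-up can be sketched as below. All names are hypothetical: it shows only the idea that owned resources are ensured to exist by the owning service, while unowned resources are left to their owning service elsewhere.

```java
import java.util.List;

// Illustrative sketch: hypothetical types showing the owned/unowned split.
record TopicDescriptor(String name, boolean owned) {}

final class ResourceInitializer {
    // On start-up, owned resources are created (and any schemas registered)
    // if they do not already exist; unowned resources are assumed to be
    // managed by the service that owns them.
    static List<String> ensureOwned(final List<TopicDescriptor> resources) {
        return resources.stream()
                .filter(TopicDescriptor::owned)
                .map(t -> "ensured: " + t.name())
                .toList();
    }
}
```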