Migrating Kalix services to Akka SDK
Since the introduction of Protobuf support in Akka SDK 3.15.15, it is possible to migrate Kalix services with existing persisted data to Akka.
Service implementation
In Akka, each component has a unique component id identifying it, corresponding to the Kalix component type id.
Corresponding Akka component types for Kalix component types:
| Kalix | Akka |
|---|---|
| Action | Consumer or Timed Action |
| Value Entity | Key Value Entity |
| Event Sourced Entity | Event Sourced Entity |
| View | View |
| Workflow | Workflow |
In addition to these, Akka also provides some component types for Agentic AI use cases.
Differences between services built with Kalix JVM SDK and Akka SDK
The largest difference between services built with the Kalix JVM SDK and services built with the Akka SDK is how components are defined.
In Kalix, components are defined through protobuf files containing gRPC service descriptors and message declarations.
The descriptors specify the component internals, such as which events an event sourced entity emits, what type of state it has, its API towards other components in the same service, and what inputs it accepts and outputs it produces. They possibly also define the public API of a service, both gRPC and transcoded HTTP/JSON.
The Kalix SDK then generates Java sources of the service based on the descriptors.
In Akka the components are instead declared in plain Java code.
It is possible, but completely optional, to use gRPC descriptors and protobuf files. When used the protobuf descriptors are only used to generate Java message classes and gRPC service interfaces. Components can then use the message types, and gRPC endpoints can be defined to provide a public API of the service.
| Aspect | Kalix | Akka |
|---|---|---|
| Public gRPC and HTTP APIs of a service | Kalix allows APIs for calls from the outside to be defined directly on all components. | In Akka, all public APIs (HTTP, gRPC) are defined separately from the entities and workflows, in HTTP Endpoint or gRPC Endpoint components. |
| Protobuf descriptors | All components are defined using a Protobuf/gRPC descriptor. | Only gRPC Endpoints are defined through a gRPC descriptor. |
| HTTP support | HTTP support is always transcoded gRPC service methods. | HTTP endpoints are defined separately and deal only with HTTP requests and responses. |
| Execution | Kalix executes the runtime and the user service in isolated JVMs which communicate using gRPC. | Akka executes the runtime and the user service in a single JVM, communicating with regular method calls. |
| Serialization | Kalix uses Protobuf messages both for stored data and for messages over the wire, for example commands to components. | Akka supports JSON serialization of regular Java classes as well as Protobuf messages, with Java classes being the recommended choice for non-migrated services. |
| Cross component calls | A generated components() accessor is used to call other components. | Components are called using the Component Client. |
| Entity ids | The entity id is part of each command, declaratively identified in the message type. | The entity id is passed to the Component Client separately from the command, as a single string identifier. |
| Cross service calls | The component context has functionality to look up a service by name and gRPC interface; ACLs/authentication is handled automatically. | A component can use HTTP or gRPC to call other services; ACLs/authentication is handled automatically. |
| Component call metadata | Component calls and responses are in fact gRPC calls and can pass additional metadata as headers in requests and responses. | Component calls are plain method calls and pass nothing other than the command and response values. Endpoint calls and responses, as well as persisted/consumed events, can accept or emit metadata. |
| JWT support | | |
Building an Akka service for migration of an existing Kalix service
For each Kalix component type, create the corresponding component type in Akka and use the Kalix type id as component id.
In general, migration of stored data requires copying all the protobuf message descriptors that are returned in effects from the components. To keep using Protobuf messages also for cross component calls, those must be copied as well.
Consumers and views that consume events from local components refer to the concrete entity classes, so it is easier to start with the entities and do views and consumers last.
Specific notes for individual component types:
Event sourced entities
The state of the entity must be the same Protobuf message type as the Kalix entity.
The entity must handle all the Protobuf event types that the Kalix entity did.
The event handler applyEvent must accept com.google.protobuf.GeneratedMessageV3 and perform
its own type matching for the expected message types.
The component class must also have the annotation
akka.javasdk.annotations.ProtoEventTypes listing all event types that the entity will use.
Unlisted event types will cause the entity to fail.
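As an illustration of these rules, a migrated event sourced entity could look like the following sketch. CounterEntity, CounterState, and ValueIncreased are hypothetical names standing in for your Kalix entity and its generated protobuf classes; the annotations and the GeneratedMessageV3 event handler follow the description above.

```java
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.annotations.ProtoEventTypes;
import akka.javasdk.eventsourcedentity.EventSourcedEntity;
import com.google.protobuf.GeneratedMessageV3;

@ComponentId("counter") // must match the Kalix component type id
@ProtoEventTypes({ValueIncreased.class}) // every event type the entity will use
public class CounterEntity
    extends EventSourcedEntity<CounterState, GeneratedMessageV3> {

  public Effect<CounterState> increase(int value) {
    // Persist the same protobuf event type the Kalix entity emitted
    return effects()
        .persist(ValueIncreased.newBuilder().setValue(value).build())
        .thenReply(newState -> newState);
  }

  @Override
  public CounterState applyEvent(GeneratedMessageV3 event) {
    // The handler accepts the protobuf base class and matches concrete types itself
    if (event instanceof ValueIncreased increased) {
      return currentState().toBuilder()
          .setValue(currentState().getValue() + increased.getValue())
          .build();
    }
    throw new IllegalStateException("Unexpected event type: " + event.getClass());
  }
}
```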
Workflows
The Workflow API has changed significantly. While the API simplifies defining workflows compared to the Kalix API it will require a careful re-implementation of any existing workflow logic.
The original workflow state protobuf message must be used to be able to inspect existing, completed workflows.
Views
Each view becomes a concrete View class.
Each source of events for a view in Akka is defined in a separate TableUpdater<T>, where T is the type of the state in the view table that it populates. This should be the same Protobuf type as was used for the corresponding table in Kalix.
Each updater is annotated with akka.javasdk.annotations.Table to declare which table name it updates: @Table("mytable").
In Kalix, eventing for views can be defined on individual methods, or view-wide on the view itself. Identify each unique source of events and create a nested public static class for it.
The source of the events for one updater is declared through one of the following annotations:

- @Consume.FromEventSourcedEntity([entity class].class)
- @Consume.FromKeyValueEntity([entity class].class)
- @Consume.FromWorkflow([workflow class].class)
- @Consume.FromTopic("topic-name")
- @Consume.FromServiceStream(service="service-name", id="stream-id")
Each query from the original View is declared as a method on the view class, returning either QueryEffect<T> or QueryStreamEffect<T> where T is the query result type.
The methods return either queryResult() or queryStreamResult() depending on the effect type.
The method is annotated with akka.javasdk.annotations.Query("Query language string") to specify
the query.
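Putting the updater, table, consume, and query pieces together, a migrated view might look like this sketch, where CustomersByCity, CustomerEntity, and the protobuf row type CustomerRow are hypothetical names:

```java
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.Query;
import akka.javasdk.annotations.Table;
import akka.javasdk.view.TableUpdater;
import akka.javasdk.view.View;

@ComponentId("customers-by-city") // same id as the Kalix view id
public class CustomersByCity extends View {

  // One updater per source of events, each writing to one named table
  @Table("customers")
  @Consume.FromKeyValueEntity(CustomerEntity.class)
  public static class CustomersUpdater extends TableUpdater<CustomerRow> { }

  // Each query from the original view becomes an annotated method
  @Query("SELECT * FROM customers WHERE city = :city")
  public QueryStreamEffect<CustomerRow> customersInCity(String city) {
    return queryStreamResult();
  }
}
```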
Actions consuming events
Each action that consumes events becomes a Consumer class.
The source of the events for the consumer is declared through one of the following annotations:

- @Consume.FromEventSourcedEntity([entity class].class)
- @Consume.FromKeyValueEntity([entity class].class)
- @Consume.FromWorkflow([workflow class].class)
- @Consume.FromTopic("topic-name")
- @Consume.FromServiceStream(service="service-name", id="stream-id")
The consumer handler method must accept com.google.protobuf.GeneratedMessageV3 and do its own type matching for the expected message types.
If it is consuming events from an Event Sourced Entity or a Key Value entity in the same service,
the concrete message types are inferred from that. For all other cases the consumer class must be annotated with
akka.javasdk.annotations.ProtoEventTypes listing all event types that the consumer will accept.
Unlisted message types arriving will fail the stream and stall the consumer until a service version supporting the event type is deployed.
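A migrated action could then look like the following sketch, assuming a topic source and hypothetical protobuf event types CustomerCreated and CustomerDeleted:

```java
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.ProtoEventTypes;
import akka.javasdk.consumer.Consumer;
import com.google.protobuf.GeneratedMessageV3;

@ComponentId("customer-events-consumer")
@Consume.FromTopic("customer-events")
// Required for topic sources, where the message types cannot be inferred
@ProtoEventTypes({CustomerCreated.class, CustomerDeleted.class})
public class CustomerEventsConsumer extends Consumer {

  public Effect onEvent(GeneratedMessageV3 event) {
    // The handler accepts the protobuf base class and matches concrete types itself
    if (event instanceof CustomerCreated created) {
      // ... handle the creation
      return effects().done();
    } else if (event instanceof CustomerDeleted deleted) {
      // ... handle the deletion
      return effects().done();
    }
    return effects().ignore();
  }
}
```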
Service to service producers
Events published for service to service consumers are published using a Consumer. It must use the same stream id and the same set of Protobuf events as the original service to service producer.
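A sketch of such a producer, assuming the original Kalix stream id was "customer_events" and the events come from a hypothetical CustomerEntity:

```java
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.Produce;
import akka.javasdk.consumer.Consumer;
import com.google.protobuf.GeneratedMessageV3;

@ComponentId("customer-events-producer")
@Consume.FromEventSourcedEntity(CustomerEntity.class)
@Produce.ServiceStream(id = "customer_events") // must match the Kalix stream id
public class CustomerEventsProducer extends Consumer {

  public Effect onEvent(GeneratedMessageV3 event) {
    // Re-publish the same protobuf events the Kalix producer emitted
    return effects().produce(event);
  }
}
```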
Public APIs of the service
Each component that is called from other Kalix services or external clients will need a separate gRPC endpoint which implements the same gRPC service, for those calls to still work.
Each component that defines HTTP transcoding on its methods will require translating those paths into one or more HTTP Endpoint classes.
Migrating service tests
Kalix and Akka share the same two-level testing philosophy: unit tests that exercise a single component in isolation using a TestKit, and integration tests that run the full service and interact with it through a component client. The concepts carry over directly; what changes is the class names and some API details.
Unit tests for entities
Both SDKs provide per-entity TestKit classes that keep state in memory and let you call command handlers without starting a server.
The pattern is the same in both SDKs:
- Instantiate the TestKit by passing the entity constructor: EventSourcedTestKit.of(MyEntity::new) / KeyValueEntityTestKit.of(MyEntity::new)
- Call a command handler via .method(MyEntity::myCommand).invoke(request)
- Assert the result with .getReply(), .getNextEventOfType(…), .getAllEvents(), or .getState()
The main practical difference is that Kalix command handlers received a protobuf request generated from the descriptor, while Akka command handlers accept plain Java objects (or protobuf messages if you kept them for migration). Adjust the types you pass to invoke() accordingly.
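A minimal unit-test sketch, assuming a migrated event sourced entity CounterEntity with a command handler increase, protobuf state CounterState, and protobuf event ValueIncreased (all hypothetical names):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import akka.javasdk.testkit.EventSourcedTestKit;
import org.junit.jupiter.api.Test;

class CounterEntityTest {

  @Test
  void increaseCounter() {
    // In-memory TestKit, no server needed
    var testKit = EventSourcedTestKit.of(CounterEntity::new);

    // Call the command handler; for a migrated entity the request may still
    // be a protobuf message, depending on what you kept
    var result = testKit.method(CounterEntity::increase).invoke(10);

    assertEquals(10, result.getReply().getValue());
    var event = result.getNextEventOfType(ValueIncreased.class);
    assertEquals(10, event.getValue());
  }
}
```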
See Event Sourced Entities and Key Value Entities for full unit-test examples.
Integration tests
Integration tests in both SDKs extend TestKitSupport, which starts the service in-process. The injected component client replaces the generated components() accessor from Kalix.
The entity id is now passed as a separate argument to forEventSourcedEntity(id) / forKeyValueEntity(id) rather than being embedded in the request message.
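An integration-test sketch following this pattern, again using the hypothetical CounterEntity and its protobuf state type:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import akka.javasdk.testkit.TestKitSupport;
import org.junit.jupiter.api.Test;

class CounterIntegrationTest extends TestKitSupport {

  @Test
  void increaseCounter() {
    // The entity id is a separate argument, not part of the request message
    var state = componentClient
        .forEventSourcedEntity("counter-1")
        .method(CounterEntity::increase)
        .invoke(10);

    assertEquals(10, state.getValue());
  }
}
```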
Views
Views have no unit-test support; they are always tested through integration tests. In Akka the TestKit can inject synthetic update events directly, without needing to exercise the entity that would normally produce them:
// in @BeforeAll or test setup
TestKit.Settings settings = TestKit.Settings.DEFAULT
.withKeyValueEntityIncomingMessages(CustomerEntity.class);
Retrieve the IncomingMessages handle, publish test data with .publish(entity, entityId), then query the view through componentClient.forView() and assert with Awaitility, since views update asynchronously.
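A sketch of the full pattern, assuming a key value entity CustomerEntity, a protobuf state type Customer, and a view CustomersByCity with a query method customersInCity (all hypothetical names; the TestKit accessor details should be checked against the current Akka SDK API):

```java
import akka.javasdk.testkit.EventingTestKit.IncomingMessages;
import akka.javasdk.testkit.TestKit;
import akka.javasdk.testkit.TestKitSupport;
import java.time.Duration;
import org.awaitility.Awaitility;
import org.junit.jupiter.api.Test;

class CustomersByCityTest extends TestKitSupport {

  @Override
  protected TestKit.Settings testKitSettings() {
    // Let the test inject state changes for the entity directly
    return TestKit.Settings.DEFAULT
        .withKeyValueEntityIncomingMessages(CustomerEntity.class);
  }

  @Test
  void findCustomersByCity() {
    IncomingMessages customerState =
        testKit.getKeyValueEntityIncomingMessages(CustomerEntity.class);

    // Publish a synthetic state update for entity id "customer-1"
    var customer = Customer.newBuilder().setCity("Stockholm").build();
    customerState.publish(customer, "customer-1");

    // Views update asynchronously, so poll with Awaitility
    Awaitility.await()
        .atMost(Duration.ofSeconds(10))
        .untilAsserted(() -> {
          var results = componentClient
              .forView()
              .method(CustomersByCity::customersInCity)
              .invoke("Stockholm");
          // assert on the returned rows ...
        });
  }
}
```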
See Views for complete examples.
Consumers (formerly Actions)
The Kalix topic-mocking API maps directly to Akka’s EventingTestKit:
Call .publish(message, entityId) to inject messages and .expectOneTyped(MyClass.class) to assert what the consumer produced. Clear topic state between test cases to prevent message cross-contamination.
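A sketch of a topic-based consumer test, assuming a consumer that reads from a topic "customer-events" and produces to "customer-changes", with hypothetical protobuf types CustomerCreated and CustomerChanged:

```java
import akka.javasdk.testkit.EventingTestKit.IncomingMessages;
import akka.javasdk.testkit.EventingTestKit.OutgoingMessages;
import akka.javasdk.testkit.TestKit;
import akka.javasdk.testkit.TestKitSupport;
import org.junit.jupiter.api.Test;

class CustomerEventsConsumerTest extends TestKitSupport {

  @Override
  protected TestKit.Settings testKitSettings() {
    // Mock both the incoming and the outgoing topic
    return TestKit.Settings.DEFAULT
        .withTopicIncomingMessages("customer-events")
        .withTopicOutgoingMessages("customer-changes");
  }

  @Test
  void forwardsCustomerEvents() {
    IncomingMessages in = testKit.getTopicIncomingMessages("customer-events");
    OutgoingMessages out = testKit.getTopicOutgoingMessages("customer-changes");

    var created = CustomerCreated.newBuilder().setName("Alice").build();
    in.publish(created, "customer-1");

    // Assert on what the consumer produced to the outgoing topic
    var produced = out.expectOneTyped(CustomerChanged.class);
    // assert on produced.getPayload() ...
  }
}
```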
See Consuming and Producing for full examples.
Migration of the deployed service
Migration caveats
- Scheduled timers cannot be migrated and will be dropped during the migration
- Once the service has been migrated, it is not safe to deploy the original Kalix service again
- Workflows can only be migrated if completed; migrating in-flight workflows is not possible
Migration steps
Note: the migration steps are preliminary; contact Akka support to coordinate the actual migration.
Once a 1:1 Akka service has been built, a few migration steps need to be followed.
- Contact Akka support and ask them to enable a migration flag
- Make sure the service no longer performs any writes
- Wait until all local projections have caught up
- Scale down the service
- Note the time
- Configure the Akka service with the time as akka.javasdk.eventing.start-from-timestamp
- Deploy the Akka service using the same service name
- Wait for a bit and verify that the service works and provides the expected responses for requests
- At some point in the future, deploy a version of the service where akka.javasdk.eventing.start-from-timestamp has been removed again (if not, new consumers and views will start from the defined timestamp)
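As a sketch of the configuration step above, the timestamp could be set in the service's src/main/resources/application.conf; the exact value format to use is an assumption here and should be confirmed with Akka support:

```
# Consumers and views start from this point in time instead of from the
# beginning of the journals (remove this setting again after the migration)
akka.javasdk.eventing.start-from-timestamp = "2025-06-01T00:00:00Z"
```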