For a Kafka Streams application to be resilient and reliable, it is important to handle failures gracefully. Unhandled failures can halt the stream and disrupt the service. This article covers the kinds of exceptions that can occur in a Kafka Streams application and how to handle them.
Exceptions within the code
These are exceptions thrown by the processor code itself, such as arithmetic exceptions, parsing exceptions, or a timeout on a database call. They can be handled by wrapping the offending piece of code in a try-catch block.
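As a minimal sketch (the method name and the Optional-based fallback are assumptions, not from the original), a parsing step inside processor code can be guarded like this:

```java
import java.util.Optional;

public class SafeParsing {

    // Guard a parsing step that may throw, instead of letting the
    // exception propagate and kill the stream thread.
    public static Optional<Integer> safeParse(String value) {
        try {
            return Optional.of(Integer.parseInt(value));
        } catch (NumberFormatException e) {
            // In a real topology you might log the bad record here
            // or route it to a dead-letter topic.
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(safeParse("42"));   // a parseable value
        System.out.println(safeParse("oops")); // falls back to empty
    }
}
```

In a topology this would typically sit inside a mapValues or process step, so a single malformed record is skipped rather than stopping the whole stream.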
Deserialization exception
Incoming data may have a different schema than what the stream consumer expects, which causes a deserialization exception when the record is consumed. Because these failures stem from inconsistent data rather than bugs in the application code, Kafka Streams provides the DeserializationExceptionHandler interface to decide whether a bad record should be skipped or should stop the application.
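As a sketch, the built-in LogAndContinueExceptionHandler can be registered through configuration so that records which fail to deserialize are logged and skipped (the application id and bootstrap servers below are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

public class DeserializationHandlerConfig {

    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");       // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Log and skip records that cannot be deserialized instead of failing.
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                  LogAndContinueExceptionHandler.class);
        return props;
    }
}
```

The other built-in choice, LogAndFailExceptionHandler, stops the application on the first bad record; a custom policy can be supplied by implementing DeserializationExceptionHandler directly.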
Production exception
Any exception that occurs during the interaction between the Kafka client and the broker is a production exception. An example is RecordTooLargeException: if the stream writes a record to a topic and the record size exceeds the largest message size allowed by the broker (max.message.bytes), a RecordTooLargeException is thrown. Such exceptions can be handled by implementing a ProductionExceptionHandler.
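A sketch of a custom handler (the class name and the "skip oversized records" policy are illustrative choices, not the only option) could look like this:

```java
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

// Skips records that are too large for the broker, but fails on anything else.
public class IgnoreRecordTooLargeHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record,
                                                     Exception exception) {
        if (exception instanceof RecordTooLargeException) {
            // Drop the oversized record and keep the stream running.
            return ProductionExceptionHandlerResponse.CONTINUE;
        }
        return ProductionExceptionHandlerResponse.FAIL;
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }
}
```

The handler is registered via the StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG property, the same way the deserialization handler is.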
Handling uncaught exceptions
If an uncaught exception occurs in a stream thread, the thread terminates abruptly. The setUncaughtExceptionHandler
method provides a way to run custom logic (such as sending a notification) when that happens, and to tell the runtime how to react.
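Since Kafka Streams 2.8 the handler receives the throwable and returns a response that chooses between replacing the dead thread, shutting down the client, or shutting down the whole application. A minimal sketch (construction of the KafkaStreams instance is elided, and the alert printout stands in for real notification logic):

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamsUncaughtExceptionHandlerResponse;

public class UncaughtHandlerExample {

    // Assumes `streams` is an already-built KafkaStreams instance.
    public static void register(KafkaStreams streams) {
        streams.setUncaughtExceptionHandler(exception -> {
            // Notify on failure, e.g. send an alert or emit a metric here.
            System.err.println("Stream thread died: " + exception.getMessage());
            // Replace the dead thread so processing continues;
            // SHUTDOWN_CLIENT and SHUTDOWN_APPLICATION are the other options.
            return StreamsUncaughtExceptionHandlerResponse.REPLACE_THREAD;
        });
    }
}
```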