We have an application that uses Kafka Streams to read from a topic, process the records, and write them to another topic. The application uses the exactly_once semantic. Everything seemed fine until we upgraded from kafka-streams 2.1.1 to 2.6.0: we ran out of disk space, and the cause was the __consumer_offsets topic, which held a lot of data.
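For context, here is a minimal sketch of such a read-process-write pipeline with exactly_once enabled; the application id, broker address, topic names, and the pass-through processing step are placeholders, not details from the question:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EosPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-pipeline");       // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // The exactly_once processing guarantee described in the question
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");        // placeholder topic
        input.mapValues(v -> v)                                               // stands in for the real processing
             .to("output-topic");                                            // placeholder topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
```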

I investigated the __consumer_offsets topic and found that offset information is written every 100 ms for each topic and partition. I am aware that the exactly_once semantic changes the commit interval to 100 ms, which would make sense, but we did not notice such behavior in version 2.1.1. I tried to compare the source code of 2.1.1 and 2.6.0 and could not find a fundamental difference that would explain this behavior.
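For anyone hitting the same symptom: under exactly_once, Kafka Streams lowers its default commit.interval.ms from 30000 ms to 100 ms, which matches the 100 ms writes observed above. If that cadence is the problem, the interval can be raised explicitly, a sketch extending the configuration above (1000 ms is an illustrative value, not a recommendation for this case):

```java
// Under exactly_once the default commit.interval.ms drops from 30000 ms to 100 ms,
// so offsets are committed to __consumer_offsets roughly every 100 ms per task.
// Raising the interval reduces the write rate at the cost of end-to-end latency.
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000); // illustrative value
```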

Thanks for your answers.

Question from: https://stackoverflow.com/questions/65941718/kafka-streams-produces-too-many-consumer-offsets-for-exactly-once-semantic


1 Answer

Waiting for answers
