java - Retaining data in Apache Kafka


I started reading about Apache Kafka a few days ago, so I am still a newbie to this technology. I have a few doubts / questions and would like an explanation, such as:

  1. According to the configuration log.retention.hours, we can set the retention duration in hours. Can the data retention time be extended up to 2 years? Regarding this, the documentation says:

The Kafka cluster retains all published messages, whether or not they have been consumed, for a configurable period of time. For example, if the log retention is set to two days, then for the two days after a message is published it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size, so retaining lots of data is not a problem.

Since it already says that performance is effectively constant with respect to data size, does this mean we can store as much data as we want? Does it require any additional configuration or monitoring?

1) Sure, log.retention.hours is an integer, and 2 years is only 17520 hours.

2) You can store as much data as you have storage for. Just note that while storing more data does not by itself degrade Kafka's performance, a consumer that tries to fetch a large amount of old data from disk will affect performance. For the best performance, make sure consumers read relatively recent data that is still in memory (the page cache).
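To make point 1) concrete: the broker-wide default can be raised by setting log.retention.hours=17520 in server.properties. Retention can also be overridden per topic at runtime with retention.ms; below is a minimal sketch using Kafka's Java AdminClient. The topic name my-topic and the broker address localhost:9092 are placeholder assumptions, not values from the original question.

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    public class TwoYearRetentionExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder broker address; replace with your cluster's bootstrap servers.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // 2 years = 17520 hours, expressed in milliseconds for retention.ms.
                long twoYearsMs = 17520L * 60 * 60 * 1000;

                ConfigResource topic =
                        new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
                AlterConfigOp setRetention = new AlterConfigOp(
                        new ConfigEntry("retention.ms", Long.toString(twoYearsMs)),
                        AlterConfigOp.OpType.SET);

                // The topic-level retention.ms override takes precedence over the
                // broker-wide log.retention.hours default for this topic only.
                Map<ConfigResource, Collection<AlterConfigOp>> updates =
                        Collections.singletonMap(topic, Collections.singletonList(setRetention));
                admin.incrementalAlterConfigs(updates).all().get();
            }
        }
    }

Other topics keep the cluster-wide default, so a long retention period can be limited to the topics that actually need it.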

