Werner Daehn
1 min read · Dec 21, 2021


Can you please provide proof for each of your facts, as I am of a different opinion?

ad 1) “Scaling” as a quantity must be defined. I would argue that (a) a single topic partition can easily cope with the change frequency of a typical database, and (b) you have options: do you really care whether your order comes first and mine second, as long as all order-related tables committed in one transaction stay consistent? If not, you can partition the topic by order number. So there are two ways out.
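To make option (b) concrete, here is a minimal sketch of hash partitioning by order number. It uses crc32 as a stand-in for Kafka's actual key hash (Kafka uses murmur2 on the serialized key), but the property is the same: one key always maps to one partition, so every change record of a given order is consumed in the order it was produced.

```python
# Simplified stand-in for Kafka's key-based partitioner: a stable hash
# of the message key modulo the partition count. All records keyed by
# the same order number land in the same partition, hence stay ordered.
import zlib

NUM_PARTITIONS = 6  # assumed partition count for illustration

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a message key to a partition with a stable hash (crc32 here)."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Header, line items and payment of one order share a key, so they all
# go to the same partition; a different order may go elsewhere.
records = [
    ("order-1001", "ORDER_HEADER insert"),
    ("order-2002", "ORDER_HEADER insert"),
    ("order-1001", "ORDER_LINE insert"),
    ("order-1001", "PAYMENT insert"),
]
for key, payload in records:
    print(key, "-> partition", partition_for(key))
```

The exact hash does not matter for the argument; any deterministic key-to-partition mapping gives per-order ordering while still spreading independent orders across partitions.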

ad 2) In Kafka you have a transactional producer and consumer. I agree, however, that they do not provide the qualities we look for. But transactional consistency can be ensured by a clever combination of Kafka features. I do that in my projects, and it is little effort.
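One possible shape of such a combination, reduced to an in-memory sketch (no broker, and not necessarily the exact approach used in my projects): tag every change record with its source transaction id and send an explicit commit record last; the consumer buffers records per transaction and releases a transaction only when its commit record arrives, so downstream never sees half a transaction.

```python
# In-memory sketch: buffer change records per source transaction and
# emit them atomically once the transaction's COMMIT marker arrives.
from collections import defaultdict

def consume_transactionally(stream):
    """Yield (txid, payload) records only for committed transactions.

    `stream` is an iterable of (txid, payload) tuples, where the payload
    "COMMIT" marks the end of that source transaction. Records of still
    open transactions are held back; on COMMIT the whole transaction is
    released in one piece.
    """
    open_txns = defaultdict(list)
    for txid, payload in stream:
        if payload == "COMMIT":
            # Release every buffered record of this transaction together.
            yield from ((txid, p) for p in open_txns.pop(txid, []))
        else:
            open_txns[txid].append(payload)

stream = [
    ("tx1", "ORDER_HEADER insert"),
    ("tx1", "ORDER_LINE insert"),
    ("tx2", "CUSTOMER update"),
    ("tx1", "COMMIT"),   # tx1 complete -> its rows appear together
    ("tx2", "COMMIT"),
]
print(list(consume_transactionally(stream)))
```

A record of an aborted or never-committed transaction is simply never released, which is exactly the consistency guarantee the objection asks for.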

ad 3) Order is physical. Within a topic partition, whatever the producer sends first will be consumed first. You must actively set producer properties to break that guarantee.
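The producer properties in question are retries combined with multiple in-flight requests. As a sketch, using the standard Kafka producer config names in the dict form accepted by confluent-kafka-python (the broker address is an assumption):

```python
# Settings that preserve strict per-partition ordering: idempotence
# makes retries safe, so even several in-flight batches cannot reorder.
ordering_safe = {
    "bootstrap.servers": "localhost:9092",       # assumed broker address
    "enable.idempotence": True,                  # implies acks=all, safe retries
    "max.in.flight.requests.per.connection": 5,  # safe when idempotent
}

# Settings that CAN reorder: without idempotence, a failed batch may be
# retried after a later batch has already been written to the log.
ordering_risky = {
    "bootstrap.servers": "localhost:9092",
    "enable.idempotence": False,
    "retries": 10,
    "max.in.flight.requests.per.connection": 5,
}
```

In other words, the default delivery path keeps order per partition; you only lose it by deliberately combining non-idempotent retries with multiple in-flight requests.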

ad 4) Yes, you need a long-term persistence. Confluent is in the process of enabling unlimited storage, but even without it, creating a cheap persistence of all messages is simple. Hence this is no argument against the concept, just a requirement.
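To show how simple, here is a hedged sketch of such a cheap persistence: an archiving consumer (simulated here, no broker) that appends every message as one JSON line to a file per topic and day. Plain disk or object storage both work; the only operation needed is an append. File layout and function name are illustrative, not a fixed format.

```python
# Sketch: archive every consumed message as a JSON line in
# <base_dir>/<topic>/<YYYY-MM-DD>.jsonl -- a trivial, cheap long-term store.
import json
import os
import tempfile
from datetime import datetime, timezone

def archive(message: dict, topic: str, base_dir: str) -> str:
    """Append one message to the topic's file for the current UTC day."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    topic_dir = os.path.join(base_dir, topic)
    os.makedirs(topic_dir, exist_ok=True)
    path = os.path.join(topic_dir, f"{day}.jsonl")
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(message) + "\n")
    return path

base = tempfile.mkdtemp()
for offset in range(3):
    archived = archive({"offset": offset, "payload": f"change-{offset}"}, "orders", base)
with open(archived, encoding="utf-8") as f:
    print(sum(1 for _ in f), "messages archived in", archived)
```

Because Kafka offsets are monotonically increasing per partition, such an archive can also be replayed in the original order later, which is all the "unlimited retention" requirement really asks for.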
