The last Kafka Summit in San Francisco focused largely on operational concerns: managing Kafka through its upgrade cycle, dealing with large-scale deployments, ensuring recovery and high availability, and handling the heavy data-stream ingestion that follows once teams discover Kafka is available in their organizations. Equally important is how to handle the ballooning costs of catering to developer teams’ needs for more and better data processing. Companies are struggling with the very success of deploying and utilizing Kafka in their enterprises.
Kafka Deployments in Containers
Most of the attendees who visited our booth were surprised to learn that running Kafka in containers in production is a reality today. They could not believe the ease of use, performance and, most importantly, manageability they could expect by deploying their Kafka clusters in containers, specifically on the Diamanti Enterprise Kubernetes Platform. The pain points of ballooning costs, scaling, and disaster recovery are all addressed by the technology innovations that Diamanti brings to containers.
By deploying Kafka in Kubernetes, enterprises can expect the following benefits:
• Quick and easy deployment for fastest time to market
• Seamless upgrades with no downtime
• High availability
• Elasticity and on-demand scaling of the Kafka cluster, made easy by enterprise-class container images from vendors like Confluent (see the client sketch after these lists)
Additionally, Diamanti Enterprise Kubernetes Platform provides the following benefits:
• Plug-n-play HCI infrastructure
• Superior performance
• Easy disaster recovery with in-cluster or cross-cluster storage replication
• Guaranteed quality of service (QoS)
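From an application’s point of view, a Kafka cluster deployed this way behaves like any other: clients simply point at the cluster’s bootstrap address, whether that is an in-cluster Kubernetes service or an externally exposed endpoint. As a minimal, hedged sketch (the service address kafka-bootstrap.kafka.svc.cluster.local, the topic name events, and the consumer group are illustrative assumptions, not details from this article), a plain Java producer and consumer might look like this:

```java
// Minimal sketch: a standard Java client talking to Kafka running in Kubernetes.
// The bootstrap address and topic name below are hypothetical placeholders.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KafkaOnKubernetesSketch {
    // Hypothetical in-cluster service address; an externally exposed endpoint
    // would work the same way from outside the Kubernetes cluster.
    private static final String BOOTSTRAP = "kafka-bootstrap.kafka.svc.cluster.local:9092";
    private static final String TOPIC = "events";

    public static void main(String[] args) {
        // Produce a single record.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(TOPIC, "device-42", "temperature=21.5"));
        }

        // Consume records from the same topic.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of(TOPIC));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}
```

The point is that no client code changes are needed when the brokers move into containers; only the bootstrap address differs between environments.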
Supercharge Apache Kafka with the Diamanti Enterprise Kubernetes Platform
With Kubernetes on a hyperconverged infrastructure like Diamanti, the true potential of Kafka can be unlocked and its limits genuinely pushed. To highlight the level of performance that can be expected from a Kafka cluster running on the Diamanti Enterprise Kubernetes Platform, we present conservative baseline numbers for a containerized Kafka cluster.
The test used a three-node Kafka cluster, with 6 CPU cores and 32 GB of RAM per node, running a mix of replicated and non-replicated topics. The goal was to show that combining bare metal and Kubernetes results in a high-performing, easily scalable Kafka deployment that allows business units to consume, process, and analyze data, enabling quicker decision making (a sketch of a throughput-oriented client configuration follows the results below).
• Producers can write to topics at approximately 3 million writes per second
• Very low, deterministic write latency of 10 milliseconds
• Consumers can read from topics at approximately 6 million reads per second
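These figures come from Diamanti’s own test setup described above, so treat the sketch below purely as an illustration rather than a reproduction of those results. Raw producer throughput is usually measured with Kafka’s bundled kafka-producer-perf-test tool; the hedged Java sketch below shows the analogous client-side tuning (aggressive batching, a short linger, compression), with the broker address, topic name, record size, and record count all assumed for illustration:

```java
// Hedged sketch of a throughput-oriented producer, loosely in the spirit of the
// test described above. Broker address, topic, and message sizes are assumptions.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import java.util.Properties;

public class ProducerThroughputSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-bootstrap.kafka.svc.cluster.local:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        // Settings commonly tuned for raw throughput: batch aggressively, allow a
        // short linger so batches fill, compress, and give the client ample buffer.
        props.put(ProducerConfig.ACKS_CONFIG, "1");                 // or "all" for replicated topics
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 256 * 1024);
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 128L * 1024 * 1024);

        byte[] payload = new byte[100];   // small 100-byte records, typical of perf tests
        int numRecords = 1_000_000;

        long start = System.nanoTime();
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < numRecords; i++) {
                producer.send(new ProducerRecord<>("perf-test", payload));
            }
            producer.flush();             // wait for all batches to be delivered
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("Sent %d records in %.2f s (%.0f records/s)%n",
                numRecords, seconds, numRecords / seconds);
    }
}
```

Actual throughput depends heavily on record size, replication factor, the acks setting, and the underlying storage and network, which is exactly where the platform’s guaranteed QoS matters.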
Kafka is becoming a de facto necessity in today’s world of fast-moving data from multiple sources. It serves as the single point of ingestion and retrieval driving business intelligence, web applications, IoT, and more.
Running Kafka on a Diamanti Enterprise Kubernetes cluster allows enterprises to deploy Kafka rapidly while improving manageability and reducing the total cost of ownership (TCO). Diamanti delivers a consistent performance boost while also providing elasticity and horizontal scaling for Kafka.
Want to learn more? Click here to download the solution brief.
Boris Kurktchiev is Field CTO EMEA at Diamanti.