
Nuts and Bolts of Apache Kafka

 

Topics, Partitions, and APIs, oh my! This episode, we're digging further into how Apache Kafka works and its use cases. Also, Allen is staying dry, Joe goes for broke, and Michael (eventually) gets on the right page.

The full show notes are available on the website at https://www.codingblocks.net/episode236

News

  • Thanks for the reviews! angingjellies and Nick Brooker
    • Please leave us a review! (/review)
  • Atlanta Dev Con is coming up on September 7th, 2024 (www.atldevcon.com)

Kafka Topics

  • They are partitioned – a topic is split into “buckets” that can be distributed across multiple Kafka brokers
  • New events written to Kafka are appended to partitions
    • The distribution of data across brokers is what allows Kafka to scale so well as data can be written to and read from many brokers simultaneously
  • Events with the same key are always written to the same partition
    • Kafka guarantees that events within a partition are read in the order they were written
  • For fault tolerance and high availability, topics can be replicated…even across regions and data centers
    • NOTE: If you’re using a cloud provider, know that this can be very costly as you pay for inbound and outbound traffic across regions and availability zones
    • A typical production setup uses a replication factor of 3 (see the sketch after this list)
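
Below is a minimal sketch of creating such a topic with Kafka's Java AdminClient. The topic name (orders), the partition count, the replication factor, and the localhost:9092 broker address are illustrative assumptions, not details from the episode.

```java
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        // Assumes a broker is reachable at localhost:9092.
        Map<String, Object> config = Map.of(
                AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(config)) {
            // 6 partitions spread the topic across brokers ("buckets");
            // replication factor 3 keeps copies on 3 brokers for fault tolerance.
            NewTopic orders = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(List.of(orders)).all().get();
        }
    }
}
```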

Kafka APIs

  • Admin API – used for managing and inspecting topics, brokers, and other Kafka objects
  • Producer API – used to write events to Kafka topics
  • Consumer API – used to read data from Kafka topics (a combined producer/consumer sketch follows this list)
  • Kafka Streams API – used to implement stream processing applications and microservices. Key functionality includes transformations, stateful operations like aggregations and joins, windowing, and more
    • In the Kafka Streams world, the results of these transformations and aggregations are typically written to other topics (in from one topic, out to one or more other topics)
  • Kafka Connect API – provides reusable import/export connectors for integrating with external systems. A connector can gather data from an external system (like a database, using CDC) and write it to Kafka; another connector can then push that data on to a different system, or a streams application can transform it further (a sample connector config is also sketched after this list)
    • These connectors are referred to as Sources and Sinks in the connector portfolio (confluent.io)
    • Source – gets data from an external system and writes it to a Kafka topic
    • Sink – pushes data from a Kafka topic to an external system
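
Here's a minimal producer/consumer sketch, again assuming a local broker at localhost:9092 and the hypothetical orders topic from above; the key, values, and consumer group id are likewise illustrative.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceConsumeExample {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        // Producer API: events with the same key ("user-42") land on the
        // same partition, which is what preserves their relative order.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "user-42", "order-created"));
            producer.send(new ProducerRecord<>("orders", "user-42", "order-shipped"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "order-readers");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());

        // Consumer API: within a partition, records come back in write order.
        // A single poll is enough for a sketch; real consumers poll in a loop.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("orders"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                        record.partition(), record.offset(), record.key(), record.value());
            }
        }
    }
}
```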
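
Kafka Connect itself is configured rather than coded. A hypothetical standalone source-connector config (here using the Confluent JDBC source connector; the connection details, column name, and topic prefix are illustrative assumptions) might look like:

```properties
# Hypothetical Connect source config: CDC-style ingest from a database.
name=inventory-db-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:postgresql://db:5432/inventory
# Poll for new rows using an auto-incrementing id column.
mode=incrementing
incrementing.column.name=id
# Each table's rows are written to a topic named db-<table>.
topic.prefix=db-
```

With standalone Connect, a file like this is passed to bin/connect-standalone.sh alongside the worker config.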

Use Cases

  • Message queue – usually talking about replacing something like ActiveMQ or RabbitMQ
    • Message brokers are often used for responsive processing, decoupling systems, etc. – Kafka is usually a great alternative that scales well, generally has higher throughput, and offers more functionality
  • Website activity tracking – this was one of the very first use cases for Kafka: the ability to rebuild user actions by recording all user activities as events
    • How and why Kafka was developed (LinkedIn)
    • Typically, different activity types would be written to different topics – like web page interactions to one topic and searches to another
  • Metrics – aggregating statistics from distributed applications
  • Log aggregation – some use Kafka to store event logs rather than something like HDFS, a file server, or cloud storage – but why? Because storing events in Kafka abstracts them away from the underlying files
  • Stream processing – taking events in, further enriching them, and publishing them to new topics (a minimal Kafka Streams sketch follows this list)
  • Event sourcing – using Kafka to store state changes from an application that are used to replay the current state of an object or system
  • Commit log – using Kafka as an external commit log is a way to synchronize data between distributed systems or to help rebuild the state of a failed system
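
A minimal Kafka Streams sketch of that enrich-and-republish pattern, assuming the hypothetical orders topic from earlier and an illustrative orders-enriched output topic (the "enrichment" here is just a stand-in transformation):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EnrichmentStreamExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read from one topic, transform each event, and publish the
        // enriched result to another topic: in from one, out to another.
        KStream<String, String> orders = builder.stream("orders");
        orders.mapValues(value -> value + "|enriched-at=" + System.currentTimeMillis())
              .to("orders-enriched");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```
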
{"@context":"http:\/\/schema.org\/","@id":"https:\/\/www.codingblocks.net\/podcast\/nuts-and-bolts-of-apache-kafka\/#arve-youtube-iuudru9-hrk67033022ea2f2004054139","type":"VideoObject","embedURL":"https:\/\/www.youtube-nocookie.com\/embed\/IuUDRU9-HRk?feature=oembed&iv_load_policy=3&modestbranding=1&rel=0&autohide=1&playsinline=0&autoplay=0"}

Tip of the Week

  • Rémi Gallego is a music producer who makes music under a variety of names like The Algorithm and Boucle Infini – almost all of it instrumental synthwave with a hard-rock edge. They also make a lot of video game music, including two of my favorite game soundtracks of all time, “The Last Spell” and “Hell is for Demons” (YouTube)
  • Did you know that the Kubernetes-focused TUI we’ve raved about before can also be used to look up information about other things, like :helm and :events? Events is particularly useful for figuring out mysteries. You can see all the “resources” available to you with “?” – you might be surprised at everything you find (popeye, xray, and monitoring)
  • WarpStream is an S3-backed, API-compatible Kafka alternative. Thanks, MikeRg! (warpstream.com)
  • Cloudflare’s trillion-message Kafka setup, thanks MikeRg! (blog.bytebytego.com)
  • Want the power and flexibility of jq, but for YAML? Try yq! (gitbook.io)
  • Zenith is a terminal-based graphical metrics tool for your *nix system, written in Rust. Thanks, MikeRg! (github.com)
  • 8 Big O Notations Every Developer Should Know (medium.com)
  • Another Git cheat sheet (wizardzines.com)