
Becoming Data Driven with Apache Kafka and Stream Processing ft. Daniel Jagielski

48:10
 
Content provided by Confluent, founded by the original creators of Apache Kafka®. All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by Confluent or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.

When it comes to adopting event-driven architectures, a couple of key considerations often arise: the way that an asynchronous core interacts with external synchronous systems and the question of “how do I refactor my monolith into services?” Daniel Jagielski, a consultant working as a tech lead/dev manager at VirtusLab for Tesco, recounts how these very themes emerged in his work with European clients.
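To make that first theme concrete, here is a minimal sketch (not the platform discussed in the episode) of one common bridge between the two worlds: a synchronous request handler publishes a command event with a correlation ID and blocks only on the broker acknowledgment, while a reply topic keyed by the same ID would carry the eventual result back to the caller. The topic name and local bootstrap address are assumptions for illustration.

```java
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncToAsyncBridge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: a local broker; a real deployment would point at the cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The correlation ID ties the synchronous caller to the eventual async reply.
            String correlationId = UUID.randomUUID().toString();
            ProducerRecord<String, String> command =
                    new ProducerRecord<>("commands", correlationId, "{\"action\":\"check\"}");
            // Block only on the broker's acknowledgment, not on downstream processing;
            // a consumer on a reply topic would complete the caller's request later.
            RecordMetadata meta = producer.send(command).get();
            System.out.printf("Command %s accepted at offset %d%n", correlationId, meta.offset());
        }
    }
}
```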

Through observing organizations as they pivot toward becoming real time and event driven, Daniel identifies the benefits of using Apache Kafka® and stream processing for auditing, integration, pub/sub, and event streaming.
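As a small illustration of the pub/sub pattern among those uses, the hedged sketch below subscribes a consumer group to an event topic; the topic, group ID, and broker address are invented for the example. Because Kafka retains records, the same stream can also be replayed later for auditing and integration.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AuditSubscriber {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "audit-log"); // each group gets its own copy of the stream
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            while (true) {
                // Poll delivers new events as they are published.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("audit %s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```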

He describes the differences between a provisioned and a managed cluster, and why that distinction matters within the Kafka ecosystem. Daniel also dives into the risk detection platform used by Tesco, which he helped build as a VirtusLab consultant and which marries the asynchronous and synchronous worlds.
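In client code, the provisioned-versus-managed difference shows up mostly in connection configuration. The sketch below contrasts the two; the addresses and credentials are placeholders, and the SASL settings follow the common pattern for managed Kafka services such as Confluent Cloud rather than any setup described in the episode.

```java
import java.util.Properties;

public class ClusterConfigs {
    // Self-provisioned cluster: you run the brokers, often reachable in plaintext
    // on an internal network; you also own sizing, upgrades, and monitoring.
    static Properties provisioned() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1.internal:9092"); // placeholder address
        return props;
    }

    // Managed cluster: the provider runs the brokers; clients authenticate over
    // TLS with an API key and secret (the common SASL/PLAIN pattern).
    static Properties managed() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "pkc-xxxxx.region.provider.cloud:9092"); // placeholder
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"API_KEY\" password=\"API_SECRET\";"); // placeholders
        return props;
    }
}
```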

As Tesco migrated from a legacy platform to event streaming, determining risk and anomaly detection patterns has become more important than ever, and the platform needs the flexibility to adjust to usage patterns that shifted with COVID-19. In this episode, Daniel talks about integrations with third parties, push-based actions, and materialized views/projections for APIs.
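One way to read "materialized views/projections for APIs" in practice: a stream processor folds an event stream into a continuously updated, queryable table that a synchronous API can serve. A minimal Kafka Streams sketch, with topic and store names invented for illustration:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;

public class RiskSignalView {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "risk-signal-view");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);

        StreamsBuilder builder = new StreamsBuilder();
        // Fold login events per account into a continuously updated count; the named
        // store is queryable, so an API can read it without re-scanning the log.
        builder.stream("login-events")
               .groupByKey()
               .count(Materialized.as("logins-per-account"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

An API layer could then read the named store through Kafka Streams interactive queries instead of querying a separate database.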

Daniel is a tech lead/dev manager, but he’s also an individual contributor on the Apollo project (an ICE organization) focused on online music usage processing. This means working with data in motion, breaking the monolith (starting with a proof of concept), migrating ETL to stream processing, and ingesting via multiple processes that run in parallel with record-level processing.
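That last point, parallel ingestion with record-level processing, maps naturally onto Kafka's partitioning model: instances sharing an application ID split the input partitions among themselves and handle each record as it arrives, replacing the batch transform step of a classic ETL job. A hedged sketch, with topic names and the transform invented for illustration:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class UsageIngest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "usage-ingest");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        // Several instances with the same application.id (or several threads here)
        // split the input partitions among themselves for parallel ingestion.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);

        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("raw-usage")
               // Each record is transformed and forwarded as it arrives,
               // rather than waiting for a nightly batch window.
               .mapValues(value -> value.trim().toLowerCase())
               .to("clean-usage");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```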

