Having started with classic monolith applications in the late 90s and adopted a new microservice architecture in 2015, our organization needed a convenient, reliable, and low-cost way to push changes back and forth between them, preferably one that utilized technology already on hand and could exchange information between multiple data stores.
In this session we will explore how Kafka Connect and its various connectors satisfied this need. We will review the two disparate tech stacks we needed to integrate, and the strategies and connectors we used to exchange information. Finally, we will cover some enhancements we made to our own processes including integrating Kafka Connect and its connectors into our CI/CD pipeline and writing tools to monitor connectors in our production environment.
Utilizing Kafka Connect to Integrate Classic Monoliths into Modern Microservices | Zachary Lark, LogRhythm
1. Utilizing Kafka Connect to Integrate Classic Monoliths into Modern Microservices
@Zach_Lark
linkedin.com/in/zachary-lark
2. Who am I?
• Started in software development in 2003
• Professional Services -> Developer -> Dev Manager -> Architect
• Using Kafka since 2016
• Started at LogRhythm in August of 2021
34. JDBC Connector Config
Max number of tasks to allocate to the connector
For JDBC Connectors, this is governed by the number of tables being monitored
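A minimal sketch of a JDBC source connector config illustrating `tasks.max` (the connector name, connection URL, table names, and topic prefix here are hypothetical placeholders, not values from the talk):

```json
{
  "name": "monolith-orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:sqlserver://monolith-db:1433;databaseName=orders",
    "tasks.max": "3",
    "table.whitelist": "orders,customers,invoices",
    "topic.prefix": "monolith-"
  }
}
```

With three whitelisted tables, at most three tasks will actually run, even if `tasks.max` were set higher.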
39. JDBC Connector Config
Query used when poll executes
Structured so that the JDBC connector can append a WHERE clause appropriate for the configured mode
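To make the appended clause concrete, a hedged sketch (the table and column names are hypothetical): given a base query in the config, in timestamp mode the connector issues something like the following on each poll. Note the base query must not contain its own WHERE clause, since the connector appends one.

```sql
-- Base query supplied in the connector config:
SELECT id, status, updated_at FROM orders
-- What the connector effectively executes each poll in timestamp mode,
-- with the placeholders bound to the last committed offset and the current time:
--   ... WHERE updated_at > ? AND updated_at < ? ORDER BY updated_at ASC
```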
40. JDBC Connector Config
Mode:
• bulk: load all rows on every poll
• incrementing: use a strictly incrementing column to detect new rows
• timestamp: use a timestamp column to detect new and modified rows
• timestamp+incrementing: use a combination of both to uniquely identify changes over time
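The mode-specific settings can be sketched as the following config fragment, here for timestamp+incrementing (the column names and poll interval are hypothetical examples, not values from the talk):

```json
{
  "mode": "timestamp+incrementing",
  "incrementing.column.name": "id",
  "timestamp.column.name": "updated_at",
  "poll.interval.ms": "5000"
}
```

timestamp+incrementing is the safest choice for tables that receive both inserts and updates, since the timestamp alone cannot distinguish two rows modified in the same instant.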