Part 6/11:
Processing billions of orders per day with alerts every 10-15 minutes requires an infrastructure capable of ingesting and analyzing data continuously.
Streaming technologies such as Apache Kafka and Flink, combined with open table formats like Apache Iceberg, let enterprises build scalable, low-latency data flows suited to AI workloads.
The speaker advocates replacing traditional batch workflows, which are costly and slow, with architectures that natively support real-time processing, a necessity for successfully deploying generative AI use cases.
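To make the contrast with batch concrete, the sketch below simulates in miniature what a Flink job over a Kafka topic would do for the 10-15 minute alerting cadence described above: assign each order event to a tumbling time window and flag windows whose volume crosses a threshold. The window length, threshold, and event data are illustrative assumptions, not details from the talk; a real deployment would read from Kafka and run the windowing in Flink rather than in-process.

```python
from collections import defaultdict

WINDOW_SECONDS = 600  # illustrative 10-minute tumbling window


def windowed_counts(events):
    """Assign (timestamp, order_id) events to tumbling windows.

    Mimics streaming window assignment: each event lands in the
    window covering its timestamp, and per-window order counts are
    emitted for downstream alerting.
    """
    windows = defaultdict(int)
    for ts, _order_id in events:
        windows[ts // WINDOW_SECONDS] += 1
    return dict(windows)


def alerts(counts, threshold):
    """Return the window indices whose order volume exceeds threshold."""
    return sorted(w for w, n in counts.items() if n > threshold)


# Simulated stream: three orders in the first window, one in the second.
events = [(10, "a"), (50, "b"), (599, "c"), (700, "d")]
counts = windowed_counts(events)
print(counts)             # {0: 3, 1: 1}
print(alerts(counts, 2))  # [0]
```

In a true streaming architecture these windows close and fire continuously as events arrive, which is what removes the latency floor imposed by periodic batch jobs.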
Accelerating Machine Learning with New Infrastructure
The discussion transitioned into how intelligent data platforms can speed up machine learning (ML) workflows and deployment: