Apache Flink job to correlate telecom signaling messages for KPI extracted from streaming solutions
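A minimal sketch (plain Python, independent of Flink) of the kind of correlation such a job performs: pairing signaling requests with their responses by transaction ID to derive latency and success-rate KPIs. The message fields and KPI names here are hypothetical, not taken from the repository.

```python
def correlate(messages):
    """Pair request/response signaling messages by transaction ID and
    derive simple KPIs (success rate, average latency).
    Each message is a dict: {"tx_id", "kind", "ts_ms", "status"}."""
    pending = {}      # tx_id -> request timestamp, awaiting a response
    latencies = []
    outcomes = []
    for msg in messages:
        if msg["kind"] == "request":
            pending[msg["tx_id"]] = msg["ts_ms"]
        elif msg["kind"] == "response" and msg["tx_id"] in pending:
            latencies.append(msg["ts_ms"] - pending.pop(msg["tx_id"]))
            outcomes.append(msg.get("status") == "ok")
    return {
        "success_rate": sum(outcomes) / len(outcomes) if outcomes else 0.0,
        "avg_latency_ms": sum(latencies) / len(latencies) if latencies else 0.0,
    }
```

In an actual Flink job this pairing would live in keyed state (keyed by `tx_id`) with a timer to expire unanswered requests.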
Stream processing of website click data using Kafka, monitored and visualised with Prometheus and Grafana
HTTP Connector for Apache Flink. Provides sources and sinks for the DataStream, Table and SQL APIs.
A standalone [single|uber|fat] jar Apache Flink job connected to PostgreSQL via the Ververica CDC connector. Uses Flink SQL to replicate data from PostgreSQL to Elasticsearch or another target.
This repository will contain examples of use cases that utilize the Decodable streaming solution
A streaming data pipeline that uses Kafka as the backbone and Flink for data processing and transformations. Kafka Connect writes the streams to S3-compatible blob stores and Redis (a low-latency KV store for real-time ML inference). Spark runs the batch job that backfills the ML feature data.
Collection of code examples for Amazon Managed Service for Apache Flink
This repository accompanies the article "Build a data ingestion pipeline using Kafka, Flink, and CrateDB" and the "CrateDB Community Day #2".
A data pipeline about "Random Number Counting"
E-commerce sales analytics data generation: a detailed system architecture using Apache Flink, Kafka, Elasticsearch, and Docker. Implements real-time data streaming on a robust, scalable pipeline; Flink aggregates transactions into Postgres and Elasticsearch, feeding a dynamic streaming dashboard.
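The transaction aggregation described above can be sketched in plain Python (the transaction fields are assumptions, not the repository's actual schema); in the real pipeline the equivalent keyed aggregation runs in Flink before the results are written to Postgres and Elasticsearch:

```python
def aggregate_sales(transactions):
    """Aggregate e-commerce transactions into per-category totals,
    mimicking what a Flink keyed aggregation would emit downstream."""
    totals = {}
    for tx in transactions:
        agg = totals.setdefault(tx["category"], {"count": 0, "revenue": 0.0})
        agg["count"] += 1
        agg["revenue"] += tx["price"] * tx["quantity"]
    return totals
```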
A data pipeline demo for sale transactions.
Demo Flink and Kafka project showing how to react to tracking events in real time and trigger offers for customer engagement based on campaign configurations. The project also uses the Broadcast State Pattern to update the rules (campaigns) at runtime without restarting the job, via a dedicated, low-frequency Kafka topic.
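The broadcast-state idea can be sketched outside Flink as a rule store fed by a control stream while an event stream is matched against whatever rules are currently live. The campaign fields and predicate below are hypothetical illustrations, not the project's actual rule format:

```python
class CampaignEngine:
    """Toy version of Flink's Broadcast State Pattern: a low-frequency
    control stream (re)defines campaign rules at runtime, while a
    high-frequency event stream is matched against the current rules."""

    def __init__(self):
        self.rules = {}  # campaign_id -> predicate over events

    def on_rule(self, campaign_id, min_amount):
        # Control-stream element: add or update a campaign, no restart needed.
        self.rules[campaign_id] = lambda e, m=min_amount: e["amount"] >= m

    def on_event(self, event):
        # Data-stream element: return all campaigns this event triggers.
        return [cid for cid, pred in self.rules.items() if pred(event)]
```

In Flink proper, `on_rule` corresponds to `processBroadcastElement` and `on_event` to `processElement` in a `BroadcastProcessFunction`.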
A Makeshift data infrastructure setup for datafirstjobs.com.
We are thrilled to announce our new PoC project, aimed at providing a complete real-time extraction, transformation, and exposure architecture for the new provincial transportation systems.
This repository contains my coursework projects for the Big Data course in my Master's degree program.
Project to practice Python / Kafka / Flink
Gathers metrics (CPU, memory) from various computers, aggregates the Kinesis stream data using Kinesis Data Analytics (with Flink), and stores the stream data in an AWS S3 bucket, which Amazon Athena then uses for running analytics queries, with charts rendered in Grafana.
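The aggregation step above can be sketched as a tumbling-window average in plain Python (the sample format and window size are assumptions for illustration); Kinesis Data Analytics / Flink would apply the same grouping on event time before the results land in S3:

```python
def tumbling_avg(samples, window_ms):
    """Group (ts_ms, value) metric samples into fixed, non-overlapping
    (tumbling) windows and average each window."""
    windows = {}
    for ts, value in samples:
        start = ts - ts % window_ms   # align timestamp to window start
        windows.setdefault(start, []).append(value)
    return {start: sum(vs) / len(vs) for start, vs in sorted(windows.items())}
```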
A Flink source connector that provides continuous, incremental, streaming events from Kudu tables
Project with Apache Flink for class Distributed Application Environment 2023