Hashtag: #Kestra
CVE-2026-34612: CWE-89: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection'). Kestra, an open-source event-driven orchestration platform, has a SQL Injection vulnerability (CWE-89) in its default docker-compose deployment prior to version 1.3.7. The flaw exists in the GET /api/v1/main/flows/search endpoint, where an…

CRITICAL: Kestra < 1.3.7 vulnerable to SQL Injection (CVSS 10). Authenticated users can achieve RCE. Upgrade to v1.3.7 now to stay protected! radar.offseq.com/threat/cve-2026-34612-cw... #OffSeq #Kestra #SQLInjection

Kestra Secures $25 Million in Series A to Lead Enterprise Workflow Orchestration Kestra has raised $25 million in Series A funding led by RTP Global, aiming to revolutionize enterprise orchestration across data, AI, and workflows.

Kestra Secures $25 Million in Series A to Lead Enterprise Workflow Orchestration #USA #New_York #AI_Orchestration #Kestra #RTP_Global

GitHub - CodingJhames/de-zoomcamp-james

Week 3 #DataZoomcamp done! 🚀

Migrated the DW logic to AWS Athena & S3 ☁️

🔹 20M+ records with #Kestra 🔹 Optimized queries: 310MB → 26MB scan using Partitioning & Clustering 🔹 Mastered cloud-agnostic DW concepts

Check my repo: 🔗 github.com/CodingJhames...

#DataEngineering #AWS #Athena

GitHub - middaycoffee/kestra-pipeline

DAY 4: Module 2 of DataTalks Data Engineering ZoomCamp is complete.

Done:
- @kestra_io workflow orchestration
- ETL pipelines for taxi data
- Backfill & scheduling
- Variables & dynamic flows

My Solution: github.com/middaycoffee...
#dezoomcamp
#kestra
#sql


ELT (extract-load-transform)

The process we learned for handling data:

1. Extract the data
2. Load it into our Data Lake (a repository/bucket for raw data)
3. Transform it with SQL (BigQuery)

All orchestrated by @kestra.io
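The three ELT steps above map naturally onto a Kestra flow. Below is a minimal sketch, not taken from the course materials: the flow id, namespace, URLs, and bucket names are hypothetical, and the GCP plugin task types (`io.kestra.plugin.gcp.gcs.Upload`, `io.kestra.plugin.gcp.bigquery.Query`) may differ by Kestra version.

```yaml
# Hypothetical ELT flow; all names and URIs are placeholders.
id: elt_demo
namespace: zoomcamp.example

tasks:
  - id: extract
    type: io.kestra.plugin.core.http.Download      # 1. extract the raw data
    uri: https://example.com/data.csv

  - id: load_to_lake
    type: io.kestra.plugin.gcp.gcs.Upload          # 2. load it into the bucket (data lake)
    from: "{{ outputs.extract.uri }}"
    to: gs://my-data-lake/raw/data.csv

  - id: transform
    type: io.kestra.plugin.gcp.bigquery.Query      # 3. transform with SQL in BigQuery
    sql: |
      CREATE OR REPLACE TABLE my_dataset.clean AS
      SELECT * FROM my_dataset.raw_external
```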

#dezoomcamp #DataEngineeringZoomcamp #DataTalksClub #Kestra


Kestra

It is our "orchestra conductor": it helps all of our tools and platforms (Python, code, databases, cloud, etc.) work together.

#dezoomcamp #DataEngineeringZoomcamp #DataTalksClub #Kestra

Orchestrating Streams: Episode 2 — Consuming Kafka Topics From Kestra Welcome to Orchestrating Streams, a blog series where I explore the space of data streaming and orchestration. In each post, I will…

I just published Orchestrating Streams: Episode 2 — Consuming Kafka Topics From Kestra medium.com/p/orchestrat...

#OpenSource #ApacheKafka #Kestra #Orchestration


DAY 2: Started Week 2 of DataTalks ZoomCamp 2026!

today: started module 2.
time: 4.5h

- Workflow orchestration
- Kestra
- Writing flow code and executing
- Debugging with the help of AI

#LearnInPublic
#DEZoomCamp
#Kestra
#Docker
#Python

Chore/java 25 by loicmathieu · Pull Request #14221 · kestra-io/kestra

After migrating Kestra from Java 21 to Java 25, we see a significant improvement in memory usage.
It uses 35% less heap and 12% less metaspace!
Upgrading always brings benefits ;)
#java #kestra
github.com/kestra-io/ke...

Orchestrating Streams: Episode 1 — Producing Data from Kestra to Kafka Welcome to Orchestrating Streams, a blog series where I explore the space of data streaming and orchestration. In each post, I will…

It’s been a while since I last wrote a blog post — so I’m starting a new series: Orchestrating Streams!
Episode 1 is out: producing data from Kestra to Kafka ⚡
#ApacheKafka #Kestra #DataEngineering #OpenSource

medium.com/@fhussonnois...


Happy birthday Lulu Wilson, born October 7, 2005.

Full Post
www.facebook.com/photo?fbid=1...

#TodayInNerdHistory #October7 #LuluWilson #StarTrek #StarTrekPicard #Nepenthe #Kestra #Troi #Riker #Louie #BlackBox #TheMillers #DeliverUsFromEvil #TheMoney #HerComposition #Teachers #Birthday


CharacterEscapes: Jackson's hidden gem. At Kestra, the data orchestration platform I work for, we had an issue (#10326, https://github.com/kestra-io/kestra/issues/10326) opened by a user report...

#informatique #jackson #kestra


Kestra 1.0: Revolutionizing Enterprise Automation with Groundbreaking Orchestration Technology Kestra Technologies has launched Kestra 1.0, an innovative orchestration platform designed to unify automation, reduce costs, and streamline workflow management for enterprises worldwide.

Kestra 1.0: Revolutionizing Enterprise Automation with Groundbreaking Orchestration Technology #USA #New_York #Automation #Kestra #Orchestration


Stop fighting complex data pipelines. Kestra is an open-source, event-driven orchestrator that's infinitely scalable. 🧩 Define workflows with simple YAML or a no-code UI, then sync with Git. #opensource #kestra #orchestration #dataops #devops

Peer Review 1: Analyzing Poland's Real Estate Market (Part 1)

## Introduction

Welcome to the first part of **Peer Review 1** for DTC DEZOOMCAMP. This two-part series provides an in-depth review of a data engineering pipeline designed to analyze Poland's real estate market. The project demonstrates the use of modern data engineering tools such as **BigQuery**, **dbt Cloud**, and **Kestra**, along with a **Streamlit** dashboard for visualization. This post will focus on the **problem description**, **data ingestion pipeline**, and the **cloud setup**, while the next post will explore the interactive dashboard and insights.

## Problem Description

The project aims to analyze Poland's real estate market, focusing on rental and sales trends across various cities. By processing and visualizing the data, the following questions are addressed:

* **Which cities have the highest rental or sales activity?**
* **What are the price trends across different cities?**
* **How does the real estate market vary between rentals and sales?**

A dataset from Kaggle, containing apartment prices in Poland, serves as the starting point. This dataset includes details such as city names, transaction types (rent/sale), and prices. The primary challenge lies in transforming the raw CSV data into actionable insights while ensuring scalability and reproducibility.

## Data Ingestion: Batch Processing with Kestra

### Workflow Orchestration

The project employs **Kestra** for handling multiple CSV files and automating the ETL process. The workflow includes:

1. **Data Extraction:** CSV files containing raw real estate data are ingested into the pipeline.
2. **Data Transformation:** Kestra facilitates cleaning and structuring the data for analysis.
3. **Data Loading:** The cleaned data is loaded into both **PostgreSQL** (for local analysis) and **BigQuery** (for cloud-based analysis).

### Why Kestra?

Kestra provides the ability to automate the entire ETL process, ensuring consistency and minimizing manual intervention. Although the dataset isn't updated regularly, the pipeline is scalable and can handle new data efficiently.

### Example Kestra Flow

An example Kestra flow processes the CSV files by:

* Taking file paths and metadata (e.g., month and year) as input.
* Executing tasks for data cleaning, validation, and loading.
* Producing cleaned data as output in BigQuery and PostgreSQL.

## Cloud Setup: BigQuery and dbt Cloud

### BigQuery as the Data Warehouse

BigQuery serves as the data warehouse for storing and querying the transformed data. Its serverless architecture and scalability make it an excellent choice. Key features utilized include:

* **SQL Queries:** Used to analyze price distributions, trends, and city-level activity.
* **Integration with dbt Cloud:** Enables modular and reusable transformations.

### Transformations with dbt Cloud

**dbt Cloud** is employed for data cleaning and structuring. It allows:

* Writing modular SQL models.
* Testing data integrity.
* Creating curated tables with calculated fields like medians, percentiles, and trends.

### Example dbt Configuration

Below is a snippet from the `dbt_project.yml` file:

```yaml
name: 'polish_flats_dbt'
version: '1.0'
config-version: 2
profile: 'default'  # Use the default profile from profiles.yml
model-paths:
  - models
```

### Challenges and Workarounds

* **Challenge:** Streamlit occasionally failed due to sync delays from the US cluster of dbt Cloud.
* **Workaround:** Pre-exported CSVs were used for local analysis, significantly improving performance and reliability.

## Reproducibility

The README file provides detailed instructions for setting up the project locally. These include:

1. Setting up **PostgreSQL** and **Kestra** using Docker.
2. Installing dependencies for dbt and running transformations.
3. Configuring BigQuery and dbt Cloud for seamless integration.

### Running Locally

The following steps can be followed to run the pipeline locally:

1. Clone the repository:

   ```shell
   git clone https://github.com/elgrassa/Data-engineering-professional-certificate.git
   cd Data-engineering-professional-certificate
   ```

2. Start PostgreSQL and Kestra using Docker:

   ```shell
   docker-compose -p kestra-postgres up -d
   ```

3. Install dependencies:

   ```shell
   pip install -r requirements.txt
   pip install dbt-bigquery
   ```

## Conclusion

This post reviewed the problem description, batch data ingestion pipeline with Kestra, and the cloud setup using BigQuery and dbt Cloud. These components form the backbone of the project, enabling efficient ETL processes and scalable storage. The next post will delve into the **Streamlit dashboard**, **visualizations**, and **insights** derived from the data.
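The "Example Kestra Flow" described above is only summarized in prose. A hypothetical YAML reconstruction of its shape is sketched below; this is not the project's actual flow, and the task types (`io.kestra.plugin.scripts.python.Script`, `io.kestra.plugin.gcp.bigquery.Load`), ids, inputs, and table names are all assumptions that may differ by Kestra and plugin version.

```yaml
# Hypothetical reconstruction of the CSV-processing flow described in the review.
id: polish_flats_etl
namespace: peer_review.example

inputs:
  - id: csv_file      # path to one raw CSV (the "file path" input mentioned above)
    type: FILE
  - id: month         # metadata inputs mentioned in the review
    type: INT
  - id: year
    type: INT

tasks:
  - id: clean_and_validate
    type: io.kestra.plugin.scripts.python.Script
    outputFiles:
      - cleaned.csv
    script: |
      # Placeholder cleaning: normalize headers, drop rows without a price.
      import pandas as pd
      df = pd.read_csv("{{ inputs.csv_file }}")
      df.columns = [c.strip().lower() for c in df.columns]
      df = df.dropna(subset=["price"])
      df.to_csv("cleaned.csv", index=False)

  - id: load_bigquery
    type: io.kestra.plugin.gcp.bigquery.Load
    from: "{{ outputs.clean_and_validate.outputFiles['cleaned.csv'] }}"
    destinationTable: "my-project.real_estate.flats_{{ inputs.year }}_{{ inputs.month }}"
    format: CSV
```

A parallel task loading the same cleaned file into PostgreSQL would complete the dual-target loading step the review describes.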
InsightFlow Part 9: Workflow Orchestration with Kestra

# 9. Workflow Orchestration with Kestra

In modern data engineering, orchestrating workflows is a critical component of building reliable, scalable, and automated data pipelines. For the **InsightFlow** project, we leverage **Kestra**, an open-source declarative orchestration platform, to manage the end-to-end workflow of ingesting, transforming, and analyzing retail and economic data from public sources. This blog post will walk you through how Kestra is used in this project and why it is an excellent choice for workflow orchestration.

## Why Kestra?

Kestra is a modern orchestration platform designed to simplify the management of complex workflows. It offers several features that make it ideal for the InsightFlow project:

1. **Declarative Workflow Design**: Workflows are defined in YAML, making them easy to read, version-control, and maintain.
2. **Scalability**: Kestra can handle large-scale workflows with hundreds of tasks, ensuring reliability even under heavy loads.
3. **Extensibility**: With over 600 plugins, Kestra supports a wide range of tasks, including AWS services, database queries, and custom scripts.
4. **Observability**: Kestra provides detailed logs, metrics, and monitoring tools to track workflow execution and troubleshoot issues.
5. **Integration with Modern Tools**: Kestra integrates seamlessly with Git, Terraform, and other tools, enabling a streamlined CI/CD pipeline.

## Kestra in the InsightFlow Project

In the InsightFlow project, Kestra orchestrates the following key workflows:

1. **Data Ingestion**: Fetching raw data from public sources using AWS Batch.
2. **Data Transformation**: Running dbt models to clean, normalize, and structure the data.
3. **Data Cataloging**: Updating the AWS Glue Data Catalog to reflect the latest data.
4. **Testing and Validation**: Running dbt tests to ensure data quality.
5. **Scheduling and Automation**: Automating the entire pipeline to run on a daily schedule.

## Workflow Overview

The Kestra workflow for the production environment is defined in the file `insightflow_prod_pipeline.yml`. Below is an overview of the key tasks:

### 1. Data Ingestion via AWS Batch

The workflow starts by submitting an AWS Batch job to ingest raw data from public sources into the S3 bucket `insightflow-prod-raw-data`. This is achieved using the following task:

```yaml
- id: submit_batch_ingestion_job_cli
  type: io.kestra.core.tasks.scripts.Bash
  commands:
    - |
      echo "Submitting AWS Batch Job..."
      JOB_DEF_NAME="insightflow-prod-ingestion-job-def"
      JOB_QUEUE_NAME="insightflow-prod-job-queue"
      TARGET_BUCKET_NAME="insightflow-prod-raw-data"
      AWS_REGION="ap-southeast-2"
      JOB_NAME="insightflow-ingestion-{{execution.id}}"
      JOB_OUTPUT=$(aws batch submit-job \
        --region "$AWS_REGION" \
        --job-name "$JOB_NAME" \
        --job-queue "$JOB_QUEUE_NAME" \
        --job-definition "$JOB_DEF_NAME" \
        --container-overrides '{
          "environment": [
            {"name": "TARGET_BUCKET", "value": "'"$TARGET_BUCKET_NAME"'"}
          ]
        }')
      JOB_ID=$(echo "$JOB_OUTPUT" | grep -o '"jobId": "[^"]*' | awk -F'"' '{print $4}')
      echo "Submitted Job ID: $JOB_ID"
```

### 2. Updating the Glue Data Catalog

Once the raw data is ingested, the workflow triggers an AWS Glue Crawler to update the Glue Data Catalog. This ensures that the latest data is available for querying in Athena.

```yaml
- id: start_glue_crawler_cli
  type: io.kestra.core.tasks.scripts.Bash
  commands:
    - |
      echo "Starting AWS Glue Crawler..."
      CRAWLER_NAME="insightflow-prod-raw-data-crawler"
      AWS_REGION="ap-southeast-2"
      aws glue start-crawler --region $AWS_REGION --name "$CRAWLER_NAME"
      echo "Crawler $CRAWLER_NAME started."
```

### 3. Running dbt Models

After the data is cataloged, the workflow runs dbt models to transform the raw data into an analysis-ready format. This includes tasks for syncing dbt files, installing dependencies, and running the models.

```yaml
- id: dbt_run
  type: io.kestra.plugin.dbt.cli.DbtCLI
  commands:
    - dbt run --target prod
  namespaceFiles:
    enabled: false
  containerImage: pizofreude/kestra-dbt-athena:latest
```

### 4. Testing and Validation

To ensure data quality, the workflow runs dbt tests on the transformed data. Any issues are logged for further investigation.

```yaml
- id: dbt_test
  type: io.kestra.plugin.dbt.cli.DbtCLI
  commands:
    - dbt test --target prod
  namespaceFiles:
    enabled: false
  containerImage: pizofreude/kestra-dbt-athena:latest
```

### 5. Scheduling

The workflow is scheduled to run daily at 5:00 AM UTC using Kestra's scheduling feature.

```yaml
triggers:
  - id: daily_schedule
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 5 * * *"
```

## Benefits of Using Kestra

1. **Automation**: Kestra automates the entire pipeline, reducing manual intervention and ensuring consistency.
2. **Error Handling**: With built-in retry mechanisms and detailed logs, Kestra makes it easy to identify and resolve issues.
3. **Scalability**: Kestra can handle large-scale workflows with multiple tasks and dependencies.
4. **Flexibility**: The declarative YAML syntax allows for easy customization and extension of workflows.

## Getting Started with Kestra

To set up Kestra for your own projects, follow these steps:

1. **Install Kestra**: Refer to the Kestra documentation for installation instructions.
2. **Define Workflows**: Create YAML files to define your workflows, as shown in the examples above.
3. **Run Workflows**: Use the Kestra UI or CLI to execute and monitor your workflows.
4. **Integrate with CI/CD**: Use Git and Terraform to version-control and deploy your workflows.

## Conclusion

Kestra is a powerful tool for orchestrating workflows in modern data pipelines. In the InsightFlow project, it plays a crucial role in automating the ingestion, transformation, and validation of retail and economic data. By leveraging Kestra's features, we ensure that the pipeline is reliable, scalable, and easy to maintain.

If you're building a similar project, consider using Kestra to simplify your workflow orchestration. For more details, check out the Kestra documentation or explore the InsightFlow repository. Happy orchestrating!

Lulu Wilson
#younghollywood #celeb #celebrity #sexy #hot #ass #booty #fit #becky #startrek #kestra


Find 5+ alternatives to Zapier/ Make Managing workflows and automating processes can be simpler a...

blog.elest.io/find-5-alternatives-to-z...

#N8N #FlowiseAI #Budibase #Kestra #Activepieces


a woman in a purple dress is doing a split

Day 18 / 80 of the #dezoomcamp course! Almost finished with module-2 so here is a quick blog post “Go with the flow…with #kestra.”

tinker0425.github.io/data-enginee...

0 0 0 0