We wrote about the software design patterns that make this possible: feldera.com/blog/ai-agents-arent-coworkers-embed-them-in-your-software
Posts by Feldera
The result is a cleaner division of labor: the agent interprets new information and updates the logic; the engine applies that logic continuously and signals when something has changed. Calm technology, not for humans this time, but for the agents working on our behalf.
That's the model Feldera is built on: an incremental compute engine that emits only what changed, so even queries spanning hundreds of joins don't require recomputation from scratch every time new data arrives. Precise and easy to act on.
One underappreciated place to start: how your database communicates with an agent. With Change Data Capture, the database emits a continuous stream of precise events — what was inserted, updated, deleted, and when. The agent receives exactly what matters, when it matters.
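To make the contrast concrete, here is a minimal sketch of an agent consuming a CDC stream instead of polling a dashboard. The event format below is hypothetical (real CDC formats such as Debezium's, or Feldera's change streams, differ in detail), but the shape is the same: each event names the operation, the table, and the row that changed.

```python
import json

# Hypothetical CDC events: operation + table + changed row.
events = [
    '{"op": "insert", "table": "orders", "row": {"id": 1, "total": 99.0}}',
    '{"op": "update", "table": "orders", "row": {"id": 1, "total": 120.0}}',
    '{"op": "delete", "table": "orders", "row": {"id": 1}}',
]

def react(event: dict) -> str:
    """The agent sees only the delta -- no polling, no diffing snapshots."""
    return f"{event['op']} on {event['table']}: {event['row']}"

actions = [react(json.loads(e)) for e in events]
for action in actions:
    print(action)
```

The agent's context stays small: three events in, three decisions out, with no full-table scans in between.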
Most software was never designed for agentic workflows. That raises a more interesting question: what kinds of software patterns let agents work quietly, react to change, and make progress without constant supervision, at minimal token cost?
Most AI agents are forced to work through interfaces built for humans: dashboards to read, tables to scan, CSVs to parse. So they poll for updates, summarize what they find, work hard to figure out what actually changed, and then usually wait for a human to respond.
What are the software patterns that make agents useful?
-Feldera Demo Library: We launched a new demo repository with 8 pipelines, including agentic fraud detection and agentic fine-grained access control. Every demo runs inside Claude Code with a single command. Try it yourself: github.com/feldera/feldera-demos
-Custom columns in Postgres output: The Postgres connector now supports an extra_columns config option. Write user-defined values, changeable at runtime, into additional columns alongside your pipeline output. Available today to try in our sandbox: try.feldera.com
See a demo of the Feldera x dbt adapter by Raki Rahman.
www.youtube.com/watch?v=ctjt...
-dbt adapter for Feldera: Run Feldera pipelines directly from dbt. Write your models in SQL, point dbt at Feldera, and get incremental computation without changing how you work. This is an early experimental release built by Feldera OSS contributor Raki Rahman (thank you!). More coming soon.
⚡️Shipped This Week
Every week we move fast and build in the open. This week the Feldera team and a community contributor shipped features that make your pipelines more powerful, more resilient, and easier to operate at scale.
Here are some highlights from this week:
Your agents are ready to make decisions in real time.
Is your data platform ready to provide live data?
Feldera is.
Try it now: github.com/feldera/feld...
Tell us about your experience with agentic real-time decision-making.
How do you give AI agents live, actionable signals?
Stay tuned. 👀
CONVERT_TIMEZONE in SQL: Need to convert timezones in your pipeline SQL? Now you can write CONVERT_TIMEZONE(source_tz, target_tz, timestamp) and move on.
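For intuition, here is the same conversion sketched in Python with the standard library's zoneinfo, assuming CONVERT_TIMEZONE interprets a naive timestamp in the source zone and re-expresses it in the target zone (check the Feldera SQL docs for the exact semantics):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def convert_timezone(source_tz: str, target_tz: str, ts: datetime) -> datetime:
    # Interpret the naive timestamp in source_tz, re-express it in
    # target_tz, then drop the zone to return a naive timestamp again.
    return (
        ts.replace(tzinfo=ZoneInfo(source_tz))
        .astimezone(ZoneInfo(target_tz))
        .replace(tzinfo=None)
    )

# 12:00 UTC on June 1 is 08:00 in New York (EDT, UTC-4).
print(convert_timezone("UTC", "America/New_York", datetime(2024, 6, 1, 12, 0)))
```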
All of this is live in our sandbox right now at try.feldera.com. No infrastructure or setup required.
Grafana dashboard update: Full visibility into your pipeline, out of the box. Latency, throughput, memory, storage, and connector rates are all there and configurable from the moment you connect.
Constant-time Delta writes: The Delta connector now creates automatic checkpoints in the Delta Lake tables it writes to, keeping write time constant regardless of how many commits have accumulated. A pipeline that has been running for months performs the same as one that just started.
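Why checkpoints keep writes constant-time: in a Delta-style transaction log, table state is reconstructed by replaying commits, so without checkpoints the replay grows with history. The toy model below (illustrative only, not Feldera's or Delta Lake's actual code) snapshots state every N commits so only the tail of the log is ever replayed:

```python
# Toy model of a commit log with periodic checkpoints. State is rebuilt
# from the last snapshot plus the commits after it, so the replay cost is
# bounded by CHECKPOINT_EVERY no matter how long the history grows.
CHECKPOINT_EVERY = 10

class TableLog:
    def __init__(self):
        self.commits = []            # full commit history: (key, value)
        self.checkpoint = (0, {})    # (commits covered, snapshotted state)

    def commit(self, key, value):
        self.commits.append((key, value))
        if len(self.commits) % CHECKPOINT_EVERY == 0:
            self.checkpoint = (len(self.commits), self.state())

    def state(self):
        covered, snapshot = self.checkpoint
        state = dict(snapshot)
        for key, value in self.commits[covered:]:   # bounded replay
            state[key] = value
        return state

    def replay_cost(self):
        # Commits replayed on the next read/write; stays < CHECKPOINT_EVERY.
        return len(self.commits) - self.checkpoint[0]

log = TableLog()
for i in range(25):
    log.commit("k", i)
print(log.state()["k"], log.replay_cost())   # latest value, bounded replay
```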
Every week our compute engine gets more reliable, more observable, and more predictable at scale. Your pipelines should just work at any size, for as long as you need them to.
Here are a few highlights from this week:
dbt has millions of users. Data engineers who live in dbt now have a direct path to Feldera. Thank you, Raki! This one means a lot. 🙌
Building something on Feldera? We want to hear about it.
🔗 github.com/feldera/feld...
We love seeing contributions to our OSS community!
Raki Rahman knows the dbt ecosystem well, and he chose to build a dbt adapter for Feldera, complete with integration tests, a demo video, and a Python SDK to be released on PyPI. He even reached out to the dbt docs team to get it listed officially.
To users accustomed to traditional batch analytics, this looks like the batch job completing instantly.
We wrote a guide on how to get there from your existing Spark jobs.
docs.feldera.com/use_cases/ba...
The laws of computational complexity haven't changed. But you don't need to make batch jobs faster. You can replace them with always-on incremental pipelines that update results in real time as input data changes.
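The difference can be sketched in a few lines: a batch job rescans all the data on every refresh, while an incremental pipeline folds each change into a standing result and arrives at the same answer. (An illustrative sketch of the principle, not Feldera's engine.)

```python
def batch_total(all_rows):
    # Batch: O(n) full rescan on every refresh.
    return sum(all_rows)

class IncrementalTotal:
    # Incremental: O(1) work per change, result always up to date.
    def __init__(self):
        self.total = 0

    def apply(self, delta):
        # +value for an insert, -value for a delete.
        self.total += delta

rows = [10, 20, 30]
inc = IncrementalTotal()
for r in rows:
    inc.apply(+r)

inc.apply(-20)      # a deletion arrives as a negative delta
rows.remove(20)     # the batch job only sees it on the next full rescan

print(inc.total, batch_total(rows))   # same answer, very different cost
```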
What would it mean for your business if all your batch jobs completed instantly?
How many pipelines are you maintaining just to keep data fresh?
What new use cases would you unlock if complex analytical queries returned answers the moment a user submits a form, makes a payment, or visits a branch?
All of this is live in our sandbox right now at try.feldera.com. No infrastructure or setup required.
→ Negative weight merging: When deletions accumulate in the spine faster than they get merged, lookups slow down because the engine has to step over records that will eventually cancel out. We now promote those batches up the merge levels more eagerly.
→ Smarter storage spill: The engine was writing large batches to disk defensively even when memory pressure didn't warrant it. It now only spills aggressively when the memory backpressure mechanism actually says to. Pipelines that don't need to spill stop paying the cost of it.
→ Kafka backpressure wakeup: With synchronize_partitions enabled, receiver threads were waiting for up to a second after backpressure cleared instead of waking up on a signal. Fixing this significantly improved performance for pipelines with this feature enabled.
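The cancellation behind the negative-weight fix can be modeled simply: records carry weights (+1 for an insert, -1 for a delete), and merging batches adds weights so records that sum to zero disappear. This is a simplified model of the idea; the engine's actual spine and merge machinery is more involved.

```python
from collections import Counter

def merge(a: Counter, b: Counter) -> Counter:
    # Add weights record-by-record; drop records whose weights cancel
    # to zero so later lookups never have to step over them.
    merged = Counter(a)
    for record, weight in b.items():
        merged[record] += weight
    return Counter({r: w for r, w in merged.items() if w != 0})

inserts = Counter({"row-1": 1, "row-2": 1})
deletes = Counter({"row-1": -1})

merged = merge(inserts, deletes)
print(merged)   # row-1 cancelled out; only row-2 remains
```

Until the merge happens, lookups pay for every un-cancelled pair, which is why promoting deletion-heavy batches up the merge levels sooner helps.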
Most data pipelines do 10x more work than the problem requires. Every day we ship features and improvements to Feldera that fix that.
Here are a few highlights from this week:
The result: engineers can now see exactly where time is being spent, down to the millisecond.
If you care about performance observability in real-time data systems, this one's worth a read.
👇
www.feldera.com/blog/making-...
Our latest blog dives into how we leveled up our profiling capabilities, adding detailed timing markers for pipeline steps, operator evaluations, LSM tree merges, and network activity across multi-host pipelines.
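The general pattern behind timing markers is small: wrap each step in a named span and record its elapsed time. A generic sketch (not Feldera's profiler, which instruments the engine internally):

```python
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def span(name: str):
    # Record elapsed milliseconds for the wrapped step under `name`.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = (time.perf_counter() - start) * 1000.0

with span("operator_eval"):
    sum(range(100_000))   # stand-in for real pipeline work

print(timings)   # e.g. {"operator_eval": <elapsed ms>}
```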