
Posts by Jason Miles

The Ideal Microsoft Fabric CI/CD Approach: Git for Change, Deployment Pipelines for Promotion, and a Code-First Escape Hatch Microsoft Fabric CI/CD has a reputation for being confusing—usually because people look at Git integration and Deployment Pipelines as competing ideas rather than two halves of a single delivery story. The good news is that the “ideal” approach is not exotic. It’s a handoff: Use Git integration to support real developer workflows (including branching that maps cleanly to isolated workspaces). Use Deployment Pipelines to promote approved changes across environments. When you need richer approvals, tests, and release controls, let traditional tooling—especially GitHub Actions or Azure DevOps Pipelines—orchestrate promotions via Fabric APIs.

#MicrosoftFabric CI/CD and #DataOps are easier than they look—once you stop trying to make pipelines do collaboration. The “ideal” pattern is a handoff: Git integration for change, Deployment Pipelines for promotion, and traditional #CICD for #governance.

2 months ago
The NotebookUtils Gems I Wish More Fabric Notebooks Used Most Fabric notebook code I review has the same telltale shape: a little Spark, a hardcoded path (or three), and just enough glue logic to “get it to run.” And then, a month later, someone copies it into another workspace and everything breaks. NotebookUtils is one of the easiest ways to avoid that fate. It’s built into Fabric notebooks, it’s designed for the common “day two” problems (orchestration, configuration, identities, file movement), and it’s still surprisingly underused. NotebookUtils is also the successor to mssparkutils—backward compatible today, but clearly where Microsoft is investing going forward.

If your Fabric notebooks keep breaking when you copy them to a new workspace, it’s probably not your Spark—it’s your glue code. Here are the underused NotebookUtils functions that make notebooks modular, portable, and production-friendly. #MicrosoftFabric #DataEngineering #OneLake
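The portability point above is mostly about where environment-specific values live. As a minimal sketch (plain Python, not the NotebookUtils API itself): derive OneLake paths from parameters instead of hardcoding them, so the glue code survives a copy to another workspace. The URL shape and all names here are illustrative assumptions; inside Fabric you would pass the result to `notebookutils` file-system calls.

```python
# Illustrative only: build a OneLake file path from its moving parts instead
# of hardcoding it, so the same notebook works after being copied between
# workspaces. The path shape and names below are assumptions; in Fabric the
# resulting path would be handed to notebookutils file-system helpers.

def onelake_path(workspace: str, lakehouse: str, relative: str) -> str:
    """Assemble an ABFS-style OneLake path from three parameters."""
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/Files/{relative.lstrip('/')}"
    )

# The only environment-specific values live here (or arrive as notebook
# parameters), not scattered through the glue code.
config = {"workspace": "Sales-Dev", "lakehouse": "Bronze",
          "relative": "raw/orders.csv"}

print(onelake_path(**config))
```

Swapping `config` (or passing parameters from an orchestrating notebook) is then the entire migration story for the glue code.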

2 months ago
DirectLake Without OneLake Access: A Fixed-Identity Pattern That Keeps the Lakehouse Off-Limits There’s a moment that catches a lot of Fabric teams off guard. You publish a beautiful report on a DirectLake semantic model. Users can slice, filter, and explore exactly the way you intended. Then someone asks, “Why can I open the lakehouse and browse the tables?” Or worse: “Why can I query the SQL analytics endpoint directly?” If your objective is semantic model consumption without lake access, the default DirectLake behavior can feel like it’s working against you. By default, DirectLake uses Microsoft Entra ID single sign-on (SSO)—meaning the viewer’s identity…

Stop granting lakehouse permissions just so people can read a report.

This fixed-identity DirectLake pattern cleanly separates semantic model consumption from OneLake access—and it’s one of the most practical #DataGovernance moves you can make in #MicrosoftFabric today. #DirectLake #PowerBI

2 months ago
Workspace Sprawl Isn’t Your Fabric Problem—Stale Workspaces Are “Do we really need another workspace?” If you’ve built anything meaningful in Microsoft Fabric, you’ve heard some version of that question. It usually comes wrapped in a familiar anxiety: workspace sprawl. Too many containers. Too much to govern. Too hard to manage. Here’s the reframing that matters: workspace count is rarely the risk. The real risk is stale workspaces and stale data—the forgotten corners of your tenant where ownership is unclear, permissions linger, and the platform quietly accumulates operational and compliance debt. In this post I’ll walk through why “workspace sprawl” is a false fear, why workspaces naturally form clusters (and why good development multiplies them), and how intentional permissioning in Microsoft Entra and Fabric keeps management from becoming a linear slog—especially once you introduce automation and tooling.

“#WorkspaceSprawl” in #MSFabric is mostly a myth. The real risk is stale workspaces: unclear ownership, lingering access, and old data that never dies. If your #Governance model still scales linearly with workspace count, it’s time to shift so you can take full advantage of what #workspaces offer.

2 months ago
Fabric Environments Feel Like a Turbo Button—Until Private Link Gets Involved If you’ve spent any real time in notebooks, you’ve felt it: the “why am I doing this again?” moment. You start a session, install the same libraries, chase a version mismatch, restart a kernel, and finally get back to what you actually came to Fabric to do. Microsoft Fabric Environments are a strong answer to that pain. They pull your Spark runtime choice, compute settings, and library dependencies into a reusable, shareable artifact you can attach to notebooks and Spark Job Definitions. And with the latest previews—Azure Artifact Feed support inside Environments and Fabric Runtime 2.0 Experimental—it’s clear Microsoft is investing in making Spark development in Microsoft Fabric more repeatable and more “team ready.”

Environments are a game-changer for repeatable Spark work—until Private Link forces you to rethink how you ship libraries. If you’re adopting Runtime 2.0 or Azure Artifact Feeds in preview, make sure your network security posture isn’t quietly taking options away. #MicrosoftFabric #DataEngineering

2 months ago
The Hidden Permission Chain Behind Cross-Workspace Lakehouse Shortcuts (for Semantic Models) One of the cleanest patterns in Microsoft Fabric is splitting your world in two: a “data product” workspace that owns curated lakehouses, and an “analytics” workspace that owns semantic models and reports. You connect the two with a OneLake shortcut, and suddenly you’ve avoided copies, reduced refresh complexity, and kept your architecture tidy. Then the first DirectLake semantic model hits that shortcut and… the tables don’t load. This post walks through what’s really happening in that moment in Microsoft Fabric, what permissions you actually need (and where), and how to tighten the whole pattern with OneLake Security instead of simply widening access.

Cross-workspace #OneLake #Shortcuts are one of the best architectural patterns in #MSFabric—until your #SemanticModel hits a 403 and nobody’s sure why. The fix isn’t “make them a workspace member.” It’s understanding the permission chain.

2 months ago
Stop Paying Hot-Tier Prices for Cold Data: Using ADLS Gen2 to Tame Fabric Ingestion Storage Costs If you’ve been living in Microsoft Fabric for a few months, you’ve probably felt it: the platform makes it incredibly easy to ingest data… and surprisingly easy to rack up storage spend while you’re doing it (especially considering how much storage is included). The pattern is common. A team starts with a Lakehouse, adds Pipelines or Dataflows Gen2 for ingestion, follows a sensible medallion approach, and before long they’re keeping “just in case” raw files, repeated snapshots, and long-running history inside OneLake—often at the same performance tier as yesterday’s data.

#MSFabric storage got expensive? You’re not alone. The fix usually isn’t “delete data”—it’s separating #Archives from analytics storage. In this deep dive, I walk through how to use ADLS Gen2 + #OneLake shortcuts + trusted workspace access to cut storage bloat while keeping Fabric workflows intact.

2 months ago
Data Quality as Code in Fabric: Declarative Checks on Materialized Lake Views If you’ve ever shipped a “clean” silver or gold table only to discover (later) that it quietly included null keys, impossible dates, or negative quantities… you already know the real pain of data quality. The frustration isn’t that bad data exists. The frustration is that quality rules often live somewhere else: in a notebook cell, in a pipeline activity, in a dashboard someone checks (sometimes), or in tribal knowledge that never quite becomes a contract. Microsoft Fabric’s Materialized Lake Views (MLVs) give you a more disciplined option: you can define declarative data quality checks…

Your lakehouse doesn’t need more dashboards that claim the data is clean. It needs quality rules that run where your data is built—and signals that tell you when quality drifts. Use #MaterializedLakeViews to add declarative #DataQuality constraints to #MSFabric's lineage, with #PowerBI reports!
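The core shift the post describes is expressing quality rules as declarations rather than scattered procedural steps. A plain-Python sketch of that idea (in Fabric the rules would be constraints attached to an MLV in SQL; the rule names and columns here are hypothetical):

```python
# Quality rules expressed as data, not buried in pipeline steps. This is a
# plain-Python sketch of the declarative idea; in Fabric the rules would be
# checks declared on a Materialized Lake View instead. Names are illustrative.

RULES = [
    ("order_id is not null", lambda r: r["order_id"] is not None),
    ("quantity is positive", lambda r: r["quantity"] > 0),
]

def apply_rules(rows):
    """Split rows into (passed, violations); each entry is (row, failed_rules)."""
    passed, violations = [], []
    for row in rows:
        failed = [name for name, check in RULES if not check(row)]
        (violations if failed else passed).append((row, failed))
    return passed, violations

rows = [
    {"order_id": 1, "quantity": 3},
    {"order_id": None, "quantity": 2},
    {"order_id": 2, "quantity": -1},
]
passed, violations = apply_rules(rows)
print(len(passed), len(violations))  # 1 passing row, 2 violations
```

Because the rules are data, the same list can drive both enforcement and a drift report, which is the “signals that tell you when quality drifts” half of the argument.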

2 months ago
The Advanced Lakehouse Data Product: Shortcuts In, Materialized Views Through, Versioned Schemas Out There’s a familiar tension in modern analytics: teams want data products that are easy to discover and safe to consume, but they also want to move fast—often faster than the governance model can tolerate. In Microsoft Fabric, that tension frequently shows up as a perception of workspace sprawl. A “single product per workspace” model is clean on paper—strong boundaries, tidy ownership, straightforward promotion—but it can quickly turn into dozens (or hundreds) of workspaces to curate, secure, and operate. This post proposes a different pattern—an advanced lakehouse approach that treats the lakehouse itself like a product factory:

A “workspace per data product” sounds clean—until you have 60 #DataProducts. This advanced lakehouse pattern uses shortcuts + #MaterializedLakeViews + versioned schemas to deliver left-shifted data products with #OneLakeSecurity, while keeping the perception of #MSFabric sprawl under control.

2 months ago
Freeze-and-Squash: Turning Snapshot Tables into a Versioned Change Feed with Fabric Materialized Lake Views Periodic snapshots are a gift and a curse. They’re a gift because they’re easy to land: each load is a complete “as-of” picture, and ingestion rarely needs fancy orchestration. They’re a curse because the moment you want history with meaning—a clean versioned change feed, a Type 2 dimension, a Data Vault satellite—you’re suddenly writing heavy window logic, MERGEs, and stateful pipelines that are harder to reason about than the business problem you were trying to solve. This post describes a Fabric Materialized Lake View (MLV) pattern that “squashes” a rolling set of snapshot tables down into a bounded, versioned change feed by pairing a chain of MLVs with a periodically refreshed frozen table.

#DataSnapshots don’t have to doom you to heavy MERGEs or unbounded refreshes. This #MSFabric #MLV “freeze-and-squash” pattern turns periodic snapshots into a bounded, reusable change feed that can drive both Type 2 dimensions and #DataVault artifacts—without abandoning an MLV-forward architecture.
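The “squash” half of the pattern can be shown in miniature: walk the snapshots in order and emit a row only when a key’s value actually changed, yielding a versioned change feed. In the post this is done with chained Materialized Lake Views; the data shapes below are illustrative assumptions.

```python
# A miniature of the "squash" step: collapse a series of full snapshots into
# a change feed that records a row only when its value changed. The real
# pattern uses chained Materialized Lake Views; field names are illustrative.

def squash(snapshots):
    """snapshots: list of (as_of, {key: value}) in date order.
    Returns change-feed rows (key, value, valid_from)."""
    feed, last_seen = [], {}
    for as_of, table in snapshots:
        for key, value in sorted(table.items()):
            if last_seen.get(key) != value:
                feed.append((key, value, as_of))  # new version of this key
                last_seen[key] = value
    return feed

snapshots = [
    ("2024-01-01", {"A": 10, "B": 5}),
    ("2024-01-02", {"A": 10, "B": 7}),   # only B changed
    ("2024-01-03", {"A": 12, "B": 7}),   # only A changed
]
for row in squash(snapshots):
    print(row)
```

The “freeze” half of the pattern then bounds how many snapshots this comparison ever has to scan, which is what keeps the refresh cost from growing without limit.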

2 months ago
Ship Your Lakehouse Like Code: Deploying MLVs with a SQL-Only Configuration Notebook If you’re building with Materialized Lake Views (MLVs), you’ve probably felt the tension: the definitions live in code, but the Lakehouse itself is an environment-specific artifact. That gap is where deployments get messy—schemas drift, tables don’t exist yet, and MLV refresh behavior looks “random” when it’s really just reacting to configuration. This post lays out a pattern that closes that gap cleanly: a lakehouse configuration notebook that you promote through your deployment pipeline and run in every environment to create schemas, tables, and MLVs idempotently—using SQL cells only. The key is that MLVs are treated as “definition-driven assets” that can be iterated in dev and re-stamped into test/prod with the same notebook.

Want your #MSFabric #MaterializedLakeView deployments to stop being “it worked in dev” stories? Treat your #Lakehouse like code: one SQL-only configuration notebook, promoted through your pipeline, idempotent in every environment—with CDF set intentionally in the final cell.
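Idempotency is the load-bearing word above: every object is created behind an existence guard, so re-running the notebook in any environment converges on the same state. A hedged sketch of that construction (Python generating the DDL; the schema and table names are hypothetical, and real Fabric SQL cells would run the statements directly):

```python
# The heart of the pattern: every object is created with an IF NOT EXISTS
# guard, so the same configuration can be re-stamped into dev, test, and
# prod without failing or drifting. Schema/table names are hypothetical.

SPEC = {
    "silver": ["CREATE TABLE IF NOT EXISTS silver.orders (id INT, qty INT)"],
    "gold":   ["CREATE TABLE IF NOT EXISTS gold.orders_daily (day DATE, qty INT)"],
}

def ddl_statements(spec):
    """Emit schema-then-table DDL; safe to run any number of times."""
    stmts = []
    for schema, tables in spec.items():
        stmts.append(f"CREATE SCHEMA IF NOT EXISTS {schema}")
        stmts.extend(tables)
    return stmts

run1 = ddl_statements(SPEC)
run2 = ddl_statements(SPEC)
assert run1 == run2  # identical output every run: idempotent by construction
print(len(run1))     # 4 statements
```

Promoting the spec (not the environments) through the pipeline is what turns “it worked in dev” into a repeatable deployment.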

2 months ago
Delta First: Building Efficient Bitemporal Tables in Microsoft Fabric In financial services, the questions that matter most are rarely answered by “the latest record.” Regulators, auditors, model validators, and operations teams want something more specific: what was true for the business at the time, and what did we know at the time? That’s bitemporal thinking—and it’s exactly the kind of problem where Microsoft Fabric’s Lakehouse on Delta becomes more than storage. It becomes a practical design advantage. In this post, I’m going to walk through what bitemporal tables actually require, why intervals matter (ValidFrom/ValidTo), and how to implement bitemporal efficiently in Fabric by leaning into #DeltaLake in the Lakehouse.

#Bitemporal isn’t extra history—it’s operational clarity: what was true, and what did we know, at the time. Here’s why #MSFabric Lakehouse on Delta is a powerful bitemporal implementation, plus how materialized lake views can own interval closure and when #AzureSQL belongs in the mix.
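The “interval closure” move is the essence of bitemporal storage: a correction never updates a row in place; it closes the open knowledge interval and appends a new one. A minimal sketch of just the knowledge dimension (the validity dimension works the same way; field names are hypothetical):

```python
from datetime import date

OPEN = date.max  # sentinel for "still our current belief"

def correct(history, key, new_value, known_on):
    """Apply a correction bitemporally: close the open knowledge interval
    for `key` and append a new row. Nothing is ever updated in place."""
    out = []
    for row in history:
        if row["key"] == key and row["known_to"] == OPEN:
            row = {**row, "known_to": known_on}  # close what we believed
        out.append(row)
    out.append({"key": key, "value": new_value,
                "known_from": known_on, "known_to": OPEN})
    return out

history = [{"key": "acct-1", "value": 100,
            "known_from": date(2024, 1, 1), "known_to": OPEN}]
history = correct(history, "acct-1", 95, date(2024, 2, 1))
for row in history:
    print(row["value"], row["known_from"], row["known_to"])
```

Answering “what did we know on date D” is then just a filter on `known_from <= D < known_to`, which is the query shape the post builds toward on Delta.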

3 months ago
DirectLake on OneLake CI/CD: A Practical Two-Step Deployment Pattern with Sempy Labs + Variable Libraries DirectLake on OneLake is one of those “this is what we’ve been waiting for” features in Microsoft Fabric—until you try to deploy it cleanly across Dev → Test → Prod and realize you’ve re-entered the world of post-deployment manual fixes. In this how-to, I’m going to do three things: Contrast DirectLake on SQL endpoints (the “classic” flavor) with DirectLake on OneLake (the newer flavor), and explain why OneLake is worth the trouble. Walk through the normal deployment pipeline approach that works well for DirectLake on SQL. Show a two-step, semi-automated approach for DirectLake on OneLake using: …

DirectLake on OneLake is the semantic layer many Fabric teams want—but CI/CD is still catching up. Here’s a practical two-step deployment pattern using Sempy Labs + Variable Libraries to rebind DirectLake models automatically after promotion. #MicrosoftFabric #PowerBI #DirectLake #CICD

3 months ago
Two Flavors of DirectLake: Over SQL vs. Over OneLake (and How to Switch Without Surprises) DirectLake has a way of sounding wonderfully simple: “Power BI, but it reads the lake directly.” Then you build two semantic models that both say DirectLake, and they behave… differently. One falls back to DirectQuery when you least expect it. Another refuses to touch your SQL views. Security works for you, but not for your report consumers. Suddenly, “DirectLake” feels less like a feature and more like a riddle. The good news: this is explainable. And once you understand the two flavors—DirectLake over SQL and DirectLake over OneLake…

#DirectLake isn’t “one mode,” it’s two. If your #MSFabric #PowerBI semantic model is slow, failing security tests, or behaving inconsistently, there’s a good chance you’re running the wrong DirectLake flavor (or falling back without realizing it).

3 months ago
From Tables to Networks: A Deep Dive into Graph in Microsoft Fabric for Financial Services Insights Most financial services data is already “connected.” It just isn’t modeled that way. Fraud rings don’t show up as a single row. Money laundering doesn’t announce itself in one transaction. Counterparty exposure isn’t obvious from one booking. The meaningful signal lives in relationships: who shares an address, which accounts route funds through the same nodes, where devices and identities overlap, and how risk propagates through a network. Graph in Microsoft Fabric is designed for exactly that: turning your OneLake data into a connected model you can explore visually, query with GQL, and enrich with built-in graph algorithms—without standing up a separate graph stack and duplicating data.

Your #fraud and #AML signals are hiding in plain sight, inside the relationships your tables don’t model well. This post shows how #Graph in #MicrosoftFabric turns #OneLake data into a network you can query and explore visually—so investigations and insights start from connections, not joins.

3 months ago
Gold as the Contract: Schema Evolution, Data Products, and Governance in Microsoft Fabric A schema change is rarely “just a schema change.” It’s the moment an upstream team’s perfectly reasonable adjustment becomes a downstream team’s broken report, confusing metric, or silent misinterpretation. And that’s why schema evolution has always been a source of anxiety: a schema isn’t simply structure—it’s an interface. In this post, I’ll do three things. First, I’ll ground why schema evolution has historically been such a persistent concern. Next, I’ll reframe Medallion with Gold as the published surface area of the data product, and Silver as an optional workshop layer where data is supplemented and transformed.

Stop treating #SchemaEvolution as an engineering failure. In a modern #MicrosoftFabric #Lakehouse, it’s an interface management problem: Bronze absorbs change, Silver optionally augments it, and Gold publishes the #DataProduct contract—governed, versioned, and consumable. #DataGovernance
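If Gold is an interface, a schema change can be classified before it ships: additive changes are safe, while removals and type changes break consumers. A hedged sketch of that contract check (a plain comparison of `{column: type}` maps; column names are hypothetical, not from the post):

```python
# A sketch of treating the Gold schema as a contract: classify a proposed
# change before publishing. New columns are additive; dropping or retyping
# an existing column breaks downstream consumers. Names are illustrative.

def breaking_changes(old: dict, new: dict):
    """Compare {column: type} schemas; return the list of breaking changes."""
    problems = []
    for col, typ in old.items():
        if col not in new:
            problems.append(f"dropped column: {col}")
        elif new[col] != typ:
            problems.append(f"type change: {col} {typ} -> {new[col]}")
    return problems  # columns only present in `new` are additive, not breaking

v1 = {"customer_id": "string", "revenue": "decimal"}
v2 = {"customer_id": "string", "revenue": "double", "region": "string"}
print(breaking_changes(v1, v2))  # ['type change: revenue decimal -> double']
```

Wiring a check like this into promotion is what turns “governed, versioned, and consumable” from a slogan into a gate.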

3 months ago
Power BI Copilot Has Multiple Modes. Here’s What Each One Does—and How Fabric Data Agents Change the Game. Copilot in Power BI isn’t “one feature.” It’s a growing set of experiences that show up in different places, behave differently, and—most importantly—solve different problems. That’s why two people can both say “Copilot didn’t work for me,” and both be right. One might be trying to generate a report page in Desktop. Another might be trying to chat across any model in their tenant. A third might be expecting an agent-like experience that stays grounded in a curated subject area. In this post, we’ll map the major modes of Power BI Copilot (where it shows up, what it’s best at, and what it’s…

Power BI Copilot isn’t one tool—it’s a set of modes. Once you understand where each mode shines (and where Fabric Data Agents fit), you can design an AI analytics experience users actually trust—and actually use. #PowerBI #MicrosoftFabric #Copilot #DataGovernance

3 months ago
From ClickOps to Confidence: CI/CD Best Practices for Microsoft Fabric If you’ve ever clicked Deploy in Fabric, watched the spinner, and hoped nothing “mysteriously” changed in Test or Prod… you’re not alone. Microsoft Fabric has made it possible to manage analytics artifacts like software. But getting to reliable releases—repeatable deployments, environment-safe configuration, and auditable changes—still takes intent. In this post, I’ll walk through a practical, production-minded approach to CI/CD in Microsoft Fabric: how to structure Deployment Pipelines, where Git fits, how Variable Libraries and Deployment Rules reduce environment drift, and when to lean on Fabric-CICD and APIs to move beyond the UI.

Still #Deploying #MicrosoftFabric changes by “click and pray”? A disciplined mix of #Git, #VariableLibraries, #DeploymentRules, and #DeploymentPipelines turns #Releases into something you can repeat—and trust.

3 months ago
When Facts Don’t Live in a Domain: Why Data Products Beat Pure Domain-Driven Data Engineering There’s a pattern I see in mature analytics organizations: as soon as the data platform gets big enough to feel “enterprise,” someone reaches for domain-driven design (DDD) as the organizing principle for data engineering and governance. It’s an understandable move. DDD gives us language for ownership, boundaries, and “this team is responsible for that thing.” And when you’re trying to untangle a spaghetti warehouse, that sounds like oxygen. But here’s the catch: the most valuable data in a warehouse—the fact tables in traditional star schemas—often doesn’t live neatly inside a single domain.

Stop trying to force fact tables into a single “domain.” The highest-value data usually lives at the intersections—and that’s exactly where #DomainDriven #DataEngineering gets messy.
A #DataProduct model makes intersection facts easier to own, to govern, and to consume. #MicrosoftFabric

3 months ago
Materialize Responsibly: How Fabric’s External Data Materialization Affects “Zero Unmanaged Copy” — and Where Materialized Lake Views Now Shine Microsoft Fabric’s Warehouse can now materialize external files into tables with straight‑ahead T‑SQL, and Materialized Lake Views (MLVs) have quietly leveled up with optimal refresh (including incremental) and stronger, UI‑backed monitoring. If your north star is Zero unmanaged copy, the question isn’t “should I materialize?”—it’s “how do I materialize responsibly under OneLake governance?” Here’s what changed since our last take—and what to use when.

Materialize responsibly. #MicrosoftFabric’s Warehouse can land files in seconds—and now #MaterializedLakeViews add optimal refresh (incremental/full/skip) with #DataQuality and #Lineage. Here’s how to stay true to zero unmanaged copy with #OneLakeSecurity and Outbound Access Protection in the loop.

3 months ago
Real‑Time Data Isn’t Free: The Complexity and Cost Tradeoffs (From Trickle to Internet‑Class) The first time someone asks for “real‑time,” it sounds like a small tweak: refresh the dashboard faster, trigger an alert sooner, show a counter that feels alive. In a data platform, that single request quietly changes everything—how you ingest, how you process, how you serve, and how you operate. This post keeps it practical. It frames real‑time as a freshness target (not a vibe), walks through the two taxes real‑time introduces—architectural complexity and cost—and shows how patterns evolve as you scale from modest #StreamingData to internet‑class velocity. It also folds in recent Microsoft Ignite announcements that matter for real‑time platforms, including SQL Server 2025’s “change event streaming” and near real‑time analytics via OneLake/Fabric mirroring, plus the continued maturation of Microsoft Fabric’s Real‑Time Intelligence building blocks.

“Real‑time” isn’t a toggle—it’s a tax. This update breaks down the complexity and cost curve from trickle‑scale to internet‑class, plus what Ignite 2025 signals about mirroring, CDC, and integrated streaming stacks. #RealTimeAnalytics #StreamingData #DataPlatforms #CostOptimization

3 months ago
2025 Year in Review: When Microsoft Fabric and Microsoft Purview Turned “Data + AI” Into a Governed Operating Model By the end of 2025, the conversation around analytics stopped being about dashboards and started sounding a lot more like operations. The rise of autonomous and semi-autonomous agents put a sharper edge on an old truth: AI only becomes an enterprise capability when the underlying data is trusted, discoverable, and defensible. Microsoft Fabric and Microsoft Purview spent 2025 building toward that reality from opposite (but increasingly overlapping) sides of the house. Fabric pushed the platform forward—unifying workloads, expanding OneLake, and adding new intelligence and database capabilities designed for AI-era workloads. Purview tightened the governance and security loop—making data quality, cataloging, risk visibility, and policy enforcement feel less like a separate initiative and more like part of the daily flow.

#2025 was the year Microsoft stopped treating “data + AI + governance” as three separate initiatives. #MicrosoftFabric expanded into a true AI-era data estate (databases, OneLake interoperability, #FabricIQ, and agents), while #MicrosoftPurview pulled governance and security into the workflow.

3 months ago
Edit, Retarget, and Redeploy: A Practical TMDL Folder Workflow for Fabric Semantic Models There’s a moment in every Fabric semantic model lifecycle where the “click it in the UI” approach stops scaling. It usually happens when you need to rename dozens (or hundreds) of fields to match a business glossary, or when Dev is stable and you’re ready to point the same model at a new Lakehouse for Test/Prod. That’s when the model stops being a diagram and starts being an artifact—something you want to treat like code. This guide reflows the whole workflow end-to-end, using the Fabric service Edit in Desktop experience to open the model, exporting it to a PBIP project stored as a TMDL folder, editing that folder externally (no scripting inside Power BI Desktop), and then getting those changes back into the service—

Stop rebuilding #SemanticModels just to rename 100 columns or repoint to a new Lakehouse.
This walkthrough shows a clean PBIP + TMDL folder workflow for #MicrosoftFabric semantic models—including how to retarget the entire model (or a single table) to a different Lakehouse. #PowerBI #DataModeling

3 months ago
From Warehouses to Products: SAMR for Your Cloud Data Platform Financial Services, Insurance, Wealth Management, and Professional Services have a gift—and a curse—when it comes to data. The gift is that these industries know how to run critical systems with discipline. The curse is that we’re so good at controlling risk that we often rebuild the same constraints in every new platform we adopt. That’s why so many “modern cloud data platforms” in these sectors end up feeling like the old data warehouse with a new hosting model: better infrastructure, familiar bottlenecks.

Cloud migrations in Financial Services, Insurance, Wealth, and Professional Services often recreate the same old warehouse dynamics—just with better infrastructure. This post applies the SAMR lens to data platforms and shows how data products + design thinking help you move from “inventory” to…

4 months ago
From “Should‑Do” to Done: Digital Workers for Wealth, Energy, and Financial Services Every enterprise carries a shadow backlog—the should‑do work that never beats the urgent. It’s the reconciliation that almost closes, the control that’s “fine for now,” the evidence that exists but isn’t filed where audit will accept it. None of these items is existential in isolation; together they become trust debt: silent risk, rework, slower decisions, and reputational drag. 2025 amplified the problem.

Trust debt hides in the “should‑do” list. See how policy‑aware digital workers plug into existing systems, finish the work that never makes the sprint, and hand back proof—updated for T+1, methane program shifts, BOI changes, and evolving data‑sharing rules. #Automation #WealthManagement #OilAndGas

4 months ago
From Chunks to Queries—Ignite 2025 Update: Fabric Data Agents, RAG, and the New IQ Layer Monday, 9:02 a.m. The CFO pings: “What was Q3 gross margin by region—and did audit call out any risks?” Your RAG bot shines on PDFs and wiki pages, but it can’t compute a number you’d put on a KPI card. After Ignite 2025, the answer is cleaner than ever: let a Fabric Data Agent generate and run a governed query for the metric, and let your RAG retriever bring back the one‑sentence risk note.

4 months ago
Fabric Is Medallion‑First, Not Medallion‑Only If you work with Microsoft Fabric long enough, it’s easy to come away with the impression that “real” Fabric means “medallion everywhere.” The official docs walk through Bronze, Silver, and Gold patterns for lakehouses. The learning paths lean on medallion as the canonical example. Fabric clearly makes medallion a first‑class citizen. But that doesn’t mean your data platform – or your data products – must be medallion‑shaped.

Microsoft Fabric makes medallion a first‑class citizen – but your data products don’t have to be medallion‑shaped. In a managed, domain‑driven world, inputs and outputs matter more than internal layers. This post shows how to treat medallion as a powerful option, not a mandate, with simple examples…

4 months ago
Spec‑Driven Development: Make the Specification the First Commit If your acceptance criteria live in a comment thread, they’re not requirements—they’re opinions. Spec‑driven development (SDD) turns those opinions into executable truth so code, tests, docs, and operations move in lockstep. Building on our split between functional and nonfunctional requirements, this follow‑up introduces spec‑driven development: what it is, why it reduces drift, and how to run it inside agile without ceremony.

Your backlog tells you what to build. Your spec should tell you when it’s good enough to ship. Here’s how to make a three‑file spec drive code, tests, and SLOs—without slowing your team. #SpecDrivenDevelopment #Agile #DevOps #APIs

4 months ago
From Substitution to Outcomes: How AI and SAMR Are Forcing a Rethink of Development Strategy We like to say we’ve “transformed” how work gets done. But if you look closely at many enterprise systems, you still see the outline of a paper form hiding under a slick UI. We replaced paper with terminals, terminals with web apps, web apps with SaaS—and then pointed automation at the whole stack. In too many places, we’ve simply substituted one medium for another, without asking whether the underlying process still makes sense.

Most “modern” workflows are just yesterday’s paper forms, rebuilt in browsers and automated with bots—sometimes still obeying the preferences of someone who retired fifty years ago. AI gives us a chance to stop automating those ghosts and start designing goal‑based processes that focus on outcomes.

4 months ago
Functional vs. Nonfunctional Requirements: Making the Split Work in Agile If you’ve ever shipped a feature that “works” and still disappointed users, you’ve met the gap between what a system does and how well it does it. That gap is the space nonfunctional requirements occupy—and it’s where agile teams win or lose product trust. In this continuation of our requirements series, we’ll clarify the difference between functional and nonfunctional requirements, show how to make nonfunctional requirements measurable, and connect both to practical agile habits—user stories, acceptance criteria, Definition of Done, SLOs, and pipeline checks.

Shipped the feature and still missed the mark? The fix is in your requirements. Here’s a practical way to make nonfunctional requirements measurable and make them stick in agile—without ceremony. #Agile #DevOps #RequirementsEngineering #ProductManagement

4 months ago