Mirroring for SQL Server in Fabric: Inside the Replication Process

Description

Explore how Mirroring for SQL Server in Fabric really works: what happens on the source database, how data lands in OneLake, data retention policies, monitoring and logging, configuration of replicated objects, and which scenarios require restarting replication.

Key Takeaways

My Notes

Action Items

Slides

Source: Decks/mirroring_for_sql_server_in_fabric_inside_the_replication_process.pdf

Mirroring for SQL Server in Fabric: Inside the Replication Process
Meagan Longoria
Justin Cunningham
ProcureSQL
What We'll Cover
What is Fabric Mirroring?
Architecture overview and how it differs from traditional replication
Mirroring Configuration
Table selection, retention threshold
Source Database Activity
Change Feed vs CDC
Landing Data in OneLake
Delta Parquet format, partitioning, and the landing zone structure
Data Retention Policies
How long data is kept, what gets purged, and configuration options
Monitoring & Logging
Database system views, Fabric portal views
What Causes Replication to Restart
Scenarios requiring a restart and how to do it safely
What is Fabric Mirroring?
SQL Server 2016+ emits change data (via CDC or Change Feed, read from the transaction log) to the Fabric Mirroring Service, which orchestrates replication, manages snapshots, and handles schema changes. The data lands in Fabric OneLake as Delta Parquet: Delta Lake tables with a T-SQL endpoint.
Near Real-Time
Typical latency of seconds to minutes; not a batch process
No ETL Pipeline Needed
Managed service; no ADF or Spark jobs required to replicate
Incremental by Default
After initial snapshot, only changed rows are replicated
Read-Only in Fabric
Mirrored tables are read-only; T-SQL endpoint allows adding views, procs, functions, RLS, OLS.
A Big Change for Mirroring with SQL Server 2025
The Technology Behind the Scenes
Change Data Capture (SQL Server 2016-2022)
▸ sys.sp_cdc_enable_db enables CDC on the database
▸ Per-table capture instances created in the cdc schema
▸ SQL Agent job cdc.{db}_capture reads the transaction log
▸ SQL Agent job cdc.{db}_cleanup purges old change entries
▸ SQL Agent must be running; if stopped, replication lag accumulates
Change Feed (SQL Server 2025 and Azure SQL)
▸ Change Feed is built into the database engine; no SQL Agent required
▸ sys.dm_change_feed_log_scan_sessions for native monitoring
▸ No cdc schema, no capture/cleanup job management
▸ Azure SQL: enabled per-database via Fabric mirroring configuration
Both paths: initial full snapshot always runs first before incremental changes begin
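For the 2016-2022 CDC path, enablement is plain T-SQL; on the 2025 Change Feed path none of this is needed, since the Fabric portal configures it. A minimal sketch (database, schema, and table names are illustrative):

```sql
-- CDC path (SQL Server 2016-2022): enable CDC at the database level.
USE SalesDb;
EXEC sys.sp_cdc_enable_db;

-- Create a capture instance for one table (creates objects in the cdc schema).
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;  -- NULL = no gating role

-- Verify the capture and cleanup SQL Agent jobs were created.
SELECT job_type, job_id
FROM msdb.dbo.cdc_jobs;
```

Because these jobs run under SQL Agent, stopping the Agent on this path silently accumulates replication lag, as noted above.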
SQL Server 2025+ Mirroring: Requirements
SQL Server 2025 (on-premises)
Not supported on Azure VM or Linux instances
CDC must NOT be enabled on the source database
Change Feed replaces CDC; the two conflict if both enabled
Replication must NOT be enabled
SQL Server Replication conflicts with Change Feed mirroring
Primary database of AG only; no FCI
Failover cluster instances not supported
ALTER ANY EXTERNAL MIRROR, VIEW DATABASE PERFORMANCE STATE, VIEW DATABASE SECURITY STATE permissions
Must be granted to the Fabric login
Delayed transaction durability must be disabled
Database cannot be mirrored if delayed durability is enabled
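A sketch of granting the listed permissions and checking the delayed durability requirement; the database, login, and user names are placeholders:

```sql
-- Grant the database-level permissions the Fabric connection requires.
USE SalesDb;
CREATE USER [fabric_mirroring_user] FOR LOGIN [fabric_mirroring_login];
GRANT ALTER ANY EXTERNAL MIRROR        TO [fabric_mirroring_user];
GRANT VIEW DATABASE PERFORMANCE STATE  TO [fabric_mirroring_user];
GRANT VIEW DATABASE SECURITY STATE     TO [fabric_mirroring_user];

-- Confirm delayed transaction durability is disabled (required for mirroring).
SELECT name, delayed_durability_desc
FROM sys.databases
WHERE name = N'SalesDb';  -- expect DISABLED
```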
SQL Server 2025+ Mirroring: Limitations
Max 1000 tables per mirrored database
"Mirror all data" picks first 1000 alphabetically; remaining tables skipped
No column filtering
All columns replicate; use views on the Fabric side to limit exposure
Unsupported data types
json, vector, and FILESTREAM block the entire table; a PK on geometry/geography/hierarchyid/UDT/sql_variant blocks the table; computed columns are skipped
DDL changes trigger full table reseed
ALTER/DROP column; partition switch blocked while mirroring
Table features that block replication
Temporal/ledger history, Always Encrypted, in-memory, graph, external tables
Security not propagated to Fabric
RLS, column permissions, dynamic data masking not reflected in OneLake
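Before enabling mirroring, a pre-flight query against the standard catalog views can surface tables whose primary key uses one of the blocking types. This is a sketch, not part of the deck:

```sql
-- Find tables whose primary key includes a column of a type that
-- blocks mirroring (per the limitations above).
SELECT s.name  AS schema_name,
       t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS type_name
FROM sys.tables t
JOIN sys.schemas s        ON s.schema_id  = t.schema_id
JOIN sys.indexes i        ON i.object_id  = t.object_id AND i.is_primary_key = 1
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c        ON c.object_id  = ic.object_id AND c.column_id = ic.column_id
JOIN sys.types ty         ON ty.user_type_id = c.user_type_id
WHERE ty.name IN (N'geometry', N'geography', N'hierarchyid', N'sql_variant');
```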
Configuring Replicated Objects
Supported
▸ Table Selection: choose up to 1000 individual tables; all user tables available by default
▸ Tables Without Primary Keys: no PK required since April 2025; existing tables need a stop/start to pick up
▸ ADD COLUMN (DDL): schema additions replicate automatically to Fabric
▸ Most Standard Data Types: int, bigint, varchar, nvarchar, datetime, decimal, bit, and more
Not Supported
▸ Column Filtering: all columns always replicated; use Fabric views to limit exposure
▸ Tables w/ PK/index on unsupported types: geometry, geography, hierarchyid, sql_variant, UDTs, datetime2(7)
▸ Unsupported Data Types: json, vector block the entire table from mirroring
▸ Views and Computed Columns: only base tables replicate; no view or computed column support
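Since column filtering isn't supported, the usual workaround is a view created through the Fabric SQL analytics endpoint. A minimal sketch with illustrative table and column names:

```sql
-- On the Fabric SQL analytics endpoint: expose only non-sensitive columns
-- of a mirrored table. All columns still replicate to OneLake; the view
-- only limits what this consumer sees.
CREATE VIEW dbo.vw_Orders_Public
AS
SELECT OrderId, OrderDate, Status  -- omit e.g. CustomerEmail
FROM dbo.Orders;
```

Combined with object-level security on the view, this approximates column filtering for downstream readers.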
DEMO
Configuring Mirroring
Data Retention Policies
Source: Change Feed Retention (SQL Server)
▸ Controlled by: internal change feed process
▸ History visibility: not configurable; depends on availability of log data
▸ Deleted row handling: deletes available in the change feed
Destination: OneLake Delta Retention
▸ Controlled by: Delta table versioning
▸ History visibility: default 1 day (new); 7 days (legacy); configurable
▸ Deleted row handling: soft deletes tracked in the Delta log until outside the retention window
If Change Feed falls too far behind due to log truncation or an outage, a gap can occur, requiring a full restart of replication.
What Mirroring Enables in SQL Server 2025
Change Feed Enabled on Database
Enabled via Fabric portal through Azure Arc — no T-SQL sp_enable call
Change Feed on Each Table
Per-table tracking; no cdc schema — Change Feed uses SQL 2025 internal infrastructure
sys.dm_change_feed_log_scan_sessions
Monitor Change Feed scan sessions and latency
No SQL Agent Required
Change Feed uses a built-in internal process — no SQL Agent capture job needed
No Cleanup Job Needed
Unlike CDC, Change Feed has no user-managed retention or cleanup jobs
How Data Lands in OneLake
Phase 1: Initial Snapshot
▸ Full table read via bulk export
▸ Written as Parquet files to OneLake staging
▸ Delta log initialized with Add operations
▸ Tables become queryable after snapshot completes
▸ Large tables may take minutes to hours
Phase 2: Incremental (Change Feed)
▸ SQL 2025+ Change Feed streams row changes in near real-time
▸ Inserts → ADD to Delta log
▸ Updates = two CDF rows (pre-image + post-image)
▸ Deletes → DELETE entry in Delta log
▸ Vacuum/OPTIMIZE run periodically by Fabric
OneLake Path: Files/Mirrored/{WorkspaceId}/{MirroredDatabaseId}/{SchemaName}/{TableName}/
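Once the snapshot completes, mirrored tables can be queried through the SQL analytics endpoint with ordinary T-SQL; the database, table, and column names below are illustrative:

```sql
-- Query a mirrored table via the Fabric SQL analytics endpoint.
-- The data is read-only here; writes still happen on the source SQL Server.
SELECT TOP 10 OrderId, OrderDate, Status
FROM dbo.Orders
ORDER BY OrderDate DESC;
```

The same tables are also reachable as Delta files under the OneLake path above for Spark and other Fabric workloads.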
DEMO
Data in OneLake
Mirroring Change Data Feed
Performance & Storage Considerations
Transaction Log Size
Change Feed holds log truncation until changes are replicated. Monitor log size and ensure long-running transactions don't cause the log to fill.
Minor Change Feed Overhead
The internal Change Feed process adds minimal CPU/IO. No polling interval or maxtrans tuning needed.
Change Feed Internal Storage
Change Feed publishes changes directly to Fabric landing zone — no cdc schema or change tables created on source
Low Source Overhead
Change Feed reads directly from the transaction log with minimal CPU/IO impact. No external job process to monitor.
Monitoring & Logging
On the Source (SQL Server)
▸ sys.dm_change_feed_log_scan_sessions: latency and scan sessions
▸ sys.dm_change_feed_errors: Change Feed errors with error codes
▸ EXEC sp_help_change_feed: table state (4 = healthy) and full config
▸ sys.databases (log_reuse_wait_desc): check if log is growing due to mirroring delays
▸ No SQL Agent jobs; Change Feed is managed internally by SQL Server 2025
In the Fabric Portal
▸ Mirroring status: Running / Stopped / Error
▸ Per-table replication status (rows, last update)
▸ Error messages surfaced in mirroring properties
▸ Check OneLake file timestamps for staleness
Programmatic / Advanced
▸ Fabric REST API: GET mirrored database status
▸ .NET SDK (Microsoft.Fabric.Api) for C# automation
▸ PowerShell / AZ CLI: calls Fabric REST API for start/stop/status
▸ Custom alerts via Azure Monitor / Log Analytics
▸ Custom queries / alerts on workspace monitoring data
Replication lag > 5 minutes may indicate Change Feed scan delays, log truncation issues, or network problems between source and Fabric.
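The source-side checks above can be scripted. The Change Feed DMVs are new in SQL Server 2025 and their exact column lists may vary by build, so this sketch selects broadly rather than naming columns:

```sql
-- Change Feed log scan sessions (latency / throughput of the log reader).
SELECT * FROM sys.dm_change_feed_log_scan_sessions;

-- Recent Change Feed errors with error codes.
SELECT * FROM sys.dm_change_feed_errors;

-- Per-table Change Feed configuration and state (state 4 = healthy, per the slide).
EXEC sp_help_change_feed;

-- Is the transaction log being held because mirroring is behind?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();
```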
DEMO
Monitor the SQL Server Change Feed
When Replication Restart May Be Required
Requires Full Restart (re-snapshot)
▸ Table removed & re-added: must re-initialize from scratch for that table
▸ Source DB restore: LSN chain breaks; Change Feed history is reset
▸ Schema change (DROP/ALTER column): breaking DDL not supported by incremental Change Feed
▸ Change Feed gap on source: transaction log truncated before Change Feed could read it
Self-Healing / Pause & Resume OK
▸ Network blip / timeout: Fabric auto-resumes from last known LSN
▸ Adding new tables: new snapshot only for new tables; existing continue
▸ Adding a new column: column added to mirrored table; unsupported types skipped
▸ Credential / password expiry: update connection credentials; mirroring resumes
▸ Minor column rename (via sp): tracked via Change Feed metadata reset
Restarting Replication Safely
Steps to restart mirroring in Fabric
1. Stop mirroring from the Fabric portal (or via REST API)
2. Verify or fix the root cause on the source SQL Server
3. Confirm Change Feed is active on the source database
4. Optionally remove and re-add problem tables from the mirror config
5. Start mirroring; the initial snapshot will re-run
6. Monitor replication status until tables show 'Running'
Pre-Restart Checklist
▸ Fabric login credentials and permissions verified
▸ No blocking long transactions on source
▸ Network connectivity Fabric → Source OK
▸ Transaction log space available
▸ Downstream consumers aware of re-snapshot
▸ Snapshot window planned for off-peak hours
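Two of the checklist items (long-running transactions and log space) can be verified with standard DMVs. This is a sketch, not part of the deck:

```sql
-- Open transactions on the source, oldest first; long-running ones can
-- block a clean stop and hold the log.
SELECT st.session_id,
       at.transaction_id,
       at.transaction_begin_time
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = at.transaction_id
ORDER BY at.transaction_begin_time ASC;

-- Transaction log size and usage for the current database.
SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```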
Key Takeaways
Change Feed replaces CDC
Fabric Mirroring for SQL 2025+ depends on Change Feed; ensure CDC and Replication are disabled on the source.
Retention matters on both ends
SQL side: log truncation before Fabric reads Change Feed = restart risk. Fabric side: Delta table retention controls how long history is queryable.
Monitor proactively
Use DMVs, the Fabric portal, and custom alerts to catch lag before it becomes a gap.
Not all DDL is safe
Breaking schema changes (DROP/ALTER) can require a full re-snapshot.
Restarts mean re-snapshots
Plan restart windows carefully; large tables can take significant time to reload.
Two ways to query mirrored data
Use the SQL analytics endpoint for T-SQL queries, or access OneLake Delta files directly from Spark, pipelines, and other Fabric workloads.
Mirroring questions?
Let’s talk about it.
Meagan Longoria
Justin Cunningham
ProcureSQL