title: Performance Tuning
category: Troubleshooting
tags: performance, optimization, database, sync, memory, tuning
priority: Normal

Performance Tuning

This guide covers optimizing IdentityCenter performance across all components: database, synchronization, UI, memory, and scheduling. Follow these recommendations to ensure smooth operation at any scale.

Hardware Recommendations

Scale your infrastructure based on the number of directory objects managed:

| Component  | Small (<10K objects)  | Medium (10K-50K) | Large (50K+)           |
|------------|-----------------------|------------------|------------------------|
| CPU        | 2 cores               | 4 cores          | 8+ cores               |
| RAM        | 4 GB                  | 8 GB             | 16+ GB                 |
| Disk       | 50 GB SSD             | 100 GB SSD       | 250+ GB NVMe SSD       |
| SQL Server | Express (10 GB limit) | Standard         | Standard or Enterprise |
| Network    | 100 Mbps              | 1 Gbps           | 1 Gbps+                |

Tip: SSD storage for the SQL Server data and log files makes the single biggest difference in overall application responsiveness.

Database Performance

Automatic Index Maintenance

IdentityCenter includes a DatabaseOptimizationService that automatically maintains indexes. This service:

  • Detects fragmented indexes and rebuilds or reorganizes them
  • Updates statistics for query optimizer accuracy
  • Runs during low-usage windows to minimize impact

No manual configuration is required. However, you can verify optimization is running by checking the application logs for DatabaseOptimizationService entries.

Manual Index Maintenance

For environments where you want additional control:

-- Check index fragmentation levels
SELECT
    OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10
    AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

Fragmentation thresholds:

| Fragmentation | Action     | Command                                    |
|---------------|------------|--------------------------------------------|
| 10-30%        | Reorganize | ALTER INDEX [name] ON [table] REORGANIZE;  |
| 30%+          | Rebuild    | ALTER INDEX [name] ON [table] REBUILD;     |
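These thresholds can be combined with the fragmentation query above to generate the maintenance statements directly. A sketch; review the generated commands before executing them:

```sql
-- Generate the appropriate maintenance statement per index, using the thresholds above
SELECT
    CASE WHEN ips.avg_fragmentation_in_percent >= 30
         THEN 'ALTER INDEX [' + i.name + '] ON [' + OBJECT_NAME(ips.object_id) + '] REBUILD;'
         ELSE 'ALTER INDEX [' + i.name + '] ON [' + OBJECT_NAME(ips.object_id) + '] REORGANIZE;'
    END AS maintenance_command
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000
  AND i.name IS NOT NULL; -- heaps have no index name
```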

Connection Pool Sizing

IdentityCenter uses SQL Server connection pooling. Default settings work well for most environments, but large deployments may benefit from tuning:

| Setting            | Default | Recommended (Large) | Notes                                      |
|--------------------|---------|---------------------|--------------------------------------------|
| Max Pool Size      | 100     | 200                 | Increase if you see pool exhaustion errors |
| Min Pool Size      | 0       | 10                  | Keeps warm connections ready               |
| Connection Timeout | 30s     | 30s                 | Increase for remote SQL servers            |
| Command Timeout    | 30s     | 60-120s             | Increase for large data operations         |

Adjust via the connection string in appsettings.json:

Server=sqlserver;Database=IdentityCenter;Max Pool Size=200;Min Pool Size=10;
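In context, the full connection string might look like the following. This is a sketch: the connection string key name, server name, and authentication settings shown here are assumptions and will differ in your environment:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=sqlserver;Database=IdentityCenter;Integrated Security=true;TrustServerCertificate=true;Max Pool Size=200;Min Pool Size=10;Connection Timeout=30;"
  }
}
```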

SQL Server Memory Allocation

For dedicated SQL servers, configure memory limits to prevent SQL Server from consuming all available RAM:

-- Set maximum memory (leave 4 GB for OS on a 16 GB server)
EXEC sp_configure 'max server memory', 12288;  -- 12 GB in MB
RECONFIGURE;

For shared servers where IdentityCenter and SQL Server run together, allocate roughly 60% of total RAM to SQL Server.
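For example, on a shared 32 GB server the 60% guideline works out to roughly 19 GB for SQL Server:

```sql
-- 32 GB total * 0.60 ≈ 19.2 GB; round down to 19456 MB (19 GB)
EXEC sp_configure 'max server memory', 19456;
RECONFIGURE;
```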

Query Performance Monitoring

Use SQL Server Dynamic Management Views (DMVs) to identify slow queries:

-- Top 10 queries by average elapsed time per execution
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,  -- DMV reports microseconds
    qs.execution_count,
    SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.text)
            ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;

Sync Performance

LDAP Page Size

IdentityCenter uses paged LDAP queries via DirectoryQueryService with a default page size of 1000. Adjust this based on your environment:

| Environment             | Recommended Page Size | Rationale                              |
|-------------------------|-----------------------|----------------------------------------|
| Low-latency LAN         | 1000 (default)        | Fewer round trips                      |
| High-latency WAN        | 250-500               | Smaller per-request payload            |
| Very large OUs (100K+)  | 500                   | Balance between round trips and memory |
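The trade-off between page size and round trips can be sketched generically. The `fetch_page` function below is a hypothetical stand-in for one paged LDAP request (in the spirit of the RFC 2696 simple paged-results control), not IdentityCenter code:

```python
def paged_query(fetch_page, page_size=1000):
    """Iterate over all results by repeatedly requesting one page.

    fetch_page(cookie, page_size) -> (entries, next_cookie); an empty
    next_cookie means the server has no more pages. A smaller page size
    means more round trips but a smaller per-request payload.
    """
    cookie = b""
    while True:
        entries, cookie = fetch_page(cookie, page_size)
        yield from entries
        if not cookie:
            break

# Example with a fake directory of 2,500 entries
def make_fake_fetcher(entries):
    def fetch_page(cookie, page_size):
        start = int(cookie or b"0")
        page = entries[start:start + page_size]
        next_start = start + page_size
        next_cookie = str(next_start).encode() if next_start < len(entries) else b""
        return page, next_cookie
    return fetch_page

results = list(paged_query(make_fake_fetcher(list(range(2500))), page_size=500))
# 2,500 entries retrieved across 5 round trips of 500 each
```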

Bulk Upsert Optimization

The SyncObjectRepository.FastBulkUpsertObjectsAsync method uses a HashSet combined with a SQL MERGE strategy for high-performance bulk operations. This approach:

  • Deduplicates objects in memory before sending to the database
  • Uses a single MERGE statement instead of individual INSERT/UPDATE calls
  • Skips unchanged objects to reduce database writes

Performance impact: This can process tens of thousands of objects in seconds rather than minutes.

To get the best performance from bulk operations:

  1. Minimize attribute mappings -- Only map attributes you actually need. Each additional attribute increases processing time.
  2. Use targeted LDAP filters -- Narrow your sync scope to reduce the number of objects processed.
  3. Avoid overlapping sync steps -- If two steps query the same objects, the deduplication logic handles it but wastes network and CPU time.
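The dedupe-and-skip-unchanged strategy described above can be sketched as follows. This is a simplified model of the idea, not the actual FastBulkUpsertObjectsAsync implementation:

```python
def plan_bulk_upsert(incoming, existing_hashes):
    """Deduplicate incoming objects and drop unchanged ones before the MERGE.

    incoming: iterable of (object_id, attributes) pairs.
    existing_hashes: dict mapping object_id -> hash of its stored attributes.
    Returns only the objects that actually need to be written.
    """
    seen = set()           # dedupe within this batch (the HashSet role)
    to_write = []
    for object_id, attrs in incoming:
        if object_id in seen:
            continue       # duplicate within the batch: skip
        seen.add(object_id)
        attr_hash = hash(frozenset(attrs.items()))
        if existing_hashes.get(object_id) == attr_hash:
            continue       # unchanged since last sync: skip the write
        to_write.append((object_id, attrs))
    return to_write

# Two of these three entries are redundant: one is unchanged, one is a duplicate
existing = {"u1": hash(frozenset({"dept": "IT"}.items()))}
batch = [("u1", {"dept": "IT"}), ("u2", {"dept": "HR"}), ("u2", {"dept": "HR"})]
# plan_bulk_upsert(batch, existing) -> [("u2", {"dept": "HR"})]
```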

Sync Step Ordering

Order your sync project steps to maximize efficiency:

  1. Directory Query steps first -- Pull data from AD
  2. Lookup steps second -- Resolve references (e.g., manager DN to ObjectId)
  3. Internal processing steps last -- Apply transformations and business logic

Parallel Processing Considerations

When running multiple sync projects concurrently:

  • Each sync project runs independently through the SyncProjectOrchestrator
  • Avoid scheduling more than 2-3 heavy sync projects at the same time
  • Stagger start times by at least 5 minutes to avoid database lock contention

UI Performance

Blazor Server Circuit Management

IdentityCenter uses Blazor Server, which maintains a persistent SignalR connection for each active user session. Performance considerations:

| Setting               | Impact                          | Recommendation                             |
|-----------------------|---------------------------------|--------------------------------------------|
| Circuit timeout       | How long idle circuits persist  | 3-5 minutes for production                 |
| Max retained circuits | Memory per idle user            | Default is sufficient for most deployments |
| SignalR buffer size   | Message throughput              | Increase for large data grids              |

Session Cleanup

Idle sessions consume server memory. IdentityCenter automatically cleans up expired circuits, but for environments with many concurrent users:

  1. Set reasonable session timeouts (15-30 minutes for inactive sessions)
  2. Monitor active SignalR connections via the application dashboard
  3. Consider load balancing with sticky sessions for deployments with 50+ concurrent users

Large Data Pages

Pages that display large datasets (such as the Objects page with 50K+ records) use server-side pagination. If these pages feel slow:

  1. Use filters to narrow the displayed dataset
  2. Avoid sorting on non-indexed columns
  3. Check that database indexes are healthy (see Database Performance above)

Memory Optimization

Large Environment Considerations (50K+ Objects)

For environments managing over 50,000 directory objects:

  1. Increase application memory -- Ensure the app pool or service has at least 2 GB available
  2. Monitor GC pressure -- Check for frequent garbage collections in application logs
  3. Optimize sync batch sizes -- Processing objects in batches of 5,000-10,000 reduces peak memory usage
  4. Database connection hygiene -- Ensure connections are disposed properly (IdentityCenter handles this automatically via DI scoping)
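The batch-size recommendation in step 3 amounts to chunking the object stream so that only one batch is materialized at a time. A minimal, language-agnostic sketch:

```python
from itertools import islice

def batched(iterable, batch_size=5000):
    """Yield lists of at most batch_size items, keeping only one batch in memory."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# 12,000 objects -> batches of 5,000, 5,000, and 2,000
sizes = [len(b) for b in batched(range(12_000), batch_size=5_000)]
# sizes == [5000, 5000, 2000]
```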

Garbage Collection Tuning

For the .NET 8 runtime hosting IdentityCenter, Server GC is recommended for production:

{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.Concurrent": true
    }
  }
}

Server GC uses more memory but provides better throughput for web applications with many concurrent requests.
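For reference, the same switch can be expressed as MSBuild properties when the host project is built, or via the DOTNET_gcServer=1 environment variable for an already-deployed service; the runtimeconfig.json approach above remains the simplest for a published application:

```xml
<!-- Equivalent MSBuild properties, set in the host project's .csproj -->
<PropertyGroup>
  <ServerGarbageCollection>true</ServerGarbageCollection>
  <ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
</PropertyGroup>
```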

Scheduling Optimization

Avoiding Job Overlap

Heavy operations should not run simultaneously:

| Job Type              | Recommended Schedule       | Duration Estimate |
|-----------------------|----------------------------|-------------------|
| Full directory sync   | Daily, off-hours (2:00 AM) | 5-30 minutes      |
| Delta sync            | Every 15-60 minutes        | 1-5 minutes       |
| Policy evaluation     | After sync completes       | 2-10 minutes      |
| Database optimization | Weekly, weekend            | 5-15 minutes      |
| Report generation     | Daily, early morning       | 1-5 minutes       |

Tip: Use Quartz.NET cron triggers to stagger jobs. For example, schedule sync at 2:00 AM, policy evaluation at 2:30 AM, and reports at 3:00 AM.
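Quartz.NET cron expressions include a leading seconds field, so the staggered schedule from the tip above might look like this (expressions only; how they attach to jobs depends on your scheduler configuration):

```
0 0 2 * * ?     full directory sync, 2:00 AM
0 30 2 * * ?    policy evaluation, 2:30 AM
0 0 3 * * ?     report generation, 3:00 AM
```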

Staggering Policy Evaluations

If you have many policies, avoid evaluating them all at once:

  1. Group policies by priority (Critical, High, Normal)
  2. Schedule Critical policies more frequently (every hour)
  3. Schedule Normal policies less frequently (daily)
  4. Use different evaluation windows for different policy groups

Monitoring Performance

Key Metrics to Track

| Metric                     | Healthy Range          | Warning Threshold  |
|----------------------------|------------------------|--------------------|
| Page load time             | < 2 seconds            | > 5 seconds        |
| Sync duration (full)       | < 30 minutes           | > 60 minutes       |
| Database CPU               | < 50% average          | > 80% sustained    |
| Memory usage               | < 70% of allocated     | > 90% of allocated |
| Active SignalR connections | Proportional to users  | Unexpected spikes  |
| Failed sync runs           | 0 per day              | > 3 per day        |

Application Logs

Navigate to /admin/logging to monitor performance-related entries. Filter by:

  • Warning level for slow query alerts
  • Error level for timeout and resource exhaustion issues
  • Source component DatabaseOptimizationService for index maintenance results

SQL Server DMVs for Ongoing Monitoring

-- Check current wait statistics (what SQL Server is waiting on)
SELECT TOP 10 wait_type, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE '%SLEEP%'
ORDER BY wait_time_ms DESC;

-- Check for blocking
SELECT blocking_session_id, session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;


Related Articles

Common Issues & Solutions
Troubleshooting Sync Errors
Troubleshooting Connection Issues