When You Need Background Processing
Many web applications need to do work outside of the HTTP request/response cycle: processing uploaded files, syncing data with external APIs, sending emails, or running scheduled cleanup tasks.
In a previous project, we had a freight management system that needed to monitor an SFTP server for incoming files, download them, parse the contents, and load the data into a SQL Server database — all happening automatically, every 30 minutes, without any user interaction. This is a classic background processing problem.
ASP.NET Core provides two solid options: Hosted Services for simple scenarios and Quartz.NET for more complex scheduling needs. I've used both extensively, and each has its strengths.
Hosted Services
The simplest approach is implementing IHostedService or extending BackgroundService. This runs as part of your web application process — no separate Windows Service or console app needed.
Basic Periodic Task
public class FileProcessingService : BackgroundService
{
private readonly IServiceProvider _services;
private readonly ILogger<FileProcessingService> _logger;
public FileProcessingService(IServiceProvider services, ILogger<FileProcessingService> logger)
{
_services = services;
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("File processing service started");
while (!stoppingToken.IsCancellationRequested)
{
try
{
using var scope = _services.CreateScope();
var processor = scope.ServiceProvider.GetRequiredService<IFileProcessor>();
var filesProcessed = await processor.ProcessPendingFilesAsync(stoppingToken);
_logger.LogInformation("Processed {Count} files", filesProcessed);
}
catch (Exception ex) when (ex is not OperationCanceledException)
{
_logger.LogError(ex, "Error processing files");
}
await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
}
}
}
Register it in Program.cs:
builder.Services.AddHostedService<FileProcessingService>();
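If you're on .NET 6 or later, `PeriodicTimer` is a tidier way to drive the loop than `Task.Delay`, and it ends cleanly on cancellation. A minimal sketch (the `PollingLoop` helper and its names are mine, not part of the service above):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch: a PeriodicTimer-driven polling loop (.NET 6+). WaitForNextTickAsync
// waits for the next tick and throws OperationCanceledException when the
// token is cancelled, which ends the loop without killing work mid-flight.
public static class PollingLoop
{
    public static async Task<int> RunAsync(
        TimeSpan interval, Func<Task> doWork, CancellationToken ct)
    {
        var ticks = 0;
        using var timer = new PeriodicTimer(interval);
        try
        {
            while (await timer.WaitForNextTickAsync(ct))
            {
                await doWork(); // one unit of background work per tick
                ticks++;
            }
        }
        catch (OperationCanceledException)
        {
            // Graceful shutdown: the host cancelled the token
        }
        return ticks;
    }
}
```

Inside `ExecuteAsync` you would pass `stoppingToken` as the token and put the scoped-service work in `doWork`.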
Real-World Example: SFTP File Processing
Here's a more realistic example based on a system I built to automate SFTP-based file processing. The service monitors a remote SFTP directory, downloads new files, processes them, and archives the originals:
public class SftpFileMonitorService : BackgroundService
{
private readonly IServiceProvider _services;
private readonly ILogger<SftpFileMonitorService> _logger;
private readonly SftpSettings _settings;
public SftpFileMonitorService(
IServiceProvider services,
ILogger<SftpFileMonitorService> logger,
IOptions<SftpSettings> settings)
{
_services = services;
_logger = logger;
_settings = settings.Value;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
try
{
await ProcessNewFilesAsync(stoppingToken);
}
catch (Exception ex) when (ex is not OperationCanceledException)
{
_logger.LogError(ex, "SFTP sync failed, will retry next cycle");
}
await Task.Delay(TimeSpan.FromMinutes(_settings.PollIntervalMinutes), stoppingToken);
}
}
private async Task ProcessNewFilesAsync(CancellationToken ct)
{
using var scope = _services.CreateScope();
var sftpClient = scope.ServiceProvider.GetRequiredService<ISftpClient>();
var fileParser = scope.ServiceProvider.GetRequiredService<IFileParser>();
var repository = scope.ServiceProvider.GetRequiredService<ILoadDataRepository>();
var files = await sftpClient.ListFilesAsync(_settings.RemotePath, ct);
foreach (var file in files.Where(f => f.Name.EndsWith(".csv", StringComparison.OrdinalIgnoreCase)))
{
_logger.LogInformation("Processing {FileName}", file.Name);
var content = await sftpClient.DownloadAsync(file.FullPath, ct);
var records = fileParser.ParseCsv(content);
await repository.BulkInsertAsync(records, ct);
await sftpClient.MoveToArchiveAsync(file.FullPath, ct);
_logger.LogInformation("Completed {FileName}: {Count} records", file.Name, records.Count);
}
}
}
This pattern — download, parse, save, archive — cut manual file handling by about 30% on the project where I implemented it. The key is making each step idempotent: if the service crashes between downloading and archiving, the file will be processed again on the next cycle, and the BulkInsert should handle duplicates gracefully (using MERGE or checking for existing records).
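The check-before-insert variant can be sketched in a few lines; `LoadRecord` and the in-memory key set are hypothetical stand-ins for your table and a query of already-loaded keys (in SQL Server you'd more likely use MERGE or a unique constraint):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record type: one parsed CSV row, keyed by a natural identifier.
public record LoadRecord(string ShipmentId, decimal Weight);

public static class IdempotentLoader
{
    // Returns only records whose keys are not already present, so that
    // re-processing the same file after a crash inserts nothing twice.
    public static IReadOnlyList<LoadRecord> FilterNew(
        IEnumerable<LoadRecord> parsed, ISet<string> existingKeys) =>
        parsed.Where(r => !existingKeys.Contains(r.ShipmentId)).ToList();
}
```

A replay of a file whose keys were all loaded on the previous cycle then inserts nothing.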
When to use Hosted Services
- Simple periodic tasks (poll every N minutes)
- One-off background work triggered by a request (e.g., fire-and-forget email sending)
- Lightweight processing that doesn't need complex scheduling
- When you want the background work to live in the same process as your web app
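For the fire-and-forget case, a common pattern is to let request handlers enqueue work onto an in-memory channel that a hosted service drains. A sketch using the BCL's `System.Threading.Channels` (the `BackgroundTaskQueue` type and its names are mine):

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

// Sketch: an in-memory work queue. Request handlers call EnqueueAsync;
// a BackgroundService would call DequeueAsync in its ExecuteAsync loop
// and invoke each work item with the stopping token.
public sealed class BackgroundTaskQueue
{
    private readonly Channel<Func<CancellationToken, Task>> _channel =
        Channel.CreateBounded<Func<CancellationToken, Task>>(capacity: 100);

    public ValueTask EnqueueAsync(
        Func<CancellationToken, Task> workItem, CancellationToken ct = default) =>
        _channel.Writer.WriteAsync(workItem, ct);

    public ValueTask<Func<CancellationToken, Task>> DequeueAsync(
        CancellationToken ct = default) =>
        _channel.Reader.ReadAsync(ct);
}
```

Note the trade-off: unlike a persistent scheduler, anything still queued is lost if the process restarts.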
Limitations
- No built-in persistence — if the app restarts, the service starts fresh with no memory of previous runs
- No cron-style scheduling — you can only do "every N minutes/seconds", not "at 2:00 AM on weekdays"
- No built-in retry logic — you have to implement your own error handling
- Tied to the web app's lifecycle — if the app pool recycles, your background work stops
Quartz.NET
For anything more complex — cron-like schedules, job persistence, retry logic — Quartz.NET is the way to go. It's a mature, full-featured job scheduling library that's been ported from the Java world (Quartz Scheduler).
Setup
First, install the packages:
dotnet add package Quartz
dotnet add package Quartz.Extensions.Hosting
Defining a Job
public class SftpSyncJob : IJob
{
private readonly ISftpService _sftp;
private readonly ILogger<SftpSyncJob> _logger;
public SftpSyncJob(ISftpService sftp, ILogger<SftpSyncJob> logger)
{
_sftp = sftp;
_logger = logger;
}
public async Task Execute(IJobExecutionContext context)
{
_logger.LogInformation("SFTP sync job started, fire time: {FireTime}", context.FireTimeUtc);
var result = await _sftp.DownloadAndProcessFilesAsync();
_logger.LogInformation(
"SFTP sync completed: {Downloaded} downloaded, {Processed} processed, {Errors} errors",
result.Downloaded, result.Processed, result.Errors);
}
}
Scheduling Jobs
builder.Services.AddQuartz(q =>
{
// SFTP sync — every 30 minutes
var sftpKey = new JobKey("sftp-sync");
q.AddJob<SftpSyncJob>(opts => opts.WithIdentity(sftpKey));
q.AddTrigger(opts => opts
.ForJob(sftpKey)
.WithIdentity("sftp-sync-trigger")
.WithCronSchedule("0 0/30 * * * ?"));
// Daily cleanup — every day at 2 AM
var cleanupKey = new JobKey("daily-cleanup");
q.AddJob<DailyCleanupJob>(opts => opts.WithIdentity(cleanupKey));
q.AddTrigger(opts => opts
.ForJob(cleanupKey)
.WithIdentity("daily-cleanup-trigger")
.WithCronSchedule("0 0 2 * * ?"));
// Report generation — weekdays at 7 AM
var reportKey = new JobKey("morning-report");
q.AddJob<MorningReportJob>(opts => opts.WithIdentity(reportKey));
q.AddTrigger(opts => opts
.ForJob(reportKey)
.WithIdentity("morning-report-trigger")
.WithCronSchedule("0 0 7 ? * MON-FRI"));
});
builder.Services.AddQuartzHostedService(q => q.WaitForJobsToComplete = true);
Cron Expression Cheat Sheet
Quartz cron expressions have seven fields: Seconds, Minutes, Hours, Day-of-Month, Month, Day-of-Week, and an optional Year. The `?` means "no specific value" and goes in whichever of the two day fields you aren't constraining.
| Expression | Meaning |
|---|---|
| `0 0/5 * * * ?` | Every 5 minutes |
| `0 0 * * * ?` | Every hour, on the hour |
| `0 0 2 * * ?` | Every day at 2:00 AM |
| `0 0 9-17 ? * MON-FRI` | Every hour, 9 AM to 5 PM, weekdays |
| `0 0 0 1 * ?` | First day of every month at midnight |
| `0 0/30 8-18 ? * MON-FRI` | Every 30 min, 8 AM–6 PM, weekdays |
Persistent Job Store
By default, Quartz.NET keeps job data in memory. If your app restarts, all job state is lost. For production workloads, you can configure a persistent job store using SQL Server:
builder.Services.AddQuartz(q =>
{
q.UsePersistentStore(store =>
{
store.UseSqlServer(sqlServer =>
{
sqlServer.ConnectionString =
builder.Configuration.GetConnectionString("QuartzDb");
});
store.UseNewtonsoftJsonSerializer();
});
});
This gives you:
- Jobs survive application restarts
- Misfire handling (what to do when a scheduled fire time was missed)
- Clustered execution (multiple app instances share the same schedule without duplicate executions)
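Enabling clustering is a small addition to the store configuration. A sketch, assuming the standard Quartz SQL Server tables have already been created in the `QuartzDb` database:

```csharp
builder.Services.AddQuartz(q =>
{
    // Each instance gets a unique id; the cluster coordinates via the database
    q.SchedulerId = "AUTO";
    q.UsePersistentStore(store =>
    {
        store.UseClustering(); // database locks ensure each trigger fires on only one instance
        store.UseSqlServer(sqlServer =>
        {
            sqlServer.ConnectionString =
                builder.Configuration.GetConnectionString("QuartzDb");
        });
        store.UseNewtonsoftJsonSerializer();
    });
});
```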
When to use Quartz.NET
- Cron-based scheduling (e.g., "every day at 2 AM")
- Jobs that need to survive application restarts (with persistent job store)
- Complex job dependencies or chaining
- When you need misfire policies (what to do when a scheduled fire time was missed)
- When you need to prevent overlapping runs of the same job (via [DisallowConcurrentExecution])
- Clustered environments where only one instance should run a job
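That last concern, overlapping runs, is handled declaratively: marking a job class with Quartz's [DisallowConcurrentExecution] attribute makes the scheduler hold a new execution until the previous one finishes. For instance:

```csharp
// Quartz will not start a second execution of this job while one is running
[DisallowConcurrentExecution]
public class SftpSyncJob : IJob
{
    // ... implementation as shown earlier
}
```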
Error Handling Patterns
Background jobs are invisible to users, so robust error handling is essential. Here are patterns I use:
Retry with Exponential Backoff
public class ResilientJob : IJob
{
private static readonly int[] RetryDelaysSeconds = { 5, 30, 120, 600 };
public async Task Execute(IJobExecutionContext context)
{
var retryCount = context.MergedJobDataMap.GetInt("RetryCount");
try
{
await DoWorkAsync();
}
catch (Exception ex)
{
if (retryCount < RetryDelaysSeconds.Length)
{
// Re-schedule the job with a one-off trigger that fires after the backoff
// delay; the trigger's job data carries the incremented retry count
var delay = TimeSpan.FromSeconds(RetryDelaysSeconds[retryCount]);
var retrigger = TriggerBuilder.Create()
.StartAt(DateTimeOffset.UtcNow.Add(delay))
.ForJob(context.JobDetail)
.UsingJobData("RetryCount", retryCount + 1)
.Build();
await context.Scheduler.ScheduleJob(retrigger);
}
else
{
// Max retries exceeded — log ex and alert
// Consider sending a notification to a monitoring channel
}
}
}
}
Dead Letter Logging
For jobs that fail permanently, log enough detail to diagnose and replay:
catch (Exception ex)
{
_logger.LogError(ex,
"Job {JobKey} failed permanently after {Retries} retries. " +
"Input: {@JobData}. Manual intervention required.",
context.JobDetail.Key,
retryCount,
context.MergedJobDataMap.WrappedMap);
}
Practical Tips
- Always use scoped services — create a new DI scope for each job execution to avoid stale `DbContext` instances. This is critical. A `DbContext` cached from a previous run will have stale data and can cause concurrency exceptions.
- Handle exceptions gracefully — a failed job shouldn't crash your entire application. Wrap the work in try/catch, log the error, and decide whether to retry or skip.
- Add structured logging — use structured log messages with properties (`{FileName}`, `{Count}`) instead of string interpolation. This makes it possible to search and filter in log aggregation tools like Seq or Application Insights.
- Consider idempotency — jobs might run twice (after a restart, or due to a misfire). Design them to handle that safely. Use database constraints, check-before-insert patterns, or idempotency keys to prevent duplicate processing.
- Monitor execution time — if a job that normally takes 30 seconds starts taking 5 minutes, that's an early warning sign. Log the duration and set up alerts.
- Use cancellation tokens — always pass the `CancellationToken` through to async operations. When the application shuts down, you want jobs to stop gracefully, not get killed mid-operation.
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
// Pass stoppingToken to every async call
var response = await _httpClient.GetAsync(url, stoppingToken);
var data = await response.Content.ReadAsStringAsync(stoppingToken);
await _repository.SaveAsync(data, stoppingToken);
await Task.Delay(interval, stoppingToken);
}
}
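The duration logging from the "monitor execution time" tip takes only a few lines with `Stopwatch`. A sketch (the `TimedRun` helper and the threshold are illustrative, not from the original system):

```csharp
using System;
using System.Diagnostics;

public static class TimedRun
{
    // Runs the work, returns the elapsed time, and flags whether it exceeded
    // the alert threshold — the caller logs a warning when TooSlow is true.
    public static (TimeSpan Elapsed, bool TooSlow) Measure(Action work, TimeSpan threshold)
    {
        var sw = Stopwatch.StartNew();
        work();
        sw.Stop();
        return (sw.Elapsed, sw.Elapsed > threshold);
    }
}
```

In a job you'd log `Elapsed` as a structured property on every run, so a trend toward the threshold shows up in your log tooling before anything actually breaks.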
Choosing Between the Two
| Consideration | Hosted Service | Quartz.NET |
|---|---|---|
| Setup complexity | Minimal | Moderate |
| Scheduling | Simple interval | Full cron expressions |
| Persistence | None | Optional (SQL Server) |
| Retry logic | Manual | Built-in misfire policies |
| Clustering | Not supported | Supported |
| Dependencies | None (built-in) | NuGet packages |
| Best for | Simple polling, fire-and-forget | Complex schedules, critical jobs |
Conclusion
Start with Hosted Services for simple needs — a polling loop, a queue consumer, or a fire-and-forget task. Graduate to Quartz.NET when you need real scheduling, persistence, or clustering. Both integrate cleanly with ASP.NET Core's DI system, so switching from one to the other is straightforward if your needs change.
For most web applications, a combination works well: Hosted Services for lightweight, always-running tasks (like monitoring a queue), and Quartz.NET for scheduled business processes (like nightly reports or periodic data syncs). The key is choosing the right tool for the job complexity, not over-engineering a simple task or under-engineering a critical one.