Async Import Architecture
1. Overview
The Async Import architecture provides a framework for processing large spreadsheet imports asynchronously, with support for interactive column and cell mapping when automatic resolution fails. This enables imports that would otherwise time out to complete successfully with user assistance.
2. Implementation Status (2026-04)
The framework was implemented in two layers that both ship in release 2.4:
- **Generic interactive framework** — `ImportResource`, the state machine, and the mapping services. Designed to support UX-driven imports where the operator walks through column and cell mapping interactively. Fully wired end-to-end; available to any future import that needs interactive resolution.
- **Per-entity facade endpoints (primary surface)** — thin controllers on each domain's existing REST resource (`ResultSetResourceEx`, `EventParticipantResourceEx`, `MembershipResourceEx`) that accept a one-shot upload, orchestrate the generic framework internally, and return HTTP 202 with an `ImportJobDTO`. The caller polls a per-entity `GET /import/{jobId}` that deserialises the per-type response DTO (`BulkResultImportResponseDTO`, `EventParticipantImportResultDTO`, `MembershipImportResultDTO`). This is the contract you almost certainly want — domain-typed Swagger, tenant-scoped security, shape parity with the prior synchronous response.
Two processor modes feed the generic framework:
- **Whole-file (RESULT, EP)** — the processor sets `supportsWholeFile()` to `true` and consumes the uploaded file in a single call via `processWholeFile(ImportJob, InputStream, ImportContext)`. Used when cross-row logic (number-change detection, category grouping, upsert-by-seq for RESULT; summary aggregation for EP) cannot be expressed row-at-a-time. The job skips `COLUMN_MAPPING` and `CELL_MAPPING` (`skipMappingPhases=true`) and transitions straight from `UPLOADED` to `PROCESSING`.
- **Row-by-row (MEMBERSHIP)** — the processor overrides `processRow(ImportJob, SpreadsheetRow, rowNumber, ImportContext)`. The framework iterates the spreadsheet itself, persisting an `ImportRowResult` per row. The per-entity GET endpoint aggregates those rows into a `MembershipImportResultDTO` on demand.
See Result Import Design, Event Participant Import Design and Import Operations Runbook for the per-entity surfaces and operator workflows. The sections below describe the generic framework in full.
3. Problem Statement
3.1. Current Limitations
The synchronous import approach has several limitations:
| Limitation | Impact |
|---|---|
| HTTP timeout | Large files fail before processing completes |
| No progress visibility | Users don't know import status |
| All-or-nothing mapping | Any unmapped column/cell fails the entire import |
| No resume capability | Failed imports must restart from scratch |
| Resource consumption | Long-running requests tie up server threads |
3.2. Requirements
The async import system must:
- Accept large files with immediate acknowledgment
- Process rows asynchronously in the background
- Pause for user input when mappings cannot be resolved
- Allow resume after the user provides missing mappings
- Track progress and provide status updates
- Clean up completed/abandoned imports automatically
5. State Machine
5.1. States
The job lifecycle moves through the states described below; interactive pauses (`COLUMN_MAPPING`, `CELL_MAPPING`) sit between upload and processing.
5.2. State Descriptions
| State | Description | Next States |
|---|---|---|
| UPLOADED | File received and stored as BLOB. Awaiting column detection. | COLUMN_MAPPING; PROCESSING (whole-file jobs with `skipMappingPhases=true`) |
| COLUMN_MAPPING | Headers analyzed. Required fields not all matched. Waiting for user to provide mappings. | CELL_MAPPING, CANCELLED |
| CELL_MAPPING | Columns mapped. Foreign key values cannot be resolved. Waiting for user to select valid values. | PROCESSING, CANCELLED |
| PROCESSING | Actively processing rows. Progress tracked. | CELL_MAPPING (new value), COMPLETED, FAILED, CANCELLED |
| COMPLETED | All rows processed. Results available. | (terminal) |
| FAILED | Fatal error occurred during processing. | (terminal) |
| CANCELLED | User cancelled the import. | (terminal) |
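The transition table above can be encoded as a guard so service code rejects illegal moves early. A minimal sketch under assumed names — the `canTransitionTo` helper is an illustration, not the shipped `ImportJobStatus` API:

```java
import java.util.EnumSet;
import java.util.Set;

// Allowed transitions from section 5.2. UPLOADED -> PROCESSING covers the
// whole-file skipMappingPhases path; terminal states allow nothing.
enum JobState {
    UPLOADED, COLUMN_MAPPING, CELL_MAPPING, PROCESSING, COMPLETED, FAILED, CANCELLED;

    private static Set<JobState> next(JobState s) {
        switch (s) {
            case UPLOADED:       return EnumSet.of(COLUMN_MAPPING, PROCESSING);
            case COLUMN_MAPPING: return EnumSet.of(CELL_MAPPING, PROCESSING, CANCELLED); // PROCESSING via /start (8.3.8)
            case CELL_MAPPING:   return EnumSet.of(PROCESSING, CANCELLED);
            case PROCESSING:     return EnumSet.of(CELL_MAPPING, COMPLETED, FAILED, CANCELLED);
            default:             return EnumSet.noneOf(JobState.class); // terminal
        }
    }

    boolean canTransitionTo(JobState target) {
        return next(this).contains(target);
    }
}
```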
6. BLOB Storage
6.1. Attachment Entity
Import files are stored using the existing Attachment entity:
@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Attachment {
@Id @GeneratedValue
private Long id;
@Column(unique = true, nullable = false)
private String uuid;
@Lob @Basic(fetch = FetchType.EAGER)
private byte[] data;
@ManyToOne(optional = false)
private Organisation organisation;
@NotNull
private String mediaType;
private Instant expiryDate;
@NotNull
private String name;
}
7. Database Schema
7.2. Enumerations
public enum ImportType {
EVENT_PARTICIPANT,
MEMBERSHIP,
RESULT
}
public enum ImportJobStatus {
UPLOADED,
COLUMN_MAPPING,
CELL_MAPPING,
PROCESSING,
COMPLETED,
FAILED,
CANCELLED
}
public enum MappingStatus {
AUTO_MATCHED, // System matched with high confidence
MANUAL_MATCHED, // User confirmed or corrected
UNMATCHED, // Cannot match, needs user input
IGNORED // User chose to skip
}
public enum ImportOutcome {
CREATED,
UPDATED,
SKIPPED,
ERROR
}
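The `ImportOutcome` counts surfaced by the generic results-summary endpoint reduce to a simple aggregation over per-row results. A hypothetical sketch — `Outcome`, `RowResult`, and `ResultSummary` are local stand-ins, not the real entities:

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

enum Outcome { CREATED, UPDATED, SKIPPED, ERROR }

record RowResult(int rowNumber, Outcome outcome, String message) {}

class ResultSummary {
    // Collapse per-row outcomes into counts keyed by outcome.
    static Map<Outcome, Long> byOutcome(List<RowResult> rows) {
        Map<Outcome, Long> counts = new EnumMap<>(Outcome.class);
        for (Outcome o : Outcome.values()) counts.put(o, 0L); // zero-fill so every bucket appears
        for (RowResult r : rows) counts.merge(r.outcome(), 1L, Long::sum);
        return counts;
    }
}
```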
8. API Design
8.1. Per-Entity Facade Endpoints (primary surface)
These are the endpoints operators and integrations should use by default. Each accepts a one-shot upload, orchestrates the generic framework internally, and returns HTTP 202 + ImportJobDTO immediately; the caller polls the paired GET /import/{jobId} for the domain-typed response DTO once the job reaches COMPLETED or FAILED.
| Method | Endpoint | Description |
|---|---|---|
| PUT | | Upload a combined CSV for a bulk result import. Query params: per-type options (see 11.3). |
| GET | `/import/{jobId}` | Returns `BulkResultImportResponseDTO` once the job is terminal. |
| PUT | | Upload an EP roster. Query params: per-type options (see 11.3). |
| GET | `/import/{jobId}` | Returns `EventParticipantImportResultDTO` once the job is terminal. |
| PUT | | Upload a membership roster. Query params: per-type options (see 11.3). |
| GET | `/import/{jobId}` | Returns `MembershipImportResultDTO` once the job is terminal. |
8.2. Generic Framework Endpoints (interactive / cross-cutting)
These endpoints expose the full state machine for interactive imports (where the operator needs to walk through column and cell mapping) and for cross-cutting operations that don’t belong to any one domain.
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/imports` | Upload file and create import job (interactive flow). Returns 201 + the job with its initial column mappings (see 8.3.1). |
| GET | `/api/imports` | List import jobs (paginated, filterable by status and organisation). |
| GET | `/api/imports/{uuid}` | Get import job status (generic — no domain DTO). Useful during polling when the caller only wants progress counters. |
| DELETE | `/api/imports/{uuid}` | Cancel import job. |
| GET | `/api/imports/{uuid}/column-mappings` | Get column mappings (interactive). |
| PUT | `/api/imports/{uuid}/column-mappings` | Update column mappings (interactive). |
| POST | `/api/imports/{uuid}/column-mappings/confirm` | Confirm all auto-matched column mappings and proceed to cell mapping (interactive). |
| GET | `/api/imports/{uuid}/cell-mappings` | Get cell mappings (interactive). |
| GET | `/api/imports/{uuid}/cell-mappings/candidates` | Get candidate entities for FK resolution (interactive dropdown population). |
| PUT | `/api/imports/{uuid}/cell-mappings` | Update cell mappings (interactive). |
| POST | | Confirm all cell mappings and start processing (interactive). |
| POST | `/api/imports/{uuid}/start` | Skip cell mapping and start processing (from COLUMN_MAPPING). Equivalent to confirming all auto-matched mappings first (see 8.3.8). |
| GET | `/api/imports/{uuid}/results` | Get per-row results (paginated, filterable by outcome). Generic shape; per-entity endpoints give the domain DTO. |
| GET | | Get results summary by outcome (generic counts). |
8.3. Endpoint Details
8.3.1. Create Import Job
POST /api/imports
Content-Type: multipart/form-data
Parameters:
- file: MultipartFile (required) - The spreadsheet file
- importType: String (required) - EVENT_PARTICIPANT, MEMBERSHIP, RESULT
- contextId: Long (optional) - Event ID, MembershipPeriod ID, or Race ID
- organisationId: Long (optional) - Organisation ID (defaults to current user's org)
Response: 201 Created
Location: /api/imports/{uuid}
{
"identifier": "abc-123-def-456",
"importType": "EVENT_PARTICIPANT",
"status": "COLUMN_MAPPING",
"originalFilename": "registrations.xlsx",
"totalRows": 150,
"processedRows": 0,
"successCount": 0,
"errorCount": 0,
"createdAt": "2026-01-03T10:00:00Z",
"columnMappings": [
{"id": 1, "columnIndex": 0, "sourceHeader": "Name", "targetField": "FIRST_NAME", "status": "AUTO_MATCHED", "confidenceScore": 1.0},
{"id": 2, "columnIndex": 1, "sourceHeader": "Unknown Col", "targetField": null, "status": "UNMATCHED", "confidenceScore": 0.0}
]
}
8.3.2. Get Import Job Status
GET /api/imports/{uuid}?includeMappings=true
Response: 200 OK
{
"identifier": "abc-123-def-456",
"importType": "EVENT_PARTICIPANT",
"status": "PROCESSING",
"totalRows": 150,
"processedRows": 75,
"successCount": 72,
"errorCount": 3,
"progressPercent": 50,
"createdAt": "2026-01-03T10:00:00Z"
}
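A caller-side polling loop over this status surface might look like the sketch below. `fetchStatus` is a stub iterator standing in for the real HTTP GET, and the terminal-status set follows section 5; a real client would back off between requests:

```java
import java.util.Iterator;
import java.util.Set;

// Hypothetical client-side polling loop: keep fetching the job status until it
// reaches a terminal state or attempts run out; returns the last status seen.
class ImportPoller {
    private static final Set<String> TERMINAL = Set.of("COMPLETED", "FAILED", "CANCELLED");

    static String pollUntilTerminal(Iterator<String> fetchStatus, int maxAttempts) {
        String status = "UPLOADED";
        for (int i = 0; i < maxAttempts && fetchStatus.hasNext(); i++) {
            status = fetchStatus.next(); // real code: GET the job, then sleep/back off
            if (TERMINAL.contains(status)) return status;
        }
        return status;
    }
}
```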
8.3.3. List Import Jobs
GET /api/imports?organisationId=1&status=PROCESSING&page=0&size=20
Response: 200 OK
X-Total-Count: 5
[
{"identifier": "abc-123", "status": "PROCESSING", "progressPercent": 50, ...},
{"identifier": "def-456", "status": "COMPLETED", "progressPercent": 100, ...}
]
8.3.4. Update Column Mappings
PUT /api/imports/{uuid}/column-mappings
Content-Type: application/json
[
{"id": 1, "confirm": true},
{"id": 2, "targetField": "EVENT_CATEGORY_NAME"},
{"id": 3, "ignore": true}
]
Response: 202 Accepted (all required mapped)
Response: 406 Not Acceptable (required fields still unresolved)
8.3.5. Confirm Column Mappings
POST /api/imports/{uuid}/column-mappings/confirm
Response: 202 Accepted
{
"identifier": "abc-123-def-456",
"status": "CELL_MAPPING",
"cellMappings": [
{"id": 1, "targetField": "EVENT_CATEGORY_NAME", "sourceValue": "Junior", "status": "AUTO_MATCHED", "targetEntityId": 42},
{"id": 2, "targetField": "EVENT_CATEGORY_NAME", "sourceValue": "Unknown Cat", "status": "UNMATCHED", "targetEntityId": null}
]
}
8.3.6. Get Cell Mapping Candidates
GET /api/imports/{uuid}/cell-mappings/candidates?targetField=EVENT_CATEGORY_NAME
Response: 200 OK
[
{"id": 42, "displayName": "Junior (U18)"},
{"id": 43, "displayName": "Senior (18+)"},
{"id": 44, "displayName": "Masters (40+)"}
]
8.3.7. Update Cell Mappings
PUT /api/imports/{uuid}/cell-mappings
Content-Type: application/json
[
{"id": 1, "confirm": true},
{"id": 2, "targetEntityId": 43},
{"id": 3, "ignore": true}
]
Response: 202 Accepted (all resolved)
Response: 406 Not Acceptable (unresolved mappings remain)
8.3.8. Start Processing
POST /api/imports/{uuid}/start
Description: Confirms all auto-matched mappings and starts processing.
Can be called from COLUMN_MAPPING (skips cell mapping) or CELL_MAPPING status.
Response: 202 Accepted
{
"identifier": "abc-123-def-456",
"status": "PROCESSING",
"totalRows": 150,
"processedRows": 0
}
8.3.9. Get Row Results
GET /api/imports/{uuid}/results?outcome=ERROR&page=0&size=50
Response: 200 OK
X-Total-Count: 5
[
{"id": 1, "rowNumber": 15, "outcome": "ERROR", "message": "Duplicate email: [email protected]"},
{"id": 2, "rowNumber": 42, "outcome": "ERROR", "message": "Invalid date format"}
]
9. Column Mapping
9.1. Auto-Matching Process
1. Extract headers from the first row
2. Normalize each header (remove spaces and special characters, uppercase)
3. Look up the normalized header in the header dictionary
4. Calculate a confidence score for fuzzy matches
5. Mark required fields that couldn't be matched
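The steps above can be sketched as follows. This is a minimal illustration under assumed names — the real dictionary and scoring are richer, and the containment fallback here only stands in for proper fuzzy matching:

```java
import java.util.Map;
import java.util.Optional;

class HeaderMatcher {
    // Tiny stand-in for the header dictionary (normalized header -> target field).
    private static final Map<String, String> DICTIONARY = Map.of(
            "NAME", "FIRST_NAME",
            "FIRSTNAME", "FIRST_NAME",
            "CATEGORY", "EVENT_CATEGORY_NAME");

    // Step 2: strip spaces/special characters, uppercase.
    static String normalize(String header) {
        return header.replaceAll("[^A-Za-z0-9]", "").toUpperCase();
    }

    // Steps 3-5: exact lookup at confidence 1.0, crude fuzzy fallback at 0.6,
    // empty result when the header cannot be matched.
    static Optional<Map.Entry<String, Double>> match(String header) {
        String key = normalize(header);
        String exact = DICTIONARY.get(key);
        if (exact != null) return Optional.of(Map.entry(exact, 1.0));
        for (var e : DICTIONARY.entrySet()) {
            if (key.contains(e.getKey())) return Optional.of(Map.entry(e.getValue(), 0.6));
        }
        return Optional.empty();
    }
}
```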
10. Cell Mapping
10.1. Foreign Key Resolution
For fields that reference other entities, values must be resolved to IDs:
| Field | Resolution |
|---|---|
| ID_COUNTRY | Lookup |
| EVENT_CATEGORY_NAME | Lookup |
| CUSTOM_LIST_* | Lookup |
| MEMBERSHIP_TYPE_ID | Lookup |
10.2. Resolution Strategy
public interface CellValueResolver {
/**
* Attempt to resolve a source value to an entity ID.
*/
CellResolutionResult resolve(String sourceValue, ImportContext context);
/**
* Get all valid target values for UI dropdown.
*/
List<TargetOptionDTO> getAvailableTargets(ImportContext context);
}
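A hypothetical resolver for country values illustrates the interface's two halves. `TargetOption` and `Resolution` are minimal local stand-ins for the real DTOs, and the in-memory map replaces the database lookup the actual implementation would perform against the `ImportContext`:

```java
import java.util.List;
import java.util.Map;

record TargetOption(long id, String displayName) {}
record Resolution(boolean resolved, Long targetEntityId) {}

class CountryResolver {
    private static final Map<String, Long> BY_CODE = Map.of("DE", 1L, "FR", 2L, "GB", 3L);

    // resolve(): normalise the cell value before lookup, mirroring header matching.
    Resolution resolve(String sourceValue) {
        Long id = BY_CODE.get(sourceValue.trim().toUpperCase());
        return id != null ? new Resolution(true, id) : new Resolution(false, null);
    }

    // getAvailableTargets(): feeds the cell-mapping candidates dropdown.
    List<TargetOption> getAvailableTargets() {
        return BY_CODE.entrySet().stream()
                .map(e -> new TargetOption(e.getValue(), e.getKey()))
                .sorted((a, b) -> Long.compare(a.id(), b.id()))
                .toList();
    }
}
```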
11. Whole-File Bridge
Some bulk importers cannot be expressed as "row in → row out". RESULT, for example, needs two passes over the CSV (category grouping + per-category upsert), cross-row number-change detection, and a reconciliation summary that counts blank/header/malformed rows. Modelling these as a loop over processRow(…) would throw away the synchronous importer’s battle-tested logic.
The bridge is a default-method extension on ImportRowProcessor:
public interface ImportRowProcessor {
ImportType getImportType();
default boolean supportsWholeFile() { return false; }
default Object processWholeFile(ImportJob job, InputStream stream, ImportContext ctx) {
throw new UnsupportedOperationException();
}
default ImportRowResult processRow(ImportJob job, SpreadsheetRow row, int rowNumber, ImportContext ctx) {
throw new UnsupportedOperationException();
}
default void beforeProcessing(ImportJob job, ImportContext ctx) {}
default void afterProcessing(ImportJob job, ImportContext ctx) {}
}
11.1. Dispatch
`ImportProcessingService.processRows(…)` branches on `processor.supportsWholeFile()`:
- **true** — `processWholeFileInternal` re-opens the uploaded attachment as an `InputStream`, hands the whole stream to `processor.processWholeFile(…)`, JSON-serialises the returned DTO to `ImportJob.resultPayloadJson`, and returns. Counters the processor wrote directly onto the managed `ImportJob` (e.g. `fileLines`, `successCount`, `blankLines`) are persisted.
- **false** — `processRowByRowInternal` iterates the spreadsheet via `SpreadsheetReader` and calls `rowProcessingService.processRowInTransaction(jobId, processor, row, rowNumber, ctx)` per row. Each row commits in its own `REQUIRES_NEW` transaction so a rollback on one row doesn't affect subsequent rows.
11.2. Contract for whole-file processors
The processor is expected to:
- Parse `ImportJob.getConfigJson()` to recover per-type parameters (e.g. `participantIdMode`, `pointsCalculator`, `applyNumberChanges` for RESULT).
- Do its work — typically by delegating to the synchronous bulk importer (`ResultImportXLS.processBulkCsv`, `EventParticipantImportXLS.process`).
- Write observability counters onto the managed `ImportJob` passed to `beforeProcessing`: `fileLines`, `totalRows`, `processedRows`, `successCount`, `errorCount`, `blankLines`, `headerRows`, `issueCount`, `numberChangeCount`. These feed the generic status endpoint and dashboards.
- Return the response DTO — its shape should match the synchronous importer's so the per-entity GET endpoint can deserialise and return it verbatim.
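The counter obligations can be illustrated with a single classification pass over the stream. This is a sketch, not the shipped processor: `FileCounters` stands in for the fields written onto `ImportJob`, and the "a row succeeds if it has at least two fields" rule is a placeholder for real row handling:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Classify every physical line so the section-12 invariant
// (fileLines == successCount + blankLines + headerRows + errorCount) can hold.
class FileCounters {
    int fileLines, headerRows, blankLines, successCount, errorCount;

    static FileCounters count(InputStream in) {
        FileCounters c = new FileCounters();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) {
                c.fileLines++;
                if (c.fileLines == 1) { c.headerRows++; continue; } // assume one header row
                if (line.isBlank()) { c.blankLines++; continue; }
                // placeholder for real row processing
                if (line.split(",", -1).length >= 2) c.successCount++; else c.errorCount++;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return c;
    }
}
```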
11.3. Configuration JSON
`ImportJob.configJson` is a CLOB holding a per-type options object. The shape is intentionally open so new options can be added without a schema migration. Current keys:
| Type | Key | Meaning |
|---|---|---|
| RESULT | | |
| RESULT | | Short code (e.g. |
| RESULT | | |
| EVENT_PARTICIPANT | | XLSX sheet index (default 0, ignored for CSV) |
| EVENT_PARTICIPANT | | Auto-create missing custom-list values (default |
| MEMBERSHIP | | XLSX sheet index |
12. Row-Counter Invariant
After processing completes, ImportProcessingService.completeJob(…) asserts:
fileLines == successCount + blankLines + headerRows + errorCount
Violation is logged at WARN (with the expected and actual values) but does not fail the job — the invariant is a defensive check, not a correctness constraint; individual failures would already have surfaced during processing.
The whole-file processors populate every term from the sync DTO’s reconciliation summary; row-by-row imports (MEMBERSHIP) currently leave fileLines null and skip the assertion.
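The check itself reduces to a pure predicate over the counters; a small sketch (class and method names are assumptions, the null-skip mirrors the MEMBERSHIP behaviour described above):

```java
// Row-counter invariant from section 12: every physical line must be accounted
// for as success, blank, header, or error. A null fileLines skips the check,
// as row-by-row imports do not populate it.
class RowCounterInvariant {
    static boolean holds(Integer fileLines, int success, int blank, int header, int error) {
        if (fileLines == null) return true;
        return fileLines == success + blank + header + error;
    }
}
```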
13. Observability
- `import.processWholeFile` (Micrometer Timer, tagged `importType`) — latency of the async whole-file path per import type. Hooks into the existing Spring Boot Actuator / OTLP stack.
- `import.status.transitions` (Micrometer Counter, tagged `importType`, `from`, `to`) — incremented every time a job reaches a terminal status via `completeJob` or `markJobFailed`. Enables alerting on unusual FAILED rates per type.
- Structured logs on terminal transitions include the job identifier, import type, from/to status, success/error counts, and `fileLines` — tailable from ELK/Loki without needing the full metrics pipeline.
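To illustrate the tagging scheme of `import.status.transitions` without pulling in Micrometer, here is a dependency-free stand-in where each distinct (importType, from, to) combination gets its own counter, just as each tag combination becomes its own time series in the real registry:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch only: the shipped code uses a Micrometer Counter; this shows how the
// three tags partition the counts.
class TransitionCounter {
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    void record(String importType, String from, String to) {
        counts.computeIfAbsent(importType + "|" + from + "|" + to, k -> new LongAdder()).increment();
    }

    long get(String importType, String from, String to) {
        LongAdder a = counts.get(importType + "|" + from + "|" + to);
        return a == null ? 0 : a.sum();
    }
}
```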
14. Async Processing
14.1. Spring Async Configuration
@Configuration
@EnableAsync
public class AsyncConfig {
@Bean(name = "importTaskExecutor")
public Executor importTaskExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(2);
executor.setMaxPoolSize(5);
executor.setQueueCapacity(25);
executor.setThreadNamePrefix("import-");
executor.initialize();
return executor;
}
}
14.2. Processing Service
@Service
public class ImportProcessingService {
@Async("importTaskExecutor")
public CompletableFuture<Void> processImport(UUID jobId) {
ImportJob job = findByUuid(jobId);
try {
job.setStatus(PROCESSING);
save(job);
SpreadsheetReader reader = createReader(job);
int rowIndex = 1; // Skip header
while (rowIndex < reader.getRowCount()) {
// Check cancellation
if (isCancelled(jobId)) return complete();
try {
processRow(reader.getRow(rowIndex), job);
job.incrementSuccess();
} catch (UnmappedCellException e) {
// Pause for user input
savePendingCellMapping(job, e);
job.setStatus(CELL_MAPPING);
save(job);
return complete();
} catch (Exception e) {
saveError(job, rowIndex, e);
job.incrementError();
}
job.setProcessedRows(rowIndex);
if (rowIndex % 100 == 0) save(job);
rowIndex++;
}
job.setStatus(COMPLETED);
job.setCompletedAt(Instant.now());
save(job);
} catch (Exception e) {
job.setStatus(FAILED);
save(job);
}
return complete();
}
}
15. Purging Strategy
15.1. Retention Rules
| Status | Retention | Rationale |
|---|---|---|
| COMPLETED | 7 days | Allow review and re-download |
| FAILED | 7 days | Allow debugging |
| CANCELLED | 7 days | Same as completed |
| UPLOADED, COLUMN_MAPPING, CELL_MAPPING | 90 days | User may return to complete |
| PROCESSING | No auto-purge | Requires manual intervention |
15.2. Scheduled Cleanup
@Service
public class ImportCleanupService {
@Scheduled(cron = "0 0 2 * * ?") // Daily at 2 AM
@Transactional
public void cleanupExpiredImports() {
Instant now = Instant.now();
// Completed/Failed/Cancelled: 7 days
deleteByStatusesOlderThan(
Set.of(COMPLETED, FAILED, CANCELLED),
now.minus(7, ChronoUnit.DAYS)
);
// Abandoned: 90 days
deleteByStatusesOlderThan(
Set.of(UPLOADED, COLUMN_MAPPING, CELL_MAPPING),
now.minus(90, ChronoUnit.DAYS)
);
}
private void deleteByStatusesOlderThan(Set<Status> statuses, Instant cutoff) {
List<ImportJob> jobs = repository.findByStatusInAndCreatedAtBefore(statuses, cutoff);
for (ImportJob job : jobs) {
attachmentRepository.delete(job.getSourceFile());
repository.delete(job);
}
}
}
17. Related Documentation
- File Import — XLSX/CSV parsing and column mapping
- Data Synchronisation — related synchronisation patterns