Import Operations Guide
1. Purpose
This guide is for operators running bulk imports against a deployed admin service. It covers Event Participant, Membership, and Result imports. Scope:
- Preparing the source file
- Choosing the right endpoint, mode, and query parameters
- Running the import against Dev / Stage / Prod
- Reading the response and recognising common failure modes
- Recovering from operator errors
For functional requirements see Event Participant Import Requirements and Result Import Requirements. For design see Event Participant Import Design, Membership Import Design, and Results Import Design.
2. Quick Reference
All three imports below are asynchronous — the upload returns 202 Accepted + job id; poll the paired GET /import/{jobId} for the result DTO. See Async Delivery Mode.
Each row’s Task cell links to the corresponding design doc’s API section (endpoint spec, request body, response DTO, invariants). The In this guide link jumps to the operational walkthrough below (file prep, curl example, how to read the response, recovery).
| Task (design doc) | Endpoint | Key query params | In this guide |
|---|---|---|---|
| Event Participant Import | `PUT /api/event-participants/import` | `eventId`, `sheetIndex`, `orgId`, `createCustom1/2/3` | Event Participant Import |
| Membership Import | `PUT /api/memberships/import` | `periodId`, `sheetIndex`, `orgId` | Membership Import |
| Result Import (Bulk CSV) | `PUT /api/result-sets/import-bulk` | `eventId`, `participantIdMode`, `pointsCalculator`, `applyNumberChanges` | Result Import (Bulk CSV) |
All endpoints require authentication — use the X-API-KEY header with an ADMIN-role key, or a valid JWT from admin-ui.
3. Environment URLs
Refer to Management API Access / reference_environment_urls.md for the current production API key and dev/stage/prod URLs. As of this writing:
| Environment | Base URL |
|---|---|
| Production | (see reference_environment_urls.md) |
| Stage | (see reference_environment_urls.md) |
| Dev | (see reference_environment_urls.md) |
Test against Dev first for any new event or any file type you haven’t imported before. A failed import can delete pre-existing data (see Recovering from a Mode Mismatch). Dev is on the same MySQL cluster as Prod (…).
4. Async Delivery Mode
From release 2.4 all three import endpoints (EP, Memberships, Bulk Results) are asynchronous. The upload returns HTTP 202 Accepted + an ImportJobDTO immediately; the response body you previously got inline is now fetched from a per-entity GET /import/{jobId} endpoint once the job reaches COMPLETED or FAILED.
Common shape:
# 1. Upload — returns immediately with a job identifier
PUT /api/{entity}/import[-bulk]?... → 202 Accepted
Location: /api/{entity}/import/{jobId}
{ "identifier": "<jobId>", "status": "PROCESSING", ... }
# 2. Poll until the job reaches a terminal status
GET /api/{entity}/import/{jobId}
→ 409 Conflict (still processing)
→ 200 OK + <EntityImportResultDTO> (COMPLETED or FAILED — payload carries the status)
→ 404 Not Found (unknown job id, wrong import type, or expired)
The response DTO returned by the GET is byte-identical to what the prior synchronous endpoint returned inline — the job summary, issues list, counters, number changes, etc., are all in the same shape. Only the delivery mode changed.
| Entity | Upload endpoint | Results endpoint |
|---|---|---|
| Event Participants | `PUT /api/event-participants/import` | `GET /api/event-participants/import/{jobId}` |
| Memberships | `PUT /api/memberships/import` | `GET /api/memberships/import/{jobId}` |
| Results (bulk CSV) | `PUT /api/result-sets/import-bulk` | `GET /api/result-sets/import/{jobId}` |
A generic status endpoint GET /api/imports/{jobId} is also available; it returns the framework-level state (status, counters, file-lines, skipMappingPhases) without the domain DTO. Useful during polling when you only need status.
4.1. Cancellation
DELETE /api/imports/{jobId} marks the job as CANCELLED. For result imports (whole-file, monolithic processor), the CANCELLED flag is observed between mapping phases but not mid-processing — the bulk CSV processor runs to completion once started, so a CANCELLED request on a RESULT job that is already in PROCESSING is effectively a no-op until the current file finishes. EP and Membership imports honour cancel between rows.
4.2. Polling example
# Upload
RESP=$(curl -sS -X PUT \
  -H "X-API-KEY: $ADMIN_API_KEY" \
  -F "file=@/path/to/125-ResultExport.csv" \
  "$BASE_URL/api/result-sets/import-bulk?eventId=125&participantIdMode=regid")
JOB_ID=$(echo "$RESP" | jq -r '.identifier')

# Poll every 2 seconds until COMPLETED or FAILED
while :; do
  STATUS=$(curl -sS -H "X-API-KEY: $ADMIN_API_KEY" \
    "$BASE_URL/api/imports/$JOB_ID" | jq -r '.status')
  case "$STATUS" in
    COMPLETED|FAILED) break ;;
    *) sleep 2 ;;
  esac
done

# Fetch the per-entity result DTO
curl -sS -H "X-API-KEY: $ADMIN_API_KEY" \
  "$BASE_URL/api/result-sets/import/$JOB_ID" | jq .
5. Event Participant Import
5.1. When to use it
You have a spreadsheet (from a club secretary, a legacy system export, or a manual roster) and want every row to become an EventParticipant in a named event. Re-running the same file with corrections should update existing records, not create duplicates.
5.2. Preparing the file
The import accepts XLSX (not CSV). Required columns:
- `FirstName` (aliases: `First Name`, `FN`, `Name`)
- `LastName` (aliases: `Last Name`, `Surname`, `SN`, `LN`)

Strongly recommended columns for clean upsert on re-run:

- `RegistrationID` (aliases: `ExternalID`, `ExternalEPID`) — the primary upsert key within the event. Without it, the import cannot recognise existing EPs and will create duplicates.
- `IDNumber`, `IDType`, `IDCountry` — used for Person matching across imports and events
- `DOB`, `Email`, `Cellphone` — secondary Person-matching signals
- `Category` — `EventCategory.name` for the target event
Common extra columns: Gender, Address, Town, PostalCode, Bib / Number, OrderNumber, Schoolyouareattending (as a custom list).
See Event Participant Import Design for the full 32-column list and aliases.
5.3. Running the import
The upload is async (see Async Delivery Mode): this returns 202 Accepted with a job identifier; poll GET /api/event-participants/import/{jobId} for the result DTO.
curl -X PUT \
  -H "X-API-KEY: $ADMIN_API_KEY" \
  -F "file=@/path/to/participants.xlsx" \
  "$BASE_URL/api/event-participants/import?eventId=123&sheetIndex=0&createCustom1=true&createCustom2=true&createCustom3=true"
Parameters:
- `eventId` (required) — target event
- `sheetIndex` (optional, default `0`) — 0-based sheet index in the workbook
- `orgId` (optional) — defaults to the event’s organisation; only override for cross-org test data
- `createCustom1/2/3` (default `true`) — auto-create missing custom-list values (e.g. new school names)
5.4. Reading the response
The DTO below is what comes back from GET /api/event-participants/import/{jobId} once the job has reached COMPLETED or FAILED — not from the upload response.
{
"totalRows": 185,
"created": 20,
"updated": 160,
"skipped": 0,
"warnings": 4,
"errors": 1,
"issues": [
{ "outcome": "WARNING", "firstName": "...", "lastName": "...",
"registrationId": "29364", "message": "Ambiguous person match — 2 candidates" },
{ "outcome": "ERROR", "firstName": "...", "lastName": "...",
"message": "CustomList 'School' value not found and createCustom1=false" }
]
}
Invariant: totalRows == created + updated + skipped + warnings + errors. If it doesn’t match, the import bailed mid-file — investigate logs.
Only non-success rows appear in issues — successful CREATED / UPDATED rows are counted but not listed.
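The invariant above can be checked mechanically before declaring success. A minimal sketch, with the counters copied from the sample DTO — in practice you would read them out of the fetched JSON:

```shell
# Counters copied from the sample EP result DTO above.
totalRows=185 created=20 updated=160 skipped=0 warnings=4 errors=1

sum=$((created + updated + skipped + warnings + errors))
if [ "$totalRows" -eq "$sum" ]; then
  echo "ok: totalRows=$totalRows matches component sum"
else
  echo "INVARIANT BROKEN: totalRows=$totalRows, component sum=$sum"
fi
```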
6. Membership Import
6.1. When to use it
You have a spreadsheet of members to add to a MembershipPeriod — typically from a club secretary, a legacy export, or a renewal drive. The import creates Membership + User (Person) + optional Order + RaceNumber assignments in a single pass.
Unlike EP, the Membership import today has no upsert: re-running the same file will create duplicate memberships rather than updating existing ones. See Membership Outstanding #5.
6.2. Preparing the file
Accepts XLSX. Required columns:
- `FirstName` (aliases: `First Name`, `FN`, `Name`)
- `LastName` (aliases: `Last Name`, `Surname`, `SN`, `LN`)

Strongly recommended columns for clean processing:

- `MembershipNumber` (aliases: `Member#`, `MemberNo`, `Membership#`) — required today; rows without it are marked `SKIPPED`
- `IDNumber`, `IDType`, `IDCountry` — drive Person matching; an existing Person with a different identity number will reject the row as `ERROR`
- `MembershipTypeID`, `MembershipCriteriaID` — FK ids into the period
Common extras: Gender, DOB, Email, Cellphone, Address, Town, PostalCode, OrderNumber, Status.
See Membership Import Design for the full 30-column list and aliases.
6.3. Running the import
The upload is async (see Async Delivery Mode): this returns 202 Accepted with a job identifier; poll GET /api/memberships/import/{jobId} for the result DTO.
curl -X PUT \
  -H "X-API-KEY: $ADMIN_API_KEY" \
  -F "file=@/path/to/members.xlsx" \
  "$BASE_URL/api/memberships/import?periodId=17&sheetIndex=0"
Parameters:
- `periodId` (required) — target `MembershipPeriod`
- `sheetIndex` (optional, default `0`) — 0-based sheet index in the workbook
- `orgId` (optional) — defaults to the period’s organisation; only override for cross-org test data
6.4. Reading the response
The DTO below is what comes back from GET /api/memberships/import/{jobId} once the job has reached COMPLETED or FAILED — not from the upload response.
{
"totalRows": 120,
"created": 118,
"skipped": 1,
"errors": 1,
"issues": [
{ "lineNumber": 47, "outcome": "SKIPPED",
"message": "Membership number not supplied" },
{ "lineNumber": 93, "outcome": "ERROR",
"message": "Identity number does not match existing person" }
]
}
Invariant: totalRows == created + skipped + errors.
The issues list contains one entry per row that did not result in CREATED. Each issue carries the source lineNumber to help correct the file and re-run — trim the XLSX to only the failed rows before retrying, because the import today has no upsert and will duplicate previously-created memberships.
6.5. Success heuristics
- Clean first run against a fresh period: `created ≈ totalRows`, `issues = []`
- Partial failure with recoverable rows: a small number of `issues` — fix the source file and re-run only the failed rows (trim the XLSX) to avoid duplicates
- Systematic failure: every row produces the same `ERROR` (e.g. "Identity number does not match") — stop, investigate the file’s identity columns, don’t blindly retry
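Because re-running a full file duplicates memberships, the failed `lineNumber`s are what you need for the trim. A minimal sketch pulling them out of a saved result DTO with `grep`/`cut` — the inline JSON is the sample from the section above; in practice, pipe in the fetched response:

```shell
# Issues array as returned in the sample membership result DTO above.
DTO='{"issues":[{"lineNumber":47,"outcome":"SKIPPED"},{"lineNumber":93,"outcome":"ERROR"}]}'

# One failed source line number per output line (47, then 93) —
# these are the XLSX rows to keep for the retry file.
failed_lines=$(printf '%s' "$DTO" | grep -o '"lineNumber":[0-9]*' | cut -d: -f2)
echo "$failed_lines"
```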
7. Result Import (Bulk CSV)
7.1. When to use it
The timing system (RaceDay Scoring) has produced a multi-category CSV. You want every category’s ResultSet to reflect the finishing order in the file.
7.2. Preparing the file
The file comes straight from RaceDay Scoring. You should not hand-edit it. The expected shape:
Bib,First Name,Last Name,School,Registration Event,External Reference ID,Place,Total Time,Num. Laps
219,Freddie,Visser,,Category 1,29364,1,00:57:37.12,12
225,Imtiyaaz,Schultz,,Category 1,29349,2,00:57:58.56,12
...
(blank line)
Bib,First Name,... ← repeated header
103,Someone,Else,,Category 2,29400,1,00:52:11.03,11
...
Concatenated per-category sections (each with their own header and a blank separator) are expected. The importer classifies these as non-data rows automatically.
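A quick way to sanity-scan such a file before uploading is to count the row classes yourself, mirroring the importer's classification. A sketch with `awk` against a truncated inline sample — point it at the real export in practice:

```shell
# Truncated sample of a multi-section RaceDay export.
CSV='Bib,First Name,Last Name,School,Registration Event,External Reference ID,Place,Total Time,Num. Laps
219,Freddie,Visser,,Category 1,29364,1,00:57:37.12,12
225,Imtiyaaz,Schultz,,Category 1,29349,2,00:57:58.56,12

Bib,First Name,Last Name,School,Registration Event,External Reference ID,Place,Total Time,Num. Laps
103,Someone,Else,,Category 2,29400,1,00:52:11.03,11'

summary=$(printf '%s\n' "$CSV" | awk '
  NR == 1  { next }               # leading header
  /^$/     { blanks++; next }     # blank section separators
  /^Bib,/  { headers++; next }    # repeated per-section headers
           { data++ }
  END { printf "data=%d headers=%d blanks=%d", data, headers, blanks }')
echo "$summary"
```

The `data` count should match `summary.dataRows` in the import response; headers and blanks land in `nonDataRows`.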
7.3. Choosing participantIdMode
This is the single most important decision. Getting it wrong can wipe pre-existing results (see Recovering from a Mode Mismatch).
| Mode | Column interpretation | When to use |
|---|---|---|
| | `External Reference ID` holds the `EventParticipant` id (new-system PK) | The timing system was told about EPs via the new-system PK (rare). Most clean-slate events start this way. |
| `regid` | `External Reference ID` holds the EP’s `registrationId` (legacy PK) | The timing system holds legacy-system PKs because EPs were migrated from a legacy system. Use this for all migrated WPCA Road League events (119, 120, 121, and the rest of Series 10). |
| | `External Reference ID` holds the Person/User id | The timing system holds the Person/User id (e.g. for cross-event standings). Uncommon in current operations. |
Verify the mode before firing. The CSV’s `External Reference ID` value is what gets interpreted under the chosen mode — spot-check the first one against the database:
-- Is the first id an EP id in the target event?
SELECT id FROM event_participant WHERE event_id = <eventId> AND id = <first-id-in-csv>;
-- Or a registrationId?
SELECT id FROM event_participant WHERE event_id = <eventId> AND registration_id = '<first-id-in-csv>';
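To grab a value for that spot-check, the first data row's `External Reference ID` (column 6 in the header shown in the file-preparation section) can be pulled straight from the export. A sketch against an inline sample row:

```shell
# First data row from the sample export in the file-preparation section.
row='219,Freddie,Visser,,Category 1,29364,1,00:57:37.12,12'

# Column 6 = External Reference ID -> the id to plug into the SQL above.
first_id=$(printf '%s\n' "$row" | awk -F',' '{ print $6 }')
echo "$first_id"   # → 29364
```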
7.4. Choosing pointsCalculator
The default is the SA School Cycling calculator. Available alternatives:
| Short code | When to use |
|---|---|
| (omit) | SA School Cycling series points (scores down to ~position 67) |
| `wpca-road-league` | WPCA Road League scheme: 50/45/40/36/34/32/30/28/26/24/22/20 for top 12; 2 participation points for positions 13+; 0 for non-finishers |
A fully-qualified class name is also accepted. See PointsCalculatorFactory in the admin service for the registered list.
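As a worked example of the WPCA scheme described above — illustrative only; the authoritative implementation is whatever `PointsCalculatorFactory` registers under `wpca-road-league`:

```shell
# WPCA Road League points, per the table above: fixed points for the
# top 12, 2 participation points for 13+, 0 for non-finishers.
wpca_points() {
  pos=$1
  if [ "$pos" = "DNF" ]; then echo 0; return; fi
  set -- 50 45 40 36 34 32 30 28 26 24 22 20
  if [ "$pos" -le 12 ]; then
    shift $((pos - 1)); echo "$1"   # look up the fixed top-12 points
  else
    echo 2                          # participation points
  fi
}

wpca_points 1    # → 50
wpca_points 12   # → 20
wpca_points 40   # → 2
wpca_points DNF  # → 0
```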
7.5. applyNumberChanges
Defaults to false. Leave it off unless you are running a post-race reconciliation where you explicitly want to update participants' RaceNumber assignments to match the bibs they actually used on the day.
When true, only SIMPLE changes are applied automatically (target bib is unassigned stock). SWAP and CONFLICT are always reported but never auto-applied.
7.6. Running the import
The upload is async (see Async Delivery Mode): the PUT below returns 202 Accepted with a job identifier; the full BulkResultImportResponseDTO is fetched from GET /api/result-sets/import/{jobId} once the job reaches COMPLETED.
curl -X PUT \
  -H "X-API-KEY: $ADMIN_API_KEY" \
  -F "file=@/path/to/125-ResultExport.csv" \
  "$BASE_URL/api/result-sets/import-bulk?eventId=125&participantIdMode=regid&pointsCalculator=wpca-road-league"
7.7. Reading the response
The DTO below is what comes back from GET /api/result-sets/import/{jobId} once the job has reached COMPLETED or FAILED.
{
"eventId": 120,
"fileLines": 212,
"summary": {
"dataRows": 203,
"imported": 201,
"skipped": 2,
"nonDataRows": {
"blankLines": 4,
"repeatedHeaders": 4,
"malformedRows": 0,
"totalNonData": 8
}
},
"categories": [
{ "categoryName": "Category 1", "raceId": ..., "resultSetId": ...,
"rowCount": 51, "importedCount": 51, "skippedCount": 0 },
...
],
"unmatchedCategories": [],
"numberChanges": [ ... ]
}
Invariants to check before declaring success:
- `fileLines == 1 (header) + summary.nonDataRows.totalNonData + summary.dataRows`
- `summary.dataRows == summary.imported + summary.skipped`
- `unmatchedCategories` is empty (or contains only values you expect to be missing)
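The two arithmetic invariants can be checked the same way as for the other imports. A sketch using the counters from the sample DTO above:

```shell
# Counters copied from the sample bulk-result DTO above.
fileLines=212 dataRows=203 imported=201 skipped=2 totalNonData=8

# fileLines == 1 (leading header) + non-data rows + data rows
[ "$fileLines" -eq $((1 + totalNonData + dataRows)) ] && echo "lines ok"
# dataRows == imported + skipped
[ "$dataRows" -eq $((imported + skipped)) ] && echo "rows ok"
```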
Any non-zero skipped count is worth investigating — those rows were not imported. The detailed issues list for skipped rows is a work-in-progress (ADO-456); until it ships, inspect the application logs for WARN lines around the import timestamp.
7.8. Success heuristics
- First import of an event: `imported == dataRows`, `skipped = 0`, `unmatchedCategories = []`
- Re-import after a correction: same as first-import, plus `RaceResult.id` values in the database are unchanged compared to before the re-import (upsert-by-seq preserves identity)
- Normal "number change noise": if the event’s `NumberType` has no `RaceNumber` stock loaded, every row triggers a `CONFLICT` number-change report. This is operational data config, not an import error.
8. Recovering from a Mode Mismatch
8.1. Symptom
You ran /api/result-sets/import-bulk with the wrong participantIdMode. The response shows summary.imported = 0, summary.skipped == summary.dataRows, and the categories[].importedCount is 0 for every category. A follow-up database check shows RaceResult rows for that event have been deleted.
8.2. Why it happens
The reconciliation logic expects the CSV to be the source of truth for each category’s ResultSet. When every row fails to resolve a participant, no seq is "touched", so the end-of-category delete pass removes every pre-existing RaceResult as "absent from the file". This is the correct behaviour for "participant genuinely removed" but the wrong behaviour for "wrong mode". The importer currently cannot distinguish the two cases.
See FR-RI-012: Mode Mismatch Safety for the pending fix.
8.3. Immediate recovery
- Identify the correct mode (see Choosing participantIdMode above).
- Re-run the import with the correct mode. Because the reconciliation is idempotent and the CSV is authoritative, this restores every row that should have been there.
- `RaceResult.id` values will be different after recovery — the ids you had before the bad run are gone for good. If you have any external system that referenced these ids (unlikely, but possible), it will need to be resynced.
8.4. Prevention
- Spot-check an id before firing the import (see sample SQL above).
- Test against Dev first for any new event or any event whose timing-system configuration you haven’t seen before.
- Check the response summary before walking away — if `imported = 0`, stop and investigate before re-running.
9. Troubleshooting Catalogue
| Symptom | Likely cause | Resolution |
|---|---|---|
| … | … | Verify the event id via `GET /api/events/{id}` |
| … | Missing … | Re-upload; check the file opens in Excel / a text editor; confirm … |
| Request hangs / no response | Nginx 180s timeout on synchronous imports; backend usually finishes anyway | Wait 2–5 minutes, then check the database for the expected rows. Avoid retrying blindly or you may double-apply number changes. |
| `imported = 0`, pre-existing results deleted | Mode mismatch — see Recovering from a Mode Mismatch | Re-run with correct `participantIdMode` |
| Category appears in `unmatchedCategories` | Timing system’s Registration Event label does not match any `EventCategory` name in the target event | Either add the missing … |
| Every row produces a `CONFLICT` number-change report | No `RaceNumber` stock loaded for the event’s `NumberType` | Operational data config issue — load the bib range into `RaceNumber` stock |
| Some rows in `issues` with "Ambiguous person match" | Two or more existing Persons match the row’s identity signals | Manually resolve via the admin UI — merge the duplicate persons, then re-run the import |
| All … | File was imported against an admin-service version older than 2.3.31 (Bug #452) | Re-import against 2.3.31+; upsert-by-seq will fill … |
10. Pre-Flight Checklist
Before running against production:
- [ ] File opens cleanly in Excel (EP import) or a text editor (Results import)
- [ ] Required columns are present (`FirstName` / `LastName` for EP; `Registration Event` + `External Reference ID` + `Place` for Results)
- [ ] Target `eventId` is correct (double-check against `GET /api/events/{id}`)
- [ ] For Results: `participantIdMode` is confirmed (spot-checked against database)
- [ ] For Results: `pointsCalculator` is correct for this series
- [ ] Same file has been run successfully against Dev (for any new file shape)
- [ ] You have a rollback plan — most commonly "re-run with the correct parameters"; occasionally "restore the pre-import snapshot from the DB backup"