Microservice Pattern — EMS Service Bootstrap Baseline
1. Overview
Every EMS micro-service — backend (admin-service), portal (registration-portal, admin-portal, …), MCP adapter (mcp-server, …) — shares this bootstrap baseline. The baseline is a deliberate, copyable shape: a Maven module, a Jib-built container image, a per-service Helm chart, ArgoCD Applications per environment, a fixed set of GitHub Actions workflows, an OpenTelemetry javaagent baked into the image, and the same probe/profile/security conventions across the fleet.
Three specialisations build on top of this baseline:
- Portal Pattern — services that serve a browser SPA: gateway + Angular SPA, OIDC client, session-held JWT, tenant resolution.
- MCP Pattern — services that expose admin-service via the Model Context Protocol to LLM-driven clients: OAuth resource server, MCP transport, tool catalogue, audit log.
- Java Library Pattern — Maven JAR artefacts published to GitHub Packages and consumed as dependencies (`parent-pom`, `event-database`, `wordpress-database`, `spreadsheet-importer`). Re-uses the Maven module + CI/CD halves of this baseline; drops Jib, Helm, ArgoCD, OTel, ports and probes.
If you are bootstrapping a new EMS service or library, read this page first, then read whichever specialisation matches the persona. Add ADRs only for points where you genuinely deviate.
2. When to Use This Pattern
Use this pattern for any new service intended to:
- Run in the EMS Kubernetes cluster (`idl-xnl-jhb1-rc01`).
- Be deployed via the existing ArgoCD app-of-apps in `idl-xnl-jhb-rc01`.
- Be released through the EMS GitFlow pipeline.
- Carry the same observability and security posture as the rest of the fleet.
If any of those four bullets do not apply, the service belongs outside this pattern (e.g. an external Lambda, a third-party SaaS we self-host elsewhere, a desktop tool).
3. What’s In Scope (Common Across All Services)
| Concern | Treatment |
|---|---|
| Maven module structure | Single `jar` artefact, parented to `za.co.idealogic:event` |
| Spring Boot platform | Spring Boot 3.4.x via JHipster 8.11.0 dependency platform inherited from the parent POM. No service runs on a different platform without a hard reason. |
| Container image | Jib build from the POM (no Dockerfile, no Docker daemon); see § Image Build (Jib) |
| OpenTelemetry agent | Baked into every image at `/javaagent/`; data export gated by the `otlp` Spring profile |
| Helm chart | Per-service chart under `src/main/helm/` |
| ArgoCD manifests | One Application manifest per (service × environment) in the `idl-xnl-jhb-rc01` repo's `argocd/` directory |
| CI/CD | Five GitHub Actions wrapper workflows in the service repo's `.github/workflows/`, delegating to reusable workflows in `christhonie/event` |
| Health probes | `GET /livez` + `GET /readyz`, both `permitAll()`; distinct from `/actuator/health/**` |
| Container security | Non-root (UID 1000), `allowPrivilegeEscalation: false`, all capabilities dropped |
| Logging | Logback (`logback-spring.xml`) with MDC fields such as `traceId` and `actor` |
| Spring profile model | Conventional set: `dev`, `prod`, `otlp`, `api-docs`, `tls` |
| Test conventions | Smoke `@SpringBootTest` context-load test as the minimum |
| Application bootstrap | Single `<Service>App` main class, wired via the `start-class` property |
4. What’s Out of Scope (Specialised per Service)
| Concern | Where it lives |
|---|---|
| Auth model (OIDC client vs OAuth resource server vs API-key only) | Portal Pattern / MCP Pattern / per-service decision |
| Database, JPA, Liquibase | `admin-service` has these; portals proxy; MCP adapters don't have them. Per-service. |
| Frontend / SPA | Portal Pattern only |
| Reverse-proxy / BFF endpoints | Portal Pattern only |
| Hazelcast (session replication, distributed caches) | `admin-service` + portals only; MCP adapters are stateless |
| Tool catalogue / MCP transport | MCP Pattern only |
| Tenant resolution model | Portal Pattern (server-side, in session) vs MCP Pattern (per-tool-call parameter) |
5. Maven Module Structure
5.1. POM essentials
<parent>
<groupId>za.co.idealogic</groupId>
<artifactId>event</artifactId>
<version>1.3.1</version>
</parent>
<artifactId>my-service</artifactId>
<version>${revision}</version>
<packaging>jar</packaging>
<properties>
<revision>0.1.0-SNAPSHOT</revision>
<java.version>17</java.version>
<start-class>za.co.idealogic.event.myservice.MyServiceApp</start-class>
<!-- OTel agent locations — same conventions across the fleet -->
<agent-extraction-root>${project.build.directory}/jib-agents</agent-extraction-root>
<agent-install-location>/javaagent</agent-install-location>
<opentelemetry-javaagent-filename>opentelemetry-javaagent.jar</opentelemetry-javaagent-filename>
<opentelemetry-javaagent.version>2.10.0</opentelemetry-javaagent.version>
</properties>
<repositories>
<repository>
<id>github-christhonie</id>
<url>https://maven.pkg.github.com/christhonie/event</url>
<releases><enabled>true</enabled><checksumPolicy>fail</checksumPolicy></releases>
<snapshots><enabled>false</enabled></snapshots>
</repository>
</repositories>
The ${revision} flow gives CI-friendly versioning: GitFlow release-start rewrites it to 0.1.0, release-finish bumps it to 0.1.1-SNAPSHOT on develop and tags 0.1.0 on main.
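For the installed and deployed POMs to carry the resolved version rather than the literal `${revision}` placeholder, Maven's CI-friendly versioning relies on the `flatten-maven-plugin`. The parent POM may already configure this; if not, a minimal sketch (plugin version illustrative):

```xml
<!-- Sketch only: flattens the POM so ${revision} is resolved in the
     artefact that gets published. May already live in the parent POM. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>flatten-maven-plugin</artifactId>
  <version>1.6.0</version>
  <configuration>
    <flattenMode>resolveCiFriendliesOnly</flattenMode>
  </configuration>
  <executions>
    <execution>
      <id>flatten</id>
      <phase>process-resources</phase>
      <goals><goal>flatten</goal></goals>
    </execution>
    <execution>
      <id>flatten-clean</id>
      <phase>clean</phase>
      <goals><goal>clean</goal></goals>
    </execution>
  </executions>
</plugin>
```

`resolveCiFriendliesOnly` leaves the rest of the POM untouched and only substitutes `${revision}`-style properties.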
5.2. Port assignments
The EMS port range is reserved from 12500 upward. Each service gets a stable port:
| Port | Service | Notes |
|---|---|---|
| 12504 | `admin-service` | Backend REST API |
| 12505 | `registration-portal` | Public registration portal |
| 12506 | `admin-portal` | Staff admin portal (greenfield, in flight) |
| 12507 | `mcp-server` | First MCP adapter |
| 12508+ | (next service) | Reserve at module bootstrap; record here |
6. CI/CD
Every service repo has the same five wrapper workflows under .github/workflows/, each delegating to a reusable workflow in christhonie/event/.github/workflows/ (the parent-pom repo). Pipeline shape, reusable workflow inventory, required secrets, repository settings, and YAML templates are documented in GitHub Actions CI/CD. Read that page for the full detail.
Summary at the microservice-pattern level:
| Wrapper | Trigger and purpose |
|---|---|
| `push-dev.yml` | Push to `develop` |
| `push-main.yml` | Push to `main` |
| `push-release.yml` | Push to `release/*` branches |
| `pr-non-main.yml` | PR to non-main branch → test (backend + frontend filter for portals) |
| `manual-release-start.yml` | Manual dispatch → GitFlow release-start |
Three GitHub-side bring-up requirements are easy to miss; they are covered in GitHub Actions CI/CD § Required Repository Settings:
- Default workflow permissions must be set to write (otherwise reusable workflows fail at startup with no log output).
- Reusable workflow access on `christhonie/event` must allow user-owned repos.
- The `service-name` input to `argocd-update.yml` is the chart name (= image name = ArgoCD manifest filename prefix), not the GitHub repo name.
Three repository secrets are required: EVENT_PACKAGE_REPO_TOKEN, DOCKER_PAT, ARGOCD_REPO_TOKEN.
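A wrapper is intentionally thin — trigger, delegation, secrets. A hypothetical sketch of the shape (the reusable workflow filename, ref, and inputs here are illustrative assumptions; see GitHub Actions CI/CD for the real inventory):

```yaml
# push-dev.yml — illustrative wrapper shape, not the canonical template.
name: push-dev
on:
  push:
    branches: [develop]
jobs:
  build:
    uses: christhonie/event/.github/workflows/build-and-publish.yml@develop  # filename assumed
    with:
      service-name: my-service   # the chart name, NOT the GitHub repo name
    secrets:
      EVENT_PACKAGE_REPO_TOKEN: ${{ secrets.EVENT_PACKAGE_REPO_TOKEN }}
      DOCKER_PAT: ${{ secrets.DOCKER_PAT }}
      ARGOCD_REPO_TOKEN: ${{ secrets.ARGOCD_REPO_TOKEN }}
```

All logic lives in the reusable workflow, so fleet-wide pipeline changes land in one repo.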
7. Image Build (Jib)
Jib builds the image without a Dockerfile or Docker daemon. The OTel javaagent is always baked in (the agent is always present; whether it sends data depends on the Spring otlp profile + endpoint config).
7.1. Pattern
<!-- 1. Always download the OTel agent into the build output -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>copy-javaagent</id>
<phase>package</phase>
<goals><goal>copy</goal></goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>io.opentelemetry.javaagent</groupId>
<artifactId>opentelemetry-javaagent</artifactId>
<version>${opentelemetry-javaagent.version}</version>
<outputDirectory>${agent-extraction-root}</outputDirectory>
<destFileName>${opentelemetry-javaagent-filename}</destFileName>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
<!-- 2. Jib copies it into the image at /javaagent/opentelemetry-javaagent.jar -->
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-maven-plugin</artifactId>
<configuration>
<to>
<image>docker.io/christhonie/<service-name>:${project.version}</image>
</to>
<extraDirectories>
<paths>
<path>
<from>${agent-extraction-root}</from>
<into>${agent-install-location}</into>
</path>
</paths>
</extraDirectories>
<container>
<ports><port>12507</port></ports>
<environment>
<OTEL_SERVICE_NAME>${project.artifactId}</OTEL_SERVICE_NAME>
</environment>
<jvmFlags>
<jvmFlag>-Djava.security.egd=file:/dev/./urandom</jvmFlag>
<jvmFlag>-XX:+UseContainerSupport</jvmFlag>
<jvmFlag>-XX:MaxRAMPercentage=75.0</jvmFlag>
<jvmFlag>-javaagent:${agent-install-location}/${opentelemetry-javaagent-filename}</jvmFlag>
<jvmFlag>-Dotel.logs.exporter=otlp</jvmFlag>
<jvmFlag>-Dotel.traces.exporter=otlp</jvmFlag>
<jvmFlag>-Dotel.metrics.exporter=otlp</jvmFlag>
</jvmFlags>
<mainClass>${start-class}</mainClass>
</container>
</configuration>
</plugin>
Anti-pattern (do not do this): downloading the agent into src/main/jib/ with <extraDirectories><paths>src/main/docker/jib</paths></extraDirectories>. The path mismatch silently drops the agent from the image; the JVM emits a warning at startup and OTel emits nothing. Source-tree pollution is also wrong — agent jars belong in target/.
See Jib Docker Image Build for layering, registry auth, and tagging detail.
8. Helm Chart
Per-service chart under src/main/helm/. Schema is shared across services so operators see the same keys regardless of which service they are configuring.
8.1. Layout
<service>/src/main/helm/
├── Chart.yaml
├── values.yaml
└── templates/
├── deployment.yaml
├── service.yaml
├── ingress.yaml
├── serviceaccount.yaml
├── _helpers.tpl
└── NOTES.txt
8.2. Standard values keys
config:
profiles: "prod,otlp" # comma-separated Spring profiles
existingsecret: "" # K8s Secret name; chart reads, never creates in prod
image:
repository: docker.io/christhonie/<service-name>
pullPolicy: IfNotPresent # Always for dev (mutable SNAPSHOT tags)
imagePullSecrets:
- name: christhonie-docker
ingress:
enabled: false
className: nginx
annotations: {}
hosts: []
tls: []
resources:
requests: { cpu: 100m, memory: 256Mi }
limits: { memory: 512Mi }
replicaCount: 1
podAnnotations: {} # populated by the workflow with build-sha; see § Rollover trigger
podSecurityContext:
fsGroup: 1000
securityContext:
runAsNonRoot: true
runAsUser: 1000
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
capabilities:
drop: [ALL]
8.3. Rollover trigger (the podAnnotations propagation pattern)
The chart’s templates/deployment.yaml must propagate .Values.podAnnotations onto the pod template:
template:
metadata:
annotations:
config/secret-version: {{ .Values.config.existingsecret | default "none" | quote }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
Without this, the argocd-update.yml reusable’s yq -i … podAnnotations.app.kubernetes.io/build-sha=<sha> has no effect on the pod spec. SNAPSHOT image rebuilds (which carry the same chart version) won’t trigger pod rollover. With this in place, every commit’s SHA differs, the rendered pod template differs, and Helm/ArgoCD rolls the deployment.
Reference implementation: registration-portal/src/main/helm/templates/deployment.yaml.
8.4. Ingress
Two schema variants exist in the EMS fleet:
- Single-host flat schema (`admin-service`): `hostname`, `pathType`, `tls: true`. Older.
- Multi-host array schema (`mcp-server`): `hosts: [{host, paths}]`, `tls: [{secretName, hosts}]`. Newer; matches mainline Helm chart conventions.
New services should use the multi-host array schema. Legacy services can keep their existing schema until they are touched for unrelated reasons.
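Expanded, the multi-host array schema looks like the following sketch (hostnames and secret name are illustrative):

```yaml
# values.yaml ingress block, multi-host array schema — illustrative values.
ingress:
  enabled: true
  className: nginx
  annotations: {}
  hosts:
    - host: my-service-dev.idealogic.co.za
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: my-service-tls
      hosts:
        - my-service-dev.idealogic.co.za
```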
See Helm Chart Structure for full template detail.
9. ArgoCD Manifests
One Application manifest per (service × environment) under ~/dev/idl-xnl-jhb-rc01/argocd/<service-name>-<env>.yml. The cluster’s cluster-bootstrap.yml (an Application that watches the argocd/ directory) auto-creates child Applications for each manifest.
Standard shape:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: <service-name>-<env>
namespace: argocd
spec:
project: default
destination:
name: idl-xnl-jhb1-rc01
namespace: event-<env>
source:
repoURL: registry-1.docker.io
chart: christhonie/<service-name>
targetRevision: <version>-RELEASE # or -SNAPSHOT-RELEASE for dev
helm:
releaseName: <env>-<service-name>
valuesObject: { ... per-env overrides ... }
syncPolicy:
automated: { prune: true, selfHeal: true, allowEmpty: false }
syncOptions: [CreateNamespace=true, ServerSideApply=true]
revisionHistoryLimit: 3
Promotion convention:
- dev → auto-bumped by `push-dev.yml` on every commit to `develop`.
- stage → auto-bumped by `push-main.yml` on every merge to `main` (i.e. every release).
- prod → manual `targetRevision` edit after stage validation. Approval gate by convention, not by automation.
See ArgoCD Deployment Patterns for promotion procedures, secret management, and troubleshooting.
10. OpenTelemetry
Agent is always baked into the image (see § Image Build above). Spring-side instrumentation is profile-gated under otlp.
10.1. otlp profile dependencies (in pom.xml)
<profile>
<id>otlp</id>
<properties>
<profile.otlp>,otlp</profile.otlp>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.opentelemetry.instrumentation</groupId>
<artifactId>opentelemetry-instrumentation-bom</artifactId>
<version>2.10.0</version>
<type>pom</type><scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>io.opentelemetry.instrumentation</groupId>
<artifactId>opentelemetry-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>io.opentelemetry.instrumentation</groupId>
<artifactId>opentelemetry-instrumentation-annotations</artifactId>
</dependency>
<dependency>
<groupId>io.opentelemetry.instrumentation</groupId>
<artifactId>opentelemetry-logback-appender-1.0</artifactId>
<version>2.10.0-alpha</version>
</dependency>
</dependencies>
</profile>
10.2. application-otlp.yml
management:
metrics:
export:
otlp:
enabled: true
otel:
java:
global-autoconfigure:
enabled: true
exporter:
otlp:
endpoint: 'http://opentelemetry-collector.observability.svc.cluster.local:4317'
jaeger:
enabled: false
zipkin:
enabled: false
springboot:
resource:
enable: true
resource:
attributes:
'service.version': '${project.version}'
'deployment.environment': production
instrumentation:
annotations:
enabled: true
logback-appender.enabled: true
spring-web.enabled: false
spring-webmvc.enabled: false
spring-webflux.enabled: false
The Spring Web auto-instrumentation is disabled because the javaagent already instruments these at bytecode level — leaving both on produces duplicate spans.
See OpenTelemetry Configuration for collector topology, sampling, custom metrics, and per-service variations.
11. Profiles
| Profile | Active when |
|---|---|
| `dev` | Local development, dev cluster |
| `prod` | Stage, prod |
| `otlp` | All deployed environments (dev/stage/prod) — observability |
| `api-docs` | All environments (springdoc enabled to serve the OpenAPI spec) |
| `tls` | Only when terminating TLS in-app (rare; ingress usually handles it) |
| *(legacy profiles)* | Some legacy services; not required for new services |
dev + prod are mutually exclusive — services should reject the combination at startup (@PostConstruct check).
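The guard itself is a few lines. A standalone sketch of the logic (illustrative only — in a real service this lives in a `@PostConstruct` method on the application class, reading `Environment.getActiveProfiles()`):

```java
import java.util.List;

public class ProfileGuard {

    // Reject the dev+prod combination; any other mix is allowed.
    static void checkProfiles(List<String> active) {
        if (active.contains("dev") && active.contains("prod")) {
            throw new IllegalStateException(
                "Profiles 'dev' and 'prod' are mutually exclusive");
        }
    }

    public static void main(String[] args) {
        checkProfiles(List.of("prod", "otlp")); // valid combination: no exception
        boolean rejected = false;
        try {
            checkProfiles(List.of("dev", "prod"));
        } catch (IllegalStateException e) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("dev+prod was not rejected");
        System.out.println("profile guard OK");
    }
}
```

Failing fast at startup turns a mis-set `config.profiles` value into an immediate crash-loop rather than a service running with contradictory configuration.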
12. Health Probes
Two endpoints, both permitAll() in the security filter chain:
- `GET /livez` — kubelet liveness. Returns 200 OK if the JVM is responsive.
- `GET /readyz` — kubelet readiness. Returns 200 OK once the service is ready to accept traffic.
Distinct from `/actuator/health/**` (heavier, authenticated, used for diagnostics). Helm chart probe stanza:
readinessProbe:
httpGet: { path: /readyz, port: http }
initialDelaySeconds: 20
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
livenessProbe:
httpGet: { path: /livez, port: http }
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 3
failureThreshold: 5
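The contract the probes rely on can be demonstrated in isolation. A toy sketch using the JDK's built-in HTTP server (the real services serve these endpoints from Spring MVC; everything here is illustrative):

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicBoolean;

public class ProbeDemo {
    public static void main(String[] args) throws Exception {
        AtomicBoolean ready = new AtomicBoolean(false); // flips once startup work is done

        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/livez", ex -> respond(ex, 200)); // JVM responsive => always 200
        server.createContext("/readyz", ex -> respond(ex, ready.get() ? 200 : 503));
        server.start();
        int port = server.getAddress().getPort();

        HttpClient client = HttpClient.newHttpClient();
        if (status(client, port, "/livez") != 200) throw new AssertionError();
        if (status(client, port, "/readyz") != 503) throw new AssertionError(); // alive, not ready
        ready.set(true); // e.g. caches warmed, downstream connections established
        if (status(client, port, "/readyz") != 200) throw new AssertionError();
        server.stop(0);
        System.out.println("probe contract OK");
    }

    static int status(HttpClient c, int port, String path) throws Exception {
        HttpRequest req = HttpRequest
            .newBuilder(URI.create("http://127.0.0.1:" + port + path)).build();
        return c.send(req, HttpResponse.BodyHandlers.discarding()).statusCode();
    }

    static void respond(HttpExchange ex, int code) throws IOException {
        byte[] body = "OK".getBytes();
        ex.sendResponseHeaders(code, body.length);
        try (OutputStream os = ex.getResponseBody()) { os.write(body); }
    }
}
```

The split matters operationally: a failing `/readyz` only removes the pod from Service endpoints, while a failing `/livez` gets the container restarted.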
13. Logging
Logback config under src/main/resources/logback-spring.xml. Per-line pattern includes MDC fields:
%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %X{traceId:-} %X{actor:-} %logger{36} - %msg%n
Per-service contextual MDC fields (e.g. mcp-server adds actor and tool) are populated by an aspect or filter. The OTel logback appender automatically emits traceId and spanId when the otlp profile is active, correlating logs to traces in the backend.
13.1. Structured-log conventions (audit / business events)
Use a dedicated logger name (e.g. audit.tool for MCP, audit.access for backend) and emit key=value or JSON for fields. Field names align with admin-service’s LoggingAspect for cross-service grep:
log.info("event=tool.invocation actor={} tool={} params={} latencyMs={} outcome={}",
sub, toolName, args, latency, outcome);
Elastic EDOT picks up the structured fields directly when OTel logback appender is active.
14. Bootstrap Checklist
Step-by-step from "git init" to "deployed in dev":
- GitHub repo created (e.g. `christhonie/<service-name>`). `.gitignore` generated.
- Required secrets added to the new repo: `EVENT_PACKAGE_REPO_TOKEN`, `DOCKER_PAT`, `ARGOCD_REPO_TOKEN`.
- Repository workflow permissions flipped to Read and write.
- Reusable workflow access on `christhonie/event` confirmed to allow this repo (it's allowed by default if the existing rule is "user-owned").
- Maven module scaffolded:
  - POM with parent `za.co.idealogic:event:1.3.1`, `${revision}` versioning, repositories block, OTel agent properties, Jib + helm-maven-plugin in pluginManagement.
  - Sources: `<package>/<Service>App.java` main class.
  - Resources: `application.yml` + `application-{dev,prod,otlp}.yml`, `logback-spring.xml`, `banner.txt`.
  - Test: smoke `@SpringBootTest` context-load test under `src/test/java`.
- Helm chart under `src/main/helm/` — copy from a sibling service and adapt the chart name + port.
- `.github/workflows/` populated with the five wrappers (push-dev, push-main, push-release, pr-non-main, manual-release-start). Set `service-name:` to the chart name.
- Local verification: `mvn clean test` (smoke test loads), `mvn package jib:dockerBuild` (image builds), `mvn helm:init helm:lint helm:template helm:package` (chart packages).
- First push to `develop` triggers `push-dev.yml`. Watch via `gh run watch`.
- ArgoCD manifest at `idl-xnl-jhb-rc01/argocd/<service-name>-dev.yml` modelled on a sibling service. Commit + push.
- Pre-create the dev Secret in the `event-dev` namespace with whatever the chart's `existingsecret` references (API keys, OAuth client secrets, etc.).
- Confirm the `christhonie-docker` image-pull secret exists in `event-dev`.
- Watch the rollout — `kubectl get pod -n event-dev -l app.kubernetes.io/name=<service-name> -w`. First rollout typically completes within 2 minutes of the push-dev workflow finishing.
- Smoke-check via the dev URL: `curl -fsS https://<service>-dev.idealogic.co.za/livez` and any service-specific endpoints.
For staging + prod, follow the GitFlow release flow (see ArgoCD Deployment Patterns § Promotion).
15. Reference Implementations
| Service | Persona | Notes |
|---|---|---|
| `admin-service` | Backend | Heaviest implementation. Liquibase + JPA + Hazelcast + every cross-cutting concern. The deployment-shell reference for backends. |
| `registration-portal` | Portal | The reference for the portal pattern. Spring Cloud Gateway MVC, session JWT, tenant resolution. Portal Pattern specialises this. |
| `admin-portal` | Portal | Greenfield staff portal. Currently bootstrapping. Same baseline + portal pattern. |
| `mcp-server` | MCP adapter | The reference for the MCP pattern. Spring AI MCP server + OAuth resource server + per-org API keys. MCP Pattern specialises this. |
16. Specialisations
- Portal Pattern — gateway + Angular SPA topology, OIDC client, session JWT, tenant resolution, BFF endpoints, Hazelcast session replication.
- MCP Pattern — OAuth resource server, MCP transport (Spring AI starter), `@Tool`-annotated methods, audit aspect, per-organisation API key map, rate limiting.
- Java Library Pattern — Maven JAR libraries (`parent-pom`, `event-database`, `wordpress-database`, `spreadsheet-importer`) published to GitHub Packages. Re-uses the Maven module + CI/CD halves of this baseline; drops the runtime container baseline (Jib, Helm, ArgoCD, OTel, ports, probes).
When no specialisation fits, write a new sibling pattern doc and link it here.
17. Further Reading
Architecture:
- Hazelcast Configuration — relevant for portals + admin-service; not for MCP adapters
- JHipster — Keep & Drop — what we retain from JHipster scaffolding and what we replace
Build, deployment, operations:
18. Change History
| Date | Change |
|---|---|
| 2026-04-27 | Initial draft. Extracts the deployment baseline shared by Portal Pattern and MCP Pattern. Captures the GitHub Actions wrapper convention, OTel agent baking, podAnnotations rollover trigger, port assignments, bootstrap checklist. |
| 2026-04-29 | Add Java Library Pattern as a third specialisation — re-uses the Maven module + CI/CD halves, drops the runtime container baseline. Triggered by getting … |