Helm Chart Patterns

This document describes the patterns used to package and deploy applications using Helm charts in the Event and Membership Management system.

Overview

Each deployable application includes a Helm chart in src/main/helm/. Charts are:

  • Packaged using the Helm Maven plugin (io.kokuwa.maven:helm-maven-plugin)

  • Published to Docker Hub as OCI artifacts

  • Deployed via ArgoCD with environment-specific values
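
As a sketch of the deployment side, an ArgoCD Application can point at the published OCI chart and supply environment-specific values inline via valuesObject. The Application name, target revision, and destination namespace below are illustrative assumptions, not values taken from the actual deployment repository:

```yaml
# Illustrative ArgoCD Application (name, targetRevision, and namespace
# are assumptions for this sketch, not actual deployment values)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: registration-portal-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: registry-1.docker.io/christhonie  # OCI registry holding the chart
    chart: registration-portal
    targetRevision: 1.2.4-RELEASE
    helm:
      valuesObject:
        config:
          existingsecret: event-admin-service  # shared secret pattern, see below
  destination:
    server: https://kubernetes.default.svc
    namespace: event-dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```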

Chart Structure

Standard Helm Chart Layout
src/main/helm/
├── Chart.yaml
├── values.yaml
├── templates/
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── serviceaccount.yaml
│   ├── secret.yml
│   ├── config-map-application.yml
│   ├── role-service-discovery.yml
│   ├── service-discovery-role-binding.yml
│   ├── hpa.yaml
│   └── NOTES.txt

Chart.yaml Configuration

apiVersion: v2
name: registration-portal
description: Registration Portal Gateway
type: application
version: 0.0.0  # Replaced by Maven during build
appVersion: "0.0.0"  # Replaced by Maven during build

The version and appVersion fields are set by the Helm Maven plugin during packaging.

Values.yaml Pattern

Core Configuration Structure
# Application configuration
config:
  profiles: "prod,api-docs,kubernetes,otlp"
  existingsecret: ""  # Use shared secret if set

  db:
    url: ""  # Without jdbc: or r2dbc: prefix
    username: ""
    password: ""

  liquibase:
    contexts: ""

  services:
    apikey: ""
    eventadminservice: http://event-admin-service

  security:
    jwt:
      encryptionkey: ""  # Base64-encoded JWT signing key (used by secret.yml)
    oidc:
      enabled: false
      issuer: ""
      clientid: ""
      clientsecret: ""

  mail:
    from: [email protected]

  logging:
    level:
      ROOT: INFO

  otel:
    enabled: false
    url: ""

# Kubernetes resources
replicaCount: 1

image:
  repository: christhonie/registration-portal
  pullPolicy: IfNotPresent
  tag: ""  # Defaults to Chart.appVersion

imagePullSecrets: []

# Security
podSecurityContext:
  fsGroup: 2001

securityContext:
  runAsUser: 1001
  runAsGroup: 1001
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  capabilities:
    drop: [ALL]

# Networking
service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: ""
  hostname: chart-example.local
  tls: false
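
Because config.db.url is stored without a driver prefix, the deployment template can derive both JDBC and R2DBC URLs from the same value. The environment variable names below follow Spring Boot's relaxed binding; the exact template lines are a sketch of this convention, not a verbatim excerpt from the chart:

```yaml
# Sketch: hypothetical template lines deriving both URL forms from one value
- name: SPRING_DATASOURCE_URL
  value: "jdbc:{{ .Values.config.db.url }}"
- name: SPRING_R2DBC_URL
  value: "r2dbc:{{ .Values.config.db.url }}"
```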

Template Patterns

Helper Functions (_helpers.tpl)

{{/*
Expand the name of the chart.
*/}}
{{- define "registration-portal.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
*/}}
{{- define "registration-portal.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create the name of the application config secret
*/}}
{{- define "registration-portal.configSecret" -}}
{{- if .Values.config.existingsecret }}
{{- .Values.config.existingsecret }}
{{- else }}
{{- include "registration-portal.fullname" . }}
{{- end }}
{{- end }}

Deployment Template

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "registration-portal.fullname" . }}
  labels:
    {{- include "registration-portal.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "registration-portal.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "registration-portal.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          env:
          - name: SPRING_PROFILES_ACTIVE
            value: {{ .Values.config.profiles | quote }}
          - name: SPRING_CONFIG_IMPORT
            value: 'kubernetes:'
          - name: SPRING_DATASOURCE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: {{ include "registration-portal.configSecret" . }}
                key: spring.datasource.password
          {{- if .Values.config.liquibase.contexts }}
          - name: SPRING_LIQUIBASE_CONTEXTS
            value: {{ .Values.config.liquibase.contexts | quote }}
          {{- end }}
          ports:
            - name: http
              containerPort: 80
          readinessProbe:
            httpGet:
              path: /readyz
              port: http
          livenessProbe:
            httpGet:
              path: /livez
              port: http
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: logs
              mountPath: /logs
      volumes:
        - name: tmp
          emptyDir:
            sizeLimit: 500Mi
        - name: logs
          emptyDir:
            sizeLimit: 10Gi

Existing Secret Pattern

The existingsecret pattern allows multiple deployments to share a single Kubernetes Secret:

# In ArgoCD valuesObject
config:
  existingsecret: event-admin-service  # Use shared secret

This references an existing secret instead of creating a new one per deployment. The secret must contain:

  • spring.datasource.password - Database password

  • jhipster.security.authentication.jwt.base64-secret - JWT signing key

Secret Management

Creating Shared Secrets

Secrets are created manually or via sealed-secrets:

kubectl create secret generic event-admin-service \
  --namespace event-dev \
  --from-literal=spring.datasource.password='<password>' \
  --from-literal=jhipster.security.authentication.jwt.base64-secret='<jwt-secret>'
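
The base64-secret value can be generated locally before creating the secret. This is a minimal sketch assuming standard coreutils (head, base64, tr); any source of 64 random bytes works:

```shell
# Generate a random 64-byte key and base64-encode it for the
# jhipster.security.authentication.jwt.base64-secret entry
JWT_SECRET=$(head -c 64 /dev/urandom | base64 | tr -d '\n')
printf '%s\n' "${#JWT_SECRET}"   # 64 random bytes encode to 88 base64 characters
```

The resulting value is then passed to kubectl create secret via --from-literal.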

Secret Structure

The chart’s secret.yml template creates a secret when existingsecret is not set:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "registration-portal.fullname" . }}
type: Opaque
stringData:
  spring.datasource.password: {{ .Values.config.db.password | quote }}
  jhipster.security.authentication.jwt.base64-secret: {{ .Values.config.security.jwt.encryptionkey | quote }}

Spring Cloud Kubernetes Integration

Applications use Spring Cloud Kubernetes to load configuration from ConfigMaps and Secrets.

Required Environment Variable

- name: SPRING_CONFIG_IMPORT
  value: 'kubernetes:'
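
With kubernetes: in the config import, Spring Cloud Kubernetes loads ConfigMaps and Secrets associated with the application. The chart's config-map-application.yml template supplies the non-secret configuration; the shape below is a sketch of that pattern with illustrative keys drawn from values.yaml, not a verbatim copy of the template:

```yaml
# Sketch of the config-map-application.yml pattern (illustrative keys)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "registration-portal.fullname" . }}
data:
  application.yml: |-
    logging:
      level:
        ROOT: {{ .Values.config.logging.level.ROOT }}
```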

RBAC Requirements

The chart includes RBAC resources for service discovery:

role-service-discovery.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "registration-portal.fullname" . }}-service-discovery
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
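
The companion service-discovery-role-binding.yml binds this Role to the chart's ServiceAccount. The sketch below assumes the ServiceAccount is named by the fullname helper, matching the serviceaccount.yaml convention:

```yaml
# Sketch of service-discovery-role-binding.yml (assumes the ServiceAccount
# name produced by the fullname helper)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "registration-portal.fullname" . }}-service-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "registration-portal.fullname" . }}-service-discovery
subjects:
  - kind: ServiceAccount
    name: {{ include "registration-portal.fullname" . }}
```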

Ingress Configuration

ingress:
  enabled: true
  className: "nginx"
  annotations:
    "cert-manager.io/cluster-issuer": "letsencrypt-prod"
    "external-dns.alpha.kubernetes.io/cloudflare-proxied": "false"
  hostname: app-dev.idealogic.co.za
  pathType: Prefix
  tls: true

Building and Publishing Charts

Maven Helm Plugin

<plugin>
    <groupId>io.kokuwa.maven</groupId>
    <artifactId>helm-maven-plugin</artifactId>
    <configuration>
        <chartDirectory>src/main/helm</chartDirectory>
        <chartVersion>${revision}-RELEASE</chartVersion>
        <appVersion>${revision}</appVersion>
        <uploadRepoStable>
            <name>dockerhub</name>
            <url>oci://registry-1.docker.io/christhonie</url>
            <type>OCI</type>
        </uploadRepoStable>
    </configuration>
</plugin>

Version Convention

  • Chart Version: <MAVEN_VERSION>-RELEASE (e.g., 1.2.4-SNAPSHOT-RELEASE)

  • App Version: <MAVEN_VERSION> (e.g., 1.2.4-SNAPSHOT)

  • Docker Image Tag: <MAVEN_VERSION> (matches appVersion)

Publishing

Charts are published via GitHub Actions:

mvn helm:init
mvn helm:package helm:push

Common Issues

Pod Fails to Start - CreateContainerError

Cause: The Docker image doesn’t have a proper entrypoint/command.

Solution: Ensure the Dockerfile has a proper ENTRYPOINT or CMD.

Pod CrashLoopBackOff - Exit Code 0

Cause: Missing Spring Cloud Kubernetes dependencies.

Solution:

  1. Ensure spring-cloud-starter-kubernetes-fabric8-config is in dependencies

  2. Verify SPRING_CONFIG_IMPORT=kubernetes: is set

  3. Check RBAC permissions allow reading ConfigMaps/Secrets

Database Connection Failed

Cause: Secret not found or wrong key names.

Solution:

  1. Verify secret exists: kubectl get secret <name> -n <namespace>

  2. Check secret has required keys: kubectl get secret <name> -o yaml

  3. Ensure existingsecret value matches actual secret name
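
Secret values are stored base64-encoded, so decode them when verifying contents. A minimal local sketch of the decode step (the encoded string here is only an illustration):

```shell
# base64 -d reverses the encoding kubectl applies to secret data; in-cluster:
#   kubectl get secret <name> -o jsonpath='{.data.spring\.datasource\.password}' | base64 -d
printf 'c3VwZXJzZWNyZXQ=' | base64 -d   # prints: supersecret
```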

Ingress Not Working

Cause: Missing ingress controller or annotations.

Solution:

  1. Verify ingress controller is installed: kubectl get pods -n ingress-nginx

  2. Check cert-manager is running for TLS

  3. Verify DNS records point to ingress IP