Enabling HSSYS Mirroring Out of the Gate with IKO
For those of us building InterSystems workloads on Kubernetes, we are definitely spoiled, with the InterSystems Kubernetes Operator (IKO) doing the heavy lifting and mirroring on day one. Where we spoiled brats jump up and down is on day two, when we try to add additional databases/namespaces to deployments provisioned from HealthConnect containers. Others get to use HealthShare Mirroring for that task, but the prerequisite of mirroring HSSYS out of the gate has been somewhat elusive. Here is an example of how you can get this powerful feature up and running with IKO and IrisClusters.
HealthCare Mirroring
The documentation for this feature is great and highlights the functionality of what it protects, but it stops short of bragging about the magical operational add it provides. What it means for those automating workloads is that once you mirror HSSYS, any namespace provisioned after it gets mirrored automatically, for free.
The documentation describes a process for setting this up, but it is manual in nature, so we need to automate it and have IKO carry it out.
https://docs.intersystems.com/healthconnect20253/csp/docbook/Doc.View.cl...
The top three things that need to happen:
- Mirror HSSYS
- Schedule the Mirroring Agent
- Use the Installer Wizard to create a Foundation namespace on primary
Enabling IKO Features
The bits and pieces of functionality we exploited to get this to work:
- iko seeding
- iris-main --after operations
I wrote up seeding fairly thoroughly in a previous post:
https://community.intersystems.com/post/iko-plus-database-management-mig...
But something I like, and that we have been taking advantage of, is the --before and --after flags in the iris-main `args`:
args:
- --before
- /usr/bin/bash /hs/before/before.sh
- --after
- /usr/bin/bash /hs/after/after.sh
These are configmaps that are mounted as scripts and execute as their label indicates.
before.sh - IRIS is not available here; good for grabbing stuff, oras operations possibly, file system stuff, whatever.
after.sh - IRIS is available here; run ObjectScript code, import IPM packages, spin sugar.
1️⃣ Mirror
Obviously, mirror the cluster, but also declare a databases block with HSSYS marked as mirrored, using a seed from the container location of the database.
data:
irisDatabases:
- directory: /irissys/data/IRIS/mgr/hssys
mirrored: true
name: HSSYS
seed: /usr/irissys/mgr/hssys/
mirrorMap: primary,backup
mirrored: true
2️⃣ initContainer
Now, the yaml slingers in the audience will say this is a task for an agent or the operator itself, but it emulates the manual task of provisioning HSSYS as a mirror.
Add a volume and a volumeMount for the shared volume for the init container ...
volumeMounts:
- mountPath: /hs/before/
name: ikoplus-before-volume
- mountPath: /hs/after/
name: ikoplus-after-volume
- name: hssys-volume
mountPath: /hssys
...and the initContainer itself.
initContainers:
- name: hssys-copy
image: alpine:3.19
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
command: ["/bin/sh", "-c"]
args:
- |
set -euo pipefail
# Install deps: tar + kubectl
apk add --no-cache tar curl ca-certificates >/dev/null
KUBECTL_VERSION="${KUBECTL_VERSION:-v1.29.0}"
curl -fsSL -o /usr/local/bin/kubectl "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x /usr/local/bin/kubectl
if [ "$POD_NAME" = "{{ $.Release.Name }}-hssys-data-0-1" ]; then
echo "Hostname match; performing pod-to-pod hssys copy..."
kubectl cp -n {{ .Release.Namespace }} {{ $.Release.Name }}-hssys-data-0-0:/irissys/data/IRIS/mgr/hssys/IRIS.DAT /hssys/IRIS.DAT
echo "Copy complete."
else
echo "Hostname does not match; skipping copy."
fi
securityContext:
runAsUser: 0 # run as root
runAsNonRoot: false
readOnlyRootFilesystem: false
volumeMounts:
- name: hssys-volume
mountPath: /hssys
It's quite simple: the script in the init container only runs its copy if the pod starting up is the backup. It takes the primary's copy of hssys/IRIS.DAT and pops it onto the backup at /hssys/IRIS.DAT.
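The ordinal check can be sketched in isolation. This is a minimal, standalone illustration of the pattern, assuming IKO's data pod naming convention `<release>-hssys-data-0-<ordinal>`, where ordinal 1 is the backup member at provision time:

```shell
#!/usr/bin/env bash
# Sketch of the backup-detection used by the init container.
# Assumes pod names of the form <release>-hssys-data-0-<ordinal>,
# with ordinal 1 being the backup at provision time.
is_backup() {
  local ordinal="${1##*-}"   # everything after the last hyphen
  [ "$ordinal" = "1" ]
}

if is_backup "ikoplus14-hssys-data-0-1"; then
  echo "backup: perform the hssys copy"
else
  echo "primary: skip"
fi
```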
3️⃣ The after.sh party 🎉
Almost verbatim to the manual steps:
- Unmount
- Copy
- Mount
- Activate
- CatchUp
- Schedule Task
... only do this once, and only do this on the backup at provision time.
---
apiVersion: v1
data:
after.sh: |-
#!/usr/bin/bash
if ! [ -f "/irissys/data/after.done" ]; then
echo "{{ $.Release.Name }} After Script..."
if [[ "$(hostname)" == "{{ $.Release.Name }}-hssys-data-0-1" ]]; then
echo "After for mirror b only..."
iris session IRIS <<'EOF'
zn "%SYS"
w ##class(SYS.Database).%OpenId("/irissys/data/IRIS/mgr/hssys/").Dismount()
Set sc = ##class(%File).CopyFile("/hssys/IRIS.DAT", "/irissys/data/IRIS/mgr/hssys/IRIS.DAT", 1)
w ##class(SYS.Database).%OpenId("/irissys/data/IRIS/mgr/hssys/").Mount()
SET sc = ##class(SYS.Mirror).ActivateMirroredDatabase("/irissys/data/IRIS/mgr/hssys/")
Set db=##class(SYS.Database).%OpenId("/irissys/data/IRIS/mgr/hssys/")
set SFNlist = $LISTBUILD(db.SFN)
SET sc = ##class(SYS.Mirror).CatchupDB(SFNlist)
Halt
EOF
fi
iris session IRIS <<'EOF'
zn "%SYS"
set tSC=##Class(Security.Users).Create("ikoplus","%All","ikoplus","HSSYS","","","",0,1,,,,,,1,1)
zn "HSLIB"
do ##class(HS.Util.Mirror.Task).Schedule("HSSYS")
Halt
EOF
touch "/irissys/data/after.done"
fi
exit 0
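The `after.done` sentinel is a simple run-once guard. Here is a standalone sketch of that pattern, with the sentinel path moved to /tmp for illustration; the real script uses the durable /irissys/data volume so the marker survives pod restarts:

```shell
#!/usr/bin/env bash
# Run-once guard: a sentinel file on a persistent volume keeps the
# provisioning steps from re-running on every pod restart.
# DONE_FILE defaults to /tmp for this sketch only.
DONE_FILE="${DONE_FILE:-/tmp/after.done.$$}"

run_once() {
  if [ -f "$DONE_FILE" ]; then
    echo "already provisioned; skipping"
    return 0
  fi
  echo "provisioning..."
  # ... unmount/copy/mount/activate/catch-up work goes here ...
  touch "$DONE_FILE"
}

run_once   # first call does the work
run_once   # second call is a no-op
rm -f "$DONE_FILE"
```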
Note that a service account is needed in the data pod to exec across pods; you will see it in the reference IrisCluster below.
Reference IrisCluster
I'll paste this here as I ran it: as a chart, so the Helm tags are resident; adapt it to your own use or put on a pair of helm glasses. The things you do not see here are the secrets (the container pull secrets and licenses), but the rest is there, including the applicable service account with rootabega rights on the cluster to exec across pods.
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
name: {{ $.Release.Name }}-hssys
spec:
imagePullSecrets:
- name: containers-pull-secret
licenseKeySecret:
name: license-key-secret
serviceTemplate:
metadata: {}
spec:
externalTrafficPolicy: Local
type: LoadBalancer
tls:
common: {}
ecp: {}
iam: {}
mirror: {}
superserver: {}
webgateway: {}
topology:
arbiter:
image: containers.intersystems.com/intersystems/arbiter:2025.1
podTemplate:
controller: {}
metadata: {}
spec:
resources: {}
updateStrategy: {}
data:
compatibilityVersion: 2025.1.0
image: containers.intersystems.com/intersystems/healthconnect:2025.3
irisDatabases:
- directory: /irissys/data/IRIS/mgr/hssys
mirrored: true
name: HSSYS
seed: /usr/irissys/mgr/hssys/
mirrorMap: primary,backup
mirrored: true
podTemplate:
controller: {}
metadata: {}
spec:
serviceAccountName: {{ $.Release.Name }}-pod-exec-sa
args:
- --before
- /usr/bin/bash /hs/before/before.sh
- --after
- /usr/bin/bash /hs/after/after.sh
env:
- name: ENV_THINGER
value: {{ $.Release.Name }}-value
resources: {}
securityContext:
fsGroup: 51773
runAsGroup: 51773
runAsNonRoot: true
runAsUser: 51773
initContainers:
- name: hssys-copy
image: alpine:3.19
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
command: ["/bin/sh", "-c"]
args:
- |
set -euo pipefail
# Install deps: tar + kubectl
apk add --no-cache tar curl ca-certificates >/dev/null
KUBECTL_VERSION="${KUBECTL_VERSION:-v1.29.0}"
curl -fsSL -o /usr/local/bin/kubectl "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x /usr/local/bin/kubectl
if [ "$POD_NAME" = "{{ $.Release.Name }}-hssys-data-0-1" ]; then
echo "Hostname match; performing pod-to-pod hssys copy..."
kubectl cp -n {{ .Release.Namespace }} {{ $.Release.Name }}-hssys-data-0-0:/irissys/data/IRIS/mgr/hssys/IRIS.DAT /hssys/IRIS.DAT
echo "Copy complete."
else
echo "Hostname does not match; skipping copy."
fi
securityContext:
runAsUser: 0 # run as root
runAsNonRoot: false
readOnlyRootFilesystem: false
volumeMounts:
- name: hssys-volume
mountPath: /hssys
storageJournal1:
resources: {}
storageJournal2:
resources: {}
storageSYS:
resources: {}
storageWIJ:
resources: {}
updateStrategy: {}
volumeMounts:
- mountPath: /hs/before/
name: ikoplus-before-volume
- mountPath: /hs/after/
name: ikoplus-after-volume
- name: hssys-volume
mountPath: /hssys
webgateway:
alternativeServers: LoadBalancing
applicationPaths:
- /csp/sys
- /csp/bin
- /api
- /api-healthshare-rest/hssys
- /csp/bin
- /csp/broker
- /csp/healthshare
- /csp/user
ephemeral: true
image: containers.intersystems.com/intersystems/webgateway-lockeddown:2025.1
loginSecret:
name: webgateway-secret
podTemplate:
controller: {}
metadata: {}
spec:
resources: {}
replicas: 1
storageDB:
resources: {}
type: apache-lockeddown
updateStrategy: {}
updateStrategy:
type: RollingUpdate
volumes:
- configMap:
name: {{ $.Release.Name }}-before-script
name: ikoplus-before-volume
- configMap:
name: {{ $.Release.Name }}-after-script
name: ikoplus-after-volume
- name: hssys-volume
emptyDir: {}
---
apiVersion: v1
data:
after.sh: |-
#!/usr/bin/bash
if ! [ -f "/irissys/data/after.done" ]; then
echo "{{ $.Release.Name }} After Script..."
if [[ "$(hostname)" == "{{ $.Release.Name }}-hssys-data-0-1" ]]; then
echo "After for mirror b only..."
iris session IRIS <<'EOF'
zn "%SYS"
w ##class(SYS.Database).%OpenId("/irissys/data/IRIS/mgr/hssys/").Dismount()
Set sc = ##class(%File).CopyFile("/hssys/IRIS.DAT", "/irissys/data/IRIS/mgr/hssys/IRIS.DAT", 1)
w ##class(SYS.Database).%OpenId("/irissys/data/IRIS/mgr/hssys/").Mount()
SET sc = ##class(SYS.Mirror).ActivateMirroredDatabase("/irissys/data/IRIS/mgr/hssys/")
Set db=##class(SYS.Database).%OpenId("/irissys/data/IRIS/mgr/hssys/")
set SFNlist = $LISTBUILD(db.SFN)
SET sc = ##class(SYS.Mirror).CatchupDB(SFNlist)
Halt
EOF
fi
iris session IRIS <<'EOF'
zn "%SYS"
set tSC=##Class(Security.Users).Create("ikoplus","%All","ikoplus","HSSYS","","","",0,1,,,,,,1,1)
zn "HSLIB"
do ##class(HS.Util.Mirror.Task).Schedule("HSSYS")
Halt
EOF
touch "/irissys/data/after.done"
fi
exit 0
kind: ConfigMap
metadata:
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-weight: "-10"
name: {{ $.Release.Name }}-after-script
---
apiVersion: v1
data:
before.sh: |-
#! /usr/bin/bash
# IRIS Not Available Here
echo "{{ $.Release.Name }} Before Script..."
if [ "$ENV_THINGER" = "{{ $.Release.Name }}-value" ]; then
echo "{{ $.Release.Name }} Before..."
if [[ "$(hostname)" == "{{ $.Release.Name }}-hssys-data-0-1" ]]; then
echo "Armageddon!!!"
fi
fi
kind: ConfigMap
metadata:
name: {{ $.Release.Name }}-before-script
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ $.Release.Name }}-pod-exec-sa
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $.Release.Name }}-pod-exec-role
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $.Release.Name }}-pod-exec-binding
namespace: {{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: {{ $.Release.Name }}-pod-exec-sa
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: {{ $.Release.Name }}-pod-exec-role
apiGroup: rbac.authorization.k8s.io
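The wildcard Role above is the quick path to get things working. If you want to tighten it, `kubectl cp` only needs to read pods and create exec sessions, so a least-privilege sketch (the Role name here is illustrative, not from the chart) might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ $.Release.Name }}-pod-exec-tight
  namespace: {{ .Release.Namespace }}
rules:
  # kubectl cp streams a tar over pods/exec and reads pod metadata first
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
```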
I named the namespace after @Eduard Lebedyuk, who afforded me some banter while I was trying to figure this one out.
kubectl create ns eduard
helm install ikoplus14 .
Flow
So the dev story for the order of things is this:
IKO gets put into motion through the admission hook to create a mirrored HealthConnect deployment: a pair of mirrored data pods and an arbiter. The primary comes up with the instruction to create HSSYS from a seed in the container; then a bit of ugliness occurs, but nobody sees it except us, or observability if they are pushing logs.
If you were to freeze frame at this moment, the state would look a little bit like:
It's hidden inside a teaser because, well, it's ugly, and the ugliness occurs on both the primary and the backup at creation.
However, when after.sh runs on the backup as it comes up, it takes care of all of that by copying over HSSYS, activating it, catching things up, and covering it all up like an Epstein file in a public bucket.
Attestation
Now shell into your active primary, and create 5 databases using the installer.
For i=1:1:5 { Set ns = "OMOP"_i Do ##class(HS.Util.Installer.Foundation).Install(ns) }
Now sit back and relax: the task fires every 5 minutes out of the box, and the installer chugs a bit through the loop. Inspecting the backup member, you should see the mirrored databases being created, activated, and caught up one by one, for free.
(screenshot: backup member with the mirrored databases created, activated, and caught up)
Additionally, there is a UI to see how hard the mirroring agent is working and what it has been up to. Here you can see the last action was class mappings... if you have ever watched the installer do its thing in the UI or from the backend, that is definitely the latest of the operations.
(screenshot: mirroring agent activity, last action showing class mappings)