In order to avoid 'wall-of-yaml' we use a declarative, composable configuration format with sane defaults that is transformed into Kubernetes objects.

Bjarte Karlsen, Technical Architect, NTA
A deploy starts in the AuroraAPI, triggered from one of the user-facing clients (AO and AuroraConsole) or automatically from the build pipeline. The API extracts and merges the relevant parts of the specified AuroraConfig to create an AuroraDeploymentSpec for the application being deployed.
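As a rough illustration of that merge step, the Kotlin sketch below folds layered config fragments into one effective spec, with later layers overriding earlier ones field by field. The layer names in the comments and the field names are assumptions for illustration, not the actual AuroraConfig format.

```kotlin
// A config fragment; later layers override earlier ones, field by field.
typealias ConfigFragment = Map<String, Any?>

// Deep-merge two fragments; values in `override` win over `base`,
// recursing when both sides hold a nested map.
@Suppress("UNCHECKED_CAST")
fun merge(base: ConfigFragment, override: ConfigFragment): ConfigFragment {
    val result = base.toMutableMap()
    for ((key, value) in override) {
        val existing = result[key]
        result[key] = if (existing is Map<*, *> && value is Map<*, *>)
            merge(existing as ConfigFragment, value as ConfigFragment)
        else value
    }
    return result
}

fun main() {
    // Hypothetical layering: global defaults, then environment overrides,
    // then app-specific values.
    val layers: List<ConfigFragment> = listOf(
        mapOf("replicas" to 1, "resources" to mapOf("memory" to "512Mi")), // defaults
        mapOf("resources" to mapOf("memory" to "1Gi")),                    // environment
        mapOf("replicas" to 3, "route" to true),                           // application
    )
    // Folding the layers left to right yields the effective deployment spec.
    val spec = layers.reduce(::merge)
    println(spec) // {replicas=3, resources={memory=1Gi}, route=true}
}
```

Keeping each layer small and letting defaults fill in the rest is what keeps the per-application configuration far shorter than the raw objects it expands into.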
From the AuroraDeploymentSpec we provision resources in our existing infrastructure and generate OpenShift objects that are applied to the cluster. The application is then rolled out, either by importing a new image or by triggering a new deploy. The deploy result is saved for later inspection.
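To make the generation step concrete, here is a minimal Kotlin sketch that expands a small spec into a Kubernetes-style object as a nested map. The `DeploymentSpec` type and the plain Deployment shape are assumptions; the real platform generates OpenShift objects and would apply them through the cluster API rather than printing them.

```kotlin
// Hypothetical, trimmed-down spec; the real AuroraDeploymentSpec has many more fields.
data class DeploymentSpec(val name: String, val image: String, val replicas: Int)

// Build a Deployment-shaped object as a nested map. A real implementation
// would use typed client classes and POST the result to the cluster.
fun toDeploymentObject(spec: DeploymentSpec): Map<String, Any> = mapOf(
    "apiVersion" to "apps/v1",
    "kind" to "Deployment",
    "metadata" to mapOf("name" to spec.name),
    "spec" to mapOf(
        "replicas" to spec.replicas,
        "selector" to mapOf("matchLabels" to mapOf("app" to spec.name)),
        "template" to mapOf(
            "metadata" to mapOf("labels" to mapOf("app" to spec.name)),
            "spec" to mapOf(
                "containers" to listOf(
                    mapOf("name" to spec.name, "image" to spec.image)
                )
            )
        )
    )
)

fun main() {
    println(toDeploymentObject(DeploymentSpec("simple", "registry/simple:1", 3)))
}
```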
Applications are built, tested and verified in our custom Jenkins CI/CD pipeline. They are then passed on to the proprietary CustomBuilder, Architect, as zip files called DeliveryBundles. A DeliveryBundle contains the application files and metadata.
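A DeliveryBundle can be pictured as an ordinary zip archive carrying both the application and a description of how to run it. The Kotlin sketch below assembles such an archive with java.util.zip; the internal layout (an app/ directory plus a metadata file) and the file names are assumptions for illustration, not the documented bundle format.

```kotlin
import java.io.ByteArrayOutputStream
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

// Pack a set of paths and their contents into a single zip archive in memory.
fun buildDeliveryBundle(files: Map<String, ByteArray>): ByteArray {
    val buffer = ByteArrayOutputStream()
    ZipOutputStream(buffer).use { zip ->
        for ((path, content) in files) {
            zip.putNextEntry(ZipEntry(path))
            zip.write(content)
            zip.closeEntry()
        }
    }
    return buffer.toByteArray()
}

fun main() {
    // Application files plus metadata describing how to run the app
    // (layout and metadata shape are illustrative assumptions).
    val bundle = buildDeliveryBundle(
        mapOf(
            "app/application.jar" to ByteArray(0), // placeholder payload
            "metadata/runtime.json" to """{"mainClass":"no.example.Main"}""".toByteArray(),
        )
    )
    println("bundle size: ${bundle.size} bytes")
}
```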
Builds are triggered in one of several ways:
We augment the application status data that OpenShift already keeps by regularly inspecting the master API and the management interface (part of our runtime contract) of each application. The extra status data we collect is compiled into a separate status value called AuroraStatus. This allows us, among other things, to build custom wallboards and alert integrations and to track error rates and 95th-percentile response times.
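The aggregation itself can be sketched as a worst-case fold over per-pod signals; the Kotlin below combines cluster-reported readiness with the application's management health into a single level. The level names and the rules are assumptions for illustration, and the real AuroraStatus computation is richer than this.

```kotlin
// Assumed status levels, ordered from best to worst.
enum class AuroraStatusLevel { HEALTHY, OBSERVE, DOWN }

// Two signals per pod: readiness from the cluster API and the health
// reported by the application's own management interface.
data class PodStatus(val ready: Boolean, val managementHealthy: Boolean)

// Combine all pod signals into one status value for the application.
fun auroraStatus(pods: List<PodStatus>): AuroraStatusLevel = when {
    pods.isEmpty() || pods.none { it.ready } -> AuroraStatusLevel.DOWN
    pods.any { !it.ready || !it.managementHealthy } -> AuroraStatusLevel.OBSERVE
    else -> AuroraStatusLevel.HEALTHY
}

fun main() {
    val pods = listOf(
        PodStatus(ready = true, managementHealthy = true),
        PodStatus(ready = true, managementHealthy = false),
    )
    println(auroraStatus(pods)) // OBSERVE: one pod's health endpoint is failing
}
```

Collapsing the two sources into one value is what makes a wallboard readable at a glance: a single level per application, backed by the detailed data when you drill in.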