Multiple CodecoApps
Hello,
I would like to ask how we can deploy two separate application groups in parallel using CAM, such that the solver and the scheduler treat them as distinct applications. Is this supported in the current CAM architecture?
In KinD with SWM v2.0.6 and the ASSURED service class (even though the L2SM network interfaces do not appear to be used), I changed the name in the example YAML to codecoappinstance4, commented out resourceVersion and uid, changed the service names, and deployed the CodecoApp. It was running:
apiVersion: codeco.he-codeco.eu/v1alpha1
kind: CodecoApp
metadata:
  generation: 1
  name: codecoappinstance4
  namespace: he-codeco-acm
  #resourceVersion: "1456"
  #uid: 5c948d7e-43d6-425b-b0b2-76402b606e07
spec:
  appEnergyLimit: "20"
  appFailureTolerance: ""
  appName: acm-swm-app
  codecoapp-msspec:
  - podspec:
      containers:
      - image: quay.io/skupper/hello-world-backend:latest
        name: skupper-backend
        ports:
        - containerPort: 8080
          name: skupper-backend
          protocol: TCP
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
    serviceChannels:
    - advancedChannelSettings:
        minBandwidth: "5"
        frameSize: "100"
        maxDelay: "1"
        sendInterval: "10"
      channelName: frontend-v4
      serviceClass: ASSURED
      otherService:
        appName: acm-swm-app
        port: 9090
        serviceName: front-end-v4
    serviceName: backend-v4
  - podspec:
      containers:
      - image: quay.io/dekelly/frontend-app:v0.0.2
        name: front-end
        ports:
        - containerPort: 8080
          protocol: TCP
    serviceChannels:
    - advancedChannelSettings:
        minBandwidth: "5"
        frameSize: "100"
        maxDelay: "1"
        sendInterval: "10"
      channelName: backend-v4
      serviceClass: ASSURED
      otherService:
        appName: acm-swm-app
        port: 8080
        serviceName: backend-v4
    serviceName: front-end-v4
  complianceClass: High
  qosClass: Gold
  securityClass: Good
$ k get pods -n he-codeco-acm
NAME READY STATUS RESTARTS AGE
acm-operator-controller-manager-568889675d-bglnm 2/2 Running 0 44m
acm-swm-app-backend-v4 1/1 Running 0 6s
acm-swm-app-front-end-v4 1/1 Running 0 6s
$ kubectl get applications.qos-scheduler.siemens.com -n he-codeco-acm acm-swm-app -o yaml
apiVersion: qos-scheduler.siemens.com/v1alpha1
kind: Application
metadata:
  creationTimestamp: "2025-06-20T13:07:46Z"
  generation: 2
  labels:
    application-group: acm-applicationgroup
  name: acm-swm-app
  namespace: he-codeco-acm
  ownerReferences:
  - apiVersion: qos-scheduler.siemens.com/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ApplicationGroup
    name: acm-applicationgroup
    uid: 9cf96d79-3d8d-4bab-89cf-4381d01c9385
  resourceVersion: "61780"
  uid: 232546ba-b8f0-44ff-9809-e7a54c630cdd
spec:
  workloads:
  - basename: backend-v4
    channels:
    - bandwidth: "5"
      basename: frontend-v4
      framesize: "100"
      maxDelay: "1"
      otherWorkload:
        applicationName: acm-swm-app
        basename: front-end-v4
        port: 9090
      sendInterval: "10"
      serviceClass: ASSURED
    costs: {}
    nodeRecommendations: {}
    template:
      metadata: {}
      spec:
        containers:
        - image: quay.io/skupper/hello-world-backend:latest
          name: skupper-backend
          ports:
          - containerPort: 8080
            name: skupper-backend
            protocol: TCP
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
  - basename: front-end-v4
    channels:
    - bandwidth: "5"
      basename: backend-v4
      framesize: "100"
      maxDelay: "1"
      otherWorkload:
        applicationName: acm-swm-app
        basename: backend-v4
        port: 8080
      sendInterval: "10"
      serviceClass: ASSURED
    costs: {}
    nodeRecommendations: {}
    template:
      metadata: {}
      spec:
        containers:
        - image: quay.io/dekelly/frontend-app:v0.0.2
          name: front-end
          ports:
          - containerPort: 8080
            protocol: TCP
          resources: {}
status:
  phase: Running
$ k describe codecoapps.codeco.he-codeco.eu codecoappinstance4 -n he-codeco-acm
Name:         codecoappinstance4
Namespace:    he-codeco-acm
Labels:       <none>
Annotations:  <none>
API Version:  codeco.he-codeco.eu/v1alpha1
Kind:         CodecoApp
Metadata:
  Creation Timestamp:  2025-06-20T13:07:46Z
  Generation:          1
  Resource Version:    61880
  UID:                 2f2ad8a0-6439-4c3d-885a-1548f181db38
Spec:
  App Energy Limit:       20
  App Failure Tolerance:
  App Name:               acm-swm-app
  Codecoapp - Msspec:
    Podspec:
      Containers:
        Image:  quay.io/skupper/hello-world-backend:latest
        Name:   skupper-backend
        Ports:
          Container Port:  8080
          Name:            skupper-backend
          Protocol:        TCP
        Resources:
          Limits:
            Cpu:     2
            Memory:  4Gi
    Service Channels:
      Advanced Channel Settings:
        Frame Size:     100
        Max Delay:      1
        Min Bandwidth:  5
        Send Interval:  10
      Channel Name:  frontend-v4
      Other Service:
        App Name:      acm-swm-app
        Port:          9090
        Service Name:  front-end-v4
      Service Class:  ASSURED
    Service Name:     backend-v4
    Podspec:
      Containers:
        Image:  quay.io/dekelly/frontend-app:v0.0.2
        Name:   front-end
        Ports:
          Container Port:  8080
          Protocol:        TCP
    Service Channels:
      Advanced Channel Settings:
        Frame Size:     100
        Max Delay:      1
        Min Bandwidth:  5
        Send Interval:  10
      Channel Name:  backend-v4
      Other Service:
        App Name:      acm-swm-app
        Port:          8080
        Service Name:  backend-v4
      Service Class:  ASSURED
    Service Name:     front-end-v4
  Compliance Class:   High
  Qos Class:          Gold
  Security Class:     Good
Status:
  App Metrics:
    Num Pods:  2
    Service Metrics:
      Avg Service Cpu:     0.431660
      Avg Service Memory:  2200917.333333
      Cluster Name:        codeco-cluster-1
      Node Name:           kind-control-plane
      Pod Name:            acm-swm-app-front-end-v4
      Service Name:        backend-v4
      Avg Service Cpu:     0.431660
      Avg Service Memory:  1880064.000000
      Cluster Name:        codeco-cluster-1
      Node Name:           kind-control-plane
      Pod Name:            acm-swm-app-backend-v4
      Service Name:        front-end-v4
  Node Metrics:
    Avg Node Cpu:     0.494222
    Avg Node Energy:  0.000000
    Avg Node Memory:  11723853824.000000
    Node Name:        kind-control-plane
    Avg Node Cpu:     0.505417
    Avg Node Energy:  0.000000
    Avg Node Memory:  11726417920.000000
    Node Name:        c2
    Avg Node Cpu:     0.500806
    Avg Node Energy:  0.000000
    Avg Node Memory:  11735094272.000000
    Node Name:        c1
Events:  <none>
Afterwards, deploying the default example app YAML (with no changes) results in buggy behavior: the previously deployed app is still running, but the new one does not show up, even though the applications.qos-scheduler.siemens.com CR named acm-swm-app reflects the new spec. Only one Application CR exists for both apps. Changing the app names in CAM does not deploy new pods either (and as far as I can see in the CAM CRD, the app name is not meant to be changed, since we would have to adjust codecoapp_types.go):
appName:
  description: Name is an used to identify the CODECO application. Edit
    codecoapp_types.go to remove/update
  type: string
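For reference, this is the kind of renaming I attempted (a minimal sketch of a second, independent instance; the "-2" and "-v5" names are placeholders, and I am assuming that every name-carrying field, i.e. appName, serviceName, channelName, and the otherService references, has to be changed consistently):

apiVersion: codeco.he-codeco.eu/v1alpha1
kind: CodecoApp
metadata:
  name: codecoappinstance5          # placeholder: distinct CR name
  namespace: he-codeco-acm
spec:
  appName: acm-swm-app-2            # placeholder: distinct application name
  codecoapp-msspec:
  - podspec:
      containers:
      - image: quay.io/skupper/hello-world-backend:latest
        name: skupper-backend
        ports:
        - containerPort: 8080
          protocol: TCP
    serviceChannels:
    - channelName: frontend-v5      # placeholder: distinct channel name
      serviceClass: ASSURED
      otherService:
        appName: acm-swm-app-2      # must match the new appName above
        port: 9090
        serviceName: front-end-v5   # placeholder: distinct peer service name
    serviceName: backend-v5         # placeholder: distinct service name
  qosClass: Gold

(the front-end service would be renamed analogously; advancedChannelSettings are omitted for brevity). Even with this kind of renaming, no new pods were deployed.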
$ k get pods -n he-codeco-acm
NAME READY STATUS RESTARTS AGE
acm-operator-controller-manager-568889675d-bglnm 2/2 Running 0 46m
acm-swm-app-backend-v4 1/1 Running 0 2m17s
acm-swm-app-front-end-v4 1/1 Running 0 2m17s
$ k get codecoapps.codeco.he-codeco.eu -A
NAMESPACE NAME AGE
he-codeco-acm codecoappinstance3 76s
he-codeco-acm codecoappinstance4 3m14s
$ kubectl get applications.qos-scheduler.siemens.com -n he-codeco-acm acm-swm-app -o yaml
apiVersion: qos-scheduler.siemens.com/v1alpha1
kind: Application
metadata:
  creationTimestamp: "2025-06-20T13:07:46Z"
  generation: 5
  labels:
    application-group: acm-applicationgroup
  name: acm-swm-app
  namespace: he-codeco-acm
  ownerReferences:
  - apiVersion: qos-scheduler.siemens.com/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ApplicationGroup
    name: acm-applicationgroup
    uid: 9cf96d79-3d8d-4bab-89cf-4381d01c9385
  resourceVersion: "62479"
  uid: 232546ba-b8f0-44ff-9809-e7a54c630cdd
spec:
  workloads:
  - basename: backend
    channels:
    - bandwidth: "5"
      basename: frontend
      framesize: "100"
      maxDelay: "1"
      otherWorkload:
        applicationName: acm-swm-app
        basename: front-end
        port: 9090
      sendInterval: "10"
      serviceClass: ASSURED
    costs: {}
    nodeRecommendations:
      c1: 100
      c2: 58.68350816212525
      kind-control-plane: 0
    template:
      metadata: {}
      spec:
        containers:
        - image: quay.io/skupper/hello-world-backend:latest
          name: skupper-backend
          ports:
          - containerPort: 8080
            name: skupper-backend
            protocol: TCP
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
  - basename: front-end
    channels:
    - bandwidth: "5"
      basename: backend
      framesize: "100"
      maxDelay: "1"
      otherWorkload:
        applicationName: acm-swm-app
        basename: backend
        port: 8080
      sendInterval: "10"
      serviceClass: ASSURED
    costs: {}
    nodeRecommendations:
      c1: 100
      c2: 58.68350816212525
      kind-control-plane: 0
    template:
      metadata: {}
      spec:
        containers:
        - image: quay.io/dekelly/frontend-app:v0.0.2
          name: front-end
          ports:
          - containerPort: 8080
            protocol: TCP
          resources: {}
status:
  phase: Running
Could you confirm which fields need to be changed so that I can run two application groups in parallel in the cluster? That would also make it clear where the root issue lies.
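For completeness, one way to see the collision at a glance (the jsonpath below is just a sketch): listing the Application CRs together with their owning ApplicationGroup should yield two lines if the two app groups were treated as distinct, but currently yields only acm-swm-app -> acm-applicationgroup:

$ kubectl get applications.qos-scheduler.siemens.com -n he-codeco-acm \
    -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[0].name}{"\n"}{end}'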