fix: revert memory request to 128Mi to fix pod scheduling failure

Increasing the request to 256Mi left backend pods stuck in Pending with no
node assignment: the scheduler could not fit them alongside MySQL (512Mi
request) and the existing pods on the on-premises nodes.
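When this happens, the scheduler's reasoning shows up in the pod's events via `kubectl describe pod` — something along these lines (illustrative output; the pod name and node counts will differ on the actual cluster):

    $ kubectl describe pod backend-xxxx | grep -A3 Events:
    Events:
      Type     Reason            Message
      Warning  FailedScheduling  0/1 nodes are available: 1 Insufficient memory.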

Memory REQUEST drives scheduling (how much the node reserves).
Memory LIMIT drives OOMKill (the actual cap at runtime).
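In manifest terms, the distinction is a sketch of the same stanza this commit touches (values taken from the diff below):

    resources:
      requests:
        memory: 128Mi   # reserved on the node; what the scheduler bin-packs against
      limits:
        memory: 512Mi   # runtime ceiling; exceeding it gets the container OOMKilled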

Keep request at 128Mi so pods schedule, limit at 512Mi so Node.js +
Socket.io + MySQL pool don't get OOMKilled on startup.

Also add terminationGracePeriodSeconds: 15 so pods from failed/previous
builds release their node slot quickly instead of blocking new pod scheduling.
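A quick way to confirm pods schedule after this change, using the deployment's own label from the manifest (the pod name and age shown are illustrative):

    $ kubectl get pods -l app.kubernetes.io/name=backend
    NAME                      READY   STATUS    RESTARTS   AGE
    backend-6d9f8b7c5-abcde   1/1     Running   0          30s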

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: tusuii
Date:   2026-02-27 23:32:58 +05:30
Parent: 55287c6f1d
Commit: bd9a952399
@@ -17,6 +17,7 @@ spec:
       app.kubernetes.io/name: backend
       app.kubernetes.io/component: api
     spec:
+      terminationGracePeriodSeconds: 15
       initContainers:
         - name: wait-for-mysql
           image: busybox:1.36
@@ -64,10 +65,10 @@ spec:
           resources:
             requests:
               cpu: 100m
-              memory: 256Mi
+              memory: 128Mi # Request drives scheduling — keep low so pods fit on nodes
             limits:
               cpu: 500m
-              memory: 512Mi
+              memory: 512Mi # Limit prevents OOMKill during startup spikes
           livenessProbe:
             httpGet:
               path: /api/health