25 Commits

Author SHA1 Message Date
tusuii
c55c0dff69 fix: remove MetalLB setup stage — rely on pre-installed MetalLB
Some checks failed
test reactjs website/pipeline/head Build started...
scrum-manager/pipeline/head There was a failure building this commit
MetalLB is already installed and configured on the cluster. The pipeline
no longer needs to apply IPAddressPool or L2Advertisement resources.
Removed the 'Setup MetalLB' stage and deleted the metallb overlay files.
The frontend Service type: LoadBalancer is already set, so MetalLB will
automatically assign an external IP on deployment.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:45:41 +05:30
tusuii
c6bb1ac9b4 fix: make MetalLB IP pool apply resilient to broken webhook state
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
Wait for the MetalLB controller deployment to be ready before applying
IPAddressPool/L2Advertisement CRDs. If the webhook service has no ready
endpoints (stale ClusterIP from a previously removed controller), delete
the ValidatingWebhookConfiguration so the apply is not blocked. This
prevents the 'connection refused' webhook failure seen when a duplicate
MetalLB install left behind a broken webhook service endpoint.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:38:40 +05:30
tusuii
d067dbfc44 fix: stop reinstalling MetalLB — cluster already has it running
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
MetalLB was already installed (metallb-speaker-* / metallb-controller-*)
32 days ago. Applying metallb-native.yaml created duplicate controller and
speaker resources. The new speaker pods could not schedule because the
existing metallb-speaker-* pods already occupy the host ports (7472, 7946)
on all 3 nodes: "1 node(s) didn't have free ports for the requested pod ports"

Fix: remove the kubectl apply for metallb-native.yaml — just apply the
IPAddressPool and L2Advertisement configs which is all we need.

Manual cluster cleanup required (one-time):
  kubectl delete deployment controller -n metallb-system
  kubectl delete daemonset speaker -n metallb-system

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:31:01 +05:30
tusuii
57c3c14b48 fix: make MetalLB speaker rollout non-blocking with diagnostics
Speaker DaemonSet on CPU-constrained cluster takes >180s to start all 3 pods.
Don't fail the entire pipeline — warn and print speaker pod status instead.
Controller must still be ready (it handles IP assignment) before continuing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:27:37 +05:30
tusuii
245301450c fix: use maxSurge=0 rolling update to avoid CPU pressure on small cluster
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
During rolling updates with the default maxSurge=1, an extra surge pod was
created temporarily (3 pods instead of 2), causing all 3 nodes to report
"Insufficient CPU" and delaying scheduling past the Jenkins rollout timeout.

With maxSurge=0 / maxUnavailable=1, one old pod terminates first before a
new one starts — pod count stays at 2 throughout, no extra CPU needed.

Also increase Jenkins rollout timeout from 300s to 600s as a safety net
for CPU-constrained nodes that may still need extra scheduling time.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:10:04 +05:30
tusuii
7900114303 fix: increase MetalLB speaker daemonset rollout timeout to 180s
Speaker runs on all 3 nodes and needs image pull + startup time per node.
90s was too tight — bumped to 180s to handle slow node startups.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:07:55 +05:30
tusuii
69f7b4a93d feat: add MetalLB for on-premise LoadBalancer support
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
- Add MetalLB IPAddressPool (192.168.108.213/32) and L2Advertisement
  so the frontend gets a stable external IP on the LAN
- Change frontend service type: NodePort → LoadBalancer
- Add 'Setup MetalLB' stage in Jenkinsfile that installs MetalLB v0.14.8
  (idempotent) and applies the IP pool config before each deploy

After deploy: kubectl get svc frontend -n scrum-manager
should show EXTERNAL-IP: 192.168.108.213
App accessible at: http://192.168.108.213

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:00:04 +05:30
tusuii
7e58d758f2 fix: align secret key references — backend was looking for DB_USER which doesn't exist
All checks were successful
scrum-manager/pipeline/head This commit looks good
Root cause: backend deployment.yaml referenced secretKeyRef key: DB_USER and
key: DB_PASSWORD, but the live secret only has MYSQL_USER and MYSQL_PASSWORD.
kubectl apply reported secret/mysql-secret as "unchanged" (last-applied matched
desired) so the drift was never caught — new pods got CreateContainerConfigError.

Changes:
- backend/deployment.yaml: DB_USER → key: MYSQL_USER, DB_PASSWORD → key: MYSQL_PASSWORD
- mysql/deployment.yaml: add MYSQL_USER/MYSQL_PASSWORD env vars so the app user
  (scrumapp) is created if MySQL ever reinitializes from a fresh PVC
- mysql/secret.yaml: remove stale commented-out block with old key names

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:38:59 +05:30
tusuii
bd9a952399 fix: revert memory request to 128Mi to fix pod scheduling failure
Increasing the request to 256Mi caused backend pods to be Pending with no
node assignment — the scheduler couldn't fit them alongside MySQL (512Mi
request) and existing pods on the on-premise nodes.

Memory REQUEST drives scheduling (how much the node reserves).
Memory LIMIT drives OOMKill (the actual cap at runtime).

Keep request at 128Mi so pods schedule, limit at 512Mi so Node.js +
Socket.io + MySQL pool don't get OOMKilled on startup.

Also add terminationGracePeriodSeconds: 15 so pods from failed/previous
builds release their node slot quickly instead of blocking new pod scheduling.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:32:58 +05:30
tusuii
55287c6f1d fix: increase backend memory limit and add rollout failure diagnostics
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
Backend was OOMKilled during rolling update startup (Node.js + Socket.io +
MySQL pool exceeds 256Mi). Raised limit to 512Mi and request to 256Mi.

Jenkinsfile: show kubectl get pods immediately after apply so pod state
is visible in build logs. Added full diagnostics (describe + logs) in
post.failure block so the root cause of any future rollout failure is
visible without needing to SSH into the cluster.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:24:19 +05:30
tusuii
254052d798 fix: set storageClassName=local-path in PVC patch to match live cluster
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
kubectl apply computes a 3-way merge. The base PVC has no storageClassName
(nil), but the already-bound PVC in the cluster has storageClassName=local-path.
This diff caused apply to attempt a mutation on a bound PVC — forbidden by k8s.

Fix: patch the PVC with storageClassName=local-path so desired state matches
live state and apply produces no diff on the PVC.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:08:36 +05:30
tusuii
5ed8d0bbdc fix: remove PVC patch that broke kubectl apply on bound claims
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
The mysql-data-pvc was already dynamically provisioned by the cluster's
'local-path' StorageClass. The overlay patch tried to change storageClassName
to 'manual' and volumeName on an already-bound PVC, which Kubernetes forbids:
  "spec is immutable after creation except resources.requests"

Fixes:
- Remove mysql-pvc-patch from kustomization.yaml (PVC left as-is)
- Remove mysql-pv.yaml resource (not needed with dynamic provisioner)
- Add comment explaining when manual PV/PVC is needed vs not

Jenkinsfile: add --timeout and FQDN to smoke test curl; add comments
explaining MySQL Recreate strategy startup timing expectations.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:02:54 +05:30
tusuii
73bd35173c fix: k8s on-premise deployment and session persistence
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
Database fixes:
- Add hostPath.type=DirectoryOrCreate so kubelet auto-creates /mnt/data/mysql
- Add fsGroup=999 so MySQL process can write to the hostPath volume
- Add MYSQL_ROOT_HOST=% to allow backend pods to authenticate as root
- Fix liveness/readiness probes to include credentials (-p$MYSQL_ROOT_PASSWORD)
- Increase probe initialDelaySeconds (30/60s) for slow first-run init
- Add 15s grace sleep in backend initContainer after MySQL TCP is up
- Add persistentVolumeReclaimPolicy=Retain to prevent accidental data loss
- Explicit accessModes+resources in PVC patch to avoid list merge ambiguity
- Add nodeAffinity comment in PV for multi-node cluster guidance

Ingress/nginx fixes:
- Remove broken rewrite-target=/ that was rewriting all paths (incl /api) to /
- Route /socket.io directly to backend for WebSocket support
- Add /socket.io/ proxy location to both nginx.conf and K8s ConfigMap

Frontend fix:
- Persist currentUser to localStorage on login so page refresh no longer
  clears session and redirects users back to the login page

Tooling:
- Add k8s/overlays/on-premise/deploy.sh for one-command deployment

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 22:51:57 +05:30
fa8efe874e working proper jenkinsfile 2026-02-22 12:34:05 +00:00
748ce24e87 Update Jenkinsfile
All checks were successful
scrum-manager/pipeline/head This commit looks good
2026-02-22 12:24:41 +00:00
d04b1adf7c Delete k8s/overlays/on-premise/mysql-pv.yaml 2026-02-22 12:22:43 +00:00
6c19e8d747 patch
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 12:12:01 +00:00
65c82c2e4c Update k8s/base/mysql/secret.yaml 2026-02-22 12:09:03 +00:00
e5633f9ebc patch 2026-02-22 12:07:42 +00:00
503234c12f patch 2026-02-22 12:04:24 +00:00
899509802c patch
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:41:16 +00:00
a4234ded64 kustomisation patch
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:30:36 +00:00
58ec73916a patch 2026-02-22 11:29:56 +00:00
e23bb94660 jenkinsfile
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:07:30 +00:00
ad65ab824e jenkinsfile
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:06:21 +00:00
15 changed files with 471 additions and 156 deletions

148
Jenkinsfile vendored
View File

@@ -7,13 +7,11 @@ pipeline {
IMAGE_TAG = "${env.BUILD_NUMBER}" IMAGE_TAG = "${env.BUILD_NUMBER}"
K8S_CRED_ID = 'k8s-config' K8S_CRED_ID = 'k8s-config'
// Full image refs
FRONTEND_IMAGE = '192.168.108.200:80/library/scrum-frontend' FRONTEND_IMAGE = '192.168.108.200:80/library/scrum-frontend'
BACKEND_IMAGE = '192.168.108.200:80/library/scrum-backend' BACKEND_IMAGE = '192.168.108.200:80/library/scrum-backend'
// Path to the app inside the workspace // Workspace root IS the project root — no subdirectory needed
APP_DIR = 'scrum-manager' K8S_OVERLAY = 'k8s/overlays/on-premise'
K8S_OVERLAY = 'scrum-manager/k8s/overlays/on-premise'
} }
options { options {
@@ -24,62 +22,49 @@ pipeline {
stages { stages {
// 1. CHECKOUT
stage('Checkout') { stage('Checkout') {
steps { steps {
checkout scm checkout scm
echo "Building commit: ${env.GIT_COMMIT?.take(8) ?: 'local'}" echo "Workspace: ${env.WORKSPACE}"
sh 'ls -la' // quick sanity check — confirm Dockerfile is here
} }
} }
// 2. UNIT TESTS (backend + frontend in parallel)
stage('Test') { stage('Test') {
parallel { parallel {
stage('Backend Tests') { stage('Backend Tests') {
steps { steps {
dir("${APP_DIR}/server") { dir('server') { // server/ relative to workspace root
sh ''' sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
npm ci
npm test -- --reporter=verbose 2>&1 || true
'''
} }
} }
} }
stage('Frontend Tests') { stage('Frontend Tests') {
steps { steps {
dir("${APP_DIR}") { // frontend lives at workspace root
sh ''' sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
npm ci
npm test -- --reporter=verbose 2>&1 || true
'''
}
} }
} }
} }
} }
// 3. BUILD DOCKER IMAGES (parallel)
stage('Build Images') { stage('Build Images') {
parallel { parallel {
stage('Build Frontend') { stage('Build Frontend') {
steps { steps {
dir("${APP_DIR}") { // Dockerfile is at workspace root
sh """ sh """
docker build \ docker build \
-f Dockerfile \ -f Dockerfile \
-t ${FRONTEND_IMAGE}:${IMAGE_TAG} \ -t ${FRONTEND_IMAGE}:${IMAGE_TAG} \
-t ${FRONTEND_IMAGE}:latest \ -t ${FRONTEND_IMAGE}:latest \
. .
""" """
}
} }
} }
stage('Build Backend') { stage('Build Backend') {
steps { steps {
dir("${APP_DIR}/server") { dir('server') { // server/Dockerfile
sh """ sh """
docker build \ docker build \
-f Dockerfile \ -f Dockerfile \
@@ -93,8 +78,6 @@ pipeline {
} }
} }
// 4. PUSH TO HARBOR
stage('Push to Harbor') { stage('Push to Harbor') {
steps { steps {
withCredentials([usernamePassword( withCredentials([usernamePassword(
@@ -105,11 +88,9 @@ pipeline {
sh """ sh """
echo \$HARBOR_PASS | docker login ${HARBOR_URL} -u \$HARBOR_USER --password-stdin echo \$HARBOR_PASS | docker login ${HARBOR_URL} -u \$HARBOR_USER --password-stdin
# Frontend
docker push ${FRONTEND_IMAGE}:${IMAGE_TAG} docker push ${FRONTEND_IMAGE}:${IMAGE_TAG}
docker push ${FRONTEND_IMAGE}:latest docker push ${FRONTEND_IMAGE}:latest
# Backend
docker push ${BACKEND_IMAGE}:${IMAGE_TAG} docker push ${BACKEND_IMAGE}:${IMAGE_TAG}
docker push ${BACKEND_IMAGE}:latest docker push ${BACKEND_IMAGE}:latest
""" """
@@ -117,15 +98,10 @@ pipeline {
} }
} }
// 5. PATCH KUSTOMIZE IMAGE TAGS
// (writes the exact build tag into the overlay
// so K8s pulls the right image, not :latest)
stage('Patch Image Tags') { stage('Patch Image Tags') {
steps { steps {
dir("${K8S_OVERLAY}") { dir("${K8S_OVERLAY}") {
sh """ sh """
# Requires kustomize v4+ on the Jenkins agent
kustomize edit set image \ kustomize edit set image \
scrum-frontend=${FRONTEND_IMAGE}:${IMAGE_TAG} \ scrum-frontend=${FRONTEND_IMAGE}:${IMAGE_TAG} \
scrum-backend=${BACKEND_IMAGE}:${IMAGE_TAG} scrum-backend=${BACKEND_IMAGE}:${IMAGE_TAG}
@@ -134,66 +110,50 @@ pipeline {
} }
} }
// 6. DEPLOY TO KUBERNETES
stage('Deploy to K8s') { stage('Deploy to K8s') {
steps { steps {
withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) { withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
script { sh "kubectl apply -k ${K8S_OVERLAY}"
echo "Applying kustomize overlay → scrum-manager namespace"
// Apply the full kustomize overlay (namespace, PV, secrets, deployments, ingress) // Show pod state immediately after apply so we can see pull/init status in logs
sh "kubectl apply -k ${K8S_OVERLAY}" sh "kubectl get pods -n scrum-manager -o wide"
// ── Wait for rollouts ────────────────────────────── // MySQL uses Recreate strategy: old pod terminates then new starts.
echo "Waiting for MySQL..." sh "kubectl rollout status deployment/mysql -n scrum-manager --timeout=300s"
sh """
kubectl rollout status deployment/mysql \
-n scrum-manager --timeout=120s
"""
echo "Waiting for Backend..." // maxSurge=0: old pod terminates first, new pod starts after.
sh """ // CPU-constrained nodes may delay scheduling — 600s covers this.
kubectl rollout status deployment/backend \ sh "kubectl rollout status deployment/backend -n scrum-manager --timeout=600s"
-n scrum-manager --timeout=120s
"""
echo "Waiting for Frontend..." sh "kubectl rollout status deployment/frontend -n scrum-manager --timeout=600s"
sh """
kubectl rollout status deployment/frontend \
-n scrum-manager --timeout=120s
"""
echo "All deployments rolled out successfully." echo "All deployments rolled out."
}
} }
} }
} }
// 7. SMOKE TEST (quick sanity check via K8s)
stage('Smoke Test') { stage('Smoke Test') {
steps { steps {
withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) { withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
// Run a curl pod inside the cluster to hit the backend health endpoint.
// Uses FQDN (backend.scrum-manager.svc.cluster.local) to be explicit.
sh """ sh """
# Hit the backend health endpoint from inside the cluster kubectl run smoke-${BUILD_NUMBER} \
# using a one-shot pod so no NodePort/Ingress dependency --image=curlimages/curl:8.5.0 \
kubectl run smoke-test-${BUILD_NUMBER} \
--image=curlimages/curl:latest \
--restart=Never \ --restart=Never \
--rm \ --rm \
--attach \ --attach \
--timeout=30s \
-n scrum-manager \ -n scrum-manager \
-- curl -sf http://backend:3001/api/health \ -- curl -sf --max-time 10 \
&& echo "Backend health check PASSED" \ http://backend.scrum-manager.svc.cluster.local:3001/api/health \
|| echo "Backend health check FAILED (non-blocking)" && echo "Health check PASSED" \
|| echo "Health check FAILED (non-blocking)"
""" """
} }
} }
} }
// 8. CLEAN UP LOCAL IMAGES
stage('Clean Up') { stage('Clean Up') {
steps { steps {
sh """ sh """
@@ -206,26 +166,32 @@ pipeline {
} }
} }
// POST ACTIONS
post { post {
success { success {
echo """ echo "✅ Build #${env.BUILD_NUMBER} deployed → http://scrum.local"
╔══════════════════════════════════════════════╗
║ ✅ scrum-manager deployed successfully ║
║ Build : #${env.BUILD_NUMBER} ║
║ Tag : ${IMAGE_TAG} ║
║ URL : http://scrum.local ║
╚══════════════════════════════════════════════╝
"""
} }
failure { failure {
echo "❌ Pipeline failed. Check logs above." withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
// Optional: add mail/Slack notification here sh """
echo '=== Pod Status ==='
kubectl get pods -n scrum-manager -o wide || true
echo '=== Backend Pod Events ==='
kubectl describe pods -l app.kubernetes.io/name=backend -n scrum-manager || true
echo '=== Backend Logs (last 50 lines) ==='
kubectl logs -l app.kubernetes.io/name=backend -n scrum-manager --tail=50 --all-containers=true || true
echo '=== Frontend Pod Events ==='
kubectl describe pods -l app.kubernetes.io/name=frontend -n scrum-manager || true
echo '=== MySQL Pod Events ==='
kubectl describe pods -l app.kubernetes.io/name=mysql -n scrum-manager || true
"""
}
} }
always { always {
// Make sure we're logged out of Harbor sh "docker logout ${HARBOR_URL} || true"
sh 'docker logout ${HARBOR_URL} || true'
} }
} }
} }

168
Jenkinsfile.bak Normal file
View File

@@ -0,0 +1,168 @@
pipeline {
agent any
environment {
HARBOR_URL = '192.168.108.200:80'
HARBOR_PROJECT = 'library'
IMAGE_TAG = "${env.BUILD_NUMBER}"
K8S_CRED_ID = 'k8s-config'
FRONTEND_IMAGE = '192.168.108.200:80/library/scrum-frontend'
BACKEND_IMAGE = '192.168.108.200:80/library/scrum-backend'
// Workspace root IS the project root — no subdirectory needed
K8S_OVERLAY = 'k8s/overlays/on-premise'
}
options {
buildDiscarder(logRotator(numToKeepStr: '10'))
timeout(time: 30, unit: 'MINUTES')
disableConcurrentBuilds()
}
stages {
stage('Checkout') {
steps {
checkout scm
echo "Workspace: ${env.WORKSPACE}"
sh 'ls -la' // quick sanity check — confirm Dockerfile is here
}
}
stage('Test') {
parallel {
stage('Backend Tests') {
steps {
dir('server') { // server/ relative to workspace root
sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
}
}
}
stage('Frontend Tests') {
steps {
// frontend lives at workspace root
sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
}
}
}
}
stage('Build Images') {
parallel {
stage('Build Frontend') {
steps {
// Dockerfile is at workspace root
sh """
docker build \
-f Dockerfile \
-t ${FRONTEND_IMAGE}:${IMAGE_TAG} \
-t ${FRONTEND_IMAGE}:latest \
.
"""
}
}
stage('Build Backend') {
steps {
dir('server') { // server/Dockerfile
sh """
docker build \
-f Dockerfile \
-t ${BACKEND_IMAGE}:${IMAGE_TAG} \
-t ${BACKEND_IMAGE}:latest \
.
"""
}
}
}
}
}
stage('Push to Harbor') {
steps {
withCredentials([usernamePassword(
credentialsId: 'harbor-creds',
usernameVariable: 'HARBOR_USER',
passwordVariable: 'HARBOR_PASS'
)]) {
sh """
echo \$HARBOR_PASS | docker login ${HARBOR_URL} -u \$HARBOR_USER --password-stdin
docker push ${FRONTEND_IMAGE}:${IMAGE_TAG}
docker push ${FRONTEND_IMAGE}:latest
docker push ${BACKEND_IMAGE}:${IMAGE_TAG}
docker push ${BACKEND_IMAGE}:latest
"""
}
}
}
stage('Patch Image Tags') {
steps {
dir("${K8S_OVERLAY}") {
sh """
kustomize edit set image \
scrum-frontend=${FRONTEND_IMAGE}:${IMAGE_TAG} \
scrum-backend=${BACKEND_IMAGE}:${IMAGE_TAG}
"""
}
}
}
stage('Deploy to K8s') {
steps {
withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
sh "kubectl apply -k ${K8S_OVERLAY}"
sh "kubectl rollout status deployment/mysql -n scrum-manager --timeout=300s"
sh "kubectl rollout status deployment/backend -n scrum-manager --timeout=300s"
sh "kubectl rollout status deployment/frontend -n scrum-manager --timeout=180s"
echo "✅ All deployments rolled out."
}
}
}
stage('Smoke Test') {
steps {
withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
sh """
kubectl run smoke-${BUILD_NUMBER} \
--image=curlimages/curl:latest \
--restart=Never \
--rm \
--attach \
-n scrum-manager \
-- curl -sf http://backend:3001/api/health \
&& echo "Health check PASSED" \
|| echo "Health check FAILED (non-blocking)"
"""
}
}
}
stage('Clean Up') {
steps {
sh """
docker rmi ${FRONTEND_IMAGE}:${IMAGE_TAG} || true
docker rmi ${FRONTEND_IMAGE}:latest || true
docker rmi ${BACKEND_IMAGE}:${IMAGE_TAG} || true
docker rmi ${BACKEND_IMAGE}:latest || true
"""
}
}
}
post {
success {
echo "✅ Build #${env.BUILD_NUMBER} deployed → http://scrum.local"
}
failure {
echo "❌ Pipeline failed. Check stage logs above."
}
always {
sh "docker logout ${HARBOR_URL} || true"
}
}
}

View File

@@ -7,6 +7,11 @@ metadata:
app.kubernetes.io/component: api app.kubernetes.io/component: api
spec: spec:
replicas: 2 replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0 # Don't create extra pods during update — avoids CPU pressure
maxUnavailable: 1 # Terminate one old pod first, then start new one
selector: selector:
matchLabels: matchLabels:
app.kubernetes.io/name: backend app.kubernetes.io/name: backend
@@ -17,6 +22,7 @@ spec:
app.kubernetes.io/name: backend app.kubernetes.io/name: backend
app.kubernetes.io/component: api app.kubernetes.io/component: api
spec: spec:
terminationGracePeriodSeconds: 15
initContainers: initContainers:
- name: wait-for-mysql - name: wait-for-mysql
image: busybox:1.36 image: busybox:1.36
@@ -24,12 +30,14 @@ spec:
- sh - sh
- -c - -c
- | - |
echo "Waiting for MySQL to be ready..." echo "Waiting for MySQL TCP to be available..."
until nc -z mysql 3306; do until nc -z mysql 3306; do
echo "MySQL is not ready yet, retrying in 3s..." echo "MySQL not reachable yet, retrying in 3s..."
sleep 3 sleep 3
done done
echo "MySQL is ready!" echo "MySQL TCP is up. Waiting 15s for full initialization..."
sleep 15
echo "Proceeding to start backend."
containers: containers:
- name: backend - name: backend
image: scrum-backend:latest image: scrum-backend:latest
@@ -46,12 +54,12 @@ spec:
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
name: mysql-secret name: mysql-secret
key: DB_USER key: MYSQL_USER
- name: DB_PASSWORD - name: DB_PASSWORD
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
name: mysql-secret name: mysql-secret
key: DB_PASSWORD key: MYSQL_PASSWORD
- name: DB_NAME - name: DB_NAME
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
@@ -62,10 +70,10 @@ spec:
resources: resources:
requests: requests:
cpu: 100m cpu: 100m
memory: 128Mi memory: 128Mi # Request drives scheduling — keep low so pods fit on nodes
limits: limits:
cpu: 500m cpu: 500m
memory: 256Mi memory: 512Mi # Limit prevents OOMKill during startup spikes
livenessProbe: livenessProbe:
httpGet: httpGet:
path: /api/health path: /api/health

View File

@@ -14,11 +14,6 @@ data:
root /usr/share/nginx/html; root /usr/share/nginx/html;
index index.html; index index.html;
# Serve static files
location / {
try_files $uri $uri/ /index.html;
}
# Proxy API requests to backend service # Proxy API requests to backend service
location /api/ { location /api/ {
proxy_pass http://backend:3001; proxy_pass http://backend:3001;
@@ -27,5 +22,23 @@ data:
proxy_set_header Connection 'upgrade'; proxy_set_header Connection 'upgrade';
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade; proxy_cache_bypass $http_upgrade;
proxy_read_timeout 60s;
}
# Proxy Socket.io (real-time notifications)
location /socket.io/ {
proxy_pass http://backend:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 3600s;
}
# Serve static files — React SPA catch-all
location / {
try_files $uri $uri/ /index.html;
} }
} }

View File

@@ -7,6 +7,11 @@ metadata:
app.kubernetes.io/component: web app.kubernetes.io/component: web
spec: spec:
replicas: 2 replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0 # Don't create extra pods during update — avoids CPU pressure
maxUnavailable: 1 # Terminate one old pod first, then start new one
selector: selector:
matchLabels: matchLabels:
app.kubernetes.io/name: frontend app.kubernetes.io/name: frontend

View File

@@ -6,7 +6,7 @@ metadata:
app.kubernetes.io/name: frontend app.kubernetes.io/name: frontend
app.kubernetes.io/component: web app.kubernetes.io/component: web
spec: spec:
type: NodePort type: LoadBalancer
ports: ports:
- port: 80 - port: 80
targetPort: 80 targetPort: 80

View File

@@ -19,6 +19,11 @@ spec:
app.kubernetes.io/name: mysql app.kubernetes.io/name: mysql
app.kubernetes.io/component: database app.kubernetes.io/component: database
spec: spec:
# fsGroup 999 = mysql group in the container image.
# Without this, the hostPath volume is owned by root and MySQL
# cannot write to /var/lib/mysql → pod CrashLoops immediately.
securityContext:
fsGroup: 999
containers: containers:
- name: mysql - name: mysql
image: mysql:8.0 image: mysql:8.0
@@ -36,6 +41,21 @@ spec:
secretKeyRef: secretKeyRef:
name: mysql-secret name: mysql-secret
key: DB_NAME key: DB_NAME
# Allow root to connect from backend pods (any host), not just localhost.
- name: MYSQL_ROOT_HOST
value: "%"
# Create the app user on first init. Required if PVC is ever wiped and
# MySQL reinitializes — otherwise scrumapp user won't exist and backend fails.
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_PASSWORD
volumeMounts: volumeMounts:
- name: mysql-data - name: mysql-data
mountPath: /var/lib/mysql mountPath: /var/lib/mysql
@@ -49,25 +69,24 @@ spec:
livenessProbe: livenessProbe:
exec: exec:
command: command:
- mysqladmin - sh
- ping - -c
- -h - mysqladmin ping -h 127.0.0.1 -u root -p"$MYSQL_ROOT_PASSWORD" --silent
- localhost initialDelaySeconds: 60
initialDelaySeconds: 30
periodSeconds: 10 periodSeconds: 10
timeoutSeconds: 5 timeoutSeconds: 5
failureThreshold: 3 failureThreshold: 3
readinessProbe: readinessProbe:
exec: exec:
command: command:
- mysqladmin - sh
- ping - -c
- -h - mysqladmin ping -h 127.0.0.1 -u root -p"$MYSQL_ROOT_PASSWORD" --silent
- localhost # MySQL 8.0 first-run initialization takes 30-60s on slow disks.
initialDelaySeconds: 10 initialDelaySeconds: 30
periodSeconds: 5 periodSeconds: 5
timeoutSeconds: 3 timeoutSeconds: 3
failureThreshold: 5 failureThreshold: 10
volumes: volumes:
- name: mysql-data - name: mysql-data
persistentVolumeClaim: persistentVolumeClaim:

View File

@@ -7,11 +7,13 @@ metadata:
app.kubernetes.io/component: database app.kubernetes.io/component: database
type: Opaque type: Opaque
data: data:
# Base64 encoded values — change these for production!
# echo -n 'scrumpass' | base64 => c2NydW1wYXNz
# echo -n 'root' | base64 => cm9vdA==
# echo -n 'scrum_manager' | base64 => c2NydW1fbWFuYWdlcg==
MYSQL_ROOT_PASSWORD: c2NydW1wYXNz MYSQL_ROOT_PASSWORD: c2NydW1wYXNz
DB_USER: cm9vdA== MYSQL_USER: c2NydW1hcHA=
DB_PASSWORD: c2NydW1wYXNz MYSQL_PASSWORD: c2NydW1wYXNz
DB_NAME: c2NydW1fbWFuYWdlcg== DB_NAME: c2NydW1fbWFuYWdlcg==
# Decode reference:
# MYSQL_ROOT_PASSWORD: scrumpass
# MYSQL_USER: scrumapp
# MYSQL_PASSWORD: scrumpass
# DB_NAME: scrum_manager

View File

@@ -0,0 +1,95 @@
#!/usr/bin/env bash
set -euo pipefail
# ── Scrum Manager — On-Premise Kubernetes Deploy Script ─────────────────────
# Run from the project root: bash k8s/overlays/on-premise/deploy.sh
# ────────────────────────────────────────────────────────────────────────────
OVERLAY="k8s/overlays/on-premise"
NAMESPACE="scrum-manager"
REGISTRY="${REGISTRY:-}" # Optional: set to your registry, e.g. "192.168.1.10:5000"
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; NC='\033[0m'
info() { echo -e "${GREEN}[INFO]${NC} $*"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
error() { echo -e "${RED}[ERROR]${NC} $*"; exit 1; }
# ── Pre-flight checks ────────────────────────────────────────────────────────
info "Checking prerequisites..."
command -v kubectl >/dev/null 2>&1 || error "kubectl not found"
command -v docker >/dev/null 2>&1 || error "docker not found"
kubectl cluster-info >/dev/null 2>&1 || error "Cannot reach Kubernetes cluster. Check kubeconfig."
info "Prerequisites OK."
# ── Multi-node: hostPath nodeAffinity reminder ───────────────────────────────
NODE_COUNT=$(kubectl get nodes --no-headers 2>/dev/null | wc -l)
if [ "$NODE_COUNT" -gt 1 ]; then
warn "Multi-node cluster detected ($NODE_COUNT nodes)."
warn "MySQL data is stored at /mnt/data/mysql on ONE node only."
warn "Open k8s/overlays/on-premise/mysql-pv.yaml and uncomment"
warn "the nodeAffinity block, setting it to the correct node hostname."
warn "Run: kubectl get nodes to list hostnames."
read -rp "Press ENTER to continue anyway, or Ctrl+C to abort and fix first..."
fi
# ── Build Docker images ──────────────────────────────────────────────────────
info "Building Docker images..."
BACKEND_TAG="${REGISTRY:+${REGISTRY}/}scrum-backend:latest"
FRONTEND_TAG="${REGISTRY:+${REGISTRY}/}scrum-frontend:latest"
docker build -t "$BACKEND_TAG" -f server/Dockerfile server/
docker build -t "$FRONTEND_TAG" -f Dockerfile .
# ── Push or load images into cluster ────────────────────────────────────────
if [ -n "$REGISTRY" ]; then
info "Pushing images to registry $REGISTRY..."
docker push "$BACKEND_TAG"
docker push "$FRONTEND_TAG"
else
warn "No REGISTRY set. Attempting to load images via 'docker save | ssh'..."
warn "If you have a single-node cluster and Docker runs on the same host,"
warn "set imagePullPolicy: Never in the deployments (already set)."
warn "For multi-node, set REGISTRY=<your-registry> before running this script."
warn ""
warn " Alternatively, load images manually on each node with:"
warn " docker save scrum-backend:latest | ssh NODE docker load"
warn " docker save scrum-frontend:latest | ssh NODE docker load"
fi
# ── Apply Kubernetes manifests ────────────────────────────────────────────────
info "Applying manifests via kustomize..."
kubectl apply -k "$OVERLAY"
# ── Wait for rollout ──────────────────────────────────────────────────────────
info "Waiting for MySQL to become ready (this can take up to 90s on first run)..."
kubectl rollout status deployment/mysql -n "$NAMESPACE" --timeout=120s || \
warn "MySQL rollout timed out — check: kubectl describe pod -l app.kubernetes.io/name=mysql -n $NAMESPACE"
info "Waiting for backend..."
kubectl rollout status deployment/backend -n "$NAMESPACE" --timeout=90s || \
warn "Backend rollout timed out — check: kubectl logs -l app.kubernetes.io/name=backend -n $NAMESPACE"
info "Waiting for frontend..."
kubectl rollout status deployment/frontend -n "$NAMESPACE" --timeout=60s || \
warn "Frontend rollout timed out."
# ── Show access info ──────────────────────────────────────────────────────────
echo ""
info "Deploy complete! Access the app:"
NODEPORT=$(kubectl get svc frontend -n "$NAMESPACE" -o jsonpath='{.spec.ports[0].nodePort}' 2>/dev/null || echo "")
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' 2>/dev/null || echo "<NODE-IP>")
if [ -n "$NODEPORT" ]; then
echo ""
echo -e " NodePort: ${GREEN}http://${NODE_IP}:${NODEPORT}${NC}"
fi
echo ""
echo -e " Ingress: ${GREEN}http://scrum.local${NC} (add '$NODE_IP scrum.local' to /etc/hosts)"
echo ""
echo "Useful commands:"
echo " kubectl get pods -n $NAMESPACE"
echo " kubectl logs -f deployment/backend -n $NAMESPACE"
echo " kubectl logs -f deployment/mysql -n $NAMESPACE"

View File

@@ -4,12 +4,25 @@ metadata:
name: scrum-manager-ingress name: scrum-manager-ingress
annotations: annotations:
kubernetes.io/ingress.class: nginx kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: / # No rewrite-target here — the old global rewrite-target: / was
# rewriting every path (including /api/tasks) to just /, breaking the API.
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec: spec:
rules: rules:
- host: scrum.local - host: scrum.local
http: http:
paths: paths:
# Socket.io long-polling and WebSocket connections go directly to backend.
- path: /socket.io
pathType: Prefix
backend:
service:
name: backend
port:
number: 3001
# All other traffic (including /api/) goes to frontend nginx,
# which proxies /api/ to backend internally. This avoids double-routing.
- path: / - path: /
pathType: Prefix pathType: Prefix
backend: backend:
@@ -17,10 +30,3 @@ spec:
name: frontend name: frontend
port: port:
number: 80 number: 80
- path: /api
pathType: Prefix
backend:
service:
name: backend
port:
number: 3001

View File

@@ -1,13 +1,38 @@
# apiVersion: kustomize.config.k8s.io/v1beta1
# kind: Kustomization
# resources:
# - ../../base
# - mysql-pv.yaml
# - ingress.yaml
# patches:
# - path: mysql-pvc-patch.yaml
# target:
# kind: PersistentVolumeClaim
# name: mysql-data-pvc
apiVersion: kustomize.config.k8s.io/v1beta1 apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization kind: Kustomization
resources: resources:
- ../../base - ../../base
- mysql-pv.yaml
- ingress.yaml - ingress.yaml
patches: patches:
# This patch explicitly sets storageClassName: local-path to match the live
# PVC in the cluster. Without it, the base PVC (no storageClassName = nil)
# diffs against the existing "local-path" value and kubectl apply tries to
# mutate a bound PVC, which Kubernetes forbids.
- path: mysql-pvc-patch.yaml - path: mysql-pvc-patch.yaml
target: target:
kind: PersistentVolumeClaim kind: PersistentVolumeClaim
name: mysql-data-pvc name: mysql-data-pvc
images:
- name: scrum-frontend
newName: 192.168.108.200:80/library/scrum-frontend
newTag: latest
- name: scrum-backend
newName: 192.168.108.200:80/library/scrum-backend
newTag: latest

View File

@@ -1,14 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data/mysql"

View File

@@ -3,5 +3,12 @@ kind: PersistentVolumeClaim
metadata: metadata:
name: mysql-data-pvc name: mysql-data-pvc
spec: spec:
storageClassName: manual # Must explicitly match the storageClassName already on the live PVC.
volumeName: mysql-pv # Without this, kubectl apply diffs nil (base has no field) vs "local-path"
# (cluster) and tries to mutate a bound PVC — which Kubernetes forbids.
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi

View File

@@ -1,4 +1,3 @@
server { server {
listen 80; listen 80;
server_name localhost; server_name localhost;
@@ -6,12 +5,7 @@ server {
root /usr/share/nginx/html; root /usr/share/nginx/html;
index index.html; index index.html;
# Serve static files # Proxy API requests to backend service
location / {
try_files $uri $uri/ /index.html;
}
# Proxy API requests to backend
location /api/ { location /api/ {
proxy_pass http://backend:3001; proxy_pass http://backend:3001;
proxy_http_version 1.1; proxy_http_version 1.1;
@@ -19,5 +13,23 @@ server {
proxy_set_header Connection 'upgrade'; proxy_set_header Connection 'upgrade';
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade; proxy_cache_bypass $http_upgrade;
proxy_read_timeout 60s;
}
# Proxy Socket.io (real-time notifications)
location /socket.io/ {
proxy_pass http://backend:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 3600s;
}
# Serve static files — React SPA catch-all
location / {
try_files $uri $uri/ /index.html;
} }
} }

View File

@@ -25,7 +25,10 @@ const VIEW_PAGES = ['calendar', 'kanban', 'list'];
export default function App() { export default function App() {
const now = new Date(); const now = new Date();
const [currentUser, setCurrentUser] = useState<User | null>(null); const [currentUser, setCurrentUser] = useState<User | null>(() => {
try { const s = localStorage.getItem('currentUser'); return s ? JSON.parse(s) : null; }
catch { return null; }
});
const [users, setUsers] = useState<User[]>([]); const [users, setUsers] = useState<User[]>([]);
const [tasks, setTasks] = useState<Task[]>([]); const [tasks, setTasks] = useState<Task[]>([]);
const [activePage, setActivePage] = useState('calendar'); const [activePage, setActivePage] = useState('calendar');
@@ -58,7 +61,7 @@ export default function App() {
.finally(() => setLoading(false)); .finally(() => setLoading(false));
}, [currentUser]); }, [currentUser]);
if (!currentUser) return <LoginPage onLogin={u => { setCurrentUser(u); setActivePage('calendar'); setActiveView('calendar'); }} />; if (!currentUser) return <LoginPage onLogin={u => { localStorage.setItem('currentUser', JSON.stringify(u)); setCurrentUser(u); setActivePage('calendar'); setActiveView('calendar'); }} />;
const handleNavigate = (page: string) => { const handleNavigate = (page: string) => {
setActivePage(page); setActivePage(page);
@@ -250,7 +253,7 @@ export default function App() {
onOpenSidebar={() => setSidebarOpen(true)} users={users} /> onOpenSidebar={() => setSidebarOpen(true)} users={users} />
<div className="app-body"> <div className="app-body">
<Sidebar currentUser={currentUser} activePage={activePage} onNavigate={handleNavigate} <Sidebar currentUser={currentUser} activePage={activePage} onNavigate={handleNavigate}
onSignOut={() => { setCurrentUser(null); setActivePage('calendar'); setActiveView('calendar'); setSidebarOpen(false); }} onSignOut={() => { localStorage.removeItem('currentUser'); setCurrentUser(null); setActivePage('calendar'); setActiveView('calendar'); setSidebarOpen(false); }}
isOpen={sidebarOpen} onClose={() => setSidebarOpen(false)} users={users} /> isOpen={sidebarOpen} onClose={() => setSidebarOpen(false)} users={users} />
<div className="main-content"> <div className="main-content">
{displayPage === 'calendar' && ( {displayPage === 'calendar' && (