Commit Graph

36 Commits

Author SHA1 Message Date
tusuii
c55c0dff69 fix: remove MetalLB setup stage — rely on pre-installed MetalLB
MetalLB is already installed and configured on the cluster. The pipeline
no longer needs to apply IPAddressPool or L2Advertisement resources.
Removed the 'Setup MetalLB' stage and deleted the metallb overlay files.
The frontend Service type: LoadBalancer is already set, so MetalLB will
automatically assign an external IP on deployment.
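
A quick way to confirm the hand-off after a deploy (service name and
namespace as used elsewhere in this history):

  kubectl -n scrum-manager get svc frontend \
    -o jsonpath='{.spec.type} {.status.loadBalancer.ingress[0].ip}'
  # expected output: LoadBalancer 192.168.108.213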

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:45:41 +05:30
tusuii
c6bb1ac9b4 fix: make MetalLB IP pool apply resilient to broken webhook state
Wait for the MetalLB controller deployment to be ready before applying
IPAddressPool/L2Advertisement CRDs. If the webhook service has no ready
endpoints (stale ClusterIP from a previously removed controller), delete
the ValidatingWebhookConfiguration so the apply is not blocked. This
prevents the 'connection refused' webhook failure seen when a duplicate
MetalLB install left behind a broken webhook service endpoint.
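
A sketch of the guard, assuming MetalLB's default object names
(deployment/controller, metallb-webhook-service, metallb-webhook-configuration):

  # Block until the controller is up; it backs the validating webhook.
  kubectl -n metallb-system rollout status deployment/controller --timeout=120s
  # If the webhook Service has no ready endpoints, drop the stale
  # ValidatingWebhookConfiguration so the CR apply is not rejected.
  if [ -z "$(kubectl -n metallb-system get endpoints metallb-webhook-service \
        -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; then
    kubectl delete validatingwebhookconfiguration metallb-webhook-configuration
  fi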

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:38:40 +05:30
tusuii
d067dbfc44 fix: stop reinstalling MetalLB — cluster already has it running
MetalLB was already installed (metallb-speaker-* / metallb-controller-*)
32 days ago. Applying metallb-native.yaml created duplicate controller and
speaker resources. The new speaker pods could not schedule because the
existing metallb-speaker-* pods already occupy the host ports (7472, 7946)
on all 3 nodes: "1 node(s) didn't have free ports for the requested pod ports"

Fix: remove the kubectl apply for metallb-native.yaml — just apply the
IPAddressPool and L2Advertisement configs, which is all we need.

Manual cluster cleanup required (one-time):
  kubectl delete deployment controller -n metallb-system
  kubectl delete daemonset speaker -n metallb-system
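
After cleanup, one way to confirm that only the pre-existing speaker pods
remain, one per node (label values assumed from the metallb-native.yaml defaults):

  kubectl -n metallb-system get pods -l app=metallb,component=speaker -o wide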

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:31:01 +05:30
tusuii
57c3c14b48 fix: make MetalLB speaker rollout non-blocking with diagnostics
Speaker DaemonSet on a CPU-constrained cluster takes >180s to start all 3 pods.
Don't fail the entire pipeline — warn and print speaker pod status instead.
Controller must still be ready (it handles IP assignment) before continuing.
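
The pattern, as a sketch (object names per the MetalLB defaults):

  # Controller readiness is a hard gate; the speaker rollout is best-effort.
  kubectl -n metallb-system rollout status deployment/controller --timeout=180s
  if ! kubectl -n metallb-system rollout status daemonset/speaker --timeout=180s; then
    echo "WARN: speaker rollout incomplete, continuing anyway"
    kubectl -n metallb-system get pods -l component=speaker -o wide
  fi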

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:27:37 +05:30
tusuii
245301450c fix: use maxSurge=0 rolling update to avoid CPU pressure on small cluster
During rolling updates with the default maxSurge=1, an extra surge pod was
created temporarily (3 pods instead of 2), causing all 3 nodes to report
"Insufficient CPU" and delaying scheduling past the Jenkins rollout timeout.

With maxSurge=0 / maxUnavailable=1, one old pod terminates first before a
new one starts — pod count stays at 2 throughout, no extra CPU needed.
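
The corresponding strategy block in the Deployment spec, as a sketch:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0         # never create an extra pod during a rollout
      maxUnavailable: 1   # take one old pod down before starting a new one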

Also increase Jenkins rollout timeout from 300s to 600s as a safety net
for CPU-constrained nodes that may still need extra scheduling time.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:10:04 +05:30
tusuii
7900114303 fix: increase MetalLB speaker daemonset rollout timeout to 180s
Speaker runs on all 3 nodes and needs image pull + startup time per node.
90s was too tight — bumped to 180s to handle slow node startups.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:07:55 +05:30
tusuii
69f7b4a93d feat: add MetalLB for on-premise LoadBalancer support
- Add MetalLB IPAddressPool (192.168.108.213/32) and L2Advertisement
  so the frontend gets a stable external IP on the LAN (config sketched below)
- Change frontend service type: NodePort → LoadBalancer
- Add 'Setup MetalLB' stage in Jenkinsfile that installs MetalLB v0.14.8
  (idempotent) and applies the IP pool config before each deploy

After deploy: kubectl get svc frontend -n scrum-manager
should show EXTERNAL-IP: 192.168.108.213
App accessible at: http://192.168.108.213
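
The applied pool config looks roughly like this (the resource names here
are assumptions):

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: frontend-pool          # assumed name
    namespace: metallb-system
  spec:
    addresses:
      - 192.168.108.213/32
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: frontend-l2            # assumed name
    namespace: metallb-system
  spec:
    ipAddressPools:
      - frontend-pool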

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:00:04 +05:30
tusuii
7e58d758f2 fix: align secret key references — backend was looking for DB_USER which doesn't exist
Root cause: backend deployment.yaml referenced secretKeyRef key: DB_USER and
key: DB_PASSWORD, but the live secret only has MYSQL_USER and MYSQL_PASSWORD.
kubectl apply reported secret/mysql-secret as "unchanged" (last-applied matched
desired) so the drift was never caught — new pods got CreateContainerConfigError.
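
A drift check that would have caught this, listing the keys the live
secret actually carries:

  kubectl -n scrum-manager get secret mysql-secret -o jsonpath='{.data}'
  # expect MYSQL_USER / MYSQL_PASSWORD among the keys, matching the
  # secretKeyRef keys in backend/deployment.yaml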

Changes:
- backend/deployment.yaml: DB_USER → key: MYSQL_USER, DB_PASSWORD → key: MYSQL_PASSWORD
- mysql/deployment.yaml: add MYSQL_USER/MYSQL_PASSWORD env vars so the app user
  (scrumapp) is created if MySQL ever reinitializes from a fresh PVC
- mysql/secret.yaml: remove stale commented-out block with old key names

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:38:59 +05:30
tusuii
bd9a952399 fix: revert memory request to 128Mi to fix pod scheduling failure
Increasing the request to 256Mi caused backend pods to be Pending with no
node assignment — the scheduler couldn't fit them alongside MySQL (512Mi
request) and existing pods on the on-premise nodes.

Memory REQUEST drives scheduling (how much the node reserves).
Memory LIMIT drives OOMKill (the actual cap at runtime).

Keep request at 128Mi so pods schedule, limit at 512Mi so Node.js +
Socket.io + MySQL pool don't get OOMKilled on startup.

Also add terminationGracePeriodSeconds: 15 so pods from failed/previous
builds release their node slot quickly instead of blocking new pod scheduling.
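
The resulting pod template fragment (the container name is an assumption):

  spec:
    terminationGracePeriodSeconds: 15   # free the node slot quickly
    containers:
      - name: backend                   # assumed container name
        resources:
          requests:
            memory: "128Mi"             # what the scheduler reserves
          limits:
            memory: "512Mi"             # the runtime OOMKill cap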

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:32:58 +05:30
tusuii
55287c6f1d fix: increase backend memory limit and add rollout failure diagnostics
Backend was OOMKilled during rolling update startup (Node.js + Socket.io +
MySQL pool exceeds 256Mi). Raised limit to 512Mi and request to 256Mi.

Jenkinsfile: show kubectl get pods immediately after apply so pod state
is visible in build logs. Added full diagnostics (describe + logs) in
post.failure block so the root cause of any future rollout failure is
visible without needing to SSH into the cluster.
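
The diagnostics amount to shell steps along these lines (a sketch):

  kubectl -n scrum-manager get pods -o wide
  for p in $(kubectl -n scrum-manager get pods -o name); do
    kubectl -n scrum-manager describe "$p"
    kubectl -n scrum-manager logs "$p" --all-containers --tail=50 || true
  done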

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:24:19 +05:30
tusuii
254052d798 fix: set storageClassName=local-path in PVC patch to match live cluster
kubectl apply computes a 3-way merge. The base PVC has no storageClassName
(nil), but the already-bound PVC in the cluster has storageClassName=local-path.
This diff caused apply to attempt a mutation on a bound PVC — forbidden by k8s.

Fix: patch the PVC with storageClassName=local-path so desired state matches
live state and apply produces no diff on the PVC.
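
To confirm the live value the patch must match:

  kubectl -n scrum-manager get pvc mysql-data-pvc \
    -o jsonpath='{.spec.storageClassName}'
  # expected output: local-path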

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:08:36 +05:30
tusuii
5ed8d0bbdc fix: remove PVC patch that broke kubectl apply on bound claims
The mysql-data-pvc was already dynamically provisioned by the cluster's
'local-path' StorageClass. The overlay patch tried to change storageClassName
to 'manual' and volumeName on an already-bound PVC, which Kubernetes forbids:
  "spec is immutable after creation except resources.requests"

Fixes:
- Remove mysql-pvc-patch from kustomization.yaml (PVC left as-is)
- Remove mysql-pv.yaml resource (not needed with dynamic provisioner)
- Add comment explaining when manual PV/PVC is needed vs not

Jenkinsfile: add --timeout and FQDN to smoke test curl; add comments
explaining MySQL Recreate strategy startup timing expectations.
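
The smoke test now has roughly this shape (assuming the Jenkins agent can
resolve in-cluster DNS; the exact flags are illustrative):

  curl -fsS --connect-timeout 5 --max-time 15 \
    http://frontend.scrum-manager.svc.cluster.local/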

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:02:54 +05:30
tusuii
73bd35173c fix: k8s on-premise deployment and session persistence
Database fixes:
- Add hostPath.type=DirectoryOrCreate so kubelet auto-creates /mnt/data/mysql
- Add fsGroup=999 so MySQL process can write to the hostPath volume
- Add MYSQL_ROOT_HOST=% to allow backend pods to authenticate as root
- Fix liveness/readiness probes to include credentials (-p$MYSQL_ROOT_PASSWORD; shell sketch after this list)
- Increase probe initialDelaySeconds (30/60s) for slow first-run init
- Add 15s grace sleep in backend initContainer after MySQL TCP is up
- Add persistentVolumeReclaimPolicy=Retain to prevent accidental data loss
- Explicit accessModes+resources in PVC patch to avoid list merge ambiguity
- Add nodeAffinity comment in PV for multi-node cluster guidance
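
In shell form, the probe and init-wait commands look roughly like this
(the nc-based wait and the mysql host name are assumptions; 3306 is the
MySQL default port):

  mysqladmin ping -h 127.0.0.1 -uroot -p"$MYSQL_ROOT_PASSWORD"   # exec probe
  until nc -z mysql 3306; do sleep 2; done; sleep 15             # initContainer wait + grace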

Ingress/nginx fixes:
- Remove broken rewrite-target=/ that was rewriting all paths (incl /api) to /
- Route /socket.io directly to backend for WebSocket support
- Add /socket.io/ proxy location to both nginx.conf and K8s ConfigMap

Frontend fix:
- Persist currentUser to localStorage on login so page refresh no longer
  clears session and redirects users back to the login page

Tooling:
- Add k8s/overlays/on-premise/deploy.sh for one-command deployment

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 22:51:57 +05:30
fa8efe874e Jenkinsfile now working properly 2026-02-22 12:34:05 +00:00
748ce24e87 Update Jenkinsfile
2026-02-22 12:24:41 +00:00
d04b1adf7c Delete k8s/overlays/on-premise/mysql-pv.yaml 2026-02-22 12:22:43 +00:00
6c19e8d747 patch
2026-02-22 12:12:01 +00:00
65c82c2e4c Update k8s/base/mysql/secret.yaml 2026-02-22 12:09:03 +00:00
e5633f9ebc patch 2026-02-22 12:07:42 +00:00
503234c12f patch 2026-02-22 12:04:24 +00:00
899509802c patch
2026-02-22 11:41:16 +00:00
a4234ded64 kustomization patch
2026-02-22 11:30:36 +00:00
58ec73916a patch 2026-02-22 11:29:56 +00:00
e23bb94660 jenkinsfile
2026-02-22 11:07:30 +00:00
ad65ab824e jenkinsfile
2026-02-22 11:06:21 +00:00
606eeed4c3 jenkinsfile
2026-02-22 10:48:45 +00:00
tusuii
82077d38e6 added changes ready to ship 2026-02-21 12:06:16 +05:30
tusuii
1788e364f1 close to final version: added the working subtask and comment section 2026-02-16 19:50:23 +05:30
tusuii
6aec1445e9 feat: data export — CSV export for tasks, users, activities
- Backend: GET /api/export/{tasks,users,activities}?month=YYYY-MM (example below)
- Frontend: Export panel on Reports page (CEO/CTO/Manager only)
- API: apiExportCsv helper for browser download
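
Example download (host from the on-premise setup; the month value is a
placeholder; any auth the API requires is omitted):

  curl -fsS -o tasks-2026-02.csv \
    "http://192.168.108.213/api/export/tasks?month=2026-02"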
2026-02-16 13:26:36 +05:30
tusuii
0fa2302b26 feat: employee management — add/delete users from Members page
- Backend: POST /api/auth/users (create user), DELETE /api/auth/users/:id (delete user, unassign tasks); example calls below
- Frontend API: apiCreateUser, apiDeleteUser
- MembersPage: working Add Employee modal (name/email/password/role/dept), delete button with confirmation
- Only CEO/CTO/Manager roles see management controls
- CSS: btn-danger, btn-danger-sm styles
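
Example calls (host as above; body fields from the modal; values are
placeholders and any auth is omitted):

  curl -fsS -X POST http://192.168.108.213/api/auth/users \
    -H 'Content-Type: application/json' \
    -d '{"name":"Jane Doe","email":"jane@example.com","password":"s3cret","role":"employee","dept":"engineering"}'
  curl -fsS -X DELETE http://192.168.108.213/api/auth/users/42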
2026-02-16 12:48:20 +05:30
tusuii
22f048989a feat: add CEO role to all dropdowns and access lists 2026-02-16 12:33:42 +05:30
tusuii
c604df281d feat: add more roles (tech_lead, scrum_master, product_owner, designer, qa)
- Registration form: added 5 new role options to dropdown
- Sidebar: new roles get proper nav access via ALL_ROLES/LEADER_ROLES
- Dashboard: isLeader check expanded to include new leadership roles
- Shared/Pages: role badge colors added for all new roles
- Invite modal: expanded role dropdown
2026-02-16 12:31:54 +05:30
tusuii
2db45de4c4 feat: add Kubernetes Kustomize deployment manifests
Add k8s/base/ directory with Kustomize manifests for deploying
the scrum-manager application to Kubernetes:

- Namespace (scrum-manager)
- MySQL: Deployment, Service, PVC, Secret
- Backend: Deployment (2 replicas) with init container, Service
- Frontend: Deployment (2 replicas), Service (NodePort), ConfigMap (nginx.conf)

All deployments include resource requests/limits, liveness/readiness
probes, and proper label selectors.
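
With Kustomize support built into kubectl, the base deploys with a single
command:

  kubectl apply -k k8s/base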
2026-02-16 12:25:56 +05:30
tusuii
892a2ceba1 feat: MySQL integration, Docker setup, drag-and-drop kanban 2026-02-16 10:20:27 +05:30
tusuii
5d8af1f173 feat: add mobile responsive support (768px breakpoint)
- Add CSS media queries for all sections: sidebar overlay, navbar,
  calendar, kanban, dashboard, list view, drawer, modal, reports,
  team tasks, members
- Add hamburger menu button (hidden on desktop, visible on mobile)
- Add sidebar slide-in overlay with backdrop for mobile
- Auto-close sidebar on navigation for mobile UX
- Login card, drawer, and modal go full-width on mobile
- Dashboard stats grid collapses to 2-column on mobile
- Report charts stack to single column on mobile
2026-02-15 13:10:39 +05:30
tusuii
e46d8773ee feat: complete Scrum-manager MVP — dark-themed multi-user task manager
- Login with role-based auth (CTO/Manager/Employee)
- Calendar view (month/week) with task chips and quick-add
- Kanban board with status columns
- Sortable list view with action menus
- Task detail drawer with subtasks, comments, activity
- Add task modal with validation
- Dashboard with stats, workload, priority breakdown
- Team tasks grouped by assignee
- Reports page with recharts (bar, pie, line, horizontal bar)
- Members page with invite modal
- Search and assignee filter across views
- ErrorBoundary for production error handling
- Full dark design system via index.css
2026-02-15 11:36:38 +05:30