27 Commits

Author SHA1 Message Date
tusuii
c55c0dff69 fix: remove MetalLB setup stage — rely on pre-installed MetalLB
Some checks failed
test reactjs website/pipeline/head Build started...
scrum-manager/pipeline/head There was a failure building this commit
MetalLB is already installed and configured on the cluster. The pipeline
no longer needs to apply IPAddressPool or L2Advertisement resources.
Removed the 'Setup MetalLB' stage and deleted the metallb overlay files.
The frontend Service already has type: LoadBalancer set, so MetalLB will
automatically assign an external IP on deployment.
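Since the pipeline now trusts the pre-installed MetalLB, a quick pre-flight check can confirm it is healthy before a deploy. A sketch, assuming the default metallb-native resource names (controller Deployment, speaker DaemonSet, metallb-system namespace); adjust to your cluster:

```shell
# Confirm the existing MetalLB install is present and an address pool is defined
kubectl get deployment controller -n metallb-system
kubectl get daemonset speaker -n metallb-system
kubectl get ipaddresspools.metallb.io -n metallb-system

# After a deploy, the frontend Service should show the assigned EXTERNAL-IP
kubectl get svc frontend -n scrum-manager
```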

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:45:41 +05:30
tusuii
c6bb1ac9b4 fix: make MetalLB IP pool apply resilient to broken webhook state
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
Wait for the MetalLB controller deployment to be ready before applying
IPAddressPool/L2Advertisement CRDs. If the webhook service has no ready
endpoints (stale ClusterIP from a previously removed controller), delete
the ValidatingWebhookConfiguration so the apply is not blocked. This
prevents the 'connection refused' webhook failure seen when a duplicate
MetalLB install left behind a broken webhook service endpoint.
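The recovery logic described above can be sketched in shell. The webhook Service and ValidatingWebhookConfiguration names below are the defaults shipped in metallb-native.yaml; the manifest filenames at the end are placeholders for this repo's pool/advertisement configs:

```shell
# Wait for the MetalLB controller before applying CRD-backed resources
kubectl rollout status deployment/controller -n metallb-system --timeout=120s

# If the webhook service has no ready endpoints (stale ClusterIP from a
# removed controller), the validating webhook would reject every apply with
# "connection refused" — delete it so the apply can proceed.
READY=$(kubectl get endpoints metallb-webhook-service -n metallb-system \
  -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)
if [ -z "$READY" ]; then
  echo "Webhook has no ready endpoints; removing stale webhook configuration"
  kubectl delete validatingwebhookconfiguration metallb-webhook-configuration \
    --ignore-not-found
fi

# Placeholder filenames — substitute the overlay's actual manifests
kubectl apply -f metallb-ipaddresspool.yaml -f metallb-l2advertisement.yaml
```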

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:38:40 +05:30
tusuii
d067dbfc44 fix: stop reinstalling MetalLB — cluster already has it running
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
MetalLB was already installed (metallb-speaker-* / metallb-controller-*)
32 days ago. Applying metallb-native.yaml created duplicate controller and
speaker resources. The new speaker pods could not schedule because the
existing metallb-speaker-* pods already occupy the host ports (7472, 7946)
on all 3 nodes: "1 node(s) didn't have free ports for the requested pod ports"

Fix: remove the kubectl apply for metallb-native.yaml and apply only the
IPAddressPool and L2Advertisement configs, which are all we need.

Manual cluster cleanup required (one-time):
  kubectl delete deployment controller -n metallb-system
  kubectl delete daemonset speaker -n metallb-system
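After the one-time cleanup, a quick verification can confirm only one MetalLB install remains and that nothing is stuck on host-port conflicts (a sketch; resource names as above):

```shell
# Exactly one controller Deployment and one speaker DaemonSet should remain
kubectl get deployment,daemonset -n metallb-system

# No speaker pods should be Pending on "didn't have free ports" anymore
kubectl get pods -n metallb-system --field-selector=status.phase=Pending
```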

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:31:01 +05:30
tusuii
57c3c14b48 fix: make MetalLB speaker rollout non-blocking with diagnostics
Speaker DaemonSet on CPU-constrained cluster takes >180s to start all 3 pods.
Don't fail the entire pipeline — warn and print speaker pod status instead.
Controller must still be ready (it handles IP assignment) before continuing.
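The non-blocking rollout described above could look like the following in the pipeline's shell step (a sketch; the speaker label selector is assumed from metallb-native.yaml):

```shell
# Warn instead of failing if the speaker DaemonSet is slow to roll out
if ! kubectl rollout status daemonset/speaker -n metallb-system --timeout=180s; then
  echo "WARN: speaker rollout incomplete; continuing (diagnostics below)"
  kubectl get pods -n metallb-system -l app=metallb,component=speaker -o wide
fi

# The controller handles IP assignment, so it must still be ready
kubectl rollout status deployment/controller -n metallb-system --timeout=120s
```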

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:27:37 +05:30
tusuii
245301450c fix: use maxSurge=0 rolling update to avoid CPU pressure on small cluster
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
During rolling updates with the default maxSurge=1, an extra surge pod was
created temporarily (3 pods instead of 2), causing all 3 nodes to report
"Insufficient CPU" and delaying scheduling past the Jenkins rollout timeout.

With maxSurge=0 / maxUnavailable=1, one old pod terminates first before a
new one starts — pod count stays at 2 throughout, no extra CPU needed.

Also increase Jenkins rollout timeout from 300s to 600s as a safety net
for CPU-constrained nodes that may still need extra scheduling time.
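An equivalent of the manifest change, expressed as an imperative patch for illustration (deployment name assumed; the commit itself edits the YAML):

```shell
# maxSurge=0 / maxUnavailable=1: terminate one old pod before starting a new
# one, so the pod count never exceeds the steady-state 2 during a rollout.
kubectl patch deployment backend -n scrum-manager -p \
  '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":0,"maxUnavailable":1}}}}'
```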

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:10:04 +05:30
tusuii
7900114303 fix: increase MetalLB speaker daemonset rollout timeout to 180s
Speaker runs on all 3 nodes and needs image pull + startup time per node.
90s was too tight — bumped to 180s to handle slow node startups.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:07:55 +05:30
tusuii
69f7b4a93d feat: add MetalLB for on-premise LoadBalancer support
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
- Add MetalLB IPAddressPool (192.168.108.213/32) and L2Advertisement
  so the frontend gets a stable external IP on the LAN
- Change frontend service type: NodePort → LoadBalancer
- Add 'Setup MetalLB' stage in Jenkinsfile that installs MetalLB v0.14.8
  (idempotent) and applies the IP pool config before each deploy

After deploy: kubectl get svc frontend -n scrum-manager
should show EXTERNAL-IP: 192.168.108.213
App accessible at: http://192.168.108.213
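The pool and advertisement pair from this commit, applied inline for reference. The addresses come from the message above; the metadata names are placeholders, since the commit does not state them:

```shell
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: frontend-pool          # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.108.213/32       # single stable LAN IP for the frontend
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: frontend-l2            # placeholder name
  namespace: metallb-system
spec:
  ipAddressPools:
    - frontend-pool
EOF
```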

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 00:00:04 +05:30
tusuii
7e58d758f2 fix: align secret key references — backend was looking for DB_USER which doesn't exist
All checks were successful
scrum-manager/pipeline/head This commit looks good
Root cause: backend deployment.yaml referenced secretKeyRef key: DB_USER and
key: DB_PASSWORD, but the live secret only has MYSQL_USER and MYSQL_PASSWORD.
kubectl apply reported secret/mysql-secret as "unchanged" (last-applied matched
desired) so the drift was never caught — new pods got CreateContainerConfigError.

Changes:
- backend/deployment.yaml: DB_USER → key: MYSQL_USER, DB_PASSWORD → key: MYSQL_PASSWORD
- mysql/deployment.yaml: add MYSQL_USER/MYSQL_PASSWORD env vars so the app user
  (scrumapp) is created if MySQL ever reinitializes from a fresh PVC
- mysql/secret.yaml: remove stale commented-out block with old key names
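Because `kubectl apply` compares against last-applied rather than what pods actually consume, this class of drift is easy to miss. A sketch of a manual check that the live secret's keys match what the deployment references:

```shell
# Keys actually present in the live secret (keys only; values stay encoded)
kubectl get secret mysql-secret -n scrum-manager \
  -o go-template='{{range $k, $_ := .data}}{{$k}}{{"\n"}}{{end}}'

# Keys the backend deployment expects via secretKeyRef
# (env vars without a secretKeyRef print as blank lines)
kubectl get deployment backend -n scrum-manager -o jsonpath='{range .spec.template.spec.containers[*].env[*]}{.valueFrom.secretKeyRef.key}{"\n"}{end}'
```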

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:38:59 +05:30
tusuii
bd9a952399 fix: revert memory request to 128Mi to fix pod scheduling failure
Increasing the request to 256Mi caused backend pods to be Pending with no
node assignment — the scheduler couldn't fit them alongside MySQL (512Mi
request) and existing pods on the on-premise nodes.

Memory REQUEST drives scheduling (how much the node reserves).
Memory LIMIT drives OOMKill (the actual cap at runtime).

Keep request at 128Mi so pods schedule, limit at 512Mi so Node.js +
Socket.io + MySQL pool don't get OOMKilled on startup.

Also add terminationGracePeriodSeconds: 15 so pods from failed/previous
builds release their node slot quickly instead of blocking new pod scheduling.
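The request/limit split above, expressed imperatively for illustration (the commit edits the manifest; values are the ones stated in the message):

```shell
# request = what the scheduler reserves; limit = the OOMKill cap at runtime
kubectl set resources deployment/backend -n scrum-manager \
  --requests=memory=128Mi --limits=memory=512Mi

# Let pods from failed builds free their node slot quickly
kubectl patch deployment backend -n scrum-manager -p \
  '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":15}}}}'
```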

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:32:58 +05:30
tusuii
55287c6f1d fix: increase backend memory limit and add rollout failure diagnostics
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
Backend was OOMKilled during rolling update startup (Node.js + Socket.io +
MySQL pool exceeds 256Mi). Raised limit to 512Mi and request to 256Mi.

Jenkinsfile: show kubectl get pods immediately after apply so pod state
is visible in build logs. Added full diagnostics (describe + logs) in
post.failure block so the root cause of any future rollout failure is
visible without needing to SSH into the cluster.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:24:19 +05:30
tusuii
254052d798 fix: set storageClassName=local-path in PVC patch to match live cluster
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
kubectl apply computes a 3-way merge. The base PVC has no storageClassName
(nil), but the already-bound PVC in the cluster has storageClassName=local-path.
This diff caused apply to attempt a mutation on a bound PVC — forbidden by k8s.

Fix: patch the PVC with storageClassName=local-path so desired state matches
live state and apply produces no diff on the PVC.
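Before pinning a storageClassName in a patch, the live value on the bound claim can be read directly (a sketch; the namespace is assumed to be scrum-manager):

```shell
# The manifest must match this value exactly, or apply will attempt a
# forbidden mutation on the bound PVC.
kubectl get pvc mysql-data-pvc -n scrum-manager \
  -o jsonpath='{.spec.storageClassName}{" "}{.status.phase}{"\n"}'
```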

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:08:36 +05:30
tusuii
5ed8d0bbdc fix: remove PVC patch that broke kubectl apply on bound claims
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
The mysql-data-pvc was already dynamically provisioned by the cluster's
'local-path' StorageClass. The overlay patch tried to change storageClassName
to 'manual' and volumeName on an already-bound PVC, which Kubernetes forbids:
  "spec is immutable after creation except resources.requests"

Fixes:
- Remove mysql-pvc-patch from kustomization.yaml (PVC left as-is)
- Remove mysql-pv.yaml resource (not needed with dynamic provisioner)
- Add comment explaining when manual PV/PVC is needed vs not

Jenkinsfile: add --timeout and FQDN to smoke test curl; add comments
explaining MySQL Recreate strategy startup timing expectations.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 23:02:54 +05:30
tusuii
73bd35173c fix: k8s on-premise deployment and session persistence
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
Database fixes:
- Add hostPath.type=DirectoryOrCreate so kubelet auto-creates /mnt/data/mysql
- Add fsGroup=999 so MySQL process can write to the hostPath volume
- Add MYSQL_ROOT_HOST=% to allow backend pods to authenticate as root
- Fix liveness/readiness probes to include credentials (-p$MYSQL_ROOT_PASSWORD)
- Increase probe initialDelaySeconds (30/60s) for slow first-run init
- Add 15s grace sleep in backend initContainer after MySQL TCP is up
- Add persistentVolumeReclaimPolicy=Retain to prevent accidental data loss
- Explicit accessModes+resources in PVC patch to avoid list merge ambiguity
- Add nodeAffinity comment in PV for multi-node cluster guidance

Ingress/nginx fixes:
- Remove broken rewrite-target=/ that was rewriting all paths (incl /api) to /
- Route /socket.io directly to backend for WebSocket support
- Add /socket.io/ proxy location to both nginx.conf and K8s ConfigMap

Frontend fix:
- Persist currentUser to localStorage on login so page refresh no longer
  clears session and redirects users back to the login page

Tooling:
- Add k8s/overlays/on-premise/deploy.sh for one-command deployment

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 22:51:57 +05:30
fa8efe874e working proper jenkinsfile 2026-02-22 12:34:05 +00:00
748ce24e87 Update Jenkinsfile
All checks were successful
scrum-manager/pipeline/head This commit looks good
2026-02-22 12:24:41 +00:00
d04b1adf7c Delete k8s/overlays/on-premise/mysql-pv.yaml 2026-02-22 12:22:43 +00:00
6c19e8d747 patch
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 12:12:01 +00:00
65c82c2e4c Update k8s/base/mysql/secret.yaml 2026-02-22 12:09:03 +00:00
e5633f9ebc patch 2026-02-22 12:07:42 +00:00
503234c12f patch 2026-02-22 12:04:24 +00:00
899509802c patch
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:41:16 +00:00
a4234ded64 kustomisation patch
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:30:36 +00:00
58ec73916a patch 2026-02-22 11:29:56 +00:00
e23bb94660 jenkinsfile
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:07:30 +00:00
ad65ab824e jenkinsfile
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 11:06:21 +00:00
606eeed4c3 jenkinsfile
Some checks failed
scrum-manager/pipeline/head There was a failure building this commit
2026-02-22 10:48:45 +00:00
tusuii
82077d38e6 added changes ready to ship 2026-02-21 12:06:16 +05:30
48 changed files with 1279 additions and 5639 deletions

Jenkinsfile (vendored, Normal file, 197 lines)

@@ -0,0 +1,197 @@
pipeline {
    agent any

    environment {
        HARBOR_URL = '192.168.108.200:80'
        HARBOR_PROJECT = 'library'
        IMAGE_TAG = "${env.BUILD_NUMBER}"
        K8S_CRED_ID = 'k8s-config'
        FRONTEND_IMAGE = '192.168.108.200:80/library/scrum-frontend'
        BACKEND_IMAGE = '192.168.108.200:80/library/scrum-backend'
        // Workspace root IS the project root — no subdirectory needed
        K8S_OVERLAY = 'k8s/overlays/on-premise'
    }

    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
                echo "Workspace: ${env.WORKSPACE}"
                sh 'ls -la' // quick sanity check — confirm Dockerfile is here
            }
        }

        stage('Test') {
            parallel {
                stage('Backend Tests') {
                    steps {
                        dir('server') { // server/ relative to workspace root
                            sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
                        }
                    }
                }
                stage('Frontend Tests') {
                    steps {
                        // frontend lives at workspace root
                        sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
                    }
                }
            }
        }

        stage('Build Images') {
            parallel {
                stage('Build Frontend') {
                    steps {
                        // Dockerfile is at workspace root
                        sh """
                            docker build \
                                -f Dockerfile \
                                -t ${FRONTEND_IMAGE}:${IMAGE_TAG} \
                                -t ${FRONTEND_IMAGE}:latest \
                                .
                        """
                    }
                }
                stage('Build Backend') {
                    steps {
                        dir('server') { // server/Dockerfile
                            sh """
                                docker build \
                                    -f Dockerfile \
                                    -t ${BACKEND_IMAGE}:${IMAGE_TAG} \
                                    -t ${BACKEND_IMAGE}:latest \
                                    .
                            """
                        }
                    }
                }
            }
        }

        stage('Push to Harbor') {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'harbor-creds',
                    usernameVariable: 'HARBOR_USER',
                    passwordVariable: 'HARBOR_PASS'
                )]) {
                    sh """
                        echo \$HARBOR_PASS | docker login ${HARBOR_URL} -u \$HARBOR_USER --password-stdin
                        docker push ${FRONTEND_IMAGE}:${IMAGE_TAG}
                        docker push ${FRONTEND_IMAGE}:latest
                        docker push ${BACKEND_IMAGE}:${IMAGE_TAG}
                        docker push ${BACKEND_IMAGE}:latest
                    """
                }
            }
        }

        stage('Patch Image Tags') {
            steps {
                dir("${K8S_OVERLAY}") {
                    sh """
                        kustomize edit set image \
                            scrum-frontend=${FRONTEND_IMAGE}:${IMAGE_TAG} \
                            scrum-backend=${BACKEND_IMAGE}:${IMAGE_TAG}
                    """
                }
            }
        }

        stage('Deploy to K8s') {
            steps {
                withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
                    sh "kubectl apply -k ${K8S_OVERLAY}"
                    // Show pod state immediately after apply so we can see pull/init status in logs
                    sh "kubectl get pods -n scrum-manager -o wide"
                    // MySQL uses Recreate strategy: old pod terminates then new starts.
                    sh "kubectl rollout status deployment/mysql -n scrum-manager --timeout=300s"
                    // maxSurge=0: old pod terminates first, new pod starts after.
                    // CPU-constrained nodes may delay scheduling — 600s covers this.
                    sh "kubectl rollout status deployment/backend -n scrum-manager --timeout=600s"
                    sh "kubectl rollout status deployment/frontend -n scrum-manager --timeout=600s"
                    echo "All deployments rolled out."
                }
            }
        }

        stage('Smoke Test') {
            steps {
                withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
                    // Run a curl pod inside the cluster to hit the backend health endpoint.
                    // Uses FQDN (backend.scrum-manager.svc.cluster.local) to be explicit.
                    sh """
                        kubectl run smoke-${BUILD_NUMBER} \
                            --image=curlimages/curl:8.5.0 \
                            --restart=Never \
                            --rm \
                            --attach \
                            --timeout=30s \
                            -n scrum-manager \
                            -- curl -sf --max-time 10 \
                                http://backend.scrum-manager.svc.cluster.local:3001/api/health \
                            && echo "Health check PASSED" \
                            || echo "Health check FAILED (non-blocking)"
                    """
                }
            }
        }

        stage('Clean Up') {
            steps {
                sh """
                    docker rmi ${FRONTEND_IMAGE}:${IMAGE_TAG} || true
                    docker rmi ${FRONTEND_IMAGE}:latest || true
                    docker rmi ${BACKEND_IMAGE}:${IMAGE_TAG} || true
                    docker rmi ${BACKEND_IMAGE}:latest || true
                """
            }
        }
    }

    post {
        success {
            echo "✅ Build #${env.BUILD_NUMBER} deployed → http://scrum.local"
        }
        failure {
            withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
                sh """
                    echo '=== Pod Status ==='
                    kubectl get pods -n scrum-manager -o wide || true
                    echo '=== Backend Pod Events ==='
                    kubectl describe pods -l app.kubernetes.io/name=backend -n scrum-manager || true
                    echo '=== Backend Logs (last 50 lines) ==='
                    kubectl logs -l app.kubernetes.io/name=backend -n scrum-manager --tail=50 --all-containers=true || true
                    echo '=== Frontend Pod Events ==='
                    kubectl describe pods -l app.kubernetes.io/name=frontend -n scrum-manager || true
                    echo '=== MySQL Pod Events ==='
                    kubectl describe pods -l app.kubernetes.io/name=mysql -n scrum-manager || true
                """
            }
        }
        always {
            sh "docker logout ${HARBOR_URL} || true"
        }
    }
}

Jenkinsfile.bak (Normal file, 168 lines)

@@ -0,0 +1,168 @@
pipeline {
    agent any

    environment {
        HARBOR_URL = '192.168.108.200:80'
        HARBOR_PROJECT = 'library'
        IMAGE_TAG = "${env.BUILD_NUMBER}"
        K8S_CRED_ID = 'k8s-config'
        FRONTEND_IMAGE = '192.168.108.200:80/library/scrum-frontend'
        BACKEND_IMAGE = '192.168.108.200:80/library/scrum-backend'
        // Workspace root IS the project root — no subdirectory needed
        K8S_OVERLAY = 'k8s/overlays/on-premise'
    }

    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
                echo "Workspace: ${env.WORKSPACE}"
                sh 'ls -la' // quick sanity check — confirm Dockerfile is here
            }
        }

        stage('Test') {
            parallel {
                stage('Backend Tests') {
                    steps {
                        dir('server') { // server/ relative to workspace root
                            sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
                        }
                    }
                }
                stage('Frontend Tests') {
                    steps {
                        // frontend lives at workspace root
                        sh 'npm ci && npm test -- --reporter=verbose 2>&1 || true'
                    }
                }
            }
        }

        stage('Build Images') {
            parallel {
                stage('Build Frontend') {
                    steps {
                        // Dockerfile is at workspace root
                        sh """
                            docker build \
                                -f Dockerfile \
                                -t ${FRONTEND_IMAGE}:${IMAGE_TAG} \
                                -t ${FRONTEND_IMAGE}:latest \
                                .
                        """
                    }
                }
                stage('Build Backend') {
                    steps {
                        dir('server') { // server/Dockerfile
                            sh """
                                docker build \
                                    -f Dockerfile \
                                    -t ${BACKEND_IMAGE}:${IMAGE_TAG} \
                                    -t ${BACKEND_IMAGE}:latest \
                                    .
                            """
                        }
                    }
                }
            }
        }

        stage('Push to Harbor') {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'harbor-creds',
                    usernameVariable: 'HARBOR_USER',
                    passwordVariable: 'HARBOR_PASS'
                )]) {
                    sh """
                        echo \$HARBOR_PASS | docker login ${HARBOR_URL} -u \$HARBOR_USER --password-stdin
                        docker push ${FRONTEND_IMAGE}:${IMAGE_TAG}
                        docker push ${FRONTEND_IMAGE}:latest
                        docker push ${BACKEND_IMAGE}:${IMAGE_TAG}
                        docker push ${BACKEND_IMAGE}:latest
                    """
                }
            }
        }

        stage('Patch Image Tags') {
            steps {
                dir("${K8S_OVERLAY}") {
                    sh """
                        kustomize edit set image \
                            scrum-frontend=${FRONTEND_IMAGE}:${IMAGE_TAG} \
                            scrum-backend=${BACKEND_IMAGE}:${IMAGE_TAG}
                    """
                }
            }
        }

        stage('Deploy to K8s') {
            steps {
                withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
                    sh "kubectl apply -k ${K8S_OVERLAY}"
                    sh "kubectl rollout status deployment/mysql -n scrum-manager --timeout=300s"
                    sh "kubectl rollout status deployment/backend -n scrum-manager --timeout=300s"
                    sh "kubectl rollout status deployment/frontend -n scrum-manager --timeout=180s"
                    echo "✅ All deployments rolled out."
                }
            }
        }

        stage('Smoke Test') {
            steps {
                withKubeConfig([credentialsId: "${K8S_CRED_ID}"]) {
                    sh """
                        kubectl run smoke-${BUILD_NUMBER} \
                            --image=curlimages/curl:latest \
                            --restart=Never \
                            --rm \
                            --attach \
                            -n scrum-manager \
                            -- curl -sf http://backend:3001/api/health \
                            && echo "Health check PASSED" \
                            || echo "Health check FAILED (non-blocking)"
                    """
                }
            }
        }

        stage('Clean Up') {
            steps {
                sh """
                    docker rmi ${FRONTEND_IMAGE}:${IMAGE_TAG} || true
                    docker rmi ${FRONTEND_IMAGE}:latest || true
                    docker rmi ${BACKEND_IMAGE}:${IMAGE_TAG} || true
                    docker rmi ${BACKEND_IMAGE}:latest || true
                """
            }
        }
    }

    post {
        success {
            echo "✅ Build #${env.BUILD_NUMBER} deployed → http://scrum.local"
        }
        failure {
            echo "❌ Pipeline failed. Check stage logs above."
        }
        always {
            sh "docker logout ${HARBOR_URL} || true"
        }
    }
}


@@ -1,44 +0,0 @@
# Running Scrum Manager with Fermyon Spin
This project has been configured to run on [Fermyon Spin](https://developer.fermyon.com/spin/index), allowing for quick deployment as WebAssembly components.
## Prerequisites
- [Install Spin](https://developer.fermyon.com/spin/install) (v2.0 or later)
- Node.js and npm
## Build
To build both the frontend and the Wasm-compatible backend:
```bash
# Build Frontend (outputs to dist/)
npm run build
# Build Backend (outputs to server/dist/spin.js)
cd server
npm install
npm run build:spin
cd ..
```
## Running Locally
You can run the application locally using `spin up`.
Note: The application requires a MySQL database. Spin connects to it via the address specified in `spin.toml`.
1. Ensure your MySQL database is running (e.g., via Docker).
2. Run Spin:
```bash
spin up --sqlite # If using SQLite support (not fully implemented yet, defaults to MySQL config in spin.toml)
# OR for MySQL:
spin up
```
*Note: You may need to adjust the `db_host` and credentials in `spin.toml` or via environment variables if your DB is not at localhost:3306.*
## Structure
- **`spin.toml`**: The Spin manifest file defining the application components.
- **`server/app_spin.js`**: The Wasm entry point for the backend, using Hono.
- **`server/db_spin.js`**: A database adapter adapting MySQL calls for the Spin environment.


@@ -1,217 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) The Spin Framework Contributors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--- LLVM Exceptions to the Apache 2.0 License ----
As an exception, if, as a result of your compiling your source code, portions
of this Software are embedded into an Object form of such source code, you
may redistribute such embedded portions in such Object form without complying
with the conditions of Sections 4(a), 4(b) and 4(d) of the License.
In addition, if you combine or link compiled forms of this Software with
software that is licensed under the GPLv2 ("Combined Software") and if a
court of competent jurisdiction determines that the patent provision (Section
3), the indemnity provision (Section 9) or other Section of the License
conflicts with the conditions of the GPLv2, you may retroactively and
prospectively choose to deem waived or otherwise exclude such Section(s) of
the License, but only in their entirety and only with respect to the Combined
Software.


@@ -1,136 +0,0 @@
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./docs/static/image/logo-dark.png">
<img alt="spin logo" src="./docs/static/image/logo.png" width="300" height="128">
</picture>
<p>Spin is a framework for building, deploying, and running fast, secure, and composable cloud microservices with WebAssembly.</p>
<a href="https://github.com/spinframework/spin/actions/workflows/build.yml"><img src="https://github.com/spinframework/spin/actions/workflows/build.yml/badge.svg" alt="build status" /></a>
<a href="https://cloud-native.slack.com/archives/C089NJ9G1V0"><img alt="Slack" src="https://img.shields.io/badge/slack-spin-green.svg?logo=slack"></a>
<a href="https://www.bestpractices.dev/projects/10373"><img src="https://www.bestpractices.dev/projects/10373/badge"></a>
</div>
## What is Spin?
Spin is an open source framework for building and running fast, secure, and
composable cloud microservices with WebAssembly. It aims to be the easiest way
to get started with WebAssembly microservices, and takes advantage of the latest
developments in the
[WebAssembly component model](https://github.com/WebAssembly/component-model)
and [Wasmtime](https://wasmtime.dev/) runtime.
Spin offers a simple CLI that helps you create, distribute, and execute
applications, and in the next sections we will learn more about Spin
applications and how to get started.
## Getting started
See the [Install Spin](https://spinframework.dev/install) page of the [Spin documentation](https://spinframework.dev) for a detailed
guide on installing and configuring Spin, but in short run the following commands:
```bash
curl -fsSL https://spinframework.dev/downloads/install.sh | bash
sudo mv ./spin /usr/local/bin/spin
```
Alternatively, you could [build Spin from source](https://spinframework.dev/contributing-spin).
To get started writing apps, follow the [quickstart guide](https://spinframework.dev/quickstart/),
and then follow the
[Rust](https://spinframework.dev/rust-components/), [JavaScript](https://spinframework.dev/javascript-components), [Python](https://spinframework.dev/python-components), or [Go](https://spinframework.dev/go-components/)
language guides, and the [guide on writing Spin applications](https://spinframework.dev/writing-apps/).
## Language support
WebAssembly is a language-agnostic runtime: you can build WebAssembly components from a variety of source languages. Spin SDKs are available for several languages, including:
* JavaScript: https://github.com/spinframework/spin-js-sdk
* Rust: https://crates.io/crates/spin-sdk
* Go: https://pkg.go.dev/github.com/fermyon/spin/sdk/go/v2
* Python: https://github.com/spinframework/spin-python-sdk
* Zig: https://github.com/dasimmet/zig-spin (third party)
* Moonbit: https://github.com/gmlewis/spin-moonbit-sdk (third party)
> The Spin framework team supports the JavaScript, Rust, Go, and Python SDKs. Other language integrations are supported by their authors, and we're grateful to them for their work!
## Usage
Below is an example of using the `spin` CLI to create a new Spin application. To run the example you will need to install the `wasm32-wasip1` target for Rust.
```bash
$ rustup target add wasm32-wasip1
```
First, run the `spin new` command to create a Spin application from a template.
```bash
# Create a new Spin application named 'hello-rust' based on the Rust http template, accepting all defaults
$ spin new --accept-defaults -t http-rust hello-rust
```
Running the `spin new` command created a `hello-rust` directory with all the necessary files for your application. Change to the `hello-rust` directory and build the application with `spin build`, then run it locally with `spin up`:
```bash
# Compile to Wasm by executing the `build` command.
$ spin build
Executing the build command for component hello-rust: cargo build --target wasm32-wasip1 --release
Finished release [optimized] target(s) in 0.03s
Successfully ran the build command for the Spin components.
# Run the application locally.
$ spin up
Logging component stdio to ".spin/logs/"
Serving http://127.0.0.1:3000
Available Routes:
hello-rust: http://127.0.0.1:3000 (wildcard)
```
That's it! Now that the application is running, use your browser or cURL in another shell to try it out:
```bash
# Send a request to the application.
$ curl -i 127.0.0.1:3000
HTTP/1.1 200 OK
content-type: text/plain
transfer-encoding: chunked
date: Sun, 02 Mar 2025 20:09:11 GMT
Hello World!
```
You can make the app do more by editing the `src/lib.rs` file in the `hello-rust` directory using your favorite editor or IDE. To learn more about writing Spin applications see [Writing Applications](https://spinframework.dev/writing-apps) in the Spin documentation. To learn how to publish and distribute your application see the [Publishing and Distribution](https://spinframework.dev/distributing-apps) guide in the Spin documentation.
## Language Support for Spin Features
The table below summarizes the [feature support](https://spinframework.dev/language-support-overview) in each of the language SDKs.
| Feature | Rust SDK Supported? | TypeScript SDK Supported? | Python SDK Supported? | Tiny Go SDK Supported? | C# SDK Supported? |
|-----|-----|-----|-----|-----|-----|
| **Triggers** |
| [HTTP](https://spinframework.dev/http-trigger) | Supported | Supported | Supported | Supported | Supported |
| [Redis](https://spinframework.dev/redis-trigger) | Supported | Supported | Supported | Supported | Not Supported |
| **APIs** |
| [Outbound HTTP](https://spinframework.dev/rust-components.md#sending-outbound-http-requests) | Supported | Supported | Supported | Supported | Supported |
| [Configuration Variables](https://spinframework.dev/variables) | Supported | Supported | Supported | Supported | Supported |
| [Key Value Storage](https://spinframework.dev/kv-store-api-guide) | Supported | Supported | Supported | Supported | Not Supported |
| [SQLite Storage](https://spinframework.dev/sqlite-api-guide) | Supported | Supported | Supported | Supported | Not Supported |
| [MySQL](https://spinframework.dev/rdbms-storage#using-mysql-and-postgresql-from-applications) | Supported | Supported | Not Supported | Supported | Not Supported |
| [PostgreSQL](https://spinframework.dev/rdbms-storage#using-mysql-and-postgresql-from-applications) | Supported | Supported | Not Supported | Supported | Supported |
| [Outbound Redis](https://spinframework.dev/rust-components.md#storing-data-in-redis-from-rust-components) | Supported | Supported | Supported | Supported | Supported |
| [Serverless AI](https://spinframework.dev/serverless-ai-api-guide) | Supported | Supported | Supported | Supported | Not Supported |
| **Extensibility** |
| [Authoring Custom Triggers](https://spinframework.dev/extending-and-embedding) | Supported | Not Supported | Not Supported | Not Supported | Not Supported |
## Getting Involved and Contributing
We are delighted that you are interested in making Spin better! Thank you!
Each Monday at 2:30pm UTC (odd weeks) and 9:00pm UTC (even weeks), we meet to discuss Spin issues, roadmap, and ideas in our Spin Project Meetings. Link to the meeting can be found in the Spin Project Meeting agenda below.
The [Spin Project Meeting agenda](https://docs.google.com/document/d/1EG392gb8Eg-1ZEPDy18pgFZvMMrdAEybpCSufFXoe00/edit?usp=sharing) is a public document. The document contains a rolling agenda with the date and time of each meeting, the Zoom link, and topics of discussion for the day. You will also find the meeting minutes for each meeting and the link to the recording. If you have something you would like to demo or discuss at the project meeting, we encourage you to add it to the agenda.
You can find the contributing guide [here](https://spinframework.dev/contributing-spin).
## Stay in Touch
Follow us on Twitter: [@spinframework](https://twitter.com/spinframework)
You can join the Spin community in the [Spin CNCF Slack channel](https://cloud-native.slack.com/archives/C089NJ9G1V0) where you can ask questions, get help, and show off the cool things you are doing with Spin!


@@ -1 +0,0 @@
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUd0VENDQmpxZ0F3SUJBZ0lVWWtEZ21PZFRyN2JnUlIzdGZSR29IMTRHZzNRd0NnWUlLb1pJemowRUF3TXcKTnpFVk1CTUdBMVVFQ2hNTWMybG5jM1J2Y21VdVpHVjJNUjR3SEFZRFZRUURFeFZ6YVdkemRHOXlaUzFwYm5SbApjbTFsWkdsaGRHVXdIaGNOTWpZd01qRXdNVGd4TkRNM1doY05Nall3TWpFd01UZ3lORE0zV2pBQU1Ga3dFd1lICktvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUVaekN3SElaSlJlaUNVV3JaZFgvRkRTNEFuVjVmNGxNUXV6NEcKVVBuNnF1a3A5ay8ycWdlM1JMOW4zNFF5d1VVUkt4Y3FQaG1IU3RRaFFTUkdaYXhzWWFPQ0JWa3dnZ1ZWTUE0RwpBMVVkRHdFQi93UUVBd0lIZ0RBVEJnTlZIU1VFRERBS0JnZ3JCZ0VGQlFjREF6QWRCZ05WSFE0RUZnUVVONGJoCmdjUVlaNFhZSDhFU0ZscUQ1cWVNUVg4d0h3WURWUjBqQkJnd0ZvQVUzOVBwejFZa0VaYjVxTmpwS0ZXaXhpNFkKWkQ4d1lnWURWUjBSQVFIL0JGZ3dWb1pVYUhSMGNITTZMeTluYVhSb2RXSXVZMjl0TDNOd2FXNW1jbUZ0WlhkdgpjbXN2YzNCcGJpOHVaMmwwYUhWaUwzZHZjbXRtYkc5M2N5OXlaV3hsWVhObExubHRiRUJ5WldaekwzUmhaM012CmRqTXVOaTR3TURrR0Npc0dBUVFCZzc4d0FRRUVLMmgwZEhCek9pOHZkRzlyWlc0dVlXTjBhVzl1Y3k1bmFYUm8KZFdKMWMyVnlZMjl1ZEdWdWRDNWpiMjB3RWdZS0t3WUJCQUdEdnpBQkFnUUVjSFZ6YURBMkJnb3JCZ0VFQVlPLwpNQUVEQkNoak5XTmlNek0wWkdVeU1HSXpaRGt3TldRd016YzNOamMzTldKbFpUYzJPR1V6TnpSaVltSXlNQlVHCkNpc0dBUVFCZzc4d0FRUUVCMUpsYkdWaGMyVXdJQVlLS3dZQkJBR0R2ekFCQlFRU2MzQnBibVp5WVcxbGQyOXkKYXk5emNHbHVNQjRHQ2lzR0FRUUJnNzh3QVFZRUVISmxabk12ZEdGbmN5OTJNeTQyTGpBd093WUtLd1lCQkFHRAp2ekFCQ0FRdERDdG9kSFJ3Y3pvdkwzUnZhMlZ1TG1GamRHbHZibk11WjJsMGFIVmlkWE5sY21OdmJuUmxiblF1ClkyOXRNR1FHQ2lzR0FRUUJnNzh3QVFrRVZneFVhSFIwY0hNNkx5OW5hWFJvZFdJdVkyOXRMM053YVc1bWNtRnQKWlhkdmNtc3ZjM0JwYmk4dVoybDBhSFZpTDNkdmNtdG1iRzkzY3k5eVpXeGxZWE5sTG5sdGJFQnlaV1p6TDNSaApaM012ZGpNdU5pNHdNRGdHQ2lzR0FRUUJnNzh3QVFvRUtnd29ZelZqWWpNek5HUmxNakJpTTJRNU1EVmtNRE0zCk56WTNOelZpWldVM05qaGxNemMwWW1KaU1qQWRCZ29yQmdFRUFZTy9NQUVMQkE4TURXZHBkR2gxWWkxb2IzTjAKWldRd05RWUtLd1lCQkFHRHZ6QUJEQVFuRENWb2RIUndjem92TDJkcGRHaDFZaTVqYjIwdmMzQnBibVp5WVcxbApkMjl5YXk5emNHbHVNRGdHQ2lzR0FRUUJnNzh3QVEwRUtnd29ZelZqWWpNek5HUmxNakJpTTJRNU1EVmtNRE0zCk56WTNOelZpWldVM05qaGxNemMwWW1KaU1qQWdCZ29yQmdFRUFZTy9NQUVPQkJJTUVISmxabk12ZEdGbmN5OTIKTXk0MkxqQXdHUVlLS3dZQkJBR0R2ekFCRHdRTERBazBNak0yTnprMk5qUXdNQVlLS3dZQkJBR0R2ekFCRUFRaQpEQ0JvZEhSd2N6b3ZMMmRwZEdoMVlpNWpiMjB2YzNCcGJtWnlZVzFsZDI5eWF6QVpCZ29yQmdFRUFZTy9NQUVSCkJBc01DVEU1TlRrM01qVTJOakJrQmdvckJnRUVBWU8vTUFFU0JGWU1WR2gwZEhCek9pOHZaMmwwYUhWaUxtTnYKYlM5emNHbHVabkpoYldWM2IzSnJMM053YVc0dkxtZHBkR2gxWWk5M2IzSnJabXh2ZDNNdmNtVnNaV0Z6WlM1NQpiV3hBY21WbWN5OTBZV2R6TDNZekxqWXVNREE0QmdvckJnRUVBWU8vTUFFVEJDb01LR00xWTJJek16UmtaVEl3CllqTmtPVEExWkRBek56YzJOemMxWW1WbE56WTRaVE0zTkdKaVlqSXdGQVlLS3dZQkJBR0R2ekFCRkFRR0RBUncKZFhOb01Ga0dDaXNHQVFRQmc3OHdBUlVFU3d4SmFIUjBjSE02THk5bmFYUm9kV0l1WTI5dEwzTndhVzVtY21GdApaWGR2Y21zdmMzQnBiaTloWTNScGIyNXpMM0oxYm5Ndk1qRTROell6TlRFek1qVXZZWFIwWlcxd2RITXZNVEFXCkJnb3JCZ0VFQVlPL01BRVdCQWdNQm5CMVlteHBZekNCaVFZS0t3WUJCQUhXZVFJRUFnUjdCSGtBZHdCMUFOMDkKTUdyR3h4RXlZeGtlSEpsbk53S2lTbDY0M2p5dC80ZUtjb0F2S2U2T0FBQUJuRWpETDg0QUFBUURBRVl3UkFJZwpPYll6cWFxWEptYlZKM0FGd2txMzgrWnlSWE9mQm9EdlZtQnpRbkczcmJVQ0lFSzdFdDRqZFhZZWUreExBMktNCnBRSVBKRkZXQURlNUNUOHZIcDBzWkJBUE1Bb0dDQ3FHU000OUJBTURBMmtBTUdZQ01RRHNFWHdjdVJ4ajZNeFUKa2RvRWVpMkVjR3U4Y0w0Ty9XRVo0N3d3ajZ6Nnp0OGhmYlZFcWo2RUN6VEVycHJmYys4Q01RRGZrdTNxK1VKdQpLczVhRGtOU1piYkFXbzdmQ2lzYjlKbkNPYUYzS0I5Z0tOTjZIWWh3MGpXaVpSdnd1Q1FhV240PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==

BIN
bin/spin

Binary file not shown.


@@ -1 +0,0 @@
MEQCIBOrg4FEuMQ1Lc1kJbUqE1rd+iEvE1VBAdv8lHKueZ42AiBPdsJTq2CDpRKmNt8kiPBSMW6YI3DpTTVywFg1o4pUVQ==


@@ -7,6 +7,11 @@ metadata:
app.kubernetes.io/component: api
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0 # Don't create extra pods during update — avoids CPU pressure
maxUnavailable: 1 # Terminate one old pod first, then start new one
selector:
matchLabels:
app.kubernetes.io/name: backend
@@ -17,6 +22,7 @@ spec:
app.kubernetes.io/name: backend
app.kubernetes.io/component: api
spec:
terminationGracePeriodSeconds: 15
initContainers:
- name: wait-for-mysql
image: busybox:1.36
@@ -24,12 +30,14 @@ spec:
- sh
- -c
- |
echo "Waiting for MySQL to be ready..."
echo "Waiting for MySQL TCP to be available..."
until nc -z mysql 3306; do
echo "MySQL is not ready yet, retrying in 3s..."
echo "MySQL not reachable yet, retrying in 3s..."
sleep 3
done
echo "MySQL is ready!"
echo "MySQL TCP is up. Waiting 15s for full initialization..."
sleep 15
echo "Proceeding to start backend."
containers:
- name: backend
image: scrum-backend:latest
@@ -46,12 +54,12 @@ spec:
valueFrom:
secretKeyRef:
name: mysql-secret
key: DB_USER
key: MYSQL_USER
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: DB_PASSWORD
key: MYSQL_PASSWORD
- name: DB_NAME
valueFrom:
secretKeyRef:
@@ -62,10 +70,10 @@ spec:
resources:
requests:
cpu: 100m
memory: 128Mi
memory: 128Mi # Request drives scheduling — keep low so pods fit on nodes
limits:
cpu: 500m
memory: 256Mi
memory: 512Mi # Limit prevents OOMKill during startup spikes
livenessProbe:
httpGet:
path: /api/health
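The `maxSurge: 0` / `maxUnavailable: 1` strategy above can be observed live during a redeploy — a hedged sketch; the `scrum-manager` namespace and the `app.kubernetes.io/name=backend` label come from the manifests and deploy script in this repo:

```shell
# Trigger a fresh rollout and watch pod turnover. With maxSurge: 0 the pod
# count never exceeds the replica count (2): one old pod terminates first,
# then its replacement is scheduled — no transient CPU/memory spike.
kubectl -n scrum-manager rollout restart deployment/backend
kubectl -n scrum-manager get pods -l app.kubernetes.io/name=backend --watch
```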


@@ -14,11 +14,6 @@ data:
root /usr/share/nginx/html;
index index.html;
# Serve static files
location / {
try_files $uri $uri/ /index.html;
}
# Proxy API requests to backend service
location /api/ {
proxy_pass http://backend:3001;
@@ -27,5 +22,23 @@ data:
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 60s;
}
# Proxy Socket.io (real-time notifications)
location /socket.io/ {
proxy_pass http://backend:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 3600s;
}
# Serve static files — React SPA catch-all
location / {
try_files $uri $uri/ /index.html;
}
}
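The `/socket.io/` location above can be smoke-tested without a browser by issuing an Engine.IO polling handshake — a sketch, assuming the frontend is reachable at `$FRONTEND_IP` (the LoadBalancer or node IP; the variable name is illustrative):

```shell
# Socket.io v4 speaks Engine.IO protocol 4; a successful handshake through
# the nginx proxy returns a packet beginning with 0{"sid":...}.
curl -s "http://$FRONTEND_IP/socket.io/?EIO=4&transport=polling"
```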


@@ -7,6 +7,11 @@ metadata:
app.kubernetes.io/component: web
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0 # Don't create extra pods during update — avoids CPU pressure
maxUnavailable: 1 # Terminate one old pod first, then start new one
selector:
matchLabels:
app.kubernetes.io/name: frontend


@@ -6,7 +6,7 @@ metadata:
app.kubernetes.io/name: frontend
app.kubernetes.io/component: web
spec:
type: NodePort
type: LoadBalancer
ports:
- port: 80
targetPort: 80


@@ -19,6 +19,11 @@ spec:
app.kubernetes.io/name: mysql
app.kubernetes.io/component: database
spec:
# fsGroup 999 = mysql group in the container image.
# Without this, the hostPath volume is owned by root and MySQL
# cannot write to /var/lib/mysql → pod CrashLoops immediately.
securityContext:
fsGroup: 999
containers:
- name: mysql
image: mysql:8.0
@@ -36,6 +41,21 @@ spec:
secretKeyRef:
name: mysql-secret
key: DB_NAME
# Allow root to connect from backend pods (any host), not just localhost.
- name: MYSQL_ROOT_HOST
value: "%"
# Create the app user on first init. Required if PVC is ever wiped and
# MySQL reinitializes — otherwise scrumapp user won't exist and backend fails.
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_PASSWORD
volumeMounts:
- name: mysql-data
mountPath: /var/lib/mysql
@@ -49,25 +69,24 @@ spec:
livenessProbe:
exec:
command:
- mysqladmin
- ping
- -h
- localhost
initialDelaySeconds: 30
- sh
- -c
- mysqladmin ping -h 127.0.0.1 -u root -p"$MYSQL_ROOT_PASSWORD" --silent
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
exec:
command:
- mysqladmin
- ping
- -h
- localhost
initialDelaySeconds: 10
- sh
- -c
- mysqladmin ping -h 127.0.0.1 -u root -p"$MYSQL_ROOT_PASSWORD" --silent
# MySQL 8.0 first-run initialization takes 30-60s on slow disks.
initialDelaySeconds: 30
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 5
failureThreshold: 10
volumes:
- name: mysql-data
persistentVolumeClaim:
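The rewritten probe command can be run by hand to confirm it behaves exactly as the kubelet sees it — a sketch assuming the `scrum-manager` namespace used elsewhere in this overlay:

```shell
# Execute the readiness check inside the running MySQL pod.
# Exit code 0 means mysqladmin received a pong; non-zero is what keeps
# the pod NotReady (and, for liveness, eventually restarts it).
kubectl -n scrum-manager exec deploy/mysql -- \
  sh -c 'mysqladmin ping -h 127.0.0.1 -u root -p"$MYSQL_ROOT_PASSWORD" --silent'
```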


@@ -7,11 +7,13 @@ metadata:
app.kubernetes.io/component: database
type: Opaque
data:
# Base64 encoded values — change these for production!
# echo -n 'scrumpass' | base64 => c2NydW1wYXNz
# echo -n 'root' | base64 => cm9vdA==
# echo -n 'scrum_manager' | base64 => c2NydW1fbWFuYWdlcg==
MYSQL_ROOT_PASSWORD: c2NydW1wYXNz
DB_USER: cm9vdA==
DB_PASSWORD: c2NydW1wYXNz
MYSQL_USER: c2NydW1hcHA=
MYSQL_PASSWORD: c2NydW1wYXNz
DB_NAME: c2NydW1fbWFuYWdlcg==
# Decode reference:
# MYSQL_ROOT_PASSWORD: scrumpass
# MYSQL_USER: scrumapp
# MYSQL_PASSWORD: scrumpass
# DB_NAME: scrum_manager
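The encoded values above can be regenerated (or rotated) from the shell. Note that `printf '%s'` / `echo -n` matters: a trailing newline would be baked into the secret and silently corrupt the credential.

```shell
# Encode each plaintext value for the Secret's data: block.
printf '%s' 'scrumpass'     | base64   # c2NydW1wYXNz
printf '%s' 'scrumapp'      | base64   # c2NydW1hcHA=
printf '%s' 'scrum_manager' | base64   # c2NydW1fbWFuYWdlcg==
```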


@@ -0,0 +1,95 @@
#!/usr/bin/env bash
set -euo pipefail
# ── Scrum Manager — On-Premise Kubernetes Deploy Script ─────────────────────
# Run from the project root: bash k8s/overlays/on-premise/deploy.sh
# ────────────────────────────────────────────────────────────────────────────
OVERLAY="k8s/overlays/on-premise"
NAMESPACE="scrum-manager"
REGISTRY="${REGISTRY:-}" # Optional: set to your registry, e.g. "192.168.1.10:5000"
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; NC='\033[0m'
info() { echo -e "${GREEN}[INFO]${NC} $*"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
error() { echo -e "${RED}[ERROR]${NC} $*"; exit 1; }
# ── Pre-flight checks ────────────────────────────────────────────────────────
info "Checking prerequisites..."
command -v kubectl >/dev/null 2>&1 || error "kubectl not found"
command -v docker >/dev/null 2>&1 || error "docker not found"
kubectl cluster-info >/dev/null 2>&1 || error "Cannot reach Kubernetes cluster. Check kubeconfig."
info "Prerequisites OK."
# ── Multi-node: hostPath nodeAffinity reminder ───────────────────────────────
NODE_COUNT=$(kubectl get nodes --no-headers 2>/dev/null | wc -l)
if [ "$NODE_COUNT" -gt 1 ]; then
warn "Multi-node cluster detected ($NODE_COUNT nodes)."
warn "MySQL data is stored at /mnt/data/mysql on ONE node only."
warn "Open k8s/overlays/on-premise/mysql-pv.yaml and uncomment"
warn "the nodeAffinity block, setting it to the correct node hostname."
warn "Run: kubectl get nodes to list hostnames."
read -rp "Press ENTER to continue anyway, or Ctrl+C to abort and fix first..."
fi
# ── Build Docker images ──────────────────────────────────────────────────────
info "Building Docker images..."
BACKEND_TAG="${REGISTRY:+${REGISTRY}/}scrum-backend:latest"
FRONTEND_TAG="${REGISTRY:+${REGISTRY}/}scrum-frontend:latest"
docker build -t "$BACKEND_TAG" -f server/Dockerfile server/
docker build -t "$FRONTEND_TAG" -f Dockerfile .
# ── Push or load images into cluster ────────────────────────────────────────
if [ -n "$REGISTRY" ]; then
info "Pushing images to registry $REGISTRY..."
docker push "$BACKEND_TAG"
docker push "$FRONTEND_TAG"
else
warn "No REGISTRY set. Attempting to load images via 'docker save | ssh'..."
warn "If you have a single-node cluster and Docker runs on the same host,"
warn "set imagePullPolicy: Never in the deployments (already set)."
warn "For multi-node, set REGISTRY=<your-registry> before running this script."
warn ""
warn " Alternatively, load images manually on each node with:"
warn " docker save scrum-backend:latest | ssh NODE docker load"
warn " docker save scrum-frontend:latest | ssh NODE docker load"
fi
# ── Apply Kubernetes manifests ────────────────────────────────────────────────
info "Applying manifests via kustomize..."
kubectl apply -k "$OVERLAY"
# ── Wait for rollout ──────────────────────────────────────────────────────────
info "Waiting for MySQL to become ready (this can take up to 90s on first run)..."
kubectl rollout status deployment/mysql -n "$NAMESPACE" --timeout=120s || \
warn "MySQL rollout timed out — check: kubectl describe pod -l app.kubernetes.io/name=mysql -n $NAMESPACE"
info "Waiting for backend..."
kubectl rollout status deployment/backend -n "$NAMESPACE" --timeout=90s || \
warn "Backend rollout timed out — check: kubectl logs -l app.kubernetes.io/name=backend -n $NAMESPACE"
info "Waiting for frontend..."
kubectl rollout status deployment/frontend -n "$NAMESPACE" --timeout=60s || \
warn "Frontend rollout timed out."
# ── Show access info ──────────────────────────────────────────────────────────
echo ""
info "Deploy complete! Access the app:"
NODEPORT=$(kubectl get svc frontend -n "$NAMESPACE" -o jsonpath='{.spec.ports[0].nodePort}' 2>/dev/null || echo "")
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' 2>/dev/null || echo "<NODE-IP>")
if [ -n "$NODEPORT" ]; then
echo ""
echo -e " NodePort: ${GREEN}http://${NODE_IP}:${NODEPORT}${NC}"
fi
echo ""
echo -e " Ingress: ${GREEN}http://scrum.local${NC} (add '$NODE_IP scrum.local' to /etc/hosts)"
echo ""
echo "Useful commands:"
echo " kubectl get pods -n $NAMESPACE"
echo " kubectl logs -f deployment/backend -n $NAMESPACE"
echo " kubectl logs -f deployment/mysql -n $NAMESPACE"


@@ -0,0 +1,32 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: scrum-manager-ingress
annotations:
kubernetes.io/ingress.class: nginx
# No rewrite-target here — the old global rewrite-target: / was
# rewriting every path (including /api/tasks) to just /, breaking the API.
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
rules:
- host: scrum.local
http:
paths:
# Socket.io long-polling and WebSocket connections go directly to backend.
- path: /socket.io
pathType: Prefix
backend:
service:
name: backend
port:
number: 3001
# All other traffic (including /api/) goes to frontend nginx,
# which proxies /api/ to backend internally. This avoids double-routing.
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
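The routing split above (Socket.io straight to the backend, everything else through frontend nginx) can be smoke-tested from any machine that reaches the cluster — a hedged sketch; `$NODE_IP` is an assumed variable holding the ingress controller's address:

```shell
# /api/ is routed to frontend nginx, which proxies it to the backend.
curl -s -H 'Host: scrum.local' "http://$NODE_IP/api/health"
# The SPA catch-all should answer 200 for any client-side route.
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: scrum.local' "http://$NODE_IP/board"
```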


@@ -0,0 +1,38 @@
# apiVersion: kustomize.config.k8s.io/v1beta1
# kind: Kustomization
# resources:
# - ../../base
# - mysql-pv.yaml
# - ingress.yaml
# patches:
# - path: mysql-pvc-patch.yaml
# target:
# kind: PersistentVolumeClaim
# name: mysql-data-pvc
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
- ingress.yaml
patches:
# This patch explicitly sets storageClassName: local-path to match the live
# PVC in the cluster. Without it, the base PVC (no storageClassName = nil)
# diffs against the existing "local-path" value and kubectl apply tries to
# mutate a bound PVC, which Kubernetes forbids.
- path: mysql-pvc-patch.yaml
target:
kind: PersistentVolumeClaim
name: mysql-data-pvc
images:
- name: scrum-frontend
newName: 192.168.108.200:80/library/scrum-frontend
newTag: latest
- name: scrum-backend
newName: 192.168.108.200:80/library/scrum-backend
newTag: latest


@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-data-pvc
spec:
# Must explicitly match the storageClassName already on the live PVC.
# Without this, kubectl apply diffs nil (base has no field) vs "local-path"
# (cluster) and tries to mutate a bound PVC — which Kubernetes forbids.
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
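Whether the live PVC actually carries the class this patch pins can be checked before applying — a sketch assuming the `scrum-manager` namespace:

```shell
# Prints the bound PVC's storage class. If it is 'local-path', applying the
# base manifest without this patch would attempt a forbidden mutation of a
# bound PVC; with the patch, kubectl apply sees no diff.
kubectl -n scrum-manager get pvc mysql-data-pvc \
  -o jsonpath='{.spec.storageClassName}{"\n"}'
```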


@@ -1,4 +1,3 @@
server {
listen 80;
server_name localhost;
@@ -6,12 +5,7 @@ server {
root /usr/share/nginx/html;
index index.html;
# Serve static files
location / {
try_files $uri $uri/ /index.html;
}
# Proxy API requests to backend
# Proxy API requests to backend service
location /api/ {
proxy_pass http://backend:3001;
proxy_http_version 1.1;
@@ -19,5 +13,23 @@ server {
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 60s;
}
# Proxy Socket.io (real-time notifications)
location /socket.io/ {
proxy_pass http://backend:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 3600s;
}
# Serve static files — React SPA catch-all
location / {
try_files $uri $uri/ /index.html;
}
}

package-lock.json generated

@@ -10,7 +10,8 @@
"dependencies": {
"react": "^19.2.0",
"react-dom": "^19.2.0",
"recharts": "^3.7.0"
"recharts": "^3.7.0",
"socket.io-client": "^4.8.3"
},
"devDependencies": {
"@eslint/js": "^9.39.1",
@@ -1542,6 +1543,11 @@
"win32"
]
},
"node_modules/@socket.io/component-emitter": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/@socket.io/component-emitter/-/component-emitter-3.1.2.tgz",
"integrity": "sha512-9BCxFwvbGg/RsZK9tjXd8s4UcwR0MWeFQ1XEKIQVVvAGJyINdrqKMcTRyLoK8Rse1GjzLV9cwjWV1olXRWEXVA=="
},
"node_modules/@standard-schema/spec": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/@standard-schema/spec/-/spec-1.1.0.tgz",
@@ -2631,7 +2637,6 @@
"version": "4.4.3",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz",
"integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==",
"dev": true,
"dependencies": {
"ms": "^2.1.3"
},
@@ -2683,6 +2688,26 @@
"integrity": "sha512-9tfDXhJ4RKFNerfjdCcZfufu49vg620741MNs26a9+bhLThdB+plgMeou98CAaHu/WATj2iHOOHTp1hWtABj2A==",
"dev": true
},
"node_modules/engine.io-client": {
"version": "6.6.4",
"resolved": "https://registry.npmjs.org/engine.io-client/-/engine.io-client-6.6.4.tgz",
"integrity": "sha512-+kjUJnZGwzewFDw951CDWcwj35vMNf2fcj7xQWOctq1F2i1jkDdVvdFG9kM/BEChymCH36KgjnW0NsL58JYRxw==",
"dependencies": {
"@socket.io/component-emitter": "~3.1.0",
"debug": "~4.4.1",
"engine.io-parser": "~5.2.1",
"ws": "~8.18.3",
"xmlhttprequest-ssl": "~2.1.1"
}
},
"node_modules/engine.io-parser": {
"version": "5.2.3",
"resolved": "https://registry.npmjs.org/engine.io-parser/-/engine.io-parser-5.2.3.tgz",
"integrity": "sha512-HqD3yTBfnBxIrbnM1DoD6Pcq8NECnh8d4As1Qgh0z5Gg3jRRIqijury0CL3ghu/edArpUYiYqQiDUQBIs4np3Q==",
"engines": {
"node": ">=10.0.0"
}
},
"node_modules/entities": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/entities/-/entities-6.0.1.tgz",
@@ -3485,8 +3510,7 @@
"node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"dev": true
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="
},
"node_modules/nanoid": {
"version": "3.3.11",
@@ -3988,6 +4012,32 @@
"integrity": "sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==",
"dev": true
},
"node_modules/socket.io-client": {
"version": "4.8.3",
"resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-4.8.3.tgz",
"integrity": "sha512-uP0bpjWrjQmUt5DTHq9RuoCBdFJF10cdX9X+a368j/Ft0wmaVgxlrjvK3kjvgCODOMMOz9lcaRzxmso0bTWZ/g==",
"dependencies": {
"@socket.io/component-emitter": "~3.1.0",
"debug": "~4.4.1",
"engine.io-client": "~6.6.1",
"socket.io-parser": "~4.2.4"
},
"engines": {
"node": ">=10.0.0"
}
},
"node_modules/socket.io-parser": {
"version": "4.2.5",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.5.tgz",
"integrity": "sha512-bPMmpy/5WWKHea5Y/jYAP6k74A+hvmRCQaJuJB6I/ML5JZq/KfNieUVo/3Mh7SAqn7TyFdIo6wqYHInG1MU1bQ==",
"dependencies": {
"@socket.io/component-emitter": "~3.1.0",
"debug": "~4.4.1"
},
"engines": {
"node": ">=10.0.0"
}
},
"node_modules/source-map-js": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz",
@@ -4525,6 +4575,26 @@
"node": ">=0.10.0"
}
},
"node_modules/ws": {
"version": "8.18.3",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.18.3.tgz",
"integrity": "sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==",
"engines": {
"node": ">=10.0.0"
},
"peerDependencies": {
"bufferutil": "^4.0.1",
"utf-8-validate": ">=5.0.2"
},
"peerDependenciesMeta": {
"bufferutil": {
"optional": true
},
"utf-8-validate": {
"optional": true
}
}
},
"node_modules/xml-name-validator": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/xml-name-validator/-/xml-name-validator-5.0.0.tgz",
@@ -4540,6 +4610,14 @@
"integrity": "sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw==",
"dev": true
},
"node_modules/xmlhttprequest-ssl": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-2.1.2.tgz",
"integrity": "sha512-TEU+nJVUUnA4CYJFLvK5X9AOeH4KvDvhIfm0vV1GaQRtchnG0hgK5p8hw/xjv8cunWYCsiPCSDzObPyhEwq3KQ==",
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/yallist": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz",


@@ -14,7 +14,8 @@
"dependencies": {
"react": "^19.2.0",
"react-dom": "^19.2.0",
"recharts": "^3.7.0"
"recharts": "^3.7.0",
"socket.io-client": "^4.8.3"
},
"devDependencies": {
"@eslint/js": "^9.39.1",
@@ -36,4 +37,4 @@
"vite": "^7.3.1",
"vitest": "^4.0.18"
}
}
}


@@ -1,16 +0,0 @@
async function handleRequest(req, res) {
console.log("Handle called");
return {
status: 200,
headers: { 'content-type': 'text/plain' },
body: 'Hello from Spin Wasm (Corrected Export)!'
};
}
export const incomingHandler = {
handle: handleRequest
};
// Keep default just in case, but incomingHandler is key
export default handleRequest;


@@ -1,137 +0,0 @@
import { Hono } from 'hono';
import { handleRequest } from '@fermyon/spin-sdk';
const app = new Hono();
// Middleware to mock Express request/response specific methods if needed
// Or we just rewrite routes to use Hono context.
// Since rewriting all routes is heavy, let's try to mount simple wrappers
// or just import the router logic if we refactor routes.
// Given the implementation plan said "Re-implement routing logic",
// and the routes are currently Express routers, we probably need to wrap them
// or quickly rewrite them to Hono.
// Strategy: Import the routes and mount them.
// BUT Express routers won't work in Hono.
// We must rewrite the route definitions in this file or transformed files.
// For "quick deployment", I will inline the mounting of existing logic where possible,
// using the db_spin adapter.
import pool from './db_spin.js';
import bcrypt from 'bcryptjs';
// import { randomUUID } from 'crypto'; // Use global crypto
const randomUUID = () => crypto.randomUUID();
// --- AUTH ROUTES ---
app.post('/api/auth/login', async (c) => {
try {
const { email, password } = await c.req.json();
if (!email || !password) return c.json({ error: 'Email and password required' }, 400);
const [rows] = await pool.query('SELECT * FROM users WHERE email = ?', [email]);
if (rows.length === 0) return c.json({ error: 'Invalid email or password' }, 401);
const user = rows[0];
const valid = await bcrypt.compare(password, user.password_hash);
if (!valid) return c.json({ error: 'Invalid email or password' }, 401);
return c.json({
id: user.id, name: user.name, role: user.role, email: user.email,
color: user.color, avatar: user.avatar, dept: user.dept,
});
} catch (e) {
console.error(e);
return c.json({ error: 'Internal server error' }, 500);
}
});
app.post('/api/auth/register', async (c) => {
try {
const { name, email, password, role, dept } = await c.req.json();
if (!name || !email || !password) return c.json({ error: 'Required fields missing' }, 400);
const [existing] = await pool.query('SELECT id FROM users WHERE email = ?', [email]);
if (existing.length > 0) return c.json({ error: 'Email already registered' }, 409);
const id = randomUUID();
const password_hash = await bcrypt.hash(password, 10);
// ... (simplified avatar logic)
const avatar = name.substring(0, 2).toUpperCase();
const color = '#818cf8';
await pool.query(
'INSERT INTO users (id, name, role, email, password_hash, color, avatar, dept) VALUES (?, ?, ?, ?, ?, ?, ?, ?)',
[id, name, role || 'employee', email, password_hash, color, avatar, dept || '']
);
return c.json({ id, name, role: role || 'employee', email, color, avatar, dept: dept || '' }, 201);
} catch (e) {
console.error(e);
return c.json({ error: 'Internal server error' }, 500);
}
});
// --- TASKS ROUTES (Simplified for Wasm demo) ---
async function getFullTask(taskId) {
const [taskRows] = await pool.query('SELECT * FROM tasks WHERE id = ?', [taskId]);
if (taskRows.length === 0) return null;
const task = taskRows[0];
// For brevity, not fetching sub-resources in this quick conversion,
// but in full prod we would.
// ... complete implementation would replicate existing logic ...
return task;
}
app.get('/api/tasks', async (c) => {
try {
const [rows] = await pool.query('SELECT * FROM tasks ORDER BY created_at DESC');
// Simplify for now: Just return tasks
return c.json(rows);
} catch (e) {
console.error(e);
return c.json({ error: 'Internal server error' }, 500);
}
});
app.post('/api/tasks', async (c) => {
try {
const { title, description, status, priority, assignee, reporter, dueDate } = await c.req.json();
const id = randomUUID();
await pool.query(
'INSERT INTO tasks (id, title, description, status, priority, assignee_id, reporter_id, due_date) VALUES (?, ?, ?, ?, ?, ?, ?, ?)',
[id, title, description || '', status || 'todo', priority || 'medium', assignee || null, reporter || null, dueDate || null]
);
return c.json({ id, title, status }, 201);
} catch (e) {
console.error(e);
return c.json({ error: 'Internal server error' }, 500);
}
});
// Health check
app.get('/api/health', (c) => c.json({ status: 'ok', engine: 'spin-wasm' }));
// Export the Spin handler.
// Hono exposes `app.fetch(request)` using the standard Request/Response
// types, and recent Spin JS SDKs hand the exported handler a standard
// Request, so no adapter layer is needed: the handler is a pass-through.
export const spinHandler = async (req) => {
return app.fetch(req);
};
// The Spin JS toolchain resolves the handler from the default export
// (or a named `handleRequest`), so export the same pass-through as default.
export default async function (req) {
return app.fetch(req);
}
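The pass-through works because it relies only on the standard `Request`/`Response` types. It can be exercised without the Spin runtime at all; in this sketch the `app` object is a hypothetical stand-in for the Hono instance, exposing only the `fetch(Request)` shape the handler depends on:

```javascript
// Stand-in for the Hono app: anything exposing fetch(Request) -> Response
// works, since Spin hands the handler a standard Request.
const app = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === '/api/health') {
      return new Response(JSON.stringify({ status: 'ok', engine: 'spin-wasm' }), {
        status: 200,
        headers: { 'content-type': 'application/json' },
      });
    }
    return new Response('Not Found', { status: 404 });
  },
};

// The Spin-style handler is just a pass-through to app.fetch.
const handleRequest = (request) => app.fetch(request);

// Exercise it with a plain Request, exactly as the runtime would.
handleRequest(new Request('http://localhost/api/health'))
  .then(async (res) => console.log(res.status, await res.json()));
```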


@@ -1,17 +0,0 @@
import { build } from 'esbuild';
import { SpinEsbuildPlugin } from "@spinframework/build-tools/plugins/esbuild/index.js";
const spinPlugin = await SpinEsbuildPlugin();
await build({
entryPoints: ['./app_spin.js'],
outfile: './dist/spin.js',
bundle: true,
format: 'esm',
platform: 'node',
sourcemap: false,
minify: false,
plugins: [spinPlugin],
target: 'es2020',
external: ['fermyon:*', 'spin:*'],
});


@@ -98,6 +98,20 @@ export async function initDB() {
)
`);
await conn.query(`
CREATE TABLE IF NOT EXISTS notifications (
id VARCHAR(36) PRIMARY KEY,
user_id VARCHAR(36) NOT NULL,
type VARCHAR(50) NOT NULL,
title VARCHAR(255) NOT NULL,
message TEXT,
link VARCHAR(255),
is_read BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
)
`);
console.log('✅ Database tables initialized');
} finally {
conn.release();


@@ -1,94 +0,0 @@
import { Mysql } from '@fermyon/spin-sdk';
const getEnv = (key, def) => {
try {
return (typeof process !== 'undefined' && process.env && process.env[key]) || def;
} catch {
return def;
}
};
const DB_URL = `mysql://${getEnv('DB_USER', 'root')}:${getEnv('DB_PASSWORD', 'scrumpass')}@${getEnv('DB_HOST', 'localhost')}:${getEnv('DB_PORT', '3306')}/${getEnv('DB_NAME', 'scrum_manager')}`;
function rowToObject(row, columns) {
const obj = {};
columns.forEach((col, index) => {
obj[col.name] = row[index];
});
return obj;
}
class SpinConnection {
constructor(conn) {
this.conn = conn;
}
async query(sql, params = []) {
console.log('SpinDB Query:', sql, params);
try {
const result = this.conn.query(sql, params);
const rows = result.rows.map(r => rowToObject(r, result.columns));
const fields = result.columns.map(c => ({ name: c.name }));
if (sql.trim().toUpperCase().startsWith('INSERT') || sql.trim().toUpperCase().startsWith('UPDATE') || sql.trim().toUpperCase().startsWith('DELETE')) {
return [{ affectedRows: result.rowsAffected || 0, insertId: result.lastInsertId || 0 }, fields];
}
return [rows, fields];
} catch (e) {
console.error('SpinDB Error:', e);
throw e;
}
}
async beginTransaction() {
try {
this.conn.query('START TRANSACTION', []);
} catch (e) {
console.warn('Transaction start failed:', e.message);
}
}
async commit() {
try { this.conn.query('COMMIT', []); } catch (e) { }
}
async rollback() {
try { this.conn.query('ROLLBACK', []); } catch (e) { }
}
release() { }
}
export const initDB = async () => {
console.log('Spin DB adapter ready.');
};
// Lazy initialization to avoid Wizer issues
let poolInstance = null;
function getPool() {
if (!poolInstance) {
poolInstance = {
async getConnection() {
const conn = Mysql.open(DB_URL);
return new SpinConnection(conn);
},
async query(sql, params) {
const conn = await this.getConnection();
const result = await conn.query(sql, params);
return result;
},
// Naive quoting only, not a safe SQL escape; prefer parameterized queries.
escape: (val) => `'${val}'`,
end: () => { }
};
}
return poolInstance;
}
export default {
query: (sql, params) => getPool().query(sql, params),
getConnection: () => getPool().getConnection(),
end: () => getPool().end(),
initDB
};
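The `rowToObject` helper above is what makes Spin's positional result rows line up with the keyed objects that mysql2 callers expect; its behaviour can be checked in isolation (the column names and values here are made-up samples):

```javascript
// Mirror of the adapter's rowToObject: Spin returns each row as a
// positional array plus a parallel columns array; mysql2-style callers
// expect objects keyed by column name.
function rowToObject(row, columns) {
  const obj = {};
  columns.forEach((col, index) => {
    obj[col.name] = row[index];
  });
  return obj;
}

const columns = [{ name: 'id' }, { name: 'name' }, { name: 'email' }];
const row = ['u-1', 'Ada', 'ada@example.com'];
console.log(rowToObject(row, columns));
// → { id: 'u-1', name: 'Ada', email: 'ada@example.com' }
```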


@@ -4,31 +4,57 @@ import { initDB } from './db.js';
import authRoutes from './routes/auth.js';
import taskRoutes from './routes/tasks.js';
import exportRoutes from './routes/export.js';
import notificationRoutes from './routes/notifications.js';
import { createServer } from 'http';
import { Server } from 'socket.io';
const app = express();
const httpServer = createServer(app);
const io = new Server(httpServer, {
cors: {
origin: "*",
methods: ["GET", "POST"]
}
});
const PORT = process.env.PORT || 3001;
app.use(cors());
app.use(express.json());
// Socket.io connection handling
io.on('connection', (socket) => {
socket.on('join', (userId) => {
socket.join(userId);
console.log(`User ${userId} joined notification room`);
});
});
// Middleware to attach io to req
app.use((req, res, next) => {
req.io = io;
next();
});
// Routes
app.use('/api/auth', authRoutes);
app.use('/api/tasks', taskRoutes);
app.use('/api/export', exportRoutes);
app.use('/api/notifications', notificationRoutes);
// Health check
app.get('/api/health', (_req, res) => {
res.json({ status: 'ok', timestamp: new Date().toISOString() });
});
// Initialize DB and start server
async function start() {
try {
await initDB();
if (process.env.NODE_ENV !== 'test') {
app.listen(PORT, () => {
console.log(`🚀 Backend server running on port ${PORT}`);
httpServer.listen(PORT, () => {
console.log(`🚀 Backend server running on port ${PORT} with Socket.io`);
});
}
} catch (err) {
@@ -41,4 +67,4 @@ if (process.env.NODE_ENV !== 'test') {
start();
}
export { app, start };
export { app, start, io };
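The `req.io` middleware pattern above can be exercised with a stub, since nothing about it is Socket.io-specific; the `io` object here is a hypothetical stand-in that records what the `to(room).emit(...)` chain would send:

```javascript
// Stub io exposing the to(room).emit(event, payload) shape the routes use.
const sent = [];
const io = {
  to: (room) => ({
    emit: (event, payload) => sent.push({ room, event, payload }),
  }),
};

// Express-style middleware: attach io to each request object.
const attachIo = (req, _res, next) => { req.io = io; next(); };

// Simulate a request passing through the middleware, then a route
// emitting a notification into the assignee's per-user room.
const req = {};
attachIo(req, {}, () => {});
req.io.to('user-42').emit('notification', { title: 'New Task Assigned' });
console.log(sent[0].room, sent[0].event);
```

Because each client joins a room named after its user id (the `socket.join(userId)` above), emitting to `to(userId)` reaches only that user's open connections.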


@@ -1,10 +0,0 @@
{
"packages": {
"@fermyon/spin-sdk": {
"witPath": "../../bin/wit",
"world": "spin-imports"
}
},
"project": {},
"version": 1
}

server/package-lock.json (generated, 3139 lines): diff suppressed because it is too large.


@@ -5,21 +5,16 @@
"type": "module",
"scripts": {
"start": "node index.js",
"test": "vitest",
"build:spin": "node build.mjs && node node_modules/@fermyon/spin-sdk/bin/j2w.mjs -i dist/spin.js -o dist/main.wasm --trigger-type spin3-http"
"test": "vitest"
},
"dependencies": {
"@fermyon/spin-sdk": "^2.2.0",
"@spinframework/wasi-http-proxy": "^1.0.0",
"bcryptjs": "^3.0.2",
"cors": "^2.8.5",
"express": "^5.1.0",
"hono": "^4.6.14",
"mysql2": "^3.14.1"
"mysql2": "^3.14.1",
"socket.io": "^4.8.3"
},
"devDependencies": {
"@spinframework/build-tools": "^1.0.7",
"esbuild": "^0.24.2",
"supertest": "^7.2.2",
"vitest": "^4.0.18"
}


@@ -1,23 +0,0 @@
// Polyfill for crypto in Wizer environment
if (!globalThis.crypto) {
globalThis.crypto = {
getRandomValues: (buffer) => {
// Check if buffer is valid
if (!buffer || typeof buffer.length !== 'number') {
throw new Error("crypto.getRandomValues: invalid buffer");
}
// Fill with Math.random output: NOT cryptographically secure, acceptable only for build-time/demo use
for (let i = 0; i < buffer.length; i++) {
buffer[i] = Math.floor(Math.random() * 256);
}
return buffer;
},
randomUUID: () => {
return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
return v.toString(16);
});
}
};
console.log("Polyfilled globalThis.crypto for Wizer/Spin");
}
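The fallback `randomUUID` above is the classic Math.random-based v4 formatter. A quick self-check that its output actually has the v4 shape (version nibble fixed to `4`, variant nibble in `8`–`b`):

```javascript
// Same fallback formatter as the polyfill.
function randomUUID() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;
    const v = c === 'x' ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}

// Version nibble must be 4, variant nibble must be 8, 9, a, or b.
const V4 = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/;
for (let i = 0; i < 1000; i++) {
  if (!V4.test(randomUUID())) throw new Error('not a v4 UUID');
}
console.log('all generated IDs match the v4 pattern');
```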


@@ -0,0 +1,51 @@
import { Router } from 'express';
import pool from '../db.js';
import { randomUUID } from 'crypto';
const router = Router();
// GET /api/notifications/:userId
router.get('/:userId', async (req, res) => {
try {
const [rows] = await pool.query(
'SELECT * FROM notifications WHERE user_id = ? ORDER BY created_at DESC LIMIT 50',
[req.params.userId]
);
res.json(rows);
} catch (err) {
console.error('Fetch notifications error:', err);
res.status(500).json({ error: 'Internal server error' });
}
});
// PUT /api/notifications/:id/read
router.put('/:id/read', async (req, res) => {
try {
await pool.query('UPDATE notifications SET is_read = TRUE WHERE id = ?', [req.params.id]);
res.json({ success: true });
} catch (err) {
console.error('Mark notification read error:', err);
res.status(500).json({ error: 'Internal server error' });
}
});
// Helper: Create notification and emit if possible
export async function createNotification(req, { userId, type, title, message, link }) {
try {
const id = randomUUID();
await pool.query(
'INSERT INTO notifications (id, user_id, type, title, message, link) VALUES (?, ?, ?, ?, ?, ?)',
[id, userId, type, title, message, link]
);
if (req.io) {
req.io.to(userId).emit('notification', {
id, user_id: userId, type, title, message, link, is_read: false, created_at: new Date()
});
}
} catch (err) {
console.error('Create notification error:', err);
}
}
export default router;


@@ -1,6 +1,7 @@
import { Router } from 'express';
import pool from '../db.js';
import { randomUUID } from 'crypto';
import { createNotification } from './notifications.js';
const router = Router();
@@ -88,6 +89,17 @@ router.post('/', async (req, res) => {
[actId, id, '📝 Task created']);
await conn.commit();
if (assignee) {
await createNotification(req, {
userId: assignee,
type: 'assignment',
title: 'New Task Assigned',
message: `You have been assigned to task: ${title}`,
link: `/tasks?id=${id}`
});
}
const task = await getFullTask(id);
res.status(201).json(task);
} catch (err) {
@@ -178,6 +190,24 @@ router.post('/:id/comments', async (req, res) => {
const timestamp = new Date();
await pool.query('INSERT INTO comments (id, task_id, user_id, text, timestamp) VALUES (?, ?, ?, ?, ?)',
[id, req.params.id, userId, text, timestamp]);
// Mention detection: @[Name](userId)
const mentions = text.match(/@\[([^\]]+)\]\(([^)]+)\)/g);
if (mentions) {
const mentionedUserIds = [...new Set(mentions.map(m => m.match(/\(([^)]+)\)/)[1]))];
for (const mId of mentionedUserIds) {
if (mId !== userId) {
await createNotification(req, {
userId: mId,
type: 'mention',
title: 'New Mention',
message: `You were mentioned in a comment on task ${req.params.id}`,
link: `/tasks?id=${req.params.id}`
});
}
}
}
res.status(201).json({ id, userId, text, timestamp: timestamp.toISOString() });
} catch (err) {
console.error('Add comment error:', err);


@@ -1,34 +0,0 @@
spin_manifest_version = 2
[application]
name = "scrum-manager"
version = "1.0.0"
authors = ["Antigravity <antigravity@example.com>"]
description = "Scrum Manager application running on Fermyon Spin"
[[trigger.http]]
route = "/api/..."
component = "scrum-manager-api"
[[trigger.http]]
route = "/..."
component = "scrum-manager-ui"
[component.scrum-manager-api]
source = "server/dist/main.wasm"
allowed_outbound_hosts = ["mysql://localhost:3306", "https://*:*"]
[component.scrum-manager-api.build]
command = "npm run build:spin"
workdir = "server"
[component.scrum-manager-api.variables]
db_host = "localhost"
db_port = "3306"
db_user = "root"
db_password = "scrumpass"
db_name = "scrum_manager"
[component.scrum-manager-ui]
source = { url = "https://github.com/fermyon/spin-fileserver/releases/download/v0.1.0/spin_static_fs.wasm", digest = "sha256:96c76d9af86420b39eb6cd7be5550e3cb5d4cc4de572ce0fd1f6a29471536cb4" }
files = [{ source = "dist", destination = "/" }]
[component.scrum-manager-ui.build]
command = "npm run build"


@@ -12,6 +12,7 @@ import { TaskDrawer, AddTaskModal } from './TaskDrawer';
import { DashboardPage } from './Dashboard';
import { TeamTasksPage, MembersPage } from './Pages';
import { ReportsPage } from './Reports';
import { NotificationProvider } from './NotificationContext';
import './index.css';
const PAGE_TITLES: Record<string, string> = {
@@ -24,7 +25,10 @@ const VIEW_PAGES = ['calendar', 'kanban', 'list'];
export default function App() {
const now = new Date();
const [currentUser, setCurrentUser] = useState<User | null>(null);
const [currentUser, setCurrentUser] = useState<User | null>(() => {
try { const s = localStorage.getItem('currentUser'); return s ? JSON.parse(s) : null; }
catch { return null; }
});
const [users, setUsers] = useState<User[]>([]);
const [tasks, setTasks] = useState<Task[]>([]);
const [activePage, setActivePage] = useState('calendar');
@@ -57,7 +61,7 @@ export default function App() {
.finally(() => setLoading(false));
}, [currentUser]);
if (!currentUser) return <LoginPage onLogin={u => { setCurrentUser(u); setActivePage('calendar'); setActiveView('calendar'); }} />;
if (!currentUser) return <LoginPage onLogin={u => { localStorage.setItem('currentUser', JSON.stringify(u)); setCurrentUser(u); setActivePage('calendar'); setActiveView('calendar'); }} />;
const handleNavigate = (page: string) => {
setActivePage(page);
@@ -242,56 +246,58 @@ export default function App() {
}
return (
<div className="app-shell">
<TopNavbar title={pageTitle} filterUser={filterUser} onFilterChange={setFilterUser}
searchQuery={searchQuery} onSearch={setSearchQuery} onNewTask={handleNewTask}
onOpenSidebar={() => setSidebarOpen(true)} users={users} />
<div className="app-body">
<Sidebar currentUser={currentUser} activePage={activePage} onNavigate={handleNavigate}
onSignOut={() => { setCurrentUser(null); setActivePage('calendar'); setActiveView('calendar'); setSidebarOpen(false); }}
isOpen={sidebarOpen} onClose={() => setSidebarOpen(false)} users={users} />
<div className="main-content">
{displayPage === 'calendar' && (
<CalendarView tasks={tasks} currentUser={currentUser} calMonth={calMonth} calView={calView}
onMonthChange={setCalMonth} onViewChange={setCalView} onTaskClick={handleTaskClick}
onDayClick={handleDayClick} filterUser={filterUser} searchQuery={searchQuery} users={users} />
)}
{displayPage === 'kanban' && (
<KanbanBoard tasks={tasks} currentUser={currentUser} onTaskClick={handleTaskClick}
onAddTask={handleKanbanAdd} onMoveTask={handleMoveTask} filterUser={filterUser} searchQuery={searchQuery} users={users} />
)}
{displayPage === 'list' && (
<ListView tasks={tasks} currentUser={currentUser} onTaskClick={handleTaskClick}
filterUser={filterUser} searchQuery={searchQuery} onToggleDone={handleToggleDone} users={users} />
)}
{displayPage === 'dashboard' && <DashboardPage tasks={tasks} currentUser={currentUser} users={users} />}
{displayPage === 'mytasks' && (
<ListView tasks={filteredMyTasks} currentUser={currentUser} onTaskClick={handleTaskClick}
filterUser={null} searchQuery={searchQuery} onToggleDone={handleToggleDone} users={users} />
)}
{displayPage === 'teamtasks' && <TeamTasksPage tasks={tasks} currentUser={currentUser} users={users} />}
{displayPage === 'reports' && <ReportsPage tasks={tasks} users={users} currentUser={currentUser} />}
{displayPage === 'members' && <MembersPage tasks={tasks} users={users} currentUser={currentUser} onAddUser={handleAddUser} onDeleteUser={handleDeleteUser} />}
</div>
</div>
{VIEW_PAGES.includes(activePage) && (
<BottomToggleBar activeView={activeView} onViewChange={handleViewChange} />
)}
{activeTask && <TaskDrawer task={activeTask} currentUser={currentUser} onClose={() => setActiveTask(null)} onUpdate={handleUpdateTask} onAddDependency={handleAddDep} onToggleDependency={handleToggleDep} onRemoveDependency={handleRemoveDep} users={users} />}
{showAddModal && <AddTaskModal onClose={() => setShowAddModal(false)} onAdd={handleAddTask} defaultDate={addModalDefaults.date} defaultStatus={addModalDefaults.status} users={users} currentUser={currentUser} />}
{quickAddDay && (
<div style={{ position: 'fixed', inset: 0, zIndex: 199 }} onClick={() => setQuickAddDay(null)}>
<div style={{ position: 'absolute', top: Math.min(quickAddDay.rect.top, window.innerHeight - 280), left: Math.min(quickAddDay.rect.left, window.innerWidth - 340) }}
onClick={e => e.stopPropagation()}>
<QuickAddPanel date={quickAddDay.date} onAdd={handleQuickAdd}
onOpenFull={() => { setAddModalDefaults({ date: quickAddDay.date }); setShowAddModal(true); setQuickAddDay(null); }}
onClose={() => setQuickAddDay(null)} users={users} />
<NotificationProvider userId={currentUser.id}>
<div className="app-shell">
<TopNavbar title={pageTitle} filterUser={filterUser} onFilterChange={setFilterUser}
searchQuery={searchQuery} onSearch={setSearchQuery} onNewTask={handleNewTask}
onOpenSidebar={() => setSidebarOpen(true)} users={users} />
<div className="app-body">
<Sidebar currentUser={currentUser} activePage={activePage} onNavigate={handleNavigate}
onSignOut={() => { localStorage.removeItem('currentUser'); setCurrentUser(null); setActivePage('calendar'); setActiveView('calendar'); setSidebarOpen(false); }}
isOpen={sidebarOpen} onClose={() => setSidebarOpen(false)} users={users} />
<div className="main-content">
{displayPage === 'calendar' && (
<CalendarView tasks={tasks} currentUser={currentUser} calMonth={calMonth} calView={calView}
onMonthChange={setCalMonth} onViewChange={setCalView} onTaskClick={handleTaskClick}
onDayClick={handleDayClick} filterUser={filterUser} searchQuery={searchQuery} users={users} />
)}
{displayPage === 'kanban' && (
<KanbanBoard tasks={tasks} currentUser={currentUser} onTaskClick={handleTaskClick}
onAddTask={handleKanbanAdd} onMoveTask={handleMoveTask} filterUser={filterUser} searchQuery={searchQuery} users={users} />
)}
{displayPage === 'list' && (
<ListView tasks={tasks} currentUser={currentUser} onTaskClick={handleTaskClick}
filterUser={filterUser} searchQuery={searchQuery} onToggleDone={handleToggleDone} users={users} />
)}
{displayPage === 'dashboard' && <DashboardPage tasks={tasks} currentUser={currentUser} users={users} />}
{displayPage === 'mytasks' && (
<ListView tasks={filteredMyTasks} currentUser={currentUser} onTaskClick={handleTaskClick}
filterUser={null} searchQuery={searchQuery} onToggleDone={handleToggleDone} users={users} />
)}
{displayPage === 'teamtasks' && <TeamTasksPage tasks={tasks} currentUser={currentUser} users={users} />}
{displayPage === 'reports' && <ReportsPage tasks={tasks} users={users} currentUser={currentUser} />}
{displayPage === 'members' && <MembersPage tasks={tasks} users={users} currentUser={currentUser} onAddUser={handleAddUser} onDeleteUser={handleDeleteUser} />}
</div>
</div>
)}
</div>
{VIEW_PAGES.includes(activePage) && (
<BottomToggleBar activeView={activeView} onViewChange={handleViewChange} />
)}
{activeTask && <TaskDrawer task={activeTask} currentUser={currentUser} onClose={() => setActiveTask(null)} onUpdate={handleUpdateTask} onAddDependency={handleAddDep} onToggleDependency={handleToggleDep} onRemoveDependency={handleRemoveDep} users={users} />}
{showAddModal && <AddTaskModal onClose={() => setShowAddModal(false)} onAdd={handleAddTask} defaultDate={addModalDefaults.date} defaultStatus={addModalDefaults.status} users={users} currentUser={currentUser} />}
{quickAddDay && (
<div style={{ position: 'fixed', inset: 0, zIndex: 199 }} onClick={() => setQuickAddDay(null)}>
<div style={{ position: 'absolute', top: Math.min(quickAddDay.rect.top, window.innerHeight - 280), left: Math.min(quickAddDay.rect.left, window.innerWidth - 340) }}
onClick={e => e.stopPropagation()}>
<QuickAddPanel date={quickAddDay.date} onAdd={handleQuickAdd}
onOpenFull={() => { setAddModalDefaults({ date: quickAddDay.date }); setShowAddModal(true); setQuickAddDay(null); }}
onClose={() => setQuickAddDay(null)} users={users} />
</div>
</div>
)}
</div>
</NotificationProvider>
);
}


@@ -1,4 +1,5 @@
import type { User } from './data';
import { NotificationBell } from './components/NotificationBell';
interface TopNavbarProps {
title: string;
@@ -31,7 +32,7 @@ export function TopNavbar({ title, filterUser, onFilterChange, searchQuery, onSe
</div>
))}
</div>
<button className="notif-btn">🔔<span className="notif-badge">3</span></button>
<NotificationBell />
<button className="new-task-btn" onClick={onNewTask}>+ New Task</button>
</div>
</div>


@@ -0,0 +1,80 @@
import React, { createContext, useContext, useEffect, useState } from 'react';
import { io } from 'socket.io-client';
export interface Notification {
id: string;
type: 'assignment' | 'mention' | 'update';
title: string;
message: string;
link?: string;
is_read: boolean;
created_at: string;
}
interface NotificationContextType {
notifications: Notification[];
unreadCount: number;
markAsRead: (id: string) => Promise<void>;
}
const NotificationContext = createContext<NotificationContextType | undefined>(undefined);
export const NotificationProvider: React.FC<{ children: React.ReactNode, userId: string }> = ({ children, userId }) => {
const [notifications, setNotifications] = useState<Notification[]>([]);
useEffect(() => {
if (!userId) return;
const newSocket = io(window.location.protocol + '//' + window.location.hostname + ':3001');
newSocket.emit('join', userId);
newSocket.on('notification', (notif: Notification) => {
setNotifications(prev => [notif, ...prev]);
// Optional: Show browser toast here
});
fetchNotifications();
return () => {
newSocket.close();
};
}, [userId]);
const fetchNotifications = async () => {
try {
const res = await fetch(`/api/notifications/${userId}`);
if (res.ok) {
const data = await res.json();
setNotifications(data);
}
} catch (err) {
console.error('Fetch notifications failed', err);
}
};
const markAsRead = async (id: string) => {
try {
const res = await fetch(`/api/notifications/${id}/read`, { method: 'PUT' });
if (res.ok) {
setNotifications(prev => prev.map(n => n.id === id ? { ...n, is_read: true } : n));
}
} catch (err) {
console.error('Mark read failed', err);
}
};
const unreadCount = notifications.filter(n => !n.is_read).length;
return (
<NotificationContext.Provider value={{ notifications, unreadCount, markAsRead }}>
{children}
</NotificationContext.Provider>
);
};
export const useNotifications = () => {
const context = useContext(NotificationContext);
if (!context) throw new Error('useNotifications must be used within NotificationProvider');
return context;
};


@@ -0,0 +1,65 @@
import React, { useState } from 'react';
import { useNotifications } from '../NotificationContext';
export const NotificationBell: React.FC = () => {
const { notifications, unreadCount, markAsRead } = useNotifications();
const [isOpen, setIsOpen] = useState(false);
return (
<div className="relative">
<button
className="p-2 rounded-full hover:bg-white/10 relative transition-colors"
onClick={() => setIsOpen(!isOpen)}
>
<span className="text-xl">🔔</span>
{unreadCount > 0 && (
<span className="absolute top-1 right-1 bg-red-500 text-white text-[10px] font-bold px-1.5 py-0.5 rounded-full min-w-[18px] text-center border-2 border-[#0f172a]">
{unreadCount}
</span>
)}
</button>
{isOpen && (
<div className="absolute right-0 mt-2 w-80 bg-[#1e293b] border border-white/10 rounded-xl shadow-2xl z-50 overflow-hidden backdrop-blur-md">
<div className="p-4 border-b border-white/10 flex justify-between items-center">
<h3 className="font-semibold text-white">Notifications</h3>
<span className="text-xs text-slate-400">{unreadCount} unread</span>
</div>
<div className="max-h-[400px] overflow-y-auto">
{notifications.length === 0 ? (
<div className="p-8 text-center text-slate-500 italic">
No notifications yet
</div>
) : (
notifications.map(n => (
<div
key={n.id}
className={`p-4 border-b border-white/5 hover:bg-white/5 cursor-pointer transition-colors ${!n.is_read ? 'bg-blue-500/5' : ''}`}
onClick={() => markAsRead(n.id)}
>
<div className="flex gap-3">
<div className="text-lg">
{n.type === 'assignment' ? '📋' : n.type === 'mention' ? '💬' : '🔔'}
</div>
<div className="flex-1 min-w-0">
<p className={`text-sm ${!n.is_read ? 'text-white font-medium' : 'text-slate-300'}`}>
{n.title}
</p>
<p className="text-xs text-slate-400 mt-1 truncate">
{n.message}
</p>
<p className="text-[10px] text-slate-500 mt-1">
{new Date(n.created_at).toLocaleString()}
</p>
</div>
{!n.is_read && <div className="w-2 h-2 bg-blue-500 rounded-full mt-2" />}
</div>
</div>
))
)}
</div>
</div>
)}
</div>
);
};


@@ -1,5 +0,0 @@
node_modules
dist
target
.spin/
build/


@@ -1,14 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"type": "starlingmonkey",
"request": "launch",
"name": "Debug StarlingMonkey component",
"component": "${workspaceFolder}/dist/temp.wasm",
"program": "${workspaceFolder}/src/index.js",
"stopOnEntry": false,
"trace": true
}
]
}


@@ -1,12 +0,0 @@
{
"starlingmonkey": {
"componentRuntime": {
"executable": "spin",
"options": [
"up",
"-f",
"${workspaceFolder}"
]
}
}
}


@@ -1,39 +0,0 @@
# `http-js` Template
A starter template for building JavaScript HTTP applications with Spin.
## Getting Started
Build the App
```bash
spin build
```
## Run the App
```bash
spin up
```
## Using Spin Interfaces
To use additional Spin interfaces, install the corresponding packages:
| Interface | Package |
|---------------|---------------------------------|
| Key-Value | `@spinframework/spin-kv` |
| LLM | `@spinframework/spin-llm` |
| MQTT | `@spinframework/spin-mqtt` |
| MySQL | `@spinframework/spin-mysql` |
| PostgreSQL | `@spinframework/spin-postgres` |
| Redis | `@spinframework/spin-redis` |
| SQLite | `@spinframework/spin-sqlite` |
| Variables | `@spinframework/spin-variables` |
## Using the StarlingMonkey Debugger for VS Code
1. First install the [StarlingMonkey Debugger](https://marketplace.visualstudio.com/items?itemName=BytecodeAlliance.starlingmonkey-debugger) extension.
2. Build the component using the debug command `npm run build:debug`.
3. Uncomment `tcp://127.0.0.1:*` in the `allowed_outbound_hosts` field in the `spin.toml`.
4. Start the debugger in VS Code which should start Spin and attach the debugger. The debugger needs to be restarted for each http call.


@@ -1,42 +0,0 @@
// build.mjs
import { build } from 'esbuild';
import path from 'path';
import { SpinEsbuildPlugin } from "@spinframework/build-tools/plugins/esbuild/index.js";
import fs from 'fs';
const spinPlugin = await SpinEsbuildPlugin();
// plugin to handle vendor files in node_modules that may not be bundled.
// Instead of generating a real source map for these files, it appends a minimal
// inline source map pointing to an empty source. This avoids errors and ensures
// source maps exist even for unbundled vendor code.
let SourceMapPlugin = {
name: 'excludeVendorFromSourceMap',
setup(build) {
build.onLoad({ filter: /node_modules/ }, args => {
return {
contents: fs.readFileSync(args.path, 'utf8')
+ '\n//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJzb3VyY2VzIjpbIiJdLCJtYXBwaW5ncyI6IkEifQ==',
loader: 'default',
}
})
},
}
await build({
entryPoints: ['./src/index.js'],
outfile: './build/bundle.js',
bundle: true,
format: 'esm',
platform: 'node',
sourcemap: true,
minify: false,
plugins: [spinPlugin, SourceMapPlugin],
logLevel: 'error',
loader: {
'.ts': 'ts',
'.tsx': 'tsx',
},
resolveExtensions: ['.ts', '.tsx', '.js'],
sourceRoot: path.resolve(process.cwd(), 'src'),
});
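The data URL that `excludeVendorFromSourceMap` appends is just a base64-encoded stub source map; decoding it shows the minimal structure that keeps tooling happy without mapping back to real vendor sources:

```javascript
// Decode the stub source map the esbuild plugin appends to vendor files.
const b64 = 'eyJ2ZXJzaW9uIjozLCJzb3VyY2VzIjpbIiJdLCJtYXBwaW5ncyI6IkEifQ==';
const map = JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
console.log(map); // → { version: 3, sources: [ '' ], mappings: 'A' }
```

An empty `sources` entry plus the single-segment `mappings: "A"` is enough to count as a valid v3 map, so consumers stop complaining about missing maps for unbundled vendor code.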

File diff suppressed because it is too large.


@@ -1,23 +0,0 @@
{
"name": "temp",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"build": "node build.mjs && mkdirp dist && j2w -i build/bundle.js --initLocation http://temp.localhost -o dist/temp.wasm",
"build:debug": "node build.mjs && mkdirp dist && j2w -d -i build/bundle.js --initLocation http://temp.localhost -o dist/temp.wasm",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"mkdirp": "^3.0.1",
"esbuild": "^0.25.8"
},
"dependencies": {
"itty-router": "^5.0.18",
"@spinframework/build-tools": "^1.0.4",
"@spinframework/wasi-http-proxy": "^1.0.0"
}
}


@@ -1,21 +0,0 @@
spin_manifest_version = 2
[application]
authors = ["tusuii <tusuii764@gmail.com>"]
description = ""
name = "temp"
version = "0.1.0"
[[trigger.http]]
route = "/..."
component = "temp"
[component.temp]
source = "dist/temp.wasm"
exclude_files = ["**/node_modules"]
allowed_outbound_hosts = [
# "tcp://127.0.0.1:*", # Uncomment this line while using the StarlingMonkey Debugger
]
[component.temp.build]
command = ["npm install", "npm run build"]
watch = ["src/**/*.js"]


@@ -1,17 +0,0 @@
// For AutoRouter documentation refer to https://itty.dev/itty-router/routers/autorouter
import { AutoRouter } from 'itty-router';
let router = AutoRouter();
// Route ordering matters: the first route that matches will be used.
// Any route that does not return will be treated as middleware.
// Any unmatched route will return a 404.
router
.get('/', () => new Response('Hello, Spin!'))
.get('/hello/:name', ({ name }) => `Hello, ${name}!`);
addEventListener('fetch', (event) => {
event.respondWith(router.fetch(event.request));
});


@@ -9,7 +9,7 @@ export default defineConfig({
host: '0.0.0.0',
proxy: {
'/api': {
target: 'http://backend:3001',
target: 'http://127.0.0.1:3000',
changeOrigin: true,
},
},