Initial commit

This commit is contained in:
2026-02-04 20:47:56 +05:30
commit dafcd9777f
363 changed files with 52703 additions and 0 deletions


@@ -0,0 +1,155 @@
# Integrate Online Boutique with AlloyDB
By default the `cartservice` stores its data in an in-cluster Redis database.
Using a fully managed database service outside your GKE cluster (such as [AlloyDB](https://cloud.google.com/alloydb)) could bring more resiliency and more security.
Note that because of AlloyDB's current connectivity model, you'll need to run all of the following commands from a VM with
VPC access to the network you want to use (out of the box, this is the
default network). Cloud Shell won't work because transitive VPC peering is not supported.
## Provision an AlloyDB database and the supporting infrastructure
The following environment variables are needed for setup. Set them in a .bashrc or similar, as some of them are used by the application itself. Default values are supplied in this README, but any of them can be changed. Anything in <> needs to be replaced.
```bash
# PROJECT_ID should be set to the project ID that was created to hold the demo
PROJECT_ID=<project_id>
# Pick a region near you that also has AlloyDB available. See available regions: https://cloud.google.com/alloydb/docs/locations
REGION=<region>
USE_GKE_GCLOUD_AUTH_PLUGIN=True
ALLOYDB_NETWORK=default
ALLOYDB_SERVICE_NAME=onlineboutique-network-range
ALLOYDB_CLUSTER_NAME=onlineboutique-cluster
ALLOYDB_INSTANCE_NAME=onlineboutique-instance
# **Note:** The primary and read IPs must be set after you create the instances. The commands to set them in the shell are included below, but it's also a good idea to run them once and set the IP addresses manually in the .bashrc
ALLOYDB_PRIMARY_IP=<ip set below after instance created>
ALLOYDB_READ_IP=<ip set below after instance created>
ALLOYDB_DATABASE_NAME=carts
ALLOYDB_TABLE_NAME=cart_items
ALLOYDB_USER_GSA_NAME=alloydb-user-sa
ALLOYDB_USER_GSA_ID=${ALLOYDB_USER_GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
CARTSERVICE_KSA_NAME=cartservice
ALLOYDB_SECRET_NAME=alloydb-secret
# PGPASSWORD needs to be set in order to run the psql from the CLI easily. The value for this
# needs to be set behind the Secret mentioned above
PGPASSWORD=<password>
```
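Before running any of the `gcloud` commands below, it can help to fail fast if a required variable is still empty. Here is a minimal sketch of such a check; the values assigned at the top are illustrative placeholders, not defaults from this guide:

```bash
# Illustrative values -- replace with your own.
PROJECT_ID=my-project
REGION=us-central1
ALLOYDB_NETWORK=default
ALLOYDB_CLUSTER_NAME=onlineboutique-cluster
ALLOYDB_INSTANCE_NAME=onlineboutique-instance

# Collect the names of any required variables that are still empty.
missing=""
for v in PROJECT_ID REGION ALLOYDB_NETWORK ALLOYDB_CLUSTER_NAME ALLOYDB_INSTANCE_NAME; do
  eval "val=\${$v}"
  [ -n "$val" ] || missing="$missing $v"
done
echo "missing:${missing:-none}"
```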
To provision an AlloyDB instance, follow these instructions:
```bash
gcloud services enable alloydb.googleapis.com
gcloud services enable servicenetworking.googleapis.com
gcloud services enable secretmanager.googleapis.com
# Set our DB credentials behind the secret. Replace <password> with whatever you want
# to use as the credentials for the database. Don't use $ in the password.
echo <password> | gcloud secrets create ${ALLOYDB_SECRET_NAME} --data-file=-
# Setting up needed service connection
gcloud compute addresses create ${ALLOYDB_SERVICE_NAME} \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--description="Online Boutique Private Services" \
--network=${ALLOYDB_NETWORK}
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=${ALLOYDB_SERVICE_NAME} \
--network=${ALLOYDB_NETWORK}
gcloud alloydb clusters create ${ALLOYDB_CLUSTER_NAME} \
--region=${REGION} \
--password=${PGPASSWORD} \
--disable-automated-backup \
--network=${ALLOYDB_NETWORK}
gcloud alloydb instances create ${ALLOYDB_INSTANCE_NAME} \
--cluster=${ALLOYDB_CLUSTER_NAME} \
--region=${REGION} \
--cpu-count=4 \
--instance-type=PRIMARY
gcloud alloydb instances create ${ALLOYDB_INSTANCE_NAME}-replica \
--cluster=${ALLOYDB_CLUSTER_NAME} \
--region=${REGION} \
--cpu-count=4 \
--instance-type=READ_POOL \
--read-pool-node-count=2
# Grab and store the IP addresses of the primary and read pool instances.
# Don't forget to also persist these two values in the environment for later use.
ALLOYDB_PRIMARY_IP=$(gcloud alloydb instances list --region=${REGION} --cluster=${ALLOYDB_CLUSTER_NAME} --filter="INSTANCE_TYPE:PRIMARY" --format=flattened | sed -nE "s/ipAddress:\s*(.*)/\1/p")
ALLOYDB_READ_IP=$(gcloud alloydb instances list --region=${REGION} --cluster=${ALLOYDB_CLUSTER_NAME} --filter="INSTANCE_TYPE:READ_POOL" --format=flattened | sed -nE "s/ipAddress:\s*(.*)/\1/p")
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -c "CREATE DATABASE ${ALLOYDB_DATABASE_NAME}"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_DATABASE_NAME} -c "CREATE TABLE ${ALLOYDB_TABLE_NAME} (userId text, productId text, quantity int, PRIMARY KEY(userId, productId))"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_DATABASE_NAME} -c "CREATE INDEX cartItemsByUserId ON ${ALLOYDB_TABLE_NAME}(userId)"
```
_Note: It can take more than 20 minutes for the AlloyDB instances to be created._
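The `sed` extraction used for the IP addresses can be sanity-checked locally against a sample of `--format=flattened` output. The sample below is illustrative, not real `gcloud` output, and `[[:space:]]` is used as a portable spelling of `\s`:

```bash
# Illustrative sample of `gcloud ... --format=flattened` output.
sample="name: projects/demo/locations/us-central1/instances/onlineboutique-instance
ipAddress: 10.1.2.3
state: READY"

# Keep only the value after "ipAddress:", as in the commands above.
ip=$(printf '%s\n' "$sample" | sed -nE 's/ipAddress:[[:space:]]*(.*)/\1/p')
echo "$ip"
```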
## Grant the `cartservice`'s service account access to the AlloyDB database
**Important note:** Your GKE cluster should have [Workload Identity enabled](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable).
As a best practice, let's create a dedicated, least-privilege Google Service Account to allow the `cartservice` to communicate with the AlloyDB database and fetch the database password from Secret Manager:
```bash
gcloud iam service-accounts create ${ALLOYDB_USER_GSA_NAME} \
--display-name=${ALLOYDB_USER_GSA_NAME}
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/alloydb.client
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/secretmanager.secretAccessor
gcloud iam service-accounts add-iam-policy-binding ${ALLOYDB_USER_GSA_ID} \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/${CARTSERVICE_KSA_NAME}]" \
--role roles/iam.workloadIdentityUser
```
## Deploy Online Boutique connected to an AlloyDB database
To automate the deployment of Online Boutique integrated with AlloyDB you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute these commands:
```bash
kustomize edit add component components/alloydb
```
_**Note:** this Kustomize component will also remove the `redis-cart` `Deployment` and `Service`, which are no longer used._
This will update the `kustomize/kustomization.yaml` file which could be similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/alloydb
```
Update the current Kustomize manifest to target this AlloyDB database:
```bash
sed -i "s/PROJECT_ID_VAL/${PROJECT_ID}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_PRIMARY_IP_VAL/${ALLOYDB_PRIMARY_IP}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_USER_GSA_ID/${ALLOYDB_USER_GSA_ID}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_CARTS_DATABASE_NAME_VAL/${ALLOYDB_DATABASE_NAME}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_CARTS_TABLE_NAME_VAL/${ALLOYDB_TABLE_NAME}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_SECRET_NAME_VAL/${ALLOYDB_SECRET_NAME}/g" components/alloydb/kustomization.yaml
```
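The substitution pattern above can be exercised safely on a temporary copy before touching the real `kustomization.yaml`. This sketch uses illustrative values and a throwaway file:

```bash
# Throwaway file standing in for components/alloydb/kustomization.yaml.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
value: ALLOYDB_PRIMARY_IP_VAL
value: ALLOYDB_SECRET_NAME_VAL
EOF

# Illustrative values.
ALLOYDB_PRIMARY_IP=10.1.2.3
ALLOYDB_SECRET_NAME=alloydb-secret

# Same in-place substitutions as above, applied to the copy.
sed -i "s/ALLOYDB_PRIMARY_IP_VAL/${ALLOYDB_PRIMARY_IP}/g" "$tmp"
sed -i "s/ALLOYDB_SECRET_NAME_VAL/${ALLOYDB_SECRET_NAME}/g" "$tmp"

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```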
You can locally render these manifests by running `kubectl kustomize .` as well as deploying them by running `kubectl apply -k .`.
## Extra cleanup steps
```bash
gcloud compute addresses delete ${ALLOYDB_SERVICE_NAME} --global
# --force cleans up the instances inside the cluster automatically
gcloud alloydb clusters delete ${ALLOYDB_CLUSTER_NAME} --force --region ${REGION}
gcloud iam service-accounts delete ${ALLOYDB_USER_GSA_ID}
gcloud secrets delete ${ALLOYDB_SECRET_NAME}
```


@@ -0,0 +1,99 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
# cartservice - replace REDIS_ADDR with ALLOYDB_PRIMARY_IP for the cartservice Deployment
# Potentially later we'll factor in splitting traffic to primary/read pool, but for now
# we'll just manage the primary instance
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: cartservice
spec:
template:
spec:
containers:
- name: server
env:
- name: REDIS_ADDR
$patch: delete
- name: ALLOYDB_PRIMARY_IP
value: ALLOYDB_PRIMARY_IP_VAL
- name: ALLOYDB_DATABASE_NAME
value: ALLOYDB_CARTS_DATABASE_NAME_VAL
- name: ALLOYDB_TABLE_NAME
value: ALLOYDB_CARTS_TABLE_NAME_VAL
- name: ALLOYDB_SECRET_NAME
value: ALLOYDB_SECRET_NAME_VAL
- name: PROJECT_ID
value: PROJECT_ID_VAL
# cartservice - add the GSA annotation for the cartservice KSA
- patch: |-
apiVersion: v1
kind: ServiceAccount
metadata:
name: cartservice
annotations:
iam.gke.io/gcp-service-account: ALLOYDB_USER_GSA_ID
# productcatalogservice - replace ALLOYDB environments
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: productcatalogservice
spec:
template:
spec:
containers:
- name: server
env:
- name: ALLOYDB_CLUSTER_NAME
value: ALLOYDB_CLUSTER_NAME_VAL
- name: ALLOYDB_INSTANCE_NAME
value: ALLOYDB_INSTANCE_NAME_VAL
- name: ALLOYDB_DATABASE_NAME
value: ALLOYDB_PRODUCTS_DATABASE_NAME_VAL
- name: ALLOYDB_TABLE_NAME
value: ALLOYDB_PRODUCTS_TABLE_NAME_VAL
- name: ALLOYDB_SECRET_NAME
value: ALLOYDB_SECRET_NAME_VAL
- name: PROJECT_ID
value: PROJECT_ID_VAL
- name: REGION
value: REGION_VAL
# productcatalogservice - add the GSA annotation for the productcatalogservice KSA
- patch: |-
apiVersion: v1
kind: ServiceAccount
metadata:
name: productcatalogservice
annotations:
iam.gke.io/gcp-service-account: ALLOYDB_USER_GSA_ID
# redis - remove the redis-cart Deployment
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-cart
$patch: delete
# redis - remove the redis-cart Service
- patch: |-
apiVersion: v1
kind: Service
metadata:
name: redis-cart
$patch: delete


@@ -0,0 +1,31 @@
# Update the container registry of the Online Boutique apps
By default, Online Boutique's container images are pulled from a public container registry (`us-central1-docker.pkg.dev/google-samples/microservices-demo`). A best practice is to host these container images in your own private container registry, and the Kustomize variation in this folder can help you do that.
## Change the default container registry via Kustomize
To automate the deployment of Online Boutique integrated with your own container registry, you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
REGISTRY=my-registry # Example: us-central1-docker.pkg.dev/my-project/my-directory
sed -i "s|CONTAINER_IMAGES_REGISTRY|${REGISTRY}|g" components/container-images-registry/kustomization.yaml
kustomize edit add component components/container-images-registry
```
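Note that the `sed` command above uses `|` as its delimiter: the registry path contains `/` characters, which would otherwise have to be escaped. This can be checked on a single illustrative line:

```bash
# Illustrative registry path containing slashes.
REGISTRY=us-central1-docker.pkg.dev/my-project/my-directory
line="newName: CONTAINER_IMAGES_REGISTRY/adservice"

# "|" as the delimiter avoids escaping the slashes in ${REGISTRY}.
out=$(echo "$line" | sed "s|CONTAINER_IMAGES_REGISTRY|${REGISTRY}|g")
echo "$out"
```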
_Note: this Kustomize component will update the container registry in the `image:` field in all `Deployments`._
This will update the `kustomize/kustomization.yaml` file which could be similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/container-images-registry
```
You can (optionally) locally render these manifests by running `kubectl kustomize .`.
You can deploy them by running `kubectl apply -k .`.


@@ -0,0 +1,41 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
images:
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/adservice
newName: CONTAINER_IMAGES_REGISTRY/adservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/cartservice
newName: CONTAINER_IMAGES_REGISTRY/cartservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/checkoutservice
newName: CONTAINER_IMAGES_REGISTRY/checkoutservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/currencyservice
newName: CONTAINER_IMAGES_REGISTRY/currencyservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/emailservice
newName: CONTAINER_IMAGES_REGISTRY/emailservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/frontend
newName: CONTAINER_IMAGES_REGISTRY/frontend
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/loadgenerator
newName: CONTAINER_IMAGES_REGISTRY/loadgenerator
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/paymentservice
newName: CONTAINER_IMAGES_REGISTRY/paymentservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/productcatalogservice
newName: CONTAINER_IMAGES_REGISTRY/productcatalogservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/recommendationservice
newName: CONTAINER_IMAGES_REGISTRY/recommendationservice
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/shippingservice
newName: CONTAINER_IMAGES_REGISTRY/shippingservice
- name: redis
newName: CONTAINER_IMAGES_REGISTRY/redis


@@ -0,0 +1,53 @@
# Add a suffix to the image tag of the Online Boutique container images
You may want to add a suffix to the Online Boutique container image tag to target a specific version.
The Kustomize Component inside this folder can help.
## Add a suffix to the container image tag via Kustomize
To automate the deployment of the Online Boutique apps with a suffix added to the container image tag, you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
SUFFIX=-my-suffix
sed -i "s/CONTAINER_IMAGES_TAG_SUFFIX/$SUFFIX/g" components/container-images-tag-suffix/kustomization.yaml
kustomize edit add component components/container-images-tag-suffix
```
_Note: this Kustomize component will add a suffix to the container image tag of the `image:` field in all `Deployments`._
This will update the `kustomize/kustomization.yaml` file which could be similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/container-images-tag-suffix
```
You can locally render these manifests by running `kubectl kustomize . | sed "s/$SUFFIX$SUFFIX/$SUFFIX/g"` as well as deploying them by running `kubectl kustomize . | sed "s/$SUFFIX$SUFFIX/$SUFFIX/g" | kubectl apply -f`.
_Note: for this variation, `kubectl apply -k .` alone won't work because there is a [known issue currently in Kustomize](https://github.com/kubernetes-sigs/kustomize/issues/4814) where the `tagSuffix` is duplicated. The `sed "s/$SUFFIX$SUFFIX/$SUFFIX/g"` commands above are a temporary workaround._
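The workaround can be seen on an illustrative rendered line: Kustomize emits the suffix twice, and the `sed` collapses the duplicate back to a single suffix:

```bash
SUFFIX=-my-suffix

# Illustrative line as rendered with the duplicated-suffix bug
# (kubernetes-sigs/kustomize#4814).
rendered="image: cartservice:v1.0.0${SUFFIX}${SUFFIX}"

# Collapse the duplicated suffix, as in the commands above.
fixed=$(echo "$rendered" | sed "s/$SUFFIX$SUFFIX/$SUFFIX/g")
echo "$fixed"
```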
## Combine with other Kustomize Components
If you're combining this Kustomize Component with other variations, here are some considerations:
- `components/container-images-tag-suffix` should be placed before `components/container-images-registry`
- `components/container-images-tag-suffix` should be placed after `components/container-images-tag`
For example, here is the order to respect:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/container-images-tag
- components/container-images-tag-suffix
- components/container-images-registry
```


@@ -0,0 +1,39 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
images:
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/adservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/cartservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/checkoutservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/currencyservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/emailservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/frontend
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/loadgenerator
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/paymentservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/productcatalogservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/recommendationservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/shippingservice
tagSuffix: CONTAINER_IMAGES_TAG_SUFFIX


@@ -0,0 +1,46 @@
# Update the container image tag of the Online Boutique apps
By default, the Online Boutique apps target the latest release version (see the list of versions [here](https://github.com/GoogleCloudPlatform/microservices-demo/releases)). You may need to change this image tag to target a specific version; this Kustomize variation will help you set that up.
## Change the default container image tag via Kustomize
To automate the deployment of the Online Boutique apps with a specific container image tag, you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
TAG=v1.0.0
sed -i "s/CONTAINER_IMAGES_TAG/$TAG/g" components/container-images-tag/kustomization.yaml
kustomize edit add component components/container-images-tag
```
_Note: this Kustomize component will update the container image tag of the `image:` field in all `Deployments`._
This will update the `kustomize/kustomization.yaml` file which could be similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/container-images-tag
```
You can locally render these manifests by running `kubectl kustomize .` as well as deploying them by running `kubectl apply -k .`.
**Important notes:** if combining with the other variations, here are some considerations:
- `components/container-images-tag` should be placed before `components/container-images-registry`
For example, here is the order to respect:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/container-images-tag
- components/container-images-registry
```


@@ -0,0 +1,39 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
images:
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/adservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/cartservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/checkoutservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/currencyservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/emailservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/frontend
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/loadgenerator
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/paymentservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/productcatalogservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/recommendationservice
newTag: CONTAINER_IMAGES_TAG
- name: us-central1-docker.pkg.dev/google-samples/microservices-demo/shippingservice
newTag: CONTAINER_IMAGES_TAG


@@ -0,0 +1,62 @@
# Customize the Base URL for Online Boutique
This component allows you to change the base URL for the Online Boutique application. By default, the application uses the root path ("/") as its base URL. This customization sets the base URL to "/online-boutique" and updates the health check paths accordingly.
## What it does
1. Sets the `BASE_URL` environment variable to "/online-boutique" for the frontend deployment.
2. Updates the liveness probe path to "/online-boutique/_healthz".
3. Updates the readiness probe path to "/online-boutique/_healthz".
## How to use
To apply this customization, you can use Kustomize to include this component in your deployment.
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/custom-base-url
```
This will update the `kustomize/kustomization.yaml` file, which could look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/custom-base-url
```
## Render and Deploy
You can locally render these manifests by running:
```bash
kubectl kustomize .
```
To deploy the customized application, run:
```bash
kubectl apply -k .
```
## Customizing the Base URL
If you want to use a different base URL, you can modify the `value` fields in the kustomization.yaml file. Make sure to update all three occurrences:
1. The `BASE_URL` environment variable
2. The liveness probe path
3. The readiness probe path
For example, to change the base URL to "/shop", you would modify the values as follows:
```yaml
value: /shop
value: /shop/_healthz
value: /shop/_healthz
```
Note: After changing the base URL, make sure to update any internal links or references within your application to use the new base URL.
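The three value changes can also be scripted. This sketch applies a single substitution to a throwaway copy of the relevant lines (the file contents are illustrative, standing in for the component's kustomization.yaml):

```bash
# Throwaway copy of the three "value:" lines from the component.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
value: /online-boutique
value: /online-boutique/_healthz
value: /online-boutique/_healthz
EOF

# Switch all three values to the new base path in one pass.
sed -i "s|/online-boutique|/shop|g" "$tmp"

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```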


@@ -0,0 +1,32 @@
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- target:
kind: Deployment
name: frontend
patch: |-
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: BASE_URL
value: /online-boutique
- op: replace
path: /spec/template/spec/containers/0/livenessProbe/httpGet/path
value: /online-boutique/_healthz
- op: replace
path: /spec/template/spec/containers/0/readinessProbe/httpGet/path
value: /online-boutique/_healthz


@@ -0,0 +1,48 @@
# Change the Online Boutique theme to the Cymbal Shops Branding
By default, when you deploy this sample app, the "Online Boutique" branding (logo and wording) will be used.
But you may want to use Google Cloud's fictitious company, _Cymbal Shops_, instead.
To use "Cymbal Shops" branding, set the `CYMBAL_BRANDING` environment variable to `"true"` in the Kubernetes manifest (`.yaml`) for the `frontend` Deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
...
template:
...
spec:
...
containers:
...
env:
...
- name: CYMBAL_BRANDING
value: "true"
```
## Deploy Online Boutique with the Cymbal Shops branding via Kustomize
To automate the deployment of Online Boutique with the Cymbal Shops branding you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/cymbal-branding
```
This will update the `kustomize/kustomization.yaml` file which could be similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/cymbal-branding
```
You can locally render these manifests by running `kubectl kustomize .` as well as deploying them by running `kubectl apply -k .`.


@@ -0,0 +1,30 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
template:
spec:
containers:
- name: server
env:
- name: CYMBAL_BRANDING
value: "true"


@@ -0,0 +1,123 @@
# Integrate Online Boutique with Google Cloud Operations
By default, [Google Cloud Operations](https://cloud.google.com/products/operations) instrumentation is **turned off** for Online Boutique deployments. This includes Monitoring (Stats), Tracing, and Profiler. This means that even if you're running this app on [GKE](https://cloud.google.com/kubernetes-engine), traces (for example) will not be exported to [Google Cloud Trace](https://cloud.google.com/trace).
If you want to re-enable Google Cloud Operations instrumentation, the easiest way is to enable the included Kustomize component, which turns on traces and metrics and adds a deployment of the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) to gather them and forward them to the appropriate Google Cloud backends.
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/google-cloud-operations
```
This will update the `kustomize/kustomization.yaml` file which could be similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/google-cloud-operations
```
You can locally render these manifests by running `kubectl kustomize .` as well as deploying them by running `kubectl apply -k .`.
You will also need to make sure that you have the associated Google APIs enabled in your Google Cloud project:
```bash
PROJECT_ID=<your-gcp-project-id>
gcloud services enable \
monitoring.googleapis.com \
cloudtrace.googleapis.com \
cloudprofiler.googleapis.com \
--project ${PROJECT_ID}
```
In addition, you will need to grant the following IAM roles to your Google Service Account (GSA):
```bash
PROJECT_ID=<your-gcp-project-id>
GSA_NAME=<your-gsa>
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member "serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/cloudtrace.agent
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member "serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member "serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/cloudprofiler.agent
```
**Note:** currently, only tracing is supported; support for metrics and more is coming soon.
## Changes
When enabling this kustomize module, most services will be patched with a configuration similar to the following:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: productcatalogservice
spec:
template:
spec:
containers:
- name: server
env:
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: ENABLE_STATS
value: "1"
- name: ENABLE_TRACING
value: "1"
```
This patch sets environment variables to enable export of stats and tracing, as well as a variable to tell the service how to reach the new collector deployment.
## OpenTelemetry Collector
Currently, this component adds a single collector service which collects traces and metrics from individual services and forwards them to the appropriate Google Cloud backend.
![Collector Architecture Diagram](collector-model.png)
If you wish to experiment with different backends, you can modify the appropriate lines in [otel-collector.yaml](otel-collector.yaml) to export traces or metrics to a different backend. See the [OpenTelemetry docs](https://opentelemetry.io/docs/collector/configuration/) for more details.
## Workload Identity
If you are running this sample on GKE, your GKE cluster may be configured to use [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to manage access to Google Cloud APIs (like Cloud Trace). If this is the case, you may not see traces properly exported, or may see an error message like `failed to export to Google Cloud Trace: rpc error: code = PermissionDenied desc = The caller does not have permission` logged by your `opentelemetrycollector` Pod(s). In order to export traces with such a setup, you need to associate the Kubernetes [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) (`default/default`) with your [default compute service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) on Google Cloud (or a custom Google Cloud service account you may create for this purpose).
* To get the email address associated with your Google service account, check the IAM section of the Cloud Console, or run the following command in your terminal:
```bash
gcloud iam service-accounts list
```
* Then, allow the Kubernetes service account to act as your Google service account with the following command (using your own `PROJECT_ID` and the `GSA_EMAIL` you found in the previous step):
```bash
gcloud iam service-accounts add-iam-policy-binding ${GSA_EMAIL} \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/default]"
```
* Annotate your Kubernetes service account (`default/default` for the `default` namespace) to use the Google IAM service account:
```bash
kubectl annotate serviceaccount default \
iam.gke.io/gcp-service-account=${GSA_EMAIL}
```
* Finally, restart your `opentelemetrycollector` deployment to reflect the new settings:
```bash
kubectl rollout restart deployment opentelemetrycollector
```
When the new Pod rolls out, you should start to see traces appear in the cloud console.
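For reference, the Workload Identity `--member` string used in the binding above has the form `serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]`. Composed locally with placeholder values:

```shell
# Compose the Workload Identity principal for the default/default
# Kubernetes service account, using a placeholder project id
PROJECT_ID="my-demo-project"   # placeholder
NAMESPACE="default"
KSA_NAME="default"
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA_NAME}]"
echo "${MEMBER}"
```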

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- otel-collector.yaml
patches:
# adservice - not yet implemented
# checkoutservice - tracing, profiler
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: checkoutservice
spec:
template:
spec:
containers:
- name: server
env:
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: OTEL_SERVICE_NAME
value: "checkoutservice"
- name: ENABLE_TRACING
value: "1"
- name: ENABLE_PROFILER
value: "1"
# currencyservice - tracing, profiler
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: currencyservice
spec:
template:
spec:
containers:
- name: server
env:
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: OTEL_SERVICE_NAME
value: "currencyservice"
- name: ENABLE_TRACING
value: "1"
- name: DISABLE_PROFILER
$patch: delete
# emailservice - tracing, profiler
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: emailservice
spec:
template:
spec:
containers:
- name: server
env:
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: OTEL_SERVICE_NAME
value: "emailservice"
- name: ENABLE_TRACING
value: "1"
- name: DISABLE_PROFILER
$patch: delete
# frontend - tracing, profiler
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
template:
spec:
containers:
- name: server
env:
- name: ENABLE_TRACING
value: "1"
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: OTEL_SERVICE_NAME
value: "frontend"
- name: ENABLE_PROFILER
value: "1"
# paymentservice - tracing, profiler
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: paymentservice
spec:
template:
spec:
containers:
- name: server
env:
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: OTEL_SERVICE_NAME
value: "paymentservice"
- name: ENABLE_TRACING
value: "1"
- name: DISABLE_PROFILER
$patch: delete
  # productcatalogservice - tracing (profiler explicitly disabled)
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: productcatalogservice
spec:
template:
spec:
containers:
- name: server
env:
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: OTEL_SERVICE_NAME
value: "productcatalogservice"
- name: ENABLE_TRACING
value: "1"
- name: DISABLE_PROFILER
value: "1"
# recommendationservice - tracing, profiler
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: recommendationservice
spec:
template:
spec:
containers:
- name: server
env:
- name: COLLECTOR_SERVICE_ADDR
value: "opentelemetrycollector:4317"
- name: OTEL_SERVICE_NAME
value: "recommendationservice"
- name: ENABLE_TRACING
value: "1"
- name: DISABLE_PROFILER
$patch: delete
# shippingservice - stats, tracing, profiler
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: shippingservice
spec:
template:
spec:
containers:
- name: server
env:
- name: DISABLE_PROFILER
$patch: delete

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetrycollector
spec:
replicas: 1
selector:
matchLabels:
app: opentelemetrycollector
template:
metadata:
labels:
app: opentelemetrycollector
spec:
securityContext:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
# Init container retrieves the current cloud project id from the metadata server
# and inserts it into the collector config template
# https://cloud.google.com/compute/docs/storing-retrieving-metadata
initContainers:
- name: otel-gateway-init
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
image: busybox:latest@sha256:e226d6308690dbe282443c8c7e57365c96b5228f0fe7f40731b5d84d37a06839
command:
- '/bin/sh'
- '-c'
- |
sed "s/{{PROJECT_ID}}/$(curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/project/project-id)/" /template/collector-gateway-config-template.yaml >> /conf/collector-gateway-config.yaml
volumeMounts:
- name: collector-gateway-config-template
mountPath: /template
- name: collector-gateway-config
mountPath: /conf
containers:
# This gateway container will receive traces and metrics from each microservice
# and forward it to GCP
- name: otel-gateway
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
args:
- --config=/conf/collector-gateway-config.yaml
image: otel/opentelemetry-collector-contrib:0.144.0@sha256:213886eb6407af91b87fa47551c3632be1a6419ff3a5114ef1e6fc364628496f
volumeMounts:
- name: collector-gateway-config
mountPath: /conf
volumes:
# Simple ConfigMap volume with template file
- name: collector-gateway-config-template
configMap:
items:
- key: collector-gateway-config-template.yaml
path: collector-gateway-config-template.yaml
name: collector-gateway-config-template
# Create a volume to store the expanded template (with correct cloud project ID)
- name: collector-gateway-config
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: opentelemetrycollector
spec:
ports:
- name: grpc-otlp
port: 4317
protocol: TCP
targetPort: 4317
selector:
app: opentelemetrycollector
type: ClusterIP
---
apiVersion: v1
kind: ConfigMap
metadata:
name: collector-gateway-config-template
# Open Telemetry Collector config
# https://opentelemetry.io/docs/collector/configuration/
data:
collector-gateway-config-template.yaml: |
receivers:
otlp:
protocols:
grpc:
processors:
exporters:
googlecloud:
project: {{PROJECT_ID}}
service:
pipelines:
traces:
receivers: [otlp] # Receive otlp-formatted data from other collector instances
processors: []
exporters: [googlecloud] # Export traces directly to Google Cloud
metrics:
receivers: [otlp]
processors: []
exporters: [googlecloud] # Export metrics to Google Cloud

# Integrate Online Boutique with Memorystore (Redis)
By default, the `cartservice` stores its data in an in-cluster Redis database. Using a fully managed database service outside your GKE cluster, such as Google Cloud Memorystore (Redis), can bring more resiliency and more security.
![Architecture diagram with Memorystore](/docs/img/memorystore.png)
## Provision a Memorystore (Redis) instance
Important notes:
- You can connect to a Memorystore (Redis) instance from GKE clusters that are in the same region and use the same network as your instance.
- You cannot connect to a Memorystore (Redis) instance from a GKE cluster without VPC-native/IP aliasing enabled.
To provision a Memorystore (Redis) instance, follow these instructions:
```bash
ZONE="<your-GCP-zone>"
REGION="<your-GCP-region>"
gcloud services enable redis.googleapis.com
gcloud redis instances create redis-cart \
--size=1 \
--region=${REGION} \
--zone=${ZONE} \
--redis-version=redis_7_0
```
_Note: this repository also contains a Terraform script to provision the Memorystore (Redis) instance alongside the GKE cluster; more information [here](/terraform)._
## Deploy Online Boutique connected to a Memorystore (Redis) instance
To automate the deployment of Online Boutique integrated with Memorystore (Redis) you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/memorystore
```
_Note: this Kustomize component also removes the `redis-cart` `Deployment` and `Service`, which are no longer used._
This will update the `kustomize/kustomization.yaml` file, which should now look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/memorystore
```
Update the Kustomize manifests to target this Memorystore (Redis) instance:
```bash
REDIS_IP=$(gcloud redis instances describe redis-cart --region=${REGION} --format='get(host)')
REDIS_PORT=$(gcloud redis instances describe redis-cart --region=${REGION} --format='get(port)')
sed -i "s/REDIS_CONNECTION_STRING/${REDIS_IP}:${REDIS_PORT}/g" components/memorystore/kustomization.yaml
```
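The effect of the `sed` command can be sketched against a stand-in fragment of the component's `kustomization.yaml`, using placeholder values instead of real `gcloud` output:

```shell
# Stand-in for the line patched in components/memorystore/kustomization.yaml
printf 'value: "REDIS_CONNECTION_STRING"\n' > /tmp/memorystore-fragment.yaml

REDIS_IP="10.0.0.3"    # placeholder for the Memorystore host
REDIS_PORT="6379"      # placeholder for the Memorystore port
sed -i "s/REDIS_CONNECTION_STRING/${REDIS_IP}:${REDIS_PORT}/g" /tmp/memorystore-fragment.yaml
cat /tmp/memorystore-fragment.yaml
```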
You can render these manifests locally by running `kubectl kustomize .`, and deploy them by running `kubectl apply -k .`.
## Resources
- [Connecting to a Redis instance from a Google Kubernetes Engine cluster](https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-gke)
- [Seamlessly encrypt traffic from any apps in your Mesh to Memorystore (Redis)](https://medium.com/google-cloud/64b71969318d)

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
# cartservice - replace REDIS_ADDR to target new Memorystore (redis) instance
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: cartservice
spec:
template:
spec:
containers:
- name: server
env:
- name: REDIS_ADDR
value: "REDIS_CONNECTION_STRING"
# redis - remove the redis-cart Deployment
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-cart
$patch: delete
# redis - remove the redis-cart Service
- patch: |-
apiVersion: v1
kind: Service
metadata:
name: redis-cart
$patch: delete

# Secure Online Boutique with Network Policies
You can use [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) enforcement to control the communication between your cluster's Pods and Services.
To use `NetworkPolicies` in Google Kubernetes Engine (GKE), you will need a GKE cluster with network policy enforcement enabled; the recommended approach is to use [GKE Dataplane V2](https://cloud.google.com/kubernetes-engine/docs/how-to/dataplane-v2).
To use `NetworkPolicies` on a local cluster such as [minikube](https://minikube.sigs.k8s.io/docs/start/), you will need to use an alternative CNI that supports `NetworkPolicies` like [Calico](https://projectcalico.docs.tigera.io/getting-started/kubernetes/minikube). To run a minikube cluster with Calico, run `minikube start --cni=calico`. By design, the minikube default CNI [Kindnet](https://github.com/aojea/kindnet) does not support it.
## Deploy Online Boutique with `NetworkPolicies` via Kustomize
To automate the deployment of Online Boutique integrated with fine-grained `NetworkPolicies` (one per `Deployment`), you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/network-policies
```
This will update the `kustomize/kustomization.yaml` file, which should now look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/network-policies
```
You can render these manifests locally by running `kubectl kustomize .`, and deploy them by running `kubectl apply -k .`.
Once deployed, you can verify that the `NetworkPolicies` are successfully deployed:
```bash
kubectl get networkpolicy
```
The output could be similar to:
```output
NAME POD-SELECTOR AGE
adservice app=adservice 2m58s
cartservice app=cartservice 2m58s
checkoutservice app=checkoutservice 2m58s
currencyservice app=currencyservice 2m58s
deny-all <none> 2m58s
emailservice app=emailservice 2m58s
frontend app=frontend 2m58s
loadgenerator app=loadgenerator 2m58s
paymentservice app=paymentservice 2m58s
productcatalogservice app=productcatalogservice 2m58s
recommendationservice app=recommendationservice 2m58s
redis-cart app=redis-cart 2m58s
shippingservice app=shippingservice 2m58s
```
_Note: `Egress` is wide open in these `NetworkPolicies`. This is on purpose: there are multiple egress destinations to take into consideration, such as the Kubernetes DNS, the Istio control plane (`istiod`), the Cloud Trace API, and the Cloud Profiler API._
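If you did want to lock down egress as well, the wide-open `egress: [{}]` rule in each policy could be replaced by explicit rules. A sketch that would only allow DNS resolution (the `kube-dns` label and ports are assumptions about a typical cluster) might look like:

```yaml
egress:
- to:
  - namespaceSelector: {}
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - port: 53
    protocol: UDP
  - port: 53
    protocol: TCP
```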
## Related Resources
- [GKE Dataplane V2 announcement](https://cloud.google.com/blog/products/containers-kubernetes/bringing-ebpf-and-cilium-to-google-kubernetes-engine)
- [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
- [Kubernetes Network Policy Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes)
- [Network policy logging](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy-logging)

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- network-policy-deny-all.yaml
- network-policy-adservice.yaml
- network-policy-cartservice.yaml
- network-policy-checkoutservice.yaml
- network-policy-currencyservice.yaml
- network-policy-emailservice.yaml
- network-policy-frontend.yaml
- network-policy-loadgenerator.yaml
- network-policy-paymentservice.yaml
- network-policy-productcatalogservice.yaml
- network-policy-recommendationservice.yaml
- network-policy-redis.yaml
- network-policy-shippingservice.yaml

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: adservice
spec:
podSelector:
matchLabels:
app: adservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- port: 9555
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: cartservice
spec:
podSelector:
matchLabels:
app: cartservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
- podSelector:
matchLabels:
app: checkoutservice
ports:
- port: 7070
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: checkoutservice
spec:
podSelector:
matchLabels:
app: checkoutservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- port: 5050
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: currencyservice
spec:
podSelector:
matchLabels:
app: currencyservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
- podSelector:
matchLabels:
app: checkoutservice
ports:
- port: 7000
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: emailservice
spec:
podSelector:
matchLabels:
app: emailservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: checkoutservice
ports:
- port: 8080
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: frontend
spec:
podSelector:
matchLabels:
app: frontend
policyTypes:
- Ingress
- Egress
ingress:
- {}
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: loadgenerator
spec:
podSelector:
matchLabels:
app: loadgenerator
policyTypes:
- Egress
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: paymentservice
spec:
podSelector:
matchLabels:
app: paymentservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: checkoutservice
ports:
- port: 50051
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: productcatalogservice
spec:
podSelector:
matchLabels:
app: productcatalogservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
- podSelector:
matchLabels:
app: checkoutservice
- podSelector:
matchLabels:
app: recommendationservice
ports:
- port: 3550
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: recommendationservice
spec:
podSelector:
matchLabels:
app: recommendationservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- port: 8080
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: redis-cart
spec:
podSelector:
matchLabels:
app: redis-cart
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: cartservice
ports:
- port: 6379
protocol: TCP
egress:
- {}

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: shippingservice
spec:
podSelector:
matchLabels:
app: shippingservice
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
- podSelector:
matchLabels:
app: checkoutservice
ports:
- port: 50051
protocol: TCP
egress:
- {}

# Remove the public exposure of Online Boutique's frontend
By default, when you deploy Online Boutique, a `Service` (named `frontend-external`) of type `LoadBalancer` is deployed with a publicly accessible IP address.
But you may not want to expose this sample app publicly.
## Deploy Online Boutique without the default public endpoint
To automate the deployment of Online Boutique without the default public endpoint you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/non-public-frontend
```
This will update the `kustomize/kustomization.yaml` file, which should now look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/non-public-frontend
```
You can render these manifests locally by running `kubectl kustomize .`, and deploy them by running `kubectl apply -k .`.

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
# frontend - delete frontend-external service
- patch: |-
apiVersion: v1
kind: Service
metadata:
name: frontend-external
$patch: delete

# Service mesh with Istio
You can use [Istio](https://istio.io) to enable [service mesh features](https://cloud.google.com/service-mesh/docs/overview) such as traffic management, observability, and security. Istio can be provisioned using Cloud Service Mesh (CSM), the open source `istioctl` tool, or another Istio provider. You can then label individual namespaces for sidecar injection and configure an Istio gateway to replace the `frontend-external` load balancer.
# Setup
The following CLI tools need to be installed and available in your PATH:
- `gcloud`
- `kubectl`
- `kustomize`
- `istioctl` (optional)
1. Set up some default environment variables.
```sh
PROJECT_ID="<your-project-id>"
REGION="<your-google-cloud-region>"
CLUSTER_NAME="online-boutique"
gcloud config set project $PROJECT_ID
```
# Provision a GKE Cluster
1. Create an Autopilot GKE cluster.
```sh
gcloud container clusters create-auto $CLUSTER_NAME \
--location=$REGION
```
To make the best use of our service mesh, we need [GKE Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) and the [Kubernetes Gateway API resource definitions](https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-gateways) enabled; Autopilot takes care of both for us.
1. Switch your kubectl context to the newly created cluster.
```sh
gcloud container clusters get-credentials $CLUSTER_NAME \
--region $REGION
```
# Provision and Configure Istio Service Mesh
## (Option A) Provision managed Istio using Cloud Service Mesh
Cloud Service Mesh (CSM) provides a service mesh experience that includes a fully managed control plane and data plane. The recommended way to [install CSM](https://cloud.google.com/service-mesh/docs/onboarding/provision-control-plane) uses [fleet management](https://cloud.google.com/kubernetes-engine/fleet-management/docs/fleet-creation).
1. Enable the Cloud Service Mesh and GKE Enterprise APIs.
```sh
gcloud services enable mesh.googleapis.com anthos.googleapis.com
```
1. Enable service mesh support fleet-wide.
```sh
gcloud container fleet mesh enable
```
1. Register the GKE cluster to the fleet.
```sh
gcloud container clusters update $CLUSTER_NAME \
--location $REGION \
--fleet-project $PROJECT_ID
```
1. Enable automatic management of the service mesh feature in the cluster.
```sh
gcloud container fleet mesh update \
--management automatic \
--memberships $CLUSTER_NAME \
--project $PROJECT_ID \
--location $REGION
```
1. Add the Istio injection labels to the default namespace.
```sh
kubectl label namespace default \
istio.io/rev- istio-injection=enabled --overwrite
```
1. Verify that the service mesh is fully provisioned. It will take several minutes for both the control plane and data plane to be ready.
```sh
gcloud container fleet mesh describe
```
The output should be similar to:
```
createTime: '2024-09-18T15:52:36.133664725Z'
fleetDefaultMemberConfig:
mesh:
management: MANAGEMENT_AUTOMATIC
membershipSpecs:
projects/12345/locations/us-central1/memberships/online-boutique:
mesh:
management: MANAGEMENT_AUTOMATIC
origin:
type: USER
membershipStates:
projects/12345/locations/us-central1/memberships/online-boutique:
servicemesh:
conditions:
- code: VPCSC_GA_SUPPORTED
details: This control plane supports VPC-SC GA.
documentationLink: http://cloud.google.com/service-mesh/docs/managed/vpc-sc
severity: INFO
controlPlaneManagement:
details:
- code: REVISION_READY
details: 'Ready: asm-managed'
implementation: TRAFFIC_DIRECTOR
state: ACTIVE
dataPlaneManagement:
details:
- code: OK
details: Service is running.
state: ACTIVE
state:
code: OK
description: 'Revision ready for use: asm-managed.'
updateTime: '2024-09-18T16:30:37.632583401Z'
name: projects/my-project/locations/global/features/servicemesh
resourceState:
state: ACTIVE
spec: {}
updateTime: '2024-09-18T16:15:05.957266437Z'
```
1. (Optional) If you require Certificate Authority Service, you can configure it by [following these instructions](https://cloud.google.com/service-mesh/docs/security/certificate-authority-service).
## (Option B) Provision Istio using istioctl
1. Alternatively you can install the open source version of Istio by following the [getting started guide](https://istio.io/latest/docs/setup/getting-started/).
```sh
# Install istio 1.17 or above
istioctl install --set profile=minimal -y
# Enable sidecar injection for Kubernetes namespace(s) where microservices-demo is deployed
kubectl label namespace default istio-injection=enabled
# Make sure the istiod injection webhook port 15017 is accessible via GKE master nodes
# Otherwise your replicaset-controller may be blocked when trying to create new pods with:
# Error creating: Internal error occurred: failed calling
# webhook "namespace.sidecar-injector.istio.io" ... context deadline exceeded
gcloud compute firewall-rules list --filter="name~gke-[0-9a-z-]*-master"
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
gke-online-boutique-c94d71e8-master gke-vpc INGRESS 1000 tcp:10250,tcp:443 False
# Update firewall rule (or create a new one) to allow webhook port 15017
gcloud compute firewall-rules update gke-online-boutique-c94d71e8-master \
--allow tcp:10250,tcp:443,tcp:15017
```
# Deploy Online Boutique with the Istio component
Once the service mesh and namespace injection are configured, you can then deploy the Istio manifests using Kustomize. You should also include the [service-accounts component](../service-accounts) if you plan on using AuthorizationPolicies.
1. Enable the service-mesh-istio component.
```sh
cd kustomize/
kustomize edit add component components/service-mesh-istio
```
This will update the `kustomize/kustomization.yaml` file, which should now look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/service-mesh-istio
```
_Note: `service-mesh-istio` component includes the same delete patch as the `non-public-frontend` component. Trying to use both those components in your kustomization.yaml file will result in an error._
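If you script this step, a small guard can catch the conflicting combination before `kustomize build` fails. The `check_conflict` helper below is an illustrative sketch, not part of this repository:

```sh
# Returns success only when BOTH conflicting components appear in the file.
check_conflict() {
  grep -q 'components/non-public-frontend' "$1" \
    && grep -q 'components/service-mesh-istio' "$1"
}

if check_conflict kustomization.yaml 2>/dev/null; then
  echo "error: non-public-frontend and service-mesh-istio cannot be combined" >&2
fi
```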
1. Deploy the manifests.
```sh
kubectl apply -k .
```
The output should be similar to:
```
serviceaccount/adservice created
serviceaccount/cartservice created
serviceaccount/checkoutservice created
serviceaccount/currencyservice created
serviceaccount/emailservice created
serviceaccount/frontend created
serviceaccount/loadgenerator created
serviceaccount/paymentservice created
serviceaccount/productcatalogservice created
serviceaccount/recommendationservice created
serviceaccount/shippingservice created
service/adservice created
service/cartservice created
service/checkoutservice created
service/currencyservice created
service/emailservice created
service/frontend created
service/paymentservice created
service/productcatalogservice created
service/recommendationservice created
service/redis-cart created
service/shippingservice created
deployment.apps/adservice created
deployment.apps/cartservice created
deployment.apps/checkoutservice created
deployment.apps/currencyservice created
deployment.apps/emailservice created
deployment.apps/frontend created
deployment.apps/loadgenerator created
deployment.apps/paymentservice created
deployment.apps/productcatalogservice created
deployment.apps/recommendationservice created
deployment.apps/redis-cart created
deployment.apps/shippingservice created
gateway.gateway.networking.k8s.io/istio-gateway created
httproute.gateway.networking.k8s.io/frontend-route created
serviceentry.networking.istio.io/allow-egress-google-metadata created
serviceentry.networking.istio.io/allow-egress-googleapis created
virtualservice.networking.istio.io/frontend created
```
# Verify that the deployment succeeded
1. Check that the pods and the gateway are in a healthy and ready state.
```sh
kubectl get pods,gateways,services
```
The output should be similar to:
```
NAME READY STATUS RESTARTS AGE
pod/adservice-6cbd9794f9-8c4gv 2/2 Running 0 47s
pod/cartservice-667bbd5f6-84j8v 2/2 Running 0 47s
pod/checkoutservice-547557f445-bw46n 2/2 Running 0 47s
pod/currencyservice-6bd8885d9c-2cszv 2/2 Running 0 47s
pod/emailservice-64997dcf97-8fpsd 2/2 Running 0 47s
pod/frontend-c54778dcf-wbgmr 2/2 Running 0 46s
pod/istio-gateway-istio-8577b948c6-cxl8j 1/1 Running 0 45s
pod/loadgenerator-ccfd4d598-jh6xj 2/2 Running 0 46s
pod/paymentservice-79b77cd7c-6hth7 2/2 Running 0 46s
pod/productcatalogservice-5f75795545-nk5wv 2/2 Running 0 46s
pod/recommendationservice-56dd4c7df5-gnwwr 2/2 Running 0 46s
pod/redis-cart-799c85c644-pxsvt 2/2 Running 0 46s
pod/shippingservice-64f8df74f5-7wllf 2/2 Running 0 45s
NAME CLASS ADDRESS READY AGE
gateway.gateway.networking.k8s.io/istio-gateway istio 35.247.123.146 True 45s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/adservice ClusterIP 10.68.231.142 <none> 9555/TCP 49s
service/cartservice ClusterIP 10.68.184.25 <none> 7070/TCP 49s
service/checkoutservice ClusterIP 10.68.177.213 <none> 5050/TCP 49s
service/currencyservice ClusterIP 10.68.249.87 <none> 7000/TCP 49s
service/emailservice ClusterIP 10.68.205.123 <none> 5000/TCP 49s
service/frontend ClusterIP 10.68.94.203 <none> 80/TCP 48s
service/istio-gateway-istio LoadBalancer 10.68.147.158 35.247.123.146 15021:30376/TCP,80:30332/TCP 45s
service/kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 65m
service/paymentservice ClusterIP 10.68.114.19 <none> 50051/TCP 48s
service/productcatalogservice ClusterIP 10.68.240.153 <none> 3550/TCP 48s
service/recommendationservice ClusterIP 10.68.117.97 <none> 8080/TCP 48s
service/redis-cart ClusterIP 10.68.189.126 <none> 6379/TCP 48s
service/shippingservice ClusterIP 10.68.221.62 <none> 50051/TCP 48s
```
1. Find the external IP address of your Istio gateway.
```sh
INGRESS_HOST="$(kubectl get gateway istio-gateway \
-o jsonpath='{.status.addresses[*].value}')"
```
1. Navigate to the frontend in a web browser.
```
http://$INGRESS_HOST
```
# Additional service mesh demos using Online Boutique
- [Canary deployment](https://github.com/GoogleCloudPlatform/istio-samples/tree/master/istio-canary-gke)
- [Security (mTLS, JWT, Authorization)](https://github.com/GoogleCloudPlatform/istio-samples/tree/master/security-intro)
- [Cloud Operations (Stackdriver)](https://github.com/GoogleCloudPlatform/istio-samples/tree/master/istio-stackdriver)
- [Stackdriver metrics (Open source Istio)](https://github.com/GoogleCloudPlatform/istio-samples/tree/master/stackdriver-metrics)
# Related resources
- [Deploying classic istio-ingressgateway in ASM](https://cloud.google.com/service-mesh/docs/gateways#deploy_gateways)
- [Uninstall Istio via istioctl](https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio)
- [Uninstall Cloud Service Mesh](https://cloud.google.com/service-mesh/docs/uninstall)

# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: allow-egress-googleapis
spec:
hosts:
- "accounts.google.com" # Used to get token
- "*.googleapis.com"
ports:
- number: 80
protocol: HTTP
name: http
- number: 443
protocol: HTTPS
name: https
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: allow-egress-google-metadata
spec:
hosts:
- metadata.google.internal
addresses:
- 169.254.169.254 # GCE metadata server
ports:
- number: 80
name: http
protocol: HTTP
- number: 443
name: https
protocol: HTTPS

# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
name: istio-gateway
spec:
gatewayClassName: istio
listeners:
- name: http
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: Same
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
name: frontend-route
spec:
parentRefs:
- name: istio-gateway
rules:
- matches:
- path:
value: /
backendRefs:
- name: frontend
port: 80

# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: frontend
spec:
hosts:
- "frontend.default.svc.cluster.local"
http:
- route:
- destination:
host: frontend
port:
number: 80

# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- allow-egress-googleapis.yaml
- frontend-gateway.yaml
- frontend.yaml
patches:
# frontend - delete frontend-external service (same as non-public-frontend component)
- patch: |-
apiVersion: v1
kind: Service
metadata:
name: frontend-external
$patch: delete

# Shopping Assistant with RAG & AlloyDB
This demo adds a new service to Online Boutique called `shoppingassistantservice` which, alongside an AlloyDB-backed product catalog, adds a RAG-powered AI assistant to the frontend experience that suggests products matching the user's home decor.
## Setup instructions
**Note:** This demo requires a Google Cloud project in which you have the `owner` role; otherwise you may be unable to enable APIs or modify the VPC rules that this demo needs.
1. Set some environment variables.
```sh
export PROJECT_ID=<project_id>
export PROJECT_NUMBER=<project_number>
export PGPASSWORD=<pgpassword>
```
**Note**: The project ID and project number of your Google Cloud project can be found in the Console. The PostgreSQL password can be set to anything you want, but make sure to note it down.
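Before moving on, it can help to confirm that all three variables are set in the current shell. The `check_env` helper below is a minimal, illustrative sketch and not part of the repo scripts:

```sh
# Fail fast if any of the named environment variables is unset or empty.
check_env() {
  missing=""
  for v in "$@"; do
    eval "val=\${$v:-}"                 # indirect lookup of the variable name
    [ -n "$val" ] || missing="$missing $v"
  done
  [ -z "$missing" ] || { echo "missing:$missing" >&2; return 1; }
}

check_env PROJECT_ID PROJECT_NUMBER PGPASSWORD \
  || echo "Set the variables above before continuing."
```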
1. Change your default Google Cloud project.
```sh
gcloud auth login
gcloud config set project $PROJECT_ID
```
1. Enable the Google Kubernetes Engine (GKE) and Artifact Registry (AR) APIs.
```sh
gcloud services enable container.googleapis.com
gcloud services enable artifactregistry.googleapis.com
```
1. Create a GKE Autopilot cluster. This may take a few minutes.
```sh
gcloud container clusters create-auto cymbal-shops \
--region=us-central1
```
1. Change your Kubernetes context to your newly created GKE cluster.
```sh
gcloud container clusters get-credentials cymbal-shops \
--region us-central1
```
1. Create an Artifact Registry container image repository.
```sh
gcloud artifacts repositories create images \
--repository-format=docker \
--location=us-central1
```
1. Clone the `microservices-demo` repository locally.
```sh
git clone https://github.com/GoogleCloudPlatform/microservices-demo \
&& cd microservices-demo/
```
1. Run script #1. If it asks about policy bindings, select the option `None`. This may take a few minutes.
```sh
./kustomize/components/shopping-assistant/scripts/1_deploy_alloydb_infra.sh
```
**Note**: If you are on macOS and use a non-GNU version of `sed`, you may have to tweak the script to use `gsed` instead.
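Instead of editing the script, one workaround is a small wrapper that detects which `sed` dialect is available (GNU `sed` accepts `sed -i FILE`, while BSD/macOS `sed` requires an explicit backup suffix). This `sedi` function is purely illustrative and not part of the repo scripts:

```sh
# Portable in-place sed: dispatch on whether `sed --version` works (GNU only).
sedi() {
  if sed --version >/dev/null 2>&1; then
    sed -i "$@"       # GNU sed
  else
    sed -i '' "$@"    # BSD sed (macOS): empty backup suffix
  fi
}
```

With this in place you could run, for example, `sedi "s/PROJECT_ID_VAL/${PROJECT_ID}/g" <file>`; alternatively, install GNU sed (`brew install gnu-sed`) and substitute `gsed` for `sed` in the script.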
1. Create a Linux VM in Compute Engine (GCE).
```sh
gcloud compute instances create gce-linux \
--zone=us-central1-a \
--machine-type=e2-micro \
--image-family=debian-12 \
--image-project=debian-cloud
```
1. SSH into the VM. From here until we exit, all steps happen in the VM.
```sh
gcloud compute ssh gce-linux \
--zone "us-central1-a"
```
1. Install the Postgres client and set your default Google Cloud project.
```sh
sudo apt-get install -y postgresql-client
gcloud auth login
gcloud config set project <PROJECT_ID>
```
1. Copy script #2, the Python script, and `products.json` to the VM, and make the scripts executable.
```sh
nano 2_create_populate_alloydb_tables.sh # paste content
nano generate_sql_from_products.py # paste content
nano products.json # paste content
chmod +x 2_create_populate_alloydb_tables.sh
chmod +x generate_sql_from_products.py
```
**Note:** You can find the files at the following places:
- `kustomize/components/shopping-assistant/scripts/2_create_populate_alloydb_tables.sh`
- `kustomize/components/shopping-assistant/scripts/generate_sql_from_products.py`
- `src/productcatalogservice/products.json`
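For reference, `generate_sql_from_products.py` builds plain `INSERT` statements from `products.json`. If you adapt it to your own catalog data, note that a single quote inside a SQL string literal is escaped by doubling it. A minimal shell sketch (the table and column names are illustrative):

```sh
name="Bamboo Glass Jar's Lid"                     # value containing a quote
escaped=$(printf '%s' "$name" | sed "s/'/''/g")   # ' -> '' for SQL literals
printf "INSERT INTO catalog_items (name) VALUES ('%s');\n" "$escaped"
```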
1. Run script #2 in the VM. If it prompts for a Postgres password, enter the `PGPASSWORD` value you set for script #1 earlier. This may take a few minutes.
```sh
./2_create_populate_alloydb_tables.sh
```
1. Exit SSH.
```sh
exit
```
1. Create an API key on the [Credentials page](https://console.cloud.google.com/apis/credentials) with permissions for the Generative Language API, and note down the secret key.
1. Replace the Google API key placeholder in the shoppingassistant service.
```sh
export GOOGLE_API_KEY=<google_api_key>
sed -i "s/GOOGLE_API_KEY_VAL/${GOOGLE_API_KEY}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
```
1. Edit the root Kustomize file to enable the `alloydb` and `shopping-assistant` components.
```sh
nano kubernetes-manifests/kustomization.yaml # make the modifications below
```
```yaml
# ...head of the file
components: # remove this comment
# - ../kustomize/components/cymbal-branding
# - ../kustomize/components/google-cloud-operations
# - ../kustomize/components/memorystore
# - ../kustomize/components/network-policies
- ../kustomize/components/alloydb # remove this comment
- ../kustomize/components/shopping-assistant # remove this comment
# - ../kustomize/components/spanner
# - ../kustomize/components/container-images-tag
# - ../kustomize/components/container-images-tag-suffix
# - ../kustomize/components/container-images-registry
```
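To confirm the edit took effect, you can list the component lines that are now uncommented. This check is an optional sketch (it assumes you run it from the repository root):

```sh
# Print active (uncommented) alloydb / shopping-assistant component entries.
grep -E '^[^#]*components/(alloydb|shopping-assistant)' \
  kubernetes-manifests/kustomization.yaml 2>/dev/null \
  || echo "components not enabled yet"
```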
1. Deploy to the GKE cluster.
```sh
skaffold run --default-repo=us-central1-docker.pkg.dev/$PROJECT_ID/images
```
1. Wait for all the pods to be up and running. You can then find the external IP and navigate to it.
```sh
kubectl get pods
kubectl get services
```

# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- shoppingassistantservice.yaml
patches:
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
template:
spec:
containers:
- name: server
env:
- name: ENABLE_ASSISTANT
value: "true"

#!/bin/sh
#
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
set -x
# Replace me
PROJECT_ID=$PROJECT_ID
PROJECT_NUMBER=$PROJECT_NUMBER
PGPASSWORD=$PGPASSWORD
# Set sensible defaults
REGION=us-central1
USE_GKE_GCLOUD_AUTH_PLUGIN=True
ALLOYDB_NETWORK=default
ALLOYDB_SERVICE_NAME=onlineboutique-network-range
ALLOYDB_CLUSTER_NAME=onlineboutique-cluster
ALLOYDB_INSTANCE_NAME=onlineboutique-instance
ALLOYDB_CARTS_DATABASE_NAME=carts
ALLOYDB_CARTS_TABLE_NAME=cart_items
ALLOYDB_PRODUCTS_DATABASE_NAME=products
ALLOYDB_PRODUCTS_TABLE_NAME=catalog_items
ALLOYDB_USER_GSA_NAME=alloydb-user-sa
ALLOYDB_USER_GSA_ID=${ALLOYDB_USER_GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
ALLOYDB_SECRET_NAME=alloydb-secret
# Enable services
gcloud services enable alloydb.googleapis.com
gcloud services enable servicenetworking.googleapis.com
gcloud services enable secretmanager.googleapis.com
gcloud services enable aiplatform.googleapis.com
gcloud services enable generativelanguage.googleapis.com
# Set our DB credentials behind the secret
echo $PGPASSWORD | gcloud secrets create ${ALLOYDB_SECRET_NAME} --data-file=-
# Set up needed service connection
gcloud compute addresses create ${ALLOYDB_SERVICE_NAME} \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--description="Online Boutique Private Services" \
--network=${ALLOYDB_NETWORK}
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=${ALLOYDB_SERVICE_NAME} \
--network=${ALLOYDB_NETWORK}
gcloud alloydb clusters create ${ALLOYDB_CLUSTER_NAME} \
--region=${REGION} \
--password=${PGPASSWORD} \
--disable-automated-backup \
--network=${ALLOYDB_NETWORK}
gcloud alloydb instances create ${ALLOYDB_INSTANCE_NAME} \
--cluster=${ALLOYDB_CLUSTER_NAME} \
--region=${REGION} \
--cpu-count=4 \
--instance-type=PRIMARY
gcloud alloydb instances create ${ALLOYDB_INSTANCE_NAME}-replica \
--cluster=${ALLOYDB_CLUSTER_NAME} \
--region=${REGION} \
--cpu-count=4 \
--instance-type=READ_POOL \
--read-pool-node-count=2
gcloud beta alloydb instances update ${ALLOYDB_INSTANCE_NAME} \
--cluster=${ALLOYDB_CLUSTER_NAME} \
--region=${REGION} \
--assign-inbound-public-ip=ASSIGN_IPV4 \
--database-flags password.enforce_complexity=on
# Fetch the primary and read IPs
ALLOYDB_PRIMARY_IP=`gcloud alloydb instances list --region=${REGION} --cluster=${ALLOYDB_CLUSTER_NAME} --filter="INSTANCE_TYPE:PRIMARY" --format=flattened | sed -nE "s/ipAddress:\s*(.*)/\1/p"`
ALLOYDB_READ_IP=`gcloud alloydb instances list --region=${REGION} --cluster=${ALLOYDB_CLUSTER_NAME} --filter="INSTANCE_TYPE:READ_POOL" --format=flattened | sed -nE "s/ipAddress:\s*(.*)/\1/p"`
# Substitute environment values (alloydb/kustomization.yaml)
sed -i "s/PROJECT_ID_VAL/${PROJECT_ID}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/REGION_VAL/${REGION}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_PRIMARY_IP_VAL/${ALLOYDB_PRIMARY_IP}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_USER_GSA_ID/${ALLOYDB_USER_GSA_ID}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_CLUSTER_NAME_VAL/${ALLOYDB_CLUSTER_NAME}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_INSTANCE_NAME_VAL/${ALLOYDB_INSTANCE_NAME}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_CARTS_DATABASE_NAME_VAL/${ALLOYDB_CARTS_DATABASE_NAME}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_CARTS_TABLE_NAME_VAL/${ALLOYDB_CARTS_TABLE_NAME}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_PRODUCTS_DATABASE_NAME_VAL/${ALLOYDB_PRODUCTS_DATABASE_NAME}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_PRODUCTS_TABLE_NAME_VAL/${ALLOYDB_PRODUCTS_TABLE_NAME}/g" kustomize/components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_SECRET_NAME_VAL/${ALLOYDB_SECRET_NAME}/g" kustomize/components/alloydb/kustomization.yaml
# Substitute environment values (kustomize/components/shopping-assistant/shoppingassistantservice.yaml)
sed -i "s/PROJECT_ID_VAL/${PROJECT_ID}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
sed -i "s/REGION_VAL/${REGION}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
sed -i "s/ALLOYDB_CLUSTER_NAME_VAL/${ALLOYDB_CLUSTER_NAME}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
sed -i "s/ALLOYDB_INSTANCE_NAME_VAL/${ALLOYDB_INSTANCE_NAME}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
sed -i "s/ALLOYDB_DATABASE_NAME_VAL/${ALLOYDB_PRODUCTS_DATABASE_NAME}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
sed -i "s/ALLOYDB_TABLE_NAME_VAL/${ALLOYDB_PRODUCTS_TABLE_NAME}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
sed -i "s/ALLOYDB_SECRET_NAME_VAL/${ALLOYDB_SECRET_NAME}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
sed -i "s/ALLOYDB_USER_GSA_ID/${ALLOYDB_USER_GSA_ID}/g" kustomize/components/shopping-assistant/shoppingassistantservice.yaml
# Create service account for the cart and shopping assistant services
gcloud iam service-accounts create ${ALLOYDB_USER_GSA_NAME} \
--display-name=${ALLOYDB_USER_GSA_NAME}
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/alloydb.client
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/alloydb.databaseUser
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/secretmanager.secretAccessor
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/serviceusage.serviceUsageConsumer
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-alloydb.iam.gserviceaccount.com --role=roles/aiplatform.user
gcloud iam service-accounts add-iam-policy-binding ${ALLOYDB_USER_GSA_ID} \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/cartservice]" \
--role roles/iam.workloadIdentityUser
gcloud iam service-accounts add-iam-policy-binding ${ALLOYDB_USER_GSA_ID} \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/shoppingassistantservice]" \
--role roles/iam.workloadIdentityUser
gcloud iam service-accounts add-iam-policy-binding ${ALLOYDB_USER_GSA_ID} \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/productcatalogservice]" \
--role roles/iam.workloadIdentityUser

#!/bin/sh
#
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
set -x
# Set sensible defaults
REGION=us-central1
ALLOYDB_CLUSTER_NAME=onlineboutique-cluster
ALLOYDB_CARTS_DATABASE_NAME=carts
ALLOYDB_CARTS_TABLE_NAME=cart_items
ALLOYDB_PRODUCTS_DATABASE_NAME=products
ALLOYDB_PRODUCTS_TABLE_NAME=catalog_items
# Fetch the primary and read IPs
ALLOYDB_PRIMARY_IP=`gcloud alloydb instances list --region=${REGION} --cluster=${ALLOYDB_CLUSTER_NAME} --filter="INSTANCE_TYPE:PRIMARY" --format=flattened | sed -nE "s/ipAddress:\s*(.*)/\1/p"`
# Create carts database and table
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -c "CREATE DATABASE ${ALLOYDB_CARTS_DATABASE_NAME}"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_CARTS_DATABASE_NAME} -c "CREATE TABLE ${ALLOYDB_CARTS_TABLE_NAME} (userId text, productId text, quantity int, PRIMARY KEY(userId, productId))"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_CARTS_DATABASE_NAME} -c "CREATE INDEX cartItemsByUserId ON ${ALLOYDB_CARTS_TABLE_NAME}(userId)"
# Create products database, table, and extensions
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -c "CREATE DATABASE ${ALLOYDB_PRODUCTS_DATABASE_NAME}"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_PRODUCTS_DATABASE_NAME} -c "CREATE EXTENSION IF NOT EXISTS vector"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_PRODUCTS_DATABASE_NAME} -c "CREATE EXTENSION IF NOT EXISTS google_ml_integration CASCADE;"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_PRODUCTS_DATABASE_NAME} -c "GRANT EXECUTE ON FUNCTION embedding TO postgres;"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_PRODUCTS_DATABASE_NAME} -c "CREATE TABLE ${ALLOYDB_PRODUCTS_TABLE_NAME} (id TEXT PRIMARY KEY, name TEXT, description TEXT, picture TEXT, price_usd_currency_code TEXT, price_usd_units INTEGER, price_usd_nanos BIGINT, categories TEXT, product_embedding VECTOR(768), embed_model TEXT)"
# Generate and insert products table entries
python3 ./generate_sql_from_products.py > products.sql
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_PRODUCTS_DATABASE_NAME} -f products.sql
rm products.sql
# Generate vector embeddings
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_PRODUCTS_DATABASE_NAME} -c "UPDATE ${ALLOYDB_PRODUCTS_TABLE_NAME} SET product_embedding = embedding('textembedding-gecko@003', description), embed_model='textembedding-gecko@003';"

# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
table_name = "catalog_items"
fields = [
'id', 'name', 'description', 'picture',
'price_usd_currency_code', 'price_usd_units', 'price_usd_nanos',
'categories'
]
# Load the products JSON
with open("products.json", 'r') as f:
data = json.load(f)
# Generate SQL INSERT statements
for product in data['products']:
columns = ', '.join(fields)
placeholders = ', '.join(['{}'] * len(fields))
sql = f"INSERT INTO {table_name} ({columns}) VALUES ({placeholders});"
# Escape single quotes within product data by doubling them (SQL literal syntax)
product['name'] = product['name'].replace("'", "''")
product['description'] = product['description'].replace("'", "''")
escaped_values = (
f"'{product['id']}'",
f"'{product['name']}'",
f"'{product['description']}'",
f"'{product['picture']}'",
f"'{product['priceUsd']['currencyCode']}'",
product['priceUsd']['units'],
product['priceUsd']['nanos'],
f"'{','.join(product['categories'])}'"
)
# Render the formatted SQL query
print(sql.format(*escaped_values))

# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: apps/v1
kind: Deployment
metadata:
name: shoppingassistantservice
labels:
app: shoppingassistantservice
spec:
selector:
matchLabels:
app: shoppingassistantservice
template:
metadata:
labels:
app: shoppingassistantservice
spec:
serviceAccountName: shoppingassistantservice
terminationGracePeriodSeconds: 5
securityContext:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
containers:
- name: server
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
image: shoppingassistantservice
ports:
- name: http
containerPort: 8080
env:
- name: GOOGLE_API_KEY
value: GOOGLE_API_KEY_VAL
- name: ALLOYDB_CLUSTER_NAME
value: ALLOYDB_CLUSTER_NAME_VAL
- name: ALLOYDB_INSTANCE_NAME
value: ALLOYDB_INSTANCE_NAME_VAL
- name: ALLOYDB_DATABASE_NAME
value: ALLOYDB_DATABASE_NAME_VAL
- name: ALLOYDB_TABLE_NAME
value: ALLOYDB_TABLE_NAME_VAL
- name: ALLOYDB_SECRET_NAME
value: ALLOYDB_SECRET_NAME_VAL
- name: PROJECT_ID
value: PROJECT_ID_VAL
- name: REGION
value: REGION_VAL
resources:
requests:
cpu: 100m
memory: 64Mi
limits:
cpu: 200m
memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
name: shoppingassistantservice
labels:
app: shoppingassistantservice
spec:
type: ClusterIP
selector:
app: shoppingassistantservice
ports:
- name: http
port: 80
targetPort: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: shoppingassistantservice
annotations:
iam.gke.io/gcp-service-account: ALLOYDB_USER_GSA_ID

# Manage a single shared session for the Online Boutique apps
By default, when you deploy this sample app, the Online Boutique's `frontend` generates a `shop_session-id` cookie per browser session.
But you may want to share one unique `shop_session-id` cookie across all browser sessions.
This is useful for multi-cluster environments.
## Deploy Online Boutique to generate a single shared session
To deploy Online Boutique with a single shared session, you can apply the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/single-shared-session
```
This will update the `kustomize/kustomization.yaml` file, which should then look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/single-shared-session
```
You can locally render these manifests by running `kubectl kustomize .` as well as deploying them by running `kubectl apply -k .`.
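Once the component is applied, the rendered `frontend` Deployment carries the corresponding environment variable. An illustrative excerpt of the expected output (other fields omitted):

```yaml
# Excerpt of the rendered frontend Deployment after applying the component
# (illustrative; all other container fields and env vars omitted)
containers:
  - name: server
    env:
      - name: ENABLE_SINGLE_SHARED_SESSION
        value: "true"
```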


@@ -0,0 +1,31 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
# frontend - ENABLE_SINGLE_SHARED_SESSION
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
template:
spec:
containers:
- name: server
env:
- name: ENABLE_SINGLE_SHARED_SESSION
value: "true"


@@ -0,0 +1,104 @@
# Integrate Online Boutique with Spanner
By default the `cartservice` stores its data in an in-cluster Redis database.
Using a fully managed database service outside your GKE cluster (such as [Google Cloud Spanner](https://cloud.google.com/spanner)) could bring more resiliency and more security.
## Provision a Spanner database
To provision a Spanner instance, run the following commands:
```bash
gcloud services enable spanner.googleapis.com
SPANNER_REGION_CONFIG="<your-spanner-region-config-name>" # e.g. "regional-us-east5"
SPANNER_INSTANCE_NAME=onlineboutique
gcloud spanner instances create ${SPANNER_INSTANCE_NAME} \
--description="online boutique shopping cart" \
--config ${SPANNER_REGION_CONFIG} \
--instance-type free-instance
```
_Note: With the latest version of `gcloud`, this creates a free Spanner instance._
To create a Spanner database in that instance, run the following commands:
```bash
SPANNER_DATABASE_NAME=carts
gcloud spanner databases create ${SPANNER_DATABASE_NAME} \
--instance ${SPANNER_INSTANCE_NAME} \
--database-dialect GOOGLE_STANDARD_SQL \
--ddl "CREATE TABLE CartItems (userId STRING(1024), productId STRING(1024), quantity INT64) PRIMARY KEY (userId, productId); CREATE INDEX CartItemsByUserId ON CartItems(userId);"
```
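For readability, the schema passed via the `--ddl` flag above is equivalent to:

```sql
-- The cart schema created by the --ddl flag above, reformatted for readability.
CREATE TABLE CartItems (
  userId    STRING(1024),
  productId STRING(1024),
  quantity  INT64
) PRIMARY KEY (userId, productId);

CREATE INDEX CartItemsByUserId ON CartItems(userId);
```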
## Grant the `cartservice`'s service account access to the Spanner database
**Important note:** Your GKE cluster should have [Workload Identity enabled](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable).
As a good practice, let's create a dedicated, least-privilege Google Service Account to allow the `cartservice` to communicate with the Spanner database:
```bash
PROJECT_ID=<your-project-id>
SPANNER_DB_USER_GSA_NAME=spanner-db-user-sa
SPANNER_DB_USER_GSA_ID=${SPANNER_DB_USER_GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
ONLINEBOUTIQUE_NAMESPACE=default
CARTSERVICE_KSA_NAME=cartservice
gcloud iam service-accounts create ${SPANNER_DB_USER_GSA_NAME} \
--display-name=${SPANNER_DB_USER_GSA_NAME}
gcloud spanner databases add-iam-policy-binding ${SPANNER_DATABASE_NAME} \
--member "serviceAccount:${SPANNER_DB_USER_GSA_ID}" \
--role roles/spanner.databaseUser
gcloud iam service-accounts add-iam-policy-binding ${SPANNER_DB_USER_GSA_ID} \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[${ONLINEBOUTIQUE_NAMESPACE}/${CARTSERVICE_KSA_NAME}]" \
--role roles/iam.workloadIdentityUser
```
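The second binding above uses the Workload Identity member format, which ties the Kubernetes ServiceAccount to the Google Service Account. A minimal sketch of how that member string is composed (`PROJECT_ID` is a placeholder; the other values are the defaults from this guide):

```shell
# Compose the Workload Identity member string used in the IAM binding above.
PROJECT_ID=my-project            # placeholder: your GCP project ID
ONLINEBOUTIQUE_NAMESPACE=default
CARTSERVICE_KSA_NAME=cartservice
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${ONLINEBOUTIQUE_NAMESPACE}/${CARTSERVICE_KSA_NAME}]"
echo "${MEMBER}"
```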
## Deploy Online Boutique connected to a Spanner database
To automate the deployment of Online Boutique integrated with Spanner you can leverage the following variation with [Kustomize](../..).
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/spanner
```
_Note: this Kustomize component will also remove the `redis-cart` `Deployment` and `Service`, which are no longer used._
This will update the `kustomize/kustomization.yaml` file, which should then look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/spanner
```
Update the Kustomize manifests to target this Spanner database:
```bash
sed -i "s/SPANNER_PROJECT/${PROJECT_ID}/g" components/spanner/kustomization.yaml
sed -i "s/SPANNER_INSTANCE/${SPANNER_INSTANCE_NAME}/g" components/spanner/kustomization.yaml
sed -i "s/SPANNER_DATABASE/${SPANNER_DATABASE_NAME}/g" components/spanner/kustomization.yaml
sed -i "s/SPANNER_DB_USER_GSA_ID/${SPANNER_DB_USER_GSA_ID}/g" components/spanner/kustomization.yaml
```
You can locally render these manifests by running `kubectl kustomize .` as well as deploying them by running `kubectl apply -k .`.
## Note on Spanner connection environment variables
The following environment variables will be used by the `cartservice`, if present:
- `SPANNER_INSTANCE`: defaults to `onlineboutique`, unless specified.
- `SPANNER_DATABASE`: defaults to `carts`, unless specified.
- `SPANNER_CONNECTION_STRING`: defaults to `projects/${SPANNER_PROJECT}/instances/${SPANNER_INSTANCE}/databases/${SPANNER_DATABASE}`. If this variable is defined explicitly, all other environment variables will be ignored.
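A sketch of how these defaults combine into the connection string (illustrative; `SPANNER_PROJECT` stands in for your project ID):

```shell
# How the cartservice's Spanner connection string is resolved from the env vars above.
SPANNER_PROJECT=my-project       # placeholder: your GCP project ID
SPANNER_INSTANCE="${SPANNER_INSTANCE:-onlineboutique}"
SPANNER_DATABASE="${SPANNER_DATABASE:-carts}"
SPANNER_CONNECTION_STRING="${SPANNER_CONNECTION_STRING:-projects/${SPANNER_PROJECT}/instances/${SPANNER_INSTANCE}/databases/${SPANNER_DATABASE}}"
echo "${SPANNER_CONNECTION_STRING}"
```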
## Resources
- [Use Google Cloud Spanner with the Online Boutique sample apps](https://medium.com/google-cloud/f7248e077339)


@@ -0,0 +1,55 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
# cartservice - replace REDIS_ADDR by SPANNER_CONNECTION_STRING for the cartservice Deployment
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: cartservice
spec:
template:
spec:
containers:
- name: server
env:
- name: REDIS_ADDR
$patch: delete
- name: SPANNER_CONNECTION_STRING
value: projects/SPANNER_PROJECT/instances/SPANNER_INSTANCE/databases/SPANNER_DATABASE
# cartservice - add the GSA annotation for the cartservice KSA
- patch: |-
apiVersion: v1
kind: ServiceAccount
metadata:
name: cartservice
annotations:
iam.gke.io/gcp-service-account: SPANNER_DB_USER_GSA_ID
# redis - remove the redis-cart Deployment
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-cart
$patch: delete
# redis - remove the redis-cart Service
- patch: |-
apiVersion: v1
kind: Service
metadata:
name: redis-cart
$patch: delete


@@ -0,0 +1,30 @@
# Exclude the loadgenerator
By default, when you deploy Online Boutique, its [loadgenerator](/src/loadgenerator/) will also be deployed.
You can use this Kustomize component to exclude the loadgenerator.
Note: This Kustomize component has not been tested with [other Kustomize Components](/kustomize/components/) that rely on the loadgenerator.
## Use this component
From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/without-loadgenerator
```
This will update the `kustomize/kustomization.yaml` file, which should then look similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/without-loadgenerator
```
You can then deploy Online Boutique and this component to your cluster using `kubectl apply -k .`. If you just want to render the YAML manifest (without deploying to your cluster), run `kubectl kustomize .`.
Learn more about Online Boutique's kustomize components at [/kustomize](/kustomize#readme).


@@ -0,0 +1,5 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: loadgenerator
$patch: delete


@@ -0,0 +1,18 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- path: delete-loadgenerator.patch.yaml