kustomize/components/alloydb/README.md
# Integrate Online Boutique with AlloyDB

By default the `cartservice` stores its data in an in-cluster Redis database.
Using a fully managed database service outside your GKE cluster, such as [AlloyDB](https://cloud.google.com/alloydb), can bring more resiliency and more security.

Note that because of AlloyDB's current connectivity model, you'll need to run all of the following from a VM with VPC access to the network you want to use (out of the box, this is the `default` network). Cloud Shell does not work, because transitive VPC peering is not supported.

## Provision an AlloyDB database and the supporting infrastructure

Set the environment variables below before starting. They should be set in a `.bashrc` or similar, since some of them are also used by the application itself. Default values are supplied in this README, but any of them can be changed. Anything in `<>` must be replaced.

```bash
# PROJECT_ID should be set to the ID of the project that was created to hold the demo.
PROJECT_ID=<project_id>

# Pick a region near you that also has AlloyDB available. See available regions: https://cloud.google.com/alloydb/docs/locations
REGION=<region>
USE_GKE_GCLOUD_AUTH_PLUGIN=True
ALLOYDB_NETWORK=default
ALLOYDB_SERVICE_NAME=onlineboutique-network-range
ALLOYDB_CLUSTER_NAME=onlineboutique-cluster
ALLOYDB_INSTANCE_NAME=onlineboutique-instance

# **Note:** The primary and read IP addresses can only be set after the instances are created.
# The commands to capture them are included below; it is also a good idea to run those
# commands and record the IP addresses in your .bashrc.
ALLOYDB_PRIMARY_IP=<ip set below after instance created>
ALLOYDB_READ_IP=<ip set below after instance created>

ALLOYDB_DATABASE_NAME=carts
ALLOYDB_TABLE_NAME=cart_items
ALLOYDB_USER_GSA_NAME=alloydb-user-sa
ALLOYDB_USER_GSA_ID=${ALLOYDB_USER_GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
CARTSERVICE_KSA_NAME=cartservice
ALLOYDB_SECRET_NAME=alloydb-secret

# PGPASSWORD needs to be set in order to run psql from the CLI easily. Its value
# must match the password stored behind the Secret mentioned above.
PGPASSWORD=<password>
```
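Before continuing, it can help to fail fast if any of these variables is missing. A minimal sketch (the variable list below is illustrative, not exhaustive):

```shell
#!/usr/bin/env bash
# Sketch: abort early if any required variable is unset or empty.
# Adjust the list to match the variables you actually rely on.
required_vars=(PROJECT_ID REGION ALLOYDB_NETWORK ALLOYDB_CLUSTER_NAME ALLOYDB_INSTANCE_NAME)

missing=()
for var in "${required_vars[@]}"; do
  # ${!var} is bash indirect expansion: the value of the variable named by $var.
  if [ -z "${!var}" ]; then
    missing+=("$var")
  fi
done

if [ "${#missing[@]}" -gt 0 ]; then
  echo "Missing variables: ${missing[*]}" >&2
else
  echo "All required variables are set."
fi
```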

To provision an AlloyDB instance, follow these instructions:
```bash
gcloud services enable alloydb.googleapis.com
gcloud services enable servicenetworking.googleapis.com
gcloud services enable secretmanager.googleapis.com

# Store the database credentials behind the Secret. Replace <password> with whatever you
# want to use as the credentials for the database. Don't use $ in the password.
echo <password> | gcloud secrets create ${ALLOYDB_SECRET_NAME} --data-file=-

# Set up the required service connection.
gcloud compute addresses create ${ALLOYDB_SERVICE_NAME} \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --description="Online Boutique Private Services" \
    --network=${ALLOYDB_NETWORK}

gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=${ALLOYDB_SERVICE_NAME} \
    --network=${ALLOYDB_NETWORK}

gcloud alloydb clusters create ${ALLOYDB_CLUSTER_NAME} \
    --region=${REGION} \
    --password=${PGPASSWORD} \
    --disable-automated-backup \
    --network=${ALLOYDB_NETWORK}

gcloud alloydb instances create ${ALLOYDB_INSTANCE_NAME} \
    --cluster=${ALLOYDB_CLUSTER_NAME} \
    --region=${REGION} \
    --cpu-count=4 \
    --instance-type=PRIMARY

gcloud alloydb instances create ${ALLOYDB_INSTANCE_NAME}-replica \
    --cluster=${ALLOYDB_CLUSTER_NAME} \
    --region=${REGION} \
    --cpu-count=4 \
    --instance-type=READ_POOL \
    --read-pool-node-count=2

# Grab and store the IP addresses of the primary and read instances.
# Don't forget to set these two values in the environment for later use.
ALLOYDB_PRIMARY_IP=$(gcloud alloydb instances list --region=${REGION} --cluster=${ALLOYDB_CLUSTER_NAME} --filter="INSTANCE_TYPE:PRIMARY" --format=flattened | sed -nE "s/ipAddress:\s*(.*)/\1/p")
ALLOYDB_READ_IP=$(gcloud alloydb instances list --region=${REGION} --cluster=${ALLOYDB_CLUSTER_NAME} --filter="INSTANCE_TYPE:READ_POOL" --format=flattened | sed -nE "s/ipAddress:\s*(.*)/\1/p")

psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -c "CREATE DATABASE ${ALLOYDB_DATABASE_NAME}"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_DATABASE_NAME} -c "CREATE TABLE ${ALLOYDB_TABLE_NAME} (userId text, productId text, quantity int, PRIMARY KEY(userId, productId))"
psql -h ${ALLOYDB_PRIMARY_IP} -U postgres -d ${ALLOYDB_DATABASE_NAME} -c "CREATE INDEX cartItemsByUserId ON ${ALLOYDB_TABLE_NAME}(userId)"
```
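The `sed` pipeline used to capture the IP addresses extracts the `ipAddress:` field from `gcloud`'s `--format=flattened` output. You can check the extraction against a sample of that output without touching `gcloud` at all (the sample text below is illustrative, not real command output):

```shell
# Sketch: parse an ipAddress field out of flattened-style output.
# sample_output stands in for `gcloud alloydb instances list ... --format=flattened`.
sample_output="name: projects/demo/locations/us-central1/clusters/onlineboutique-cluster/instances/onlineboutique-instance
ipAddress: 10.77.0.5
instanceType: PRIMARY"

# -n suppresses default printing; the s///p prints only lines where the substitution matched.
primary_ip=$(printf '%s\n' "$sample_output" | sed -nE "s/ipAddress:\s*(.*)/\1/p")
echo "$primary_ip"   # 10.77.0.5
```

Note that `\s` in the pattern relies on GNU sed; on BSD/macOS sed you would use `[[:space:]]` instead.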

_Note: It can take more than 20 minutes for the AlloyDB instances to be created._

## Grant the `cartservice`'s service account access to the AlloyDB database

**Important note:** Your GKE cluster should have [Workload Identity enabled](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable).

As a good practice, create a dedicated, least-privilege Google Service Account that allows the `cartservice` to communicate with the AlloyDB database and fetch the database password from Secret Manager:
```bash
gcloud iam service-accounts create ${ALLOYDB_USER_GSA_NAME} \
    --display-name=${ALLOYDB_USER_GSA_NAME}

gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/alloydb.client
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${ALLOYDB_USER_GSA_ID} --role=roles/secretmanager.secretAccessor

gcloud iam service-accounts add-iam-policy-binding ${ALLOYDB_USER_GSA_ID} \
    --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/${CARTSERVICE_KSA_NAME}]" \
    --role roles/iam.workloadIdentityUser
```
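The `--member` flag in the last command uses the Workload Identity pool member format `serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]`. A small sketch of how that string is assembled from the variables defined earlier (the placeholder values and the `default` namespace are assumptions matching the command above):

```shell
# Sketch: build the Workload Identity member string for the cartservice KSA.
PROJECT_ID=my-project            # placeholder value for illustration
CARTSERVICE_KSA_NAME=cartservice
NAMESPACE=default                # the namespace Online Boutique is deployed to

member="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${CARTSERVICE_KSA_NAME}]"
echo "$member"   # serviceAccount:my-project.svc.id.goog[default/cartservice]
```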

## Deploy Online Boutique connected to an AlloyDB database

To automate the deployment of Online Boutique integrated with AlloyDB, you can leverage the following [Kustomize](../..) variation.

From the `kustomize/` folder at the root level of this repository, execute this command:
```bash
kustomize edit add component components/alloydb
```
_**Note:** this Kustomize component will also remove the `redis-cart` `Deployment` and `Service`, which are no longer used._

This will update the `kustomize/kustomization.yaml` file, which should be similar to:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base
components:
- components/alloydb
```

Update the Kustomize manifest to target this AlloyDB database:
```bash
sed -i "s/PROJECT_ID_VAL/${PROJECT_ID}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_PRIMARY_IP_VAL/${ALLOYDB_PRIMARY_IP}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_USER_GSA_ID/${ALLOYDB_USER_GSA_ID}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_DATABASE_NAME_VAL/${ALLOYDB_DATABASE_NAME}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_TABLE_NAME_VAL/${ALLOYDB_TABLE_NAME}/g" components/alloydb/kustomization.yaml
sed -i "s/ALLOYDB_SECRET_NAME_VAL/${ALLOYDB_SECRET_NAME}/g" components/alloydb/kustomization.yaml
```
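After running the substitutions, it's worth checking that no placeholder was left behind. A sketch of that check, using a temporary file in place of the real `components/alloydb/kustomization.yaml` (GNU `sed -i` is assumed, as in the commands above):

```shell
# Sketch: substitute a placeholder and confirm no *_VAL tokens remain.
# A temporary file stands in for components/alloydb/kustomization.yaml.
tmpfile=$(mktemp)
printf 'value: PROJECT_ID_VAL\n' > "$tmpfile"

PROJECT_ID=my-project   # placeholder value for illustration
sed -i "s/PROJECT_ID_VAL/${PROJECT_ID}/g" "$tmpfile"

# grep -q exits non-zero when nothing matches, i.e. no placeholders remain.
if grep -q "_VAL" "$tmpfile"; then
  echo "Unreplaced placeholders remain:" >&2
  grep "_VAL" "$tmpfile" >&2
else
  echo "All placeholders replaced."
fi
rm -f "$tmpfile"
```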

You can render these manifests locally by running `kubectl kustomize .`, and deploy them by running `kubectl apply -k .`.

## Extra cleanup steps
```bash
gcloud compute addresses delete ${ALLOYDB_SERVICE_NAME} --global

# --force takes care of cleaning up the instances inside the cluster automatically.
gcloud alloydb clusters delete ${ALLOYDB_CLUSTER_NAME} --force --region ${REGION}

gcloud iam service-accounts delete ${ALLOYDB_USER_GSA_ID}

gcloud secrets delete ${ALLOYDB_SECRET_NAME}
```
kustomize/components/alloydb/kustomization.yaml
```yaml
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
# cartservice - replace REDIS_ADDR with ALLOYDB_PRIMARY_IP for the cartservice Deployment.
# Potentially later we'll factor in splitting traffic to primary/read pool, but for now
# we'll just manage the primary instance.
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cartservice
    spec:
      template:
        spec:
          containers:
          - name: server
            env:
            - name: REDIS_ADDR
              $patch: delete
            - name: ALLOYDB_PRIMARY_IP
              value: ALLOYDB_PRIMARY_IP_VAL
            - name: ALLOYDB_DATABASE_NAME
              value: ALLOYDB_CARTS_DATABASE_NAME_VAL
            - name: ALLOYDB_TABLE_NAME
              value: ALLOYDB_CARTS_TABLE_NAME_VAL
            - name: ALLOYDB_SECRET_NAME
              value: ALLOYDB_SECRET_NAME_VAL
            - name: PROJECT_ID
              value: PROJECT_ID_VAL
# cartservice - add the GSA annotation for the cartservice KSA.
- patch: |-
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cartservice
      annotations:
        iam.gke.io/gcp-service-account: ALLOYDB_USER_GSA_ID
# productcatalogservice - replace the ALLOYDB environment variables.
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: productcatalogservice
    spec:
      template:
        spec:
          containers:
          - name: server
            env:
            - name: ALLOYDB_CLUSTER_NAME
              value: ALLOYDB_CLUSTER_NAME_VAL
            - name: ALLOYDB_INSTANCE_NAME
              value: ALLOYDB_INSTANCE_NAME_VAL
            - name: ALLOYDB_DATABASE_NAME
              value: ALLOYDB_PRODUCTS_DATABASE_NAME_VAL
            - name: ALLOYDB_TABLE_NAME
              value: ALLOYDB_PRODUCTS_TABLE_NAME_VAL
            - name: ALLOYDB_SECRET_NAME
              value: ALLOYDB_SECRET_NAME_VAL
            - name: PROJECT_ID
              value: PROJECT_ID_VAL
            - name: REGION
              value: REGION_VAL
# productcatalogservice - add the GSA annotation for the productcatalogservice KSA.
- patch: |-
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: productcatalogservice
      annotations:
        iam.gke.io/gcp-service-account: ALLOYDB_USER_GSA_ID
# redis - remove the redis-cart Deployment.
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis-cart
    $patch: delete
# redis - remove the redis-cart Service.
- patch: |-
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-cart
    $patch: delete
```