Installation and initial setup
Please ensure your local development machine meets the Prerequisites for Appcket.
First time setup
Docker Compose is used in a limited capacity: it lets you run containers in local development mode without executing individual Docker commands. You will run a local registry container that lets you push and host the images k8s needs in order to start Services. Using Kubernetes locally allows us to spin up services and create resources similar to production. Additionally, your local Postgres database runs in a container started by Docker Compose.
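If you ever need to push an image into that local registry by hand, the commands look roughly like the sketch below. This is only illustrative: the registry address `localhost:5000` (a common default for a Compose-run registry) and the image name/build context are assumptions, and the bootstrap script and Helm chart handle image publishing for the stock services.

```bash
# Build an image and tag it for the local registry
# (localhost:5000 and the ./api build context are assumptions; adjust for your setup)
docker build -t localhost:5000/{PROJECT_MACHINE_NAME}/api:local ./api

# Push it so the local Kubernetes cluster can pull it when starting the Service's pods
docker push localhost:5000/{PROJECT_MACHINE_NAME}/api:local
```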
The following steps need to be performed the first time on your local development machine.
Setup Directories and Mounts
Wherever {PROJECT_MACHINE_NAME} appears in the commands below, replace it with your own project's machine name.
- Using Windows Terminal in your Ubuntu WSL distribution, create a ~/dev directory if one is not already created
  ```bash
  mkdir ~/dev
  cd ~/dev
  ```
- Fork and clone the appcket-org repo into ~/dev/{PROJECT_MACHINE_NAME}
  ```bash
  git clone https://github.com/appcket/appcket-org.git -b main {PROJECT_MACHINE_NAME}
  ```
- Using Windows Terminal, create a bind mount from your Ubuntu home `~/dev` directory to the Docker host mount path (a quick way to verify the mount is sketched after this list):

  ```bash
  sudo mkdir -p /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
  sudo mount --bind ~/dev/{PROJECT_MACHINE_NAME} /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
  ```

  - You need to do these steps every time you restart your computer :( but there is a handy start.sh script in deployment/environment/local you can use that does this plus some other local setup commands for you.
- Use this path for the `volume.hostPath.path` value for mounting volumes in your k8s pods: `/run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}`. These paths are already set in the yaml resource-manifest files. This is just documented here as a note and something to be aware of.
  - See this github issue comment for more info
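To confirm the bind mount is active, you can compare the two directories from your Ubuntu WSL shell; both paths come straight from the steps above, so nothing here is an assumption beyond your project name.

```bash
# Prints the mount entry; no output means the bind mount is not active
findmnt /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}

# Both listings should show the same repository contents
ls ~/dev/{PROJECT_MACHINE_NAME}
ls /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
```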
Add hosts Entries
- Hardcode the following host entries in your hosts file (on Windows it's at C:\Windows\System32\drivers\etc\hosts and on Linux it's usually /etc/hosts)

  ```
  127.0.0.1 {PROJECT_MACHINE_NAME}.localhost accounts.{PROJECT_MACHINE_NAME}.localhost app.{PROJECT_MACHINE_NAME}.localhost api.{PROJECT_MACHINE_NAME}.localhost
  ```
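After saving the hosts file, a quick optional check from your Ubuntu WSL shell is to confirm the names resolve to the loopback address.

```bash
# Each of these should report 127.0.0.1 (use `ping -n 1 ...` from a Windows shell instead)
ping -c 1 {PROJECT_MACHINE_NAME}.localhost
ping -c 1 api.{PROJECT_MACHINE_NAME}.localhost
ping -c 1 app.{PROJECT_MACHINE_NAME}.localhost
ping -c 1 accounts.{PROJECT_MACHINE_NAME}.localhost
```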
Run the Bootstrap Script
- A deployment/bootstrap.sh script is provided that will perform the main setup steps for a fresh appcket project.
- Edit the `deployment/bootstrap.sh` file and change the values of the `PROJECT_MACHINE_NAME` and `PROJECT_HUMAN_NAME` variables (an example follows this list).
- Then run the script from inside the deployment folder

  ```bash
  cd ~/dev/{PROJECT_MACHINE_NAME}/deployment
  chmod +x ./bootstrap.sh
  ./bootstrap.sh
  ```
Running the bootstrap script will take some time depending on your internet connection speed.
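The variable edits at the top of `deployment/bootstrap.sh` look something like the sketch below; the values are placeholders, so substitute your own project names.

```bash
# deployment/bootstrap.sh -- example values only
PROJECT_MACHINE_NAME="myproject"   # lowercase machine name used for the namespace, hostnames, and chart
PROJECT_HUMAN_NAME="My Project"    # human-readable display name
```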
Create Tables and Seed Data
MikroORM is used in the api to interact with the database. We also use the MikroORM CLI to migrate and seed the initial sample data for the application. Accounts (Keycloak) data was set up when you ran the bootstrap script.
```bash
cd database   # if you are already inside the deployment folder
pnpm install
pnpm schema-seed
pnpm post-seed
```
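The `schema-seed` and `post-seed` scripts are defined in the database package's `package.json`. As a rough sketch only (an assumption; check the actual script definitions), they wrap MikroORM CLI commands along these lines:

```bash
# Create the schema in Postgres from the entity definitions
npx mikro-orm schema:create --run

# Run the seeders to insert the initial sample data
npx mikro-orm seeder:run
```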
Start Containers
- Start containers by installing Appcket to the cluster via the Helm chart in the deployment folder

  ```bash
  cd ../
  helm package helm
  helm install {PROJECT_MACHINE_NAME} ./{PROJECT_MACHINE_NAME}-0.1.0.tgz -n {PROJECT_MACHINE_NAME} -f helm/values-local.yaml --dry-run --debug
  helm install {PROJECT_MACHINE_NAME} ./{PROJECT_MACHINE_NAME}-0.1.0.tgz -n {PROJECT_MACHINE_NAME} -f helm/values-local.yaml
  ```
- Exec into the running pods, `pnpm start` them up, and get to work. Be sure to run `pnpm install` to install npm modules.

  ```bash
  kubectl exec -n {PROJECT_MACHINE_NAME} -it svc/api -- bash
  kubectl exec -n {PROJECT_MACHINE_NAME} -it svc/app -- bash
  kubectl exec -n {PROJECT_MACHINE_NAME} -it svc/marketing -- bash
  ```

  - Source code is mounted into the `/src` folder inside each container.
  - You can also now use VS Code Remote Containers to work on the volume-mounted files directly in the container
    - Shift + Ctrl + P
    - Attach to Running Container
    - Select a dev container: `k8s_app_app-...` or `k8s_api_api-...`
  - Once you have an active shell in each container, run `pnpm install` to install dependencies and then `pnpm start:debug` to start the api in debug mode, `pnpm start` for the app, and `pnpm start` for the marketing site.
  - You can also select `Run -> Start Debugging (F5)` to run each app in VS Code.
  - The Keycloak/accounts server will start automatically (you need to give the accounts service a couple of minutes to completely load).
- Access these local containers in your browser (a quick `kubectl`/`curl` sanity check is sketched after this list)
  - Marketing: `https://{PROJECT_MACHINE_NAME}.localhost`
  - API: `https://api.{PROJECT_MACHINE_NAME}.localhost`
  - App: `https://app.{PROJECT_MACHINE_NAME}.localhost`
    - Log in with any username below and `abc123` as the password
      - art (Manager role)
      - ryan (Captain role)
      - kel (Teammate role)
      - he (Teammate role)
      - lloyd (Spectator role)
    - e.g. the Spectator role is view-only, so the lloyd user will only be able to see but can't edit or create anything
  - Accounts: `https://accounts.{PROJECT_MACHINE_NAME}.localhost`
    - The default admin account username and password is `admin` / `admin`
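As referenced above, a quick way to sanity-check the install is to confirm the pods reached Running status and that the ingress answers. The `-k` flag is included on the assumption that the local certificate is not trusted by your system, and the api will only respond once you have started it inside its container.

```bash
# All pods in the project namespace should eventually be Running
kubectl get pods -n {PROJECT_MACHINE_NAME}

# Hit the API through the local ingress (-k skips TLS verification for an untrusted local cert)
curl -k https://api.{PROJECT_MACHINE_NAME}.localhost
```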
After Initial Setup
After going through the steps above for the initial setup, you can run the start.sh script, which executes the commands you need every time you restart your computer.
```bash
sudo mkdir -p /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
sudo mount --bind ~/dev/{PROJECT_MACHINE_NAME} /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
cd ./deployment
chmod +x ./environment/local/start.sh
./environment/local/start.sh
```
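If you want the whole restart routine as one command, a small shell function in your Ubuntu `~/.bashrc` can wrap the steps above. This is optional and purely illustrative; the function name is made up, and the commands are simply the ones listed above.

```bash
# Append to ~/.bashrc, then reload with: source ~/.bashrc
# "project-up" is an illustrative name; substitute your real project directory
project-up() {
  sudo mkdir -p /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
  sudo mount --bind ~/dev/{PROJECT_MACHINE_NAME} /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
  cd ~/dev/{PROJECT_MACHINE_NAME}/deployment && ./environment/local/start.sh
}
```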
Database Schema Changes
TODO: Change this for use with Mikro-ORM specific steps.
As you develop your application, you will need to update your data model. This is done by modifying the deployment/database/schema.prisma file and then generating/seeding those changes.
```bash
./node_modules/.bin/dotenv -e .env.local -- ./node_modules/.bin/prisma generate
./node_modules/.bin/dotenv -e .env.local -- ./node_modules/.bin/prisma migrate dev
./node_modules/.bin/dotenv -e .env.local -- ./node_modules/.bin/prisma db seed
```
Whenever you change deployment/database/prisma/schema.prisma, you should copy/paste into api/prisma/schema.prisma and generate there too so the api will also have the latest schema. TODO: Need to find a way to automate this process to keep both in sync where changing one updates the other.
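Until the TODO above is resolved, the equivalent MikroORM workflow is roughly the following. This is a sketch using the stock MikroORM CLI; the exact pnpm script names and config paths in this project may differ.

```bash
# Generate a migration from the diff between your entities and the current schema
npx mikro-orm migration:create

# Apply pending migrations to the local database
npx mikro-orm migration:up

# Re-run the seeders if the new tables need sample data
npx mikro-orm seeder:run
```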
Teardown/Delete Project (if necessary)
Executing the following commands will delete your local database and any data you had for this project!
```bash
cd ./deployment/environment/local
docker-compose -p {PROJECT_MACHINE_NAME} down
helm delete {PROJECT_MACHINE_NAME} -n {PROJECT_MACHINE_NAME}
```

- Delete Kubernetes secrets

  ```bash
  kubectl delete secret database-secret -n {PROJECT_MACHINE_NAME}
  kubectl delete secret accounts-secret -n {PROJECT_MACHINE_NAME}
  kubectl delete secret api-keycloak-client-secret -n {PROJECT_MACHINE_NAME}
  ```
- Delete Kubernetes project namespace
  ```bash
  kubectl delete namespace {PROJECT_MACHINE_NAME}
  ```
- Delete Docker database volume
  ```bash
  docker volume rm {PROJECT_MACHINE_NAME}-database
  ```
- Unmount project directory
  ```bash
  sudo umount /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
  sudo rm -rf /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/dev/{PROJECT_MACHINE_NAME}
  ```
- TODO: delete project-specific images from local registry (can manually delete unused images using Docker Desktop GUI)
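Until that TODO is addressed, you can at least list what the local registry is still holding via the standard registry HTTP API. The `localhost:5000` address is an assumption (a common default for a locally run registry), and actually deleting manifests requires the registry to have deletes enabled.

```bash
# List repositories stored in the local registry
curl http://localhost:5000/v2/_catalog

# List tags for one repository; replace the repository name with one from the catalog output
curl http://localhost:5000/v2/{PROJECT_MACHINE_NAME}/api/tags/list
```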