This document outlines the deployment plan for the OctoCAT Supply Chain Management application to Azure Container Apps, focusing on cost-effectiveness and maintainability.
```
infra/
├── main.bicep                       # Main orchestration file
├── modules/
│   ├── webapps.bicep                # Web Apps (Sites) connected to ACR for running containers
│   └── logAnalytics.bicep           # Log Analytics workspace
└── parameters/
    ├── staging.parameters.json      # Staging environment parameters
    └── production.parameters.json   # Production environment parameters
```
- environmentName
- appName
- acrName
- imageTag
The ACR resource ID must be correctly calculated in the main.bicep file and passed to the webapps.bicep file along with the acrName, imageTag and other necessary variables. Make sure the location is defaulted to the resource group location for all resources.
Make sure you output the URLs of both the API and the Frontend in the main.bicep file so that they can be used as output parameters for the environments in the Actions workflows.
Configure the Web Apps to use Managed Identity (SystemAssigned) to authenticate to the ACR via an AcrPull role assignment to the principalID. Set acrUseManagedIdentityCreds: true for the web apps.
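A minimal Bicep sketch of this wiring (parameter names, API versions, and the web app shape are illustrative assumptions, not the actual module contents; the role definition GUID is the built-in AcrPull role):

```bicep
// Sketch only -- names and parameters are illustrative
param location string = resourceGroup().location
param acrName string

// uniqueString() gives a deterministic postfix per resource group
var webAppName = 'api-${uniqueString(resourceGroup().id)}'

resource acr 'Microsoft.ContainerRegistry/registries@2023-07-01' existing = {
  name: acrName
}

resource webApp 'Microsoft.Web/sites@2023-12-01' = {
  name: webAppName
  location: location
  identity: { type: 'SystemAssigned' }
  properties: {
    siteConfig: {
      acrUseManagedIdentityCreds: true
    }
  }
}

// AcrPull built-in role (7f951dda-4ed3-4680-a7ca-43fe172d538d) for the app's identity
resource acrPull 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(acr.id, webApp.id, 'AcrPull')
  scope: acr
  properties: {
    principalId: webApp.identity.principalId
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d')
    principalType: 'ServicePrincipal'
  }
}

output apiUrl string = 'https://${webApp.properties.defaultHostName}'
```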
- Logging
- Log Analytics integration for monitoring
- Container Registry
- Use Azure Container Registry (azurecr.io) for the container registry (the configuration script will create one)
- builds that push images must push to this Container Registry
- Two Azure Web Apps (Sites)
- both running on B1 SKU
- API web app running API container
- Frontend web app running Frontend container
- Configured to point to ACR
- use respective API or Frontend image and tag
- ensure frontend URL is allowed in CORS for API
- Use `uniqueString()` to add a unique postfix to the web app names
- ensure that API_HOST and API_PORT are correctly configured for the frontend deployment
- SQLite persistence for API: mount a writable volume for the DB file and set `DB_FILE` accordingly (e.g., `/home/site/data/app.db`)
- No role assignments
- do not include ACR Pull role assignments - the Service Principal is already configured for this.
- Avoid circular assignments
- be careful: the API needs frontend info and vice-versa, so ensure that you account for this and don't end up with circular logic!
- Configure CORS in the API container app to allow requests from the Frontend container app
- Use environment variables for dynamic CORS origins based on environment (make sure each origin starts with `https://`!):
  `API_CORS_ORIGINS=https://${frontendAppFqdn},https://${stagingFrontendAppFqdn}`
- Ensure that the Web App for the frontend has these env vars:
API_HOST=<host_of_API_web_app>
API_PORT=80
- Make sure that `API_HOST` does NOT contain `https://` (it is the host, not the URL).
- The API uses SQLite. Configure persistence in hosted environments:
- Set `DB_FILE` to a path under the Web App persistent storage (e.g., `/home/site/data/app.db` on Linux)
- Ensure the containing directory exists or is created on startup
- Optionally enable WAL with `DB_ENABLE_WAL=true` for better concurrency
- Foreign key enforcement is enabled by default; override with `DB_FOREIGN_KEYS=false` if needed
- For containers (compose/k8s), mount a host or managed volume to persist the DB file
Example env vars for API:

```
DB_FILE=/home/site/data/app.db
DB_ENABLE_WAL=true
DB_FOREIGN_KEYS=true
DB_TIMEOUT=30000
```
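Deriving the frontend's `API_HOST` from the API app's URL can be sketched in shell (the hostname value is hypothetical, standing in for a Bicep deployment output):

```shell
# Sketch (hypothetical values): derive frontend app settings from the API app's URL.
API_URL="https://octocat-api-abc123.azurewebsites.net"  # e.g. a Bicep deployment output
API_HOST="${API_URL#https://}"                          # strip the scheme: host only, not a URL
echo "API_HOST=${API_HOST} API_PORT=80"

# The settings could then be applied with something like (not executed here):
# az webapp config appsettings set -g <rg> -n <frontend-app> \
#   --settings API_HOST="$API_HOST" API_PORT=80
```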
```
.github/workflows/
├── build-test.yml   # CI workflow for PRs
└── deploy.yml       # CD workflow for deploying to staging on PR and approval for PROD
```
- Triggered on push to any branch
- Build both API and Frontend
- Run unit tests
- Lint code
- Build Docker images but don't push them
- Build Job
- Triggered on PR
- Build and run tests
- 2 jobs that run in parallel: one for API and one for Frontend
- Build and push Docker image to GHCR - use the short SHA of the commit to version the image
- Staging Job
- requires both build jobs
- Deploy Bicep templates with staging parameters to Staging environment
- Prod Job
- requires Staging job
- wait for approval (configured on Environment)
- Deploy Bicep templates with production parameters to Prod environment
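The short-SHA image tag used by the build jobs can be computed as follows (the SHA value is a stand-in for the `GITHUB_SHA` that Actions provides):

```shell
# Stand-in for the commit SHA that GitHub Actions exposes as GITHUB_SHA
GITHUB_SHA="0123456789abcdef0123456789abcdef01234567"
TAG="${GITHUB_SHA:0:7}"   # short SHA: first 7 characters
echo "$TAG"
# e.g. docker build -t "ghcr.io/<owner>/<repo>/api:${TAG}" ./api
```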
- Setup Prereqs
- `az` CLI: install with `brew install azure-cli`, then run `az login` to log in to your Azure subscription
- `gh` CLI: install with `brew install gh`, then run `gh auth login` to log in to your GitHub account
- CI/CD Pipeline Configuration
- Use OIDC for authentication from workflows:
- Run this script to create a service principal and configure OIDC for the repo.
- The script sets up:
- a Service Principal configured with OIDC
- Resource Groups for staging and prod
- An Azure Container Registry for hosting the images
- Actions Variables needed for workflows
- AZURE_CLIENT_ID: ID of the SP
- AZURE_TENANT_ID: ID of the tenant
- AZURE_SUBSCRIPTION_ID: ID of the subscription
- AZURE_RESOURCE_GROUP: Staging Resource Group
- AZURE_RESOURCE_GROUP_PROD: Prod Resource Group
- AZURE_ACR_NAME: Name of the ACR
- 2 environments in the repo: Staging and Prod, with a manual approval on Prod
- use the step output of the deploy step to update the frontend URL as the URL of the environment
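Setting the environment URL from a deploy step can be sketched as follows (the FQDN is hypothetical; in a real workflow the runner provides `$GITHUB_OUTPUT`):

```shell
# Write the frontend URL as a step output named "frontendUrl"
FRONTEND_FQDN="octocat-frontend-abc123.azurewebsites.net"  # hypothetical deployment output
GITHUB_OUTPUT="${GITHUB_OUTPUT:-$(mktemp)}"                # set by the Actions runner in CI
echo "frontendUrl=https://${FRONTEND_FQDN}" >> "$GITHUB_OUTPUT"
grep frontendUrl "$GITHUB_OUTPUT"
```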
- Deployment Strategy
- for any push, execute the CI build
- for PRs and pushes to `main`:
- build and test the code
- create the docker images using the short SHA for the version tag
- use `az login` to get the credentials for the ACR from the AZURE_ACR_NAME and AZURE_RESOURCE_GROUP_PROD vars
- push the images to the ACR
- deploy to Staging first using the Staging environment
- wait for approval for Prod (configured on Prod environment)
- deploy to Prod using the Prod environment
- ensure that you use the following format for the environments so that the URL is populated in the workflow view:
```yaml
environment:
  name: Staging # or Production
  url: ${{ steps.deploy.outputs.frontendUrl }}
```
- Since SQLite is file-based, implement periodic backups of the DB file location
- On Azure Web Apps, use WebJobs or scheduled workflows to copy `/home/site/data/app.db` to blob storage
- For Docker, copy the volume or bind-mount target to backup storage
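One way to implement the scheduled backup is a cron-triggered workflow. Everything below (workflow name, storage account, secrets, app name) is a hypothetical sketch; Kudu's VFS API serves files under the app's `/home` directory:

```yaml
# Illustrative sketch only -- names and secrets are placeholders
name: backup-sqlite
on:
  schedule:
    - cron: '0 2 * * *'   # daily at 02:00 UTC
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch DB file via Kudu VFS
        run: |
          curl -fsS -u "${{ secrets.KUDU_USER }}:${{ secrets.KUDU_PASSWORD }}" \
            "https://<api-app>.scm.azurewebsites.net/api/vfs/site/data/app.db" -o app.db
      - name: Upload to blob storage
        run: |
          az storage blob upload --account-name <storage-account> \
            --container-name backups --name "app-$(date +%F).db" --file app.db
```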
- Ensure that the bicep files don't have any unused declarations/variables
- Ensure that the `main.bicep` file outputs the hostnames of the API and the frontend, and that these are used (via `${{ steps.<deploy>.outputs }}`) to set the URL for the environment (ensuring that `https://` is prepended)
- Ensure that the AcrPull role assignment is performed correctly for the Web Apps, and that both apps have `acrUseManagedIdentityCreds` set to true
- Ensure that the resources are using the location of the resource group
This plan provides a solid foundation for deploying the OctoCAT Supply Chain Management application to Azure Web Apps using infrastructure as code with Bicep and automated CI/CD with GitHub Actions.
The OctoCAT Supply Chain application supports optional integration with JFrog Artifactory to demonstrate advanced supply chain security capabilities, including:
- Code-to-Cloud Traceability: Complete visibility from source code commit to deployed artifacts
- SLSA Build Level 3 Compliance: Provenance attestations that meet SLSA Level 3 requirements
- Centralized Artifact Management: Unified repository for all container images across your organization
- Enhanced Security Scanning: Integration with JFrog Xray for vulnerability detection and license compliance
- Audit Trail: Complete metadata tracking for compliance and security investigations
This integration is designed to showcase the capabilities described in the GitHub changelog on supply chain security.
Before enabling JFrog integration, ensure you have:
- JFrog Artifactory Instance: Access to a JFrog Artifactory server (cloud or self-hosted)
- Docker Repository: A Docker repository configured in JFrog Artifactory (e.g., `moose-fishsticks`)
- Service Account: JFrog user credentials or API token with push permissions
- GitHub Repository Access: Admin access to configure secrets
- Log in to your JFrog Artifactory instance
- Create a new Docker repository (or use an existing one):
- Navigate to Administration → Repositories → Create Repository
- Select Docker as the package type
- Name it (e.g., `moose-fishsticks`)
- Configure repository settings as needed
- Click Create
- Create a service account or use existing credentials:
- Navigate to Administration → Identity and Access → Users
- Create a new user or use an existing service account
- Assign appropriate permissions (deployer role recommended)
- Generate an API key/token for authentication
Add the following secrets to your GitHub repository:
- Go to Settings → Secrets and variables → Actions
- Click New repository secret and add:
| Secret Name | Description | Example Value |
|---|---|---|
| `JFROG_URL` | Your JFrog Artifactory URL (without `https://`) | `octocat.jfrog.io` |
| `JFROG_REPO` | JFrog Docker repo name | `moose-fishsticks` |
| `JFROG_USER` | JFrog username or service account | `moose06321@github.com` |
| `JFROG_PASSWORD` | JFrog password, API token, access token, or base64 refresh token | `BASE64_REFRESH_TOKEN` |
Security Best Practices:
- Use API tokens instead of passwords when possible
- Create a dedicated service account with minimal required permissions
- Rotate credentials regularly
- Use repository-level secrets for single-repo access
- Use organization-level secrets for multi-repo access
Secrets (required)
| Secret | Description | Example Value |
|---|---|---|
| `JFROG_URL` | JFrog Artifactory URL (without `https://`) | `octocat.jfrog.io` |
| `JFROG_USER` | JFrog user associated with the token | `ci-bot` or `moose0621@github.com` |
| `JFROG_PASSWORD` | Preferred: access/API token (not password). Can also be a base64 refresh token. | `reftkn:...` or raw access token |
Variables (recommended)
| Variable | Description | Default |
|---|---|---|
| `ENABLE_JFROG` | Turn on the JFrog push job | `true` |
| `JFROG_REPO` | JFrog Docker repo name | `moose-fishsticks` |
Token guidance (JFrog recommended)
- Prefer Access Tokens (Administration → Identity & Access → Access Tokens) scoped to required repos.
- Scope: `api:*` (or narrower applied-permissions); set expiry (e.g., 90 days) and rotate.
- Use the token as `JFROG_PASSWORD` (GitHub secret). The username must match the token's subject.
- Refresh tokens (`reftkn:*`) work locally with `scripts/jfrog_push.sh`, but Actions should use access/API tokens for simplicity.
There are two ways to enable JFrog Artifactory integration:
Option A: Repository Variable (Persistent)
Defaults: `JFROG_URL=octocat.jfrog.io`, `JFROG_REPO=moose-fishsticks`
- Go to Settings → Secrets and variables → Actions → Variables tab
- Click New repository variable
- Add variable:
  - Name: `ENABLE_JFROG`
  - Value: `true`
This enables JFrog pushes for all workflow runs.
Option B: Workflow Dispatch (On-Demand)
- Go to Actions → 🐳 Build and Publish
- Click Run workflow
- Check the box: Enable JFrog Artifactory push
- Click Run workflow
This enables JFrog pushes for a single workflow run only.
By default, images are pushed to the moose-fishsticks repository in JFrog. To use a different repository:
- Edit `.github/workflows/build-and-publish.yml`
- Find the `push_to_jfrog` job
- Update the `JFROG_REPO` environment variable:

```yaml
env:
  JFROG_REPO: your-repository-name
```
```shell
# From repo root
export JFROG_URL=octocat.jfrog.io
export JFROG_REPO=moose-fishsticks
export JFROG_USER=<service-account>
export JFROG_PASSWORD=<token-or-password>
TAG=local make docker-push-jfrog

# or single invocation
JFROG_USER=<user> JFROG_PASSWORD=<token> make docker-push-jfrog
```

- The Makefile target builds, tags, and pushes `api` and `frontend` images.
- Override `TAG` to avoid clobbering CI tags (defaults to short git SHA).
- Verify locally: `docker images | grep octocat.jfrog.io/moose-fishsticks`
- Token tips:
  - If you have an access/API token, pass it as `JFROG_PASSWORD` (no decoding).
  - If you have a base64 refresh token (decodes to `reftkn:*`), pass it as-is; use `scripts/jfrog_push.sh`, which tries docker login first, then refresh exchange.
Local
- Use `make docker-push-jfrog` or `scripts/jfrog_push.sh`.
- Accepts `JFROG_PASSWORD` as access token or base64 refresh token.

GitHub Actions
- Set repository secrets: `JFROG_URL`, `JFROG_USER`, `JFROG_PASSWORD` (access/API token).
- Set variables: `ENABLE_JFROG=true`, optional `JFROG_REPO`.
- Run the 🐳 Build and Publish workflow with `enable_jfrog` checked (or the variable enabled).
When JFrog integration is enabled, the workflow:
- Builds Images: Builds and pushes container images to GitHub Container Registry (GHCR) as usual
- Pulls from GHCR: Downloads the newly built images from GHCR
- Authenticates to JFrog: Logs in to JFrog Artifactory using configured secrets
- Retags Images: Creates JFrog-specific tags following the pattern `<JFROG_URL>/<repository>/<service>:<git-sha>` (example: `mycompany.jfrog.io/moose-fishsticks/api:abc123def456`)
- Pushes to JFrog: Uploads the retagged images to JFrog Artifactory
- Generates Provenance: Creates metadata artifacts with complete traceability information including:
- Source image location (GHCR)
- Target registry and repository (JFrog)
- Git commit SHA and reference
- Workflow run details
- Actor and timestamp
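The pull/retag/push steps above can be sketched as follows (the SHA is illustrative; the registry and repo names follow the examples used in this document):

```shell
# Compose the JFrog-specific tag from the documented pattern
JFROG_URL="octocat.jfrog.io"
JFROG_REPO="moose-fishsticks"
SHA="abc123def456"
SRC="ghcr.io/octodemo/octocat_supply-jubilant-fishstick/api:${SHA}"
DST="${JFROG_URL}/${JFROG_REPO}/api:${SHA}"
echo "$DST"
# docker pull "$SRC" && docker tag "$SRC" "$DST" && docker push "$DST"   # requires docker login
```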
After a successful workflow run with JFrog enabled:
- Check Workflow Output:
- Go to Actions → Select the workflow run
- Expand the "Push to JFrog Artifactory" job
- Look for success messages like: `✅ Successfully pushed api to JFrog Artifactory`
- Verify in JFrog Artifactory:
- Log in to your JFrog instance
- Navigate to Artifactory → Artifacts
- Browse to your repository (e.g., `moose-fishsticks`)
- Confirm the images are present with correct tags
- Download Provenance Metadata:
- In the workflow run, scroll to Artifacts
- Download `jfrog-metadata-api` and `jfrog-metadata-frontend`
- Review the JSON files for complete traceability information
Every image pushed to JFrog includes complete provenance information:
- Source Code: Git commit SHA and repository
- Build Process: GitHub Actions workflow and run ID
- Artifacts: Links to both GHCR and JFrog locations
- Identity: Who triggered the build and when
This creates an unbroken chain of custody from code commit to deployed artifact.
The workflow meets SLSA Level 3 requirements:
- ✅ Build platform: GitHub Actions (trusted, hardened build system)
- ✅ Provenance: Automatically generated with complete build metadata
- ✅ Immutability: Images tagged with Git SHA are immutable
- ✅ Isolation: Each build runs in isolated, ephemeral runners
- ✅ Hermetic: Build dependencies are versioned and pinned
JFrog Artifactory enables:
- Xray Scanning: Automatic vulnerability scanning on push
- License Compliance: Detection of license violations
- Policy Enforcement: Block deployment of non-compliant artifacts
- Audit Logs: Complete history of all artifact access
Issue: "Error: Cannot perform an interactive login from a non TTY device"
- Solution: Ensure secrets are correctly configured and available to the workflow
- Check: Verify secret names match exactly (case-sensitive)
Issue: "Error: denied: unauthorized"
- Solution: Verify JFrog credentials have push permissions
- Check: Test credentials manually with `docker login <JFROG_URL>`
Issue: "Error: repository not found"
- Solution: Update the `JFROG_REPO` variable to match your actual repository name
- Check: Confirm the repository exists in JFrog Artifactory
Issue: Images pushed but not visible in JFrog
- Solution: Check repository permissions and virtual repository configuration
- Check: Verify you're looking at the correct repository (local vs. virtual)
To disable JFrog integration:
If using repository variable:
- Go to Settings → Secrets and variables → Actions → Variables
- Find the `ENABLE_JFROG` variable
- Delete it or change its value to `false`
If using workflow dispatch:
- Simply don't check the "Enable JFrog Artifactory push" option when running workflows
The workflow will continue to build and push to GHCR as normal, skipping JFrog steps.
```
# Workflow execution with JFrog enabled:
1. Build API → Push to GHCR (with SLSA attestations)
2. Build Frontend → Push to GHCR (with SLSA attestations)
3. Push to JFrog Artifactory:
   a. Pull from GHCR: ghcr.io/octodemo/octocat_supply-jubilant-fishstick/api:abc123def
   b. Login to JFrog: mycompany.jfrog.io
   c. Retag: mycompany.jfrog.io/moose-fishsticks/api:abc123def
   d. Push to JFrog ✅
   e. Generate provenance metadata ✅
4. Repeat 3a-e for Frontend
5. Trigger Deployment (optional, main branch only)
```