
Commit 62df34e

Authored by cicorias and Copilot
feat: Add Azure Container Apps Job GPU Infrastructure (#14)
* **feat: add CUDA/gsplat environment check script**

  Add `scripts/gsplat_check` — a lightweight Python tool (managed by uv) that verifies whether the current device can run the gsplat 3DGS training backend.

  Checks performed:
  - CUDA GPU detection via nvidia-smi + PyTorch tensor smoke-test
  - gsplat library import and rasterization kernel validation (8 Gaussians)
  - External tool availability (nvidia-smi, python3, ffmpeg, colmap)

  Reports a structured pass/fail verdict similar to the Rust preflight binary.

  Usage: `cd scripts/gsplat_check && uv run main.py`

  Also adds a reference to the new tool in the root README documentation section.

* **Apply suggestions from code review**

* **feat: add Azure Container App Job infrastructure for GPU 3DGS processing**

  Add complete azd-based infrastructure for running the 3DGS video processor as a GPU Container App Job on Azure.

  Infrastructure (`infra/`):
  - Bicep modules for ACR, Storage, Container Apps Environment (GPU T4 profile), Container Apps Job, Managed Identity, Log Analytics, and RBAC
  - Standalone RBAC deployment (`infra/rbac/`) for privileged-user separation
  - Parameter bindings for azd environment variables

  Scripts (`infra/scripts/`):
  - `assign-rbac.sh` / `verify-rbac.sh` / `cleanup-rbac.sh` for RBAC management
  - `hooks/acr-build.sh`: builds the GPU image via ACR Tasks with a minimal staging dir
  - `hooks/preprovision.sh`: captures the deployer identity, runs an RBAC preflight
  - `hooks/postprovision.sh`: builds the image and updates the job post-provision
  - `run-job.sh`: starts the job with `--wait`/`--logs` support
  - `deploy-job.sh`: rebuilds and redeploys the image
  - `upload-testdata.sh`: uploads South Building test videos to blob storage

  Bug fixes required for GPU batch mode:
  - `src/azure/sdk.rs`: pass AZURE_CLIENT_ID to ManagedIdentityCredential for user-assigned managed identity support in Container Apps
  - `src/backends/gsplat.rs`: fix COLMAP sparse dir and images dir resolution for batch mode (TEMP_PATH-based layout vs workspace-relative)
  - `src/backends/gsplat.rs`: add an inline PLY-to-SPLAT converter fallback when external converter tools are unavailable
  - `scripts/gsplat_train.py`: fix the cameras.bin parser to read the correct number of parameters per camera model instead of reading to EOF
  - `.dockerignore`: exclude .venv dirs, output/, infra/ from the build context; keep the Dockerfile for ACR Tasks

* **fix: disable ANSI color codes in container log output**

  Use `std::io::IsTerminal` to detect non-TTY environments (containers, log aggregators) and disable ANSI escape sequences. This ensures clean text output in Azure Container Apps Log Analytics and other log-collection systems.

* **docs: add Azure Container Apps Job GPU deployment guide**

  Add a comprehensive section to DEPLOYMENT.md covering:
  - Quick-start steps for azd-based GPU deployment
  - What resources get provisioned (table of Bicep modules)
  - Detailed RBAC requirements with specific permissions, role IDs, and clear guidance on what fails without each role
  - Deployer vs Managed Identity permission separation
  - Test data upload and job execution instructions
  - Configuration variables and GPU region availability
  - Scripts reference with privilege requirements
  - Troubleshooting guide for common failures

* **docs: add changelog for Azure Container Apps GPU infrastructure**

* **fix: enable BuildKit inline cache for ACR builds**

  Add the `BUILDKIT_INLINE_CACHE=1` build arg to embed cache metadata in pushed images. While ACR Tasks don't persist the Docker layer cache between runs (each run gets a fresh VM), the inline cache metadata enables faster rebuilds when using BuildKit-aware builders.

* **fix: address PR #14 review feedback (7 items)**

  1. `gsplat_train.py`: fix COLMAP camera model parameter counts (OPENCV_FISHEYE=8, FULL_OPENCV=12, FOV=5, THIN_PRISM_FISHEYE=12). Fail fast on an unknown model_id instead of defaulting to 4.
  2. `gsplat.rs`: fix the PLY-to-SPLAT converter fallback order — try the configured converter binary before the inline Python fallback.
  3. `gsplat.rs`: remove the unused numpy import from the inline Python script; keep it stdlib-only (struct + sys) for maximum portability.
  4. `run-job.sh`: apply BATCH_INPUT_PREFIX to the job env vars via `az containerapp job update` before starting execution.
  5. `storage.bicep`: default `allowSharedKeyAccess` to false; only enable it when `useStorageKeys=true` (reduces the blast radius of leaked keys).
  6. `main.bicep` + `storage.bicep`: wire `storageConnectionString` from the storage module to the job module when `useStorageKeys=true`, preventing empty-secret deployment failures.
  7. `sdk.rs`: filter out empty AZURE_CLIENT_ID strings before constructing ManagedIdentityCredentialOptions to prevent confusing auth errors.

* **feat: add local Docker build+push for fast incremental rebuilds**

  Add `local-build.sh` as an alternative to `acr-build.sh` for development:
  - Uses Docker BuildKit with a persistent layer cache
  - Incremental rebuilds (src/ change only) take ~2.5 min vs ~35 min on ACR
  - Push uses delta layers — only changed layers are uploaded (~12 s vs a full push)
  - `deploy-job.sh` supports a `--local` flag to select the build method

  Build comparison (src/ change only):
  - ACR Tasks: ~35 min (no cache between runs, fresh VM each time)
  - Local+push: ~2.5 min build + ~12 s push = ~3 min total

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
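The camera-model fix in review item 1 relies on COLMAP's fixed per-model parameter counts. As an illustration only (a hypothetical shell helper, not the project's actual Python code; the counts for the four models named above come from the commit, the rest follow COLMAP's documented camera models):

```shell
# Hypothetical lookup mirroring the fix: parameters per COLMAP camera model.
# Unknown models fail fast instead of defaulting to 4.
param_count() {
  case "$1" in
    SIMPLE_PINHOLE)              echo 3 ;;
    PINHOLE|SIMPLE_RADIAL)       echo 4 ;;
    RADIAL|FOV)                  echo 5 ;;
    OPENCV|OPENCV_FISHEYE)       echo 8 ;;
    FULL_OPENCV|THIN_PRISM_FISHEYE) echo 12 ;;
    *) echo "unknown camera model: $1" >&2; return 1 ;;
  esac
}

param_count OPENCV_FISHEYE   # prints: 8
```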
1 parent 4bb9b10 commit 62df34e

43 files changed

Lines changed: 3242 additions & 28 deletions


.dockerignore

Lines changed: 15 additions & 1 deletion

```diff
@@ -50,7 +50,6 @@ flycheck*/
 
 # Docker files (don't need these in the build context)
 .dockerignore
-Dockerfile
 docker-compose*.yml
 
 # Scripts not needed in container
@@ -59,6 +58,8 @@ scripts/download-*.sh
 scripts/generate-*.sh
 scripts/e2e-*.sh
 scripts/run-*.sh
+scripts/gsplat_check/
+scripts/e2e/
 
 # Python cache
 __pycache__/
@@ -67,6 +68,19 @@ __pycache__/
 *.pyd
 .Python
 
+# Python virtual environments
+.venv/
+venv/
+**/.venv/
+**/venv/
+.e2e-venv/
+
+# Output and infra directories
+output/
+infra/
+.azure/
+container-test/
+
 # IDE and workspace files
 .copilot/
 *.workspace
```
.github/instructions/docker-batch-e2e.instructions.md

Lines changed: 0 additions & 3 deletions

````diff
@@ -194,9 +194,6 @@ Mount the project's `container-test/config.yaml` (a **file**, not a directory):
 -v /absolute/path/to/container-test/config.yaml:/config/config.yaml:ro
 ```
 
-⚠️ **Pitfall:** `container-test/config.1.yaml` is a directory (created by a prior bad mount),
-not a file. Always use `container-test/config.yaml`.
-
 ## Batch mode vs Watch mode (in Docker)
 
 | Aspect | Batch Mode | Watch Mode |
````

.gitignore

Lines changed: 3 additions & 0 deletions

```diff
@@ -36,3 +36,6 @@ output
 # better test files from COLMAP
 testdata/south**
 .e2e-venv
+
+# Azure Developer CLI
+.azure/
```

README.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -436,6 +436,7 @@ See [docs/CONFIGURATION.md](docs/CONFIGURATION.md) for the complete configuratio
 * [Deployment](docs/DEPLOYMENT.md) - Production deployment patterns and best practices
 * [Troubleshooting](docs/TROUBLESHOOTING.md) - Common issues and solutions
 * [PRD](docs/3dgs-video-processor-prd.md) - Product requirements specification
+* [gsplat Environment Check](scripts/gsplat_check/) - Python script to verify CUDA + gsplat functionality on a device
 
 ## Requirements
```

azure.yaml

Lines changed: 11 additions & 0 deletions

```diff
@@ -0,0 +1,11 @@
+name: 3dgs-processor
+metadata:
+  template: 3dgs-processor@0.0.1
+hooks:
+  preprovision:
+    shell: sh
+    run: ./infra/scripts/hooks/preprovision.sh
+    continueOnError: true
+  postprovision:
+    shell: sh
+    run: ./infra/scripts/hooks/postprovision.sh
```

compose.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -17,7 +17,7 @@ services:
     restart: unless-stopped
     volumes:
       - ./output/data:/data
-      - ./container-test/config.1.yaml:/app/config.yaml:ro
+      - ./container-test/config.yaml:/app/config.yaml:ro
       - ./output/tmp:/tmp/3dgs-work
     environment:
       CONFIG_PATH: /app/config.yaml
@@ -56,7 +56,7 @@ services:
           capabilities: [gpu]
     volumes:
       - ./output/data:/data
-      - ./container-test/config.1.yaml:/app/config.yaml:ro
+      - ./container-test/config.yaml:/app/config.yaml:ro
       - ./output/tmp:/tmp/3dgs-work
     environment:
       CONFIG_PATH: /app/config.yaml
```

docs/DEPLOYMENT.md

Lines changed: 291 additions & 0 deletions

```diff
@@ -5,6 +5,7 @@ Production deployment patterns and operational best practices.
 ## Table of Contents
 
 * [Container Deployment](#container-deployment)
+* [Azure Container Apps Job (GPU) — azd](#azure-container-apps-job-gpu--azd)
 * [Batch Mode (Azure SDK)](#batch-mode-azure-sdk)
 * [Resource Requirements](#resource-requirements)
 * [Storage Configuration](#storage-configuration)
@@ -308,6 +309,296 @@ az container delete --resource-group $RESOURCE_GROUP --name $CONTAINER_NAME
 - Set `--restart-policy OnFailure` for one-time jobs
 - Use spot instances for non-critical workloads
```

The 296 added lines form the new section, rendered below:
---

## Azure Container Apps Job (GPU) — azd

Deploy the 3DGS processor as a **serverless GPU job** on Azure Container Apps using the
Azure Developer CLI (`azd`). The job runs in batch mode: download videos from Azure Blob
Storage → extract frames → COLMAP reconstruction → gsplat GPU training → export PLY/SPLAT
→ upload results → exit.

**Key characteristics:**

- **Serverless GPU** — NVIDIA T4 (16 GB VRAM) via the `Consumption-GPU-NC8as-T4` workload profile
- **No local Docker required** — images are built remotely via ACR Tasks
- **Batch mode** — single job execution, no long-running container
- **Managed Identity** — user-assigned MI for RBAC-based access to ACR and Storage
- **RBAC separation** — infrastructure provisioning and role assignments are separate steps

### Prerequisites

| Requirement | Purpose |
|-------------|---------|
| [Azure CLI](https://aka.ms/installazurecli) (`az`) | Azure resource management |
| [Azure Developer CLI](https://aka.ms/install-azd) (`azd`) | Infrastructure-as-code orchestration |
| Azure subscription | With GPU quota in your target region |
| Test data | Download via `./scripts/e2e/01-download-testdata.sh` |

**No local Docker daemon is needed.** The GPU image is built remotely on Azure Container
Registry Tasks.

### Quick Start

```bash
# 1. Initialize the azd environment
azd init

# 2. Configure environment
azd env set AZURE_LOCATION swedencentral
azd env set USE_GPU true

# 3. Provision infrastructure (also builds the GPU image on ACR — ~40 min)
azd provision

# 4. (Privileged user) Assign RBAC roles to the Managed Identity
./infra/scripts/assign-rbac.sh

# 5. Verify RBAC assignments
./infra/scripts/verify-rbac.sh

# 6. Upload test data to Azure Blob Storage
./infra/scripts/upload-testdata.sh

# 7. Run the GPU job
./infra/scripts/run-job.sh --logs
```

### What Gets Provisioned

`azd provision` creates the following Azure resources (all in a single resource group):

| Resource | Bicep Module | Purpose |
|----------|-------------|---------|
| Resource Group | `main.bicep` | `rg-<env-name>` |
| User-Assigned Managed Identity | `modules/managed-identity.bicep` | Authenticate to ACR and Storage |
| Azure Container Registry (Basic) | `modules/acr.bicep` | Store GPU Docker images |
| Storage Account + 4 containers | `modules/storage.bicep` | `input`, `output`, `processed`, `error` |
| Log Analytics Workspace | `modules/monitoring.bicep` | Container log aggregation |
| Container Apps Environment | `modules/container-apps-env.bicep` | GPU workload profile (T4) |
| Container Apps Job (manual trigger) | `modules/container-apps-job.bicep` | The processor job itself |

After provisioning, the `postprovision` hook automatically:

1. Builds the GPU Docker image via ACR Tasks (`infra/scripts/hooks/acr-build.sh`)
2. Updates the Container Apps Job with the new image
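Conceptually, those two hook steps boil down to an `az acr build` followed by an `az containerapp job update`. A minimal sketch, with assumed resource names (the real hooks resolve these from the azd environment); the commands are printed rather than executed so they can be reviewed:

```shell
# Sketch of the two postprovision steps. All names below are placeholders.
acr_build_cmd() {
  # $1 = registry name, $2 = image:tag
  printf 'az acr build --registry %s --image %s --file Dockerfile .\n' "$1" "$2"
}

job_update_cmd() {
  # $1 = job name, $2 = resource group, $3 = fully qualified image
  printf 'az containerapp job update --name %s --resource-group %s --image %s\n' "$1" "$2" "$3"
}

acr_build_cmd "my3dgsacr" "3dgs-processor:latest"
job_update_cmd "3dgs-job" "rg-myenv" "my3dgsacr.azurecr.io/3dgs-processor:latest"
```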
### RBAC Requirements — Read This Carefully

RBAC role assignments are **intentionally separated** from infrastructure provisioning.
This is because in many organizations, the person deploying infrastructure does not have
permission to assign IAM roles. The two operations may require different privilege levels.

#### Permissions Required by the Deployer (the person running `azd provision`)

The deployer needs these permissions **on the Azure subscription or resource group**:

| Permission | Why |
|------------|-----|
| `Microsoft.Resources/subscriptions/resourceGroups/write` | Create the resource group |
| `Microsoft.ContainerRegistry/registries/*` | Create ACR and push images |
| `Microsoft.Storage/storageAccounts/*` | Create storage account and containers |
| `Microsoft.App/managedEnvironments/*` | Create Container Apps Environment |
| `Microsoft.App/jobs/*` | Create Container Apps Job |
| `Microsoft.ManagedIdentity/userAssignedIdentities/*` | Create managed identity |
| `Microsoft.OperationalInsights/workspaces/*` | Create Log Analytics workspace |

Typically the built-in **Contributor** role on the subscription is sufficient.

The `preprovision` hook (`infra/scripts/hooks/preprovision.sh`) also attempts to assign
deployer-level RBAC (AcrPush, Storage Blob Data Contributor) so the deployer can push
images and upload test data. This requires the deployer to have **User Access Administrator**
or **Owner** on the resource group. If the deployer lacks this permission, the hook
continues (it is non-blocking) — a privileged user can assign these roles later.

#### Permissions Required for the Managed Identity (assigned after provisioning)

The Managed Identity needs two role assignments so the container job can pull images
and read/write blobs at runtime:

| Role | Scope | Role Definition ID | Purpose |
|------|-------|-------------------|---------|
| **AcrPull** | Container Registry | `7f951dda-4ed3-4680-a7ca-43fe172d538d` | Pull GPU image from ACR |
| **Storage Blob Data Contributor** | Storage Account | `ba92f5b4-2d11-453d-a403-e96b0029c9fe` | Download input videos, upload outputs, move blobs |

#### Who Can Assign These Roles?

A user with **one** of these roles on the target scope:

- **Owner** (full control, including RBAC)
- **User Access Administrator** (RBAC management only)
- A custom role with the `Microsoft.Authorization/roleAssignments/write` permission

> **If the deployer does not have these permissions**, the `azd provision` step will
> succeed but the job will fail at runtime with authentication errors (HTTP 403 on
> Storage or image pull failures on ACR). Have a privileged user run the RBAC scripts.

#### Assigning RBAC Roles

```bash
# Assign roles via Azure CLI (reads values from the azd env automatically)
./infra/scripts/assign-rbac.sh

# Or assign via a Bicep deployment (alternative)
./infra/scripts/assign-rbac.sh --use-bicep
```
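Under the hood, the two assignments amount to `az role assignment create` calls using the role definition IDs from the table above. A sketch with placeholder IDs (the real script resolves the principal and resource IDs from the azd environment); printed rather than executed so a privileged user can review first:

```shell
# Placeholder IDs; assign-rbac.sh resolves real values from the azd env.
ACR_PULL_ROLE="7f951dda-4ed3-4680-a7ca-43fe172d538d"
BLOB_ROLE="ba92f5b4-2d11-453d-a403-e96b0029c9fe"
PRINCIPAL="00000000-0000-0000-0000-000000000000"
ACR_SCOPE="/subscriptions/SUB/resourceGroups/rg-myenv/providers/Microsoft.ContainerRegistry/registries/my3dgsacr"
STG_SCOPE="/subscriptions/SUB/resourceGroups/rg-myenv/providers/Microsoft.Storage/storageAccounts/my3dgsstorage"

assign_cmd() {
  # $1 = principal object ID, $2 = role definition ID, $3 = scope
  printf 'az role assignment create --assignee-object-id %s --assignee-principal-type ServicePrincipal --role %s --scope %s\n' "$1" "$2" "$3"
}

assign_cmd "$PRINCIPAL" "$ACR_PULL_ROLE" "$ACR_SCOPE"
assign_cmd "$PRINCIPAL" "$BLOB_ROLE" "$STG_SCOPE"
```

Passing `--assignee-principal-type ServicePrincipal` avoids Microsoft Graph lookups for the freshly created identity.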
#### Verifying RBAC Roles

```bash
# Check that both roles are assigned
./infra/scripts/verify-rbac.sh
```

Expected output when the roles are correctly assigned:

```
🔍 Verifying RBAC role assignments for Managed Identity...
Principal ID: <managed-identity-principal-id>

✅ AcrPull on Container Registry
✅ Storage Blob Data Contributor on Storage Account

✅ All RBAC role assignments are in place.
```

If any roles are missing:

```
❌ AcrPull on Container Registry — MISSING

⚠️ 1 RBAC role assignment(s) missing.
Run './infra/scripts/assign-rbac.sh' as a privileged user to fix.
```

#### What Fails Without RBAC

| Missing Role | Symptom |
|--------------|---------|
| **AcrPull** | Job execution fails immediately — Container Apps cannot pull the image. The execution shows `Failed` status with no container logs (the image never starts). |
| **Storage Blob Data Contributor** | The container starts but fails with `Failed to download input blobs from Azure Blob Storage` — the managed identity token is rejected with HTTP 403. |

### Uploading Test Data

The South Building dataset (128 multi-view images from UNC Chapel Hill) is used for testing.
The upload script downloads it if needed, creates 3 test videos, and uploads them:

```bash
# Download test data + upload to blob storage (default prefix: south_building/)
./infra/scripts/upload-testdata.sh

# Custom blob prefix
./infra/scripts/upload-testdata.sh --prefix my_scene/

# Download only (no upload)
./infra/scripts/upload-testdata.sh --download-only
```
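The upload step is roughly equivalent to an `az storage blob upload-batch` with Entra ID auth. A sketch with placeholder account and directory names (assumptions, not the script's actual internals); the command is printed so it can be checked before running:

```shell
# Placeholder names; the real script reads the storage account from the azd env.
upload_cmd() {
  # $1 = storage account, $2 = local dir of videos, $3 = blob prefix
  printf 'az storage blob upload-batch --auth-mode login --account-name %s --destination input --destination-path %s --source %s\n' "$1" "$3" "$2"
}

upload_cmd "my3dgsstorage" "./testdata/videos" "south_building/"
```

`--auth-mode login` makes the CLI use the caller's Entra ID token, which is why the deployer needs Storage Blob Data Contributor.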
### Running the Job

```bash
# Start the job and return immediately
./infra/scripts/run-job.sh

# Start the job, wait for completion, show status
./infra/scripts/run-job.sh --wait

# Start the job, wait, then show container logs
./infra/scripts/run-job.sh --logs
```

A successful run produces these outputs in the `output` blob container:

| File | Description | Typical Size |
|------|-------------|-------------|
| `south_building/south_building.ply` | 3D Gaussian point cloud (real GPU-trained geometry) | ~65 KB |
| `south_building/south_building.splat` | Web-optimized format for real-time rendering | ~53 KB |
| `south_building/manifest.json` | Video metadata (resolution, duration, codec, frame count) | ~7 KB |
| `south_building/.checkpoint.json` | Pipeline progress tracking | ~11 KB |

Input videos are moved from `input/` to `processed/` on success, or to `error/` on failure.
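The move-on-completion routing can be summarized as a tiny outcome-to-container mapping (illustrative only; the processor performs the move internally via the Storage SDK):

```shell
# Illustrative: which container an input blob ends up in based on job outcome.
destination_container() {
  # $1 = job exit code (0 = success)
  if [ "$1" -eq 0 ]; then
    echo "processed"
  else
    echo "error"
  fi
}

destination_container 0   # prints: processed
destination_container 1   # prints: error
```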
### Redeploying Code Changes

After modifying the processor code, rebuild and redeploy without re-provisioning:

```bash
# Build a new image on ACR + update the job
./infra/scripts/deploy-job.sh

# Or skip the build and just redeploy the existing image
./infra/scripts/deploy-job.sh --skip-build
```

### Configuration Variables

Set these via `azd env set <NAME> <VALUE>` before provisioning:

| Variable | Default | Description |
|----------|---------|-------------|
| `AZURE_LOCATION` | *(required)* | Azure region. Must support GPU T4 (e.g., `swedencentral`, `eastus`, `westus`) |
| `USE_GPU` | `true` | Enable the GPU workload profile |
| `GPU_PROFILE_TYPE` | `Consumption-GPU-NC8as-T4` | GPU type (`Consumption-GPU-NC8as-T4` or `Consumption-GPU-NC24-A100`) |
| `PROCESSOR_BACKEND` | `gsplat` | 3DGS backend (`gsplat`, `gaussian-splatting`, `mock`) |
| `INCLUDE_RBAC` | `true` | Include RBAC assignments in the Bicep deployment |
| `USE_STORAGE_KEYS` | `false` | Use storage account keys instead of RBAC (fallback) |
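Setting several of these variables at once can be scripted. A small sketch (the values are examples only); the `azd env set` commands are echoed so the batch can be inspected before applying:

```shell
# Print one `azd env set` command per NAME=VALUE pair (drop the echo to apply).
print_env_cmds() {
  for kv in "$@"; do
    name=${kv%%=*}     # text before the first '='
    value=${kv#*=}     # text after the first '='
    echo "azd env set $name $value"
  done
}

print_env_cmds \
  "AZURE_LOCATION=swedencentral" \
  "USE_GPU=true" \
  "GPU_PROFILE_TYPE=Consumption-GPU-NC8as-T4" \
  "PROCESSOR_BACKEND=gsplat"
```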
### GPU Region Availability

Serverless GPU (T4) Container Apps is available in these regions:

`swedencentral`, `eastus`, `westus`, `canadacentral`, `brazilsouth`, `australiaeast`,
`italynorth`, `francecentral`, `centralindia`, `japaneast`, `northcentralus`,
`southcentralus`, `southeastasia`, `southindia`, `westeurope`, `westus2`, `westus3`
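A quick local preflight can check the chosen region against this list before provisioning. A sketch only: the list below is hard-coded from the table above and may drift from Azure's actual availability, which is authoritative:

```shell
# Regions copied from the documented T4 availability list above.
T4_REGIONS="swedencentral eastus westus canadacentral brazilsouth australiaeast \
italynorth francecentral centralindia japaneast northcentralus southcentralus \
southeastasia southindia westeurope westus2 westus3"

region_supported() {
  # $1 = region name; returns 0 if present in the documented list
  for r in $T4_REGIONS; do
    [ "$r" = "$1" ] && return 0
  done
  return 1
}

if region_supported "swedencentral"; then echo "ok"; else echo "unsupported"; fi
```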
If you get a deployment error about workload profiles, make sure your subscription has
GPU quota in the selected region. Check quota at
[Azure Portal → Quotas](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade).

### Cleaning Up RBAC Before Teardown

Before running `azd down`, remove the RBAC assignments:

```bash
./infra/scripts/cleanup-rbac.sh
azd down --purge --force
```

### Scripts Reference

All infrastructure scripts live in `infra/scripts/`:

| Script | Purpose | Requires Privilege? |
|--------|---------|-------------------|
| `hooks/preprovision.sh` | Captures deployer identity, runs RBAC preflight check | No (auto-run by azd) |
| `hooks/postprovision.sh` | Builds GPU image on ACR, updates job | No (auto-run by azd) |
| `hooks/acr-build.sh` | Creates a minimal staging dir, runs `az acr build` for the GPU target | No |
| `assign-rbac.sh` | Assigns AcrPull + Storage Blob Data Contributor to the MI | **Yes** — Owner or User Access Admin |
| `verify-rbac.sh` | Checks whether the required RBAC roles are assigned | No |
| `cleanup-rbac.sh` | Removes RBAC role assignments | **Yes** — Owner or User Access Admin |
| `run-job.sh` | Starts a job execution with `--wait`/`--logs` options | No |
| `deploy-job.sh` | Rebuilds the image on ACR + updates the job | No |
| `upload-testdata.sh` | Downloads the South Building dataset + uploads videos to blob storage | No (needs Storage Blob Data Contributor on the deployer) |

### Troubleshooting

**"MANAGED_IDENTITY_PRINCIPAL_ID is not set"**
Run `azd provision` first — this creates the managed identity and saves its principal ID.

**Image pull failure (no container logs)**
The AcrPull role is missing. Run `./infra/scripts/assign-rbac.sh`.

**"Failed to download input blobs from Azure Blob Storage"**
Either (a) Storage Blob Data Contributor is missing — run `./infra/scripts/assign-rbac.sh` —
or (b) no blobs exist at the `BATCH_INPUT_PREFIX` — run `./infra/scripts/upload-testdata.sh`.

**"Reconstruction quality too low"**
Increase `FRAME_RATE` (e.g., from 2 to 3) in the job env vars to extract more frames.

**ACR build timeout**
The GPU image build compiles COLMAP from source (~30 min). The default timeout is 3600 s.
If it still times out, check your ACR Tasks quotas.

**COLMAP matching timeout**
Ensure `COLMAP_MATCHER=sequential` (not `exhaustive`) in the job env vars.

---

## Batch Mode (Azure SDK)

Batch mode processes a single job using the Azure Blob Storage SDK directly — no BlobFuse2