
Align AI Studio port guidance#11

Open
mdrideout wants to merge 3 commits into master from ai-studio-compatibility

Conversation

@mdrideout
Owner

Summary

  • Update Junjo AI Studio docs for the current 2615x local port model.
  • Refresh Docker/reference docs, deployment notes, OpenTelemetry guidance, and examples to use OTLP ingestion on port 26155.
  • Clarify that 50052 and 50053 are private Junjo AI Studio service-to-service RPC ports, not Junjo library telemetry endpoints.
  • Update exporter docstrings and tests to match the new local and same-network defaults.

Validation

  • git diff --check
  • uv run sphinx-build -b html docs docs/_build/html

Update Junjo AI Studio documentation, examples, and exporter guidance for the current 2615x local port model.

Refresh Docker and AI Studio docs to use backend:26154, frontend:26151/26153, and OTLP ingestion on 26155. Clarify that private RPC ports 50052 and 50053 are internal service-to-service ports, not Junjo library telemetry endpoints.

Update README, deployment, OpenTelemetry, base example, and AI chat example guidance so local host applications use localhost:26155 and same-network container apps use ingestion:26155. Adjust example telemetry config comments and exporter tests to match the updated defaults.
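The endpoint choice described above can be sketched as a small helper: host applications target localhost:26155, while containers on the same Docker network target the ingestion service by name. This is an illustrative sketch, not the library's API; the function name and the `in_docker` flag are assumptions for the example.

```python
def resolve_otlp_endpoint(in_docker: bool) -> str:
    """Pick the OTLP ingestion endpoint for Junjo AI Studio.

    Hypothetical helper: per the updated guidance, host applications
    reach ingestion at localhost:26155, and containers on the same
    Docker network reach it via the `ingestion` service name.
    """
    return "ingestion:26155" if in_docker else "localhost:26155"


# Host application (e.g. the base example run with `uv run`):
print(resolve_otlp_endpoint(in_docker=False))  # localhost:26155

# Same-network containerized application (e.g. the AI chat example):
print(resolve_otlp_endpoint(in_docker=True))  # ingestion:26155
```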
@cloudflare-workers-and-pages

cloudflare-workers-and-pages Bot commented Apr 24, 2026

Deploying junjo-python-api with Cloudflare Pages

Latest commit: aa148af
Status: ✅  Deploy successful!
Preview URL: https://0c8880e0.junjo-python-api.pages.dev
Branch Preview URL: https://ai-studio-compatibility.junjo-python-api.pages.dev


Remove host.docker.internal guidance from AI Studio documentation and exporter examples.

Keep local telemetry documentation focused on host applications using localhost:26155 and same Compose network containers using ingestion:26155.

Update Junjo AI Studio telemetry guidance so Docker-based applications use the ingestion container name on the same Docker network, while local-machine examples use localhost.

Remove unnecessary OTLP host and port configuration from examples, keep the base example as the local uv-run path, and make the AI Chat example use ingestion:26155 directly.

Add same-network Docker Compose examples, keep junjo_network only where a concrete external network is shown, and fix the documented network creation command.
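A minimal sketch of the external-network Compose pattern described above, assuming the network is created with `docker network create junjo_network` and that the Junjo AI Studio stack exposes its ingestion service on that network. The `app` service name and image setup are assumptions for illustration; `OTEL_EXPORTER_OTLP_ENDPOINT` is the standard OpenTelemetry environment variable.

```yaml
# Assumes the shared network already exists:
#   docker network create junjo_network
services:
  app:
    build: .
    environment:
      # Same-network containers reach ingestion by service name on port 26155.
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://ingestion:26155"
    networks:
      - junjo_network

networks:
  junjo_network:
    external: true
```

Host applications skip this entirely and point their exporter at localhost:26155.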
