feat(jp62): add Jetson Linux 6.2 base layer (#61)
Conversation
- Upgrade cmake to 3.28 from the Kitware PPA. System cmake 3.22 has a bug where find_library() skips the search when the result variable is pre-set to "NOTFOUND" via set(), breaking ament_cmake_export_libraries.
- Patch the ament export templates to fix _lib cache variable pollution across packages (ament_cmake#182). The template reuses a shared cache variable "_lib" across all packages' find_library() calls: when find_package(A) caches _lib, a subsequent find_package(B) sees the stale cache entry and skips the search.
- Install ros-humble-tensorrt-cmake-module and ros-humble-cudnn-cmake-module explicitly, since the --no-nvidia ansible flag skips them.
- Restore the .env file COPY for Autoware release tags (1.7.1+).
- Add a common-devel-jp62-debug target for interactive debugging.
- Update docker-bake.hcl to target the common-devel-jp62-build stage.
- Protect ROS packages from apt-get autoremove with apt-mark manual.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
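One way to express the export-template patch is a sed rewrite that resets the shared cache entry before every search. This is a sketch only: the template content, file path, and sed expression here are illustrative assumptions, not the PR's actual patch.

```shell
#!/bin/sh
# Sketch: insert an unset(_lib CACHE) before each find_library(_lib ...) call
# in a stand-in ament export template, so one package's cached result cannot
# shadow the next package's search. Template content is invented for the demo.
template=$(mktemp)
cat > "$template" <<'EOF'
find_library(_lib NAMES "${_library_name}" PATHS "${_search_path}")
EOF

# GNU sed assumed (\n in the replacement text).
sed -i 's/^find_library(_lib/unset(_lib CACHE)\nfind_library(_lib/' "$template"
cat "$template"
```

The same idea applies whether the reset is done with unset() or by overwriting the entry with CACHE FORCE; both prevent a stale _lib value from short-circuiting the next package's find_library() call.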
Thanks for the PR!
Do you think it would make it easier for you if we extracted common-base and common-base-cuda out of the Dockerfile, so that you don't have to care about Autoware code in Dockerfile.jp62?
Autoware 1.7.1 hardcodes -gencode arch=compute_101 (Blackwell) in 14 CMakeLists.txt files. CUDA 12.6 on JP62 only supports up to compute_90, causing nvcc fatal errors during the sensing-perception build. Add patch-cuda-arch.sh that gates compute_101/120 flags behind CUDA_VERSION >= 12.8, and invoke it automatically from build.sh when --platform jp62 is used. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
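The gating rule the commit describes can be sketched as a small version comparison. The function name and the exact flag handling below are assumptions; the real patch-cuda-arch.sh may differ.

```shell
#!/bin/sh
# Sketch: keep compute_101/compute_120 gencode flags only when the toolkit is
# new enough (CUDA >= 12.8); otherwise strip them for JP62's CUDA 12.6.
cuda_supports_blackwell() {
  major=${1%%.*}
  minor=${1#*.}; minor=${minor%%.*}
  [ "$major" -gt 12 ] || { [ "$major" -eq 12 ] && [ "$minor" -ge 8 ]; }
}

for ver in 12.6 12.8 13.0; do
  if cuda_supports_blackwell "$ver"; then
    echo "$ver: keep compute_101/120"
  else
    echo "$ver: strip compute_101/120"
  fi
done
```

Running this prints "strip" for 12.6 and "keep" for 12.8 and 13.0, which matches the behavior described above: CUDA 12.6 on JP62 cannot target Blackwell, so the hardcoded gencode flags must be removed before the sensing-perception build.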
Sounds like a good idea; I'll look into how to revise it. BTW, the local build on AGX Orin is successful on my side. The Autoware 1.7.1 source requires patches to work around the compute capability issue, so I just pushed an auto-patcher script to the PR. I'd like to ask about the proper way to apply upstream patches: I used to maintain a patched Autoware repo myself, but in case we build a stable version, I'd like to know the proper way to ship upstream patches.
Add docker-compose.jp62.yaml that swaps all services to locally-built JP62 images with nvidia runtime and ROS_DISTRO=humble. The visualizer uses the standalone visualizer-jp62 image since it requires VNC/noVNC components not present in the universe image. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Dockerfile.jp62 producing common-base-jp62 / common-devel-jp62 from nvcr.io/nvidia/l4t-tensorrt:r10.3.0-devel
- Follows the common-base-cuda / common-devel-cuda contract -- downstream Dockerfile.cuda component files work unmodified
- build.sh --platform jp62 and bake targets added; JP62-specific handling kept out of main
- sbsa repo dropped; ansible run with --no-nvidia --no-cuda-drivers, since L4T provides CUDA (--no-nvidia also skips the tensorrt/cudnn cmake modules, which are installed explicitly)
- ament_cmake_export_libraries _lib cache pollution handled with CACHE FORCE + unset before find_library

Known issues
cmake find_library under QEMU (x86 cross-build only)

The colcon build step hits intermittent find_library failures when building under QEMU arm64 emulation on x86. cmake searches the correct path but fails to stat .so files that exist on disk. This only occurs in colcon's Python subprocess context, not in direct cmake invocations. Two contributing factors were identified and mitigated:

- set(_lib "NOTFOUND") bug -- find_library skips the search when the result variable is pre-set. Fixed by upgrading to cmake 3.28 from the Kitware PPA.
- _lib cache variable reuse (ament_cmake#182) -- a shared cache variable across all packages' export templates causes cross-package pollution. Fixed by patching the templates with CACHE FORCE + unset.

These fixes resolve most failures (e.g. builtin_interfaces and tier4_metric_msgs now build via colcon). Residual failures for some packages (e.g. rcutils in tier4_debug_msgs) persist under QEMU only -- cmake debug output shows the search path is correct but the file is not found despite if(EXISTS) confirming it. A full colcon build requires native Jetson hardware.

Test method
./build.sh --platform jp62 --target {common,components,universe}
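As a debugging aid for the residual QEMU-only failures, a shell-level existence check can distinguish a wrong search path from cmake's stat problem. The helper name and directory layout below are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: confirm at the shell level that a library file cmake reports as
# "not found" really exists in the expected directory.
check_lib() {
  dir=$1; name=$2
  for f in "$dir/lib$name.so" "$dir/lib$name.a"; do
    if [ -e "$f" ]; then
      echo "found: $f"
      return 0
    fi
  done
  echo "missing: lib$name in $dir"
  return 1
}

# Demo against a temporary directory standing in for the install prefix.
dir=$(mktemp -d)
touch "$dir/librcutils.so"
check_lib "$dir" rcutils
```

If this reports "found" while cmake's debug output still claims the file is absent, that points at the QEMU-emulation stat behavior described under Known issues rather than at a misconfigured search path.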