add feature gate for tensorrt plugin #3518
Conversation
+1 My environment: JetPack 6.2, `$ python3 -c "import torch_tensorrt"`
@shchoi00 yes, this adds support for Jetson as part of our build system overhaul
There are some changes that do not conform to Python style guidelines:
```diff
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/plugins/_generate_plugin_converter.py	2025-05-20 15:32:36.323051+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/plugins/_generate_plugin_converter.py	2025-05-20 15:33:02.313006+00:00
@@ -36,10 +36,11 @@
     except ImportError as e:
         raise RuntimeError(
             "Unable to import TensorRT plugin. Please install TensorRT plugin library (https://github.com/NVIDIA/TensorRT-plugin-library?tab=readme-ov-file#installation) to add support for compiling quantized models"
        )
     from tensorrt.plugin._lib import QDP_REGISTRY
+
     torch_target = getattr(getattr(torch.ops, namespace), op_name)
     overload_str = overload if overload else ""
     overload_name = overload_str if overload else "default"
     torch_overload = getattr(torch_target, overload_name)
     assert (
```
narendasan left a comment
Minor change, then looks good
Description
TensorRT 10.3.0 does not support the TensorRT plugin API, which was causing failures.
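The feature gate shown in the diff follows a common pattern: attempt the optional import when the converter is registered and raise a clear error if the dependency is missing. A minimal, generic sketch of this pattern (the helper name and error message here are illustrative, not the actual torch_tensorrt code):

```python
import importlib


def feature_gate(module_name: str):
    """Import an optional dependency, or raise a clear RuntimeError.

    Callers get the module back if it is installed; otherwise the
    original ImportError is chained onto an actionable message.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise RuntimeError(
            f"Optional dependency {module_name!r} is not available; "
            "install it to enable this feature."
        ) from e
```

Gating at import time, rather than at call time, surfaces the missing dependency with an actionable message instead of a late, opaque failure deep inside compilation.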
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: