[WIP] Add additional platform page for third-party vendor #2072
can-gaa-hou wants to merge 2 commits into
Conversation
✅ Deploy Preview for pytorch-dot-org-preview ready!
@can-gaa-hou Great job, thank you.
dvrogozh left a comment
I am not sure where the Intel XPU command line comes from, as I don't see it being added in this PR, but it's incorrect: it directs users to install the deprecated intel-extension-for-pytorch. Intel XPU is a platform available in upstream PyTorch and should be installed with:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/xpu
For the same reason, I think we need to clearly separate the platforms that are supported directly by the PyTorch team (such as CUDA, ROCm, XPU, Vulkan, etc.) from those that are available as PyTorch extensions/plugins supported by third parties. I suggest two distinct sections on the page that clearly separate these two groups of platforms.
Also, I am not sure that the current install selector matrix widget is the best approach we can take, as it has limited capacity and quickly becomes bloated with different buttons. A more straightforward and simple approach could be considered: for example, a simple table of contents listing platforms one by one, with links to each platform's description and instructions. For example:
PyTorch Platforms
- CPU -> link
- CUDA -> link
- ROCm -> link
- XPU -> link
- Vulkan -> link
- ...
PyTorch Extensions
- NPU (Huawei) -> link
- ...
Hi @dvrogozh, all the content shown on the page is placeholder data. The purpose of this PR is to introduce the secondary page for PyTorch extension installation and to add the installation widget framework. The content in the widget will be added through JSON files (the format is shown in the description) by each vendor in follow-up PRs.
The primary page will manage the platforms supported directly by the PyTorch team, and the secondary page will manage the PyTorch extensions/plugins supported by third parties.
The format inherits from the primary page. We can discuss it further in the Slack channel and the Accelerator Working Group.
Hi there, I have updated the link: https://cosdt.github.io/get-started/additional-platform/ (renamed from ecosystem-platform to additional-platform).
@dvrogozh I don't think I agree with this. There should not be any distinction between in-core and out-of-core backends from the end-user perspective. We have many workstreams, both done and in flight, working toward ensuring the two can match and reach the same level of integration and stability.
</div>
</div>

<div class="bg-light-grey">
I would suggest removing the light gray, as it makes it look like a formatting bug with the background above.
@@ -0,0 +1,115 @@
<p>Select your compute platform and configuration to get the installation command. These platforms provide alternative hardware acceleration options beyond NVIDIA CUDA.</p>

Suggested change:
<p>In the following selector, you can find the compute platforms and configurations supported by partners and community members. Select your preferences and run the install command provided.</p>

This is closer to the language of the main page and makes the difference between the two pages clear.
I would also suggest adding a note in the selector for each platform with details on where to provide feedback and report issues (which github repo, which label to use there, etc).
@albanD, can you please suggest where Intel XPU should be placed: on the primary page or on this additional platforms page? I would argue that Intel XPU is an in-tree backend and that Intel provides the necessary infrastructure to the PyTorch team to validate, develop, and support the XPU backend. Effectively, the Intel PyTorch team is part of the greater PyTorch team that makes PyTorch releases.
> I would also suggest adding a note in the selector for each platform with details on where to provide feedback and report issues (which GitHub repo, which label to use there, etc.).
Besides that, I think it's reasonable to add information on how to install the drivers needed to make PyTorch actually runnable on the specific hardware. This might be just a link to the platform documentation or to a PyTorch-side page with the details, such as https://docs.pytorch.org/docs/2.11/notes/get_start_xpu.html.
> I would also suggest adding a note in the selector for each platform with details on where to provide feedback and report issues (which GitHub repo, which label to use there, etc.).
@albanD Thank you for this valuable suggestion, I will copy that.
> Besides that, I think it's reasonable to add information on how to install the drivers needed to make PyTorch actually runnable on the specific hardware. This might be just a link to the platform documentation or to a PyTorch-side page with the details, such as https://docs.pytorch.org/docs/2.11/notes/get_start_xpu.html.
Thank you @dvrogozh, yes, we will reserve a spot to display this info.
<div class="row">
  <div class="col-md-3 headings">
    <div class="col-md-12 title-block">
      <div class="option-text">PyTorch Build</div>
We might actually want to reshuffle this to be platform-first, as I don't expect every platform will support all build types and OSes.
Great point! We'll update the PR along those lines.
      <div class="option-text">Package</div>
    </div>
    <div class="col-md-12 title-block">
      <div class="option-text">Language</div>
I would suggest removing Package and Language, to be honest. libtorch + C++ is not really used, and I don't think it is a good use of time for new backends to focus on maintaining these.
If the distinctions are just technical, then yes. However, if the code resides in a separate GitHub org, has distinct workflow processes, standalone CI coverage and maintenance, a separate release process, etc., then there is a clear distinction between in-core backends and such out-of-core backends.
@can-gaa-hou, where is such a JSON file located for XPU? I could not actually find it in the repo. Unfortunately, I don't understand where the current version is pulling the XPU information from.
@dvrogozh, this PR is strictly focused on the infrastructure of the secondary page and does not include any specific accelerator data. The XPU and Ascend entries seen in the demo were generated in our local environment using local files; we intentionally excluded them from this PR to keep the focus on the framework.
Hi @albanD, sorry for the ping again. We are drafting comprehensive mockups that include both the primary page updates and the creation of the secondary pages, fully aligned with the TAC consensus. We will also incorporate some minor adjustments based on the discussions and suggestions from other vendors in the Accelerator WG. Once finalized, I'll send it over to you for final feedback and confirmation. We expect to complete this in about three days. Thank you for your patience and support! cc @can-gaa-hou
Description
This PR adds a new page that lets users find installation guidelines for additional PyTorch backends on the official website.
The new page will look like this:

Here is the page entrance: https://cosdt.github.io/get-started/additional-platform/
How it works
If a new backend would like to be added to this page, simply add a JSON file under the `_ecosystem_platform/` directory. The `gen_ecosystem_platform.py` script will automatically read the JSON file and regenerate the JS file needed for rendering the page. Here is an example of the JSON file:

```json
{
  "name": "NPU",
  "vendor": "Huawei",
  "documentation": "https://www.hiascend.com/document",
  "stable": {
    "linux": {
      "pip": {
        "python": {
          "CANN 8.0": "pip3 install torch torchvision --index-url https://download.pytorch.org/whl/npu/cann80",
          "CANN 9.0": "pip3 install torch torchvision --index-url https://download.pytorch.org/whl/npu/cann90"
        }
      }
    }
  },
  "preview": {
    "linux": {
      "pip": {
        "python": {
          "CANN 8.0": "pip3 install torch torchvision --pre --index-url https://download.pytorch.org/whl/nightly/npu/cann80",
          "CANN 9.0": "pip3 install torch torchvision --pre --index-url https://download.pytorch.org/whl/nightly/npu/cann90"
        }
      }
    }
  }
}
```

cc @fffrog
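To illustrate the mechanism described above, here is a minimal, hypothetical Python sketch of what a generator like `gen_ecosystem_platform.py` could do, and of how the widget can resolve a command from the nested key structure. The function names (`generate_platform_js`, `lookup_command`) and the JS global (`ecosystemPlatforms`) are assumptions for illustration; only the `_ecosystem_platform/` directory, the build → OS → package → language → configuration nesting, and the JSON fields come from this PR's description.

```python
# Hypothetical sketch, not the actual script in this PR: merge every vendor
# JSON file under a source directory into one JS data file that the selector
# widget could load, and resolve an install command from user selections.
import json
from pathlib import Path


def generate_platform_js(src_dir, out_file):
    """Read all vendor JSON files and emit a single JS data file.

    Returns the number of vendor entries written.
    """
    platforms = [
        json.loads(p.read_text(encoding="utf-8"))
        for p in sorted(Path(src_dir).glob("*.json"))
    ]
    # Assumed global name; the real widget may use a different one.
    js = "var ecosystemPlatforms = " + json.dumps(platforms, indent=2) + ";\n"
    Path(out_file).write_text(js, encoding="utf-8")
    return len(platforms)


def lookup_command(platform, build, os_name, package, language, config):
    """Walk the nested build -> OS -> package -> language -> config keys
    of one vendor entry; return the install command, or None if that
    combination is not provided by the vendor."""
    node = platform
    for key in (build, os_name, package, language, config):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node
```

With the NPU example above, `lookup_command(entry, "stable", "linux", "pip", "python", "CANN 8.0")` would return the corresponding `pip3 install ...` command, while an unsupported combination returns None, which the widget could render as "not available".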