@@ -6,8 +6,7 @@ This example demonstrates how to use the `wasi-nn` crate to run a classification
 It supports CPU and GPU (Nvidia CUDA) execution targets.
 
 ** Note:**
- For the wasi-nn GPU execution target, CUDA (onnx-cuda) is the only supported ONNX execution provider (EP).
- TPU execution target is not supported and will fall back to CPU execution.
+ The GPU execution target currently supports only Nvidia CUDA (onnx-cuda) as the execution provider (EP).
 
 ## Build
 
@@ -27,7 +26,8 @@ cargo build --features component-model,wasi-nn,wasmtime-wasi-nn/onnx-download
 
 #### For GPU (Nvidia CUDA) support:
 ``` sh
- cargo build --features component-model,wasi-nn,wasmtime-wasi-nn/onnx-cuda
+ # This automatically downloads the onnxruntime shared library from cdn.pyke.io
+ cargo build --features component-model,wasi-nn,wasmtime-wasi-nn/onnx-cuda,wasmtime-wasi-nn/onnx-download
 ```
 
 ### Running with Different Execution Targets
@@ -46,15 +46,6 @@ Arguments:
  ./crates/wasi-nn/examples/classification-component-onnx/target/wasm32-wasip1/debug/classification-component-onnx.wasm
 ```
 
- Or explicitly specify CPU:
- ``` sh
- ./target/debug/wasmtime run \
- -Snn \
- --dir ./crates/wasi-nn/examples/classification-component-onnx/fixture/::fixture \
- ./crates/wasi-nn/examples/classification-component-onnx/target/wasm32-wasip1/debug/classification-component-onnx.wasm \
- cpu
- ```
-
 #### GPU (CUDA) Execution:
 ``` sh
 # path to `libonnxruntime_providers_cuda.so` downloaded by `ort-sys`
@@ -66,12 +57,6 @@ export LD_LIBRARY_PATH={wasmtime_workspace}/target/debug
  ./crates/wasi-nn/examples/classification-component-onnx/target/wasm32-wasip1/debug/classification-component-onnx.wasm \
  gpu
 
- # With debug logging
- WASMTIME_LOG=wasmtime_wasi_nn=debug ./target/debug/wasmtime run -Snn \
- --dir ./crates/wasi-nn/examples/classification-component-onnx/fixture/::fixture \
- ./crates/wasi-nn/examples/classification-component-onnx/target/wasm32-wasip1/debug/classification-component-onnx.wasm \
- gpu
-
 ```
 
 ## Expected Output
@@ -97,12 +82,3 @@ You can monitor GPU usage using cmd `watch -n 1 nvidia-smi`.
 - NVIDIA GPU with CUDA support
 - CUDA Toolkit 12.x with cuDNN 9.x
 - Build wasmtime with ` wasmtime-wasi-nn/onnx-cuda ` feature
-
- ## Troubleshooting
-
- If you see an error like:
- ```
- ONNX GPU execution target requested, but 'onnx-cuda' feature is not enabled
- ```
-
- Make sure you've built wasmtime with the appropriate feature flag (see "Building Wasmtime" section above).