[Bugfix][W8A8] fixed cutlass block fp8 binding (#14796)

DefTruth
2025-03-14 18:32:42 +08:00
committed by GitHub
parent c77620d22d
commit 40253bab44

@@ -370,7 +370,7 @@ TORCH_LIBRARY_EXPAND(TORCH_EXTENSION_NAME, ops) {
       "cutlass_scaled_mm_supports_block_fp8(int cuda_device_capability) -> "
       "bool");
   ops.impl("cutlass_scaled_mm_supports_block_fp8",
-           &cutlass_scaled_mm_supports_fp8);
+           &cutlass_scaled_mm_supports_block_fp8);
   // Check if cutlass sparse scaled_mm is supported for CUDA devices of the
   // given capability