Commit Graph

263 Commits

Author SHA1 Message Date
2b6cfd34d1 fix a typo that breaks compilation when ElementScale is not the same as MmaType (#1977) 2024-12-10 15:54:44 -05:00
4c42f73fda Improve mixed dtype GEMM (#1972)
* update

* fix a typo
2024-12-06 13:33:22 -05:00
80243e0b8c add {uint4, uint2, int2} => {fp16, bf16} conversion (#1966) 2024-12-03 14:03:43 -05:00
8aa95dbb88 Fix the race condition in mixed-input gemm when writing the registers (#1931)
* move two warpgroup_wait

* merge main

---------

Co-authored-by: Siyuan Fu <siyuanf@nvidia.com>
2024-11-08 13:15:54 -05:00
d656afbd2a fix undefined in device code error (#1880) 2024-11-06 14:56:54 -05:00
19f51596e8 feat: support kFactor 8 used in mma tensor op tile iterator (#1512) 2024-10-29 11:56:59 -04:00
e8a8b69365 Refactor some GroupedGEMM logic (#1899) 2024-10-25 20:14:01 -04:00
08a49953a0 Add a print for the uint{x}b_t type. (#1871) 2024-10-24 14:39:22 -04:00
a424ca6cf9 fix wrong A/BLayout in MMA_Traits for binary mma and append other MMA_Traits support (#1856)
* fix wrong A/BLayout in MMA_Traits<SM80_16x8x256_S32U1U1S32_TN_XORPOPC> and append support for m8n8k128, m16n8k128 mma.and.popc in MMA_Traits instantiation

* add "print" template for subbyte_reference<T>
2024-10-24 14:38:35 -04:00
be692b48b0 remove redundant hardcoded packing configs in mixed dtype gemm (#1894)
Co-authored-by: Siyuan Fu <siyuanf@nvidia.com>
2024-10-23 14:24:09 -04:00
f02913c34e Include of regular_tile_iterator.h fixed for NVRTC (#1765)
* Include of regular_tile_iterator.h fixed for NVRTC

* More include fixed for NVRTC
2024-10-23 12:55:59 -04:00
03e3bffaec Adjusting code indentation (#1639) 2024-10-23 12:55:02 -04:00
b0c09ed077 fix by adding public (#1753) 2024-10-23 12:45:58 -04:00
d65266a868 Add all supported GMMA shapes (#1890) 2024-10-22 18:13:36 -04:00
5b50a8faaf Add GMMA shape m64n40k16 (#1864) 2024-10-21 20:41:47 -04:00
08101d9d0c Improve sm90 mixed dtype kernel (#1883) 2024-10-17 20:06:38 -04:00
755194a7bd add is_last_tile 2024-10-17 12:11:02 -07:00
53668799b2 Handle MNK Sm90{Row, Col}Reduction problem shapes (#1803) 2024-10-14 19:46:20 -04:00
cc3c29a81a CUTLASS 3.6.0 (#1850)
* v3.6

* update changelog

* update readme

* fix typo

* fixing typos

* hopper gemm with weight prefetch

---------

Co-authored-by: yuzhai <yuzhai@nvidia.com>
Co-authored-by: Haicheng Wu <haichengw@nvidia.com>
2024-10-09 15:33:27 -04:00
0837a2a00a Fix typo in comment (#1787) 2024-10-07 12:39:59 -04:00
e2b0789927 Add some can_implement rules for Hopper convolution. (#1835) 2024-09-25 11:28:10 -04:00
2991ce18d3 Add print_svg for mma (#1733)
* add print_svg for mma

* correct the code indentation
2024-09-18 10:37:24 -04:00
1ebda1ccef Fix MMA promotion interval assertions (#1641) 2024-09-16 12:38:42 -04:00
3a8c01a18b Prefix a member template name with the template keyword. (#1796)
Fixes LLVM build error.
2024-09-11 13:33:56 -04:00
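For context, a minimal sketch of the C++ rule behind this fix (hypothetical `Widget`/`get` names, not CUTLASS code): inside a template, a call to a member template of a dependent object must be prefixed with the `template` keyword, and Clang/LLVM rejects the unprefixed form that some other compilers tolerate.

```cpp
struct Widget {
  template <class U> U get() { return U{}; }
};

template <class T>
int call(T obj) {
  // return obj.get<int>();       // ill-formed: parsed as (obj.get < int) > ()
  return obj.template get<int>(); // `template` marks get as a member template
}

int main() { return call(Widget{}); }
```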
dbdae514e0 Support for TMA Epilogue for Group Gemm and add pingpong ptr array & Group Gemm (#1795) 2024-09-11 00:07:31 -04:00
21d0534167 fix assertion (#1790) 2024-09-09 14:05:27 -04:00
323c8170bf Support ComputeFn where output type differs from input type (#1771)
This is useful, e.g., for a function that takes two float inputs and turns them into a complex value
2024-09-05 23:25:03 -04:00
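As a hedged illustration of that use case (an elementwise functor whose output type differs from its input types; the names below are invented, not the CUTLASS `ComputeFn` interface itself):

```cpp
#include <complex>

// Two real inputs in, one complex value out: output type != input type.
struct MakeComplex {
  std::complex<float> operator()(float re, float im) const {
    return {re, im};
  }
};
```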
82f5075946 set_slice3x3 -> set_slice_3x3 (#1784) 2024-09-05 23:24:10 -04:00
06e337758d Remove extraneous comma in declaration (#1776) 2024-09-05 17:14:15 -04:00
7369adcaca Add Sm90LinCombPerColBias (#1774)
Co-authored-by: Jiayu Sun <jiayus@s4124-0071.nvidia.com>
2024-09-04 15:11:24 -04:00
6c3044136b Update barrier.h (#1782) 2024-09-04 14:52:11 -04:00
e1976daacc Add support for mixed 4-bit/8-bit data types GEMM (#1413)
* Add support for mixed 4-bit/8-bit data types GEMM

* fix ( and )

---------

Co-authored-by: Aleksandar Samardžić <asamardzic@matf.bg.ac.rs>
Co-authored-by: Haicheng Wu <haichengw@nvidia.com>
2024-08-29 23:11:06 -04:00
4dbf5dbed2 Use CUDA runtime API to retrieve function pointer to driver API (#1700)
* Query pfn to driver api

* use default for older toolkits

---------

Co-authored-by: shunfans <shunfans@nvidia.com>
2024-08-19 13:26:09 -04:00
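A minimal sketch of the pattern this commit adopts, assuming CUDA 11.3+ where `cudaGetDriverEntryPoint` is available (the queried symbol below is only an example; CUTLASS's actual wrapper code differs):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  void* fn = nullptr;
  // Ask the CUDA runtime for a pointer to a driver API entry point,
  // avoiding a direct link-time dependency on the driver API.
  cudaError_t err =
      cudaGetDriverEntryPoint("cuTensorMapEncodeTiled", &fn, cudaEnableDefault);
  if (err != cudaSuccess || fn == nullptr) {
    std::printf("entry point unavailable: %s\n", cudaGetErrorString(err));
    return 1;
  }
  std::printf("driver entry point at %p\n", fn);
  return 0;
}
```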
b0296bf682 fix uint128 2024-08-15 21:06:01 -07:00
eqy fb170439e8 Update half.h (#1709) 2024-08-14 14:59:59 -04:00
7192f4ab23 Add CLayout_64x208 (#1680)
Without this I get a compilation error when the extended shapes are enabled
2024-08-08 14:00:24 -04:00
19b4c5e065 Fix isnan namespace qualification in cutlass/functional.h (#1679)
* Fix unrelated MSVC build warnings

* Fix use of isnan in functional.h

Correct namespace qualification of isnan in functional.h
so that it invokes cutlass::isnan for half_t, instead of
converting half_t to float and invoking std::isnan (on host,
or ::isnan on device).
2024-08-05 14:28:13 -04:00
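A minimal sketch of the behavior described above, with a hypothetical `half_t` standing in for CUTLASS's type: an explicitly `std::`-qualified call converts the half to `float` first, while the `cutlass::`-qualified call keeps the half-precision overload.

```cpp
#include <cmath>

namespace cutlass {
struct half_t {
  unsigned short storage = 0;
  operator float() const { return 0.0f; }  // implicit widening (body elided)
};
inline bool isnan(half_t h) {
  // NaN in binary16: exponent all ones, nonzero mantissa.
  return (h.storage & 0x7fff) > 0x7c00;
}
}  // namespace cutlass

bool check(cutlass::half_t h) {
  // return std::isnan(h);   // converts h to float, bypassing the half_t overload
  return cutlass::isnan(h);  // inspects the half bit pattern directly
}
```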
06b21349bc 1x1x1 cluster launch (#1673) 2024-08-01 12:20:28 -04:00
36cbfcf483 Add extended wgmma shapes for all data types (#1666) 2024-07-31 18:33:14 -04:00
1f2b590da6 Skip void-C kernels in the profiler when beta is non zero (#1661)
* Skip void-C kernels in the profiler when beta is non zero

The CUTLASS profiler currently only skips result disposition for void-C kernels when beta is nonzero, when it makes more sense to skip running them in the first place.

Not all users are aware of void-C kernels (as far as I know they weren't a thing in 2.X), and not everyone remembers to filter out void-C kernels when running the profiler with a nonzero beta.

The easiest solution (and, as far as I can tell, the correct way of handling this) is for `can_implement` to return `false` when beta is nonzero (or whatever argument indicates an epilogue source) but the kernel has a void C operand.

Profiler already includes functionality to skip running kernels that
fail `can_implement`.

* Move checks to collectives instead

---------

Co-authored-by: Ali Hassani <ahassani@nvidia.com>
2024-07-31 18:11:58 -04:00
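A hedged sketch of the check described above (illustrative names, not the actual collective interface): a kernel whose C operand is `void` has no epilogue source to scale, so `can_implement`-style validation can reject a nonzero beta up front.

```cpp
#include <type_traits>

template <class ElementC>
struct CollectiveSketch {
  static bool can_implement(float beta) {
    // A void-C kernel cannot compute beta * C; reject rather than run
    // a kernel whose result can never match the reference.
    if (std::is_void_v<ElementC> && beta != 0.0f) {
      return false;
    }
    return true;
  }
};
```

With this, `CollectiveSketch<void>::can_implement(1.0f)` returns `false`, and the profiler's existing `can_implement` filtering skips the kernel.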
eqy fbd116c0e5 fix build on SM 5.2 (#1664) 2024-07-31 09:54:57 -04:00
5b283c872c Add more GMMA shapes (#1630)
* Add more GMMA shapes

* Add more shapes for BF16
2024-07-29 19:09:51 -04:00
be60a0b272 CUTLASS 3.5.1 (#1623)
* CUTLASS 3.5.1

* updates, optimizations, fixes
2024-07-29 08:46:24 -04:00
56b46e2d13 Fix grouped gemm invalid memory access to problem shapes (#1543) 2024-07-10 11:55:22 -04:00
52fb43f30f fix mbarrier invalidate (#1494) 2024-07-10 11:35:26 -04:00
81b06ee0e0 Fix B operand variable name and comments (#1458) 2024-07-10 11:06:29 -04:00
637b159063 Fix C++17 version detection in helper_macros.hpp (#1479)
* It seems that __cplusplus can be inconsistent with _MSVC_LANG when discerning the C++17 version. See https://github.com/NVIDIA/cutlass/issues/1474. Added a switch to check _MSVC_LANG in addition to __cplusplus

* Fixed typo.

* Oops, another typo.

* Changed incorrect logic, ifndef to ifdef

* Define CUTLASS_CPLUSPLUS for language version testing

Co-authored-by: Mark Hoemmen <mhoemmen@users.noreply.github.com>

---------

Co-authored-by: Mark Hoemmen <mhoemmen@users.noreply.github.com>
2024-05-28 11:00:51 -04:00
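A sketch of the detection pattern this commit describes, assuming the macro name from the message above: MSVC reports the actual language level in `_MSVC_LANG`, while `__cplusplus` can remain `199711L` unless `/Zc:__cplusplus` is passed, so the check prefers `_MSVC_LANG` when it exists.

```cpp
#if defined(_MSVC_LANG)
#  define CUTLASS_CPLUSPLUS _MSVC_LANG
#else
#  define CUTLASS_CPLUSPLUS __cplusplus
#endif

#if (CUTLASS_CPLUSPLUS < 201703L)
#  error "CUTLASS requires C++17 or newer."
#endif
```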
7d49e6c7e2 Updates for CUTLASS 3.5.0 (#1468) 2024-04-11 21:33:40 -04:00
lzw 8e7d9f483d add missing header for size_t in numeric_types.h (#1420)
* add missing header for size_t in `numeric_types.h`

* make nvrtc happy

* add missing header for int types in `cutlass/arch/memory.h`

---------

Co-authored-by: Haicheng Wu <haichengw@nvidia.com>
2024-04-09 14:15:48 -04:00
19f3cc33f1 Fix uint128 operator add (#1400)
* fix uint128 operator add for the 64-bit hi/lo implementation

* add uint128 test for operator add

* make clang happy

---------

Co-authored-by: Haicheng Wu <haichengw@nvidia.com>
2024-04-02 13:32:18 -04:00
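A hedged sketch of the bug class this commit fixes in a hi/lo-based 128-bit add (illustrative struct, not CUTLASS's `uint128_t`): the carry out of the low 64-bit word must propagate into the high word.

```cpp
#include <cstdint>

struct U128Sketch {
  uint64_t lo;
  uint64_t hi;
};

inline U128Sketch operator+(U128Sketch a, U128Sketch b) {
  U128Sketch r;
  r.lo = a.lo + b.lo;
  // If the low-word sum wrapped around, carry one into the high word;
  // dropping this carry is the classic hand-rolled 128-bit add mistake.
  r.hi = a.hi + b.hi + (r.lo < a.lo ? 1u : 0u);
  return r;
}
```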