= Additional CUDA-only features matrix extension for DPC++: sycl_ext_oneapi_matrix_cuda
:source-highlighter: coderay
:coderay-linenums-mode: table
:dpcpp: pass:[DPC++]

// This section needs to be after the document title.
:doctype: book
:toc2:
:toc: left
:encoding: utf-8
:lang: en

:blank: pass:[ +]

// Set the default source code type in this document to C++,
// for syntax highlighting purposes. This is needed because
// docbook uses c++ and html5 uses cpp.
:language: {basebackend@docbook:c++:cpp}


== Notice

Copyright (c) 2022-2022 Intel Corporation. All rights reserved.

NOTE: Khronos(R) is a registered trademark and SYCL(TM) and SPIR(TM) are
trademarks of The Khronos Group Inc. OpenCL(TM) is a trademark of Apple Inc.
used by permission by Khronos.

This extension is written against the SYCL 2020 revision 5 specification. All
references below to the "core SYCL specification" or to section numbers in the
SYCL specification refer to that revision.


**_NOTE:_** This document describes the current design and API for the CUDA-only features matrix
extension to {dpcpp}. This is an initial experimental version to try out functionality
and performance, and **future versions of this API may change in ways that are incompatible with this experimental version**.

== Introduction

The CUDA backend supports `joint_matrix`, `joint_matrix_load`, `joint_matrix_store`, `joint_matrix_mad`, and `joint_matrix_fill` as they are defined in the sycl_ext_oneapi_matrix extension. Only the `row_major`, `col_major`, and `dynamic` layouts are supported in the CUDA backend. The complete set of `joint_matrix` types and shapes that are valid in the CUDA backend is listed in this document.
This extension presents some supplementary CUDA backend features not contained within the sycl_ext_oneapi_matrix extension. These additional features are built on top of the sycl_ext_oneapi_matrix extension but are only supported by the CUDA backend.

== Feature test macro

This extension provides a feature-test macro as described in the core SYCL
specification section 6.3.3 "Feature test macros". Therefore, an
implementation supporting this extension must predefine the macro
`SYCL_EXT_ONEAPI_MATRIX_CUDA` to one of the values defined in the table below.

[NOTE]
====
Review discussion from the pull request:

* *@dkhaldi (Nov 7, 2022):* What is the convention used here in the naming: should this be called `SYCL_EXT_ONEAPI_MATRIX_CUDA` or `SYCL_EXT_ONEAPI_CUDA_MATRIX`? I just looked at a CUDA-only feature. It is called `SYCL_EXT_ONEAPI_CUDA_ASYNC_BARRIER`.
* *@JackAKirk (author):* Good point, I'll make this change.
* *@JackAKirk (Nov 9, 2022):* BTW I will also update #5363 to reflect this extension. However I think I will wait until #7077 is merged before adding the final bmad implementation in #5363 on top of it.
====

Applications can test for the existence of this macro to determine if the
implementation supports this feature, or applications can test the macro's
value to determine which of the extension's APIs the implementation supports.

[frame="none",options="header"]
|======================
|Value |Description
|1 |Introduced `joint_matrix_bmad` and `get_wi_marray()`.
|======================
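
For example, application code can guard its use of this extension's APIs with the feature-test macro. A minimal sketch (the fallback path is left to the application):

```c++
#if defined(SYCL_EXT_ONEAPI_MATRIX_CUDA) && (SYCL_EXT_ONEAPI_MATRIX_CUDA >= 1)
  // joint_matrix_bmad and get_wi_marray() may be used here.
#else
  // Fall back to a code path that does not use this extension.
#endif
```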

=== Valid `joint_matrix` types and shapes

The complete set of matrix data types and shapes supported by the CUDA backend is listed in the following table. Tm indicates the element data type held by a "multiplicand" `joint_matrix`, i.e. one requiring `use::a` or `use::b`. Tc indicates the element data type held by an "accumulator" `joint_matrix`, i.e. one requiring `use::accumulator`.
--
[.center]
|======================
|Tm (`use::a` or `use::b`) |Tc (`use::accumulator`) |M |N |K | Minimum Compute Capability
.3+|half .3+|float
|16 |16 |16| sm_70
|8 |32 |16| sm_70
|32 |8 |16| sm_70
.3+|half .3+|half
|16 |16 |16| sm_70
|8 |32 |16| sm_70
|32 |8 |16| sm_70
.3+|int8_t .3+|int32_t
|16 |16 |16| sm_72
|8 |32 |16| sm_72
|32 |8 |16| sm_72
.3+|uint8_t .3+|int32_t
|16 |16 |16| sm_72
|8 |32 |16| sm_72
|32 |8 |16| sm_72
|precision::b1 |int32_t |8 |8 |128| sm_75 for XOR (deprecated), sm_80 for AND
|precision::tf32 |float |16 |16 |8| sm_80
.3+|bfloat16 .3+|float
|16 |16 |16 |sm_80
|8 |32 |16 |sm_80
|32 |8 |16 |sm_80
|double |double |8 |8 |4 |sm_80
|======================
--

The (M, N, K) triples in the above table define the complete set of matrix shapes that can be constructed:
--
[.center]
|======================
|use |NumRows | NumCols
|a |M |K
|b |K |N
|accumulator |M |N
|======================
--
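
For example, given the half/float 16x16x16 configuration from the first table (available from sm_70), the corresponding declarations would be as follows (a sketch, assuming `sg` is the calling sub-group and the `matrix` namespace is in scope):

```c++
joint_matrix<half, use::a, 16, 16, layout::row_major> sub_a(sg);  // M x K
joint_matrix<half, use::b, 16, 16, layout::row_major> sub_b(sg);  // K x N
joint_matrix<float, use::accumulator, 16, 16> sub_c(sg);          // M x N
```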


=== Binary Multiply and Add - `joint_matrix_bmad`

```c++
namespace sycl::ext::oneapi::experimental::matrix {

template <typename Group, std::size_t M, std::size_t K, std::size_t N,
          class BinaryOperation>
joint_matrix<int32_t, use::accumulator, M, N, layout::dynamic, Group>
joint_matrix_bmad(
    Group sg,
    joint_matrix<precision::b1, use::a, M, K, layout::row_major, Group> A,
    joint_matrix<precision::b1, use::b, K, N, layout::col_major, Group> B,
    joint_matrix<int32_t, use::accumulator, M, N, layout::dynamic, Group> C,
    BinaryOperation Op);

} // namespace sycl::ext::oneapi::experimental::matrix
```

Binary Multiply and Add (BMAD) operations replace the usual dot product between a row of the `use::a` matrix and a column of the `use::b` matrix. Instead, a sequence of logical operations is performed. There are two available logical operations, "AND" and "XOR". The chosen operation acts on the ith bit of a K-bit row of the `use::a` matrix and the ith bit of a K-bit column of the `use::b` matrix, producing a 128-bit intermediate output.
The Population Count (popc) operation then acts on this intermediate output, and the result is added to the (M, N)th element of the `use::accumulator` matrix.
An important difference with respect to the `joint_matrix_mad` interface is the additional `BinaryOperation Op` parameter. `Op` may be either:

`sycl::bit_and<precision::b1>()`

or

`sycl::bit_xor<precision::b1>()`
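
For reference, the effect of one BMAD on a single accumulator element can be sketched in plain C++ (illustrative only, not part of the API; `a_words` and `b_words` are hypothetical arrays holding the packed 32-bit words of one row of A and one column of B):

```c++
#include <bit>      // std::popcount (C++20)
#include <cstdint>

// One accumulator element for K = 128: four 32-bit words per row/column.
int32_t bmad_element(const uint32_t a_words[4], const uint32_t b_words[4],
                     int32_t c, bool use_and) {
  for (int w = 0; w < 4; ++w) {
    uint32_t intermediate = use_and ? (a_words[w] & b_words[w])
                                    : (a_words[w] ^ b_words[w]);
    c += std::popcount(intermediate);  // popc, then accumulate
  }
  return c;
}
```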

The A, B, and C `joint_matrix` objects are constructed and loaded/stored in the normal way, using the `joint_matrix`, `joint_matrix_load`, and `joint_matrix_store` interfaces defined in the sycl_ext_oneapi_matrix extension document.
The C matrix must be loaded from an array of 32-bit signed integers, and the single-bit A and B `joint_matrix` structs must be loaded from an array of unsigned 32-bit integers holding the packed binary matrix.
Each element of this array of unsigned 32-bit integers should contain 32 consecutive elements of a matrix row in packed format.
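
For illustration, a host-side sketch of packing one K-element row of a boolean matrix into the unsigned 32-bit words consumed by `joint_matrix_load` (this helper and the LSB-first bit order within each word are assumptions of the sketch; consult the PTX ISA documentation for the authoritative `b1` packing):

```c++
#include <cstdint>
#include <vector>

// Hypothetical helper: pack a row of 0/1 values, 32 elements per word.
std::vector<uint32_t> pack_row(const std::vector<uint8_t> &row) {
  std::vector<uint32_t> packed(row.size() / 32, 0);
  for (std::size_t i = 0; i < row.size(); ++i)
    if (row[i])
      packed[i / 32] |= 1u << (i % 32);  // assumed bit order
  return packed;
}
```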

IMPORTANT: When using Binary Multiply and Add, a `joint_matrix` of type `use::a` must have `layout::row_major`, and a `joint_matrix` of type `use::b` must have `layout::col_major`. In both cases the first template parameter of `joint_matrix` must be `precision::b1`.

IMPORTANT: Binary Multiply and Add operations are an experimental hardware feature and all implementation details are subject to change.

IMPORTANT: `joint_matrix_bmad` with `sycl::bit_xor<precision::b1>()` is deprecated.

==== Example using bitwise operations with `joint_matrix_bmad`

```c++
using namespace sycl::ext::oneapi::experimental::matrix;

queue q;
q.submit([&](handler &cgh) {
  auto accC = bufC.template get_access<access::mode::read_write>(cgh);
  auto accA = bufA.template get_access<access::mode::read_write>(cgh);
  auto accB = bufB.template get_access<access::mode::read_write>(cgh);
  auto accD = bufD.template get_access<access::mode::read_write>(cgh);

  range<2> LocalRange = {1, N_THREADS_PER_MATRIX_OP};
  range<2> GlobalRange = {Sub_Tiles_M, Sub_Tiles_N * N_THREADS_PER_MATRIX_OP};

  cgh.parallel_for<KernelName<M, K, N, BinaryOperation>>(
      nd_range<2>(GlobalRange, LocalRange), [=](nd_item<2> item) {
        sycl::sub_group sg = item.get_sub_group();
        // Row (m) and column (n) indices of the current sub-matrix of the
        // "big" C matrix.
        const auto m = item.get_group().get_id()[0];
        const auto n = item.get_group().get_id()[1];

        joint_matrix<precision::b1, use::a, M, K, layout::row_major> sub_a(sg);
        joint_matrix<precision::b1, use::b, K, N, layout::col_major> sub_b(sg);
        joint_matrix<int32_t, use::accumulator, M, N> sub_c(sg);

        joint_matrix_load(sg, sub_c,
                          accC.get_pointer() + (m * M) * Big_N + n * N, Big_N,
                          layout::row_major);
        // k indexes the current sub-matrix of the "big" A/B matrices.
        for (int k = 0; k < Sub_Tiles_K; k++) {
          joint_matrix_load(sg, sub_a,
                            accA.get_pointer() + (k * (K / 32)) +
                                (m * M * (Big_K / 32)),
                            Big_K);
          joint_matrix_load(sg, sub_b,
                            accB.get_pointer() + (n * N * (Big_K / 32)) +
                                (k * (K / 32)),
                            Big_K);
          sub_c = joint_matrix_bmad(sg, sub_a, sub_b, sub_c, Op);
        }
        joint_matrix_store(sg, sub_c,
                           accD.get_pointer() + (m * M) * Big_N + n * N, Big_N,
                           layout::row_major);
      });
});
```
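
Note that, since each `uint32_t` packs 32 `precision::b1` elements, the pointer offsets for the A and B loads in this example advance in units of 32-bit words (hence the `K / 32` and `Big_K / 32` factors), whereas the C and D matrices are addressed in whole `int32_t` elements.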

=== `get_wi_marray()`: `wi_data` as an `marray`

For Nvidia® Tensor Cores, the number of matrix elements owned by each work-item (WI) is known at compile time. The CUDA backend therefore introduces a new `joint_matrix` member function, `get_wi_marray()`, which returns a reference to an `marray` holding the portion of the `joint_matrix` owned by the calling work-item. This enables an operation to be performed identically on every element of the `joint_matrix` without requiring a loop. In {dpcpp} some math functions are optimized for `marray` arguments using vectorized instructions. An example is the `fma` SYCL math function, whose usage within the context of the matrix extension is given in the following code snippet:

```c++
joint_matrix<T, use::a, M, K, layout::row_major> sub_a(sg);
joint_matrix<T, use::b, K, N, layout::row_major> sub_b(sg);
joint_matrix<T, use::accumulator, M, N> sub_c(sg);
joint_matrix<T, use::accumulator, M, N> sub_d(sg);
joint_matrix_fill(sg, sub_a, -1);
joint_matrix_fill(sg, sub_b, -1);
joint_matrix_fill(sg, sub_c, -1);
sub_d.get_wi_marray() = fma(sub_a.get_wi_marray(), sub_b.get_wi_marray(), sub_c.get_wi_marray());
```
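
Element-wise access to the WI-owned data is also possible through the `marray` interface. For example, a sketch applying a ReLU to each element of the accumulator fragment (assuming `T` is a floating-point type):

```c++
auto &frag = sub_c.get_wi_marray();
for (std::size_t i = 0; i < frag.size(); ++i)
  frag[i] = frag[i] > T(0) ? frag[i] : T(0);  // ReLU on each WI-owned element
```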

IMPORTANT: `get_wi_marray()` is not available for `precision::b1`.

IMPORTANT: `marray` math functions will not be fully optimized for the CUDA backend until the 2023.3 release.


== Revision History

[frame="none",options="header"]
|======================
|Rev |Date |Author |Changes
|1 |2022-10-05 |Jack Kirk |Initial public working draft.
|======================