
Conversation

@tmm1 (Contributor) commented on Jan 24, 2025

This updates the setuptools version used in the build pipeline to v75.8.0, which includes pypa/setuptools@f285d01 and generates Metadata-Version: 2.2 manifests for PyPI.
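A quick way to confirm the effect locally is to build an sdist with the updated toolchain and inspect the PKG-INFO it produces. The commands below are only a sketch: they assume the `build` package and the project's build prerequisites (e.g. torch) are installed, and the archive name is illustrative.

```bash
# Build a source distribution with whatever setuptools is currently installed.
python -m build --sdist

# Check which metadata version the generated PKG-INFO declares.
tar -xzf dist/flash_attn-*.tar.gz
grep '^Metadata-Version' flash_attn-*/PKG-INFO
# Older setuptools stops at "Metadata-Version: 2.1"; per the description
# above, 75.8.0 should emit "Metadata-Version: 2.2".
```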

With this change, it becomes easier to manage flash-attn installation with uv. See astral-sh/uv#6607 (comment):

for some reason, uv always tries to build flash-attn during sync, even though it was not requested

This is expected and required. flash-attn ships as a source distribution, and only at Metadata-Version: 2.1, so you must ask the build backend for its dependencies per the spec -- which in turn requires that its build dependencies are already installed. There's really nothing we can do about this -- it's a problem with the package. They need to upgrade to Metadata-Version: 2.2.

For more details on the current workarounds, see the uv docs, which include a whole section on dealing with issues when installing flash-attn.
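Until an sdist with Metadata-Version: 2.2 is published, the practical workaround is to pre-install the build dependencies and turn off build isolation for flash-attn only. A minimal sketch follows; the exact list of build-time packages is an assumption, not something this PR specifies.

```bash
# Make flash-attn's build-time imports (torch, packaging, etc.) available first.
uv pip install torch ninja packaging setuptools wheel psutil

# Then install flash-attn against the already-populated environment.
uv pip install flash-attn --no-build-isolation
```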

[Screenshot attached: 2025-01-23, 6:18 PM]

cc @tridao @charliermarsh

- pip install ninja packaging setuptools wheel twine
+ pip install ninja packaging wheel twine
+ # Install latest setuptools with support for pypi metadata 2.2 (improved compat w/ uv)
+ pip install setuptools==75.8.0


Personally I would use >= here but this is also reasonable.
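For reference, the reviewer's suggested variant would look like the line below; quoting keeps the shell from interpreting `>=`. A pin (`==`) keeps CI builds reproducible, while a floor (`>=`) picks up future setuptools metadata fixes automatically.

```bash
pip install "setuptools>=75.8.0"
```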

@tridao merged commit cd393e0 into Dao-AILab:main on Jan 29, 2025
LucasWilkinson pushed a commit to vllm-project/flash-attention that referenced this pull request Feb 11, 2025
LucasWilkinson added a commit to vllm-project/flash-attention that referenced this pull request Feb 11, 2025
* Support ROCM builds from source distribution, and improve error handling (Dao-AILab#1446)

* Always update both submodules to include them in sdist

Always update both submodules, irrespective of whether a CUDA
or a ROCM build is being done, to ensure that the necessary files
from both are present in the sdist. Otherwise, an attempt to perform a ROCM
build from the sdist fails because of missing `composable_kernel` sources.

* Include `*.py` files from composable_kernel in sdist

Include the `*.py` files from `csrc` in sdist, to ensure that
the `generate.py` script is present.

* Replace the `os.system()` calls in `setup.py` with `subprocess.run()`

* Add error checking to `subprocess.run()` calls in `setup.py`

Add error checking to ensure that `setup.py` fails immediately if one
of the commands fails. Otherwise, failures only produce messages
on stderr that can be missed, leading to more confusing errors
later in the build process.

* Call git in `setup.py` only when working in a git repository

Call git commands in `setup.py` only when the `.git` directory is
present, indicating that we are working in a git checkout.  Otherwise,
just assert that the needed files are there.  With this, building
from a source distribution no longer attempts to call git
in an incorrect directory.

* [Build] Update version of setuptools used to generate core package (Dao-AILab#1460)

* Don't compile for CUDA 11, compile for official pytorch 2.6.0

* Bump to v2.7.4

* Drop Pytorch 2.1

* [FA3] Compile with nvcc 12.8 instead of 12.3

* Fix comment in assert

* [CE] Assert logit_scale > 0

* Implement HeadDim_V != HeadDim_QK, support hdimQK=192, hdimV=128

* Fix shape_O in epilogue params when kHeadDimV != kHeadDim

* Remove old combine.h

* Fix loading paged V when kHeadDimV != kHeadDim

* Fix shape_V for storing new KV when kHeadDimV != kHeadDim

* Implement the case of LargeHeadDimV

* Rename Mma0->MmaQK, Mma1->MmaPV, use Cluster only if hdimV >= 192

* Pass _1 or _0 to cute::aligned_struct

* Fix compilation for FP8 when kHeadDimV != kHeadDim

* Support Qv

* Test varlen_q=True by default for kvcache

* Fix num_splits heuristic being called before get_pack_gqa

* Fix num_splits heuristic again when PackGQA

* Tile fwd_combine kernel along headdim, don't need kBlockM > 128

* Use bf16 instead of fp16 in benchmark_gemm.py

* Update Cutlass to 3.7

* Use nvcc 12.6 but ptxas 12.8

* cicc uses the same version as ptxas

* Split hdimdiff into a separate translation unit

* Update benchmark script

* Update Cutlass to 3.8

* Adjust tile size for hdim 64

* Adjust ninja build file

* build head diff + fix build errors

Signed-off-by: Lucas Wilkinson <[email protected]>

---------

Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Michał Górny <[email protected]>
Co-authored-by: Aman Karmani <[email protected]>
Co-authored-by: Tri Dao <[email protected]>
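The sdist-related fixes described at the top of the commit message above boil down to the following logic. This is only a rough shell outline for illustration; the actual changes live in `setup.py`, and the submodule paths are assumed from the flash-attention repository layout.

```bash
# In a git checkout, always fetch both submodules so the resulting sdist can
# serve either a CUDA or a ROCM build. From an sdist, skip git entirely and
# just verify that the vendored sources are present, failing loudly if not.
if [ -d .git ]; then
    git submodule update --init csrc/cutlass csrc/composable_kernel
else
    [ -d csrc/cutlass ] && [ -d csrc/composable_kernel ] || {
        echo "error: vendored submodule sources missing from sdist" >&2
        exit 1
    }
fi
```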
LucasWilkinson added a commit to vllm-project/flash-attention that referenced this pull request Mar 20, 2025
* Rename collective_mainloop -> mainloop, move tile_scheduler variable

* Move functions getting number of m/n blocks to a separate file

* Update cutlass 3.8 to fix error w cudaGetDriverEntryPointByVersion

* Fix FP8 test

* make seqused optional on top level interface (Dao-AILab#1497)

* Temporarily change package name of FA3 to allow FA2 & FA3 install

* Update benchmark_split_kv.py to work w new API

* Add tp_degree to benchmark_split_kv

* Fix divide by 0 in causal tile_scheduler for large seqlen

* Use split for super long sequences that don't fit into L2

* Make rotary test optional in FA3

* Enable MLA flag in FA3 (rope=64, latent=512) (Dao-AILab#1504)

* Enable MLA flag in FA3 (rope=64, latent=512)

* updated HasQv in flash_fwd_launch_template.h

* Add simple script to benchmark MLA decode

* Add dynamic splits

* Update to Cutlass 3.8.0 tag

* Adjust seqlen_q in MLA decode benchmark script

* Fix loop in prepare_scheduler.cu (h/t Jay Shah)

Only affects the case where batch size > 256

* fix: add "typename" prior to dependent type name (Dao-AILab#1517)

This project uses C++17, which still has this requirement.

Signed-off-by: Jiang, Zhiwei <[email protected]>

* Add FLOPS to MLA decode benchmark

* Change margin in prepare_scheduler.cu from 20% to 10%

* Fix cuda 12.1 build (Dao-AILab#1511)

Signed-off-by: Lucas Wilkinson <[email protected]>

* Don't use IntraWGOverlap for hdim 64,512

* Remove sink token

It wasn't working anyway

* fix: prompt index to type longlong to avoid numerical overflow (Dao-AILab#1500)

* Add option for WG1 to use RS MMA but WG2 using SS MMA

* Add kwargs to _write_ninja_file for compatibility with new torch

* Move writing P to smem as separate function

* Fix causal scheduler not considering hdim_v != hdim

* Always split fwd_combine_kernel on batch

* For each batch, if num_splits=1, write to O instead of O_partial

* Enable TMA when page size is a multiple of kBlockN

* Update ptxas to 12.8.93 (i.e. 12.8.1)

* Use tile size 192 x 128 for hdim 64 causal

* Update benchmark_mla_decode.py

* Benchmark MHA, GQA, MQA, MLA in the same script

* Benchmark FlashMLA if it's available

* Run all 4 attn variants in benchmark

* Move scheduler.get_next_work to before the epilogue

* Enable Cluster for hdim128 back

* Move tOrO init in mainloop

* Adjust heuristic for get_pagedkv_tma

* Enable PDL

* Simplify prepare_varlen_num_blocks_kernel, restrict to batch <= 992

* Fix: num_splits_dynamic_ptr needs to be set before get_num_splits

* Loop on num_splits instead of parameterizing it in kvcache test

* Add option to precompute scheduler metadata

* Update MLA decode benchmark to use get_scheduler_metadata

* Fix FP8 test to quantize KV cache for reference impl as well

* Dynamic autotune configs for devices with warp size != 32 (Dao-AILab#1534)

Generate a list of autotune configs based on the device warp size, to avoid a Triton error when the maximum number of threads per block is exceeded.

* update binding

Signed-off-by: Lucas Wilkinson <[email protected]>

---------

Signed-off-by: Jiang, Zhiwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Michał Górny <[email protected]>
Co-authored-by: Aman Karmani <[email protected]>
Co-authored-by: Tri Dao <[email protected]>
Co-authored-by: Anton Vlasjuk <[email protected]>
Co-authored-by: Ted Zadouri <[email protected]>
Co-authored-by: Jiang, Zhiwei <[email protected]>
Co-authored-by: xin-w8023 <[email protected]>
Co-authored-by: schung-amd <[email protected]>
tlrmchlsmth pushed a commit to vllm-project/flash-attention that referenced this pull request Apr 10, 2025
* Add option for rotary_seqlens

* Use StreamkBarrier0/1 barriers instead of TileCountSmemEmpty/Full

* Update Cutlass to 3.9

* Support hdim 64,256

* Update benchmark with GLA

* Adjust warp scheduler sync for HasQv case

* num_head -> args.num_head (Dao-AILab#1552)

Signed-off-by: Ye (Charlotte) Qi <[email protected]>

* Fix zeroing out the scheduler semaphore when reusing metadata

* fix deprecation warning for newer torch versions (Dao-AILab#1565)

* Don't use FusedDense anymore to simplify code

* Fix FA3 qkvpacked interface

* Launch more thread blocks in layer_norm_bwd

* check valid tile before storing num_splits in split_idx (Dao-AILab#1578)

* Tune rotary kernel to use 2 warps if rotary_dim <= 64

* update api

Signed-off-by: Lucas Wilkinson <[email protected]>

---------

Signed-off-by: Jiang, Zhiwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: Michał Górny <[email protected]>
Co-authored-by: Aman Karmani <[email protected]>
Co-authored-by: Tri Dao <[email protected]>
Co-authored-by: Anton Vlasjuk <[email protected]>
Co-authored-by: Ted Zadouri <[email protected]>
Co-authored-by: Jiang, Zhiwei <[email protected]>
Co-authored-by: xin-w8023 <[email protected]>
Co-authored-by: schung-amd <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: jayhshah <[email protected]>
playerzer0x pushed a commit to Liqhtworks/flash-attention that referenced this pull request Jul 24, 2025