
Commit 58b9221

delete comments of get_cache_scale in compressed_tensors.py

Signed-off-by: kewang-xlnx <[email protected]>
1 parent: 11851bf

File tree: 1 file changed (+0, −4 lines)

vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py (0 additions, 4 deletions)

```diff
@@ -412,10 +412,6 @@ def get_scheme(
         self._check_scheme_supported(scheme.get_min_capability())
         return scheme

-    # move the get_compressed_tensors_cache_scale method from
-    # utils.py to instance method of CompressedTensorsConfig
-    # class. By doing this, different QuantizationConfig
-    # classes can implement their own get_cache_scale method.
     def get_cache_scale(self, name: str) -> Optional[str]:
         """
         Check whether the param name matches the format for k/v cache scales
```
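The deleted comment described why `get_cache_scale` became an instance method: each `QuantizationConfig` subclass can then override it with its own mapping from checkpoint parameter names to k/v cache scale attributes. A minimal sketch of that pattern (not vLLM's actual implementation; the class shapes and parameter-name patterns here are assumptions for illustration):

```python
from typing import Optional

class QuantizationConfig:
    """Base config: by default a quantization scheme exposes no k/v cache scales."""

    def get_cache_scale(self, name: str) -> Optional[str]:
        # Subclasses override this to recognize their own checkpoint naming.
        return None

class CompressedTensorsConfig(QuantizationConfig):
    """Hypothetical override mapping compressed-tensors param names to attention scales."""

    def get_cache_scale(self, name: str) -> Optional[str]:
        # Assumed naming convention for this sketch: "...k_proj.output_scale"
        # in the checkpoint maps onto "...attn.k_scale" on the attention layer.
        if name.endswith(".k_proj.output_scale"):
            return name.replace(".k_proj.output_scale", ".attn.k_scale")
        if name.endswith(".v_proj.output_scale"):
            return name.replace(".v_proj.output_scale", ".attn.v_scale")
        # Not a k/v cache scale parameter for this scheme.
        return None
```

Because dispatch happens through the config object, the weight-loading code can call `config.get_cache_scale(name)` uniformly without knowing which quantization scheme is active.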

0 commit comments
