r"""
This module exposes a TunableOp interface.

Some operations, such as GEMMs, could be implemented using more than one library
or more than one technique. For example, a GEMM could be implemented for CUDA or
ROCm using either the blas or blasLt libraries. Further, ROCm's rocblas and
hipblaslt libraries allow the user to query for all possible algorithms and then
choose one. How does one know which implementation is the fastest and should be
chosen? That's what TunableOp provides.

Enabling TunableOp and Tuning Separately
========================================

The TunableOp feature is enabled separately from enabling the tuning phase
itself. Enabling TunableOp means that PyTorch will replace any standard
operators with their Tunable implementations. Any call to a TunableOp first
checks whether it has already been tuned for the given operator inputs. If so,
it will immediately call the tuned operation; no further tuning will take place
even when the tuning setting is enabled. Instead, if no tuning result is found
and tuning is enabled, the TunableOp will benchmark every registered
implementation of that operator for the given set of inputs and select the
fastest.
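
For example, the two switches can be driven independently through the Python
API exposed by this module; a minimal sketch::

    import torch.cuda.tunable as tunable

    # Turn the TunableOp feature on; tunable implementations replace the
    # standard operators from this point onward.
    tunable.enable(True)

    # Allow new tunings to be recorded when an untuned set of inputs is seen.
    tunable.tuning_enable(True)

    # Alternatively, keep TunableOp enabled but reuse only existing tunings:
    # tunable.tuning_enable(False)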

File Input and Output
=====================

The first time any TunableOp is invoked, the internal database of tuned
operations will be prepared by attempting to read the results from the given
file. The default filename is 'tunableop_results.csv'. To support tuning when
multiple GPUs are used across multiple processes, the GPU device ordinal is
automatically inserted into the filename to avoid multiple processes overwriting
the same file.

If tuning is enabled and new tunings are discovered during the course of your
workload, it will also write out to this same filename with all tunings, both
the ones it read in at startup as well as the new ones found at runtime. This
can be used, for example, to build up a tunings file across many workloads by
reusing the same file. The output file is automatically created when the
application terminates. This behavior can be controlled by the C++ and Python
APIs but not the environment variables.
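
The filename and the write-on-exit behavior can be controlled from Python; a
minimal sketch (the filename below is only an example)::

    import torch.cuda.tunable as tunable

    # One file per process; insert_device_ordinal=True appends the current
    # GPU ordinal so concurrent processes do not overwrite each other.
    tunable.set_filename("my_tunings.csv", insert_device_ordinal=True)

    # Write the accumulated results when the tuning context is destroyed ...
    tunable.write_file_on_exit(True)

    # ... or flush them explicitly at any point.
    tunable.write_file()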

Assuming you specified a filename, you'll end up with a CSV file with contents
like so::

  Validator,PT_VERSION,2.2.0
  Validator,ROCM_VERSION,6.0.0.0-12969-1544e39
  Validator,HIPBLASLT_VERSION,0.6.0-a9c5cc7
  Validator,ROCBLAS_VERSION,4.0.0-72e57364-dirty
  GemmTunableOp_float_NT,nt_25088_4096_64,1219,1.262
  GemmTunableOp_float_NT,nt_4096_4096_64,1216,0.033

Note the "Validator" lines. If you change a library version, or ROCm version, or
PyTorch version, TunableOp will detect this and reject the tunings file because
the prior tunings are likely affected by other software changes.

The remaining lines are the tuned solutions for each TunableOp encountered
during your execution. Each line consists of 4 comma-separated fields: operator
name, operator parameters, solution name, and average execution time. The
execution time is an optional field. The CSV file can be edited, but with
caution. For example, the solution name (field 3) can be changed to "Default"
and it will fall back to the original PyTorch untuned implementation. Or, in the
case of ROCm's hipBLAS or hipBLASLt libraries, if you know the specific solution
index you can override the solution that TunableOp selected by replacing the
value. The operator name and parameters (fields 1 and 2) are internally named
and should not be modified. In the case of GemmTunableOp, field 1 indicates the
datatype and whether the inputs are transposed (T) or not (N) and field 2
indicates the M, N, K input shapes.
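
A previously written file can also be loaded and inspected through the Python
API; a minimal sketch (the filename is illustrative)::

    import torch.cuda.tunable as tunable

    # Load tunings produced by an earlier run; returns False on failure.
    ok = tunable.read_file("tunableop_results0.csv")

    # Inspect what was loaded: the validator entries and the tuned solutions.
    print(tunable.get_validators())
    print(tunable.get_results())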

There is an option to enable verbose output but it is only recommended for
debugging purposes. This will produce a lot of diagnostic messages but may be
useful to see if TunableOp is being used at all. Otherwise, TunableOp is
completely silent, besides file output, unless there is a warning or error
during its use. The verbose option is only available by setting the environment
variable PYTORCH_TUNABLEOP_VERBOSE=1.

A Note on Tuning Behavior
=========================

Tuning an operator consists of iterating through the list of registered
implementations and profiling each one. The profile is established by running a
single implementation in a loop multiple times and taking the average execution
time.

By default, each possible solution for a given operator will be run for either
100 iterations or as many iterations as can be run within 30ms, whichever is
smaller, and its average execution time will be calculated. The fastest solution
among all that were successfully profiled will be chosen. A profile might fail
if the given solution doesn't achieve the same accuracy as the default
implementation or if the solution returns an error code.
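
Both limits can be adjusted through the Python API; a minimal sketch::

    import torch.cuda.tunable as tunable

    # Cap each candidate solution at 50 iterations ...
    tunable.set_max_tuning_iterations(50)

    # ... or at 20 milliseconds, whichever limit is hit first.
    tunable.set_max_tuning_duration(20)

    # Environment variables, if set, take precedence over these calls.
    print(tunable.get_max_tuning_iterations(), tunable.get_max_tuning_duration())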

Current Tunable Operators
=========================

TunableGemm for ROCm
--------------------

Currently only a TunableGemm for ROCm is implemented. Note that CUDA builds of
PyTorch will function correctly when using TunableOp but the only solution
available to CUDA builds is the 'Default' implementation i.e. the original
cuBLAS default, now called through TunableOp. Any call to at::cuda::blas::gemm()
or ::bgemm() will be routed through TunableOp when enabled. Calling gemm() for a
given set of input arguments (transa, transb, m, n, k) will attempt to use the
fastest available implementation across both rocblas and hipblaslt.
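
From Python, this means an ordinary matrix multiply on GPU tensors reaches the
tunable GEMM once the feature is enabled; a minimal sketch (assumes a CUDA- or
ROCm-capable device is available)::

    import torch
    import torch.cuda.tunable as tunable

    tunable.enable(True)
    tunable.tuning_enable(True)

    a = torch.randn(1024, 512, device="cuda")
    b = torch.randn(512, 256, device="cuda")

    # The first matmul with this shape/dtype combination is benchmarked across
    # the registered implementations; later calls reuse the chosen solution.
    c = a @ b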

Tuning Context
==============

The behavior of TunableOp is currently manipulated through environment
variables, the C++ interface of at::cuda::tunable::getTuningContext(), or the
torch.cuda.tunable Python interfaces that wrap the C++ TuningContext. The
environment variables take precedence over any setting you manipulate using the
C++ or Python APIs.
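
The Python wrappers can also be used to read back the effective settings at
runtime; a minimal sketch::

    import torch.cuda.tunable as tunable

    # Query the current state; environment variables, if set, override any
    # values configured through these wrappers.
    print(tunable.is_enabled())
    print(tunable.tuning_is_enabled())
    print(tunable.get_max_tuning_iterations())
    print(tunable.get_filename())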
"""

from typing import Optional, Tuple

import torch

__all__ = [
    "enable",
    "is_enabled",
    "tuning_enable",
    "tuning_is_enabled",
    "set_max_tuning_duration",
    "get_max_tuning_duration",
    "set_max_tuning_iterations",
    "get_max_tuning_iterations",
    "set_filename",
    "get_filename",
    "get_results",
    "get_validators",
    "write_file_on_exit",
    "write_file",
    "read_file",
]


def enable(val: bool = True) -> None:
    r"""This is the big on/off switch for all TunableOp implementations."""
    torch._C._cuda_tunableop_enable(val)  # type: ignore[attr-defined]


def is_enabled() -> bool:
    r"""Returns whether the TunableOp feature is enabled."""
    return torch._C._cuda_tunableop_is_enabled()  # type: ignore[attr-defined]


def tuning_enable(val: bool = True) -> None:
    r"""Enable tuning of TunableOp implementations.

    When enabled, if a tuned entry isn't found, run the tuning step and record
    the entry.
    """
    torch._C._cuda_tunableop_tuning_enable(val)  # type: ignore[attr-defined]


def tuning_is_enabled() -> bool:
    r"""Returns whether TunableOp implementations can be tuned."""
    return torch._C._cuda_tunableop_tuning_is_enabled()  # type: ignore[attr-defined]


def set_max_tuning_duration(duration: int) -> None:
    r"""Set max time in milliseconds to spend tuning a given solution.

    If both max tuning duration and iterations are set, the smaller of the two
    will be honored. At minimum 1 tuning iteration will always be run.
    """
    torch._C._cuda_tunableop_set_max_tuning_duration(duration)  # type: ignore[attr-defined]


def get_max_tuning_duration() -> int:
    r"""Get max time to spend tuning a given solution."""
    return torch._C._cuda_tunableop_get_max_tuning_duration()  # type: ignore[attr-defined]


def set_max_tuning_iterations(iterations: int) -> None:
    r"""Set max number of iterations to spend tuning a given solution.

    If both max tuning duration and iterations are set, the smaller of the two
    will be honored. At minimum 1 tuning iteration will always be run.
    """
    torch._C._cuda_tunableop_set_max_tuning_iterations(iterations)  # type: ignore[attr-defined]


def get_max_tuning_iterations() -> int:
    r"""Get max iterations to spend tuning a given solution."""
    return torch._C._cuda_tunableop_get_max_tuning_iterations()  # type: ignore[attr-defined]


def set_filename(filename: str, insert_device_ordinal: bool = False) -> None:
    r"""Set the filename to use for input/output of tuning results.

    If :attr:`insert_device_ordinal` is ``True`` then the current device ordinal
    will be added to the given filename automatically. This can be used in a
    1-process-per-gpu scenario to ensure all processes write to a separate file.
    """
    torch._C._cuda_tunableop_set_filename(filename, insert_device_ordinal)  # type: ignore[attr-defined]


def get_filename() -> str:
    r"""Get the results filename."""
    return torch._C._cuda_tunableop_get_filename()  # type: ignore[attr-defined]


def get_results() -> Tuple[str, str, str, float]:
    r"""Return all TunableOp results."""
    return torch._C._cuda_tunableop_get_results()  # type: ignore[attr-defined]


def get_validators() -> Tuple[str, str]:
    r"""Return the TunableOp validators."""
    return torch._C._cuda_tunableop_get_validators()  # type: ignore[attr-defined]


def write_file_on_exit(val: bool) -> None:
    r"""During Tuning Context destruction, write file to disk.

    This is useful as a final flush of your results to disk if your application
    terminates as a result of normal operation or an error. Manual flushing of
    your results can be achieved by manually calling ``write_file()``.
    """
    torch._C._cuda_tunableop_write_file_on_exit(val)  # type: ignore[attr-defined]


def write_file(filename: Optional[str] = None) -> bool:
    r"""Write results to a CSV file.

    If :attr:`filename` is not given, ``get_filename()`` is called.
    """
    if filename is None:
        filename = get_filename()
    return torch._C._cuda_tunableop_write_file(filename)  # type: ignore[attr-defined]


def read_file(filename: Optional[str] = None) -> bool:
    r"""Read results from a TunableOp CSV file.

    If :attr:`filename` is not given, ``get_filename()`` is called.
    """
    if filename is None:
        filename = get_filename()
    return torch._C._cuda_tunableop_read_file(filename)  # type: ignore[attr-defined]