Kineto is part of the PyTorch Profiler.
The Kineto project enables:
- performance observability and diagnostics across common ML bottleneck components
- actionable recommendations for common issues
- integration of external system-level profiling tools
- integration with popular visualization platforms and analysis pipelines
A central component is Libkineto, an in-process profiling library with a special focus on low-overhead GPU timeline tracing. Libkineto is integrated with the PyTorch Profiler; please refer to the README file in the libkineto folder as well as the documentation for the PyTorch Profiler API.
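As a rough sketch of how libkineto is typically exercised: the PyTorch Profiler API drives it under the hood when GPU activity tracing is requested. The model, tensor sizes, and output file name below are illustrative, not part of Kineto itself:

```python
import torch
from torch.profiler import ProfilerActivity, profile

# Illustrative workload; any model or op sequence works the same way.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(64, 512, device=device)

# Tracing CUDA activity is what routes through libkineto.
activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities) as prof:
    for _ in range(10):
        model(x)

# Summarize on stdout and export a Chrome trace ("trace.json" is an
# example path); the JSON can be opened in chrome://tracing or Perfetto.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
prof.export_chrome_trace("trace.json")
```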
Holistic Trace Analysis (HTA) is an open source performance debugging library aimed at distributed workloads. HTA takes PyTorch Profiler traces as input and surfaces the performance bottlenecks to enable faster debugging. A partial list of features in HTA:
- Temporal Breakdown: Breakdown of GPU time into time spent on computation, communication, and memory events, plus idle time, on a single node and across all ranks.
- Idle Time Breakdown: Breakdown of GPU idle time into waiting for the host, waiting for another kernel, and time attributed to an unknown cause.
- Kernel Breakdown: The kernels with the longest duration on each rank.
- Kernel Duration Distribution: Distribution of the average duration of the longest kernels across different ranks.
- Communication Computation Overlap: Percentage of time during which communication overlaps computation.
For a complete list of features, see the HTA documentation.
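As a brief sketch of how these analyses are invoked: HTA's entry point is the TraceAnalysis class, pointed at a directory of per-rank PyTorch Profiler traces. The trace path below is a placeholder, and the method names follow HTA's documented API; check the HTA documentation for the exact set available in your version:

```python
from hta.trace_analysis import TraceAnalysis

# Directory containing one PyTorch Profiler JSON trace per rank
# (the path is a placeholder).
analyzer = TraceAnalysis(trace_dir="/path/to/trace/folder")

# Temporal breakdown: computation vs. communication vs. idle time per rank.
time_df = analyzer.get_temporal_breakdown()

# Communication/computation overlap percentage per rank.
overlap_df = analyzer.get_comm_comp_overlap()
```

Each call returns a pandas DataFrame, so results can be filtered and compared across ranks with ordinary DataFrame operations.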
The goal of the PyTorch TensorBoard Profiler is to provide a seamless and intuitive end-to-end profiling experience, including straightforward collection from PyTorch and insightful visualizations and recommendations in the TensorBoard UI.
Please refer to the README file in the tb_plugin folder.
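For illustration, the profiler's built-in tensorboard_trace_handler streams traces into a log directory that the plugin can read; the log path and toy workload below are placeholders:

```python
import torch
from torch.profiler import (
    ProfilerActivity,
    profile,
    schedule,
    tensorboard_trace_handler,
)

model = torch.nn.Linear(128, 128)
x = torch.randn(32, 128)

# The schedule skips one step, warms up for one, then records three;
# traces land in ./log (a placeholder path) in the plugin's format.
with profile(
    activities=[ProfilerActivity.CPU],
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=tensorboard_trace_handler("./log"),
) as prof:
    for _ in range(6):
        model(x)
        prof.step()  # advance the profiler schedule each iteration
```

With the plugin installed (pip install torch-tb-profiler), the result can be viewed via tensorboard --logdir ./log.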
Some areas we're currently working on:
- Support for tracing distributed workloads
- Trace processing, analysis and recommendation engine
- System-level activities, multiple tracing sources
- Profiling and monitoring daemon for larger scale deployments
We follow the PyTorch release schedule, which happens roughly every three months.
We appreciate all contributions. If you are planning to contribute bug fixes, please do so without any further discussion.
If you plan to contribute new features, please first open an issue and discuss the feature with us. Sending a PR without prior discussion may result in its rejection, because we may be taking the infrastructure in a direction you are not aware of. We expect the architecture to keep evolving.
Kineto has a BSD-style license, as found in the LICENSE file.