
Apache TVM

An End-to-End Machine Learning Compiler Framework for CPUs, GPUs, and Accelerators

Learn More

Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend.

  • About Apache TVM

    The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. TVM provides the following main features:

    • Compilation of deep learning models into minimal deployable modules.
    • Infrastructure to automatically generate and optimize models on more backends with better performance.

Key Features & Capabilities

  • Performance

    Compilation and minimal runtimes commonly unlock ML workloads on existing hardware.

  • Run Everywhere

    CPUs, GPUs, browsers, microcontrollers, FPGAs and more.

    Automatically generate and optimize tensor operators on more backends.

  • Flexibility

    Need support for block sparsity, quantization (1,2,4,8 bit integers, posit), random forests/classical ML, memory planning, MISRA-C compatibility, Python prototyping or all of the above?

    TVM’s flexible design enables all of these things and more.

  • Ease of Use

    Compilation of deep learning models from Keras, MXNet, PyTorch, TensorFlow, CoreML, DarkNet, and more. Start using TVM with Python today; build out production stacks in C++, Rust, or Java the next day.