
uwsampl/3la-tvm

This is a fork of TVM that adds BYOC (Bring Your Own Codegen) integrations for the 3LA project.

Right now we have a VTA integration in `src/relay/backend/contrib/vta_matmul`. To build TVM with support for it, include the line `SET(USE_VTA_MATMUL ON)` in `build/config.cmake` before building; `USE_LLVM` and `USE_VTA_FSIM` should also be switched on. We have a test of this backend in `tests/python/relay/test_external_codegen.py` (see `test_extern_vta()`).
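
A minimal sketch of the relevant `build/config.cmake` lines (the exact `USE_LLVM` value depends on your LLVM installation; a path to `llvm-config` also works):

```cmake
# build/config.cmake -- flags needed for the VTA matmul BYOC backend
SET(USE_LLVM ON)
SET(USE_VTA_FSIM ON)
SET(USE_VTA_MATMUL ON)
```

After rebuilding TVM, the backend test can be run with pytest, e.g. `python -m pytest tests/python/relay/test_external_codegen.py -k test_extern_vta`.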

This version also uses a fork of the VTA repo that can dump execution logs. Try `vta/python/integration/matmul_tutorial.py` to exercise the dumping facility. VTA can be put into dumping mode by calling `vta.testing.simulator.dump_mode(True)`, and you can specify where the dump is written with `vta.testing.simulator.dump_target(path)`; the default is `./vta_sim_dump.json`. See the README in the VTA fork for a description of the dumping mode and the dump format.
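
A minimal sketch of enabling dumping before running a workload on the functional simulator (the output path here is just an example):

```python
import vta.testing.simulator as simulator

# Turn on dumping for subsequent simulator runs
simulator.dump_mode(True)

# Optionally redirect the dump; if omitted, it is written to
# ./vta_sim_dump.json (the path below is just an example)
simulator.dump_target("./vta_sim_dump.json")

# ... now run a VTA workload on the simulator, e.g. the matmul example
# in vta/python/integration/matmul_tutorial.py ...
```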

You can use `vta.testing.ila_converter.convert(dump_file, dest_file)` to convert a VTA simulator dump into an ILA program fragment.
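
A minimal sketch of the conversion step (the destination file name is just an example):

```python
from vta.testing import ila_converter

# Convert the simulator dump produced above into an ILA program fragment
ila_converter.convert("./vta_sim_dump.json", "./vta_ila_fragment.json")
```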

Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes


Apache TVM (incubating) is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.

License

© Contributors. Licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: Part of TVM's TIR and arithmetic simplification module originates from Halide. We also learned from and adapted parts of the lowering pipeline from Halide.
  • Loopy: use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration for the symbolic scan operator for recurrence.