csarron/awesome-emdl


Awesome EMDL

Embedded and mobile deep learning research notes.

Papers

Survey

  1. EfficientDNNs [Repo]
  2. Awesome ML Model Compression [Repo]
  3. TinyML Papers and Projects [Repo]
  4. TinyML Platforms Benchmarking [arXiv '21]
  5. TinyML: A Systematic Review and Synthesis of Existing Research [ICAIIC '21]
  6. TinyML Meets IoT: A Comprehensive Survey [Internet of Things '21]
  7. A review on TinyML: State-of-the-art and prospects [Journal of King Saud Univ. '21]
  8. TinyML Benchmark: Executing Fully Connected Neural Networks on Commodity Microcontrollers [IEEE '21]
  9. Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better [arXiv '21]
  10. Benchmarking TinyML Systems: Challenges and Direction [arXiv '20]
  11. Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey [IEEE '20]
  12. The Deep Learning Compiler: A Comprehensive Survey [arXiv '20]
  13. Recent Advances in Efficient Computation of Deep Convolutional Neural Networks [arXiv '18]
  14. A Survey of Model Compression and Acceleration for Deep Neural Networks [arXiv '17]

Model

  1. EtinyNet: Extremely Tiny Network for TinyML [AAAI '22]
  2. MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning [NeurIPS '21, MIT]
  3. SkyNet: a Hardware-Efficient Method for Object Detection and Tracking on Embedded Systems [MLSys '20, IBM]
  4. Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets [NeurIPS '20, Huawei]
  5. MCUNet: Tiny Deep Learning on IoT Devices [NeurIPS '20, MIT]
  6. GhostNet: More Features from Cheap Operations [CVPR '20, Huawei]
  7. MicroNet for Efficient Language Modeling [NeurIPS '19, MIT]
  8. Searching for MobileNetV3 [ICCV '19, Google]
  9. MobileNetV2: Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation [CVPR '18, Google]
  10. ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware [arXiv '18, MIT]
  11. DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices [AAAI'18, Samsung]
  12. NasNet: Learning Transferable Architectures for Scalable Image Recognition [arXiv '17, Google]
  13. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices [arXiv '17, Megvii]
  14. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications [arXiv '17, Google]
  15. CondenseNet: An Efficient DenseNet using Learned Group Convolutions [arXiv '17]

System

  1. BSC: Block-based Stochastic Computing to Enable Accurate and Efficient TinyML [ASP-DAC '22]
  2. CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (tinyML) Acceleration on FPGAs [arXiv '22, Google]
  3. UDC: Unified DNAS for Compressible TinyML Models [arXiv '22, Arm]
  4. AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator [arXiv '21, Arm]
  5. TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning [NeurIPS '20, MIT]
  6. Once for All: Train One Network and Specialize it for Efficient Deployment [ICLR '20, MIT]
  7. DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications [MobiSys '17]
  8. DeepEye: Resource Efficient Local Execution of Multiple Deep Vision Models using Wearable Commodity Hardware [MobiSys '17]
  9. MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU [EMDL '17]
  10. fpgaConvNet: A Toolflow for Mapping Diverse Convolutional Neural Networks on Embedded FPGAs [NIPS '17]
  11. DeepSense: A GPU-based deep convolutional neural network framework on commodity mobile devices [WearSys '16]
  12. DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices [IPSN '16]
  13. EIE: Efficient Inference Engine on Compressed Deep Neural Network [ISCA '16]
  14. MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing Under Resource Constraints [MobiSys '16]
  15. DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit [MobiCASE '16]
  16. Sparsification and Separation of Deep Learning Layers for Constrained Resource Inference on Wearables [SenSys '16]
  17. An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices [IoT-App '15]
  18. CNNdroid: GPU-Accelerated Execution of Trained Deep Convolutional Neural Networks on Android [MM '16]

Quantization

  1. Quantizing deep convolutional networks for efficient inference: A whitepaper [arXiv '18]
  2. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks [ECCV'18]
  3. Training and Inference with Integers in Deep Neural Networks [ICLR'18]
  4. The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning [ICML'17]
  5. Loss-aware Binarization of Deep Networks [ICLR'17]
  6. Towards the Limit of Network Quantization [ICLR'17]
  7. Deep Learning with Low Precision by Half-wave Gaussian Quantization [CVPR'17]
  8. ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks [arXiv'17]
  9. Quantized Convolutional Neural Networks for Mobile Devices [CVPR '16]
  10. Fixed-Point Performance Analysis of Recurrent Neural Networks [ICASSP'16]
  11. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations [arXiv'16]
  12. Compressing Deep Convolutional Networks using Vector Quantization [arXiv'14]
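
Most of the papers above build on uniform affine quantization: floating-point values are mapped to low-bit integers through a scale and a zero-point, as described in the Google whitepaper in entry 1. A minimal sketch of the 8-bit case in plain Python (`quantize`/`dequantize` are illustrative helpers, not code from any listed framework):

```python
def quantize(values, num_bits=8):
    """Uniform affine quantization: map floats to unsigned num_bits-wide ints."""
    qmin, qmax = 0, (1 << num_bits) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid div-by-zero for constant inputs
    zero_point = round(qmin - lo / scale)
    # round to nearest integer, then clamp into the representable range
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize(weights)
approx = dequantize(q, s, z)
```

Per-element reconstruction error is bounded by roughly half the scale, which is why 8-bit quantization usually costs little accuracy while cutting model size 4x versus float32.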

Pruning

  1. Awesome-Pruning [Repo]
  2. Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration [CVPR'19]
  3. To prune, or not to prune: exploring the efficacy of pruning for model compression [ICLR'18]
  4. Pruning Filters for Efficient ConvNets [ICLR'17]
  5. Pruning Convolutional Neural Networks for Resource Efficient Inference [ICLR'17]
  6. Soft Weight-Sharing for Neural Network Compression [ICLR'17]
  7. Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning [CVPR'17]
  8. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression [ICCV'17]
  9. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding [ICLR'16]
  10. Dynamic Network Surgery for Efficient DNNs [NIPS'16]
  11. Learning both Weights and Connections for Efficient Neural Networks [NIPS'15]
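
The baseline technique behind several entries above (e.g. Learning both Weights and Connections, Deep Compression) is magnitude pruning: weights with the smallest absolute values are zeroed on the assumption that they contribute least to the output. A toy sketch in plain Python (`magnitude_prune` is an illustrative helper, not an API from any listed paper):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # threshold = magnitude of the n_prune-th smallest weight;
    # ties at the threshold may prune slightly more than requested
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], sparsity=0.5)
# the three smallest-magnitude weights (-0.05, 0.01, 0.2) are zeroed
```

In practice this runs iteratively with retraining between pruning rounds; structured variants (filter/channel pruning, as in ThiNet above) remove whole units so the speedup needs no sparse-matrix support.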

Approximation

  1. High performance ultra-low-precision convolutions on mobile devices [NIPS'17]
  2. Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications [ICLR'16]
  3. Efficient and Accurate Approximations of Nonlinear Convolutional Networks [CVPR'15]
  4. Accelerating Very Deep Convolutional Networks for Classification and Detection (extended version of the entry above) [TPAMI '16]
  5. Convolutional neural networks with low-rank regularization [arXiv'15]
  6. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation [NIPS'14]
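
The low-rank papers above exploit the observation that a trained weight matrix W can often be replaced by a rank-r product U·V with far fewer parameters and multiply-adds. A sketch of SVD-based truncation, assuming NumPy is available (`low_rank_factorize` is an illustrative helper):

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as U_r @ V_r with U_r (m x rank), V_r (rank x n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]  # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(0)
# a matrix that is genuinely rank 2, so a rank-2 factorization recovers it exactly
W = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
U_r, V_r = low_rank_factorize(W, rank=2)
# parameter count drops from 8*6 = 48 to 8*2 + 2*6 = 28
assert np.allclose(U_r @ V_r, W)
```

For real weight matrices the singular values decay rather than vanish, so the truncation is lossy; the papers above choose the rank per layer to trade accuracy against compute.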

Characterization

  1. A First Look at Deep Learning Apps on Smartphones [WWW'19]
  2. Machine Learning at Facebook: Understanding Inference at the Edge [HPCA'19]
  3. NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications [ECCV 2018]
  4. Latency and Throughput Characterization of Convolutional Neural Networks for Mobile Computer Vision [MMSys '18]

Libraries

Inference Framework

  1. Alibaba - MNN - is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba.
  2. Apple - Core ML - lets you integrate machine learning models into your app. BERT and GPT-2 on iPhone
  3. Arm - ComputeLibrary - is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies. Intro
  4. Arm - Arm NN - is the most performant machine learning (ML) inference engine for Android and Linux, accelerating ML on Arm Cortex-A CPUs and Arm Mali GPUs.
  5. Baidu - Paddle Lite - is a multi-platform, high-performance deep learning inference engine.
  6. DeepLearningKit - is an open-source deep learning framework for Apple's iOS, OS X, and tvOS.
  7. Edge Impulse - Interactive platform to generate models that can run on microcontrollers. They are also quite active on social networks, covering recent EdgeAI/TinyML news.
  8. Google - TensorFlow Lite - is an open source deep learning framework for on-device inference.
  9. Intel - OpenVINO - Comprehensive toolkit for optimizing and deploying deep learning models for faster inference on Intel hardware.
  10. JDAI Computer Vision - dabnn - is an accelerated inference framework for binary neural networks on mobile platforms.
  11. Meta - PyTorch Mobile - is a new framework for helping mobile developers and machine learning engineers embed PyTorch ML models on-device.
  12. Microsoft - DeepSpeed - is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  13. Microsoft - ELL - allows you to design and deploy intelligent machine-learned models onto resource constrained platforms and small single-board computers, like Raspberry Pi, Arduino, and micro:bit.
  14. Microsoft - ONNX Runtime - is a cross-platform, high-performance ML inferencing and training accelerator.
  15. Nvidia - TensorRT - is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
  16. OAID - Tengine - is a lightweight, high-performance, modular inference engine for embedded devices.
  17. Qualcomm - Neural Processing SDK for AI - Libraries that let developers run neural network models on Snapdragon mobile platforms, taking advantage of the CPU, GPU, and/or DSP.
  18. Tencent - ncnn - is a high-performance neural network inference framework optimized for the mobile platform.
  19. uTensor - AI inference library based on mbed (an RTOS for ARM chipsets) and TensorFlow.
  20. XiaoMi - Mace - is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
  21. xmartlabs - Bender - Easily craft fast Neural Networks on iOS! Use TensorFlow models. Metal under the hood.

Optimization Tools

  1. Neural Network Distiller - Python package for neural network compression research.
  2. PocketFlow - An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.

Research Demos

  1. RSTensorFlow - GPU Accelerated TensorFlow for Commodity Android Devices.

Web

  1. mil-tokyo/webdnn - Fastest DNN Execution Framework on Web Browser.

General

  1. Caffe2 AICamera
  2. TensorFlow Android Camera Demo
  3. TensorFlow iOS Example
  4. TensorFlow OpenMV Camera Module

Edge / Tiny MLOps

  1. Tiny-MLOps: a framework for orchestrating ML applications at the far edge of IoT systems [EAIS '22]
  2. MLOps for TinyML: Challenges & Directions in Operationalizing TinyML at Scale [TinyML Talks '22]
  3. TinyMLOps: Operational Challenges for Widespread Edge AI Adoption [arXiv '22]
  4. A TinyMLaaS Ecosystem for Machine Learning in IoT: Overview and Research Challenges [VLSI-DAT '21]
  5. SOLIS: The MLOps journey from data acquisition to actionable insights [arXiv '21]
  6. Edge MLOps: An Automation Framework for AIoT Applications [IC2E '21]
  7. SensiX++: Bringing MLOPs and Multi-tenant Model Serving to Sensory Edge Devices [arXiv '21, Nokia]

Vulkan

  1. Vulkan API Examples and Demos
  2. Neural Machine Translation on Android

OpenCL

  1. DeepMon

RenderScript

  1. Mobile_ConvNet: RenderScript CNN for Android

Tutorials

General

  1. Squeezing Deep Learning Into Mobile Phones
  2. Deep Learning – Tutorial and Recent Trends
  3. Tutorial on Hardware Architectures for Deep Neural Networks
  4. Efficient Convolutional Neural Network Inference on Mobile GPUs

NEON

  1. NEON™ Programmer’s Guide

OpenCL

  1. ARM® Mali™ GPU OpenCL Developer Guide (PDF)
  2. Optimal Compute on ARM Mali™ GPUs
  3. GPU Compute for Mobile Devices
  4. Compute for Mobile Devices (performance-focused)
  5. Hands On OpenCL
  6. Adreno OpenCL Programming Guide
  7. Better OpenCL Performance on Qualcomm Adreno GPU

Courses

  1. UW Deep learning systems
  2. Berkeley Machine Learning Systems

Tools

GPU

  1. Bifrost GPU architecture and ARM Mali-G71 GPU
  2. Midgard GPU Architecture, ARM Mali-T880 GPU
  3. Mobile GPU market share

Driver

  1. [Adreno] csarron/qcom_vendor_binaries: Common Proprietary Qualcomm Binaries
  2. [Mali] Fevax/vendor_samsung_hero2ltexx: Blobs from s7 Edge G935F

Related Repos