Kota Ando
2020 – today
2024
- [j11] Junnosuke Suzuki, Jaehoon Yu, Mari Yasunaga, Ángel López García-Arias, Yasuyuki Okoshi, Shungo Kumazawa, Kota Ando, Kazushi Kawamura, Thiem Van Chu, Masato Motomura: Pianissimo: A Sub-mW Class DNN Accelerator With Progressively Adjustable Bit-Precision. IEEE Access 12: 2057-2073 (2024)
- [j10] Taisei Saito, Kota Ando, Tetsuya Asai: Extending Binary Neural Networks to Bayesian Neural Networks with Probabilistic Interpretation of Binary Weights. IEICE Trans. Inf. Syst. 107(8): 949-957 (2024)
- [j9] Yuki Abe, Kohei Nishida, Kota Ando, Tetsuya Asai: SPCTRE: sparsity-constrained fully-digital reservoir computing architecture on FPGA. Int. J. Parallel Emergent Distributed Syst. 39(2): 197-213 (2024)
- [c21] Itsuki Akeno, Hiiro Yamazaki, Tetsuya Asai, Kota Ando: Edge AI Online Training Architecture Using Multi-Phase-Quantization Optimizer. IJCNN 2024: 1-8
- [c20] Koki Minagawa, Taisei Saito, Sena Kojima, Kota Ando, Tetsuya Asai: Out-of-distribution Data Detection using Bayesian Convolutional Neural Network with Variational Inference. IJCNN 2024: 1-8

2023
- [j8] Jiale Yan, Kota Ando, Jaehoon Yu, Masato Motomura: TT-MLP: Tensor Train Decomposition on Deep MLPs. IEEE Access 11: 10398-10411 (2023)
- [j7] Daiki Okonogi, Satoru Jimbo, Kota Ando, Thiem Van Chu, Jaehoon Yu, Masato Motomura, Kazushi Kawamura: A Fully-Parallel Annealing Algorithm with Autonomous Pinning Effect Control for Various Combinatorial Optimization Problems. IEICE Trans. Inf. Syst. 106(12): 1969-1978 (2023)
- [c19] Kazushi Kawamura, Jaehoon Yu, Daiki Okonogi, Satoru Jimbo, Genta Inoue, Akira Hyodo, Ángel López García-Arias, Kota Ando, Bruno Hideki Fukushima-Kimura, Ryota Yasudo, Thiem Van Chu, Masato Motomura: Amorphica: 4-Replica 512 Fully Connected Spin 336MHz Metamorphic Annealer with Programmable Optimization Strategy and Compressed-Spin-Transfer Multi-Chip Extension. ISSCC 2023: 42-43
- [c18] Junnosuke Suzuki, Jaehoon Yu, Mari Yasunaga, Ángel López García-Arias, Yasuyuki Okoshi, Shungo Kumazawa, Kota Ando, Kazushi Kawamura, Thiem Van Chu, Masato Motomura: Pianissimo: A Sub-mW Class DNN Accelerator with Progressive Bit-by-Bit Datapath Architecture for Adaptive Inference at Edge. VLSI Technology and Circuits 2023: 1-2

2022
- [j6] Satoru Jimbo, Daiki Okonogi, Kota Ando, Thiem Van Chu, Jaehoon Yu, Masato Motomura, Kazushi Kawamura: A Hybrid Integer Encoding Method for Obtaining High-Quality Solutions of Quadratic Knapsack Problems on Solid-State Annealers. IEICE Trans. Inf. Syst. 105-D(12): 2019-2031 (2022)
- [c17] Yasuyuki Okoshi, Ángel López García-Arias, Kazutoshi Hirose, Kota Ando, Kazushi Kawamura, Thiem Van Chu, Masato Motomura, Jaehoon Yu: Multicoated Supermasks Enhance Hidden Networks. ICML 2022: 17045-17055
- [c16] Daiki Okonogi, Satoru Jimbo, Kota Ando, Thiem Van Chu, Jaehoon Yu, Masato Motomura, Kazushi Kawamura: APC-SCA: A Fully-Parallel Annealing Algorithm with Autonomous Pinning Effect Control. IPDPS Workshops 2022: 414-420
- [c15] Kazutoshi Hirose, Jaehoon Yu, Kota Ando, Yasuyuki Okoshi, Ángel López García-Arias, Junnosuke Suzuki, Thiem Van Chu, Kazushi Kawamura, Masato Motomura: Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet. ISSCC 2022: 1-3

2021
- [j5] Junnosuke Suzuki, Tomohiro Kaneko, Kota Ando, Kazutoshi Hirose, Kazushi Kawamura, Thiem Van Chu, Masato Motomura, Jaehoon Yu: ProgressiveNN: Achieving Computational Scalability with Dynamic Bit-Precision Adjustment by MSB-first Accumulative Computation. Int. J. Netw. Comput. 11(2): 338-353 (2021)
- [j4] Kasho Yamamoto, Kazushi Kawamura, Kota Ando, Normann Mertig, Takashi Takemoto, Masanao Yamaoka, Hiroshi Teramoto, Akira Sakai, Shinya Takamaeda-Yamazaki, Masato Motomura: STATICA: A 512-Spin 0.25M-Weight Annealing Processor With an All-Spin-Updates-at-Once Architecture for Combinatorial Optimization With Complete Spin-Spin Interactions. IEEE J. Solid State Circuits 56(1): 165-178 (2021)
- [c14] Kota Ando, Jaehoon Yu, Kazutoshi Hirose, Hiroki Nakahara, Kazushi Kawamura, Thiem Van Chu, Masato Motomura: Edge Inference Engine for Deep & Random Sparse Neural Networks with 4-bit Cartesian-Product MAC Array and Pipelined Activation Aligner. HCS 2021: 1-21

2020
- [c13] Junnosuke Suzuki, Kota Ando, Kazutoshi Hirose, Kazushi Kawamura, Thiem Van Chu, Masato Motomura, Jaehoon Yu: ProgressiveNN: Achieving Computational Scalability without Network Alteration by MSB-first Accumulative Computation. CANDAR 2020: 215-220
- [c12] Kota Shiba, Tatsuo Omori, Kodai Ueyoshi, Kota Ando, Kazutoshi Hirose, Shinya Takamaeda-Yamazaki, Masato Motomura, Mototsugu Hamada, Tadahiro Kuroda: A 3D-Stacked SRAM using Inductive Coupling with Low-Voltage Transmitter and 12:1 SerDes. ISCAS 2020: 1-5
- [c11] Kasho Yamamoto, Kota Ando, Normann Mertig, Takashi Takemoto, Masanao Yamaoka, Hiroshi Teramoto, Akira Sakai, Shinya Takamaeda-Yamazaki, Masato Motomura: 7.3 STATICA: A 512-Spin 0.25M-Weight Full-Digital Annealing Processor with a Near-Memory All-Spin-Updates-at-Once Architecture for Combinatorial Optimization with Complete Spin-Spin Interactions. ISSCC 2020: 138-140
2010 – 2019
2019
- [j3] Kota Ando, Kodai Ueyoshi, Yuka Oba, Kazutoshi Hirose, Ryota Uematsu, Takumi Kudo, Masayuki Ikebe, Tetsuya Asai, Shinya Takamaeda-Yamazaki, Masato Motomura: Dither NN: Hardware/Algorithm Co-Design for Accurate Quantized Neural Networks. IEICE Trans. Inf. Syst. 102-D(12): 2341-2353 (2019)
- [j2] Kodai Ueyoshi, Kota Ando, Kazutoshi Hirose, Shinya Takamaeda-Yamazaki, Mototsugu Hamada, Tadahiro Kuroda, Masato Motomura: QUEST: Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96-MB 3-D SRAM Using Inductive Coupling Technology in 40-nm CMOS. IEEE J. Solid State Circuits 54(1): 186-196 (2019)
- [c10] Yuka Oba, Kota Ando, Tetsuya Asai, Masato Motomura, Shinya Takamaeda-Yamazaki: DeltaNet: Differential Binary Neural Network. ASAP 2019: 39

2018
- [j1] Kota Ando, Kodai Ueyoshi, Kentaro Orimo, Haruyoshi Yonekawa, Shimpei Sato, Hiroki Nakahara, Shinya Takamaeda-Yamazaki, Masayuki Ikebe, Tetsuya Asai, Tadahiro Kuroda, Masato Motomura: BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W. IEEE J. Solid State Circuits 53(4): 983-994 (2018)
- [c9] Kota Ando, Kodai Ueyoshi, Yuka Oba, Kazutoshi Hirose, Ryota Uematsu, Takumi Kudo, Masayuki Ikebe, Tetsuya Asai, Shinya Takamaeda-Yamazaki, Masato Motomura: Dither NN: An Accurate Neural Network with Dithering for Low Bit-Precision Hardware. FPT 2018: 6-13
- [c8] Kodai Ueyoshi, Kota Ando, Kazutoshi Hirose, Shinya Takamaeda-Yamazaki, Junichiro Kadomoto, Tomoki Miyata, Mototsugu Hamada, Tadahiro Kuroda, Masato Motomura: QUEST: A 7.49TOPS multi-purpose log-quantized DNN inference engine stacked on 96MB 3D SRAM using inductive-coupling technology in 40nm CMOS. ISSCC 2018: 216-218
- [c7] Takumi Kudo, Kodai Ueyoshi, Kota Ando, Kazutoshi Hirose, Ryota Uematsu, Yuka Oba, Masayuki Ikebe, Tetsuya Asai, Masato Motomura, Shinya Takamaeda-Yamazaki: Area and Energy Optimization for Bit-Serial Log-Quantized DNN Accelerator with Shared Accumulators. MCSoC 2018: 237-243

2017
- [c6] Shinya Takamaeda-Yamazaki, Kodai Ueyoshi, Kota Ando, Ryota Uematsu, Kazutoshi Hirose, Masayuki Ikebe, Tetsuya Asai, Masato Motomura: Accelerating deep learning by binarized hardware. APSIPA 2017: 1045-1051
- [c5] Kazutoshi Hirose, Ryota Uematsu, Kota Ando, Kentaro Orimo, Kodai Ueyoshi, Masayuki Ikebe, Tetsuya Asai, Shinya Takamaeda-Yamazaki, Masato Motomura: Logarithmic Compression for Memory Footprint Reduction in Neural Network Training. CANDAR 2017: 291-297
- [c4] Kodai Ueyoshi, Kota Ando, Kentaro Orimo, Masayuki Ikebe, Tetsuya Asai, Masato Motomura: Exploring optimized accelerator design for binarized convolutional neural networks. IJCNN 2017: 2510-2516
- [c3] Haruyoshi Yonekawa, Shimpei Sato, Hiroki Nakahara, Kota Ando, Kodai Ueyoshi, Kazutoshi Hirose, Kentaro Orimo, Shinya Takamaeda-Yamazaki, Masayuki Ikebe, Tetsuya Asai, Masato Motomura: In-memory area-efficient signal streaming processor design for binary neural networks. MWSCAS 2017: 116-119
- [c2] Kazutoshi Hirose, Kota Ando, Kodai Ueyoshi, Masayuki Ikebe, Tetsuya Asai, Masato Motomura, Shinya Takamaeda-Yamazaki: Quantization Error-Based Regularization in Neural Networks. SGAI Conf. 2017: 137-142

2016
- [c1] Kentaro Orimo, Kota Ando, Kodai Ueyoshi, Masayuki Ikebe, Tetsuya Asai, Masato Motomura: FPGA architecture for feed-forward sequential memory network targeting long-term time-series forecasting. ReConFig 2016: 1-6