
Accelerating Weather Prediction Using Near-Memory Reconfigurable Fabric

Published: 06 June 2022

Abstract

Ongoing climate change calls for fast and accurate weather and climate modeling. However, when solving large-scale weather prediction simulations, state-of-the-art CPU and GPU implementations suffer from limited performance and high energy consumption. These implementations are dominated by complex irregular memory access patterns and low arithmetic intensity, which pose fundamental challenges to acceleration. To overcome these challenges, we propose and evaluate the use of near-memory acceleration using a reconfigurable fabric with high-bandwidth memory (HBM). We focus on compound stencils, which are fundamental kernels in weather prediction models. Using high-level synthesis techniques, we develop NERO, a field-programmable gate array (FPGA)+HBM-based accelerator connected through the Open Coherent Accelerator Processor Interface (OpenCAPI) to an IBM POWER9 host system. Our experimental results show that NERO outperforms a 16-core POWER9 system by \(5.3\times\) and \(12.7\times\) when running two different compound stencil kernels. NERO reduces energy consumption by \(12\times\) and \(35\times\) for the same two kernels compared to the POWER9 system, achieving energy efficiencies of 1.61 GFLOPS/W and 21.01 GFLOPS/W, respectively. We conclude that employing near-memory acceleration solutions for weather prediction modeling is promising as a means to achieve both high performance and high energy efficiency.
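To make the memory-bound nature of compound stencils concrete, the C sketch below (not taken from the paper and not NERO's HLS implementation) chains two 5-point Laplacian passes over a 2D slice, loosely in the spirit of COSMO-style horizontal diffusion. The grid size, data layout, and names (laplacian, horizontal_diffusion_like) are illustrative assumptions; the point is that the second pass depends on the first pass's output, so each grid point is read and written several times with only a few floating-point additions per memory access, i.e., low arithmetic intensity.

#include <stdio.h>
#include <stdlib.h>

#define NX 128                       /* illustrative grid size */
#define NY 128
#define IDX(i, j) ((i) * NY + (j))   /* row-major 2D indexing  */

/* 5-point Laplacian over the interior of a 2D slice. */
static void laplacian(const float *in, float *out)
{
    for (int i = 1; i < NX - 1; ++i)
        for (int j = 1; j < NY - 1; ++j)
            out[IDX(i, j)] = -4.0f * in[IDX(i, j)]
                           + in[IDX(i - 1, j)] + in[IDX(i + 1, j)]
                           + in[IDX(i, j - 1)] + in[IDX(i, j + 1)];
}

/* Compound stencil: the second pass consumes the first pass's output,
 * so the intermediate array is written and re-read in full, with only
 * a handful of additions per memory access. */
static void horizontal_diffusion_like(const float *field, float *lap, float *out)
{
    laplacian(field, lap);  /* first stencil pass            */
    laplacian(lap, out);    /* second, dependent stencil pass */
}

int main(void)
{
    float *field = malloc(sizeof(float) * NX * NY);
    float *lap   = calloc(NX * NY, sizeof(float));  /* zeroed borders */
    float *out   = calloc(NX * NY, sizeof(float));
    if (!field || !lap || !out) return 1;

    for (int i = 0; i < NX * NY; ++i)
        field[i] = (float)(i % 7);  /* arbitrary test data */

    horizontal_diffusion_like(field, lap, out);
    printf("out[center] = %f\n", out[IDX(NX / 2, NY / 2)]);

    free(field); free(lap); free(out);
    return 0;
}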

Published In

ACM Transactions on Reconfigurable Technology and Systems, Volume 15, Issue 4
December 2022
476 pages
ISSN: 1936-7406
EISSN: 1936-7414
DOI: 10.1145/3540252
Editor: Deming Chen

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 06 June 2022
Online AM: 09 February 2022
Accepted: 01 November 2021
Revised: 01 October 2021
Received: 01 July 2021
Published in TRETS Volume 15, Issue 4

Author Tags

  1. FPGA
  2. near-memory computing
  3. weather modeling
  4. high-performance computing
  5. processing in memory

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • H2020 research and innovation programme
  • European Commission under Marie Sklodowska-Curie Innovative Training Networks European Industrial Doctorate

Cited By

  • (2024) RUBICON: a framework for designing efficient deep learning-based genomic basecallers. Genome Biology 25, 1. DOI: 10.1186/s13059-024-03181-2. Online publication date: 16-Feb-2024.
  • (2024) MIMDRAM: An End-to-End Processing-Using-DRAM System for High-Throughput, Energy-Efficient and Programmer-Transparent Multiple-Instruction Multiple-Data Computing. 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 186-203. DOI: 10.1109/HPCA57654.2024.00024. Online publication date: 2-Mar-2024.
  • (2024) Simultaneous Many-Row Activation in Off-the-Shelf DRAM Chips: Experimental Characterization and Analysis. 2024 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 99-114. DOI: 10.1109/DSN58291.2024.00024. Online publication date: 24-Jun-2024.
  • (2023) SimplePIM: A Software Framework for Productive and Efficient Processing-in-Memory. 2023 32nd International Conference on Parallel Architectures and Compilation Techniques (PACT), 99-111. DOI: 10.1109/PACT58117.2023.00017. Online publication date: 21-Oct-2023.
  • (2023) Casper: Accelerating Stencil Computations Using Near-Cache Processing. IEEE Access 11, 22136-22154. DOI: 10.1109/ACCESS.2023.3252002. Online publication date: 2023.
  • (2022) A novel NVM memory file system for edge intelligence. IEICE Electronics Express 19, 8, 20220079. DOI: 10.1587/elex.19.20220079. Online publication date: 25-Apr-2022.
  • (2022) LEAPER: Fast and Accurate FPGA-based System Performance Prediction via Transfer Learning. 2022 IEEE 40th International Conference on Computer Design (ICCD), 499-508. DOI: 10.1109/ICCD56317.2022.00080. Online publication date: Oct-2022.
