
A Survey on Edge Performance Benchmarking

Published: 22 April 2021

Abstract

Edge computing is the next Internet frontier that will leverage computing resources located near users, sensors, and data stores to provide more responsive services. It is therefore envisioned that a large-scale, geographically dispersed, and resource-rich distributed system will emerge and play a key role in the future Internet. However, given the loosely coupled nature of such complex systems, their operational conditions are expected to change significantly over time. In this context, the performance characteristics of these systems will need to be captured rapidly, a process referred to as performance benchmarking, to support application deployment, resource orchestration, and adaptive decision-making. Edge performance benchmarking is a nascent research avenue that has started gaining momentum over the past five years. This article first reviews articles published over the past three decades to trace the history of performance benchmarking from tightly coupled to loosely coupled systems. It then systematically classifies previous research in edge performance benchmarking according to the system under test, the techniques analyzed, and the benchmark runtime.
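
Performance benchmarking, as framed above, means repeatedly measuring characteristics such as latency or throughput of a system under test so that deployment and orchestration decisions can be made quickly. As a minimal illustrative sketch only (the endpoint URL, request count, and summary statistics below are assumptions for illustration, not drawn from the surveyed work), capturing one such characteristic, the client-observed response latency of an edge service, could look like this:

```python
import statistics
import time
import urllib.request

# Hypothetical edge service endpoint, used purely for illustration.
EDGE_ENDPOINT = "http://edge-node.local:8080/infer"


def measure_latency(url: str, runs: int = 50) -> dict:
    """Issue repeated requests and summarize end-to-end response latency."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as response:
            response.read()  # include transfer time in the measurement
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "median_s": statistics.median(samples),
        "mean_s": statistics.mean(samples),
        # Approximate 95th percentile by index into the sorted samples.
        "p95_s": samples[int(0.95 * (len(samples) - 1))],
    }


if __name__ == "__main__":
    print(measure_latency(EDGE_ENDPOINT))
```

A complete edge benchmark would of course cover more than latency (for example throughput, energy, and resource utilization of the system under test), but this measure-repeat-summarize loop is the basic building block of client-side performance benchmarking.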

Published In

ACM Computing Surveys, Volume 54, Issue 3
April 2022, 836 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3461619

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 April 2021
Accepted: 01 December 2020
Revised: 01 December 2020
Received: 01 April 2020
Published in CSUR Volume 54, Issue 3

Author Tags

  1. Edge computing
  2. benchmark runtime
  3. edge performance benchmarking
  4. system under test
  5. techniques analyzed

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • Rakuten Mobile, Japan
  • Korea government (MSIP)
  • National Research Foundation of Korea (NRF)
  • Royal Society Short Industry Fellowship

