


Implementation Issues and Resource-Aware ML

Student

Instructor

Institution

Course code

Date
Introduction

• Deep neural networks and ensemble methods are just two examples of ML algorithms that have proven capable of handling challenging problems.

• However, as ML models grow in size and complexity, their practical implementation faces significant challenges (Shalaginov et al., 2019).

• Resource-aware ML approaches have attracted the interest of researchers and practitioners as a solution to these implementation problems.

Problem statement

• The practical use of machine learning (ML) models faces numerous difficulties as they increase in size and complexity, particularly concerning the effective use of computational resources.

• Memory, processing power, energy use, and deployment costs are only a few of the resource demands of ML models that frequently exceed the capabilities of conventional computing systems (Shalaginov et al., 2019).

• There is therefore an urgent need to develop ML algorithms that enhance model performance while respecting resource limitations.

Relevance and significance

• Efficient use of computing resources is essential to overcome the constraints imposed by resource-limited contexts.

• Scalability becomes increasingly important as ML models grow larger and more complex.

• ML has the potential to transform a variety of industries, including healthcare, finance, and transportation.


Research Questions

1. What specific implementation problems arise when applying ML models in real-world contexts?

2. How do resource limitations affect ML models' effectiveness, scalability, and performance?

3. What are the effects of resource inefficiency on energy usage and deployment costs?

4. How might resource-aware machine learning approaches lessen the difficulties constrained computational resources create?

5. What compromises must be made to optimize ML models for limited contexts, considering precision, speed, memory footprint, and energy usage?


Barriers and Issues

• The lack of sufficient computational resources, such as processing power, memory, and storage, is one of the main obstacles.

• In particular, deep neural network-based ML models can consume substantial energy during the training and inference phases.

Literature review

• According to the research, software-defined wireless sensor networks increasingly integrate machine learning techniques, which have enhanced network performance, energy efficiency, and scalability.

• The research goal at the Competence Center Machine Learning Rhine-Ruhr is to aid in advancing resource-aware ML techniques.

• Applying probabilistic programming techniques might improve the resource awareness of machine learning algorithms (a minimal sketch of this idea follows the list below).

• The ultimate goal is to make it possible to deploy ML models in resource-constrained environments.

• The research focuses on creating a methodology or framework that improves resource usage and efficiency in ML models (Wu, 2019).

• The results may enhance resource-aware machine learning algorithms and illuminate the advantages and real-world uses of adding probabilistic programming.
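
To make the probabilistic-programming idea above concrete, here is a minimal sketch (illustrative only, not code from the cited research): it treats per-layer inference latency as a random quantity and uses Monte Carlo sampling to estimate the chance that a request stays within a latency budget. The layer count, latency parameters, and budget are hypothetical.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Hypothetical resource model: per-layer inference latency (in ms) is
    # uncertain, so treat it as a log-normal random variable for each layer.
    N_LAYERS = 12             # assumed network depth
    LATENCY_BUDGET_MS = 50.0  # assumed per-request latency budget

    def sample_total_latency(n_samples=100_000):
        # Each row is one simulated request, each column one layer's latency.
        per_layer = rng.lognormal(mean=1.0, sigma=0.4, size=(n_samples, N_LAYERS))
        return per_layer.sum(axis=1)

    total = sample_total_latency()
    print(f"P(latency <= {LATENCY_BUDGET_MS} ms) ~ {np.mean(total <= LATENCY_BUDGET_MS):.3f}")
    print(f"95th-percentile latency: {np.percentile(total, 95):.1f} ms")

A full probabilistic programming language such as Pyro or PyMC would add automatic inference over models like this; the plain NumPy version above is only meant to show the idea of treating resource usage as a distribution rather than a fixed number.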

Research Methodology

Qualitative methodology

• In addition to individual interviews, setting up focus groups with stakeholders from various companies will further improve the quality of our qualitative data.

• We will employ exhaustive data collection techniques to fully capture participant narratives during the qualitative data collection phase.

• We will augment the insights gained from qualitative data by adding a quantitative component to our analysis of the database application system life cycle.

• Statistical techniques appropriate for the nature of the data and the study's goals will be used to thoroughly investigate the quantitative data from the surveys.



Findings, Analysis and Discussion

• The investigation revealed that machine learning models' accuracy differs considerably across hardware resources.

• Some models performed better than others on particular hardware, while others showed more consistent accuracy across a range of configurations.

• The study also found differences in how the machine learning models use resources.

• On specific hardware, several models were exceptionally efficient, achieving high accuracy with comparatively short training and inference times (a simple benchmarking sketch of how such measurements can be taken follows below).
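
The sketch below shows one straightforward way to collect the kind of measurements summarized above: accuracy plus training and inference time for a few candidate models on whatever hardware runs the script. The models, dataset, and timing approach are illustrative choices, not the study's actual experimental setup.

    import time
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Candidate models to compare on the current hardware (illustrative picks).
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=2000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }

    for name, model in candidates.items():
        t0 = time.perf_counter()
        model.fit(X_train, y_train)          # training time
        train_s = time.perf_counter() - t0

        t0 = time.perf_counter()
        preds = model.predict(X_test)        # inference time
        infer_s = time.perf_counter() - t0

        acc = accuracy_score(y_test, preds)
        print(f"{name}: accuracy={acc:.3f}, train={train_s:.2f}s, inference={infer_s:.4f}s")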


• Surprisingly, higher model accuracy was not always associated with longer training sessions (Murshed et al., 2021).

• According to the investigation, choosing machine learning models based on the available hardware resources is essential for maximizing performance and resource use.

• Model efficiency varies across hardware, and selecting the best configuration can significantly impact the system's overall performance (Kasarapu et al., 2023).

• The study highlighted how strongly hardware resources affect model performance.

• Deep neural networks are computationally intensive models that can benefit from GPU or TPU acceleration (a minimal device-placement sketch follows below).
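
To illustrate the GPU point above, the minimal sketch below (not the study's code) places a small PyTorch model and its input batch on a GPU when one is available, falls back to the CPU otherwise, and times a forward pass. The network shape and batch size are arbitrary.

    import time
    import torch
    import torch.nn as nn

    # Use the GPU when present; otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A small, arbitrary feed-forward network for demonstration.
    model = nn.Sequential(
        nn.Linear(512, 1024),
        nn.ReLU(),
        nn.Linear(1024, 10),
    ).to(device)
    model.eval()

    batch = torch.randn(256, 512, device=device)

    with torch.no_grad():
        t0 = time.perf_counter()
        _ = model(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()   # wait for the GPU to finish before timing
        elapsed = time.perf_counter() - t0

    print(f"Forward pass on {device}: {elapsed * 1000:.2f} ms")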


• Future research in resource-aware ML implementation will become increasingly crucial as technology develops.

• Investigating novel hardware accelerators, adaptive learning algorithms, and efficient model architectures will further improve the capabilities of resource-aware machine learning systems.

• Understanding how models and hardware interact in cloud-based or data-center environments can also lead to scalable and affordable deployments in which models are assigned to the most suitable hardware configurations.

• A solid understanding of these resource-specific performance characteristics lets practitioners make informed decisions about hardware investments and deployment strategies.


Conclusion

• We also examined the prospect of extending resource-aware ML approaches with probabilistic programming in order to maximize resource usage and improve efficiency.

• These study topics provide exciting directions for future advancements in this rapidly developing discipline and shed light on the importance of considering resource limits in machine learning applications.

• Additionally, the correlation analysis deepens the understanding of resource-aware machine learning implementation by illuminating the relationships among training time, hardware resources, and model accuracy (a tiny example of such an analysis follows below).

Limitations

• Consider using model quantization and compression techniques to increase resource efficiency (a minimal quantization sketch follows below).

• The implementation study's conclusions may not accurately reflect real-world situations, since its data is fictitious.
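
The sketch below shows one concrete form of the quantization suggestion above: PyTorch dynamic quantization, which stores the weights of linear layers as 8-bit integers to shrink a model's memory footprint. The model here is an arbitrary stand-in, not one of the study's models.

    import torch
    import torch.nn as nn

    # An arbitrary trained model stands in for a real one here.
    model = nn.Sequential(
        nn.Linear(512, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )
    model.eval()

    # Dynamic quantization: weights of nn.Linear layers are stored as int8 and
    # dequantized on the fly during inference, reducing model size and often
    # speeding up CPU inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    def param_bytes(m):
        return sum(p.numel() * p.element_size() for p in m.parameters())

    print(f"float32 parameters: {param_bytes(model) / 1024:.1f} KiB")
    x = torch.randn(1, 512)
    print("quantized output shape:", quantized(x).shape)

Pruning and knowledge distillation are other common compression options; dynamic quantization is shown here only because it requires no retraining.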

Recommendations
• Avoid overly long training sessions, since they can lead to accuracy losses and waste computational resources.
