International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
This paper presents a Gaussian Distributive Optimized Congruential Cryptographic Deep Multilayer Perceptive Network (GD-DMPN) for load balancing and secure data outsourcing in a federated cloud. The GD-DMPN efficiently distributes and correlates data over many nodes, making it a valuable tool for data management in federated clouds, and exploits multiple layers of perceptual learning for enhanced data correlation and load balancing. As the world moves toward digitalization, demand for cloud services is increasing rapidly: cloud services allow users to access their data and applications anywhere, at any time, but they also raise security and privacy concerns. Several research studies have proposed using a federated cloud to address these concerns. A federated cloud is a form of cloud computing in which a group of organizations cooperates to provide cloud services, with each organization contributing a portion of the total available resources. This model has several advantages over other types of cloud computing, such as improved security and privacy and greater user control over data. This study proposes a state-of-the-art Gaussian distributive optimized congruential cryptographic deep multilayer perceptive network for load balancing and secure data outsourcing in a federated cloud. The proposed network is based on the Gaussian distribution, a well-known statistical distribution, which we use to distribute resources among the organizations in the federated cloud. This ensures that each organization has access to the resources it needs while providing a degree of security and privacy. We also propose a deep multilayer perceptive network that monitors the activities of the organizations in the federated cloud and provides feedback to the system; this feedback is used to optimize the system and ensure resources are used efficiently. The proposed system can improve security, privacy, and efficiency, give users more control over their data, and has the potential to provide a more secure and private way to access data in the federated cloud.
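The abstract gives no formulas for the Gaussian-distributive allocation step. As an illustration only, one way to picture it is to draw each organization's share from a normal distribution and normalize the shares to the shared resource pool; the function name and parameters below are hypothetical, not taken from the paper.

```python
import random

def gaussian_resource_shares(num_orgs, total_capacity, mean=1.0, sigma=0.25, seed=42):
    """Draw a positive Gaussian weight per organization and normalize so
    the shares sum to the total capacity (illustrative sketch only)."""
    rng = random.Random(seed)
    weights = [max(1e-6, rng.gauss(mean, sigma)) for _ in range(num_orgs)]
    total = sum(weights)
    return [total_capacity * w / total for w in weights]

shares = gaussian_resource_shares(num_orgs=4, total_capacity=1000.0)
print([round(s, 1) for s in shares])
print(round(sum(shares), 1))  # shares always sum to the pool: 1000.0
```

Normalizing after sampling guarantees the pool is fully allocated while the Gaussian spread keeps shares close to, but not exactly, equal.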
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
This paper proposes a novel deep multilayer perceptive network (DMPLN) based on Gaussian distributive optimized congruential cryptography (GDOC) for load balancing and secure data outsourcing in federated cloud computing. The DMPLN can effectively improve the security and efficiency of federated cloud computing: GDOC is used to encrypt the data, and the DMPLN then classifies the encrypted data. The advantage of GDOC is that it improves both data security and the efficiency of data classification. Furthermore, an improved particle swarm optimization (IPSO) algorithm is used to optimize the DMPLN, which further improves classification accuracy. Thorough tests on two real-world datasets demonstrate that the proposed strategy can significantly increase the security and effectiveness of federated cloud computing. In summary, a highly efficient and secure deep multilayer perceptive network based on Gaussian distributive optimized congruential cryptography is proposed for load balancing and secure data outsourcing in federated cloud computing: the suggested solution balances the load and safeguards the data using the GDOC technique.
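The abstract does not specify the congruential cipher itself. A minimal sketch of the congruential idea, assuming a linear congruential generator used as a keystream, is shown below; this is purely illustrative, is not cryptographically secure, and is not the authors' GDOC scheme.

```python
def lcg_keystream(seed, length, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator producing one byte per step."""
    state = seed
    out = []
    for _ in range(length):
        state = (a * state + c) % m
        out.append(state & 0xFF)
    return out

def xor_cipher(data: bytes, seed: int) -> bytes:
    """Symmetric: applying twice with the same seed recovers the plaintext."""
    ks = lcg_keystream(seed, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"outsourced record"
ct = xor_cipher(msg, seed=2024)
assert xor_cipher(ct, seed=2024) == msg  # round-trip succeeds
```

The congruence relation makes key expansion cheap, which is why congruential constructions are attractive for load-sensitive settings, but a production scheme would need a cryptographically strong generator.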
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Since network virtualization permits the establishment of distinct virtual networks (VNs) on a common physical infrastructure, research on and application of network virtualization technology have gained popularity in the communication industry in recent years. Virtualizing wireless networks has been proposed with the explicit intention of improving the future Internet. To establish segregation among wireless network slices, bilateral auction-based resource allocation is employed, since efficient virtual network embedding (VNE) algorithms are necessary for building virtual networks on the substrate network (SN) using these technologies. At present, the consensus among industry experts is that network virtualization technology is a viable remedy for the rigid architecture of the Internet. Because fundamental technological investigations of wireless access network virtualization are scarce, and most current research in this domain is devoted to wired network virtualization, which is predominantly linked to the backbone and data-centre networks, wireless network virtualization has become a focal point of academic and industrial research. This initiative aims to expedite the development of wireless capabilities in order to foster innovation and meet the ever-changing needs of the industry. However, dynamic resource allocation for resource sharing in virtualized wireless networks has received little attention. This study examines and summarises present strategies for allocating resources on local and global virtual networks. The topology properties of physical and virtual networks are analysed using the centrality theory of social and complex networks, and the study proposes two efficient methods for resource allocation in wireless and cross-domain virtual networks and develops two models for such networks.
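The centrality analysis mentioned above can be illustrated with degree centrality, one of the standard measures from social and complex network theory; a common VNE heuristic maps virtual nodes onto the most central substrate nodes first. The substrate topology below is hypothetical.

```python
def degree_centrality(adjacency):
    """Degree centrality of each node: degree divided by (n - 1)."""
    n = len(adjacency)
    return {node: len(neigh) / (n - 1) for node, neigh in adjacency.items()}

# Hypothetical substrate network (node -> neighbours).
substrate = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}
ranking = sorted(degree_centrality(substrate).items(),
                 key=lambda kv: kv[1], reverse=True)
# Map virtual nodes to the most central substrate nodes first.
print(ranking[0][0])  # "A" (connected to all 3 other nodes)
```

More elaborate measures (closeness, betweenness) follow the same pattern: score the substrate nodes, then embed in descending score order.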
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Using a deep learning methodology, this article conducts an exhaustive examination and study of intelligent resource allocation in wireless communication networks. The investigation begins with the methods and concepts behind the CSCN architecture and the throughput of the small base stations (SBSs) included in this design. An LSTM (long short-term memory) model is then built to forecast users' mobile positions. User transmission conditions are assessed using two factors: the location of users' mobile devices and the condition of the small base stations to which they are linked, ensuring that the cache settings are in the intended state. Upon careful examination of the scores, the small base station ascertains which users have the most advantageous transmission conditions. Throughput optimization in networks is treated as a multi-agent, non-cooperative game problem that may be approached using game theory. The purpose of this study is to allow the small base station to autonomously learn and choose channel resources in line with the network environment, so as to optimise performance, by creating a deep reinforcement learning-based method for wireless resource allocation. Compared with the standard random-access approach and an algorithm reported in the literature, simulation findings suggest that the method presented in this study significantly boosts network throughput. We also provide a framework-based resource control technique by tackling the difficulty of user traffic distribution within fine-grained resource management. Despite having only polynomial processing cost, the findings suggest that the resource management approach performs comparably to a matching-based proportional-fair user dual-connection strategy. To allocate available resources and delegate duties, the optimized decision strategy is then implemented: once the intelligent agents have been trained, they independently execute these activities in accordance with the present state of the system and the learned policy. The simulation results indicate that the algorithm can reduce latency and energy consumption while improving user experience.
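The learned channel-selection step is not specified in detail in the abstract. As a minimal stand-in, assuming an epsilon-greedy tabular learner choosing among three hypothetical channels (not the paper's actual deep model), the learn-then-exploit loop can be sketched as follows.

```python
import random

def epsilon_greedy_channel(q_values, epsilon, rng):
    """Pick a channel: explore with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda ch: q_values[ch])

def update_q(q_values, channel, reward, alpha=0.1):
    """Exponential moving-average value update (stateless Q-learning)."""
    q_values[channel] += alpha * (reward - q_values[channel])

rng = random.Random(0)
q = [0.0, 0.0, 0.0]
true_rates = [0.2, 0.8, 0.5]  # hypothetical per-channel success rates
for _ in range(2000):
    ch = epsilon_greedy_channel(q, epsilon=0.1, rng=rng)
    reward = 1.0 if rng.random() < true_rates[ch] else 0.0
    update_q(q, ch, reward)
print(q.index(max(q)))  # the learner settles on the best channel (1)
```

A deep variant replaces the table `q` with a network over the base station's observed state, but the explore/exploit structure is the same.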
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
This paper presents a deep recurrent network (DRN) for green task scheduling in the cloud. The DRN is designed to optimize resource allocation by learning the dependencies between tasks and their resources, and experimental results show that it achieves significantly better resource utilization than several state-of-the-art optimization algorithms. Cloud computing has recently gained popularity due to its many benefits, including flexibility, mobility, and scalability, but deploying large-scale cloud applications can be challenging due to resource-allocation problems. As cloud services become increasingly popular, efficient and green task-scheduling algorithms become increasingly essential. This paper therefore proposes a logistic regression-based deep recurrent network (LR-DRN) for green task scheduling in cloud computing. The LR-DRN learns scheduling patterns from historical data, accurately predicts future resource demand, and adjusts the allocation accordingly, achieving near-optimal resource allocation and outperforming a state-of-the-art deep recurrent network in several resource-intensive scenarios. Simulation results show that the proposed LR-DRN significantly improves green task-scheduling performance in cloud computing.
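As a sketch of the logistic-regression component only (the LR-DRN combines it with a recurrent network, which is omitted here), a classifier can map a single hypothetical load feature to an overload prediction; the data and parameter names below are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.5, epochs=500):
    """Plain stochastic-gradient logistic regression (one feature + bias)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Hypothetical data: normalized CPU load -> 1 if the host became overloaded.
loads = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = train_logistic(loads, labels)
print(sigmoid(w * 0.15 + b) < 0.5)  # low load -> not overloaded: True
print(sigmoid(w * 0.85 + b) > 0.5)  # high load -> overloaded: True
```

The scheduler can then route new tasks away from hosts whose predicted overload probability exceeds a threshold.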
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING
Resource-optimized task scheduling is an essential issue in green computing. This paper uses a logistic regression-based deep recurrent network to optimize task scheduling in cloud computing. We first train the network on a large dataset of real-world task schedules and then use it to find the best schedule for a given resource configuration. Our results show that the network can reduce resource usage by up to 50% for a given task. Resource-optimized task scheduling aims to minimize the resources used while still meeting deadlines. A logistic regression-based deep recurrent network can learn patterns in data and make predictions about future data, making it possible to schedule tasks so that resource usage is minimized while deadlines are still met. This method has the potential to save significant resources in cloud computing, which translates into cost savings for companies that use cloud services. Task scheduling allocates tasks to a set of resources so that they complete within a given timeframe; in cloud computing, it allocates tasks to virtual machines (VMs). Various optimization techniques have been proposed to optimize resource use and minimize task-scheduling costs, and this paper focuses on a resource-optimized technique that uses a logistic regression-based deep recurrent network.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
This research paper discusses using Landweber iterative supervised classification and a quantized spiking network for emotion analysis in crime detection. The proposed methodology is evaluated on a real-world dataset and is promising in terms of accuracy and robustness. This work aims to develop a supervised classification and quantized spiking network for emotion analysis. We propose a method to extract features from the temporal dynamics of a spiking neural network (SNN) and use these features to train a support vector machine (SVM) classifier. We also quantize the SNN output to improve classification accuracy. Our results show that the proposed method achieves good classification performance on a publicly available dataset.
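The Landweber iteration itself is a classical method for linear inverse problems, x_{k+1} = x_k + w * A^T (b - A x_k); a minimal sketch on a toy system is given below (the surrounding classification pipeline is not reproduced, and the matrix is hypothetical).

```python
def landweber(A, b, omega=0.1, iterations=500):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k)
    for a small dense system (pure-Python linear algebra)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iterations):
        residual = [bi - sum(aij * xj for aij, xj in zip(row, x))
                    for row, bi in zip(A, b)]
        for j in range(n):
            x[j] += omega * sum(A[i][j] * residual[i] for i in range(len(A)))
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [4.0, 3.0]
x = landweber(A, b)
print([round(v, 3) for v in x])  # converges to [2.0, 3.0]
```

Convergence requires the step size omega to be below 2 divided by the largest squared singular value of A, which is why a small fixed omega is used here.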
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Emotion analysis is a promising tool for crime detection: it can identify potential suspects, assess the risk of violence, and track the progress of a criminal investigation. However, identifying emotions accurately is difficult, and several factors can influence the results of emotion analysis. This paper proposes a new approach to emotion analysis in crime detection that utilizes Landweber iterative supervised classification and a quantized spiking network. Landweber iterative supervised classification is a technique that can improve the accuracy of emotion analysis by iteratively training a classifier on a dataset of labeled data. A quantized spiking network is a type of neural network well suited to emotion analysis because it can capture the temporal dynamics of emotions. The proposed approach was evaluated on a dataset of facial expressions and voice recordings, and the results showed that it achieved state-of-the-art accuracy in emotion analysis. The proposed approach has several advantages over traditional approaches: it is more accurate, more robust to noise, and more efficient. It can improve the accuracy of emotion analysis in crime detection and can also support new applications, such as a system that automatically detects signs of deception in a witness statement. In the study, the Landweber iterative supervised classification algorithm achieved an overall accuracy of 97.5% in classifying emotions, while the quantized spiking neural network achieved an overall accuracy of 97.2%.
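The temporal dynamics a spiking network captures can be sketched with a leaky integrate-and-fire neuron plus a simple firing-rate quantizer; the threshold, leak, and input values below are illustrative, not taken from the paper.

```python
def lif_spike_train(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks each
    step, integrates the input current, and emits a spike (1) on crossing
    the threshold, after which it resets to zero."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

def quantize_rate(spikes, levels=4):
    """Quantize the firing rate into a small number of discrete levels."""
    rate = sum(spikes) / len(spikes)
    return min(levels - 1, int(rate * levels))

spikes = lif_spike_train([0.3, 0.4, 0.5, 0.6, 0.2, 0.9, 0.8, 0.1])
print(spikes, quantize_rate(spikes))  # [0, 0, 1, 0, 0, 1, 0, 0] 1
```

Quantizing the rate compresses a whole spike train into one discrete feature, which is the kind of output a downstream classifier can consume.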
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Energy efficiency is one of the most crucial aspects to consider when operating a cloud: a cloud that is not energy efficient is more expensive to operate and maintain. One technique to increase a system's energy efficiency is to use the horse herd algorithm to place virtual machines within the cloud. The horse herd algorithm is a heuristic for optimizing virtual machine placement: it first ranks the physical machines by energy efficiency, distinguishing the most energy-efficient machines from the least energy-efficient ones, and then places the virtual machines on the most energy-efficient machines. The algorithm can also help meet SLA requirements, because ensuring that virtual machines run on the most energy-efficient machines allows the cloud to satisfy its Service Level Agreements. The horse herd algorithm is therefore an effective technique for increasing a cloud's energy efficiency while remaining SLA-aware, and it is a good option to consider for improving the energy efficiency of a cloud. A recent study has shown that the horse herd algorithm can achieve energy efficiency and meet Service Level Agreement (SLA) requirements in virtual machine placement for SDN-managed clouds; it is a placement algorithm based on the location of resources in a data center.
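The abstract describes the heuristic only in outline; the rank-then-place step it names can be sketched as a greedy energy-aware placement. The host names, capacities, and power figures below are hypothetical, and this sketch is not the horse herd metaheuristic itself.

```python
def energy_aware_placement(vms, hosts):
    """Greedy energy-aware placement: hosts are tried in order of
    increasing watts per unit of capacity, so efficient machines fill first."""
    ranked = sorted(hosts, key=lambda h: h["watts"] / h["capacity"])
    placement = {}
    for vm, demand in vms.items():
        for host in ranked:
            if host["capacity"] - host["used"] >= demand:
                host["used"] += demand
                placement[vm] = host["name"]
                break
    return placement

hosts = [
    {"name": "h1", "capacity": 8, "watts": 400, "used": 0},  # 50 W per unit
    {"name": "h2", "capacity": 8, "watts": 240, "used": 0},  # 30 W per unit
]
vms = {"vm1": 4, "vm2": 3, "vm3": 5}
placement = energy_aware_placement(vms, hosts)
print(placement)  # vm1 and vm2 go to the efficient h2; vm3 spills to h1
```

The full metaheuristic explores many candidate placements herd-style and keeps the best; the greedy pass above is what a single candidate evaluation looks like.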
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
The horse herd algorithm deploys virtual machines in a data center to reduce the danger of overload and adhere to Service Level Agreements (SLAs). The algorithm is based on the observation that a herd of horses naturally spreads out to cover a larger area than a single horse; in the context of data centers, virtual machines can likewise be placed in a way that minimizes the risk of overload and meets SLAs. This abstract describes the horse herd algorithm and how it can be used to ensure energy and SLA awareness in virtual machine placement for SDN-managed clouds. The algorithm creates a set of candidate solutions, selects the best solution from the set, and returns it. It is greedy, meaning it always chooses the solution that appears best at the time without considering future consequences; it is not guaranteed to find the optimal solution, but it is often fast and finds reasonable solutions. The paper also employs a supervised learning technique for the classification of data, based on the principle of least squares; the method is primarily suited to linearly separable data sets, although it is also applied to train data sets that cannot be separated linearly. The paper then discusses the quantized spiking network, which is used for crime detection: a classifier based on the principles of artificial neural networks that is trained on the data sets and used for categorization. Finally, the results of the study show that the Landweber iterative supervised classification technique is both more accurate and more efficient than the quantized spiking network.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Crowdsensing is an emerging field in which sensing is performed by a large number of devices distributed in an environment. This paper presents a collaborative mobile fog (CMF) environment in which users deploy sensors; each user can sense and collect data from the environment, and the collected data are then processed and analyzed by a centralized server. We model the sensing process in the collaborative mobile fog environment using the Volterra integral together with logistic drop-offloading. There are several challenges when deploying crowdsensing systems: crowdsensing can be time-consuming and resource-intensive, and the collected data can be difficult to process and analyze. This paper addresses these challenges by modeling the sensing process with the Volterra integral, a formulation that supports efficient processing of large amounts of data and thus allows us to efficiently process and analyze the data collected by the sensors in our CMF environment.
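A Volterra integral equation of the second kind can be discretized and stepped forward in time, since each value depends only on earlier ones. The toy solver below (with K = 1 and f = 1, whose exact solution is e^t) illustrates the formulation, not the paper's sensing model.

```python
import math

def solve_volterra(f, K, t_max, n):
    """Solve x(t) = f(t) + integral_0^t K(t, s) x(s) ds with the left
    rectangle rule, stepping forward (x[i] uses only earlier x[j])."""
    h = t_max / n
    x = []
    for i in range(n + 1):
        t = i * h
        integral = h * sum(K(t, j * h) * x[j] for j in range(i))
        x.append(f(t) + integral)
    return x

# With K = 1 and f = 1 the equation reduces to x' = x, so x(t) = e^t.
x = solve_volterra(f=lambda t: 1.0, K=lambda t, s: 1.0, t_max=1.0, n=1000)
print(abs(x[-1] - math.e) < 0.01)  # True: the scheme tracks e closely
```

The forward-in-time structure is what makes Volterra models attractive for streaming sensor data: new readings extend the solution without recomputing the past.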
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Volterra integral modeling and logistic drop-offloading are two methods that can be used for crowd sensing in a collaborative mobile fog environment. The Volterra integral allows a target object in a scene to be detected, while logistic drop-offloading can be used to determine the target object's position; used together, they improve the accuracy of crowd sensing in a collaborative mobile fog environment. The method utilizes the Volterra integral to approximate the crowd-sensing function and then uses the logistic function to drop off the data sensed by the crowd. This method is shown to be effective in reducing the error in the crowd-sensing function, and to be more efficient in terms of computational time and energy consumption.
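The logistic drop-off rule can be sketched as a smooth offloading probability that grows with local device load; the midpoint and steepness parameters below are hypothetical, chosen only to illustrate the shape of the curve.

```python
import math

def offload_probability(load, midpoint=0.5, steepness=10.0):
    """Logistic drop-offloading rule: the probability of offloading a
    sensing task to the fog grows smoothly with local device load."""
    return 1.0 / (1.0 + math.exp(-steepness * (load - midpoint)))

def should_offload(load, threshold=0.5):
    """Offload when the logistic probability crosses the threshold."""
    return offload_probability(load) >= threshold

print(round(offload_probability(0.2), 3))  # lightly loaded: 0.047
print(should_offload(0.9))                 # heavily loaded: True
```

The smooth curve avoids the oscillation a hard cutoff would cause when a device hovers near the threshold.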
The primary goal of this research is to examine and evaluate several stock market forecasting models. Despite the complexity of the problem space, we found that techniques such as random forest and support vector machine were underutilised. This work discusses a more realistic strategy for forecasting the direction of stock prices. We first consider the stock market data set from the prior year; the data set was cleaned and fine-tuned before being used in the research, so we also discuss preparing the raw data for our work. Second, once the data has been cleaned and prepared, we compare random forest and support vector machine, two popular machine learning methods. The mathematical modeling of the random forest classifier also examines the accuracy of the overall values supplied and how they might be used in practice. In addition, this research presents a machine-learning strategy for forecasting stock prices in a volatile market. Financial institutions would benefit greatly from accurate stock forecasting, and investors would have real solutions to their problems.
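The random forest idea of aggregating many randomized weak learners can be sketched with majority voting over decision stumps (stand-ins for full trees). The features, labels, and data below are hypothetical, and this is a conceptual sketch rather than the study's model.

```python
import random

def train_stump(data, rng):
    """One randomized decision stump: pick a random feature and a random
    threshold (excluding the maximum, so both sides are non-empty), then
    predict the majority label on each side."""
    feat = rng.randrange(len(data[0][0]))
    thresh = rng.choice(sorted(x[feat] for x, _ in data)[:-1])
    left = [y for x, y in data if x[feat] <= thresh]
    right = [y for x, y in data if x[feat] > thresh]
    vote = lambda ys: max(set(ys), key=ys.count) if ys else 0
    return feat, thresh, vote(left), vote(right)

def forest_predict(stumps, x):
    """Majority vote across all stumps."""
    votes = [(l if x[f] <= t else r) for f, t, l, r in stumps]
    return max(set(votes), key=votes.count)

rng = random.Random(7)
# Hypothetical features: (price momentum, volume trend) -> 1 = price went up.
data = [((0.5, 0.4), 1), ((0.6, 0.5), 1), ((0.7, 0.6), 1),
        ((-0.4, -0.3), 0), ((-0.5, -0.2), 0), ((-0.6, -0.1), 0)]
stumps = [train_stump(data, rng) for _ in range(25)]
prediction = forest_predict(stumps, (0.8, 0.7))
print(prediction)  # 1: strong positive momentum votes "up"
```

Randomizing both the feature and the split decorrelates the stumps, which is the same mechanism that makes full random forests robust.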
Journal of Emerging Technologies and Innovative Research, 2024
To profit from trading or to mitigate market dangers, investors need dependable financial forecasting methods. Using returns of the FTSE100 and the New York Stock Exchange, this study examines the predictive accuracy of a trading strategy based on a neurofuzzy model. Furthermore, empirical evidence supports the premise proposed by Bekaert and Wu (2000) that including conditional volatility change estimates substantially improves the predictability of the neurofuzzy model; consequently, heading into the following trading day, a potentially pivotal juncture, we are armed with reliable data. By consistently surpassing the returns of feedforward neural networks, Markov-switching models, and buy-and-hold strategies, the volatility-based neurofuzzy model yields a superior total return (including transaction costs). Two plausible hypotheses that support the notion that dependence on indicators results from reliance on volatility are the presence of portfolio insurance plans in the stock markets and the "volatility feedback" idea. An investing strategy built on the suggested neurofuzzy model may therefore surpass passive portfolio management.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING , 2024
This paper presents a State-of-the-Art Gaussian Distributive Optimized Congruential Cryptographic... more This paper presents a State-of-the-Art Gaussian Distributive Optimized Congruential Cryptographic Deep Multilayer Perceptive Network (GD-DMPN) for achieving Load Balancing and Secure Data Outsourcing in Federated Cloud. The GD-DMPN can efficiently distribute and correlate data over many nodes, making it a valuable tool for data management in federated Clouds. The GD-DMPN can also exploit multiple layers of Perceptual Learning for enhanced data correlation and load balancing. As the world moves more and more towards digitalization, the demand for cloud services is increasing rapidly. Cloud services allow users to access their data and applications anywhere, anytime. However, the use of cloud services also raises security and privacy concerns. Several research studies have proposed using a federated cloud to address these concerns. Federated cloud is a type of cloud computing where a group of organizations cooperate to provide cloud services. Each organization in the federated cloud has its portion of the total resources available. This type of cloud computing has several advantages over other types, such as improved security and privacy and giving users more control over their data. This study proposes a state-of-the-art Gaussian distributive optimized congruential cryptographic deep multilayer perceptive network for load balancing and secure data outsourcing in a federated cloud. Our proposed network is based on the Gaussian distribution, a wellknown statistical distribution. We use the Gaussian distribution to distribute the resources among the organizations in the federated cloud. This ensures that each organization has access to the resources it needs while providing a degree of security and privacy. We also propose a deep multilayer perceptive network for our proposed system. 
This network is used to monitor the activities of the organizations in the federated cloud and to provide feedback to the system. This feedback is used to optimize the system and ensure the resources are used efficiently. Our proposed system can provide many benefits, such as improved security, privacy, and efficiency. In addition, our system can provide users with more control over their data. Our proposed system has the potential to revolutionize the federated cloud and provide users with a more secure and private way to access their data.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
This paper proposes a novel deep multilayer perceptive network (DMPLN) based on Gaussian distribu... more This paper proposes a novel deep multilayer perceptive network (DMPLN) based on Gaussian distributive optimized congruential cryptographic (GDOC) for load balancing and secure data outsourcing in federated cloud computing. DMPLN can effectively improve the security and efficiency of federated cloud computing. Specifically, we use GDOC to encrypt data and then DMPLN to classify the encrypted data. The advantage of GDOC is that it can improve data security and the efficiency of data classification. Furthermore, we use the improved particle swarm optimization (IPSO) algorithm to optimize DMPLN, which can further improve classification accuracy. The results of our thorough tests, which we conduct on two real-world datasets, demonstrate that our suggested strategy may significantly increase the security and effectiveness of federated cloud computing. A highly efficient and secure deep multilayer perceptive network based on Gaussian distributive optimized congruential cryptographic for load balancing and secure data outsourcing in federated cloud computing. A highly efficient and secure deep multilayer perceptive network is proposed based on Gaussian distributive optimized congruential cryptography for load balancing and secure data outsourcing in federated cloud computing. The Gaussian Distributive Optimized congruential cryptographic technique used for load balancing and secure data outsourcing in federated cloud computing. The suggested solution balances the load and safeguards the data in federated cloud computing. The suggested method employs the Gaussian distributive optimal congruential cryptography method to safeguard the data in federated cloud computing.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Since IT permits the establishment of a unique virtual network (VN) on a common physical infrastr... more Since IT permits the establishment of a unique virtual network (VN) on a common physical infrastructure, network virtualization technology research and application have gained popularity in the communication industry in recent years. A proposal has been put out to virtualize wireless networks with the explicit intention of improving the future of the Internet. In order to establish segregation among slices of wireless networks, Bilateral auction-based resource allocation is established., considering that virtual network mapping algorithms are efficient and necessary for the building of virtual networks on the SN using these technologies (VNE). Presently, the consensus among industry experts is that network virtualization technology is a viable remedy for the limited architecture of the Internet. Presently, the consensus among industry experts is that network virtualization technology is a viable remedy for the limited architecture of the Internet. As a result of the scarcity of fundamental technological investigations concerning wireless access network virtualization and the preponderance of current research in this domain being devoted to cable network virtualization, which is predominantly linked to the backbone and network of data centres, wireless network virtualization has become a focal point of academic and industrial research. This initiative aims to expedite the development of wireless skills in order to foster innovation and meet the ever-changing needs of the industry. However, there has been no effort to implement the dynamic resource allocation technique for resource sharing in virtualized wireless networks. Present strategies for allocating resources on local and global virtual networks are examined and summarised in this study. 
The analysis of the topology properties of physical and virtual networks is conducted using the centrality theory of social and complex networks. Additionally, this study proposes two efficient methods for resource allocation in wireless and cross-domain virtual networks and develops two models for such networks.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Using a deep learning methodology, this article conducts an exhaustive examination of intelligent resource allocation in wireless communication networks. The investigation begins with the methods and concepts behind the CSCN architecture and the throughput of the small base stations (SBSs) incorporated into this design. A long short-term memory (LSTM) network model is then built to forecast users' mobile positions. Each user's transmission conditions are assessed from two factors, the location of the user's mobile device and the condition of the small base station to which it is linked, ensuring that the cache settings are in the intended state. Upon careful examination of the scores, the small base station ascertains which users enjoy the most advantageous transmission conditions. Network throughput optimization is treated as a multi-agent, non-cooperative game that can be approached with game theory. The purpose of this study is to allow the small base station to autonomously learn and choose channel resources in line with the network environment, optimising performance through a deep reinforcement learning-based method for wireless resource allocation. Compared with the standard random-access approach and an algorithm reported in the literature, simulation findings suggest that the proposed method significantly boosts network throughput. We also provide a framework-based resource control technique by tackling the difficulty of user traffic distribution within fine-grained resource management. Despite having polynomial processing cost, the resource management approach exhibits performance comparable to that of a matching-based proportional-fair user dual-connection strategy.
The next step is to implement the optimized decision strategy to allocate available resources and delegate duties. Once the intelligent agents have been trained, they independently execute these activities according to the present state of the system and the predetermined policy. In conclusion, the simulation results indicate that the algorithm can reduce latency and energy consumption while improving user experience.
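The abstract describes base stations autonomously learning channel selections. A much-simplified, single-agent, bandit-style Q-learning sketch of that idea is shown below; the Bernoulli reward model, hyperparameters, and success probabilities are illustrative assumptions, not the paper's deep reinforcement learning algorithm:

```python
import random

def q_learning_channel_selection(success_prob, episodes=5000, alpha=0.1,
                                 epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning over channels with Bernoulli success rewards."""
    rng = random.Random(seed)
    q = [0.0] * len(success_prob)
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.randrange(len(q))                   # explore
        else:
            a = max(range(len(q)), key=q.__getitem__)   # exploit
        reward = 1.0 if rng.random() < success_prob[a] else 0.0
        q[a] += alpha * (reward - q[a])                 # stateless TD update
    return q

# Three channels; channel 1 delivers packets most reliably.
q = q_learning_channel_selection([0.2, 0.8, 0.5])
best = max(range(3), key=q.__getitem__)
print(best)  # the agent converges to channel 1, the most reliable one
```

In the full multi-agent setting each base station would run such a learner concurrently, which is what turns the problem into the non-cooperative game the abstract mentions.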
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
This paper presents a deep recurrent network (DRN) for green task scheduling in the cloud, designed to optimize resource allocation by learning the dependencies between tasks and their resources. Cloud computing has recently gained popularity thanks to its flexibility, mobility, and scalability, but deploying large-scale cloud applications can be challenging due to resource allocation problems, and as cloud services spread, efficient and green task scheduling algorithms become increasingly essential. We therefore propose a logistic regression-based deep recurrent network (LR-DRN) that addresses green job scheduling: it learns scheduling patterns from historical data, accurately predicts future green task scheduling results, and achieves near-optimal resource allocation by forecasting future resource demand and adjusting the allocation accordingly. Experimental results show that the LR-DRN achieves significantly better resource utilization than several state-of-the-art optimization algorithms and outperforms a state-of-the-art deep recurrent network in several resource-intensive scenarios; simulation results confirm that it significantly improves green task scheduling performance in cloud computing.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING
Resource-optimized task scheduling is an essential issue in green computing: it aims to minimize the resources used while still meeting deadlines. This paper uses a logistic regression-based deep recurrent network to optimize task scheduling in cloud computing, where task scheduling allocates tasks to virtual machines (VMs) so that they complete within a given timeframe. This type of network can learn patterns in historical data and make predictions about future data, making it possible to choose schedules that minimize resource use. We first train the network on a large dataset of real-world task scheduling traces and then use it to find the best schedule for a given resource configuration. Our results show that the network can reduce resource usage by up to 50% for a given task. Various optimization techniques have been proposed to optimize resource use and minimize task scheduling costs; the approach studied here has the potential to save significant resources in cloud computing, which translates into cost savings for companies that use cloud services.
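The logistic regression component at the core of the LR-DRN can be sketched on its own. Below is a self-contained gradient-descent implementation on a toy "schedulable on green energy?" dataset; the features, labels, and hyperparameters are invented for illustration and do not come from the paper:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the logistic loss; returns weights, bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid prediction
            g = p - yi                            # gradient of the loss wrt z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Toy history: (normalized CPU demand, data-center load) -> 1 if the task
# could be scheduled on renewable energy, else 0.
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [0.15, 0.15]))  # low demand, low load -> 1
```

In the full model these logistic outputs would feed the recurrent layers, which capture the temporal scheduling patterns the abstract refers to.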
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
This research paper discusses using Landweber iterative supervised classification and a quantized spiking network for emotion analysis in crime detection. The proposed methodology is evaluated on a real-world dataset and proves promising in terms of accuracy and robustness. This work aims to develop a supervised classification and quantized spiking network for emotion analysis: we propose a method to extract features from the temporal dynamics of a spiking neural network (SNN) and use these features to train a support vector machine (SVM) classifier, and we quantize the SNN output to improve classification accuracy. Our results show that the proposed method achieves good classification performance on a publicly available dataset.
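The quantized spiking network mentioned above can be illustrated with a single quantized leaky integrate-and-fire (LIF) neuron, whose spike train is the kind of temporal feature the abstract describes. This is a minimal sketch; the quantization scheme, leak factor, and threshold are assumptions, not the paper's parameters:

```python
def quantize(value, levels=8, v_max=1.0):
    """Uniformly quantize a membrane value into `levels` discrete steps."""
    step = v_max / levels
    return min(round(value / step), levels) * step

def lif_spikes(inputs, threshold=1.0, leak=0.9, levels=8):
    """Leaky integrate-and-fire neuron with a quantized membrane potential."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = quantize(leak * v + current, levels)  # leak, integrate, quantize
        if v >= threshold:
            spikes.append(1)
            v = 0.0                               # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 0.9]))  # [0, 1, 0, 0, 1]
```

Spike-timing statistics from such neurons (rates, inter-spike intervals) are the features that would then be fed to the SVM classifier.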
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Emotion analysis is a promising tool for crime detection: it can identify potential suspects, assess the risk of violence, and track the progress of a criminal investigation. However, identifying emotions accurately is difficult, and several factors can influence the results of emotion analysis. This paper proposes a new approach to emotion analysis in crime detection that combines Landweber iterative supervised classification with a quantized spiking network. Landweber iterative supervised classification improves the accuracy of emotion analysis by iteratively training a classifier on a labeled dataset, while a quantized spiking network is a type of neural network well suited to emotion analysis because it can capture the temporal dynamics of emotions. Evaluated on a dataset of facial expressions and voice recordings, the proposed approach achieved state-of-the-art accuracy in emotion analysis, and it offers several advantages over traditional approaches: it is more accurate, more robust to noise, and more efficient. Beyond improving the accuracy of emotion analysis in crime detection, it can support new applications, such as a system that automatically detects signs of deception in a witness statement. In the study, the Landweber iterative supervised classification algorithm achieved an overall accuracy of 97.5% in classifying emotions, while the quantized spiking neural network achieved an overall accuracy of 97.2%.
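The Landweber iteration behind the classifier is a classical gradient-type scheme for solving A x = b via x <- x + omega * A^T (b - A x), which converges when 0 < omega < 2 / sigma_max(A)^2. A minimal pure-Python sketch on a toy system (the example matrix and step size are illustrative, not the paper's data):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def landweber(A, b, omega=0.1, iters=2000):
    """Landweber iteration: x <- x + omega * A^T (b - A x), starting at 0."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]   # residual b - A x
        x = [xi + omega * gi for xi, gi in zip(x, matvec(At, r))]
    return x

# Recover x = (1, 2) from a well-conditioned 2x2 system.
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 2.0]
x = landweber(A, b)
print([round(v, 3) for v in x])  # [1.0, 2.0]
```

In a classification setting, A would hold training features, b the labels, and early stopping of the iteration acts as regularization.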
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Energy efficiency is one of the most crucial aspects of operating a cloud: a cloud that is not energy efficient is more expensive to operate and maintain. One technique for increasing a system's energy efficiency is to use the horse herd algorithm to position virtual machines within the cloud. The horse herd algorithm is a heuristic for optimizing virtual machine placement. It first identifies the set of machines that are the most energy efficient, which will host the virtual machines, and the set of machines that are the least energy efficient, which are to be avoided; finally, the algorithm places the virtual machines on the most energy-efficient machines. Because the virtual machines always land on the most energy-efficient machines, the algorithm also helps the cloud meet its Service Level Agreement (SLA) requirements. The horse herd algorithm is therefore an effective technique for increasing a cloud's energy efficiency while respecting SLAs, and a recent study has shown that it can achieve both goals in virtual machine placement for SDN-managed clouds. It is a placement algorithm based on the location of resources in a data center.
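The placement logic described above, favouring the most energy-efficient hosts while respecting capacity, can be sketched as a simple greedy heuristic. This is not the horse herd algorithm itself; the host fields (`capacity`, `watts_per_cpu`) and first-fit rule are illustrative assumptions:

```python
def place_vms(vms, hosts):
    """Greedy energy-aware placement: prefer the most efficient host with room.

    vms: list of (name, cpu_demand); hosts: list of dicts with
    'name', 'capacity', and 'watts_per_cpu' (lower = more efficient).
    """
    ranked = sorted(hosts, key=lambda h: h["watts_per_cpu"])
    used = {h["name"]: 0 for h in hosts}
    placement = {}
    for vm, demand in vms:
        for h in ranked:
            if used[h["name"]] + demand <= h["capacity"]:
                used[h["name"]] += demand
                placement[vm] = h["name"]
                break
        else:
            placement[vm] = None  # SLA risk: no host can absorb this VM
    return placement

hosts = [
    {"name": "h1", "capacity": 8, "watts_per_cpu": 15.0},
    {"name": "h2", "capacity": 8, "watts_per_cpu": 25.0},
]
vms = [("vm1", 4), ("vm2", 4), ("vm3", 4)]
print(place_vms(vms, hosts))  # {'vm1': 'h1', 'vm2': 'h1', 'vm3': 'h2'}
```

A metaheuristic like the horse herd algorithm would explore many such candidate placements and keep the one with the lowest estimated energy cost, rather than committing greedily.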
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
The horse herd algorithm deploys virtual machines in a data center so as to reduce the danger of overload and adhere to Service Level Agreements (SLAs). The algorithm is based on the observation that a herd of horses naturally spreads out to cover a larger area than a single horse; in the context of data centers, virtual machines can likewise be placed in a way that minimizes the risk of overload and meets SLAs. This article discusses the horse herd algorithm and how it can be used to ensure energy and SLA awareness in virtual machine placement for an SDN-managed cloud. The algorithm creates a set of potential solutions, selects the best solution from the set, and returns it. It is greedy, meaning it always chooses the solution that appears best at the time without considering future consequences; it is not guaranteed to find the optimal solution, but it is often fast and finds reasonable solutions. The paper also covers the Landweber iterative supervised classification technique, a supervised learning technique based on the principle of least squares: it is used to train and classify linearly separable datasets, and the method is also applied to datasets that cannot be separated linearly. The quantized spiking network, based on the principles of artificial neural networks, is likewise employed to train on datasets and classify data for crime detection. The study shows that the Landweber iterative supervised classification technique is both more accurate and more efficient than the quantized spiking network.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
Crowdsensing is an emerging field in which sensing is performed by a large number of devices distributed in an environment. This paper presents a collaborative mobile fog (CMF) environment in which users deploy sensors: each user can sense and collect data from the environment, and the collected data is then processed and analyzed by a centralized server. We model the crowdsensing process in this collaborative mobile fog environment using the Volterra integral together with logistic drop-offloading. Deploying crowdsensing systems raises several challenges: crowdsensing can be time-consuming and resource-intensive, and the collected data can be difficult to process and analyze. This paper addresses these challenges by using the Volterra integral to model the sensing process; the Volterra integral tooling efficiently handles large amounts of data, which allows us to efficiently process and analyze the data collected by the sensors in our CMF environment.
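A Volterra integral equation of the second kind, x(t) = f(t) + ∫_0^t K(t, s) x(s) ds, can be solved numerically with the trapezoidal rule, which is one plausible way to realize the modelling described above. A minimal sketch, validated on a case with the known solution x(t) = e^t; the discretization choice is an assumption, not the paper's method:

```python
def solve_volterra(f, K, t_end, n):
    """Trapezoidal solver for x(t) = f(t) + integral_0^t K(t, s) x(s) ds."""
    h = t_end / n
    t = [i * h for i in range(n + 1)]
    x = [f(t[0])]                                  # x(0) = f(0)
    for i in range(1, n + 1):
        acc = 0.5 * K(t[i], t[0]) * x[0]           # trapezoid endpoints
        for j in range(1, i):
            acc += K(t[i], t[j]) * x[j]            # interior nodes
        # Solve the implicit step for x_i.
        x.append((f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i])))
    return t, x

# x(t) = 1 + integral_0^t x(s) ds has the exact solution x(t) = e^t.
t, x = solve_volterra(f=lambda t: 1.0, K=lambda t, s: 1.0, t_end=1.0, n=200)
print(round(x[-1], 3))  # 2.718, i.e. e to three decimals
```

In the crowdsensing model, f would represent directly measured data and K how past sensor readings influence the current aggregate.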
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
The Volterra integral and logistic drop-offloading are two methods that can be used for crowd sensing in a collaborative mobile fog environment. The Volterra integral allows a target object in a scene to be detected, while logistic drop-offloading can determine the target object's position; used together, they improve the accuracy of crowd sensing in such an environment. The proposed method uses the Volterra integral to approximate the crowd-sensing function and then uses the logistic function to decide when to drop off the data sensed by the crowd. The method is shown to be effective in reducing the error of the crowd-sensing function, and to be more efficient in both computational time and energy consumption.
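Logistic drop-offloading, as described above, can be sketched as a sigmoid decision over the gap between local and fog processing costs. The cost model, function names, and the steepness parameter `k` are illustrative assumptions, not the paper's formulation:

```python
import math

def offload_probability(local_cost, fog_cost, k=1.0):
    """Logistic decision: offload probability grows with the cost gap."""
    return 1.0 / (1.0 + math.exp(-k * (local_cost - fog_cost)))

def should_offload(local_cost, fog_cost, threshold=0.5):
    """Drop the sensed data off to the fog when offloading looks cheaper."""
    return offload_probability(local_cost, fog_cost) >= threshold

print(should_offload(local_cost=8.0, fog_cost=3.0))  # True: fog is cheaper
print(should_offload(local_cost=2.0, fog_cost=6.0))  # False: stay local
```

The smooth sigmoid (rather than a hard cutoff) avoids oscillating offload decisions when the two costs are nearly equal.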
The primary goal of this research is to examine and evaluate several stock market forecasting models. Despite the complexity of the problem space, we found that techniques such as random forest and support vector machines were underutilised, so this paper presents a more realistic strategy for forecasting the direction of stock prices. We first consider the stock market dataset from the prior year; the dataset was cleaned and fine-tuned before being used in the research, so we also discuss preparing the raw data for this work. Second, once the data has been cleaned and prepared, we compare two popular machine learning methods, random forest and support vector machine. Mathematical modelling of the random forest classifier also examines the accuracy of the overall values supplied and how they might be used in practice. In addition, this research presents a machine-learning strategy for forecasting stock prices in a volatile market: financial institutions would benefit greatly from accurate stock forecasting, and investors would gain real solutions to their problems.
Journal of Emerging Technologies and Innovative Research, 2024
To profit from trading or to mitigate market dangers, investors need dependable financial forecasting methods. This study examines the predictive accuracy of a trading strategy using a neurofuzzy model on FTSE100 and New York Stock Exchange returns. Furthermore, empirical evidence supports the premise proposed by Bekaert and Wu (2000) that including conditional volatility change estimates substantially improves the predictability of the neurofuzzy model; consequently, heading into the following trading day, a potentially pivotal juncture, we are armed with reliable data. By consistently surpassing the returns of feedforward neural networks, Markov-switching models, and buy-and-hold strategies, the volatility-based neurofuzzy model yields a superior total return, including transaction costs. Two plausible hypotheses that lend weight to the notion that dependence on indicators results from reliance on volatility are the presence of portfolio insurance plans in the stock markets and the "volatility feedback" idea. Passive portfolio management may be surpassed by an investing strategy built on the suggested neurofuzzy model.
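The conditional-volatility input that drives the neurofuzzy model can be illustrated with a RiskMetrics-style EWMA variance estimate. A minimal sketch; the decay factor `lam = 0.94` and the toy return series are assumptions, not the study's data:

```python
def ewma_volatility(returns, lam=0.94):
    """EWMA of squared returns: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_t^2."""
    sigma2 = returns[0] ** 2          # seed the recursion with the first return
    out = [sigma2]
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
        out.append(sigma2)
    return out

# Toy daily returns; the larger swings push the variance estimate up.
returns = [0.01, -0.02, 0.015, -0.03, 0.005]
vol = ewma_volatility(returns)
print(vol[-1] > vol[0])  # True: conditional variance rose over the window
```

Day-over-day changes in this estimate are the kind of "conditional volatility change" feature the abstract credits with improving next-day predictability.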
Papers by beeztry publication