Search Results (7)

Search Parameters:
Keywords = web traffic forecast

29 pages, 1171 KiB  
Review
Monitoring Volcanic Plumes and Clouds Using Remote Sensing: A Systematic Review
by Rui Mota, José M. Pacheco, Adriano Pimentel and Artur Gil
Remote Sens. 2024, 16(10), 1789; https://doi.org/10.3390/rs16101789 - 18 May 2024
Viewed by 981
Abstract
Volcanic clouds pose significant threats to air traffic, human health, and economic activity, making early detection and monitoring crucial. Accurate determination of eruptive source parameters is essential for forecasting and implementing preventive measures. This review article aims to identify the most common remote sensing methods for monitoring volcanic clouds. To achieve this, we conducted a systematic literature review of scientific articles indexed in the Web of Science database published between 2010 and 2022, using multiple query strings across all fields. The articles were reviewed based on research topics, remote sensing methods, practical applications, case studies, and outcomes using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our study found that satellite-based remote sensing approaches are the most cost-efficient and accessible, allowing for the monitoring of volcanic clouds at various spatial scales. Brightness temperature difference is the most commonly used method for detecting volcanic clouds at a specified temperature threshold. Approaches that apply machine learning techniques help overcome the limitations of traditional methods. Multiplatform approaches can compensate for the spatial, temporal, and optical limitations of individual sensors and improve accuracy. This study explores various techniques for monitoring volcanic clouds, identifies research gaps, and lays the foundation for future research.
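To make the detection step concrete, the following is a minimal sketch of the brightness temperature difference (BTD) approach that the review identifies as the most common detection method. The band pairing, the -0.5 K threshold, and the synthetic scene are illustrative assumptions, not values taken from the reviewed studies.

```python
import numpy as np

def detect_volcanic_cloud(bt_11um, bt_12um, threshold_k=-0.5):
    """Flag pixels whose 11-12 micron brightness temperature difference
    falls below a threshold, the classic split-window ash signature.

    bt_11um, bt_12um : 2-D arrays of brightness temperatures in kelvin.
    threshold_k      : illustrative cut-off; operational values are tuned
                       per sensor and per atmospheric conditions.
    """
    btd = bt_11um - bt_12um       # negative BTD is typical of silicate ash
    return btd < threshold_k      # boolean mask of suspected ash pixels

# Example with synthetic data: a 100x100 scene with a small "ash" patch.
scene_11 = np.full((100, 100), 280.0)
scene_12 = np.full((100, 100), 279.0)
scene_12[40:60, 40:60] = 281.0    # ash patch: BT(12um) exceeds BT(11um)
mask = detect_volcanic_cloud(scene_11, scene_12)
print("flagged pixels:", int(mask.sum()))
```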

2489 KiB  
Proceeding Paper
Development of a Compact IoT-Enabled Device to Monitor Air Pollution for Environmental Sustainability
by Vijayaraja Loganathan, Dhanasekar Ravikumar, Vidhya Devaraj, Uma Mageshwari Kannan and Rupa Kesavan
Eng. Proc. 2023, 58(1), 18; https://doi.org/10.3390/ecsa-10-15996 - 15 Nov 2023
Cited by 2 | Viewed by 835
Abstract
Degrading air quality is a matter of concern nowadays, and monitoring air quality helps us keep an eye on it. Air pollution is a pressing global issue with far-reaching impacts on public health and the environment. The need for effective and real-time monitoring systems has become increasingly apparent to combat this growing concern. Here, an innovative air pollution surveillance system (APSS) that leverages Internet of Things (IoT) technology to enable comprehensive and dynamic air quality assessment is introduced. The proposed APSS employs a network of IoT-enabled sensors strategically deployed across urban and industrial areas. These sensors are equipped to measure various pollutants, including particulate matter (PM2.5 and PM10), nitrogen dioxide (NO2), sulfur dioxide (SO2), ozone (O3), carbon monoxide (CO), and volatile organic compounds (VOCs). A regression model is created to forecast air quality using sensor data while taking into account variables including weather information, traffic patterns, and pollutants. Additionally, air quality categories (such as good, moderate, and harmful) are classified using classification algorithms based on preset thresholds. The IoT architecture facilitates seamless data transmission from these sensors to a centralized cloud-based platform. The developed APSS monitors the air quality using an MQ-135 gas sensor, and the data are shared over a web server using the Internet. An alarm is triggered when the air quality drops below a certain level, and the air quality reading, measured in parts per million (PPM), is displayed on the connected unit. Furthermore, when the PPM exceeds a set level, an alert message is sent to the air pollution control board, which can take preventive measures and notify the public, helping the community protect its environment and maintain good air quality. Additionally, the APSS offers user-friendly interfaces, accessible through web and mobile applications, to empower citizens with real-time air quality information. The effectiveness of the IoT-based air pollution monitoring system has been validated through successful field trials in urban and industrial environments, demonstrating its ability to provide real-time data insights and empower stakeholders in promoting environmental sustainability and fostering citizen engagement.
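As a rough illustration of the threshold-based classification and alerting logic described above, here is a minimal sketch. The PPM breakpoints and the alert hook are illustrative assumptions, not the calibrated values used by the APSS.

```python
def classify_air_quality(ppm):
    """Map an MQ-135-style PPM reading to a coarse category.
    Breakpoints are illustrative, not calibrated values from the paper."""
    if ppm < 400:
        return "good"
    if ppm < 1000:
        return "moderate"
    return "harmful"

def handle_reading(ppm, alert_threshold=1000):
    category = classify_air_quality(ppm)
    print(f"air quality: {ppm} PPM ({category})")
    if ppm >= alert_threshold:
        # In the described APSS this would trigger the alarm and notify the
        # pollution control board; here we only simulate the alert.
        print("ALERT: threshold exceeded, notifying authorities")

for reading in (250, 750, 1300):
    handle_reading(reading)
```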

23 pages, 3332 KiB  
Article
Predicting the Performance of Retail Market Firms: Regression and Machine Learning Methods
by Darko B. Vukovic, Lubov Spitsina, Ekaterina Gribanova, Vladislav Spitsin and Ivan Lyzin
Mathematics 2023, 11(8), 1916; https://doi.org/10.3390/math11081916 - 18 Apr 2023
Cited by 4 | Viewed by 3618
Abstract
The problem of predicting profitability is exceptionally relevant for investors and company owners. This paper examines the factors affecting firm performance and tests and compares various methods based on linear and non-linear dependencies between variables for predicting firm performance. In this study, the methods include random effects regression, individual machine learning algorithms with optimizers (DNN, LSTM, and Random Forest), and advanced machine learning methods consisting of sets of algorithms (portfolios and ensembles). The training sample includes 551 retail-oriented companies and data for 2017–2019 (panel data, 1653 observations). The test sample contains data for these companies for 2020. This study combines two approaches (stages): an econometric analysis of the influence of factors on the company’s profitability and machine learning methods to predict the company’s profitability. To compare forecasting methods, we used parametric and non-parametric predictive measures and ANOVA. The paper shows that previous profitability has a strong positive impact on a firm’s performance. We also find a non-linear positive effect of sales growth and web traffic on firm profitability. These variables significantly improve the prediction accuracy. Regression is inferior in forecast accuracy to machine learning methods. Advanced methods (portfolios and ensembles) demonstrate better and more steady results compared with individual machine learning methods.
(This article belongs to the Section Network Science)
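As a rough illustration of one of the individual machine learning methods named above, the following is a minimal Random Forest regression sketch on synthetic panel-style features (stand-ins for lagged profitability, sales growth, and web traffic). The feature construction, sample sizes, and hyperparameters are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-in for the 2017-2019 training panel: three predictors
# (lagged profitability, sales growth, web traffic) and next-year profitability.
X_train = rng.normal(size=(1653, 3))
y_train = 0.6 * X_train[:, 0] + 0.2 * X_train[:, 1] ** 2 + rng.normal(scale=0.1, size=1653)

# Synthetic stand-in for the 2020 test sample.
X_test = rng.normal(size=(551, 3))
y_test = 0.6 * X_test[:, 0] + 0.2 * X_test[:, 1] ** 2 + rng.normal(scale=0.1, size=551)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```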

23 pages, 3039 KiB  
Article
The Effectiveness of Centralized Payment Network Advertisements on Digital Branding during the COVID-19 Crisis
by Damianos P. Sakas, Ioannis Dimitrios G. Kamperos, Dimitrios P. Reklitis, Nikolaos T. Giannakopoulos, Dimitrios K. Nasiopoulos, Marina C. Terzi and Nikos Kanellos
Sustainability 2022, 14(6), 3616; https://doi.org/10.3390/su14063616 - 19 Mar 2022
Cited by 36 | Viewed by 3721
Abstract
Crises are always challenging for banking systems. In the case of COVID-19, centralized payment networks and FinTech companies’ websites have been affected by user behavior globally. As a result, there is ample opportunity for marketing managers and professionals to focus on big data from FinTech websites. This can contribute to a better understanding of the variables impacting their brand name and how to manage risk during crisis periods. This research is divided into three stages. The first stage presents the web analytics and the data retrieved from the FinTech platforms. The second stage illustrates the statistical analysis and the fuzzy cognitive mapping (FCM) performed. In the final stage, an agent-based model is outlined in order to simulate and forecast a company’s brand name visibility and user behavior. The results of this study suggest that, during crises, centralized payment networks (CPNs) and FinTech companies with high organic traffic tend to convert new visitors to actual “customers”.
(This article belongs to the Special Issue Crowd-Powered e-Services)
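As a rough illustration of the fuzzy cognitive mapping step mentioned in the second stage, the following is a minimal sketch of an FCM activation update. The concepts, weight matrix, and sigmoid squashing function are illustrative assumptions, not the authors' calibrated map.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# Illustrative concepts: organic traffic, new visitors, brand visibility, conversions.
concepts = ["organic_traffic", "new_visitors", "brand_visibility", "conversions"]
# weights[i, j]: assumed influence of concept i on concept j.
weights = np.array([
    [0.0, 0.6, 0.4, 0.3],
    [0.0, 0.0, 0.2, 0.5],
    [0.0, 0.3, 0.0, 0.4],
    [0.0, 0.0, 0.1, 0.0],
])

state = np.array([0.8, 0.2, 0.3, 0.1])   # initial activation levels in [0, 1]
for _ in range(20):                       # iterate until the map settles
    state = sigmoid(state @ weights + state)
print(dict(zip(concepts, state.round(3))))
```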

21 pages, 5978 KiB  
Article
Web Traffic Time Series Forecasting Using LSTM Neural Networks with Distributed Asynchronous Training
by Roberto Casado-Vara, Angel Martin del Rey, Daniel Pérez-Palau, Luis de-la-Fuente-Valentín and Juan M. Corchado
Mathematics 2021, 9(4), 421; https://doi.org/10.3390/math9040421 - 21 Feb 2021
Cited by 39 | Viewed by 9811
Abstract
Evaluating web traffic on a web server is highly critical for web service providers since, without a proper demand forecast, customers could face lengthy waiting times and abandon the website. However, this is a challenging task since it requires making reliable predictions based on the arbitrary nature of human behavior. We introduce an architecture that collects source data and performs supervised forecasting of the page-view time series. Based on the Wikipedia page views dataset released for a Kaggle competition in 2017, we created an updated version covering the years 2018–2020. This dataset is processed, and its features and hidden patterns are extracted to inform the design of a Long Short-Term Memory (LSTM) recurrent neural network. The model is trained in a distributed fashion, following the data parallelism paradigm and the Downpour training strategy. Predictions for the seven dominant languages in the dataset are accurate, with the loss function and measurement error within reasonable ranges. Although the analyzed time series exhibit weak seasonality and trend patterns, the predictions are good, showing that analyzing hidden patterns and extracting features before designing the model enhances its accuracy. In addition, distributed training noticeably improves the model's accuracy. Since predicting web traffic precisely usually requires large datasets, we designed the forecasting system to remain accurate even with limited data. We tested the proposed model on the new Wikipedia page views dataset and obtained highly accurate predictions; the mean absolute error with respect to the original series is, on average, below 30. This represents a significant step forward in the field of time series prediction for web traffic forecasting.
(This article belongs to the Special Issue Machine Learning and Data Mining in Pattern Recognition)
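As a rough illustration of the univariate LSTM forecasting step (without the distributed Downpour/data-parallel training the paper relies on), the following is a minimal sketch on synthetic page-view data. The window length, layer sizes, and training settings are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Synthetic daily page-view series standing in for one Wikipedia page:
# a weekly cycle plus noise.
rng = np.random.default_rng(42)
series = 1000 + 100 * np.sin(np.arange(900) * 2 * np.pi / 7) + rng.normal(0, 20, 900)

def make_windows(data, lookback=30):
    """Turn a 1-D series into (samples, lookback, 1) windows and next-day targets."""
    X = np.array([data[i:i + lookback] for i in range(len(data) - lookback)])
    y = data[lookback:]
    return X[..., np.newaxis], y

X, y = make_windows(series)
split = 800  # simple chronological train/validation split

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(30, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")  # mean absolute error, as the paper reports
model.fit(X[:split], y[:split], epochs=5, batch_size=32, verbose=0)
print("validation MAE:", model.evaluate(X[split:], y[split:], verbose=0))
```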

26 pages, 29488 KiB  
Article
Geospatial Serverless Computing: Architectures, Tools and Future Directions
by Sujit Bebortta, Saneev Kumar Das, Meenakshi Kandpal, Rabindra Kumar Barik and Harishchandra Dubey
ISPRS Int. J. Geo-Inf. 2020, 9(5), 311; https://doi.org/10.3390/ijgi9050311 - 7 May 2020
Cited by 22 | Viewed by 6399
Abstract
Several real-world applications involve the aggregation of physical features corresponding to different geographic and topographic phenomena. This information plays a crucial role in analyzing and predicting several events. The application areas, which often require real-time analysis, include traffic flow, forest cover, disease monitoring, and so on. However, most existing systems exhibit limitations at various levels of processing and implementation, the most commonly observed being a lack of reliability and scalability and excessive computational costs. In this paper, we examine well-known scalable serverless frameworks, i.e., Amazon Web Services (AWS) Lambda, Google Cloud Functions, and Microsoft Azure Functions, for the management of geospatial big data. We discuss some of the existing approaches that are popularly used in analyzing geospatial big data and indicate their limitations. We report the applicability of our proposed framework in the context of a Cloud Geographic Information System (GIS) platform and give an account of state-of-the-art technologies and tools relevant to our problem domain. We also visualize the performance of the proposed framework in terms of reliability, scalability, speed, and security parameters. Furthermore, we present map overlay analysis, point-cluster analysis, the generated heatmap, and clustering analysis, together with relevant statistical plots. We consider two application case studies. The first was explored using the Mineral Resources Data System (MRDS) dataset, which describes the worldwide, country-wise density of mineral resources. The second was performed using the Fairfax Forecast Households dataset, which provides parcel-level household predictions for 30 consecutive years. The proposed model integrates a serverless framework to reduce timing constraints and improves the performance of geospatial data processing for high-dimensional hyperspectral data.
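As a rough illustration of the serverless building block behind such a pipeline, the following is a minimal AWS Lambda-style handler (Python runtime) that bins incoming points into a coarse grid, i.e. the aggregation behind a simple heatmap layer. The event payload shape and the 1-degree grid are illustrative assumptions, not part of the paper's framework.

```python
import json
import math
from collections import Counter

def handler(event, context):
    """AWS Lambda entry point (Python runtime).

    Assumes the invoking event carries a list of {"lat": ..., "lon": ...}
    points (an illustrative payload shape) and returns per-cell counts on a
    coarse 1-degree grid, the kind of aggregation a heatmap layer consumes.
    """
    points = event.get("points", [])
    cells = Counter(
        (math.floor(p["lat"]), math.floor(p["lon"])) for p in points
    )
    body = {
        "cells": [
            {"lat": lat, "lon": lon, "count": count}
            for (lat, lon), count in cells.items()
        ]
    }
    return {"statusCode": 200, "body": json.dumps(body)}

# Local invocation with a toy payload (no cloud deployment required).
if __name__ == "__main__":
    event = {"points": [{"lat": 38.8, "lon": -77.3}, {"lat": 38.2, "lon": -77.9}]}
    print(handler(event, None))
```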

17 pages, 350 KiB  
Article
The Use of Big Data and Its Effects in a Diffusion Forecasting Model for Korean Reverse Mortgage Subscribers
by Jinah Yang, Daiki Min and Jeenyoung Kim
Sustainability 2020, 12(3), 979; https://doi.org/10.3390/su12030979 - 29 Jan 2020
Cited by 4 | Viewed by 3092
Abstract
In recent years, big data has been widely used to understand consumers’ behavior and opinions. In this paper, we consider the use of big data and its effects in the problem of projecting the number of reverse mortgage subscribers in Korea. We analyzed web-news, blog post, and search traffic volumes associated with Korean reverse mortgages and integrated them into a Generalized Bass Model (GBM) as part of the exogenous variables representing marketing effort. We particularly consider web-news volume as a proxy for marketer-generated content (MGC) and blog post and search traffic volumes as proxies for user-generated content (UGC). Empirical analysis provides some interesting findings: first, the GBM incorporating big data is helpful for forecasting the sales of Korean reverse mortgages, and second, the UGC as an exogenous variable is more useful for predicting sales volume than the MGC. The UGC can explain consumers’ interest relatively well. Additional sensitivity analysis supports that the UGC is important for increasing sales volume. Finally, prediction performance differs between blog post and search traffic volumes.
(This article belongs to the Special Issue Social Media Influence on Consumer Behaviour)
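As a rough illustration of how an exogenous marketing-effort series enters a Generalized Bass Model, the following is a minimal discrete-time sketch. The market potential, the p and q coefficients, and the effort multipliers (stand-ins for the web-news, blog, and search-volume covariates) are illustrative assumptions, not the paper's estimates.

```python
def generalized_bass_forecast(m, p, q, effort):
    """Discrete-time Generalized Bass Model.

    m      : market potential (total eventual subscribers)
    p, q   : innovation and imitation coefficients
    effort : per-period marketing-effort multipliers x(t) built from the
             exogenous covariates; the values below are assumed for illustration.
    Each period's new adopters are x(t) * (p + q * N/m) * (m - N), where N is
    cumulative adoption so far.
    """
    cumulative, new_adopters = 0.0, []
    for x_t in effort:
        n_t = x_t * (p + q * cumulative / m) * (m - cumulative)
        cumulative += n_t
        new_adopters.append(n_t)
    return new_adopters

forecast = generalized_bass_forecast(
    m=100_000, p=0.01, q=0.35,
    effort=[1.0, 1.1, 1.2, 1.0, 0.9, 1.3, 1.4, 1.2],
)
print([round(n) for n in forecast])
```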
