Article

5G/B5G Service Classification Using Supervised Learning

by
Jorge E. Preciado-Velasco
1,*,
Joan D. Gonzalez-Franco
2,
Caridad E. Anias-Calderon
2,
Juan I. Nieto-Hipolito
3 and
Raul Rivera-Rodriguez
4
1
Department of Electronics and Telecommunications, CICESE Research Center, Carretera Ensenada-Tijuana 3918, Playitas, Ensenada 22860, BC, Mexico
2
Faculty of Electronics and Telecommunication, La Havana Technology University CUJAE, Calle 114, Marianao, La Havana 19390, Cuba
3
Faculty of Engineering, Architecture and Design, FIAD Autonomous University of Baja California, Carretera Ensenada-Tijuana 3917, Playitas, Ensenada 22860, BC, Mexico
4
Division of Telematics, CICESE Research Center, Carretera Ensenada-Tijuana 3918, Playitas, Ensenada 22860, BC, Mexico
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(11), 4942; https://doi.org/10.3390/app11114942
Submission received: 23 April 2021 / Revised: 17 May 2021 / Accepted: 25 May 2021 / Published: 27 May 2021
(This article belongs to the Special Issue 5G Network Planning and Design)

Abstract

The classification of services in 5G/B5G (Beyond 5G) networks has become important for telecommunications service providers, who face the challenge of simultaneously offering a better Quality of Service (QoS) in their networks and a better Quality of Experience (QoE) to users. Service classification allows 5G service providers to accurately select the network slices for each service, thereby improving the QoS of the network and the QoE perceived by users, and ensuring compliance with the Service Level Agreement (SLA). Some projects have developed systems for classifying these services based on the Key Performance Indicators (KPIs) that characterize the different services. However, Key Quality Indicators (KQIs) are also significant in 5G networks, although these are generally not considered. We propose a service classifier that uses a Machine Learning (ML) approach based on Supervised Learning (SL) to improve classification and to support a better distribution of resources and traffic over 5G/B5G networks. We carry out simulations of our proposed scheme using different SL algorithms, first with KPIs alone and then incorporating KQIs, and show that the latter achieves better prediction, with an accuracy of 97% and a Matthews correlation coefficient of 96.6% with a Random Forest classifier.

1. Introduction

The complexity, flexibility, and dynamism of 5G/B5G networks mean that they need to be managed automatically [1]. Variations in behaviour patterns limit the identification of network activity. Furthermore, the traditional management model is insufficient, and due to the correlations between multiple variables and the extensive datasets handled in a single analysis, computational assistance is required [1]. Artificial Intelligence (AI) can be used to support the cognitive management of 5G/B5G [2], and ML is one of the most promising tools in this area.
5G is a service-oriented network, and the classification of these services is therefore essential for an efficient scheme of network resource allocation. In [3], a variety of new services for 5G are described, some of which have very similar performance and quality requirements. The services implemented in 5G networks have grown rapidly in number and heterogeneity, each with its own particular characteristics and specifications. If services are classified without the help of ML, it is difficult to monitor and control network resources and to predict and avoid SLA violations, which can affect both the QoS performance and the QoE perceived by users, that is, the management of the network as a whole.
In the field of telecommunications, and particularly in the deployment of 5G/B5G networks, the correct classification of services is therefore important, since it offers a way of providing a better network QoS and of optimizing the QoE [3]. The User Equipment (UE) requests services that require precise categorization, in order to allow network operators to select specific network slices for each service, to improve the QoS of the network and the QoE perceived by the users, and to define the SLAs for each network slice [3,4].
The critical requirements for 5G can be considered from two points of view: the user's perspective and the network performance perspective [5]. Current 5G service classification systems use ML and consider KPIs as the main factor when performing the classification. However, it is necessary to consider KQIs in order to achieve the best possible classification, since these allow us to account for both the performance of the network and the user's requests for a service, and our proposed scheme therefore takes them into account.
The main objective of this work was to test the hypothesis that considering the quality parameters (KQIs) in addition to the traditional performance parameters (KPIs) leads to better identification (through classification) of the services. Consequently, providers can allocate resources optimally with proper QoS management.
The inclusion of KQIs, which reflect the performance and quality of End To End (E2E) services, makes it possible to achieve a better customer experience in practice [5,6]. Incorporating the KQIs reflects the customer’s experience in terms of indicators that include their requirements, and can therefore provide them with better network performance and better QoE.
To improve the management of 5G networks in general, and the QoS and QoE in particular, we present a 5G/B5G service classifier system based on Supervised Machine Learning (SML) techniques. The system uses the KPIs and KQIs of the different services to offer a better classification. One contribution of our scheme is the feedback loop applied once a new service is classified: its KPI/KQI parameters are introduced into the database so that the algorithm is retrained dynamically and a new predictive model is regenerated, making it more robust (from the structural point of view) than the previously generated predictive model.
The rest of this paper is organized as follows. Section 2 gives an overview of previous works on 5G services and discusses some characteristics of the KPIs and KQIs involved in service classification. Section 3 describes our 5G/B5G service classifier system and the creation of our database. Section 4 presents the classifier simulations, carried out in the Jupyter Notebook Integrated Development Environment (IDE) on the Anaconda Navigator platform, together with their results. Finally, Section 5 concludes the paper with some final remarks and suggestions for future work. Supplementary Files (the dataset and simulation program) are included for those who want to experiment (see the Data Availability Statement for details).

2. Related Work

The authors of [7] identified the different services that are implemented in 5G networks and related them to three generic use cases: Enhanced Mobile Broadband (eMBB), Ultra-reliable and Low Latency Communication (urLLC), and Massive Machine Type Communication (mMTC). Several services appear in the middle of the triangle (see page 12 (Figure 2) in ref [7]), meaning that they may have characteristics of several use cases and different requirements in terms of KPIs and KQIs; for example, Augmented Reality (AR) appears between eMBB and urLLC. Other services belong to specific use cases; for instance, smart cities are related to mMTC.
The relevance of the specific essential requirements may vary significantly, depending on the use case or scenario in which a service is deployed. ITU in [7] (see page 15, Figure 4) shows the importance of some critical requirements with reference to the KPIs/KQIs for three generic use cases, based on a scale with three levels: low, medium and high. For example, for services associated with the urLLC use case, low latency is the most critical requirement, while the peak data rates and network energy efficiency are not key parameters. The KPI and KQI parameters affect the network performance so strongly that we consider it essential to incorporate them both, in an interrelated way, and this is the idea that underpins this work.
Several research groups have implemented smart solutions to address the need for mobile network services, to standardize certain methods, and to improve network performance, for example through the alliance of the Third Generation Partnership Project (3GPP), Next Generation Mobile Networks (NGMN), the European Telecommunications Standards Institute (ETSI), and many other research initiatives [8,9].
The authors of [9] highlighted the possibility of using SL and unsupervised learning algorithms to classify new services within use cases (eMBB, mMTC, and urLLC). They proposed the use of basic requirements or KPIs such as latency, bandwidth and data rate.
In [10], the author demonstrated the possibility of classifying the services demanded by users using technologies such as Software Defined Networks (SDN), Network Function Virtualization (NFV), and ML. The objective of this work was to predict demand and dynamically allocate network resources [10]. The parameters were classified based on the bandwidth, latency, jitter and other KPIs.
In [4], a system was established and a database was created in which SLAs were defined for each network slice. The authors used unsupervised learning techniques to classify 5G services based on the primary system. Although little information was provided on the elements considered in this classification, these are expected to have been related to the KPI requirements.
The Network Machine Learning Research Group (NMLRG) has worked on novel methods of classifying 5G services using ML [11]. Some of the papers published by the NMLRG [11] present results obtained from the application of their models to the classification of 5G services. They focused on KPIs related to network traffic as the main factors when performing the classification. The authors compared several SL techniques and concluded that Decision Tree and Random Forest perform best on this kind of problem.
KQIs represent a shift from traditional network-based performance parameters (KPIs) towards a subjective, quality-based view as perceived by the end user, known as QoE. After they were defined, however, KQIs were not promoted or applied for some time [12].
The different variants used in the works described above were developed based on data collection and analysis to give a better classification of 5G services. These studies focused on KPI requirements, and none of them took into account the KQI parameters as elements for the classification of 5G services.
When KQIs are included, the ML algorithm becomes even more complicated; due to the multiple interdependencies along the E2E route, the measurement of service quality is not a trivial issue, even when it is limited to objective quality rather than subjective experience. KQIs offer a framework that can reflect service performance and quality in an objective way, from an E2E perspective, and these indicators can be obtained through direct testing and statistical analysis of the network [12].
In the literature [9,10,11], the classification was carried out considering only the KPI parameters. We propose improving the classification (identification) of the services by considering the KQI parameters in addition to the KPIs.
We estimated that the incorporation of KQIs into the classification of 5G/B5G services would improve the QoS and QoE, and would make the determination and compliance of SLAs more precise. The requirements for establishing SLAs in 5G based on the services provided and the infrastructure of the provider are known as Service Level Objectives (SLOs). It is therefore imperative to consider both the KPIs and KQIs, among others [3], since the QoS, which forms the basis for establishing the SLA, is a function of the applications, the network, and subjective factors such as user experience (QoE) [6].

3. Proposed 5G/B5G Service Classifier System

Figure 1 shows a block-level diagram of the proposed system for the classification of services in 5G/B5G networks. The proposed scheme first operates offline until the predictive model has been validated and can classify services effectively with few mistakes. In the next phase, the system is implemented online by the network operators, and the predictive model then classifies new services requested by the UEs. The output of the system corresponds to the requested service classification and is fed back to the ML algorithm, making the predictive model more efficient. Each of these phases, and each block, is explained later in this article.
The proposed system can classify services in next-generation networks. However, it is essential to clarify that this forms only one part of a system that a network operator can use to offer services. Our system needs to interconnect with the rest of the operator's system, and we must therefore consider two options:
  • Reprogramming the proposed system in the language of the operator’s system. This has the disadvantage of requiring reprogramming of the systems used by each operator, including the cloud, which is not very feasible (due to future maintenance or update issues).
  • Incorporating into the proposed system an appropriate Application Programming Interface (API) to enable communication with the operator’s system (accessible from a public or private server). The necessary security must be provided to ensure this is only employed in an authorized way.
The second option is preferable, since 5G systems provide appropriate APIs to allow a trusted third party to create, modify, delete and monitor instances of the network segments used by the third party, and to manage a set of devices or capabilities including QoS functions [13].

3.1. Building the Dataset

The main limitation of this project was obtaining a real dataset, i.e., a dataset with operating parameters from 5G systems. We therefore built a synthetic dataset manually, analyzing the KPI and KQI parameters extracted from ITU standards documents, various European projects, and analysis documents produced by telecommunications companies. The dataset was built by taking the threshold values found in the bibliography and oscillating these values randomly, producing diverse values for each KPI/KQI and each individual service.
The first block of the scheme shown in Figure 1 corresponds to our database, which we used to train the ML algorithm and to validate and verify the predictive model. We created the database manually, using parameter values corresponding to the KPIs and KQIs of the selected services in Comma Separated Values (CSV) format. The documents consulted in this stage were related to standards and various project reports on 5G networks, such as 3GPP [14,15,16,17], the 5G Public-Private Partnership (5G-PPP) [2], NGMN [18,19], Speed [20,21], 5G America [4], the International Telecommunications Union (ITU) [7,22,23], Huawei [24], and others [12,25,26,27]. The selected parameter values were the standard threshold values, and these were manipulated randomly until we obtained values that were sufficiently close to their limiting values. Appendix A shows the tables with the threshold values for the extracted KPI/KQI parameters.
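As an illustration of this procedure, the following minimal sketch generates synthetic rows by varying each parameter randomly within a threshold interval. The service names, parameter names, threshold values, and output file name shown here are placeholders for illustration only; the actual values used to build our dataset are those listed in Appendix A and in the Supplementary Files.

```python
import csv
import random

# Illustrative KPI/KQI threshold intervals (placeholders, not the exact
# figures extracted from the standards documents).
THRESHOLDS = {
    "UHD_Video_Streaming": {"latency_ms": (4, 20), "jitter_ms": (0, 5.84),
                            "bit_rate_mbps": (1, 10), "availability_pct": (99, 99.999)},
    "Smart_Grid": {"latency_ms": (5, 50), "jitter_ms": (0, 1),
                   "bit_rate_mbps": (0.1, 1), "availability_pct": (99.999, 99.9999)},
}

def synthesize_rows(service, bounds, n_rows):
    """Oscillate each parameter randomly within its threshold interval."""
    rows = []
    for _ in range(n_rows):
        row = {name: round(random.uniform(lo, hi), 4) for name, (lo, hi) in bounds.items()}
        row["Service"] = service
        rows.append(row)
    return rows

rows = [r for svc, bounds in THRESHOLDS.items() for r in synthesize_rows(svc, bounds, 20)]

# Write the synthetic rows to a CSV database (file name is illustrative).
with open("5g_services_kpi_kqi.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```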
The database contained 165 rows and 14 columns. The rows were the parameter values of the 5G services to be classified; the first 13 columns contained the KPI and KQI values, while the last column corresponded to the labels of the 5G services. Table 1 shows a fragment of the database, in which some values of the KPIs, KQIs and 5G services can be seen. This project is a work in progress: the database can grow over time, and the proposal is to test the combination of KPI + KQI; as the database grows, we will continue to test our hypothesis. The dataset is available; see the Data Availability Statement for the link.
In this project, we work on a classification problem in which different labels (5G services) are used. We need to assign a label to each element to be classified in order to distinguish them. The review and attribution of labels can be done manually or computationally, using a specifically designed program. When applying ML with a set of labeled data, i.e., both the characteristics of the services and their labels, to solve a classification problem, we are dealing with an SL scheme for the classification of 5G/B5G services. It is therefore necessary to label (variable y) the 5G services found in the database; each label y corresponds to the parameters or characteristics of the arriving service, represented by the variable x (see Figure 1). The labeling of the data is carried out before creating the algorithm's predictive model, so it is possible to know which label (y) corresponds to the parameters (x) of each 5G service in the database (see Table 1).
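For reference, a minimal sketch of how the CSV database can be loaded and separated into characteristics (x) and labels (y) with pandas; the file name is an assumption, since the Supplementary Files use their own naming.

```python
import pandas as pd

# Load the CSV database (165 rows x 14 columns); the file name is assumed.
data = pd.read_csv("5g_services_kpi_kqi.csv")

# The first 13 columns hold the KPI/KQI parameters (x);
# the last column holds the 5G service label (y).
x = data.iloc[:, :-1]
y = data.iloc[:, -1]

print(x.shape)       # expected: (165, 13)
print(y.unique())    # the nine 5G service labels
```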

3.2. ML Algorithm and Predictive Model

The classification of services carried out in this work is based on the following premise: the services are in the form of a series of parameters (x) determined by the KPIs and KQIs that define them, and the corresponding labels (y) must be assigned according to these parameters.
As mentioned above, the descriptive parameters of the services to be classified are in the form of a single database, and it is therefore necessary to partition it into two datasets, one of which is used to train the algorithm and the other to test or verify it. Splitting the database generates four new variables: Xtrain, Xtest, Ytrain, and Ytest. The training variable Xtrain contains the input values of the 5G services selected to train the ML algorithm, and Ytrain contains their respective output labels. The other two variables, Xtest and Ytest, correspond to the input and output variables for the testing and validation phase of the predictive model.
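A minimal sketch of this partition using scikit-learn, assuming the x and y variables from the loading sketch in Section 3.1; the stratification and random seed are our additions for reproducibility and are not stated in the text.

```python
from sklearn.model_selection import train_test_split

# 80/20 partition of the database (with 165 rows: 132 training, 33 test).
Xtrain, Xtest, Ytrain, Ytest = train_test_split(
    x, y, test_size=0.2, stratify=y, random_state=42)
```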
The training phase involves passing training data to the ML algorithm to allow it to learn. The ML algorithm develops a function based on the training data (Xtrain) that provides the correct answer (Ytrain). Using Xtrain and Ytrain, the algorithm learns, and a function f(Xtrain) = Ytrain is generated that: identifies patterns in the training data, allows the attributes of the input data to be assigned to the target data (representing the answer to be predicted) and generates a model that captures these patterns.
An ML algorithm is then applied whose objective is to create a function y = f(x) that is capable of predicting the value corresponding to any input object x (in the proposed system for the classification of services, these represent the KPIs and KQIs of 5G services). The ML algorithm must be trained with a set of parameters or characteristics of the different services to be classified. This training allows new known input values (x) to be assigned previously unknown labels (y), since after the first training the predictive model can provide results for new data. This means that once the ML algorithm is trained, the predictive model it generates is ready to predict or classify the requested services (see Figure 1).
The result is a predictive model that can classify 5G services, which is generated by training the ML algorithm with the Xtrain data. However, the Xtest data are not used in the training of the ML algorithm, which may mean that the generated predictive model cannot classify services correctly; there is a possibility that this model is prone to underfitting or overfitting, and it is therefore necessary to validate the ML algorithm to ensure that the predictive model is effective. If the predictive model shows overfitting, it is not very useful, since a model that repeats the labels of the samples it has just seen would achieve a perfect score but would not be able to predict unknown labels. To avoid this problem, the validation block of the ML algorithm applies a method based on the cross-validation technique.
If the result of the validation stage is similar to those of the evaluation and training stages, the trained model is correct, and there is no indication of overfitting. This validation indirectly affects the final evaluation of the predictive model. When the model is tested with the new Xtest data, the values of the metrics must be similar to those from the validation stage, as this indicates that the chosen algorithm works effectively.
When the algorithm has been trained and validated, the next step is to test the predictive model to determine whether it can predict new and future data. The test block of the model addresses this by carrying out a prediction test with Xtest; in Figure 1, Y = f(Xtest) represents this. The output Y from this block is a vector of the different 5G services generated by the predictive model, which corresponds to the test results of the prediction model with the variable Xtest.
A prediction Y can be compared with previously known data as the target response (Ytest) in order to determine the quality of the predictions from the model, and this test can therefore be used as a basis for predictive precision for future data. Hence, a comparison between the Y and Ytest vectors can describe the verification or validation ability of the model.
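To make the notation concrete, a minimal sketch of this test step, reusing the partition from the sketch above; the Decision Tree is used here purely as a stand-in classifier, not as the model finally selected.

```python
from sklearn.tree import DecisionTreeClassifier

# Train any classifier on the training partition (Decision Tree as an example).
model = DecisionTreeClassifier().fit(Xtrain, Ytrain)

# Test the predictive model: Y = f(Xtest), then compare against Ytest.
Y = model.predict(Xtest)
print((Y == Ytest).mean())   # fraction of correctly classified test services
```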

3.3. Validation of the Predictive Model

To validate the predictive model, it is necessary to determine whether the values obtained for Y are the expected ones. The use of metrics to measure performance can allow us to confirm the effectiveness of the model. The relationship between Ytest and Y is used to generate the performance measures, and to construct the confusion matrix shown in Table 2.
A confusion matrix is so named because it visualizes the performance of the predictive model and shows where the model confuses one label with another. The columns of the matrix represent the number of predictions for each label (Y) made by the predictive model, while each row represents the actual label of the test values (Ytest), as follows [28]:
  • True Positives: The number of actual values of a particular class for which the model's prediction of that class is correct.
  • False Positives: Values classified by the model as belonging to a class to which they do not belong; the model considers them positive, but the prediction is wrong.
  • False Negatives: Values that belong to a particular class but are classified as a different one (incorrect prediction).
  • True Negatives: Observations that do not belong to a given class and are correctly classified as not belonging to it.
A series of metrics can be derived from the confusion matrix of Table 2 and used to evaluate the performance of the predictive model, as follows [29] (a scikit-learn sketch computing these metrics appears after the list):
  • Accuracy: This is the ratio of the number of correct predictions (TP and TN results) made by the model to the total number of predictions. In other words, it reflects how often the predictive model's classification is correct. It is the most direct measure of the quality of the classification, although it is less appropriate when the labels of the output variable are not balanced (unbalanced data), i.e., the labels are not present in similar quantities.
    $\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$  (1)
  • Precision: This measures how reliable the model's positive predictions are. It is the ratio of the number of correct positive predictions to the total number of positive predictions made by the model.
    $\mathrm{Precision} = \frac{TP}{TP + FP}$  (2)
  • Recall: This is the ratio of the number of correct positive predictions to the total number of actual positive instances. In other words, it represents the sensitivity of the predictive model in terms of detecting positive instances.
    $\mathrm{Recall} = \frac{TP}{TP + FN}$  (3)
  • F1 score: This is the harmonic mean of the precision and recall. A higher score represents a better model. It provides a good indicator of the overall quality of the predictive model, while the precision and recall provide information on specific aspects of its behaviour.
    $F1\ \mathrm{score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$  (4)
  • Matthews correlation coefficient (MCC): As an alternative measure unaffected by the unbalanced datasets issue, MCC is the only binary classification rate that generates a high score only if the binary predictor was able to correctly predict the majority of positive data instances and the majority of negative data instances. It ranges in the interval [−1, +1], with extreme values −1 and +1 reached in case of perfect misclassification and perfect classification, respectively. At the same time, MCC = 0 is the expected value for the coin-tossing classifier [30].
    $\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$  (5)
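The sketch below computes these metrics (and the confusion matrix of Table 2) with scikit-learn, given the Ytest labels and the predictions Y; macro averaging is assumed for the multi-class precision, recall, and F1 score, matching the "macro" columns reported later in Tables 4 and 6.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, precision_score, recall_score)

def evaluate(Ytest, Y):
    """Metrics of Equations (1)-(5), computed from test labels and predictions.
    Macro averaging is assumed for the multi-class precision, recall and F1."""
    return {
        "confusion_matrix": confusion_matrix(Ytest, Y),
        "accuracy": accuracy_score(Ytest, Y),
        "precision_macro": precision_score(Ytest, Y, average="macro"),
        "recall_macro": recall_score(Ytest, Y, average="macro"),
        "f1_macro": f1_score(Ytest, Y, average="macro"),
        "mcc": matthews_corrcoef(Ytest, Y),
    }
```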
If the values of the metrics for the predictive model are satisfactory, the offline work phase is terminated, and the model is ready to be used online by a network operator to classify new services requested by the UEs. If the results achieved in terms of the metrics are not as expected, the entire cycle must be repeated, starting from the training of the ML algorithm, until a reasonable rate of success is observed, so that the model will generate fewer mistakes in the future. In the latter case, one or more of the following actions can be taken:
  • Increasing the volume of data used to train the ML algorithm and test the predictive model.
  • Choosing another ML algorithm.
  • Making the ML algorithm used in the simulation simpler or more complex (from a structural point of view) to achieve better precision.
We now have a predictive model that is capable of classifying 5G services based on their KPIs and KQIs. When this is implemented online, UEs request new services (represented in the lower part of Figure 1), and the model takes as its input a vector formed of the KPIs and KQIs of the requested services. When classifying a service, our system also outputs its label together with its characteristics (KPIs/KQIs) and feeds them back into the system database. The objective of this approach is to take advantage of each requested service by incorporating it into the database and repeating the entire procedure of retraining the ML algorithm, so that a new, more robust predictive model is generated that provides a better classification.
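The following sketch illustrates this feedback loop under simplifying assumptions: the database is held as a pandas DataFrame whose last column is the service label, and the model is fully refitted after every request. The function name and its arguments are illustrative; it is not the operator-side implementation.

```python
import pandas as pd

def classify_and_feedback(model, data, new_service_params):
    """Classify a newly requested service (a dict of KPI/KQI values),
    append the labelled request to the database, and retrain the model."""
    feature_cols = data.columns[:-1]
    label_col = data.columns[-1]

    # Build a one-row frame in the same column order used for training.
    features = pd.DataFrame([new_service_params])[feature_cols]
    label = model.predict(features)[0]

    # Feed the labelled request back into the database ...
    new_row = {**new_service_params, label_col: label}
    data = pd.concat([data, pd.DataFrame([new_row])], ignore_index=True)

    # ... and retrain so the next predictive model sees the new sample.
    model.fit(data[feature_cols], data[label_col])
    return label, data, model
```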

4. Simulation Results

To determine whether the inclusion of KQIs improves the predictive service classification model, we performed two simulations. The first considers only the KPIs, while the second also incorporates the KQIs.
We first explain and define the scenario and conditions used in the simulations. The necessary elements are the SML algorithms, a programming language, a development platform, the 5G services to be classified, and the parameters of their KPIs and KQIs.
For the validation scenario and to simulate the proposed system, we used the following SL algorithms: Decision Tree, Random Forest (with five trees), Support Vector Machine (SVM) with a linear kernel, K-Nearest Neighbors (KNN, K = 3) and the Multi-Layer Perceptron Classifier (MLPC), using the Python language and the Anaconda Navigator platform with Jupyter Notebook as the IDE. We considered nine essential 5G services to be classified: Ultra High Definition (UHD) video streaming, immersive experience, connected vehicles, e-health, industry automation, video surveillance, smart grid, Intelligent Transport Systems (ITS) and Voice over 5G (Vo5G). The selected KPI parameters were E2E latency, jitter, bit rate, packet loss rate, peak data rate DownLink (DL), peak data rate UpLink (UL), mobility, and service reliability. The KQI parameters were service availability, user experience data rate DL/UL, survival time and interruption time.
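For reference, the five classifiers can be instantiated in scikit-learn as shown below, using the Xtrain/Ytrain partition from the earlier sketches. Only the hyperparameters stated in the text (five trees, linear kernel, K = 3) are fixed; anything else, such as the MLP iteration limit, is our assumption, set only so the example converges.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

models = {
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=5),
    "SVM": SVC(kernel="linear"),
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "MLPC": MLPClassifier(max_iter=2000),
}

# Train every algorithm on the training partition.
for name, model in models.items():
    model.fit(Xtrain, Ytrain)
```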
In the first simulation, we worked with the KPIs alone. The dataset had dimensions of 165 × 9, where the first eight columns represented the KPIs and the last contained the labels of the services. We divided the database into two parts: 80% of the data (132 rows, Xtrain) were used to train the algorithms, which, once trained, generated the predictive models, and the remaining 20% (Xtest) were used to test the models.
The models may be prone to underfitting or overfitting, meaning that a model may work perfectly for the training data (Xtrain), which is already known, while its accuracy may be lower for new services (Xtest). According to [29,31], there are two possible approaches to avoiding overfitting: increasing the volume of the database, or reserving additional data by dividing the dataset into three parts (training, validation and testing). Increasing the amount of data was difficult because there were insufficient known data on the 5G services; hence, additional data were reserved, and the K-Folds cross-validation technique was applied with K = 10 [32], giving the results shown in Table 3. It should be noted that all of the data formed part of the original dataset, and did not constitute three new datasets.
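A sketch of this validation step with scikit-learn's cross_val_score, assuming the models dictionary and the training partition defined in the sketches above; the per-fold mean accuracy printed here corresponds to the kind of figures reported in Table 3.

```python
from sklearn.model_selection import cross_val_score

# 10-fold cross-validation on the training portion, as in Table 3.
for name, model in models.items():
    scores = cross_val_score(model, Xtrain, Ytrain, cv=10)
    print(f"{name}: {100 * scores.mean():.2f}%")
```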
Figure 2 shows the confusion matrices obtained during the testing process for each model in the first simulation, in which we considered only the KPIs. The main diagonal shows the number of correct predictions made by the predictive model. Values outside the main diagonal represent predictions in which the model was wrong.
We applied Equations (1)–(5) to the values obtained from the confusion matrices to evaluate the performance of the predictive models; the results are shown in Table 4.
In the second simulation, we incorporated the user quality parameters (KQIs) and repeated the procedure (with a few differences from the previous simulation). The KQI parameters considered here were service availability, user experience data rate DL/UL, survival time, and interruption time. A database containing 165 rows was kept, with five additional columns corresponding to the KQI parameters.
We used the same functions to create and train the ML algorithms, so the same SL algorithms were evaluated. Again, we used the K-Folds cross-validation technique with K = 10 [32] to validate the ML algorithms, and obtained the results shown in Table 5. Figure 3 shows the confusion matrix obtained for each model in this simulation.
We obtained the performance metrics for the predictive models from the newly generated confusion matrices; the results are shown in Table 6.
From Table 6, we can see that the KNN model is not suitable for our problem because its accuracy is inadequate. Furthermore, the other models improved their metrics in this second simulation, and the best metrics are obtained by the Decision Tree, Random Forest, and SVM.
To verify whether the predictive model was satisfactory, we created a function to compare the accuracy obtained in the cross-validation stage against the accuracy of the testing stage, considering the model acceptable if the difference did not exceed 5%. The SVM showed a difference of 8.41%, so we conclude that this model may be overfitting. The differences for the Decision Tree and Random Forest were 0.69% and 1.62%, respectively, which means that the predictive models generated by these two algorithms are not overfitting. If a predictive model is overfitting, we apply the third option mentioned above, for example, limiting the maximum depth of the trees in the Random Forest.
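A sketch of such a comparison function, under the assumption that the cross-validation accuracy is passed in as a percentage; the function name is illustrative, and the 5% tolerance follows the criterion stated above.

```python
from sklearn.metrics import accuracy_score

def overfit_check(model, Xtest, Ytest, cv_accuracy_pct, tolerance=5.0):
    """Compare cross-validation accuracy against test accuracy.
    Flags possible overfitting when they differ by more than `tolerance`
    percentage points (5% is the criterion used in the text)."""
    test_accuracy_pct = 100 * accuracy_score(Ytest, model.predict(Xtest))
    gap = abs(cv_accuracy_pct - test_accuracy_pct)
    return gap, gap > tolerance
```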
We determined that either the Decision Tree or the Random Forest can be used to solve the service classification problem presented here, in agreement with the results obtained by the authors of [11], although we choose the Random Forest as the predictive model in our proposal. Figure 4 and Figure 5 show the schematics of one of the trees in the Random Forest for each simulation. Figure 4 shows the KPIs that the tree uses in the first simulation to classify a 5G service; in the second simulation (Figure 5), the same tree incorporates KQIs to improve its classification and obtain better metrics.
As expected, when the KQI parameters were incorporated to classify 5G services, the predictive model learned to classify the services more effectively. Figure 6 shows the average results in terms of the evaluation metrics obtained by the Random Forest model for both simulations. We can see that there were increments of 3% in accuracy, 2.5% in precision, 1.6% in recall, 2.4% in F1 score and 3.4% in MCC.
These modest improvements arise from the small amount of data available for the simulation. With more entries in the database, the ML algorithm would have more data to learn from and more data to classify, and the metrics of the predictive model would also increase. All of these metrics represent aspects of the performance of the predictive model of the proposed 5G service classifier system; since they all increased, the performance of the proposed system also increased, meaning that a more effective service classification was produced when both KPIs and KQIs were considered.

5. Conclusions

When selecting network slices in new-generation networks (5G/B5G), the use of KPI and KQI parameters is crucial to identify and characterize each service requested by the UE. This procedure allows service providers to allocate ad hoc resources for the service and, implicitly, to deliver a better and more appropriate QoS. A good classification scheme can improve network and service management, SLA compliance, and, in consequence, the QoE perceived by users.
This paper has proposed a system for classifying services in new-generation networks based on ML. The predictive model is a fundamental block of the proposed service classifier system, and is in charge of classifying 5G services. The main limitation of the project was the lack of a real 5G operating dataset; to achieve the best possible classification results in our system, it was therefore necessary to create a dataset containing KPI/KQI parameters extracted from standards documents and projects.
The SML algorithm generated a predictive model that was trained and validated using the KPI and KQI parameters to classify each service. We evaluated two situations: classifying services using only the KPI parameters, and applying both sets of parameters (KPI + KQI). We implemented simulations employing five different SL algorithms (Decision Tree, Random Forest, SVM, KNN, and MLPC), and we validated the results with the K-Folds technique.
Analyzing the results produced by the confusion matrices and applying the performance equations makes it possible to compare the proposed ML algorithms. Furthermore, the two simulations showed similarities: in both, classification by Decision Trees and Random Forests gave the best results. A direct comparison between the two simulations is not straightforward, however, since the characteristics or attributes considered differ.
Incorporating KQIs allowed for improvements of 3% in accuracy and 3.41% in MCC for the classification of services using a Random Forest algorithm, as shown in Figure 6. The aim was to prove that including KQIs alongside KPIs in the service classification improves the identification of the services; this hypothesis was supported by applying ML techniques to a new-generation telecommunications network, and the results were satisfactory.
Future simulations will use a dataset of real operating values for the KPI and KQI parameters from 5G/B5G networks to better characterize the network’s performance.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/app11114942/s1.

Author Contributions

Conceptualization, J.E.P.-V.; Methodology, J.E.P.-V.; Software, J.D.G.-F.; Supervision, C.E.A.-C., J.I.N.-H. and R.R.-R.; Writing—original draft, J.E.P.-V. and J.D.G.-F.; Writing—review and editing, J.E.P.-V., J.D.G.-F., C.E.A.-C., J.I.N.-H., R.R.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request. The dataset containing the 5G services and their KPI/KQI parameters is available on Zenodo.org at https://zenodo.org/record/4779074#.YK8rd5MzbOQ, doi:10.5281/zenodo.4779074. The programming code of the simulations is also available on Zenodo.org at https://zenodo.org/record/4817139#.YK8q2JMzbOQ, doi:10.5281/zenodo.4817139, accessed 26 May 2021.

Acknowledgments

This project is undertaken under a multi-institutional agreement. The authors would like to thank the Technological University of Havana (CUJAE), the Autonomous University of Baja California (UABC), and the CICESE Research Center for their support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. 5G service KPI parameters.
  • UHD video streaming: E2E latency Min 4, Max 20 ms [7,14]; jitter 5.84 ms [7]; bit rate 10 Mbps [16]; packet loss rate Max 1% [7]; peak data rate DL 20 Gbps [7]; peak data rate UL 10 Gbps [7]; mobility Min 0, Max 500 km/h [19]; service reliability Min 95% [19].
  • Immersive experience: E2E latency Min 7, Max 15 ms [14]; jitter 20 ms [14]; bit rate 50 Mbps [13]; packet loss rate Max 5% [20,21]; peak data rate DL 20 Gbps [7]; peak data rate UL 10 Gbps [7]; mobility Min 0, Max 30 km/h [25]; service reliability Min 95% [7].
  • Smart grid: E2E latency Min 5, Max 50 ms [27]; jitter 1 ms [27]; bit rate 1 Mbps [16]; packet loss rate Max 0.0001% [20]; peak data rate DL 20 Gbps [7]; peak data rate UL 10 Gbps [7]; mobility Min 0, Max 0 km/h [19]; service reliability Min 99.9% [27].
  • e-health: E2E latency Min 1, Max 10 ms [4]; jitter 10 ms [7]; bit rate 16 Mbps [16]; packet loss rate Max 0.00000001% [16]; peak data rate DL 0.3 Gbps [20,21]; peak data rate UL 0.3 Gbps [20,21]; mobility Min 0, Max 120 km/h [4]; service reliability Min 99.9999% [20,25].
  • ITS: E2E latency Min 10, Max 100 ms [26,27]; jitter 20 ms [27]; bit rate 0.5 Mbps [16]; packet loss rate Max 0.1% [26,27]; peak data rate DL 20 Gbps [7]; peak data rate UL 10 Gbps [7]; mobility Min 50, Max 500 km/h [26,27]; service reliability Min 99.999% [26,27].
  • Vo5G: E2E latency Min 20 [23], Max 150 ms [25]; jitter 30 ms [25]; bit rate 10 Mbps [16,23]; packet loss rate Max 1% [25]; peak data rate DL 20 Gbps [7]; peak data rate UL 10 Gbps [7]; mobility Min 0, Max 500 km/h [14]; service reliability Min 99.9% [24].
  • Connected vehicles: E2E latency Min 3, Max 100 ms [4,18]; jitter 0.44 ms [33,34]; bit rate 10 Mbps [13]; packet loss rate Max 0.001% [20,25]; peak data rate DL 1 Gbps [18]; peak data rate UL 0.025 Gbps [18]; mobility Min 50 [18,33], Max 250 km/h [4,33]; service reliability Min 99.999% [18,25].
  • Industry automation: E2E latency Min 1 [18], Max 50 ms [4,25]; jitter 0.1 ms [25]; bit rate 1 Mbps [17]; packet loss rate Max 0.0000001% [27]; peak data rate DL 20 Gbps [7]; peak data rate UL 10 Gbps [7]; mobility Min 0 [4], Max 30 km/h [27]; service reliability Min 99.999% [14,27].
  • Video surveillance: E2E latency Min 10 [35], Max 50 ms [15,18]; jitter 5 ms [35]; bit rate 10 Mbps [13]; packet loss rate Max 0.001% [4]; peak data rate DL 0.05 Gbps [15]; peak data rate UL 0.12 Gbps [15]; mobility Min 0 [4], Max 320 km/h [15]; service reliability Min 99% [15].
Table A2. 5G service KQI parameters.
  • UHD video streaming: availability Min 99, Max 99.999% [20,21]; survival time Min 8, Max 16 ms [16]; experience data rate DL 1000 Mbps [14]; experience data rate UL 500 Mbps [14]; interruption time Min 1000, Max 3000 ms [7,16].
  • Immersive experience: availability Min 99.9% [17]; survival time Min 1 [17], Max 10 ms [2]; experience data rate DL 1000 Mbps [14]; experience data rate UL 50 Mbps [25]; interruption time 0 ms [25].
  • Smart grid: availability Min 99.999 [17,20,21], Max 99.9999% [14,17]; survival time Min 10, Max 25 ms [4,14]; experience data rate DL 1 Mbps [18]; experience data rate UL 5 Mbps [18]; interruption time almost 0 ms [7].
  • e-health: availability Min 99 [14,17], Max 99.99999% [17,20,21]; survival time Min 1 [14,17], Max 50 ms [14]; experience data rate DL 0.1 Mbps [4,19]; experience data rate UL 10 Mbps [4,18,19]; interruption time 0 ms [7,25].
  • ITS: availability 99.9999% [14]; survival time 100 ms [14,16]; experience data rate DL 10 Mbps [4]; experience data rate UL 10 Mbps [4]; interruption time 1000 ms [25].
  • Vo5G: availability Min 95 [14], Max 99% [23]; survival time 100 ms [14]; experience data rate DL 50 Mbps [25]; experience data rate UL 25 Mbps [25]; interruption time 0 ms [25].
  • Connected vehicles: availability Min 95, Max 99% [14]; survival time Min 1, Max 50 ms [17]; experience data rate DL 50 Mbps [4,14,18]; experience data rate UL 25 Mbps [4,14,18]; interruption time 0 ms [7].
  • Industry automation: availability Min 99.99 [14,25], Max 99.9999% [14]; survival time Min 0, Max 100 ms [14]; experience data rate DL 100 Mbps [14,25]; experience data rate UL 1 Mbps [14,25]; interruption time Min 0, Max 100 ms [25].
  • Video surveillance: availability Min 99 [14,15], Max 99.9% [14]; survival time Min 10, Max 100 ms [17]; experience data rate DL 10 Mbps [18,25]; experience data rate UL 100 Mbps [18,25]; interruption time almost 0 ms [17].

References

  1. Barona López, L.; Maestre Vidal, J.; García Villalba, L. An Approach to Data Analysis in 5G Networks. Entropy 2017, 19, 74. [Google Scholar] [CrossRef] [Green Version]
  2. Mullins, M.; Taynann, R. Cognitive Network Management for 5G. 5GPPP Work. Gr. Netw. Manag. QoS 2017, 1, 1–40. Available online: https://5g-ppp.eu/wp-content/uploads/2016/11/NetworkManagement_WhitePaper_1.0.pdf (accessed on 26 May 2021).
  3. Yousaf, Z. Deliverable D5.1 Definition of Connectivity and QoE / QoS Management Mechanisms—Intermediate Report. 5gnorma Proj. Deliv. (v1.0) 2016, 15. [Google Scholar]
  4. 5GAmericas, “Network Slicing for 5G Networks & Services,”. 2016, pp. 24–25. Available online: http://www.5gamericas.org/files/3214/7975/0104/5G_Americas_Network_Slicing_11.21_Final.pdf (accessed on 26 May 2021).
  5. 3Gpp TR 23.862 (v.14.0.0). 3GPP Organizational Partners’ Publications Valbonne, France. 2016. Available online: https://itectec.com/archive/3gpp-specification-tr-32-862/ (accessed on 26 May 2021).
  6. Kapassa, E.; Touloupou, M.; Kyriazis, D. SLAs in 5G: A Complete Framework Facilitating VNF- and NS-Tailored SLAs Management. In Proceedings of the 32nd IEEE International Conference on Advanced Information Networking and Applications Workshops, Krakow, Poland, 16–18 May 2018; pp. 469–474. [Google Scholar] [CrossRef]
  7. ITU-R M.2083-0. IMT Vision—Framework and Overall Objectives of the Future Development of IMT for 2020 and Beyond. 2015. Available online: https://www.itu.int/dms_pubrec/itu-r/rec/m/R-REC-M.2083-0-201509-I!!PDF-E.pdf (accessed on 26 May 2021).
  8. Klaine, P.V.; Imran, M.A.; Onireti, O.; Souza, R.D. A Survey of Machine Learning Techniques Applied to Self-Organizing Cellular Networks. IEEE Commun. Surv. Tutor. 2017, 19, 2392–2431. [Google Scholar] [CrossRef] [Green Version]
  9. Kafle, V.P.; Fukushima, Y.; Martinez-Julia, P.; Miyazawa, T. Consideration on Automation of 5G Network Slicing with Machine Learning. In Proceedings of the 10th ITU Academic Conference Kaleidoscope: Machine Learning for a 5G Future, Santa Fe, Argentina, 26–28 November 2018. [Google Scholar] [CrossRef]
  10. Morocho-Cayamcela, M.E.; Lee, H.; Lim, W. Machine Learning for 5G/B5G Mobile and Wireless Communications: Potential, Limitations, and Future Directions. IEEE Access 2019, 7, 137184–137206. [Google Scholar] [CrossRef]
  11. Demestichas, P.; Tsagkaris, A.G.K.; Vassaki, K.S. Service Classification in 5G Networks. November. Seoul, Korea. 2016, p. 13. Available online: https://datatracker.ietf.org/meeting/97/materials/slides-97-nmlrg-service-classification-in-5g-networks-00 (accessed on 26 May 2021).
  12. Chen, W.; Zhao, Q.; Duan, H. Research on the Key Concepts and Problems of Service Quality. In 2nd International Conference on Mechatronics Engineering and Information Technology; Atlantis Press: Paris, France, 2017; pp. 651–654. [Google Scholar] [CrossRef] [Green Version]
  13. Schmelz, L.C.; Nok-de, C.M. 5G Mobile Network Architecture for Diverse Services, Use Cases, and Applications in 5G and Beyond (v1.0). 2017, p. 14. Available online: https://5g-monarch.eu/wp-content/uploads/2017/10/5G-MoNArch_761445_D6.1_Documentation_of_Requirements_and_KPIs_and_Definition_of_Suitable_Evaluation_Criteria_v1.0.pdf (accessed on 26 May 2021).
  14. 3GPP ETSI. TS 22.261 5G. Service Requirements for Next Generation New Services and Markets (Release 15) (v.15.5.0); 3GPP Organizational Partners’ Publications: Valbonne, France, 2018; pp. 29–33. Available online: http://www.etsi.org/standards-search (accessed on 26 May 2021).
  15. 3GPP ETSI.3GPP TS 22.125. Unmanned Aerial System (UAS) Support in 3GPP Release 17 (v17.1.0); 3GPP Organizational Partners’ Publications: France, 2019; pp. 12–14. Available online: https://www.3gpp.org/ftp/Specs/archive/22_series/22.125/ (accessed on 26 May 2021).
  16. 3GPP ETSI. 3GPP TS 22.263 Service Requirements for Video, Imaging and Audio for Professional Applications (VIAPA) Support in 3GPP Release 17 (v17.0.0); 3GPP Organizational Partners’ Publications: Valbonne, France, 2019; pp. 12–17. Available online: https://www.3gpp.org/ftp/Specs/archive/22_series/22.263/ (accessed on 26 May 2021).
  17. 3GPP ETSI. 3GPP TS 22.104 Service Requirements for Cyber-Physical Control Applications in Vertical Domains Support in 3GPP Release 17; (v17.2.0); 3GPP Organizational Partners’ Publications: Valbonne, France, 2019; pp. 15–22. Available online: https://www.3gpp.org/ftp/Specs/archive/22_series/22.104 (accessed on 26 March 2021).
  18. The Next Generation Mobile Networks Alliance. NGMN Perspectives on Vertical Industries and Implications for 5G. Berkshire, UK. 2016. Available online: https://www.ngmn.org/fileadmin/ngmn/content/images/news/ngmn_news/NGMN_5G_White_Paper_V1_0.pdf (accessed on 26 May 2021).
  19. Next Generation Mobile Networks Alliance 5G Initiative. NGMN 5G White Paper 2015. Available online: https://www.ngmn.org/wp-content/uploads/NGMN_5G_White_Paper_V1_0.pdf (accessed on 26 May 2021). [CrossRef]
  20. Mumtaz, S.; Huq, K.S.; Rodriguez, J.; Marques, P. D3.2: SPEED-5G Enhanced Functional and System Architecture, Scenarios and Performance Evaluation Metrics (v1.2); European Union: Mestreech, The Nederlands, 2016; pp. 45–46. Available online: https://speed-5g.eu/wp-content/uploads/2017/01/speed5g-d3.2-v1.2_enhanced_functional_and_system_architecture.pdf?x79064 (accessed on 26 May 2021).
  21. Keith Briggs, U.; Fitch, M.; Miatton, F.H.; Georgakopoulos, A.; Belikaidis, P.I.; Demestichas, O.; Panagiotis, C.; Moessner, K. D4.1: Metric Definition and Preliminary Strategies and Algorithms for RM (v1.3); European Union: Mestreech, The Nederlands, 2016; pp. 13–16, 18–22, 35–31; Available online: https://speed-5g.eu/wp-content/uploads/2017/01/speed5g-d4.1-v1.3_metric_definition_and_preliminary_strategies_and_algorithms_for_rm.pdf?x79064 (accessed on 26 May 2021).
  22. ITU-T G.1028. End-to-End Quality of Service for Voice over 4G Mobile Networks (v2.0). 2019. Available online: https://www.itu.int/dms_pubrec/itu-r/rec/m/T-REC-G.1028-201906-I!!PDF-E.pdf (accessed on 26 May 2021).
  23. ITU-T G.1028.2. Assessment of the LTE Circuit Switched Fall Back—Impact on Voice Quality of Service (v1.0). 2019. Available online: https://www.itu.int/dms_pubrec/itu-r/rec/m/T-REC-G.1028-2-201906-I!!PDF-E.pdf (accessed on 26 May 2021).
  24. HUAWEI Technologies Co. Vo5G Technical White Paper. 2018, p. 28. Available online: http://www.huawei.com (accessed on 26 May 2021).
  25. Lorca, J.; One 5G Project. Deliverable D2.1 Scenarios, KPIs, Use Cases and Baseline System Evaluation. 2017, pp. 12–14, 17–19, 23–27, 30–43 . Available online: https://one5g.eu/documents/ (accessed on 26 May 2021).
  26. Cominardi, L.; Contreras, M.L.; Bcrnardos, J.C.; Berberana, I. Understanding QoS Applicability in 5G Transport Networks. In Proceedings of the IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, Valencia, Spain, 6–8 June 2018. [Google Scholar] [CrossRef] [Green Version]
  27. Schulz, P. Latency Critical IoT Applications in 5G: Perspective on the Design of Radio Interface and Network Architecture. In IEEE Communications Magazine; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar] [CrossRef] [Green Version]
  28. Zamorano Ruiz, J. Comparativa y Análisis De Algoritmos de Aprendizaje Automático para la Predicción del Tipo Predominante de Cubierta Arbórea; Universidad Complutense de Madrid: Madrid, Spain, 2018; Available online: https://eprints.ucm.es/id/eprint/48800/ (accessed on 26 May 2021).
  29. Liyanapathirana, L. Classification Model Evaluation. 2018. Available online: https://heartbeat.fritz.ai/classification-model-evaluation-90d743883106 (accessed on 26 May 2021).
  30. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 1–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Anonymous. Cross-Validation: Evaluating Estimator Performance,” Scikit-Learn. 2020. Available online: https://scikit-llearn.org/stable/modules/cross_validation.html# (accessed on 26 May 2021).
  32. Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining Practical Machine Learning Tools and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2005. [Google Scholar]
  33. 3GPP ETSI. 3GPP TS 22.186 Service Requirements for Enhanced V2X Scenarios (Release 15) (v.15.3.0); 3GPP Organizational Partners’ Publications: Valbonne, France, 2018; pp. 9–11. Available online: https://www.3gpp.org/ftp/Specs/archive/22_series/22.186/ (accessed on 26 May 2021).
  34. Sadek, M.N.; Halawa, H.H.; Daoud, M.R.; Amer, H.H. A Robust Multi-RAT VANET/LTE for Mixed Control & Entertainment Traffic. J. Transp. Technol. 2015, 5, 113–121. [Google Scholar] [CrossRef] [Green Version]
  35. Varga, P. 5G Support for Industrial Iot Applications—Challenges, Solutions, and Research Gaps. Sensors 2020, 20, 828. [Google Scholar] [CrossRef] [PubMed] [Green Version]

Short Biography

Jorge Enrique Preciado-Velasco (IEEE Senior Member and ACM Member) was born in Ensenada, Mexico. He received his B.S. degree from the University of Guadalajara in 1977 in Communications and Electronics Engineering, and his M.Sc. in Electronics and Telecommunications from the CICESE Research Center in 1983. He has twice been President of the Board of Directors of CUDI (the National Research and Education Network in Mexico), CIO of the University of Colima (2008–2012), Mexico, and Director of the Telematics Division in the CICESE Research Centre (1997–2005). Since 1988, he has been a researcher in the Electronics and Telecommunications Department of the CICESE Research Centre. His research interests include network and services ICT management, new-generation wireless communications, and QoS in IP networks.
Joan David González-Franco was born in La Havana, Cuba, in 1996. He received his B.S. degree in Telecommunications and Electronics Engineering from the Technological University of Havana (CUJAE), La Havana, Cuba, in 2020. His main area of interest is the application of Machine Learning in telecommunications.
Caridad Emma Anias-Calderon was born in La Havana in 1956. She received her B.S. degree in Telecommunications Engineering in 1981, became an Optical Communications Specialist in 1987, and obtained a Master's in Telematics in 1996. She received her doctorate in Technical Sciences in 1998, and was named Emeritus Professor of the Technological University of Havana (CUJAE) in 2019. Her areas of interest are telematic networks and the management of telecommunications networks and services. She currently directs the Centre for Telecommunications and Informatics Studies of the CUJAE.
Juan Ivan Nieto-Hipolito (IEEE Member) received his M.Sc. degree from the CICESE Research Centre in 1994 (Mexico). He received a PhD from the Computer Architecture Department at the Polytechnic University of Catalonia (UPC, Spain) in 2005. Since August 1994, he has been a full professor at the Autonomous University of Baja California (UABC, Mexico), where he was the leader of the Telematics Research Group from 2007 to 2012. From 2011 to 2019, he was also Director of the Faculty of Engineering, Architecture, and Design. His research interests include the applications of ICT, mainly wireless, MAC, routing, and instrumentation for e-health.
Raul Rivera-Rodriguez was born in Mochis, Mexico, in 1971. He received his B.S. degree in Electronic Engineering from the Sonora Institute of Technology, Ciudad Obregon, Mexico, in 1994, and an M.Sc. in Electronics and Telecommunications from the CICESE Research Centre, Ensenada, Mexico, in 1997. He received his Ph.D. from the Autonomous University of Baja California, Tijuana, Mexico, in 2010. He has contributed to the deployment of the National Research and Education Network Infrastructure in Mexico, as the President of the Network Committee of CUDI (NREN in Mexico). He is currently the Director of the Telematics Division of the CICESE Research Centre. His research interests include network management systems, QoS in IP networks, signal processing for wireless communications, cybersecurity, cloud computing, cross-layer design, and coding theory.
Figure 1. Schematic diagram of the proposed 5G/B5G service classifier.
Figure 2. Confusion matrices for the first simulation (KPIs). CV: Connected Vehicles; IE: Immersive Experience; IA: Industry Automation; SG: Smart Grid; UHD: Video Streaming; VS: Video Surveillance; VO: Vo5G; eH: e-health.
Figure 3. Confusion matrices for the second simulation (KPIs + KQIs). CV: Connected Vehicles; IE: Immersive Experience; IA: Industry Automation; SG: Smart Grid; UHD: Video Streaming; VS: Video Surveillance; VO: Vo5G; eH: e-health.
Figure 4. Scheme of one Decision Tree of the Random Forest for the first simulation (KPIs).
Figure 5. Scheme of one Decision Tree of the Random Forest for the second simulation (KPIs + KQIs).
Figure 6. Comparison of the results obtained by the Random Forest model in both simulations.
Table 1. Fragment of ten entries of the database built.
Latency (ms) | Jitter (ms) | Bit Rate (Mbps) | Packet Loss Rate (%) | Peak Data Rate DL (Gbps) | Peak Data Rate UL (Gbps) | Mobility (km/h) | Reliability (%) | Service Availability (%) | Survival Time (ms) | Experienced Data Rate DL (Mbps) | Experienced Data Rate UL (Mbps) | Interruption Time (ms) | Service
155110.11872609599810005001000UHD_Video_Streaming
55.51012010209599.299904402000UHD_Video_Streaming
810503.8157159799.9101000500.2Immerse_Experience
4010.51.00e-05189099.9299.99910580Smart_Grid
90180.28.00e-0213248099.999599.999910010101000ITS
130589.00e-0114640099.949510050250Vo5G
1019324.71352695.699.928.9900400.1Immerse_Experience
23158.00e-090.20.210099.99996991101000e_Health
50.5107.50e-040.80.0248099.999299150250Connected_Vehicles
10.050.61.00e-071562899.99999.99990110100Industry_Automation
Table 2. Confusion matrix for binary classification.
                      Prediction (Y)
Current (Ytest)       Positive                  Negative
Positive              True Positives (TP)       False Negatives (FN)
Negative              False Positives (FP)      True Negatives (TN)
Table 3. Results of the accuracy in the cross-validation stage for the first simulation (KPIs).
SL Algorithms       K-Folds (K = 10) Cross-Validation Results
Decision Tree       99.23
Random Forest       99.23
SVM                 92.42
KNN                 59.83
MLPC                87.08
Table 4. Model metric results for the first simulation (KPIs).
SL Algorithms    Accuracy (%)    Precision Macro (%)    Recall Macro (%)    F1-Score Macro (%)    MCC (%)
Decision Tree    93.9            93.5                   94.7                93.1                  93.19
Random Forest    93.9            94.7                   94.7                93.8                  93.17
SVM              96.9            96.3                   98.4                96.9                  96.6
KNN              78.8            70                     79.9                71.8                  76.89
MLPC             87.8            82.4                   85.1                82.7                  86.58
Table 5. Results of the accuracy in the cross-validation stage for the second simulation (KPIs + KQIs).
SL Algorithms       K-Folds (K = 10) Cross-Validation Results
Decision Tree       97.69
Random Forest       98.52
SVM                 91.59
KNN                 83.35
MLPC                90.11
Table 6. Model metric results for the second simulation (KPIs + KQIs).
SL Algorithms    Accuracy (%)    Precision Macro (%)    Recall Macro (%)    F1-Score Macro (%)    MCC (%)
Decision Tree    96.9            97.2                   96.3                96.2                  96.6
Random Forest    96.9            97.2                   96.3                96.2                  96.6
SVM              100             100                    100                 100                   100
KNN              81.8            76.1                   75.6                71.8                  79.8
MLPC             93.9            97.2                   97.8                97.2                  93.2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

