A Compact and High-Performance Acoustic Echo Canceller Neural Processor Using Grey Wolf Optimizer along with Least Mean Square Algorithms
Abstract
1. Introduction
- Parallel neural adder. Recently, Ref. [34] developed a neural adder circuit that computes two large integer numbers in parallel. Despite this achievement, the adder demands an enormous number of synapses and neurons to process large integers. As a result, its use for simulation purposes becomes impractical, since metaheuristic SI methods demand high numerical precision.
- Parallel neural multiplier. In [35], the authors aimed to significantly reduce the number of synapses to create an ultra-compact parallel multiplier. Although the area consumption decreased, the processing time remained high.
- Design of high-speed and high-precision neural adders and neural multipliers.
- Design of high-processing speed hardware architectures to efficiently simulate the proposed convex GWO/LMS adaptive filter in embedded devices.
- Design of a high-precision floating-point parallel adder circuit. Here, we present, for the first time, a neural adder that computes numbers in a customized floating-point format. We employ new variants of the SN P systems, called coloured spikes [36], rules on the synapses [37], target indications, extended channel rules [38] and extended rules [39], to process numbers under this format in the proposed arithmetic circuit.
- Design of a high-precision floating-point parallel multiplier. Here, we present, for the first time, the development of a neural multiplier to compute floating-point numbers at high processing speeds.
- Design of a new FPGA-based GWO/LMS neuromorphic architecture. We design the proposed neuromorphic architecture employing basic digital components, such as shift registers, adders and multiplexers, to guarantee a low area consumption. Since the proposed convex GWO/LMS adaptive filter dynamically varies the number of search agents, we propose, for the first time, a time-multiplexing control scheme to adequately support this behavior.
2. The Proposed Block Convex GWO/LMS Algorithm
2.1. GWO Algorithm
2.2. LMS Algorithm
2.3. Convex GWO/LMS
- The calculation of the filter coefficients by employing the LMS algorithm. Here, we use the block LMS algorithm to calculate the weights, which has allowed us to create efficient implementations on commercially available embedded devices [42,43]. For block index $k$, block length $L$ and filter length $N$, the adaptive filter coefficients are defined as $\mathbf{w}(k) = [w_0(k), w_1(k), \ldots, w_{N-1}(k)]^{T}$. The error vector is obtained as $\mathbf{e}(k) = \mathbf{d}(k) - \mathbf{y}(k)$, where $\mathbf{d}(k)$ is the desired (microphone) signal of the block. The filter output of each block is given by the matrix-vector product $\mathbf{y}(k) = \mathbf{X}(k)\,\mathbf{w}(k)$, where $\mathbf{X}(k)$ is the $L \times N$ matrix whose rows are the delayed input vectors of the block.
- The calculation of the filter coefficients by employing the GWO algorithm. Here, the use of the block-processing scheme in SI algorithms allows the signal to be processed in real-time applications. As a consequence, the performance capabilities of parallel hardware architectures can potentially be fully exploited by simulating the intrinsic parallel computational capabilities of the SI algorithms. In this work, we introduce, for the first time, a block-based GWO algorithm for practical, real-time AEC applications, as shown in Figure 2. The encircling behavior of the grey wolves is mathematically described as $\mathbf{D} = |\mathbf{C}\,\mathbf{X}_{p}(t) - \mathbf{X}(t)|$ and $\mathbf{X}(t+1) = \mathbf{X}_{p}(t) - \mathbf{A}\,\mathbf{D}$, where $\mathbf{X}_{p}$ is the position of the prey, $\mathbf{A} = 2a\,\mathbf{r}_1 - a$, $\mathbf{C} = 2\mathbf{r}_2$, $a$ decreases linearly from 2 to 0 and $\mathbf{r}_1$, $\mathbf{r}_2$ are random vectors in $[0, 1]$. To obtain the best solution, a fitness function defined in terms of the mean square error (MSE) is used to evaluate each search agent; the fitness value of a position is the MSE of the block error produced when that agent is used as the filter coefficient vector. Considering the outputs of both filters at time $n$, $y_{\mathrm{GWO}}(n)$ and $y_{\mathrm{LMS}}(n)$, the output of the parallel filter is obtained as $y(n) = \lambda(n)\,y_{\mathrm{GWO}}(n) + [1 - \lambda(n)]\,y_{\mathrm{LMS}}(n)$, where $\lambda(n) \in [0, 1]$ is the mixing parameter. Finally, the performance of the combined filter can be further improved by transferring a portion of the LMS filter coefficients to the positions of the alpha, beta and delta wolves. In this way, the GWO filter can reach a lower steady-state MSE while keeping a high convergence rate. (A simplified software sketch of both coefficient updates and their convex combination is given after this list.)
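The following minimal Python sketch illustrates how the two coefficient updates and their convex combination can interact on one block of data. It assumes a block length equal to the filter length, a mean-of-leaders GWO position update and a fixed mixing parameter; the function names, hyper-parameters and the lack of a λ-adaptation rule are placeholders, not the exact rules of the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_data_matrix(x, k, N, L):
    """L x N matrix of delayed input samples for block k (zero-padded initial conditions)."""
    xp = np.concatenate([np.zeros(N - 1), x])
    return np.array([xp[k * L + i : k * L + i + N][::-1] for i in range(L)])

def block_lms_update(w, X, d, mu):
    """Block LMS: filter output, error vector and one coefficient update per block."""
    y = X @ w                                  # y(k) = X(k) w(k)
    e = d - y                                  # e(k) = d(k) - y(k)
    w_new = w + (mu / len(d)) * (X.T @ e)      # gradient averaged over the block
    return w_new, y, e

def gwo_update(agents, X, d, a):
    """One block-based GWO iteration: MSE fitness, then encircling around alpha/beta/delta."""
    fitness = np.array([np.mean((d - X @ w) ** 2) for w in agents])   # MSE of each agent
    leaders = agents[np.argsort(fitness)[:3]]                         # alpha, beta, delta
    moved = np.empty_like(agents)
    for i, pos in enumerate(agents):
        steps = []
        for lead in leaders:
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            steps.append(lead - A * np.abs(C * lead - pos))           # encircling behaviour
        moved[i] = np.mean(steps, axis=0)
    return np.clip(moved, -1.0, 1.0), leaders[0]                      # positions kept in [-1, 1]

# toy usage on a few blocks (random data stands in for the real signals)
N = L = 16
x = rng.standard_normal(10 * L)
d = rng.standard_normal(10 * L)          # stand-in for the microphone signal
w_lms = np.zeros(N)
agents = np.clip(rng.standard_normal((30, N)), -1.0, 1.0)
lam = 0.5                                # fixed mixing parameter (placeholder)

for k in range(10):
    X = block_data_matrix(x, k, N, L)
    dk = d[k * L : (k + 1) * L]
    w_lms, y_lms, _ = block_lms_update(w_lms, X, dk, mu=1e-3)
    a = 2.0 * (1 - k / 10)               # a decreases linearly from 2 to 0
    agents, alpha = gwo_update(agents, X, dk, a)
    y_combined = lam * (X @ alpha) + (1.0 - lam) * y_lms   # convex combination of outputs
```

In the real algorithm, λ(n) is adapted over time and the number of agents varies, but the data flow per block follows this pattern.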
3. Pure Software Simulation
- The echo signal was mixed with white Gaussian noise (SNR = 20 dB).
- We used an AR(1) process as the input signal.
- In the proposed algorithm, the swarm size was defined in the range of 15–30 search agents.
- To test the tracking capabilities of the proposed algorithm, we induced an abrupt change in the impulse response of the acoustic echo path in the middle of the adaptive filtering process by multiplying the acoustic paths by −1.
- The maximum number of iterations was set to 2,000,000.
- Effect of changing the order of the adaptive filter. Figure 5 shows the evaluation of the ERLE level of the proposed algorithm. In this evaluation, we used a population size of 30 search agents and varied the number of coefficients of the adaptive filter from 150 to 500. The aim of this experiment was to observe how the ERLE level is affected by different numbers of coefficients. As shown in Figure 5, the proposed convex GWO/LMS adaptive filter maintained the same ERLE level regardless of the number of coefficients. Therefore, we used the minimum number of coefficients, which is especially relevant when the filter is implemented in resource-constrained devices.
- Effect of varying the number of search agents of the proposed convex GWO/LMS adaptive filter. In this experiment, we varied the number of search agents from 30 to 200 to evaluate the performance of the proposed algorithm in terms of ERLE level. Since the minimum number of adaptive filter coefficients (150) guaranteed a good ERLE level, we used this number for the experiment. As can be observed from Figure 6, we obtained the same performance with different numbers of search agents. In this way, we confirmed that the proposed method reaches a good ERLE level even with the minimum number of search agents. From an engineering perspective, this has a great impact on resource-constrained devices, since the proposed algorithm reduces its computational cost by decreasing the number of search agents over the course of the adaptive process.
- Performance comparison between the proposed convex GWO/LMS and existing approaches. We performed two experiments to make a coherent comparison between the proposed convex GWO/LMS and the following existing approaches: the LMS algorithm [1], conventional GWO [6], PSO [23], the differential evolution (DE) algorithm [47], artificial bee colony optimization (ABC) [48], hybrid PSO-LMS [49] and modified ABC (MABC) [50]. In the first experiment, we shifted the acoustic path, and in the second, we multiplied the acoustic path by −1 at the middle of the adaptive process. The tuning parameters of all algorithms were selected to guarantee the best performance and are displayed in the following list:
- LMS
  – Convergence factor
- GWO
  – a decreases linearly from 2 to 0
  – Lower bound
  – Upper bound
  – Population size
- PSO
  – Acceleration coefficient
  – Acceleration coefficient
  – Inertia weight
  – Lower bound
  – Upper bound
  – Population size
- DE
  – Crossover rate
  – Scaling factor
  – Combination factor
  – Lower bound
  – Upper bound
  – Population size
- ABC
  – Evaporation parameter
  – Pheromone
  – Lower bound
  – Upper bound
  – Population size
- PSO-LMS
  – Acceleration coefficient
  – Acceleration coefficient
  – Inertia weight
  – Lower bound
  – Upper bound
  – Convergence factor
  – Population size
- MABC
  – Evaporation parameter
  – Pheromone
  – Lower bound
  – Upper bound
  – Population size
  – Convergence factor
As can be observed from Figure 7, the proposed convex GWO/LMS adaptive filter showed the best performance in terms of ERLE level and convergence speed, at the cost of a large number of additions and multiplications, as shown in Table 1. In contrast, the LMS algorithm required fewer additions and multiplications than the proposed algorithm at the cost of a slow convergence speed. In general, the excessive number of additions and multiplications makes the implementation of the GWO adaptive filter impractical in current embedded devices, such as DSP and FPGA devices, since they contain a limited number of these circuits. Here, our proposal dynamically decreases the number of search agents, as shown in Figure 8. As a consequence, the number of multiplications and additions is also reduced (Equation (19)). In this way, the implementation of our proposal in embedded devices becomes feasible.
- Statistical comparison between the proposed convex GWO/LMS and existing approaches. Statistical results were obtained with two different evaluations: the average ERLE value in dB and its corresponding standard deviation. The maximum number of iterations was set to 2,000,000 and each algorithm was run 10 times. The results are reported in Table 2. As can be observed from Table 2, the proposed convex GWO/LMS achieved a good average ERLE level in comparison with the other existing algorithms. It should be noted that the MABC algorithm attained the highest average value; nonetheless, it presented a lower convergence speed, especially when abrupt changes occurred, as shown in Figure 7b. On the other hand, the GWO, PSO and PSO-LMS algorithms presented lower standard deviations than the proposed convex GWO/LMS algorithm, which nonetheless achieved a higher average value.
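For reference, the single-talk test scenario used throughout this section (AR(1) input, echo excitation with white Gaussian noise at 20 dB SNR, and a sign flip of the acoustic path at the midpoint of the run) can be generated with a few lines of Python. The AR(1) coefficient, the synthetic exponentially decaying echo path and the signal length below are assumptions for illustration, since the actual measured acoustic responses are not listed here.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, rho=0.9):
    """AR(1) input signal x(n) = rho * x(n-1) + v(n); rho = 0.9 is an assumed value."""
    x = np.zeros(n)
    v = rng.standard_normal(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + v[i]
    return x

def awgn(s, snr_db):
    """Add white Gaussian noise so that the resulting SNR equals snr_db."""
    noise_power = np.mean(s ** 2) / 10 ** (snr_db / 10)
    return s + np.sqrt(noise_power) * rng.standard_normal(len(s))

n_samples, n_taps = 20_000, 150
h = np.exp(-np.arange(n_taps) / 30) * rng.standard_normal(n_taps)  # assumed decaying echo path
x = ar1(n_samples)

half = n_samples // 2
echo = np.empty(n_samples)
echo[:half] = np.convolve(x[:half], h)[:half]                  # original acoustic path
echo[half:] = np.convolve(x[half:], -h)[:n_samples - half]     # path multiplied by -1 (abrupt change;
                                                               # filter memory is reset here in this sketch)
d = awgn(echo, snr_db=20.0)                                    # microphone signal with SNR = 20 dB
```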
4. Pure Hardware Simulation
- In general terms, the numbers u and v are separated into integer and fractional digits. The number of integer digits and the number of fractional digits can each be chosen over a configurable range, as shown in Figure 9. Here, we coloured the digits by using a variant of the SN P systems called coloured spikes, so that the units, tens, hundreds, etc., of the integer part and the tenths, hundredths, thousandths, etc., of the fractional part can be easily distinguished. The same strategy is used to represent the digits of the results of the addition and multiplication operations.
- Here, each of the synaptic channels (1) and (2) has a set of dendritic branches. To perform a customized floating-point addition, the integer and fractional digits of u and v are represented as numbers of active dendritic branches. To perform a customized floating-point multiplication, the integer and fractional digits of u are represented as the number of spikes specified in the corresponding extended rule, while the integer and fractional digits of v activate a number of branches equal to their value.
- Parallel neural adder circuit. The proposed neural adder circuit comprises a set of addition neurons, one per digit position, and a dedicated point neuron. The addition neurons compute the sum of the two numbers, each composed of an integer part and a fractional part, while the point neuron determines the position of the point that segments each number into its integer and fractional parts, as shown in Figure 9. The proposed neural adder circuit computes the addition as follows. In the initial state, the addition neurons are empty. At this time, the dendritic branches of synaptic channels (1) and (2) are activated according to the values of the digits of u and v, respectively. For example, if the value of a digit of u or v equals five, then five dendritic branches are activated, and these branches allow five spikes to flow towards the corresponding addition neuron. Simultaneously, the point neuron places the point that segments the number into integer and fractional digits by applying its firing rule: whenever the point neuron receives a spike, it fires and sends a spike to a specific addition neuron. To achieve this, we use a variant of the SN P systems called target indications; in this way, the point is allocated according to the desired precision and stored, represented as a spike, in a specific neuron. Once the neural circuit is configured, the partial additions of the spikes start. The addition is performed in a single simulation step, since all addition neurons process their respective input spikes simultaneously. A carry spike is generated whenever an addition neuron accumulates ten spikes, at which point its carry rule is applied. After one simulation step, the result, represented by the spikes remaining in the soma of each neuron, is placed at the output synapses, and the point neuron sends the point spike to the environment by applying its output rule. In this work, we show that the combined use of several variants of the SN P systems, namely coloured spikes [36], rules on the synapses [37], target indications [51], extended channel rules [38], extended rules [39] and dendritic trunks [52], yields an ultra-compact and high-performance circuit, instead of relying only on the soma as conventional SN P systems do. In particular, the proposed neural circuit requires fewer simulation steps, neurons and synapses than the existing approach [34], as shown in Table 3.
| | Approach [34] | This Work |
|---|---|---|
| Synapses | 41n | 14n |
| Neurons | 12n | n |
| Simulation steps | | 1 |

- Parallel neural multiplier circuit. Since the multiplier is one of the most demanding circuits in terms of processing time and area consumption, a large number of techniques have been developed to minimize these factors. Recently, several authors have used SN P systems to create efficient parallel multiplier circuits. However, processing speed remains an issue, since most of these studies focused on area consumption. Improving this factor potentially allows the development of high-performance systems that support AEC in real time. In addition, the development of a high-precision neural multiplier is still a challenging task, and this factor is especially relevant when metaheuristic algorithms are simulated. Here, we develop a neural multiplier that achieves higher processing speed than existing approaches while keeping area consumption low. To achieve this, we reduce the processing time by using cutting-edge variants of the SN P systems, namely coloured spikes [36], rules on the synapses [37], target indications [51], extended rules [39] and dendritic trunks [52]. Specifically, we use these variants to significantly reduce the time spent computing the partial products of the multiplication, in comparison with the most recent approach [35]. The proposed neural multiplier circuit is composed of a set of partial-product neurons, an accumulation stage and a point neuron, as shown in Figure 10. In general terms, the accumulation neurons perform the addition of the partial products, where each partial product is computed by using dendritic branches. In particular, each partial-product neuron computes the product of a single digit of u and a single digit of v: the digit of u is represented as p spikes, generated when the spiking rule of the u-digit neuron is applied, and the value of the digit of v activates an equal number of dendritic branches. The proposed neural multiplier circuit performs the multiplication as follows. At the initial simulation step, the partial-product neurons are empty. At this time, the point neuron receives a spike, fires and sends the spike point to a specific neuron using target indications [51]; in this way, the digits of u or v are segmented into integer and fractional parts. Once the multiplier is configured, the partial products are executed in parallel. To perform a partial product, the u-digit neuron fires p spikes, which are sent to its corresponding partial-product neuron. The soma of this neuron receives as many copies of the p spikes as there are active dendritic branches. For example, if a neuron multiplies 3 × 3, then it receives three copies of three spikes through three dendritic branches and increases its soma potential by performing three additions. Therefore, synaptic weights, which many approaches use to compute partial products, are not required. Moreover, the number of branched connections can vary, so each neuron enables only the optimal number of synaptic connections. From an engineering perspective, we propose the use of a variable number of forked connections because implementing a very large number of synaptic connections in advanced FPGAs creates critical routing problems. Once the result is obtained, the point neuron places the spike point according to the sum of the numbers of fractional digits, as in conventional multiplication: the neuron that received the spike point at the initial simulation step fires the point spike by applying its firing rule. As can be observed from Table 4, we achieved a significant improvement in terms of simulation steps, since only one simulation step is required to multiply two numbers of any length. This aspect is relevant, especially when real-time AEC system simulations are required. Additionally, we reduced the number of synapses in comparison with the existing work [35]. (A simple software model of this digit-based addition and multiplication is given after Table 4.)
| | Existing Neural Multiplier [35] | This Work |
|---|---|---|
| Synapses | | |
| Neurons | n | n |
| Simulation steps | 10 | 1 |
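To make the digit-and-point encoding concrete, the following plain Python functional model mirrors the behaviour described above: digits play the role of spike counts, a carry corresponds to a neuron accumulating ten spikes, and a partial product corresponds to a digit of u delivered as p spikes through as many dendritic branches as the value of the digit of v. The DigitNumber class and the function names are illustrative only; this models the arithmetic, not the SN P system itself or its single-step timing.

```python
from dataclasses import dataclass

@dataclass
class DigitNumber:
    """A number stored as base-10 digits plus a point position (customized floating point).
    `digits` are most-significant first; `point` counts the fractional digits."""
    digits: list
    point: int

    def to_float(self):
        value = int("".join(map(str, self.digits)))
        return value / 10 ** self.point

def neural_add(u: DigitNumber, v: DigitNumber) -> DigitNumber:
    """Functional model of the parallel neural adder: every digit position is summed in one
    pass, and a carry is emitted whenever a position reaches ten 'spikes'."""
    assert u.point == v.point and len(u.digits) == len(v.digits)
    sums = [a + b for a, b in zip(u.digits, v.digits)]      # all positions "in parallel"
    out, carry = [], 0
    for s in reversed(sums):                                # resolve carries (>= 10 spikes)
        s += carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return DigitNumber(list(reversed(out)), u.point)

def neural_mul(u: DigitNumber, v: DigitNumber) -> DigitNumber:
    """Functional model of the parallel neural multiplier: each partial product is formed by
    delivering `du` spikes over `dv` dendritic branches, i.e. dv copies of du."""
    acc = 0
    for i, du in enumerate(reversed(u.digits)):
        for j, dv in enumerate(reversed(v.digits)):
            partial = sum(du for _ in range(dv))            # dv copies of du spikes
            acc += partial * 10 ** (i + j)
    digits = [int(c) for c in str(acc)]
    return DigitNumber(digits, u.point + v.point)           # point = sum of fractional digits

# example: 12.34 + 10.01 and 12.34 * 0.56
a = DigitNumber([1, 2, 3, 4], point=2)
b = DigitNumber([1, 0, 0, 1], point=2)
c = DigitNumber([0, 0, 5, 6], point=2)
print(neural_add(a, b).to_float())   # 22.35
print(neural_mul(a, c).to_float())   # 6.9104
```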
4.1. Experimental Results
- We employed 1024 adaptive filter coefficients and varied the number of search agents from 70 down to 15.
- The filter and the block had the same length as the echo path.
- As input signals, we used an AR(1) process and speech sequence signals.
- Fixed step-sizes were used for the LMS algorithm and for the convex combination algorithm.
- The search agents were initialized using normally distributed random numbers and their positions were bounded between [−1, 1] over the filtering process.
4.2. Single-Talk Scenario
4.3. Double-Talk Scenario
5. Conclusions
- From the AEC model point of view. Here, we made intensive efforts to reduce the computational cost of AEC systems so that they can be implemented in resource-constrained devices. In addition, we significantly improved the convergence properties of these systems by combining a cutting-edge metaheuristic swarm intelligence method with a gradient descent algorithm for use in practical acoustic environments. Specifically, we presented a new variant of the GWO algorithm along with the LMS algorithm. This combination guarantees a higher convergence rate and a lower MSE level than gradient descent algorithms or metaheuristic SI methods used separately. To improve the tracking capabilities of the conventional GWO algorithm, the proposed variant adds new exploration capabilities, since the search space is dynamically adjusted. To make the implementation of the proposed GWO variant in embedded devices feasible, we used the block-processing scheme; in this way, the proposed convex GWO/LMS algorithm can easily be implemented in parallel hardware architectures and simulated at high processing speeds. In addition, we significantly reduced the computational cost of the proposed convex GWO/LMS algorithm by dynamically decreasing the population of the GWO variant over the filtering process.
- From the SN P systems point of view. Here, we presented, for the first time, compact and high-speed floating-point neural adder and multiplier circuits. We used cutting-edge variants of the SN P systems, namely coloured spikes, rules on the synapses, target indications, extended channel rules, extended rules and dendritic trunks, to create a customized floating-point neural adder and multiplier. Specifically, the proposed neural adder and multiplier exhibit higher processing speed than existing SN P adders and multipliers, since both require only one simulation step, which is the best result achieved to date.
- From the digital point of view. In this work, we presented, for the first time, a parallel hardware architecture that simulates a variable number of search agents by using the proposed time-multiplexing control scheme. In this way, we properly implemented the proposed GWO method, in which the number of search agents increases or decreases according to the simulation needs. In addition, the use of this scheme allowed us to fully exploit the flexibility and scalability of the GWO algorithm.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Benesty, J.; Duhamel, P. A fast exact least mean square adaptive algorithm. IEEE Trans. Signal Process. 1992, 40, 2904–2920. [Google Scholar] [CrossRef] [PubMed]
- Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS’95, Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
- Ling, Q.; Ikbal, M.A.; Kumar, P. Optimized LMS algorithm for system identification and noise cancellation. J. Intell. Syst. 2021, 30, 487–498. [Google Scholar] [CrossRef]
- Botzheim, J.; Cabrita, C.; Kóczy, L.T.; Ruano, A. Fuzzy rule extraction by bacterial memetic algorithms. Int. J. Intell. Syst. 2009, 24, 312–339. [Google Scholar] [CrossRef]
- Ariyarit, A.; Kanazaki, M. Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization. J. Mech. Sci. Technol. 2015, 29, 1443–1448. [Google Scholar] [CrossRef]
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
- Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
- Khehra, B.S.; Singh, A.; Kaur, L.M. Masi Entropy-and Grey Wolf Optimizer-Based Multilevel Thresholding Approach for Image Segmentation. J. Inst. Eng. Ser. B 2022, 103, 1619–1642. [Google Scholar] [CrossRef]
- Vashishtha, G.; Kumar, R. An amended grey wolf optimization with mutation strategy to diagnose bucket defects in Pelton wheel. Measurement 2022, 187, 110272. [Google Scholar] [CrossRef]
- Rajammal, R.R.; Mirjalili, S.; Ekambaram, G.; Palanisamy, N. Binary Grey Wolf Optimizer with Mutation and Adaptive K-nearest Neighbour for Feature Selection in Parkinson’s Disease Diagnosis. Knowl.-Based Syst. 2022, 246, 108701. [Google Scholar] [CrossRef]
- Reddy, V.P.C.; Gurrala, K.K. Joint DR-DME classification using deep learning-CNN based modified grey-wolf optimizer with variable weights. Biomed. Signal Process. Control 2022, 73, 103439. [Google Scholar] [CrossRef]
- Dey, S.; Banerjee, S.; Dey, J. Implementation of Optimized PID Controllers in Real Time for Magnetic Levitation System. In Computational Intelligence in Machine Learning; Springer: Berlin/Heidelberg, Germany, 2022; pp. 249–256. [Google Scholar]
- Zhang, X.; Li, D.; Li, J.; Liu, B.; Jiang, Q.; Wang, J. Signal-Noise Identification for Wide Field Electromagnetic Method Data Using Multi-Domain Features and IGWO-SVM. Fractal Fract. 2022, 6, 80. [Google Scholar] [CrossRef]
- Premkumar, M.; Jangir, P.; Kumar, B.S.; Alqudah, M.A.; Nisar, K.S. Multi-objective grey wolf optimization algorithm for solving real-world BLDC motor design problem. Comput. Mater. Contin. 2022, 70, 2435–2452. [Google Scholar] [CrossRef]
- Nagadurga, T.; Narasimham, P.; Vakula, V.; Devarapalli, R. Gray wolf optimization-based optimal grid connected solar photovoltaic system with enhanced power quality features. Concurr. Comput. Pract. Exp. 2022, 34, e6696. [Google Scholar] [CrossRef]
- Musharavati, F.; Khoshnevisan, A.; Alirahmi, S.M.; Ahmadi, P.; Khanmohammadi, S. Multi-objective optimization of a biomass gasification to generate electricity and desalinated water using Grey Wolf Optimizer and artificial neural network. Chemosphere 2022, 287, 131980. [Google Scholar] [CrossRef]
- Meidani, K.; Hemmasian, A.; Mirjalili, S.; Barati Farimani, A. Adaptive grey wolf optimizer. Neural Comput. Appl. 2022, 34, 7711–7731. [Google Scholar] [CrossRef]
- Zhang, L.; Yu, C.; Tan, Y. A method for pulse signal denoising based on VMD parameter optimization and Grey Wolf optimizer. In Proceedings of the 2021 2nd International Conference on Electrical, Electronic Information and Communication Engineering (EEICE 2021), Tianjin, China, 16–18 April 2021; Journal of Physics: Conference Series; Volume 1920, p. 012100. [Google Scholar]
- Negi, G.; Kumar, A.; Pant, S.; Ram, M. GWO: A review and applications. Int. J. Syst. Assur. Eng. Manag. 2021, 12, 1–8. [Google Scholar] [CrossRef]
- Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435. [Google Scholar] [CrossRef]
- Salinas, G.; Pichardo, E.; Vázquez, Á.A.; Avalos, J.G.; Sánchez, G. Grey wolf optimization algorithm for embedded adaptive filtering applications. IEEE Embed. Syst. Lett. 2022, 1. [Google Scholar] [CrossRef]
- Mahbub, U.; Acharjee, P.P.; Fattah, S.A. A time domain approach of acoustic echo cancellation based on particle swarm optimization. In Proceedings of the International Conference on Electrical & Computer Engineering (ICECE 2010), Dhaka, Bangladesh, 18–20 December 2010; pp. 518–521. [Google Scholar]
- Mahbub, U.; Acharjee, P.P.; Fattah, S.A. An acoustic echo cancellation scheme based on particle swarm optimization algorithm. In Proceedings of the TENCON 2010—2010 IEEE Region 10 Conference, Fukuoka, Japan, 21–24 November 2010; pp. 759–762. [Google Scholar]
- Kimoto, M.; Asami, T. Multichannel Acoustic Echo Canceler Based on Particle Swarm Optimization. Electron. Commun. Jpn. 2016, 99, 31–40. [Google Scholar] [CrossRef]
- Mishra, A.K.; Das, S.R.; Ray, P.K.; Mallick, R.K.; Mohanty, A.; Mishra, D.K. PSO-GWO optimized fractional order PID based hybrid shunt active power filter for power quality improvements. IEEE Access 2020, 8, 74497–74512. [Google Scholar] [CrossRef]
- Suman, S.; Chatterjee, D.; Mohanty, R. Comparison of PSO and GWO Techniques for SHEPWM Inverters. In Proceedings of the 2020 International Conference on Computer, Electrical & Communication Engineering (ICCECE), Kolkata, India, 17–18 January 2020; pp. 1–7. [Google Scholar]
- Şenel, F.A.; Gökçe, F.; Yüksel, A.S.; Yiğit, T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2019, 35, 1359–1373. [Google Scholar] [CrossRef]
- Wang, W.; Wang, J. Convex combination of two geometric-algebra least mean square algorithms and its performance analysis. Signal Process. 2022, 192, 108333. [Google Scholar] [CrossRef]
- Bakri, K.J.; Kuhn, E.V.; Matsuo, M.V.; Seara, R. On the behavior of a combination of adaptive filters operating with the NLMS algorithm in a nonstationary environment. Signal Process. 2022, 196, 108465. [Google Scholar] [CrossRef]
- Jeong, J.J.; Kim, S. Robust adaptive filter algorithms against impulsive noise. Circuits Syst. Signal Process. 2019, 38, 5651–5664. [Google Scholar] [CrossRef]
- Silva, M.T.; Nascimento, V.H. Improving the tracking capability of adaptive filters via convex combination. IEEE Trans. Signal Process. 2008, 56, 3137–3149. [Google Scholar] [CrossRef]
- Ionescu, M.; Păun, G.; Yokomori, T. Spiking neural P systems. Fundam. Inform. 2006, 71, 279–308. [Google Scholar]
- Frias, T.; Sanchez, G.; Garcia, L.; Abarca, M.; Diaz, C.; Sanchez, G.; Perez, H. A new scalable parallel adder based on spiking neural P systems, dendritic behavior, rules on the synapses and astrocyte-like control to compute multiple signed numbers. Neurocomputing 2018, 319, 176–187. [Google Scholar] [CrossRef]
- Avalos, J.G.; Sanchez, G.; Trejo, C.; Garcia, L.; Pichardo, E.; Vazquez, A.; Anides, E.; Sanchez, J.C.; Perez, H. High-performance and ultra-compact spike-based architecture for real-time acoustic echo cancellation. Appl. Soft Comput. 2021, 113, 108037. [Google Scholar] [CrossRef]
- Song, T.; Rodríguez-Patón, A.; Zheng, P.; Zeng, X. Spiking neural P systems with colored spikes. IEEE Trans. Cogn. Dev. Syst. 2017, 10, 1106–1115. [Google Scholar] [CrossRef]
- Peng, H.; Chen, R.; Wang, J.; Song, X.; Wang, T.; Yang, F.; Sun, Z. Competitive spiking neural P systems with rules on synapses. IEEE Trans. NanoBiosci. 2017, 16, 888–895. [Google Scholar] [CrossRef]
- Lv, Z.; Bao, T.; Zhou, N.; Peng, H.; Huang, X.; Riscos-Núñez, A.; Pérez-Jiménez, M.J. Spiking neural p systems with extended channel rules. Int. J. Neural Syst. 2021, 31, 2050049. [Google Scholar] [CrossRef] [PubMed]
- Chen, H.; Ionescu, M.; Ishdorj, T.O.; Păun, A.; Păun, G.; Pérez-Jiménez, M.J. Spiking neural P systems with extended rules: Universality and languages. Nat. Comput. 2008, 7, 147–166. [Google Scholar] [CrossRef]
- Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization; Springer: Berlin/Heidelberg, Germany, 2019; pp. 57–82. [Google Scholar]
- Scarpiniti, M.; Comminiello, D.; Uncini, A. Convex combination of spline adaptive filters. In Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), A Coruna, Spain, 2–6 September 2019; pp. 1–5. [Google Scholar]
- Khan, M.T.; Kumar, J.; Ahamed, S.R.; Faridi, J. Partial-LUT designs for low-complexity realization of DA-based BLMS adaptive filter. IEEE Trans. Circuits Syst. II Express Briefs 2020, 68, 1188–1192. [Google Scholar] [CrossRef]
- Khan, M.T.; Shaik, R.A. Analysis and implementation of block least mean square adaptive filter using offset binary coding. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar]
- International Telecommunication Union ITU-T. Digital Network Echo Cancellers; Standardization Sector of ITU: Geneva, Switzerland, 2002. [Google Scholar]
- Clark, G.; Mitra, S.; Parker, S. Block implementation of adaptive digital filters. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 744–752. [Google Scholar] [CrossRef]
- Burrus, C. Block implementation of digital filters. IEEE Trans. Circuit Theory 1971, 18, 697–701. [Google Scholar] [CrossRef]
- Reddy, K.S.; Sahoo, S.K. An approach for FIR filter coefficient optimization using differential evolution algorithm. AEU-Int. J. Electron. Commun. 2015, 69, 101–108. [Google Scholar] [CrossRef]
- Bansal, J.C.; Sharma, H.; Jadon, S.S. Artificial bee colony algorithm: A survey. Int. J. Adv. Intell. Paradig. 2013, 5, 123–159. [Google Scholar] [CrossRef]
- Krusienski, D.; Jenkins, W. A particle swarm optimization-least mean squares algorithm for adaptive filtering. In Proceedings of the Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 7–10 November 2004; Volume 1, pp. 241–245. [Google Scholar]
- Ren, X.; Zhang, H. An Improved Artificial Bee Colony Algorithm for Model-Free Active Noise Control: Algorithm and Implementation. IEEE Trans. Instrum. Meas. 2022, 71, 1–11. [Google Scholar] [CrossRef]
- Wu, T.; Zhang, L.; Pan, L. Spiking neural P systems with target indications. Theor. Comput. Sci. 2021, 862, 250–261. [Google Scholar] [CrossRef]
- Garcia, L.; Sanchez, G.; Vazquez, E.; Avalos, G.; Anides, E.; Nakano, M.; Sanchez, G.; Perez, H. Small universal spiking neural P systems with dendritic/axonal delays and dendritic trunk/feedback. Neural Netw. 2021, 138, 126–139. [Google Scholar] [CrossRef]
- Maya, X.; Garcia, L.; Vazquez, A.; Pichardo, E.; Sanchez, J.C.; Perez, H.; Avalos, J.G.; Sanchez, G. A high-precision distributed neural processor for efficient computation of a new distributed FxSMAP-L algorithm applied to real-time active noise control systems. Neurocomputing 2023, 518, 545–561. [Google Scholar] [CrossRef]
Algorithm | Multiplications | Additions |
---|---|---|
LMS [1] | 6,000,118,333 | 6,000,118,333 |
GWO [6] | 1,349,996,758,333 | 2,249,994,508,333 |
PSO [23] | 1,500,036,249,900 | 1,500,036,249,900 |
DE [47] | 149,999,625,000 | 299,999,250,000 |
ABC [48] | 600,058,499,850 | 749,938,125,150 |
PSO-LMS [49] | 903,077,742,300 | 906,037,734,900 |
MABC [50] | 1,799,955,500,100 | 1,620,075,949,800 |
Convex GWO/LMS | 73,599,043,327 | 46,560,966,659 |