Figure - available from: Soft Computing
a The proposed C-FLANN with k ICUs (ICU1, ICU2, ..., ICUk), where the number of original and expanded inputs is n and m, respectively. $\vec{W}_i = [w_{i,1}, \ldots, w_{i,n}]$, $i = 1 \ldots k$, are the weights of the competitive connections; they are randomly assigned and determine the winner ICU. $\vec{V}_i = [v_{i,1}, \ldots, v_{i,m}, b_i]$, $i = 1 \ldots k$, are the learning weights of C-FLANN. The output of the WTA neuron (i.e., $a_1, a_2, \ldots, a_k$) can enable/disable ICU(s). The learning weights of the enabled/winner ICU (i.e., $\vec{V}_{i^*}$, where $i^*$ is determined by Eq. 4) calculate the final output. b Insight of ICUs
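To make the data flow in panel (a) concrete, the following is a minimal NumPy sketch of a C-FLANN forward pass. It is not the authors' code: the trigonometric expansion, the inner-product winner rule standing in for Eq. 4, and the linear ICU output are illustrative assumptions.

```python
# Hedged sketch of a C-FLANN forward pass (not the authors' implementation).
# Assumed: trigonometric functional expansion, winner chosen by the largest
# inner product between the original input and the competitive weights W_i
# (the paper's Eq. 4 defines the actual rule), linear output from V_i*.
import numpy as np

def expand(x, order=2):
    """Trigonometric functional expansion of the original input x (length n)."""
    terms = [x]
    for p in range(1, order + 1):
        terms.append(np.sin(p * np.pi * x))
        terms.append(np.cos(p * np.pi * x))
    return np.concatenate(terms)              # length m = n * (2*order + 1)

class CFLANN:
    def __init__(self, n, k, order=2, seed=0):
        rng = np.random.default_rng(seed)
        m = n * (2 * order + 1)                   # expanded input dimension
        self.order = order
        self.W = rng.standard_normal((k, n))      # competitive weights (fixed, random)
        self.V = rng.standard_normal((k, m + 1))  # learning weights plus bias per ICU

    def forward(self, x):
        i_star = int(np.argmax(self.W @ x))          # WTA stage: enable a single ICU
        phi = np.append(expand(x, self.order), 1.0)  # expanded input plus bias term
        return self.V[i_star] @ phi, i_star          # winner ICU produces the output

net = CFLANN(n=2, k=4)
y, winner = net.forward(np.array([0.3, -0.7]))
```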


Source publication
Article
Full-text available
In this article, a competitive functional link artificial neural network (C-FLANN) is proposed for function approximation and classification problems. In contrast to the traditional functional link artificial neural networks (FLANNs), the novel structure is a universal approximator and can be used for various applications. C-FLANN is a single-layer...

Similar publications

Preprint
Full-text available
Capsules are the name given by Geoffrey Hinton to vector-valued neurons. Neural networks traditionally produce a scalar value for an activated neuron. Capsules, on the other hand, produce a vector of values, which Hinton argues correspond to a single, composite feature wherein the values of the components of the vectors indicate properties of the f...

Citations

... Jangir et al. [14] have discussed the application of ML and deep learning-based techniques in the diagnosis of diabetes and how this disease relies not only on blood glucose level but also on parameters like obesity, age, sex, and body mass index (BMI). Lotfi and Rezaee [15] have discussed problems like the "dimensionality curse," where there are too many learning parameters in any multilayer neural network-based technique. This not only increases the computational complexity of the regression algorithm but also makes the whole network sluggish. ...
... is the learning rate, ... is the total number of input patterns, and Δw, Δb are the changes in weight and bias, as represented in Equations (15) and (16). The efficiency of the proposed models is estimated by computing statistical metrics like Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Coefficient of Determination (R²). ...
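The error metrics named in this excerpt follow their standard definitions; the sketch below computes them in NumPy and adds a generic delta-rule step of the kind Eqs. (15) and (16) typically express (the cited paper's exact update may differ).

```python
# Standard RMSE, MAE, and R^2, plus a generic LMS/delta-rule update.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def delta_rule_update(v, b, phi, target, output, eta=0.01):
    # Weights and bias move along the error times the (expanded) input;
    # the cited paper's Eqs. (15)-(16) may use a different exact form.
    err = target - output
    return v + eta * err * phi, b + eta * err

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(rmse(y_true, y_pred), mae(y_true, y_pred), r2(y_true, y_pred))
```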
Article
Full-text available
Predictive analytics involves the use of Artificial Intelligence (AI) and Machine Learning (ML) techniques to analyze current and historical data, identify patterns, and make predictions about future outcomes. In the context of healthcare, predictive analytics is invaluable for understanding and addressing various challenges, with a particular emphasis on predicting mortality rates in specific communities during illnesses such as diabetes, heart disease, and drug overdose. In this study, we propose a unique approach combining Feature Selection Regression based on Neighborhood Component Analysis (FSRNCA) with a less complex single-layer Functional Link Artificial Neural Network (FLANN) model to predict mortality rates using a publicly available open-source inventory, i.e., the Big Cities Health Inventory (BCHI). Our methodology leverages regularized FSRNCA to select relevant features from the BCHI dataset, considering factors such as demographics, socio-economic indicators, and health metrics. Subsequently, the FLANN model is trained using the selected features to predict mortality rates in diverse urban populations. The model achieves a high R² score of 91%, outperforming other competitive ML algorithms. Additionally, the proposed technique is successfully tested with datasets such as PIMA, BUPA, ECOLI, and LYMPHOGRAPHY for the classification of various other illnesses. Furthermore, FLANN exhibits minimal computational complexity and rapid training-testing times compared with dense multilayer neural networks, highlighting its practicality and utility for real-world applications.
... For instance, the global exponential stability is shown for shunting inhibitory cellular neural networks [14], bidirectional associative memory neural networks [15], discrete-time bidirectional associative memory neural networks with delays [16], and networks with discontinuous activation functions [17]. The universal function approximation property is another critical mathematical property that was first proved for multilayered feedforward networks in the seminal work of Hornik and colleagues [18] as well as later for other network types such as those with unbounded activation functions [19], competitive functional link neural networks [20], and recurrent neural networks with stochastic input [21]. However, few works have investigated these critical aspects of emotional models. ...
Article
Full-text available
Emotional controllers have been successfully pursued toward various control objectives in the past two decades, but there remain considerable challenges in exploiting their theoretical and cognitive aspects. This paper addresses these two challenges by proposing a modified thalamic connection based on an averaging operator using a radial basis emotional network (RBEN-ATC). From a theoretical perspective, we prove that the resulting RBEN-ATC mapping has the universal function approximation property. It also becomes continuous and differentiable with respect to the network weights. We then incorporate this structure to approximate the unknown dynamics of a nonlinear affine system with nonsymmetric input saturation and develop a stable adaptive radial basis emotional controller (ARBEC). From a cognitive perspective, we stay committed to the fundamental laws of the emotional brain. While this standpoint considerably complicates the design process, the resulting control system becomes interpretable from a biological perspective. For these purposes, a two-prong approach is employed here. Firstly, we incorporate a first-order compensator that handles the errors due to the nonsymmetric actuator saturation with unknown bounds. Secondly, we use a robust adaptive compensator that lessens the effect of uncertainties. Lyapunov stability analysis shows the overall ARBEC closed-loop stability of the nonlinear system. Furthermore, comparisons with other competing neuro- and fuzzy approaches show that ARBEC has better noise rejection and steady-state tracking while having comparable control energy consumption.
... Hence, they are commonly used in classification (Fakhrmoosavy et al. 2018b; Farhoudi et al. 2017; Zhen-Tao Liu et al. 2018; Mei et al. 2017), prediction and regression problems (Jandaghian et al. 2023; Lotfi and Akbarzadeh-T 2016; Lotfi and Rezaee 2018), and control applications (Babaie et al. 2008; Chih-Min Lin et al. 2019; Lotfi and Rezaee 2019; Lucas et al. 2004). ...
... should interact based on the presented limbic system structures (LeDoux 2000; Moren and Balkenius 2000) and the various enhanced structures presented (Zhen-Tao Liu et al. 2018; Lotfi and Akbarzadeh-T 2013, 2014a, 2014b, 2016; Lotfi and Rezaee 2018; Parsapoor 2016; Parsapoor and Bilstrup 2012, 2013). Thus, in the following relationship in phase 1, sequential learning is utilized to demonstrate the interaction between BL and MO. ...
Preprint
Full-text available
One of the advanced machine learning branches is emotional learning based on brain emotional learning (BEL), which has been widely used for almost three decades. BEL mimics the emotional learning mechanism in the mammalian brain, which has the superior features of fast learning, quick reacting, and low computational complexity. The original BEL model, inspired by the limbic system, is composed of two neural network components, namely the amygdala (AMYG) and the orbitofrontal cortex (ORBI), which interact with each other. Using a fuzzy extreme learning machine (FELM), a brain-inspired emotional learning model was developed in this study to predict noisy, chaotic time series. The exchange of information between AMYG and ORBI is facilitated by an online sequential type-one FELM with interactive recurrent memory (OIRMS-T1FELM). The centromedial (CM) section of AMYG uses a type-one FELM (T1FELM) and an interval type-two FELM (IT2FELM). Hence, the proposed model is named BEL-OIRMS-T1/IT2 FELM. The impact of noise on the time-series targets was determined by varying the standard deviation of the Gaussian noise (Sig). The experimental results show that the interaction between AMYG and ORBI reduces the RMSE of their outputs. Also, when IT2FELM is used instead of T1FELM in the CM of AMYG, the RMSE and MAPE of the final prediction output are lower, and the R is higher. In this setting, as the noise level increases, the proposed method with MF=2 performs better than the other configurations.
... Five of the algorithms are supervised classification and recognition algorithms, i.e., extreme learning machine (ELM) [34], SVM [35], WNN, WNN-PSO, and PCA-WNN-PSO with 10 principal components. In addition, two unsupervised classification and recognition algorithms were used, i.e., a competitive neural network (CNN) [36] and a self-organizing map neural network (SOMNN) [37]. The correct rates of these seven algorithms are shown in Fig. 16. ...
Article
Full-text available
In this work, photoacoustic spectroscopy was employed to distinguish real blood from fake blood rapidly, accurately, and recoverably. To achieve this goal, a photoacoustic detection system for blood was established in the forward mode. In the experiments, four kinds of animal blood and two kinds of fake blood, in a total of 150 groups, were used. The time-resolved photoacoustic signals and peak-to-peak values (PPVs) of all blood samples were captured over 700–1064 nm at intervals of 5 nm. Experimental results show that the amplitudes, profiles, peak-point times, and PPVs differ between real and fake blood. Although the PPVs of real blood are larger than those of the fake ones at 700–850 nm, the differences in PPVs are not obvious at 850–1064 nm, especially where the PPV spectra overlap. To accurately classify and discriminate real and fake blood, a wavelet neural network (WNN) was used to train on 120 groups of blood and test on 30 groups. Moreover, the particle swarm optimization (PSO) algorithm was used to optimize the weights and thresholds, as well as the translation and scale factors of the Morlet-like wavelet basis function of the WNN. Under optimal parameters, the correct rate of the WNN-PSO algorithm was improved from 63.3% to 96.7%. Next, principal component analysis (PCA) was combined with the WNN-PSO algorithm to further improve the correct rate. The results indicate that the correct rate of the PCA-WNN-PSO algorithm with 10 principal components reaches 100%. Therefore, photoacoustic spectroscopy combined with the PCA-WNN-PSO algorithm exhibits excellent performance in the classification and discrimination of real and fake blood.
... However, the ANN structure, as a black-box model, is normally built by trial and error, and the computation of the algorithm is complex. To reduce the complexity associated with multilayer architectures, researchers have proposed low-complexity functional link artificial neural networks (FLANNs) whose topology and neuron functions are designed according to a physical interpretation of the object of interest [25,26]. Here, we propose a FLANN-based algorithm to extract the velocities of trunk and relative limb motion from the absolute velocity of the limb measured by the velocity capture device worn on the human wrist. ...
Article
Full-text available
Human motion monitoring is important in applications such as movies, animation, sport training, physical rehabilitation, and human-robot interaction. There is a demand for a simple and easy-to-use method to recognize motions of the limbs and trunk. In this paper, we developed a wearable velocity tracking device using two orthogonally placed micro flow sensors to implement three-dimensional motion velocity measurement. In addition, we proposed a functional link artificial neural network (FLANN) model to extract trunk velocity and relative limb velocity from the absolute limb motion detected by the wearable tracking device, according to their different dynamic features. Experiments were conducted to validate the effectiveness of the velocity tracking methodology. Results showed that the proposed method with the wearable device enables real-time measurements of the motion velocities of limb and trunk that are free of accumulated error, robust for dynamic walking and running, and simple to use.
... The absence of a hidden layer reduces the curse-of-dimensionality problem and hence reduces the computational complexity. Therefore, keeping the basic concept of FLANN the same, different variants have been introduced, such as the random vector functional link artificial neural network (RVFLANN) (Park and Pao, 2000; Pao et al., 1994; Scardapane et al., 2016), recursive FLANN (Sicuranza and Carini, 2012), FLANN with RBF neurons (Hernández-Aguirre et al., 2002), complex-valued FLANN (Amin et al., 2012), competitive FLANN (Lotfi and Rezaee, 2017), etc. In addition, several authors have introduced variants of FLANN such as TFLANN and CFLANN (Patra and Kot, 2002; Patra et al., 2006; Dehuri and Cho, 2009; Reddy and Varma, 2014), LFLANN (Bebarta et al., 2012; Ali and Haweel, 2013; Reddy and Varma, 2014), and PFLANN (Panagiotopoulos et al., 1999; Abbas, 2009; Mahapatra et al., 2012), based on the functions used in the expansion of the inputs. ...
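The FLANN variants named in this excerpt differ mainly in the basis used to expand the original inputs before the single weight layer. As a hedged illustration (orders and scalings chosen here for clarity, not taken from the cited works), the sketch below shows the typical trigonometric, Chebyshev, Legendre, and power expansions of a scalar input.

```python
# Illustrative input expansions behind TFLANN (trigonometric), CFLANN
# (Chebyshev), LFLANN (Legendre), and PFLANN (power/polynomial) variants.
import numpy as np

def trig_expand(x, order=2):
    out = [x]
    for p in range(1, order + 1):
        out += [np.sin(p * np.pi * x), np.cos(p * np.pi * x)]
    return np.array(out)

def chebyshev_expand(x, order=3):
    # Chebyshev polynomials of the first kind: T_{p+1} = 2x*T_p - T_{p-1}
    T = [1.0, x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.array(T)

def legendre_expand(x, order=3):
    # Legendre recurrence: (p+1) P_{p+1} = (2p+1) x P_p - p P_{p-1}
    P = [1.0, x]
    for p in range(1, order):
        P.append(((2 * p + 1) * x * P[p] - p * P[p - 1]) / (p + 1))
    return np.array(P)

def power_expand(x, order=3):
    return np.array([x ** p for p in range(1, order + 1)])

x = 0.5
print(trig_expand(x), chebyshev_expand(x), legendre_expand(x), power_expand(x))
```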
... neural competition involves recurrently connected populations of excitatory and inhibitory neurons and occurs in cortical areas ([10]). This operation has played an essential role in models of receptive fields, unsupervised self-organization networks ([11,39,84,87]), competitive neural network ensemble pruning algorithms ([15]), supervised competitive learning algorithms ([17]), competitive functional link ANNs ([51]) and winner-take-all neural models ([56,68]), and in computational models of the cortex such as hierarchical models of vision ([74]) and attention and recognition modelling ([11,34]) ...
Article
Full-text available
Brain emotional learning (BEL) methods are a recently developed class of emotional brain-inspired algorithms that enjoy feed-forward computational complexity on the order of O(n). BEL methods suffer from a major drawback related to their non-linear problem-solving ability, i.e., they cannot solve n-bit parity problems in which $n \ge 3$. The present paper proposes a competitive BEL (C-BEL) capable of accommodating a higher number of bits in the parity problem. The proposed C-BEL is inspired by the competitive property of the neocortex's neurocircuits. The method is tested on n-bit parity, function approximation, and a pattern recognition problem. Various comparisons with the reinforcement BEL (R-BEL), supervised BEL (S-BEL), evolutionary BEL (E-BEL), a Boltzmann machine, and a convolutional neural network indicate the superiority of the approach in terms of its higher ability in non-linear problem solving, function approximation, and pattern recognition.
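The n-bit parity benchmark referred to above is simple to reproduce; a minimal generator is sketched below (assumed convention: binary inputs, target equal to the XOR/parity of the bits; this is not code from the cited paper).

```python
# Minimal generator for the n-bit parity benchmark (assumed convention:
# inputs in {0, 1}^n, target = parity of the set bits).
import numpy as np
from itertools import product

def parity_dataset(n):
    X = np.array(list(product([0, 1], repeat=n)), dtype=float)
    y = X.sum(axis=1) % 2          # 1 if an odd number of bits are set
    return X, y

X, y = parity_dataset(3)           # the n >= 3 case discussed in the abstract
print(X.shape, y)
```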