Article

Knowledge-based data mining of news information on the Internet using cognitive maps and neural networks


Abstract

In this paper, we investigate ways to apply news information on the Internet to the prediction of interest rates. We developed the Knowledge-Based News Miner (KBNMiner), which is designed to represent the knowledge of interest rate experts with cognitive maps (CMs), to search and retrieve news information on the Internet according to this prior knowledge, and to apply the retrieved information to a neural network model for the prediction of interest rates. This paper focuses on improving the performance of data mining by using prior knowledge. Real-world interest rate prediction data is used to illustrate the performance of the KBNMiner. Our integrated approach, which utilizes CMs and neural networks, has been shown to be effective in experiments. Ten-fold cross validation is used to test our research model, and a paired t-test shows the experimental results to be statistically significant.
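The validation scheme the abstract describes, ten-fold cross validation with a paired t-test on the per-fold results, can be sketched as follows. This is an illustrative outline only; the function names, the seed, and the fold count default are assumptions, not details from the paper:

```python
import math
import random

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train, test) index lists for k-fold cross validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, folds[i]

def paired_t_statistic(errors_a, errors_b):
    """Paired t statistic on the per-fold error pairs of two models.
    For k=10 folds (9 degrees of freedom), |t| > 2.262 is significant
    at the two-tailed 5% level."""
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)
```

In this setup, a model using the CM-selected news features and a baseline without them would each be evaluated on the same ten folds, and the resulting error pairs fed to `paired_t_statistic`.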


... Only 4 are strongly associated with ER status [8]. New ways have been found to extract information from news on the Internet and apply it to interest rates, motivating this survey of data mining and ANN-based applications. According to prior knowledge, the system searches and retrieves news information on the Internet and then applies this information to a neural network model for interest rate prediction [9]. Data mining is used to explore potential factors in huge amounts of data, but the cause of preterm birth is still unclear. ...
... The methods used were training samples and the least squares method [23]. This paper [9] explains that as the amount of data increases, so does the value of data mining. The study was undertaken to find new ways to extract information from news on the Internet and apply it to interest rates; to this end, the Knowledge-Based News Miner (KBNMiner) is proposed. ...
... The study was undertaken to find new ways to extract information from news on the Internet and apply it to interest rates; to this end, the Knowledge-Based News Miner (KBNMiner) is proposed. It represents knowledge about interest rates with cognitive maps (CMs) [9]. According to prior knowledge, it searches and retrieves news information on the Internet, and this information is then applied to a neural network model for interest rate prediction. ...
Article
Neural networks are widely used in data mining because of characteristics such as parallel processing, self-organizing adaptivity, robustness, and fault tolerance. Data mining models depend on the task they accomplish: association rules, clustering, prediction, and classification. Neural networks are used to find patterns in data, and combining neural network models with data mining methods can greatly increase the efficiency of data mining; such combinations have been used broadly. Different algorithms have been discussed for optimizing artificial neural networks (ANNs), and ANNs have been combined with other algorithms to obtain more accurate results than traditional algorithms alone. ANNs with data mining techniques play an important role in forecasting, for example of games and weather, producing more accurate predictions than traditional algorithms. ANNs are also highly parallel and can be accelerated at the level of individual neurons, yielding substantial speed-ups, and they can be used to extract rules from trained networks.
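To make the pattern-finding role concrete, here is a minimal single-neuron "network" trained by gradient descent on squared error to learn logical AND. It is a toy sketch, not a method from the surveyed work; all names and hyperparameters are illustrative:

```python
import math

def sigmoid(x):
    """Logistic squashing function."""
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=10000, lr=0.5):
    """Fit one sigmoid neuron with gradient descent on squared error.
    samples: list of (input_vector, target) pairs with targets in {0, 1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # derivative of 0.5*(y - target)^2 w.r.t. the pre-activation
            grad = (y - target) * y * (1.0 - y)
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b
```

Trained on the four input/output pairs of AND, the neuron learns a separating hyperplane (positive weights, negative bias), which is the simplest instance of the pattern extraction the survey discusses.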
... However, tacit knowledge is difficult to capture, though it could incorporate know-how for a better understanding of the domain, which can help identify valid, useful, and non-obvious patterns in data science projects. During the problem-understanding stage, the data scientist, under the guidance of the domain expert, should first understand the problem domain and characterize it using ontology models or conceptual maps (Hong & Han, 2002; Cao, 2010). This model will contain the main dimensions (perspectives), aspects, and attributes that may be relevant to starting a data science project. ...
Article
Full-text available
Data Science aims to infer knowledge from facts and evidence expressed in data. This occurs through a knowledge discovery process (KDD), which requires an understanding of the application domain. However, in practice, not enough time is spent on understanding this domain, and consequently, the extracted knowledge may be incorrect or irrelevant. Considering that understanding the problem is an essential step in the KDD process, this work proposes the CAPTO method for understanding domains, based on knowledge management models, and, together with the available or acquired tacit and explicit knowledge, proposes a strategy for constructing conceptual models to represent the problem domain. This model will contain the main dimensions (perspectives), aspects and attributes that may be relevant to start a data science project. As a case study, it is applied to the Type 2 Diabetes domain. Results show the effectiveness of the method. The conceptual model, obtained through the CAPTO method, can be used as an initial step for the conceptual selection of attributes.
... Other models rely on machine learning techniques that are capable of incorporating non-linear relationships between economic variables to predict interest rates. These techniques include support vector machines (Gogas et al., 2015), fuzzy logic and genetic algorithms (Ju et al., 1997), neural networks (Kim & Noh, 1997; Oh & Han, 2000; Hong & Han, 2002; Bianchi et al., 2020b, a) and case-based reasoning (Kim & Noh, 1997). However, the financial literature has been slow to adopt such methods (Bianchi et al., 2020b), possibly because it is not necessarily straightforward to understand their abundant non-linear patterns (Diaz et al., 2016) and it is claimed that they are not suitable for parameter inference (see Mullainathan & Spiess, 2017). ...
Article
Full-text available
We shed light on computational challenges when fitting the Nelson-Siegel, Bliss and Svensson parsimonious yield curve models to observed US Treasury securities with maturities up to 30 years. As model parameters have a specific financial meaning, the stability of their estimated values over time becomes relevant when their dynamic behavior is interpreted in risk-return models. Our study is the first in the literature that compares the stability of estimated model parameters among different parsimonious models and for different approaches for predefining initial parameter values. We find that the Nelson-Siegel parameter estimates are more stable and conserve their intrinsic economical interpretation. Results reveal in addition the patterns of confounding effects in the Svensson model. To obtain the most stable and intuitive parameter estimates over time, we recommend the use of the Nelson-Siegel model by taking initial parameter values derived from the observed yields. The implications of excluding Treasury bills, constraining parameters and reducing clusters across time to maturity are also investigated.
... Works in [9,10] analyze financial articles and create a handcrafted thesaurus containing words that drive the stock prices and that are later used to predict stock prices. Similarly, [11] uses a-priori domain knowledge to predict interest rates: a cognitive map represents cause-effect relationships among the events in the domain and is used as the basis to retrieve the relevant news; these are then classified as either positive or negative according to the way they influence the rates. A work similar to ours is [12], where the objective is to predict the Tokyo stock exchange price using a-priori knowledge in the form of rules. ...
... However, these assumptions are usually not valid for real-time data. Therefore, the use of machine learning models to get accurate and robust results becomes inevitable [15,16]. These models also capture non-linear patterns in time series data, which makes them more advantageous and useful. ...
Article
Full-text available
In macroeconomics, decision making is highly sensitive and significantly influences the financial and business world, where the interest rate is a crucial factor. In addition, the interest rate is used by governments to manage monetary policy. There is a need to design an efficient algorithm for interest rate prediction. The analysis of the impact of social media sentiment on financial decision making is also an open research area. In this study, we deploy a deep learning model for the accurate forecasting of the interest rate for the UK, Turkey, China, Hong Kong, and Mexico. For this purpose, daily data on the interest rate and exchange rate covering the period from Jan 2010 to Oct 2019 is used for all the mentioned countries. We also incorporate the input of the Twitter sentiments of six mega-events, namely the US election 2012, Mexican election 2012, Gaza under attack 2014, Hong Kong protest 2014, Refugee Welcome 2015, and Brexit 2016. Our results provide evidence that the error of the deep learning model significantly decreases when event sentiment is incorporated. A notable improvement has been observed in the case of the Hong Kong interest rate, i.e., a 266% decline in the error after incorporating event sentiments as an input in the deep learning model.
... Data mining, a technique to discover knowledge in a database, has become a research area and has assumed increasing importance with the significant increase in the amount of data in recent years. 18 The data mining technique has been widely used in many fields, including biology, 19 agriculture, 20 medical science, 21 and finance. 22 The entire process of adaptation rule acquisition consists of two steps: data pre-processing and adaptation rule mining. ...
Article
Full-text available
Case-based reasoning has proven to be a promising methodology for obtaining new mechanical products by adapting previous cases. However, case adaptation is still a bottleneck in case-based reasoning. The key issue in case adaptation is acquiring the adaptation knowledge. To realize the automation of case adaptation for variant design, a novel case adaptation method is proposed. The method consists of two parts. In the first part, a data mining technique is introduced to acquire the adaptation rules that reflect the relationship between the changes in design requirements and design results. In the second part, the most similar case is retrieved by first using the adaptation rules to weight the design requirements. Then, suitable adaptation rules are selected and used to realize the case adaptation. To validate the proposed method, two experiments are performed. The results show that our method outperforms other methods when the design requirements and design results have both numerical and categorical attributes.
... In information technology (IT), FCM has been applied mostly to support IT project management. Applications in IT range from evaluating investments in information systems (Irani et al. 2002), knowledge-based data mining of information from the internet (Hong and Han 2002), automatic generation of semantics for scientific e-documents (Zhuge and Luo 2006), modeling the success of IT projects (Rodriguez-Repiso et al. 2007), to predicting software reliability (Chytas et al. 2010). FCM has also been used in medicine (e.g., for aiding medical diagnosis) (Innocent and John 2004) and tumor grading (Papageorgiou et al. 2006). ...
Book
This volume brings together, in a central text, chapters written by leading scholars working at the intersection of modeling, the natural and social sciences, and public participation. This book presents the current state of knowledge regarding the theory and practice of engaging stakeholders in environmental modeling for decision-making, and includes basic theoretical considerations, an overview of methods and tools available, and case study examples of these principles and methods in practice. Although there has been a significant increase in research and development regarding participatory modeling, a unifying text that provides an overview of the different methodologies available to scholars and a systematic review of case study applications has been largely unavailable. This edited volume seeks to address a gap in the literature and provide a primer that addresses the growing demand to adopt and apply a range of modeling methods that includes the public in environmental assessment and management. The book is divided into two main sections. The first part of the book covers basic considerations for including stakeholders in the modeling process and its intersection with the theory and practice of public participation in environmental decision-making. The second part of the book is devoted to specific applications and products of the various methods available through case study examination. This second part of the book also provides insight from several international experts currently working in the field about their approaches, types of interactions with stakeholders, models produced, and the challenges they perceived based on their practical experiences.
... Machine learning algorithms are recommended for knowledge acquisition because they reduce the need for specialists [10,26], but the literature also recommends interaction between data mining specialists and specialists in the domain under investigation [27][28][29][30][31][32][33][34]. This interaction was possible in this study and contributed to a better understanding of the data as well as of the results obtained. ...
Article
Aim: Predict the Human Development Index (HDI) of 2013 and 2014 for Latin American countries through forecasting data mining techniques. Methodology: The full stages of Knowledge Discovery in Databases were applied to univariate and multivariate time series. For the prediction, the predictive abilities of 90 forecasting models were tested: two global multivariate models, 44 country-specific multivariate models, and 44 univariate models. The SMOReg algorithm was adopted in the development of the models, as it presented the best performance among the function-based learning algorithms tested in the experiment. Results: The predictions of the models did not present statistically significant differences from the HDI tendencies disclosed in the last report of the United Nations Development Programme. Nevertheless, the global multivariate models presented better quality measures in the predictions. Conclusion: HDI prediction models used with multivariate time series provide better learning for the algorithms, owing to the increased variety of univariate historical experiences.
... For example, the MEDLINE database contains over twelve million citations dating back to the mid-1960s. Mining valuable biomedical information from the literature has therefore become an important issue (Valencia-García, Ruiz-Sánchez, Vicente, Fernández-Breis, & Martínez-Béjar, 2004; Wang, Kuo, Chen, Hsiao, & Tsai, 2005), especially information on the Internet (Hong & Han, 2002). Expert systems and data mining techniques have been used for years in the medical diagnosis domain (Chou, Lee, Shao, & Chen, 2004; Alonso, Caraça-Valente, González, & Montes, 2002). ...
Chapter
Full-text available
This study proposes a mining system for finding protein-to-protein interaction literature in the databases on the Internet. In this system, we search for discriminating words for protein-to-protein interaction by way of statistics and the results from the literature. A threshold is also evaluated to check whether a given piece of literature is related to protein-to-protein interactions. In addition, a keypage-based search mechanism is used to find related papers for protein-to-protein interactions from a given document. To expand the search space and ensure better performance of the system, mechanisms for protein name identification and databases for protein names are also developed. The system is designed with a web-based user interface and a job-dispatching kernel. Experiments are conducted and the results have been checked by a biomedical expert. The experimental results indicate that the proposed mining system helps researchers find protein-to-protein interaction literature among the overwhelming amount of information available in the biomedical databases on the Internet.
... This new algorithm was called the Balanced Differential Algorithm (BDA). The new algorithm eliminates the limitation of the DHL method [5], where the weight update for an edge connecting two concepts (nodes) depends only on the values of those two concepts. However, the proposed learning method was applied only to Fuzzy Cognitive Maps with binary concept values, which significantly restricts its application areas. ...
Chapter
Lack of information and large uncertainties can constrain the effectiveness and acceptability of environmental models. Fuzzy-logic cognitive mapping (FCM) is an approach that deals with these limitations by incorporating existing knowledge and experience. It is a soft-knowledge approach for system modeling, where components of a system and their relationships are identified and semi-quantified in a participatory way. Its usefulness has been manifested through applications in a variety of disciplines, including engineering, information technology, business, and medicine. This chapter introduces FCM as a simple, transparent, and flexible participatory method to model complex social-ecological systems based on expert and stakeholder knowledge. It describes the evolution of FCM to environmental modeling due to its ability to facilitate public participation, data generation, and systems thinking. Numerous actors can be involved when studying environmental issues: experts, scientists, decision makers, and other stakeholders. Thus, a wide range of opinions and perceptions can be taken into account, providing a platform for discussion and negotiation among different actors. Moreover, data that is otherwise inaccessible can be gathered through FCM. Finally, one of the most significant characteristics of the method is the possibility to study causal relationships and feedback loops. In this way, FCM supports decision-making by simulation and scenario studies.
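The simulation mechanism described above can be sketched numerically. In a common FCM formulation, each concept's activation is recomputed from the weighted activations of the concepts that cause it, squashed through a sigmoid, and the map is iterated until it settles. The code below is a generic sketch of that update rule, not the specific formulation of any work cited here:

```python
import math

def fcm_step(state, weights, lam=1.0):
    """One synchronous update: A_i <- sigmoid(lam * sum_j w[j][i] * A_j),
    where w[j][i] is the signed strength of the causal edge j -> i."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-lam * sum(weights[j][i] * state[j]
                                             for j in range(n))))
            for i in range(n)]

def fcm_simulate(state, weights, max_steps=100, tol=1e-6):
    """Iterate the map until it reaches a fixed point or the step budget runs out."""
    for _ in range(max_steps):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state
```

Scenario analysis then amounts to re-running `fcm_simulate` from different initial activation vectors, or with edges re-weighted by different stakeholder groups, and comparing the resulting fixed points.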
... In order to use expert knowledge for plant habitat prediction, Skov and Svenning (2003) combined FCMs with a geographic information system. Hong and Han (2002) and Lee et al. (2002) combined FCMs with data mining techniques to use expert knowledge. By investigating inference properties of FCMs, Liu and Satur (1999) proposed contextual FCMs, which introduce the object-oriented paradigm for decision support systems. ...
Article
In this paper, a new fuzzy cognitive mapping approach is proposed, in which the values of the concepts and the strengths of the links are represented by a set of possible values under Hesitant Fuzzy Sets (HFSs). In particular, we discuss student accommodation problems, which can occur in every education system and affect students' academic achievement. In this regard, we interview the university chancellor, the student accommodation manager, and the top student to create individual cognitive maps; these cognitive maps are then aggregated to build the strategic cognitive map. In this study, based on the experts' opinions, three kinds of scenarios, optimistic, moderate, and pessimistic, are developed. By so doing, the concept with the highest value is determined. Furthermore, the effects of different initial values of the concepts on the final values and on the number of simulation iterations are analyzed.
... FCMs have gained considerable research interest in several scientific fields, from knowledge modelling to decision making [21,22], drawing on data mining techniques to augment users' expert knowledge [23][24][25] and modelling interdependence between real-world concepts by graphically representing the causal reasoning relationships between vague or un-crisp concepts [8,26,27]. Overall, the FCM structure can be viewed as an artificial neural network, where concepts are represented by neurons and causal relationships by weighted links or edges connecting the neurons. ...
Conference Paper
Modelling the dietary intake of older adults can prevent nutritional deficiencies and diet-related diseases, improving their quality of life. Towards this direction, a Fuzzy Cognitive Map (FCM)-based modelling approach that models the interdependencies between the factors affecting the Quality of Nutrition (QoN) is presented here. The proposed FCM-QoN model uses an FCM with seven input and one output concepts, i.e., five food groups of the UK Eatwell Plate, Water (H2O), and the older adult's Emotional State (EmoS), outputting the QoN. The weights incorporated in the FCM structure were drawn from an experts' panel via a Fuzzy Logic-based knowledge representation process. Using various levels of analysis (causalities, static/feedback cycles), the role of EmoS and H2O in the QoN was identified, along with that of Fruits/Vegetables and Protein in affecting the sustainability of effective food combinations. In general, the FCM-QoN approach has the potential to explore different dietary scenarios, helping health professionals promote healthy ageing and providing prognostic simulations of disease effects (such as Parkinson's) on dietary habits, as used in the H2020 i-Prognosis project (www.i-prognosis.eu).
... FCM is a graph comprising a collection of nodes, which stand for concepts or variables, and weighted arcs connecting the nodes, which represent the cause-effect relationships between them [2]. FCM has been widely applied to modeling [3], classification [4], and prediction [5]. ...
Article
The Knowledge Map (KM) concept, which was derived from the Fuzzy Cognitive Map (FCM), is used to describe and manage knowledge. KM provides insight into the interdependencies and uncertainties contained in a system. This paper uses a model-free method to mine KMs from historical data to analyze component stock corporations of the Shanghai Stock 50 index. Both static and time-domain analyses are performed. The results indicate that a knowledge map is useful for representing knowledge and for monitoring the health of companies. Furthermore, sudden changes in the key features of the KMs should be taken seriously by policymakers as a warning of crisis.
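One crude, model-free way to mine such map weights from historical series, in the spirit of the abstract though not the paper's actual algorithm, and with all names assumed, is to score each directed edge i → j by the lag-1 correlation between changes in concept i and subsequent changes in concept j:

```python
import math

def mine_km_weights(series):
    """Estimate a signed weight matrix from concept time series:
    w[i][j] is the correlation between the change of concept i at
    step t and the change of concept j at step t+1 (a lag-1 proxy
    for the directed influence i -> j)."""
    diffs = [[s[t + 1] - s[t] for t in range(len(s) - 1)] for s in series]

    def corr(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

    n = len(series)
    return [[corr(diffs[i][:-1], diffs[j][1:]) for j in range(n)]
            for i in range(n)]
```

When one series simply lags another by one step, the mined weight on the corresponding directed edge is maximal, matching the intuition that the leading concept "drives" the lagging one.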
... Expert knowledge can be used to guide the user in all those decisions. An intelligent discovery assistant that helps a data miner explore the space of valid DM processes is presented in (Bernstein et al. 2005). The KBNMiner, presented in (Hong and Han 2002), focuses on the effect that news information can have on the prediction of interest rates. KBNMiner is designed to use a prior knowledge base, representing expert knowledge, as a foundation on which to probe and collect news from the Web using text mining techniques. ...
Article
Expert systems are built from knowledge traditionally elicited from the human expert. It is precisely knowledge elicitation from the expert that is the bottleneck in expert system construction. On the other hand, a data mining system, which automatically extracts knowledge, needs expert guidance on the successive decisions to be made in each of the system phases. In this context, expert knowledge and data mining discovered knowledge can cooperate, maximizing their individual capabilities: data mining discovered knowledge can be used as a complementary source of knowledge for the expert system, whereas expert knowledge can be used to guide the data mining process. This article summarizes different examples of systems where there is cooperation between expert knowledge and data mining discovered knowledge and reports our experience of such cooperation gathered from a medical diagnosis project called Intelligent Interpretation of Isokinetics Data, which we developed. From that experience, a series of lessons were learned throughout project development. Some of these lessons are generally applicable and others pertain exclusively to certain project types.
... FCMs have been used in modeling the supervision of distributed systems (Stylios, Georgopoulos, & Groumpos, 1997). They have also been used in operations research (Craiger, Goodman, Weiss, & Butler, 1996), web data mining (Hong & Han, 2002; Lee et al., 2002), as a back end to computer-based models, and in medical diagnosis (e.g. Georgopoulos, Malandraki, & Stylios, 2002). ...
Article
Full-text available
This paper presents the application of a fuzzy cognitive map (FCM) based theoretical framework and its associated modeling and simulation tool to strategy maps (SMs). Existing limitations of SMs are presented in a literature survey. The need for scenario based SMs with inherited ability to change scenarios dynamically as well as the missing element of time are highlighted and discussed upon. FCMs are presented as an alternative to overcome these shortfalls with the introduction of fuzziness in their weights and the robust calculation mechanism. An FCM tool is presented that allows simulation of SMs as well as interconnection of nodes (performance measures) in different SMs which enables the creation of SM hierarchies. An augmented FCM calculation mechanism that allows this type of interlinking is also presented. The resulting methodology and tool are applied to two Banks and the results of these case studies are presented.
... More recently, researchers in the Management and IS disciplines have built upon it. Tools, such as protocol analysis, neural networks, causal mapping, and cognitive mapping have been utilized to study expertise in systems and requirements analysis, software operations support, and data mining [20,35,40,63]. Many of these techniques attempted to capture the cognitive processing of the expert so it can be expressed in the form of rules in a computer system. ...
Article
Systems analysts are continually challenged with problems that require critical thinking. It is important to identify students' critical thinking ability during their education to ensure success after graduation. However, many students, unsure of their career path, choose IS as their major for reasons unrelated to their interests and abilities. Subsequently, these students do poorly in the core IS classes and are not able to find relevant jobs. This is a waste of many thousands of dollars and an unfortunate misapplication of the innate skills and talents the student possesses. This paper describes a research study designed to explore the research question: Can critical thinking skills (as measured by a widely used cognitive ability testing instrument) be used as a valid predictor of success in IS classes? We discuss our findings and offer suggestions concerning the use of this instrument to help students avoid selecting the wrong major.
... Electricity energy consumption [28], Medicine [29], Accident frequency [30]; Artificial Neural Network: Electricity energy consumption [28], Country investment risk [31], Stock market returns [32], Medicine [29], Interest rates [33], Disease [34], Corporate failure [35]; Bayesian Belief Networks (BBN): Student performance [26] & [25]; Fuzzy Clustering: Newspaper demand [36]. The most popular intelligent techniques for prediction are Artificial Neural Network, Decision Tree, Case-based Reasoning, Genetic Algorithm, Rough Set, Soft Computing (known as Hybrid Intelligent Systems), Operational Research, and other techniques such as SVM, Fuzzy logic, etc. [28]. Basically, most of the prediction applications in Table 1 are used to predict stocks, demand, rates, risk, events, etc., and only a few apply to people. ...
Article
Full-text available
Human Resource (HR) applications can be used to provide fair and consistent decisions and to improve the effectiveness of decision-making processes. Among the challenges for HR professionals is managing organizational talent, especially ensuring the right person is in the right job at the right time. For that reason, in this article, we describe the potential to implement one of the talent management tasks, i.e., identifying existing talent by predicting performance, as an HR application for talent management. This study suggests a potential HR system architecture for talent forecasting using past-experience knowledge, known as Knowledge Discovery in Databases (KDD) or Data Mining. This article consists of three main parts: the first part gives an overview of HR applications, prediction techniques and applications, the general view of data mining, and the basic concept of talent management in HRM; the second part examines the use of data mining techniques to solve one of the talent management tasks; and the third part proposes the potential HR system architecture for talent forecasting.
Article
Prediction of the stock market can play a vital role in attaining sustainable growth, and making the proper choices in the financial stock market can lead to attractive profits. Stock market prediction is a major challenge that requires advanced tools and techniques to analyze present and future data. Modern stock market institutions provide better self-trading and investment options, enabling ordinary people and traders to enter the market through numerous online applications and websites available on smart devices. Financial institutions are investing more and more in talent development so that investors may make the maximum money. Accurate prediction is difficult for many reasons, including market volatility and a collection of other interrelated and independent parameters that control the market value of a stock. Because of these factors, it is extremely difficult for any specialist in financial markets to accurately predict the market's rise and collapse. This paper proposes an artificial neural network (ANN) technique combined with fuzzy logic to predict the stock market over short durations with high accuracy. First, historical and real-time data are used to classify and gather information for prediction. Then the short-duration prediction is performed with a high accuracy of 96% and an error margin of 2.3% for a small instance. Through training, testing, and validation it is clear that, for sustainable short-duration prediction, an ANN integrated with fuzzy logic is a good choice.
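As a rough illustration of how fuzzy logic and a neural network can be combined for directional prediction, the sketch below fuzzifies returns into linguistic degrees ("down", "flat", "up") and trains a single logistic unit on them. The synthetic series, membership breakpoints, and one-unit network are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def tri(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peak 1 at b.
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzify(r):
    # Fuzzy features of a return: degrees of "down", "flat", "up".
    return np.stack([tri(r, -0.3, -0.15, 0.0),
                     tri(r, -0.15, 0.0, 0.15),
                     tri(r, 0.0, 0.15, 0.3)], axis=1)

# Synthetic, strongly autocorrelated return series (illustrative only).
n, phi = 3000, 0.9
r = np.zeros(n)
for t in range(1, n):
    r[t] = phi * r[t - 1] + 0.05 * rng.standard_normal()

X = fuzzify(r[:-1])              # today's fuzzy features
y = (r[1:] > 0).astype(float)    # does tomorrow's return rise?

# One-layer "neural network" (a logistic unit) trained by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = (((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == (y > 0.5)).mean()
print(f"train accuracy: {acc:.2f}")
```

Because the synthetic series is strongly autocorrelated, even this tiny fuzzy-input classifier beats a coin flip by a wide margin; a real system would of course use richer features and a deeper network.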
Article
Full-text available
Different machine learning algorithms are discussed in this literature review. These algorithms can be used for predicting the stock market. The prediction of the stock market is one of the challenging tasks that must be handled. In this paper, we discuss how machine learning algorithms can be used to predict stock values. Different attributes are identified that can be used to train the algorithms for this purpose. Other factors that can have an effect on the stock value are also discussed.
Article
Online communities are a rapidly growing knowledge repository that provides scholarly research, technical discussion, and social interactivity. This abundance of online information makes keeping up with new developments difficult for researchers and practitioners. Thus, we introduce a novel method that analyses both knowledge and social sentiment within an online community to discover the topical coverage of emerging technology and trace technological trends. The method utilizes the Weibull distribution and Shannon entropy to measure and link social sentiment with technological topics. Based on question-and-answer and social sentiment data from Zhihu, an online question and answer (Q&A) community with high-profile entrepreneurs and public intellectuals, we built an undirected weighted network and measured the centrality of nodes for technology identification. An empirical study on artificial intelligence technology trends, supported by expert knowledge-based evaluation and cognition, provides sufficient evidence of the method's ability to identify technology. We found that the social sentiment of hot technological topics follows a long-tailed statistical distribution. Topic popularity in the online community shows high similarity to emerging technology development trends. Finally, we discuss the findings in various professional fields that are widely applied to discover and track hot technological topics.
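The Shannon-entropy part of such a sentiment measurement can be sketched directly; the sentiment labels and the "contested vs. consensus" interpretation below are illustrative assumptions, not the paper's exact pipeline:

```python
import math
from collections import Counter

def sentiment_entropy(labels):
    """Shannon entropy (in bits) of a topic's sentiment label distribution.

    Higher entropy means opinion is spread across categories (a contested
    topic); lower entropy means consensus.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A topic with split opinion is more contested than one with near-consensus.
contested = ["pos", "neg", "neu", "pos", "neg", "neu"]
consensus = ["pos", "pos", "pos", "pos", "pos", "neg"]
print(sentiment_entropy(contested))  # log2(3) ≈ 1.585 bits (uniform over 3 labels)
print(sentiment_entropy(consensus))
```

Such per-topic entropy values could then serve as edge or node weights when linking sentiment to topics in a network, as the abstract describes.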
Chapter
This chapter draws on the methodology for e-service customization presented in chapter 5 and presents a modeling approach for realizing the service customization strategies. The described models are a combination of object-oriented business process and task models, and of fuzzy cognitive maps (FCM) that represent the interrelationships between business process performance and service customization. The modeling approach breaks down business process(es) and e-services components into customization activities and components respectively, and associates their performance indicators with customization strategic objectives. This chapter describes the concepts and models that are developed in each step in order to capture the strategic factors, service characteristics, process tasks and data entities, which are assumed in the customization process, and outlines the advantages of the modeling approach.
Chapter
Strategic Information Systems Planning (SISP) has been a continuing top concern for IS/IT management, since the mid 1980’s. Responding to the increasing interest in SISP, researchers have developed a large number of SISP methodologies and models. However, when organisations embark on planning for their information systems, they face difficulties and usually fail to gain the expected benefits. Strategic alignment and the identification of the business areas where the IT contribution is strategically important for the organisation are the most difficult problems in SISP. None of the existing SISP methodologies and models offers a complete solution. The approach presented in this chapter, utilises a Fuzzy Cognitive Map in order to align strategic objectives with IS opportunities that exist at the level of business processes. This chapter exemplifies fuzzy cognitive mapping in SISP, and illustrates how the strategic alignment between the business and IT domains can be realised.
Article
We empirically compare the Nelson-Siegel, Bliss and Svensson parsimonious yield curve models that are commonly used by central banks for monetary policy decisions and recommend the use of the former. Results shed light on the patterns of confounding effects in the Svensson model. We review estimation challenges and show implications of using different approaches for the initial values on the parameter stability and on the goodness of fit. Nelson-Siegel parameter estimates are more stable and conserve their intrinsic economical interpretation. The implications of excluding Treasury bills, constraining parameters and reducing clusters across time to maturity are also investigated. Full text available at SSRN: https://dx.doi.org/10.2139/ssrn.3600955
Article
Full-text available
Tidal power, also called tidal energy, is a form of hydropower that converts the energy of tides into useful forms of power, mainly electricity. Although not yet widely used, tidal power has potential for future electricity generation. Among the sources of renewable energy, tidal power has traditionally suffered from relatively high cost and the limited availability of sites with sufficiently high tidal flow velocities, thus constraining its total availability. The aim of this paper is to design a mechanism that produces uni-directional output from bi-directional rotation of an input shaft. The main motive behind this idea is to produce unidirectional motion of a shaft that can rotate in both directions under tidal waves and to generate electrical energy with the help of a dynamometer. This mechanism allows the shaft of a dynamo to rotate in a single direction for continuous generation of energy, independent of the direction of the tides.
Article
Identifying the development trends of emerging technologies with disruptive potential as early as possible is crucial for enterprise research and development (R&D) investment planning and government R&D strategic planning. Many researchers have used academic papers and patent data to identify trends in emerging technologies, although they have rarely made use of web news data focused on emerging technologies. Web news contains a large amount of social awareness data that represents the public's sense of and response to emerging technologies, and this social awareness data is of great significance for identifying the development trends of emerging technologies. Therefore, this article presents a research framework to identify the development trends of emerging technologies using patent analysis and web news data mining. In the research framework, first, we use patent analysis to analyze the evolution of topics in these emerging technologies and apply a topic model to study the changing patterns of the public's topics of concern regarding emerging technologies contained in web news. Second, we compare the results of the patent analysis with the public's sense of and response to emerging technologies contained in web news. Then, we apply an improved sentiment analysis to study the changing patterns of the public's expectations for emerging technologies contained in web news. Finally, by studying how the public's topics of concern and expectations for emerging technologies change over time, we identify the development trends of emerging technologies. Perovskite solar cell technology is used as a case study to analyze the effectiveness and feasibility of the framework. This article provides a new research perspective for identifying technology trends and contributes to technology forecasting methodology.
Chapter
This paper reviews applications of fuzzy cognitive maps (FCMs) in the field of human factors and ergonomics, with special consideration of human systems integration efforts.
Article
Case retrieval is a key issue in case-based design. The purpose of retrieval is to select the most suitable case for reuse. In the context of rule-based automatic case adaptation, the prerequisite for adapting a case automatically is that all the required adaptation rules can be found in the rule base. In addition, adaptation performance is also influenced by the quality of the required adaptation rules and the workload of the adaptation. To retrieve the most suitable case for rule-based automatic case adaptation, a new case retrieval method is proposed. First, the rule maturity of each case is calculated to measure the sufficiency and quality of the required adaptation rules. Then, the adaptability of each case is calculated to measure the adaptation workload. Finally, the suitability of each case is obtained by synthesizing its maturity and adaptability. The case with the largest suitability is selected as the starting point of the new design. Two comparative experiments were performed, and the results show that the cases retrieved by the proposed method were more suitable for automatic adaptation.
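A minimal sketch of the retrieval step, assuming (hypothetically) that suitability is a simple weighted sum of maturity and adaptability; the paper's actual synthesis formula may differ:

```python
def suitability(maturity, adaptability, alpha=0.5):
    # Weighted synthesis of rule maturity and case adaptability (both in [0, 1]).
    # alpha is a hypothetical trade-off weight, not a value from the paper.
    return alpha * maturity + (1 - alpha) * adaptability

def retrieve(cases):
    # Select the case with the largest suitability as the starting design.
    return max(cases, key=lambda c: suitability(c["maturity"], c["adaptability"]))

# Toy case base: "A" has mature rules but is hard to adapt; "B" balances both.
cases = [
    {"id": "A", "maturity": 0.9, "adaptability": 0.4},
    {"id": "B", "maturity": 0.7, "adaptability": 0.8},
    {"id": "C", "maturity": 0.5, "adaptability": 0.6},
]
print(retrieve(cases)["id"])  # "B" (suitability 0.75 vs 0.65 for A, 0.55 for C)
```

The point of the sketch is only the control flow: score every case on both criteria, combine the scores, and return the arg-max rather than the nearest neighbour.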
Conference Paper
A novel algorithm for the trajectory-tracking problem of a nonholonomic mobile robot is presented. The method combines a neural dynamics model with finite-time control and solves the sharp speed-jump problem at the initial time. The finite-time control method is easy to design and the resulting control law is continuous. It is improved by the neural dynamics model, which has stable, bounded, and smooth response characteristics. The effectiveness and efficiency are demonstrated by a simulation study.
Article
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of predicting next-day interest rate variation. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process, and the naive model are used as the main reference models. To show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast daily interest rate variations, as they provide good forecasting performance.
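The decompose-then-forecast scheme can be sketched as follows. A crude moving-average split stands in for the wavelet/EMD/VMD decomposition, and a least-squares AR model stands in for the PSO-tuned feedforward network, so this is a structural illustration only, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

def decompose(y, k=5):
    # Crude two-component split: smooth trend (moving average) plus residual.
    # A stand-in for the multiresolution decompositions used in the paper.
    kernel = np.ones(k) / k
    trend = np.convolve(y, kernel, mode="same")
    return trend, y - trend

def fit_ar(y, p=3):
    # Least-squares AR(p): a stand-in for the PSO-tuned feedforward network.
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast_next(y, coef):
    # One-step-ahead forecast from the last p observations.
    return float(y[-len(coef):] @ coef)

# Synthetic "interest-rate variation" series: slow cycle plus noise.
n = 400
y = np.sin(np.linspace(0, 12, n)) + 0.1 * rng.standard_normal(n)

# Forecast each component separately, then sum the component forecasts.
trend, resid = decompose(y)
pred = forecast_next(trend, fit_ar(trend)) + forecast_next(resid, fit_ar(resid))
naive = y[-1]  # random-walk reference forecast
print(f"hybrid={pred:.3f}  naive={naive:.3f}")
```

The design choice being illustrated is divide-and-conquer: each component is simpler (smoother or more stationary) than the raw series, so a per-component model plus recombination can beat a single model fitted to the raw data.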
Article
Modelling the structure of risk-free rates and their relation to other economic and financial variables during different stages of the economic cycles has attracted much interest from both the theoretical and practical perspectives. The previous literature has emphasized the deployment of expert systems and knowledge-discovery approaches motivated by the need to address the limitations of the econometric models. However, it has failed to address the interpretability aspects and, more importantly, the need to provide methodological support that allows the deployment of such techniques in a more systematic way. This approach entails the definition of a process that includes the usual steps taken by experts to address similar problems and allows the relative merits of different techniques in relation to common goals and objectives to be gauged. This paper addresses the interpretability and the lack of methodological support by proposing a knowledge-discovery methodology that includes a minimal common number of steps to model, analyse, evaluate and deploy different non-linear techniques and models. Furthermore, the interpretability is addressed through the use of open-box techniques, such as decision trees. The proposed methodology helps to discover and describe hidden patterns, allowing for the study and characterization of economic cycles, and economic cycle stages, as well as the description of the historic relationships between interest rates and other relevant economic variables. These patterns can also be used in the forecasting of economic cycle stages, interest rates and other related variables of concern. The output of the methodology can provide actionable information for market agents, such as monetary authorities, financial institutions, and individual investors, as well as for the academic community, to increase further the knowledge and understanding of financial markets, thus enriching and complementing existing financial theories.
Article
Full-text available
This paper presents the application of a Fuzzy Cognitive Map (FCM) based theoretical framework, and its associated modeling and simulation tool, to Strategy Maps (SMs). Existing limitations of SMs are presented in a literature survey. The need for scenario-based SMs with an inherited ability to change scenarios dynamically, as well as the missing element of time, are highlighted and discussed. FCMs are presented as an alternative to overcome these shortfalls through the introduction of fuzziness in their weights and a robust calculation mechanism. An FCM tool is presented that allows simulation of SMs as well as interconnection of nodes (performance measures) in different SMs, which enables the creation of SM hierarchies. An augmented FCM calculation mechanism that allows this type of interlinking is also presented. The resulting methodology and tool are applied to two banks, and the results of these case studies are presented.
Conference Paper
Full-text available
Nowadays, higher education institutions (HEIs) face the need for constant monitoring of users' interaction with Learning Management Systems (LMSs), in order to identify key areas for potential improvement. In fact, LMSs under blended (b-) learning mode can efficiently support online learning environments (OLEs) at HEIs. An important challenge is to provide flexible solutions, to which intelligent models could contribute, involving artificial intelligence and uncertainty modelling, e.g., via Fuzzy Logic (FL). This study addresses the hypothesis that the structural characteristics of a Fuzzy Cognitive Map (FCM) can efficiently model the way LMS users interact with it, by estimating their Quality of Interaction (QoI) within a b-learning context. This work proposes the FCM-QoI model, consisting of 14 input concepts and one output concept, with their dependencies and trends, considering one academic year of Moodle LMS use in two dance disciplines (i.e., the Rare and Contemporary Dances). The experimental results reveal that the proposed FCM-QoI model can represent the concept interconnections and causal dependencies of Moodle LMS users' QoI, helping educators at HEIs to holistically visualize, understand, and assess stakeholders' needs. In general, the results presented here could shed light upon the design aspects of educational scenarios, and also help those involved in cultural preservation and exploitation initiatives, such as the i-Treasures project (http://i-treasures.eu/).
Article
Full-text available
In this study, an instrument was developed to measure organizational change. For this purpose, the literature was reviewed to extract the dimensions of change in organizations and to determine their frequencies. The change dimensions with high frequencies were identified, and their relative priorities were examined using the Analytic Hierarchy Process. The Fuzzy Cognitive Mapping method was used to construct the survey questions for the identified dimensions.
Chapter
Cognitive maps (CMs) were initially for graphical representation of uncertain causal reasoning. Later Kosko suggested Fuzzy Cognitive Maps (FCMs) in which users freely express their opinions in linguistic terms instead of crisp numbers. However, it is not always easy to assign some linguistic term to a causal link. In this paper we suggest a new type of CMs namely, Belief Degree-Distributed FCMs (BDD-FCMs) in which causal links are expressed by belief structures which enable getting the links’ evaluations with distributions over the linguistic terms. We propose a general framework to construct BDD-FCMs by directly using belief structures or other types of structures such as interval values, linguistic terms, or crisp numbers. The proposed framework provides a more flexible tool for causal reasoning as it handles any kind of structures to evaluate causal links. We propose an algorithm to find a similarity between experts judgments by BDD-FCMs for a case study in Energy Policy evaluation.
Article
This project examines the roles and interrelationships among three main policy instruments, namely the exchange rate, the inflation rate, and the interest rate, together with real GDP. It provides, first, a univariate data analysis to describe each variable; second, a comparison of the ARMA and Neural Network (NN) models to evaluate their estimation and forecasting performances; and third, a multivariate data analysis to explain the business cycles among these variables in Turkey from January 1987 to December 2007 using monthly data. Since there have not been many empirical studies regarding these issues, the contribution of this project is expected to develop a structure for econometric model construction and policy evaluation for the Turkish economy through three channels. The first channel provides a detailed descriptive data analysis of the four variables and determines their contemporaneous and causal relationships for different sub-sample periods. The second channel provides a comparative analysis of ARMA and NN models for estimation and static forecasting over the period 2008:01-2008:07. The third channel provides a general view of business cycles and short-run forecasts by NN models for two different sample periods. Different lengths of sample period are selected for each variable, covering both the economic crises and different policy applications, in order to compare forecast performance and to provide information about the reasons for, and consequences of, different economic policy applications. It is concluded that: (i) statistical data analysis shows that the distribution of the economic series changes from one period to another; (ii) Model1 and the NN model for the inflation rate, Model2 and the NN model for the exchange rate and interest rate, and Model2 and the ARMA model for real GDP performed noticeably better; (iii) Model2 provides better forecast performance than Model1 for the business cycles.
Article
Safety culture describes how safety issues are managed within an enterprise. How can safety culture be made strong and sustainable? How can we be sure that safety is a prime responsibility or main focus for all types of activity? How can safety culture be improved, and how can its most vulnerable issues be identified? These are important questions for safety culture. A large number of studies focus on identifying and building a hierarchy of the main indicators of safety culture. However, there are only a few methods to assess an organization's safety culture, and those methods are often simplistic. In this paper we describe a novel approach for safety culture assessment using Belief Degree-Distributed Fuzzy Cognitive Maps (BDD-FCMs). Cognitive maps were initially presented for graphical representation of uncertain causal reasoning. Later, Kosko suggested Fuzzy Cognitive Maps (FCMs), in which users freely express their opinions in linguistic terms instead of crisp numbers. However, it is not always easy to assign a linguistic term to a causal link. With BDD-FCMs, causal links are expressed by belief structures, which enable the links' evaluations to be obtained as distributions over the linguistic terms. In addition, we propose a general framework to construct BDD-FCMs by directly using belief structures or other types of structures such as intervals, linguistic terms, or crisp numbers. The proposed framework provides a more flexible tool for causal reasoning, as it handles different structures to evaluate causal links.
Article
The fundamental step in measuring the robustness of a system is the synthesis of the so-called Process Map. This is generally based on the user's raw data material. Process Maps are of fundamental importance to understanding the nature of a system in that they indicate which variables are causally related and which are particularly important. This paper presents the system map, or business structure map, to understand business criteria by studying the various aspects of the company. The business structure map, knowledge map, or process map is used to increase the growth of the company by giving useful measures according to the business criteria. This paper also deals with different company strategies to reduce risk factors. The Process Map is helpful for building such knowledge successfully. Making decisions from such a map in a highly complex situation requires more knowledge and resources.
Article
This work presents a novel automated approach to constructing topic knowledge maps from knowledge structures, followed by its application to an internationally renowned journal. Knowledge structures are diagrams showing the important components of the knowledge under study. Knowledge maps identify the locations of objects and illustrate the relationships among objects. In our study, the important components derived from knowledge structures are used as objects to be spotted in a topic knowledge map. The purpose of our knowledge structures is to find the major topics serving as subjects of article collections, as well as the related methods employed in the published papers. The purpose of topic knowledge maps is to transform high-dimensional objects (topic, paper, and citation frequency) into a 2-dimensional space to help understand the complicated relatedness among high-dimensional objects, such as the degree of relatedness between an article and a topic. First, we adopt the chi-square test of independence to examine the independence of topics and apply a genetic algorithm to choose the topic selection with the best fitness value to construct knowledge structures. Additionally, high-dimensional relationships among objects are transformed into a 2-dimensional space using the multidimensional scaling method. The optimal transformation coordinate matrix is also determined using a genetic algorithm to preserve the original relations among objects and construct appropriate topic knowledge maps.
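The multidimensional scaling step can be illustrated with classical (Torgerson) MDS, a standard stand-in for the GA-optimized coordinate search described in the abstract; the distance matrix below is synthetic:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed objects in `dim`-D space from a
    pairwise distance matrix D, preserving distances as well as possible."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]    # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Four "objects" (e.g., topics or papers) with pairwise distances that happen
# to be exactly Euclidean (points on a line), so the embedding is exact.
D = np.array([[0., 1., 2., 3.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [3., 2., 1., 0.]])
X = classical_mds(D)
recovered = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.round(recovered, 2))  # approximately reproduces D
```

For distances that are not exactly Euclidean, the recovered distances only approximate D, which is why the paper optimizes the transformation with a genetic algorithm instead of using the closed-form solution.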
Conference Paper
Full-text available
This paper proposes the utilization of fuzzy cognitive maps (FCMs) for modeling educational management. It presents a case study of a Thailand science-based technology school (SBTS). Both qualitative and quantitative analyses based on the CIPP model were used to identify critical success factors (CSFs) of the SBTS project. An FCM was applied to the CSFs in order to synthesize the SBTS model. The model represents the impact of the CSFs and their relationships to one another. Therefore, the proposed model is expected to be a useful tool for education management in the future.
Chapter
Full-text available
The use of cyberspace to disseminate radical materials and messages has now become the predominant method used by extremists to recruit and radicalize individuals to their cause. The phenomenon of online radicalization is increasing, presenting pressing security concerns. In direct response to the emerging threats and risks arising from online radicalization, global efforts are now being made to monitor and disrupt contemporary cyber avenues of terrorist recruitment. Individuals, particularly young computer-literate males, are becoming self-radicalized through access to sophisticated online materials promoting and justifying extreme views and actions. The detection of these self-radicalized individuals is challenging, and detecting the threat of particular individuals becoming self-radicalized is even more so. This chapter seeks to identify factors which can be used as indicators of online radicalization leading to the presentation of an integrated model of online self-radicalization (FCM). The model attempts to support and inform the classification of individual profiles serving to tackle terrorist activities in the future. Keywords: Radicalization Process, Online Self-Radicalization, Radicalized Individual Profiles, Fuzzy Cognitive Maps (FCM), Causal Relationships
Article
The retail sector environment is characterized by intense competitive pressure, an ever-changing portfolio comprising hundreds of different products, ever-changing customer requirements, and the need to stand out in a mass market. Considering that giant retailers work together with their suppliers, each independent operation is seen as a comprehensive structure consisting of thousands of sub-processes. In short, the retail industry's combination of dynamism and cooperative yet competitive work is a rare one. Naturally, in such a sector, businesses of all sizes strive in many respects to create an efficient and low-cost structure. The collaborative planning, forecasting and replenishment (CPFR) model, a scheme integrating trading partners' internal and external information systems, is proposed to assist in establishing a more effective supply chain structure in the retail industry. Although CPFR can provide many benefits, there have been many failed implementations. The aim of this study is to determine the factors that will support better implementation of the CPFR strategy in the retail industry and to analyze them using the fuzzy cognitive map (FCM) approach. FCMs have proven particularly useful for solving problems in which a number of decision variables and uncontrollable variables are causally interrelated. A CPFR model made up of three sub-systems, namely information sharing, decision synchronization and incentive alignment, is proposed, and "what-if" scenarios for the proposed model are developed and interpreted. To our knowledge, this is the first study that uses FCMs to assess CPFR success factors.
Article
The cognitive map (CM) is a new intelligent method. Compared with expert systems and neural networks, it has several desirable advantages: it is relatively easy to use for representing structured knowledge, and inference can be computed by numeric matrix operations instead of explicit IF/THEN rules. However, in order to exhibit these advantages of CMs, the first step is that correct CMs must be obtained. Traditional approaches for obtaining CMs, including the questionnaire method, the brainstorming method, and the sample learning method, rely mainly on the experience of domain experts. Because these methods put much emphasis on subjective factors and neglect objective data resources, they always lose some information. Therefore, this paper proposes a new methodology for mining CMs based on data resources, which mainly includes database preprocessing technology, an optimization algorithm for weight coefficients, and a simplification strategy for CMs. Experimental research based on a finance database was conducted, and the results show that the new method can mine all possible relationships among all nodes to form the CMs and can also simplify them according to the significance of the relationships; the CMs mined by the new method contain more information than CMs obtained by traditional approaches.
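The claim that CM inference reduces to numeric matrix operations can be sketched with the standard FCM-style update A(t+1) = f(A(t) W); the toy three-concept map, its weights, and the sigmoid squashing function are illustrative assumptions:

```python
import numpy as np

def fcm_infer(W, state, steps=20):
    """Iterate cognitive-map inference A(t+1) = f(A(t) @ W) with a sigmoid f.

    W[i, j] is the causal weight of concept i on concept j; the whole
    inference is matrix arithmetic, with no explicit IF/THEN rules.
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(steps):
        state = f(state @ W)
    return state

# Toy 3-concept map: C0 -> C1 (+0.8), C1 -> C2 (+0.6), C2 -| C0 (-0.5).
W = np.array([[ 0.0, 0.8, 0.0],
              [ 0.0, 0.0, 0.6],
              [-0.5, 0.0, 0.0]])
print(np.round(fcm_infer(W, np.array([1.0, 0.0, 0.0])), 3))
```

With small weights and a sigmoid, the iteration is a contraction, so the map settles into a fixed point that represents the steady-state activation of each concept.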
Article
Full-text available
Organizations are taking advantage of “data-mining” techniques to leverage the vast amounts of data captured as they process routine transactions. Data mining is the process of discovering hidden structure or patterns in data. However, several of the pattern discovery methods in data-mining systems have the drawbacks that they discover too many obvious or irrelevant patterns and that they do not leverage to a full extent valuable prior domain knowledge that managers have. This research addresses these drawbacks by developing ways to generate interesting patterns by incorporating managers' prior knowledge in the process of searching for patterns in data. Specifically, we focus on providing methods that generate unexpected patterns with respect to managerial intuition by eliciting managers' beliefs about the domain and using these beliefs to seed the search for unexpected patterns in data. Our approach should lead to the development of decision-support systems that provide managers with more relevant patterns from data and aid in effective decision making.
Article
Full-text available
The authors present Pool2, a generic system for cognitive map development and decision analysis that is based on negative-positive-neutral (NPN) logics and NPN relations. NPN logics and relations are extensions of two-valued crisp logic, crisp (binary) relations, and fuzzy relations; they assume logic values in the NPN interval [-1, 1] instead of values in [0, 1]. A theorem is presented that provides conditions for the existence and uniqueness of heuristic transitive closures of an NPN relation. It is shown that NPN logics and NPN relations can be used directly to model a target world with a combination of NPN relationships of attributes and/or concepts for the purposes of cognitive map understanding and decision analysis. Two algorithms are presented, for heuristic transitive closure computation and for heuristic path searching, respectively. Basic ideas are illustrated by example, and a comparison is made between this approach and others.
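One plausible reading of a heuristic transitive closure over NPN values in [-1, 1] is sketched below: path strength is the product of edge values, and competing paths keep the value of strongest magnitude. This is an illustrative assumption, not Zhang et al.'s exact algorithm:

```python
def npn_compose(R, S):
    """One composition step over NPN values in [-1, 1]: path strength is the
    product of edge values; among competing paths the strongest magnitude wins.
    """
    n = len(R)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            best = 0.0
            for k in range(n):
                v = R[i][k] * S[k][j]
                if abs(v) > abs(best):
                    best = v
            out[i][j] = best
    return out

def heuristic_closure(R, steps=10):
    """Iterate composition with R, merging direct and indirect effects."""
    closure = [row[:] for row in R]
    for _ in range(steps):
        step = npn_compose(closure, R)
        for i in range(len(R)):
            for j in range(len(R)):
                if abs(step[i][j]) > abs(closure[i][j]):
                    closure[i][j] = step[i][j]
    return closure
```

Note how a positive edge followed by a negative edge yields a negative indirect effect, which is the behavior the [-1, 1] interval is meant to capture.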
Article
Full-text available
This study evaluated measures for making comparisons of errors across time series. We analyzed 90 annual and 101 quarterly economic time series. We judged error measures on reliability, construct validity, sensitivity to small changes, protection against outliers, and their relationship to decision making. The results lead us to recommend the Geometric Mean of the Relative Absolute Error (GMRAE) when the task involves calibrating a model for a set of time series. The GMRAE compares the absolute error of a given method to that from the random walk forecast. For selecting the most accurate methods, we recommend the Median RAE (MdRAE) when few series are available and the Median Absolute Percentage Error (MdAPE) otherwise. The Root Mean Square Error (RMSE) is not reliable, and is therefore inappropriate for comparing accuracy across series.
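The three recommended measures are simple to compute. A minimal sketch (function and argument names are my own) follows the definitions in the abstract, with the random-walk forecast as the benchmark in the relative measures:

```python
import math
import statistics

def relative_absolute_errors(actual, forecast, rw_forecast):
    """RAE per period: |forecast error| / |random-walk error|."""
    return [abs(a - f) / abs(a - r)
            for a, f, r in zip(actual, forecast, rw_forecast)]

def gmrae(actual, forecast, rw_forecast):
    """Geometric mean of the relative absolute errors."""
    raes = relative_absolute_errors(actual, forecast, rw_forecast)
    return math.exp(sum(math.log(r) for r in raes) / len(raes))

def mdrae(actual, forecast, rw_forecast):
    """Median relative absolute error."""
    return statistics.median(relative_absolute_errors(actual, forecast, rw_forecast))

def mdape(actual, forecast):
    """Median absolute percentage error."""
    return statistics.median(abs((a - f) / a) * 100 for a, f in zip(actual, forecast))
```

A GMRAE below 1 means the method beat the random walk on average in the geometric sense; the medians are the outlier-resistant alternatives the study recommends for method selection.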
Article
Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.
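The systematic causal propagation described here is usually realized as repeated matrix-vector updates. The sketch below is one common modern convention (a tanh squashing function and clamped input concepts), not Kosko's original formulation:

```python
import numpy as np

def fcm_step(state, W, f=np.tanh):
    """One round of causal propagation: each concept's new activation is a
    squashed, weighted sum of its causes. W[i, j] in [-1, 1] is the causal
    edge from concept j to concept i (an illustrative convention).
    """
    return f(W @ state)

def fcm_run(state, W, clamp=None, iters=50):
    """Iterate the map; `clamp` pins chosen concepts (e.g. policy inputs)
    to fixed activations on every round.
    """
    for _ in range(iters):
        state = fcm_step(state, W)
        if clamp:
            for idx, val in clamp.items():
                state[idx] = val
    return state
```

With a single positive edge from a clamped cause to an effect, the effect settles at the squashed edge weight, which is the forward-chaining behavior the abstract describes.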
Article
Does the use of information on the past history of the nominal interest rates and inflation entail improvement in forecasts of the ex ante real interest rate over its forecasts obtained from using just the past history of the realized real interest rates? To answer this question we set up a univariate unobserved components model for the realized real interest rates and a bivariate model for the nominal rate and inflation which imposes cointegration restrictions between them. The two models are estimated under normality with the Kalman filter. It is found that the error-correction model provides more accurate one-period ahead forecasts of the real rate within the estimation sample whereas the unobserved components model yields forecasts with smaller forecast variances. In the post-sample period, the forecasts from the bivariate model are not only more accurate but also have tighter confidence bounds than the forecasts from the unobserved components model.
Article
Complex social systems are difficult to represent. Relationships between social forces demand feedback. For example, the causal connection between commodity price and consumer demand is a feedback system. Price increase tends to decrease demand for some commodities. On the other hand, an increased demand tends to elevate price. A stable system settles into equilibrium. A dynamic system is needed to model shifts in equilibrium brought about by changes in the causal environment. The knowledge-based expert system lacks the intrinsic structure for modeling these effects. Internally, all expert systems depend on these representations. Some attempt to simulate unrestricted graphs with virtual registers and loop counters. The result is a semantic gap between the internal representation of the social system and the social system itself. This article presents the Fuzzy Cognitive Map (FCM) alternative to the expert system. We first describe the FCM. We then diagram several social systems. Finally, we show a method for combining credibility-weighted FCMs to achieve a single global knowledge base.
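The combination of credibility-weighted FCMs into a single global knowledge base can be sketched as a credibility-weighted average of the experts' edge matrices. This is a common reading of the combination rule, hedged accordingly; the function name is an assumption:

```python
import numpy as np

def combine_fcms(maps, credibilities):
    """Merge several experts' FCM edge matrices into one global map.

    Each matrix is scaled by its expert's credibility weight and the results
    are averaged, so a strongly trusted expert dominates conflicting edges.
    """
    total = sum(credibilities)
    return sum(c * np.asarray(m) for c, m in zip(maps, credibilities)) / total
```

A useful property of this scheme is that experts who disagree on an edge's sign partially cancel, while agreement reinforces the edge.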
Article
The availability of relatively inexpensive computing power as well as the ability to obtain, store, and retrieve huge amounts of data has spurred interest in data mining. In a majority of data mining applications, most of the effort is spent in cleaning the data and extracting useful patterns in the data. However, a critical step in refining the extracted knowledge, especially in dynamic environments, is often overlooked. This paper focuses on knowledge refinement, a necessary process to obtain and maintain current knowledge in the domain of interest. The process of knowledge refinement is necessary not only to have accurate and effective knowledge bases but also to dynamically adapt to changes. KREFS, a knowledge refinement system, is presented and evaluated in this paper. KREFS refines knowledge by intelligently self-guiding the generation of new training examples. Avoiding typical problems associated with dependency on domain knowledge, KREFS identifies and learns distinct concepts from scratch. In addition to improving upon features of existing knowledge refinement systems, KREFS provides a general framework for knowledge refinement. Compared to other knowledge refinement systems, KREFS is shown to have more expressive power, which renders it applicable in more realistic applications involving the management of knowledge.
Article
In this paper, we are interested in mining the data with natural ordering according to some attributes, and the time series data is one of this kind of data. The problem of mining the time series data is that the quantity at different time may be very close or even equal to each other. To solve this problem, we propose a fuzzy linguistic summary as one of the data mining functions in our KDD (Knowledge Discovery in Databases) system to discover useful knowledge from the database. To help users to premine a database, our system also provides a graphic display tool so that the users can predetermine what knowledge could be discovered from the database. To demonstrate that our system works correctly, we use our system to analyze a time series data problem, the resources usage analysis problem, to predict the utilization ranks of different resources at a specific time.
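A fuzzy linguistic summary of the kind described typically scores a sentence such as "most resource-usage values are high" by combining a fuzzy membership function with a fuzzy quantifier. The ramp breakpoints below are illustrative assumptions, not the paper's parameters:

```python
def high(x, low=0.6, full=0.9):
    """Membership of a usage value in the fuzzy set 'high' (linear ramp)."""
    if x <= low:
        return 0.0
    if x >= full:
        return 1.0
    return (x - low) / (full - low)

def most(p):
    """Fuzzy quantifier 'most' over a proportion p (ramp between 0.3 and 0.8)."""
    if p <= 0.3:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.3) / 0.5

def summary_truth(values):
    """Truth of the linguistic summary 'most values are high'."""
    p = sum(high(v) for v in values) / len(values)
    return most(p)
```

The resulting truth value in [0, 1] is what lets the system rank candidate summaries and present only the most valid ones to the user.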
Article
The prediction of economic and financial variables is a critical task for many decision makers. One of the most important variables is the interest rate, which strongly affects other economic and financial parameters. The literature is rife with negative results in forecasting time series in financial markets: the predictive techniques have been unable to outperform the random walk model at a statistically significant level. However, knowledge-based methods can reverse this phenomenon. This paper presents a comparative investigation of the predictability of interest rates through neural networks, case-based reasoning and their integration. In the integrated model, case-based reasoning serves as a filter by providing estimates which are then used as input into the neural network model. Predictions from these models are compared against each other and also with the random walk model. Overall, the prediction performance of the forecasting models for the US interest rate was superior to the random walk model. On the other hand, none of the models outperformed the random walk model at a statistically significant level in forecasting the Korean interest rate.
Article
In this paper we investigate ways to use prior knowledge and neural networks to improve multivariate prediction ability. Daily stock prices are predicted as a complicated real-world problem, taking non-numerical factors such as political and international events into account. We have studied types of prior knowledge which are difficult to insert into initial network structures or to represent in the form of error measurements. We make use of prior knowledge of stock price predictions and newspaper information on domestic and foreign events. Event-knowledge is extracted from newspaper headlines according to prior knowledge. We choose several economic indicators, also according to prior knowledge, and input them together with event-knowledge into neural networks. The use of event-knowledge and neural networks is shown to be effective experimentally: the prediction error of our approach is smaller than that of multiple regression analysis at the 5% level of significance. © 1997 by John Wiley & Sons, Ltd.
Article
Knowledge Discovery in Databases creates the context for developing the tools needed to control the flood of data facing organizations that depend on ever-growing databases of business, manufacturing, scientific, and personal information.
Article
Ad hoc techniques - no longer adequate for sifting through vast collections of data - are giving way to data mining and knowledge discovery for turning corporate data into competitive business advantage.
Article
Electronic Commerce (EC) has offered a new channel for instant on-line shopping. However, there are too many products available from a great number of virtual stores on the Internet for Internet shoppers to select from. On-line one-to-one marketing therefore becomes a great assistance to Internet shoppers. One of the most important marketing resources is the prior daily transaction records in the database. The great amount of data not only gives the statistics, but also offers a resource of experience and knowledge. It is quite natural that marketing managers can perform data mining on the daily transactions and treat shoppers the way they prefer. However, data mining on a significant amount of transaction records requires efficient tools. Data mining, the automatic or semi-automatic exploration and analysis of a large amount of data items in a database, can discover significant patterns and rules underlying the database. The knowledge can be equipped in the on-line marketing system to promote Internet sales. The purpose of this paper is to develop a procedure for mining association rules from a database to support on-line recommendation. Through customer and product fragmentation, product recommendation based on the hidden habits of customers in the database becomes very meaningful. The proposed data mining procedure consists of two essential modules. One is a clustering module based on a neural network, the Self-Organizing Map (SOM), which performs affinity grouping tasks on a large amount of database records. The other is a rule-extraction module employing rough set theory that can extract association rules for each homogeneous cluster of data records as well as the relationships between different clusters. The implemented system was applied to a sample of sales records from a database for illustration.
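The association-rule metrics underlying such a recommendation procedure are support and confidence. The toy miner below is a stand-in for the paper's SOM-clustering plus rough-set pipeline (which it does not attempt to reproduce); it only shows how pairwise rules clear the two thresholds:

```python
from itertools import combinations

def association_rules(transactions, min_support=0.4, min_conf=0.7):
    """Naive pairwise rule miner: emits rules {x} -> {y} as tuples
    (x, y, support, confidence) when both thresholds are met.
    """
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    rules = []
    for a, b in combinations(items, 2):
        for x, y in ((a, b), (b, a)):
            sup_x = sum(1 for t in transactions if x in t) / n
            sup_xy = sum(1 for t in transactions if x in t and y in t) / n
            if sup_xy >= min_support and sup_x > 0 and sup_xy / sup_x >= min_conf:
                rules.append((x, y, sup_xy, sup_xy / sup_x))
    return rules
```

In an on-line recommendation setting, the consequents of high-confidence rules whose antecedents match a shopper's basket become the recommended products.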
Article
Causal knowledge is often cyclic and fuzzy, and is thus hard to represent in the form of trees. A fuzzy cognitive map (FCM) can represent causal knowledge as a signed directed graph with feedback. It provides an intuitive framework in which to formulate decision problems as perceived by decision makers and to incorporate the knowledge of experts. This paper proposes a fuzzy time cognitive map (FTCM), an FCM whose arrows are extended with time relationships. We first discuss the characteristics and basic assumptions of the FCM, and present a description of causal propagation in an FCM with causalities in the negative-positive-neutral interval [-1, 1]. We then develop a value-preserving method of translating an FTCM with differing time lags into one in which every lag is a single unit. With the FTCM, we illustrate analysing how the causalities among factors change with the lapse of time.
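The lag-translation idea can be sketched concretely: an edge with lag k is replaced by a chain through k-1 fresh dummy nodes, each link carrying a unit lag. Placing the original weight on the first link and 1.0 on the rest keeps the product along the chain, and hence the propagated value, unchanged. The edge tuple format below is my own assumption, not the paper's notation:

```python
def to_unit_lags(edges, n_nodes):
    """Rewrite FTCM edges (src, dst, weight, lag) so every edge has lag 1.

    An edge with lag k becomes a chain through k-1 fresh dummy nodes
    (numbered from n_nodes upward); the original weight sits on the first
    link and the remaining links carry 1.0, preserving the product.
    """
    out, next_node = [], n_nodes
    for src, dst, w, lag in edges:
        if lag <= 1:
            out.append((src, dst, w, 1))
            continue
        chain = [src] + list(range(next_node, next_node + lag - 1)) + [dst]
        next_node += lag - 1
        out.append((chain[0], chain[1], w, 1))
        for a, b in zip(chain[1:-1], chain[2:]):
            out.append((a, b, 1.0, 1))
    return out
```

After this translation, the whole map can be simulated with a single synchronous update rule, since all causal effects now take exactly one time step per edge.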
Article
There exists a large number of quantitative extrapolative forecasting methods which may be applied in research work or implemented in an organizational setting. For instance, the lead article of this issue of the Journal of Forecasting compares the ability to forecast the future of over twenty univariate forecasting methods. Forecasting researchers in various academic disciplines as well as practitioners in private or public organizations are commonly faced with the problem of evaluating forecasting methods and ultimately selecting one. Thereafter, most become advocates of the method they have selected. On what basis are choices made? More specifically, what are the criteria used or the dimensions judged important? If a survey was taken among academicians and practitioners, would the same criteria arise? Would they be weighted equally? Before you continue reading this note, write on a piece of paper your criteria in order of importance and answer the last two questions. This will enable you to see whether or not you share the same values as your colleagues and test the accuracy of your perception.
Conference Paper
Recurrent neural networks can be trained to behave like deterministic finite-state automata (DFAs) and methods have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge of a subset of the DFA state transitions into recurrent neural networks, it is shown that recurrent neural networks are able to perform rule refinement. The results from training a recurrent neural network to recognize a known nontrivial randomly generated regular grammar show that not only do the networks preserve correct prior knowledge, but they are able to correct through training inserted prior knowledge which was wrong. By wrong, it is meant that the inserted rules were not the ones in the randomly generated grammar
Conference Paper
The authors propose a novel unified approach for integrating explicit knowledge and learning by example in recurrent networks. The hypothesis is that for a model to be effective, this integration should be as uniform as possible. The authors propose an architecture composed of two cooperating subnets. The first one is designed in order to inject the available explicit knowledge, whereas the second one is learned to allow management of uncertain information. Learning is conceived as a refinement process. The authors report preliminary results for a problem of isolated word recognition to evaluate the proposed model in practice
Article
A partial taxonomy for cognitive maps is provided. The notions of NPN (negative-positive-neutral) logic, NPN relations, coupled-type neurons, and coupled-type neural networks are introduced and used as a framework for cognitive map modeling. D-POOL, a cognitive-map-based architecture for the coordination of distributed cooperative agents, is presented. D-POOL consists of a collection of distributed nodes. Each node is a cognitive-map-based metalevel system coupled with a local expert/database system (or agent). To solve a problem, a local node first pools cognitive maps from relevant agents in an NPN relation that retains both negative and positive assertions. New cognitive maps are then derived and focuses of attention are generated. With the focuses, a solution is proposed by the local node and passed to the remote systems. The remote systems respond to the proposal, and D-POOL strives for a cooperative or compromised solution through coherent communication and perspective sharing. The utility of D-POOL is demonstrated using two examples in distributed group decision support.
Article
Standard algorithms for explanation-based learning require complete and correct knowledge bases. The KBANN system relaxes this constraint through the use of empirical learning methods to refine approximately correct knowledge. This knowledge is used to determine the structure of an artificial neural network and the weights on its links, thereby making the knowledge accessible for modification by neural learning. KBANN is evaluated by empirical tests in the domain of molecular biology. Networks created by KBANN are shown to be superior, in terms of their ability to correctly classify unseen examples, to randomly initialized neural networks, decision trees, "nearest neighbor" matching, and standard techniques reported in the biological literature. In addition, KBANN's networks improve the initial knowledge in biologically interesting ways.
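The core idea of mapping rules onto network structure can be sketched for a single propositional Horn rule. The encoding below is illustrative of the KBANN style (strong weights plus a bias that makes the unit behave like the rule's AND); the constant OMEGA and the helper names are assumptions, and the real system adds further links and layers:

```python
import math

OMEGA = 4.0  # strong initial weight for rule-derived links (assumed value)

def rule_to_unit(positive, negated):
    """Encode 'head :- positives, not negateds' as one sigmoid unit.

    Positive antecedents get weight +OMEGA, negated ones -OMEGA, and the
    bias is set so the unit activates only when all antecedents hold.
    """
    weights = {p: OMEGA for p in positive}
    weights.update({n: -OMEGA for n in negated})
    bias = -(len(positive) - 0.5) * OMEGA
    return weights, bias

def activate(weights, bias, inputs):
    """Sigmoid activation of the unit on a {name: 0/1} truth assignment."""
    net = bias + sum(w * inputs.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))
```

Because the rule is encoded as real-valued weights rather than symbols, gradient training can subsequently adjust it, which is precisely how KBANN makes approximate domain theories refinable.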
The prediction of interest rates using artificial neural network. Proceedings of The First Asia Pacific Conference of Decision Science Institute (pp. 975–984)
  • T H Hong
  • I G Han
Hong, T. H., & Han, I. G. (1996). The prediction of interest rates using artificial neural network. Proceedings of The First Asia Pacific Conference of Decision Science Institute (pp. 975–984). Hong Kong.
Pool2: a generic system for cognitive map development and decision analysis A cognitive-map-based approach to the coordination of distributed cooperative agents
  • W R Zhang
  • S S Chen
  • J C Bezdek
Zhang, W. R., Chen, S. S., & Bezdek, J. C. (1989). Pool2: a generic system for cognitive map development and decision analysis. IEEE Transactions on Systems, Man, and Cybernetics, 19, 31–39. Zhang, W. R., Chen, S. S., Wang, W., & King, R. S. (1992). A cognitive-map-based approach to the coordination of distributed cooperative agents. IEEE Transactions on Systems, Man, and Cybernetics, 22, 103–114.
Fuzzy cognitive maps Unexpectedness as a measure of interestingness in knowledge discovery. Decision Support Systems Fuzzy cognitive maps considering time relationships Dynamic rule refinement in knowledge-based data mining systems
  • B Kosko
  • B Padmanabhan
  • A Tuzhilin
Kosko, B. (1986). Fuzzy cognitive maps. International Journal of Man–Machine Studies, 24, 65–75. Padmanabhan, B., & Tuzhilin, A. (1999). Unexpectedness as a measure of interestingness in knowledge discovery. Decision Support Systems, 27 (3), 303–318. Park, K. S., & Kim, S. H. (1995). Fuzzy cognitive maps considering time relationships. International Journal of Human–Computer Studies, 42, 157–168. Park, S. C., Piramuthu, S., & Shaw, M. J. (2001). Dynamic rule refinement in knowledge-based data mining systems. Decision Support Systems, 31, 205–222.
Mining association rules procedure to support on-line recommendation by customer and products fragmentation Mining time series data by a fuzzy linguistic summary system
  • S W Changchien
  • T Lu
Changchien, S. W., & Lu, T. (2001). Mining association rules procedure to support on-line recommendation by customer and products fragmentation. Expert Systems with Applications, 20, 325–335. Chiang, D., Chow, L. R., & Wang, Y. (2000). Mining time series data by a fuzzy linguistic summary system. Fuzzy Sets and Systems, 112, 419–432.
Refinement of approximate domain theories by knowledge-based neural networks
  • G Towell
  • J Shavlik
  • M Noordewier
Towell, G., Shavlik, J., & Noordewier, M. (1990). Refinement of approximate domain theories by knowledge-based neural networks. Proceeding of National Conference on Artificial Intelligence (pp. 861–866). Boston.
The prediction of interest rates using artificial neural network
  • T H Hong
  • I G Han
Stock price prediction using prior knowledge and neural networks
  • Kohara
Evaluation of extrapolative forecasting methods: results of academicians and practitioners
  • Carbone