Book

Developments in Applied Artificial Intelligence, 15th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2002, Cairns, Australia, June 17-20, 2002, Proceedings


Chapters (79)

License plate recognition involves three basic steps: 1) image preprocessing, including thresholding, binarization, skew detection, noise filtering, and frame boundary detection; 2) segmentation of characters and numbers from the heading of the state area and the body of a license plate; 3) training and recognition with an error back-propagation artificial neural network (ANN). This report emphasizes the implementation and modeling of the recognition process. In particular, it deploys classical approaches and techniques for recognizing license plate numbers. The problems of recognizing characters and numbers from a license plate are described in detail with examples. A character segmentation algorithm is also developed and incorporated into the license plate recognition system.
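As an illustration of step 2, here is a minimal character segmentation sketch in Python, assuming a grayscale plate image as a NumPy array; the threshold and size filter are illustrative placeholders, not the paper's values:

```python
# A minimal sketch of the segmentation step: binarize the plate and return
# bounding boxes of candidate glyphs, sorted into reading order.
import numpy as np
from scipy import ndimage

def segment_characters(plate_gray, thresh=128, min_area=50):
    """Binarize a plate image and return bounding boxes of candidate glyphs."""
    binary = plate_gray < thresh                 # dark glyphs on a light plate
    labels, n = ndimage.label(binary)            # connected components
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:                    # drop noise specks
            boxes.append((sl[0].start, sl[1].start, h, w))
    # sort left-to-right so characters come out in reading order
    return sorted(boxes, key=lambda b: b[1])
```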
The use of Artificial Neural Networks (ANNs) for predicting empennage buffet pressures as a function of aircraft state has been investigated. The buffet loads prediction method developed here depends on experimental data to train the ANN algorithm and is able to expand its knowledge base with additional data. The study confirmed that neural networks have great potential as a method for modelling buffet data. The ability of neural networks to accurately predict the magnitude and spectral content of unsteady buffet pressures was demonstrated. Based on the ANN methodology investigated, a buffet prediction system can be developed to characterise the F/A-18 vertical tail buffet environment at different flight conditions. This will allow better understanding and more efficient alleviation of the empennage buffeting problem.
Selective attention learning is proposed to improve the speed of the error backpropagation algorithm for a multilayer perceptron. Class-selective relevance for evaluating the importance of a hidden node in an off-line stage, and a node attention technique for measuring the local errors appearing at the output and hidden nodes in an on-line learning process, are employed to selectively update the weights of the network. The acceleration of learning is then achieved by lowering the computational cost required for learning. By combining this method with other types of improved learning algorithms, further improvement in learning speed is also achieved. The effectiveness of the proposed method is demonstrated on the speaker adaptation task of an isolated word recognition system. The experimental results show that the proposed selective attention technique can reduce the adaptation time by more than 65% on average.
This paper analyses whether artificial neural networks can outperform traditional time series models for forecasting stock market returns. Specifically, neural networks were used to predict Brazilian daily index returns, and their results were compared with those of a time series model with GARCH effects and a structural time series (STS) model. Further, using the output of an ARMA-GARCH model as an input to a neural network is explored. Several procedures were used to evaluate the forecasts: RMSE, MAE, and the Chong and Hendry encompassing test. The results suggest that artificial neural networks are superior to ARMA-GARCH and STS models, and that volatility derived from the ARMA-GARCH model is useful as an input to a neural network.
In this paper, we present a technique for automatic orientation detection of film rolls using Support Vector Machines (SVMs). SVMs are able to handle feature spaces of high dimension and automatically choose the most discriminative features for classification. We investigate the use of various kernels, including heavy tailed RBF kernels. Our results show that by using SVMs, an accuracy of 100% can be obtained, while execution time is kept to a minimum.
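For orientation classification of this kind, a generic RBF-kernel SVM can be set up in a few lines. The sketch below uses scikit-learn with placeholder features and labels, and does not reproduce the paper's heavy-tailed RBF kernels or film-roll features:

```python
# Illustrative only: a generic RBF-kernel SVM classifier in scikit-learn.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # stand-in feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # stand-in orientation labels

clf = SVC(kernel="rbf", gamma="scale", C=1.0)   # standard RBF kernel
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```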
In this paper, we propose an approach based on HMMs and linguistics for the Vietnamese recognition problem, covering both handwriting and speech recognition. The main contribution is that our method can model all Vietnamese isolated words with a small number of HMMs. The method applies not only to handwriting recognition but also to speech recognition. Furthermore, it can be integrated with language models to improve accuracy. Experimental results show that our approach is robust and promising.
In this paper, we consider the problem of detecting faces under unconstrained input conditions such as varying backgrounds, luminance, and image quality. We have developed an efficient and automatic face detection algorithm for color images. Both a skin-tone model and the elliptical shape of faces are used to reduce the influence of the environment. The skin-tone model is pre-built as a 2D Gaussian distribution from sample face images. Our face detection algorithm consists of three stages: skin-tone segmentation, candidate region extraction, and face region decision. First, we scan the entire input image to extract pixels in the facial color range using the pre-built skin-tone model in YCbCr color space. Second, we extract candidate face regions using the elliptical feature characteristic of faces: we apply a best-fit ellipse algorithm to each skin-tone region and extract candidate regions whose ellipse parameters meet the required values. Finally, we use a neural network on each candidate region to decide the real face regions. The proposed algorithm uses the momentum back-propagation model, trained on 20x20 pixel patterns. The performance of the proposed algorithm is shown by examples. Experimental results show that the proposed algorithm efficiently detects faces in color images under unconstrained input conditions.
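A hedged sketch of the first stage: each pixel's (Cb, Cr) pair is scored under a 2D Gaussian skin-tone model fitted offline to sample faces. The mean, covariance, and threshold below are illustrative placeholders, not the authors' fitted values:

```python
# Score pixels against a 2D Gaussian skin-tone model via Mahalanobis distance.
import numpy as np

SKIN_MEAN = np.array([120.0, 155.0])            # illustrative (Cb, Cr) mean
SKIN_COV = np.array([[60.0, 10.0],
                     [10.0, 40.0]])             # illustrative covariance

def skin_mask(cb, cr, threshold=4.0):
    """Return a boolean mask of skin-tone pixels (cb, cr are 2D arrays)."""
    d = np.stack([cb - SKIN_MEAN[0], cr - SKIN_MEAN[1]], axis=-1)
    inv = np.linalg.inv(SKIN_COV)
    # squared Mahalanobis distance per pixel; small distance = skin-like
    m2 = np.einsum("...i,ij,...j->...", d, inv, d)
    return m2 < threshold
```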
The MPEG-4 and MPEG-7 visual standards require that each frame of a video sequence be segmented in terms of video object planes (VOPs). This paper presents an image segmentation method for extracting video objects from image sequences. The method is based on a multiresolution application of wavelet and watershed transformations, followed by a wavelet coefficient-based region merging procedure. The procedure toward complete segmentation consists of four steps: pyramid representation, image segmentation, region merging and region projection. First, pyramid representation creates multiresolution images using a wavelet transformation. Second, image segmentation segments the lowest-resolution image of the created pyramid by watershed transformation. Third, region merging merges segmented regions using the third-order moment values of the wavelet coefficients. Finally, region projection recovers a full-resolution image using an inverse wavelet transformation. Experimental results show that the presented method can be applied to the segmentation of noisy or degraded images as well as to the reduction of over-segmentation.
In this paper an adaptive distribution system for manufacturing applications is examined. The system receives a set of various components at a source point and supplies these components to destination points. The objective is to minimize the total distance that has to be traveled. At each destination point some control algorithms have to be activated, and each segment of motion between destination points also has to be controlled. The paper suggests a model for such a distribution system based on autonomous sub-algorithms that can further be linked hierarchically. The links are set up during execution time (during motion) with the aid of the results obtained from solving the respective traveling salesman problem (TSP), which gives a proper tour of minimal length. The paper proposes an FPGA-based solution, which integrates a specialized virtual controller implementing hierarchical control algorithms and a hardware realization of a genetic algorithm for the TSP.
Modular exponentiation is fundamental to several public-key cryptography systems such as the RSA encryption system. It is performed using successive modular multiplications, an operation that is time consuming for large operands. Accelerating public-key cryptography in software or hardware requires either optimising the time consumed by a single modular multiplication, reducing the total number of modular multiplications performed, or both. This paper introduces a novel idea based on genetic algorithms for computing an optimal addition chain that allows us to minimise the number of modular multiplications and hence implement modular exponentiation efficiently.
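To make the role of the addition chain concrete, the sketch below exponentiates via a hand-picked chain (not a GA-derived one): each chain element beyond 1 costs exactly one modular multiplication, so a shorter chain means a faster exponentiation:

```python
# Exponentiation via an addition chain: every exponent in the chain is the
# sum of two earlier ones, so each step is one modular multiplication.
def chain_pow(base, chain, mod):
    """Exponentiate via an addition chain (chain[0] must be 1)."""
    powers = {1: base % mod}
    for i, e in enumerate(chain[1:], start=1):
        # find an earlier exponent a with e - a also already computed
        a = next(x for x in chain[:i] if e - x in powers)
        powers[e] = (powers[a] * powers[e - a]) % mod
    return powers[chain[-1]]

# The chain 1,2,3,6,12,15 reaches x^15 in 5 multiplications,
# whereas the binary square-and-multiply method needs 6.
print(chain_pow(7, [1, 2, 3, 6, 12, 15], 1000003), pow(7, 15, 1000003))
```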
It is known that RAM-based finite state machines (FSMs) can be implemented very effectively in reconfigurable hardware with distributed memory blocks. One possible approach to the engineering design of such circuits is based on a fuzzy-state encoding technique. This approach allows rapid construction of high-speed devices based on RAM blocks of modest size. The key point in the engineering design process is the development of a state encoding, and this problem has a high combinatorial complexity, especially for FSMs with a large number of states. This paper proposes a novel evolutionary algorithm for the solution of this problem.
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. The GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is used to transform the constrained design problem into an unconstrained problem suitable for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained at extremely fast convergence, after exploring only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
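A minimal sketch of the penalty-based transformation described above, with a placeholder objective and constraint rather than the paper's reinforced-concrete design problem:

```python
# Fold constraint violations into the objective so an unconstrained GA can
# rank candidate designs directly. Lower fitness is better.
def penalised_fitness(design, objective, constraints, penalty=1e6):
    """Objective cost plus a penalty per unit of constraint violation."""
    cost = objective(design)
    violation = sum(max(0.0, g(design)) for g in constraints)  # g(x) <= 0 means feasible
    return cost + penalty * violation

# Example: minimise x0 + x1 subject to x0 * x1 >= 4 (i.e. 4 - x0*x1 <= 0).
obj = lambda x: x[0] + x[1]
cons = [lambda x: 4.0 - x[0] * x[1]]
print(penalised_fitness([2.0, 2.0], obj, cons))   # feasible: plain cost 4.0
print(penalised_fitness([1.0, 1.0], obj, cons))   # infeasible: heavily penalised
```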
As advanced crew support technologies become increasingly available in future military aircraft, it is necessary to have a good understanding of the possibilities in this area, taking into account operational demands and technical possibilities. A Crew Assistant (CA) is a decision support system for air crew, designed to improve mission effectiveness and redistribute crew workload in such a way that the crew can concentrate on its prime tasks. Designing a complex system with multiple crew assistants can be tackled by using a multi-agent system design. In this paper we propose a multi-agent system architecture for crew assistants.
Programming software agents is a difficult task. As a result, online learning techniques have been used to make software agents automatically learn proper condition-action rules from their experiences. However, for complicated problems this approach requires a large amount of time and might not guarantee the optimality of the rules. In this paper, we discuss our study of applying human decision-making behaviors to software agents when both are present in the same environment. We aim at the implementation of human instincts or sophisticated actions that cannot easily be achieved by conventional multiagent learning techniques. We use the RoboCup simulation as an experimental environment and validate the effectiveness of our approach in this environment.
Mobile agent systems have unique properties and characteristics and represent a new application development paradigm. Existing solutions to distributed computing problems, such as deadlock avoidance algorithms, are not suited to environments where both clients and servers move freely through the network. This paper describes a distributed deadlock solution for use in mobile agent systems. The properties of this solution are locality of reference, topology independence, fault tolerance, asynchronous operation and freedom of movement. The presented technique, called the shadow agent solution, proposes dedicated agents for deadlock initiation, detection and resolution. These agents are fully adapted to the properties of a mobile agent environment.
Management of changes and uncertainties is among the most important issues in today’s production. In the approach proposed in this paper, not only these problems but also the environmental impacts of processes are considered during a resource allocation process relying on market principles. A hierarchical rule structure called the Priority Rules System is introduced for agent-based production control incorporating waste management holons. A decision support tool for selecting appropriate values of the utility factors in the rules is also presented, together with the related sensitivity analysis.
Many combinatorial optimisation problems have constraints that are difficult for meta-heuristic search algorithms to process. One approach is that of feasibility restoration. This technique allows the constraints of a problem to be violated and feasibility to be restored later. The advantage of this is that the search can proceed over infeasible regions, thus potentially exploring difficult-to-reach parts of the state space. In this paper, a generic feasibility restoration scheme is proposed for use with the neighbourhood search algorithm simulated annealing. Some improved solutions to standard test problems are recorded.
Sonar sensors are widely used in mobile robotics research for local environment perception and mapping. Mobile robot platforms equipped with multiple sonars have been built and used by many researchers. A significant problem with the use of multiple sonars is that, when the sonars are operated concurrently, signal interference occurs, making it difficult to determine which received signal is an echo of the signal transmitted by a given sensor. In this paper, a technique for acquiring suitable modulation pulses for the signals emitted in a multi-sonar system is presented. We propose a technique that reduces the probability of erroneous operation due to interference by satisfying conditions for minimizing the signal length and the variation in the signal length of each sonar, using the Niched Pareto genetic algorithm (NPGA). The basic technique is illustrated for the case where two or more robots operate in the same environment.
Learning algorithms for neural networks have generally been conceived for networks implemented in software. However, algorithms developed for software implementations typically require far greater numeric precision for efficiently training the networks than can easily be provided in hardware neural networks. Although some learning algorithms designed for software implementation can be successfully implemented in hardware, it has become apparent that these algorithms are generally ill-suited to hardware, failing to converge well (or at all). Particle Swarm Optimisation (PSO) is known to have a number of features that make it well suited to the training of neural hardware. In this paper the suitability of PSO for training limited-precision neural hardware is investigated. Results show that the performance achieved with this algorithm does not degrade until the precision of the networks is reduced to a very small number of bits.
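A hedged sketch of PSO with candidate weights quantised to a limited-precision grid before every fitness evaluation, which is one way to mimic hardware-constrained training; the parameters are typical textbook values, not the paper's:

```python
# Basic global-best PSO; fitness is always evaluated on quantised positions,
# so the search only ever "sees" the limited-precision weight grid.
import numpy as np

def pso_minimise(fitness, dim, n_particles=20, iters=100, bits=4,
                 w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    levels = 2 ** bits                              # limited-precision grid
    quantise = lambda p: np.round(p * levels) / levels
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(quantise(p)) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(quantise(p)) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return quantise(gbest), float(pbest_f.min())

weights, err = pso_minimise(lambda p: float(np.sum(p ** 2)), dim=8)
print(err)
```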
Adaptive Cruise Control (ACC) systems represent an active research area in the automobile industry. The design of such systems typically involves several, possibly conflicting criteria such as driving safety, comfort and fuel consumption. When the different design objectives cannot be met simultaneously, a number of non-dominated solutions exist, where no single solution is better than another in every aspect. Knowledge of this set is important for any design decision, as it contains valuable information about the design problem at hand. In this paper we approximate the non-dominated set of a given ACC-controller design problem for trucks using multi-objective evolutionary algorithms (MOEAs). Two different search strategies, based on a continuous relaxation and on a direct representation of the integer design variables, are applied and compared to a grid search method.
A potential replacement for the conventional neuron is introduced. This is called a Macronet element and uses multiple channels per signal path, with each channel containing two trainable non-linear structures in addition to a conventional weight. The authors show that such an architecture provides a rich spectrum of higher order powers and cross products of the inputs using fewer weights than earlier higher order networks. This lower number of weights does not compromise the ability of the Macronet element to generalise. Results from training a Macronet element to develop a relationship from a sparse map of Europe are given.
In this paper, a process by which experimental, or historical, data are used to create physically meaningful mathematical models is demonstrated. The procedure involves optimising the correlation between this ‘real world’ data and the mathematical models using a genetic algorithm that is constrained to operate within the physics of the system. This concept is demonstrated here by creating a structural dynamic finite element model for a complete F/A-18 aircraft based on experimental data collected by shaking the aircraft when it is on the ground. The processes used for this problem are easily broken up and solved on a large number of PCs. A technique is described here by which such distributed computing can be carried out using desktop PCs within the secure computing environment of the Defence Science & Technology Organisation without compromising PC or network security.
The problem of placing a number of specific shapes in order to minimise material waste is commonly encountered in the sheet metal, clothing and shoe-making industries. It is driven by the demand to find a layout of non-overlapping parts in a set area in order to maximise material utilisation. A related problem is compaction: minimising the area in which a set number of shapes can be placed without overlapping. This paper presents a novel connectivity-based approach to leather part compaction using the no-fit polygon (NFP). The NFP is computed using an image processing method as the boundary of the Minkowski sum, which is the convolution between two shapes at given orientations. These orientations, along with shape order and placement selection, constitute the chromosome structure.
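To illustrate the image-processing view of the Minkowski sum, the toy sketch below convolves a binary raster of one shape with the 180-degree rotation of the other; the support of the result marks every relative placement at which the shapes overlap, and its boundary approximates the NFP. The shapes here are toy rectangles, not leather parts:

```python
# Minkowski sum of binary rasters via 2D convolution: nonzero entries mark
# relative translations where the two shapes would collide.
import numpy as np
from scipy.signal import convolve2d

a = np.ones((3, 3), dtype=int)           # toy part A as a binary raster
b = np.ones((2, 2), dtype=int)           # toy part B as a binary raster
overlap = convolve2d(a, b[::-1, ::-1])   # flip B: convolution = correlation
nfp_region = overlap > 0                 # placements where A and B overlap
print(nfp_region.astype(int))
```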
A hardware implementation of an evolutionary algorithm is capable of running much faster than a software implementation. However, the speed advantage of the hardware implementation will disappear for slow fitness evaluation systems. In this paper a Fast Evolutionary Algorithm (FEA) is implemented in hardware to examine the real time advantage of such a system. The timing specifications show that the hardware FEA is approximately 50 times faster than the software FEA. An image compression hardware subsystem is used as the fitness evaluation unit for the hardware FEA to show the benefit of the FEA for time-consuming applications in a hardware environment. The results show that the FEA is faster than the EA and generates better compression ratios.
We present a new approach to automatic speech recognition (ASR) based on the formalism of Bayesian networks. We lay the foundations of new ASR systems whose robustness relies on fidelity in speech modeling and on the information contained in training data.
The problem of predicting the outcome of a conditional branch instruction is a prerequisite for high performance in modern processors. It has been shown that combining different branch predictors can yield more accurate prediction schemes, but the existing research only examines selection-based approaches where one predictor is chosen without considering the actual predictions of the available predictors. The machine learning literature contains many papers addressing the problem of predicting a binary sequence in the presence of an ensemble of predictors or experts. We show that the Weighted Majority algorithm applied to an ensemble of branch predictors yields a prediction scheme that results in a 5-11% reduction in mispredictions. We also demonstrate that a variant of the Weighted Majority algorithm that is simplified for efficient hardware implementation still achieves misprediction rates that are within 1.2% of the ideal case.
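For reference, here is a sketch of the standard Weighted Majority algorithm (Littlestone and Warmuth); the hardware-simplified variant evaluated in the paper is not reproduced:

```python
# Weighted Majority over an ensemble of 0/1 experts: predict by weighted
# vote, then multiplicatively demote every expert that was wrong.
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """expert_preds[t][i] is expert i's prediction at step t; returns mistakes."""
    n = len(expert_preds[0])
    w = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        vote_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        vote_0 = sum(wi for wi, p in zip(w, preds) if p == 0)
        guess = 1 if vote_1 >= vote_0 else 0      # weighted vote
        mistakes += (guess != y)
        w = [wi * beta if p != y else wi          # demote wrong experts
             for wi, p in zip(w, preds)]
    return mistakes
```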
The configuration of complex multi-part products often requires that a human expert be available to determine a compatible set of parts satisfying the specification. With the availability of on-line web catalogs, such experts can now be supported or even substituted by intelligent web tools. This paper describes the architecture and problem representation for such a tool, built upon a constraint-based reasoning engine. The flexible strategy we employ enables a high degree of user customization of the web tool, allowing users to personalize not only the tool interface but also to edit or add to the catalogs as required, and even to change the product representation details. We illustrate the ideas by referring to the configuration of industrial products such as programmable logic controllers (PLC).
In order to overcome the influence of measurement errors in phase-to-phase wave parameters, this paper presents a neural network model for radial distribution lines. By providing a satisfactory, measurement-error-resistant training sample and choosing a proper training method, the network can converge quickly. Simulation results show that this NN model resists both amplitude error and phase error successfully, and can enhance the measurement precision of phase-to-phase wave parameters under working conditions.
The Support Vector Machine (SVM) has recently been introduced as a new learning technique, based on statistical learning theory, for solving a variety of real-world applications. The classical Radial Basis Function (RBF) network has a similar structure to an SVM with a Gaussian kernel. In this paper we compare the generalization performance of the RBF network and the SVM in classification problems. We applied the Lagrangian differential gradient method for training and pruning the RBF network. The RBF network shows better generalization performance and is computationally faster than the SVM with a Gaussian kernel, especially for large training data sets.
This paper aims at developing a data mining approach for classification rule representation and automated acquisition from numerical data with continuous attributes. The classification rules are crisp and described by ellipsoidal regions with different attributes for each individual rule. A regularization model trading off misclassification rate, recognition rate and generalization ability is presented and applied to rule refinement. A regularizing data mining algorithm is given, which includes self-organizing map network based clustering techniques, feature selection using a breakpoint technique, rule initialization and optimization, and classifier structure and usage. An illustrative example demonstrates the applicability and potential of the proposed techniques for domains with continuous attributes.
Neural computation offers many potential advantages, yet little research has attempted to explore how solutions to complex problems might be achieved using neural networks. This paper explores the use of linked and interacting neural modules, each capable of learning simple finite state machines, yet capable of being combined to perform complex actions. The modules are suitable for a hardware-only implementation.
In this paper, visualization and neural network techniques are applied together in a power transformer condition monitoring system. By visualizing the data from the chromatogram of oil-dissolved gases in 2-D and/or 3-D graphs, potential failures of the power transformers become easy to identify. By employing specific neural network techniques, the data from the chromatogram of oil-dissolved gases, as well as those from electrical inspections, can be effectively analyzed. Experiments show that the described system works quite well in condition monitoring of power transformers.
An electronic market has been constructed in an on-going collaborative research project between a university and a software house. The way in which actors (buyers, sellers and others) use the market will be influenced by the information available to them, including information drawn from outside the immediate market environment. In this experiment, data mining and filtering techniques are used to distil both individual signals drawn from the markets and signals from the Internet into meaningful advice for the actors. The goal of this experiment is first to learn how actors will use the advice available to them, and second how the market will evolve through entrepreneurial intervention. In this electronic market a multiagent process management system is used to manage all market transactions including those that drive the market evolutionary process.
There is a proliferation of news sites on the World Wide Web and hence it is becoming increasingly hard for journalists to monitor the news on such sites. In response to this need, a software tool, JWeb-Watch, was developed to provide journalists with a suite of tools for checking the latest news on the web. One particular feature of this tool is the ability to download relevant images from these sites and discard extraneous figures. The paper reviews related software and discusses the unique features of the approach adopted.
Facilitation is considered one of the most important factors in the effective use of group decision support systems (GDSS). However, high-quality human facilitators may not be easily available. Furthermore, increasing globalization and telecommuting require GDSS that support dispersed group meetings, which are not conveniently implemented with human facilitators. This paper proposes an intelligent facilitation agent (IFA) that can be applied to facilitate group meetings through an online web-based discussion system. The IFA employs the power of asking questions, based on the supporting decision model associated with the problem being discussed, to structure group conversation. Experiments were set up to study the effects of our IFA on group meetings through an online discussion system. The experimental results illustrate that group meetings with our IFA have a higher number of generated ideas, higher discussion participation, a greater amount of supporting information for decision making, and lower group distraction.
More and more people rely on e-mail rather than postal letters to communicate with each other. Although e-mail is more convenient, letters still have many nice features; the ability to handle an “anonymous recipient” is one of them. This research aims to develop a software agent that performs the routing task for anonymous-recipient e-mails as a human would. The software agent, named “TWIMC (To Whom It May Concern)”, receives anonymous-recipient e-mails, analyzes them, and then routes each e-mail to the most qualified person (i.e., e-mail account) inside the organization. Machine learning and automatic text categorization (ATC) techniques are applied to the task. We view each e-mail account as a category (or class) of ATC. The everyday e-mail collections of each e-mail account provide an excellent source of training data. The experiments show a high possibility that TWIMC could be deployed in the real world.
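A hedged sketch of the ATC framing with a generic text classifier: each e-mail account is a class and past mail is training data. The corpus, account names, and choice of TF-IDF plus naive Bayes are illustrative assumptions, not the paper's models:

```python
# Route anonymous-recipient mail by treating each account as a class.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_mail = ["please review the budget spreadsheet",
              "server outage in the data centre tonight",
              "quarterly budget forecast attached"]
train_account = ["finance@example.org", "it@example.org", "finance@example.org"]

router = make_pipeline(TfidfVectorizer(), MultinomialNB())
router.fit(train_mail, train_account)                 # one class per account
print(router.predict(["the budget numbers look wrong"]))
```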
A mental state model for autonomous agent negotiation is described. In this model, agent negotiation is assumed to be a function of the agents’ mental state (attitude) and their prior experiences. The mental state model we describe here subsumes both competitive and cooperative agent negotiations. The model is first instantiated by buying and selling agents (competitively) negotiating in a virtual marketplace. Subsequently, it is shown that agent negotiations tend to be more cooperative than competitive, as agents tend to agree (more so than disagree) on attributes of their mental state.
This paper presents a categorization of the knowledge used during the post-processing stage of the recognized results in a system that automatically reads handwritten information from forms. The objective is to handle the uncertainty present in the semantics of each field of the form. We use grammatical rules particular to the Portuguese language and specialized information about the characteristics of the recognition system. A knowledge-based system uses the different information classes to collect evidence in order to correct the misclassified characters.
The choice of a good construction site layout is inherently difficult, yet has a significant impact on both monetary and time savings. It is desirable to encapsulate systematically the heuristic expertise and empirical knowledge into the decision-making process by applying the latest artificial intelligence technology. This paper describes a prototype knowledge-based system for construction site layout, SITELAYOUT. It has been developed using the expert system shell VISUAL RULE STUDIO, which acts as an ActiveX Designer under the Microsoft Visual Basic programming environment, with a hybrid knowledge representation approach in an object-oriented design environment. By using custom-built interactive graphical user interfaces, it is able to assist designers by furnishing them with much-needed expertise in this planning activity. Increased efficiency, improved consistency of results, and automated record keeping are among the advantages of such an expert system. Solution strategies and development techniques of the system are addressed and discussed.
Any small leak in a submarine can lead to serious consequential damage, since the vessel operates under high water pressure. Such leakage, including damage to pipes and hull, can eventually incur human casualties and the loss of expensive equipment, as well as the loss of combat capability. In such cases, a decision-making system is necessary to respond immediately to the damage in order to maintain the safety or survival of the submarine. So far, human decisions have been the most important, based on personal experience, existing data, and any electronic information available. However, it is well recognized that such decisions may not be enough in certain emergency situations; a system that depends only on human experience may cause serious mistakes in devastating and frightening situations. It is therefore necessary to have an automatic system that can generate responses and advise the operator on how to make decisions to maintain the survivability of the damaged vessel. In this paper, a knowledge-based decision support system for submarine safety is developed. The domain knowledge is acquired from submarine design documents, design expertise, and interviews with operators. The knowledge consists of the responses to damage on the pressure hull and piping system. Expert elements are deduced to obtain decisions from the knowledge base; for instance, the system makes recommendations on how to handle damage to the hull and pipes, and on whether to stay submerged or to blow ballast. Sample applications confirm that the developed system simulates real situations well.
The Verification and Validation (V&V) process states whether the software requirements specifications have been correctly and completely fulfilled. The methodologies proposed in software engineering have proved inadequate for the validation of Knowledge Based Systems (KBS), since KBS present some particular characteristics [1]. Designing KBS for dynamic environments requires the consideration of Temporal Reasoning and Representation (TRR) issues. Despite the recent significant developments in the TRR area, there is still a considerable gap to its successful use in practical applications. VERITAS is an automatic tool developed for KBS verification; it is currently in development and is being tested with SPARSE, a KBS used in the Portuguese Transmission Network (REN) for incident analysis and power restoration. In this paper some solutions are proposed for still-open issues in the verification of KBS applied in critical domains.
Downsizing the corporation’s computer systems is still an important practice in many organizations. The feasibility of downsizing and breakthroughs in computer technology reinforce this concept, and cause corporations to downsize for cost reduction and to increase the efficiency and effectiveness of business operations. The problems of many processes and the related complex factors need to be considered. The product of this research, “A Rule-Based System for Downsizing the Corporation’s Computer Systems (DOWNSIZINGX),” is a tool built on a knowledge base model that provides a visual programming (window) interface to an expert system shell. The system provides recommendations to support decision-making on downsizing computer systems. It uses CLIPS as the expert system shell and makes inferences using forward chaining with depth-first search as the conflict resolution strategy. A user interacts with the system through a Visual Basic interface connected to CLIPS via CLIPS OCX. The explanation facility allows the user to trace recommendations back to the factors involved, assisting effective decision-making with reduced risks. The prototype was developed according to this model and tested on several business decision scenarios and case studies. The results show that the system performs all functions correctly and makes recommendations effectively. Moreover, the system demonstrates the flexibility of implementing the user interface with the Visual Basic programming tool, which enables a friendlier interface for users.
The credit apportionment scheme is the backbone of the performance of an adaptive rule-based system. The more cases the credit apportionment scheme can consider, the better the overall system’s performance. Rule-based systems are currently used in various areas, such as expert systems and machine learning, which require new rules to be generated and others to be eliminated. Several credit apportionment schemes have been proposed, and some are in use, but most of these schemes suffer from an inability to distinguish between good rules and bad rules: correct rules might be weakened because they are involved in an incorrect inference path (one producing an incorrect conclusion), and incorrect rules might be strengthened because they are involved in an inference path that produces a correct conclusion. Much research has been done in this area; we consider three algorithms, the Bucket Brigade algorithm (BB), the Modified Bucket Brigade algorithm (MBB) and General Credit Apportionment (GCA). BB and MBB are from the same family, using the same credit allocation techniques, whereas GCA uses a different approach. In this research, we make a comparative study by implementing the three algorithms and applying them to a simulated “Soccer” expert rule-based system. To evaluate the algorithms, two experiments were conducted.
This paper presents a novel approach to successfully predict Web pages that are most likely to be re-accessed in a given period of time. We present the design of an intelligent predictor that can be implemented on a Web server to guide caching strategies. Our approach is adaptive and learns the changing access patterns of pages in a Web site. The core of our predictor is a neural network that uses a back-propagation learning rule. We present results of the application of this predictor on static data using log files; it can be extended to learn the distribution of live Web page access patterns. Our simulations show fast learning, uniformly good prediction, and up to 82% correct prediction for the following six months based on one day of training data. This long-range prediction accuracy is attributed to the static structure of the test Web site.
Internet auctions are seen as an effective form of electronic commerce. An Internet auction typically involves multiple buyers and a single seller. We propose an alternative, the REV auction, in which a buyer can select sellers before conducting the auction. There are several advantages to our mechanism. First, the seller selection mechanism enables the buyer’s preferences to be reflected. Second, the seller evaluation mechanism effectively maintains seller quality. Third, our mechanism avoids the need for consultation before bidding. We implemented an experimental e-commerce support system based on the REV auction. Experiments demonstrated that the REV auction increased the number of successful trades.
This paper describes a fuzzy usage parameter control (UPC) mechanism for DiffServ (DS) and Multiprotocol Label Switching (MPLS) networks. Current research treats MPLS as the Internet’s solution for high-performance networking. In a DS network, UPC is an important factor in ensuring that sources conform to the negotiated service level agreement (SLA). Most of the UPC techniques proposed are based on conventional crisp sets, which are inefficient when dealing with the conflicting requirements of UPC. Simulation results show that the proposed fuzzy scheme outperforms conventional techniques in terms of packet loss ratio, higher selectivity and lower false alarm probability.
In this report, we present a system that allows various forms of knowledge exchange for users of a natural language question answering system. The tool is also capable of domain restructuring by domain experts, to adjust it to a particular audience of customers. The tool is implemented for financial and legal advisors, where the information is extremely dynamic by nature and requires fast correction and update in natural language. Knowledge management takes advantage of the technique of semantic headers, which is applied to represent poorly structured and logically complex data in the form of textual answers. Question answering is performed by matching the semantic representation of a query with those of the answers. The issues of domain extension via understanding natural language definitions of new entities and new objects are addressed.
Information retrieval services on the Internet have tried to provide Internet users with information processed in the way users want. These information retrieval systems merely show HTML pages that include the index words abstracted from the user’s query. There are a number of difficulties to be overcome in the present technology, some of which could be solved by considering the semantics of HTML documents. The precision of retrieval results can also be heightened by using the structural information of the document as an alternative source of information. The tabular form, which appears in ordinary documents, usually carries the most relevant information. We therefore try to improve the precision of results by analyzing the tables in HTML documents. Our main purpose here is to parse tables and construct a dictionary of table indexes for use in our information retrieval system, thus enhancing its accuracy.
Genetic Algorithms (GAs) are a popular and robust strategy for optimisation problems. However, these algorithms often require huge computation power for solving real problems and are often criticized for their slow operation. For most applications, the bottleneck of a GA is the fitness evaluation task. This paper introduces a fitness estimation strategy (FES) for genetic algorithms that does not evaluate all new individuals, thus operating faster. A fitness and an associated reliability value are assigned to each new individual, which is only evaluated using the true fitness function if the reliability value is below some threshold. Moreover, applying random evaluation and error compensation strategies to the FES further enhances the performance of the algorithm. Simulation results show that for six optimization functions, the GA with FES requires fewer evaluations while obtaining solutions similar to those found using a traditional genetic algorithm. For these same functions the algorithm generally also finds a better fitness value on average for the same number of evaluations. Additionally, the GA with FES does not suffer the side effect of premature convergence of the population: it climbs faster in the initial stages of the evolution process without becoming trapped in local minima.
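A minimal sketch of the estimation idea under stated assumptions: a child inherits an estimated fitness and a reliability value from its parents, and the true fitness function is only called when reliability falls below a threshold. The averaging rule, decay factor, and threshold are assumptions, not the paper's formulas:

```python
# Assign each child an inherited fitness estimate plus a reliability value;
# only pay for a true evaluation when the estimate is too unreliable.
def child_fitness(parent_a, parent_b, true_fitness, genome,
                  decay=0.8, threshold=0.5):
    """parent_* are (fitness, reliability) pairs; returns (fitness, reliability)."""
    est = (parent_a[0] + parent_b[0]) / 2.0            # inherited estimate
    rel = decay * (parent_a[1] + parent_b[1]) / 2.0    # reliability decays
    if rel < threshold:                                # too unreliable:
        return true_fitness(genome), 1.0               # pay for a real evaluation
    return est, rel
```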
L-systems are widely used in the modelling of branching structures and the growth process of biological objects such as plants, nerves and airways in lungs. The derivation of such L-system models involves a great deal of hard mental work and time-consuming manual procedures. A method based on genetic algorithms for automating the derivation of L-systems is presented here. The method involves representation of branching structure, translation of L-systems to axial tree architectures, comparison of branching structures, and the application of genetic algorithms. Branching structures are represented as axial trees, and positional information is considered an important attribute along with length and angle in the database configuration of branches. An algorithm is proposed for automatic L-system translation that compares randomly generated branching structures with the target structure. Edit distance, which is proposed as a measure of dissimilarity between rooted trees, is extended for the comparison of structures represented as axial trees, and positional information is involved in the local cost function. Conventional genetic algorithms and repair mechanisms are employed in the search for L-system models having the best fit to observational data.
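For readers unfamiliar with L-systems, the sketch below shows the parallel string rewriting that such a GA would be searching over; the axiom and rule are a classic textbook branching example, not a model fitted to observational data:

```python
# L-system derivation: rewrite every symbol in parallel at each step.
def derive(axiom, rules, steps):
    """Apply the production rules to the whole string for `steps` iterations."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"F": "F[+F]F[-F]F"}      # '[' ']' push/pop a branch; '+'/'-' turn
print(derive("F", rules, 2))      # a two-step branching skeleton
```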
The Travelling Salesman Problem (TSP) has a “big valley” search space landscape: good solutions share common building blocks. In evolutionary computation, crossover mixes building blocks, and so crossover works well on TSP. This paper considers a more complicated and realistic single-machine problem, with batching/lotsizing, sequence-dependent setup times, and time-dependent costs. Instead of a big valley, it turns out that good solutions share few building blocks. For large enough problems, good solutions have essentially nothing in common. This suggests that crossover (which mixes building blocks) is not suited to this more complex problem.
This paper presents the development of feature extraction algorithms for the recognition of off-line Thai handwritten characters. These algorithms exploit prominent features of Thai characters. Decision trees were used to classify Thai characters that share common features into five classes, and 12 algorithms were then developed. As a result, the major features of Thai characters, such as an end point (EP), a turning point (TP), a loop (LP), a zigzag (ZZ), a closed top (CT), a closed bottom (CB), and the number of legs, were identified. These features were defined as standard features, the “Thai Character Feature Space.” We then defined 5x3 standard regions onto which these standard features are mapped, resulting in the “Thai Character Solution Space,” which is used as a fundamental tool for recognition. The algorithms have been tested thoroughly using more than 44,600 Thai characters handwritten by 22 individuals in 100 documents. The feature extraction rate is as high as 98.66% with an average of 93.08%, while the recognition rate is as high as 99.19% with an average of 91.42%. The results indicate that our proposed algorithms are well established and effective.
Route planning is a design problem studied in various application areas, such as building/factory layout design, robotics, automobile navigation, and VLSI design. Route planning means designing an appropriate route from various candidates in terms of various perspectives, which is a time-consuming and difficult task even for a skilled designer. The author has previously proposed a genetic algorithm (GA) approach to pipe route planning and reported the basic idea and a prototype system. Although the prototype system can generate a candidate route after the convergence of the route planning process, its performance was found to rely heavily on the parameters and constraint conditions. For better performance, a previous paper proposed heuristics developed to narrow the search space and to improve the performance of the GA engine as a preprocessor. Considering several issues from the past research, this paper proposes a new approach to chromosome generation, which partitions the design space, puts random nodes in each partition, picks nodes for connection, generates connection routes, sets up a network using these nodes, and designs routes from the network. Since the chromosome definition was redesigned from variable length to fixed length, GA operations became simpler and easier, and the calculation time for design was drastically reduced. We have also modified and extended several functions in the GUI modules and implemented a prototype system called Route Planning Wizard. This paper describes the basic ideas and implementation of our route planning method, then presents some experimental results using roadmap data and a maze problem to show the validity of our approach.
First, we implemented the iterative self-organizing data analysis technique algorithm (ISODATA) in a Color Matching Method (CMM). Then, the BP algorithm and the neural network structure in the CMM are presented. We used four methods in the CMM to enhance network efficiency. Finally, we made a quantitative analysis of the network learning procedure.
A dream of the software-engineering discipline is to develop reusable program components and to build programs out of them. Formalizing a type of component-oriented programming (COP) problem (one that does not need any non-trivial effort for gluing components together) shows a surprising similarity to the problem of planning within Artificial Intelligence (AI). This short paper explores the possibility of solving COP by using AI planning techniques. We have looked into some closely related AI planning algorithms and suggested directions on how to adopt them for this purpose. Other important related issues, such as target specification languages and other relevant research disciplines, are also touched upon.
Many applications, such as planning, scheduling, computational linguistics and computational models for molecular biology, involve systems capable of managing qualitative and metric time information. An important issue in designing such systems is the efficient handling of temporal information in an evolutive environment. In a previous work, we developed a temporal model, TemPro, based on interval algebra, to express such information in terms of qualitative and quantitative temporal constraints. In order to find a good policy for solving time constraints in a dynamic environment, we present in this paper a study of dynamic arc-consistency algorithms in the case of temporal constraints. We show that an adaptation of the new AC-3 algorithm presents promising results compared to the other dynamic arc-consistency algorithms. Indeed, while keeping an optimal worst-case time complexity, this algorithm has a better space complexity than the other methods.
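For reference, here is a sketch of the classical (static) AC-3 algorithm; the paper's dynamic adaptation, which also handles constraint retraction, is not reproduced:

```python
# Classical AC-3: repeatedly revise arcs until every value has support.
from collections import deque

def ac3(domains, constraints):
    """domains: var -> set of values; constraints: (x, y) -> allowed(vx, vy)."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # remove values of x with no support in y's domain
        revised = {vx for vx in domains[x]
                   if not any(allowed(vx, vy) for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return False                      # domain wipe-out: inconsistent
            # re-enqueue arcs pointing at x (except the one just used)
            queue.extend(k for k in constraints if k[1] == x and k != (y, x))
    return True
```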
There has been a lot of interest in the matching and retrieval of similar time sequences in time series databases. Most previous work has concentrated on similarity matching and retrieval of time sequences based on the Euclidean distance. However, the Euclidean distance is sensitive to the absolute offsets of time sequences, and it is not a suitable similarity measurement in terms of shape. In this paper, we propose an indexing scheme for efficient matching and retrieval of time sequences based on the minimum distance. The minimum distance gives a better estimation of the similarity in shape between two time sequences. Our indexing scheme can match time sequences of similar shapes irrespective of their vertical positions, and it guarantees no false dismissals. We experimentally evaluated our approach on real data (stock price movements).
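One common offset-invariant formulation, shown here as a hedged sketch (the paper's exact definition of the minimum distance may differ): minimise the Euclidean distance over a vertical shift of one sequence, which has a closed-form optimum at the difference of the means:

```python
# Euclidean distance minimised over a vertical shift: solving
# min_b ||x - (y + b)|| gives b = mean(x) - mean(y) in closed form.
import numpy as np

def min_distance(x, y):
    """Offset-invariant Euclidean distance between two sequences."""
    b = np.mean(x) - np.mean(y)          # optimal vertical shift
    return float(np.linalg.norm(x - (y + b)))

x = np.array([1.0, 2.0, 3.0])
print(min_distance(x, x + 100.0))        # 0.0: same shape, different offset
```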
This paper presents a method to automate preliminary spacecraft design by applying both a Multi-Criteria Decision-Making (MCDM) methodology and fuzzy logic theory. Fuzzy logic has been selected to simulate the thinking of several experts’ teams in making refined choices within a universe of on-board subsystem solutions, according to a given set of sub-criteria generated by the general “maximum product return” goal. Among MCDM approaches, the Multi-Attribute approach (MADM) has been chosen to implement the proposed method: starting from the Analytic Hierarchy Process (AHP), the relative importance of criteria is evaluated by a combination-sensitive weight vector obtained through a multi-level scheme. The uncertainty intrinsic in technical parameters is taken into account by managing all quantities with interval algebra rules. Comparison between simulation results and existing space systems showed the validity of the proposed method. The results are encouraging, as the method detects almost identical combinations, drastically reducing the time dedicated to preliminary spacecraft design. The suggested preliminary spacecraft configuration is also the nearest, according to a Euclidean metric in the criteria hyper-space, to the optimum detected by the MODM approach.
A case-based system for a complex problem like oil field design needs to be richer than the usual case-based reasoning system. The system described in this paper contains large heterogeneous cases with metalevel knowledge. A multi-level indexing scheme with both preallocated and dynamically computed indexing capability has been implemented. A user interface allows dynamic creation of similarity measures based on modelling of the user’s intentions. Both user-aiding and problem-solution facilities are supported; a novel feature is that risk estimates are also provided. Performance testing indicates that the case base produces, on average, better predictions for new well developments than company experts. Early versions of the system have been deployed in oil companies in six countries around the world, and research is continuing on refining the system in response to industry feedback.
Ant colony optimisation has proved suitable for solving static optimisation problems, that is, problems that do not change with time. However, in the real world, changing circumstances may mean that a previously optimal solution becomes suboptimal. This paper explores the ability of the ant colony optimisation algorithm to adapt the optimal solution for one set of circumstances into the optimal solution for another set of circumstances. Results are given for a preliminary investigation based on the classical travelling salesperson problem. It is concluded that, for this problem at least, the time taken for the solution adaptation process is far shorter than the time taken to find the second optimal solution if the whole process is started again from scratch.
Geographic information systems (GIS) and Artificial Intelligence (AI) techniques were used to develop an intelligent asset management system to optimize road and bridge maintenance. In a transportation context, asset management is defined as a cost-effective process to construct, operate, and maintain physical capital. This requires analytical tools to assist the allocation of resources, including personnel, equipment, materials, and supplies. One such tool, artificial intelligence, is the creation of computer programs that use human-like reasoning concepts to implement and/or improve a process or task. This paper presents “heuristic” or “experience-based” AI methodologies to optimize transportation asset management procedures. Specifically, we outline and illustrate a GIS-based intelligent asset management system using the case study of snow removal for winter road and bridge maintenance in Iowa, USA. The system uses ArcView GIS to access and manage road and bridge data, and ART*Enterprise, an AI shell, for the user interface.
Second-Order Many-Sorted Language is presented here as an algebraic structure and, as such, yields an adequate tool for formalizing an extension of Peano Algebra for spatial object modeling, spatial database information retrieval and manipulation, and map analysis. We demonstrate that this tool provides a unified treatment for several classes of operations on maps, including those presented in Boolean logic models, fuzzy logic models and Bayesian probability models.
The paper describes a novel approach for learning and applying artificial neural network (ANN) models based on incomplete data. A basic novelty in this approach is not to replace the missing part of incomplete data, but to train and apply ANN-based models in a way that makes them able to handle such situations. The root of the idea is inherited from the authors' earlier research on finding an appropriate input-output configuration of ANN models [21]. The introduced concept shows that it is worth purposely impairing the data used for learning, to prepare the ANN model for handling incomplete data efficiently. The applicability of the proposed solution is demonstrated by the results of experimental runs with both artificial and real data. New experiments refer to the modelling and monitoring of cutting processes. Keywords: Neural Networks, Machine Learning, Applications to Manufacturing
Today, a firm’s employees embody a significant source of knowledge, not only by documenting knowledge but also by assisting colleagues with problem solving. Due to decentralisation and business networking aimed at cooperation among companies, the transparency within an enterprise as to which employees are experts in which fields diminishes. The purpose of IT systems for expert recommendation is to give employees easy access to experts within certain subject fields. This paper illustrates the Xpertfinder method developed at the Fraunhofer IPA, which analyses explicit knowledge forms such as e-mail or newsgroup messages of logged-in users to prepare expert profiles. Contrary to common systems, Xpertfinder only uses those parts of a message entirely created by the sender. The Latent Semantic Indexing methodology is used to determine the subject of each message. With the aid of Bayesian belief networks, analysis results are combined into expressive expert characteristics for anonymous display. Measures for the protection of personal data as well as future research fields are addressed.
In order to aid novice users in the proper selection and application of the myriad ever more complicated algorithmic models of coastal processes, the need arises to incorporate recent artificial intelligence technology into them. This paper delineates an intelligent knowledge-processing system for hydrodynamics and water quality modeling that emulates expert heuristic reasoning during the problem-solving process by integrating the pertinent descriptive, procedural, and reasoning knowledge. This prototype system is implemented using a hybrid expert system shell, Visual Rule Studio, which acts as an ActiveX Designer under the Microsoft Visual Basic programming environment. The architecture, solution strategies and development techniques of the system are also presented. The domain knowledge is represented in object-oriented programming and production rules, depending on its nature. Solutions can be generated automatically through its robust inference mechanism. Through custom-built interactive graphical user interfaces, it is capable of assisting model users by furnishing them with much-needed expertise.
This paper is devoted to the notion of informational relevance in qualitative reasoning under uncertainty. We study the notions of uncertainty and relevance, and we present a symbolic approach to dealing with uncertainty. This approach enables us to represent uncertainty in the form of ignorance, as in common-sense reasoning, by using linguistic expressions.
This paper deals with a better method of treating the various linguistic errors and ambiguities encountered when analyzing Korean text automatically. A natural language understanding system with full-sentence analysis would provide a better way to resolve such problems, but the practical application of natural language understanding is still far from being achieved, and full-sentence analysis in its current state is not only difficult to implement but also time-consuming. For those reasons, a Korean grammar checker using partial parsing and the concept of potential governing relationships has been implemented. The paper improves the knowledge base of disambiguation rules while reducing their number on the basis of linguistic analysis. The extended lexical disambiguation rules and the proposed parsing method based on asymmetric relations thus guarantee the accuracy and efficiency of the grammar checker.
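A schematic of rule-based lexical disambiguation over a partial parse, with invented tokens, tags, and rules (real Korean morphology is far richer than this):

    # Candidate readings per token (ambiguous lexicon).
    candidates = {
        "geot": ["dependent-noun", "noun"],
        "i": ["particle", "copula"],
    }
    # Disambiguation rules keyed on the left neighbour's chosen tag:
    # (left_tag, token) -> forced reading.
    rules = {("dependent-noun", "i"): "copula"}

    def disambiguate(tokens):
        chosen = []
        for tok in tokens:
            readings = candidates.get(tok, ["unknown"])
            left = chosen[-1] if chosen else None
            chosen.append(rules.get((left, tok), readings[0]))
        return chosen

    print(disambiguate(["geot", "i"]))  # -> ['dependent-noun', 'copula']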
To improve computational time and recognition accuracy in the framework of stroke order- and stroke number-free on-line handwritten character recognition, we performed a structural analysis of styles of stroke order and stroke connection. From real handwritten characters, chosen from among 2965 Chinese characters, we investigated stroke order and stroke connection information using an automatic stroke-correspondence system. The majority of real characters proved to be written in a fixed stroke order, predominantly the standard one; about 98.1% of characters followed the standard order almost exactly. Almost all stroke connections occurred in the standard order (92.8%), two-stroke connections occurred often, and connections in nonstandard order were very rare. Compared with the full set of expected stroke connections, very few were found to actually occur. Moreover, we show methods for incorporating this information into the completely stroke order- and number-free framework. Large improvements in both computational time and recognition accuracy are demonstrated by experiments.
In this paper, we propose a method to automatically segment out a human's face from a given image consisting of head-and-shoulder views of humans against complex backgrounds in videoconference video sequences. The proposed method consists of two steps: region segmentation and facial region detection. In region segmentation, the input image is segmented using a multiresolution-based watershed algorithm, producing an appropriate set of arbitrary regions. Then, to merge the regions forming an object, we use spatial similarity between two regions, since the regions forming an object share some common spatial characteristics. In facial region detection, the facial regions are identified from the results of region segmentation using a skin-color model. The results of the multiresolution-based watershed segmentation and the facial region detection are integrated to provide facial regions with accurate, closed boundaries. In our experiments, the proposed algorithm detected 87-94% of the faces, including frames from videoconference images and news video. The average run time ranged from 0.23 to 0.34 s per frame. The method has been successfully assessed using several test video sequences from MPEG-4 as well as MPEG-7 videoconferences.
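A typical skin-colour model of the kind used in the facial-region detection step is a fixed range in the YCrCb space; a minimal OpenCV sketch (the thresholds are common literature values and the file name is a placeholder, not the paper's trained model):

    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")              # a head-and-shoulder frame
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

    # Widely used Cr/Cb bounds for skin; luminance Y is left unconstrained.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    skin = cv2.inRange(ycrcb, lower, upper)

    # Regions from a prior (e.g. watershed) segmentation would be kept
    # only if most of their pixels fall inside this skin mask.
    cv2.imwrite("skin_mask.png", skin)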
Social interaction is essential in improving the robot-human interface. Behaviors for social interaction may include paying attention to a new sound source, moving toward it, or keeping face-to-face with a moving speaker. Some sound-centered behaviors are difficult to attain because mixtures of sounds are not handled well or auditory processing is too slow for real-time applications. Recently, Nakadai et al. developed real-time auditory and visual multiple-talker tracking technology by associating auditory and visual streams. The system is implemented on an upper-torso humanoid, and real-time talker tracking is attained with 200 ms of delay by distributed processing on four PCs connected by Gigabit Ethernet. Focus-of-attention is programmable and allows a variety of behaviors. The system demonstrates non-verbal social interaction: a receptionist robot is realized by focusing on an associated stream, and a companion robot by focusing on an auditory stream.
In this paper, we introduce three acoustic confidence measures (CMs) for a domain-specific keyword spotting system. The first is a statistically normalized version of a well-known CM (NCM). The second is a new CM based on an anti-filler concept. Finally, we propose a hybrid CM (HCM) combining the two, formed as a linear combination with weighting parameters. To evaluate the proposed CMs, we constructed a directory service system, a kind of keyword spotting system, applied our CMs to it, and compared their performance with that of a conventional CM (RLH-CM). In our experiments, NCM and HCM show ROC performance superior to the conventional CM. In particular, HCM achieved a 40% reduction in the false-alarm rate.
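The hybrid measure is described as a linear combination of the two component scores; schematically (the weight and the acceptance threshold are free parameters tuned on development data):

    def hybrid_cm(ncm, afcm, w=0.6):
        """HCM as a weighted sum of the normalized CM and the
        anti-filler CM; the weight w is chosen empirically."""
        return w * ncm + (1.0 - w) * afcm

    def accept(score, threshold=0.5):
        # A keyword hypothesis is kept only if its confidence clears the
        # threshold; sweeping the threshold traces out the ROC curve.
        return score >= threshold

    print(accept(hybrid_cm(0.7, 0.4)))  # 0.58 >= 0.5 -> True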
We describe the extension of the well-known model-based diagnosis approach to the location of errors in imperative programs (exhibited on a subset of the Java language). The source program is automatically converted to a logical representation (called the model). Given this model and a particular test case or set of test cases, a program-independent search algorithm determines the minimal sets of statements whose incorrectness can explain incorrect outcomes when the program is executed on the test cases; these sets can then be indicated to the developer by the system. We analyze example cases and discuss empirical results from a Java debugger implementation incorporating our approach. The use of AI techniques is more flexible than traditional debugging techniques such as algorithmic debugging and program slicing.
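At its core, consistency-based fault localisation searches for minimal sets of statements whose assumed incorrectness explains the failed tests. A brute-force sketch over a tiny program (the consistency oracle stands in for the logical model and is invented here):

    from itertools import combinations

    def minimal_diagnoses(statements, consistent):
        """Return the smallest statement sets whose assumed incorrectness
        makes the model consistent with the observed test outcomes;
        consistent(suspects) is the model-checking oracle."""
        diagnoses = []
        for size in range(1, len(statements) + 1):
            for cand in combinations(statements, size):
                if consistent(set(cand)):
                    diagnoses.append(cand)
            if diagnoses:
                break   # keep only smallest-cardinality diagnoses here
        return diagnoses

    # Toy oracle: assuming statement 3 incorrect explains the failure.
    print(minimal_diagnoses([1, 2, 3], lambda s: 3 in s))  # -> [(3,)]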
In this paper, we propose a method for detecting and correcting design faults in a combinational boolean network, based on model-based inference. We focus on design verification for networks with multiple inverter errors. This problem is NP-hard, and it is harder than for usual verification to find a tractable algorithm. We present an effective algorithm consisting of the generation of a logical formula and its comparison with the specification for each cone of the gate implementation. The algorithm incorporates a heuristic search method to avoid unnecessary backtracking, based on the property that parts of the logical formula of each cone must be subformulas of the functional specification if the gate implementation is correct and irredundant.
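The per-cone comparison of implementation against specification can be pictured as a truth-table equivalence check; exhaustive enumeration only works for small cones, which is exactly the blow-up the paper's heuristic search is designed to avoid:

    from itertools import product

    def equivalent(impl, spec, n_inputs):
        """Exhaustively compare a cone's implemented function against
        its specification over all input assignments."""
        return all(impl(*v) == spec(*v)
                   for v in product([0, 1], repeat=n_inputs))

    spec = lambda a, b: a & b
    impl = lambda a, b: a & (1 - b)   # cone with one inverter error on b
    print(equivalent(impl, spec, 2))  # -> False, so the cone is suspect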
We propose a practical technique to compile pattern matching for prioritised overlapping patterns in equational languages into a minimal, deterministic, adaptive matching automaton. Compared with left-to-right matching automata, adaptive ones are smaller and allow shorter matching times; they may improve termination properties as well. Here, space requirements are further reduced by using directed acyclic graph (dag) automata that share all isomorphic subautomata. We design an efficient method to identify such subautomata and hence avoid duplicating their construction while generating the minimised dag automaton.
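Identifying and sharing isomorphic subautomata amounts to hash-consing states during construction; a schematic of the identification step, with an invented tuple representation standing in for the paper's automata:

    def share(state, cache):
        """Hash-consing: a node is identified by its label plus the
        identities of its already-shared successors, so isomorphic
        subautomata collapse onto a single dag node."""
        label, children = state
        key = (label, tuple(share(c, cache) for c in children))
        return cache.setdefault(key, key)

    # Two isomorphic 'match f' subtrees become one shared node:
    sub = ("match f", [("accept r1", []), ("accept r2", [])])
    tree = ("switch", [sub,
                       ("match f", [("accept r1", []), ("accept r2", [])])])
    cache = {}
    share(tree, cache)
    print(len(cache))  # 4 distinct dag nodes instead of 7 tree nodes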
Affective information processing is an advanced research direction in AI, and the affective information in images is the object of this paper. The influence of the histograms of an image's color-vision properties on human emotions is analyzed. Then, based on 1/f fluctuation theory, a two-dimensional 1/f fluctuation model is established and used to analyze the fluctuation characteristics of images, resulting in a new algorithm for objectively evaluating the harmonious feeling of an image. The semantic differential (SD) psychological testing method is then applied to verify the agreement between the objective and subjective evaluations. Finally, the conclusion is drawn that images with 1/f fluctuation are harmonious and beautiful.
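Whether an image exhibits 1/f fluctuation can be checked by fitting the slope of its radially averaged power spectrum on log-log axes; a generic sketch (binning details and the target slope band vary across the literature):

    import numpy as np

    def spectrum_slope(img):
        """Fit log power against log spatial frequency; a slope in the
        1/f regime is read as 'harmonious' in this line of work."""
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = img.shape
        yy, xx = np.indices((h, w))
        r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
        sums = np.bincount(r.ravel(), power.ravel())
        counts = np.bincount(r.ravel())
        freqs = np.arange(1, min(h, w) // 2)     # skip the DC component
        radial = sums[freqs] / counts[freqs]     # radially averaged power
        slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
        return slope

    img = np.random.default_rng(0).random((128, 128))
    print(spectrum_slope(img))   # white noise: slope near 0, not 1/f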
This paper describes how biologically inspired agents can be used to solve complex routing problems incorporating prioritized information flow. These agents, inspired by the foraging behavior of ants, exhibit the desirable characteristics of simplicity of action and interaction. The collection of agents, or swarm system, deals only with local knowledge and exhibits a form of distributed control, with agent communication effected through the environment. While ant-like agents have been applied to the routing problem, previous work has ignored the problems of agent adaptation, multi-path routing, and priority-based routing; these are discussed here.
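The core ant-routing mechanism is a per-node pheromone table reinforced by returning agents and subject to evaporation; a schematic update in the spirit of the paper (the rates and the separate per-priority tables are assumptions, not its exact scheme):

    # pheromone[priority][next_hop]: one table per traffic priority.
    pheromone = {"high": {"B": 0.5, "C": 0.5},
                 "low":  {"B": 0.5, "C": 0.5}}

    def reinforce(priority, next_hop, reward, rho=0.1):
        """Evaporate all entries, then deposit on the hop the ant used;
        routing then samples next hops in proportion to these values."""
        table = pheromone[priority]
        for hop in table:
            table[hop] *= (1.0 - rho)     # evaporation
        table[next_hop] += reward         # deposit by a returning ant

    reinforce("high", "B", reward=0.2)
    total = sum(pheromone["high"].values())
    print({h: v / total for h, v in pheromone["high"].items()})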
The aim is to improve the monitoring and control of district heating systems through the use of agent technology. Current substations are purely reactive devices with no communication capabilities. In order to increase knowledge about the current and future state of a district heating system on the producer side, each substation is equipped with an agent that predicts future consumption and monitors current consumption. The contributions for consumers will be higher quality of service, e.g., better ways to deal with major shortages of hot water, facilitated by the introduction of redistribution agents, and lower costs, since less energy is needed for heat production.
Controlling complex dynamic systems requires skills that operators often cannot completely describe but can demonstrate. This paper describes research into the understanding of such tacit control skills, which has practical motivation with respect to communicating skill to other operators, operator training, and the mechanisation and optimisation of human skill. The paper is concerned with approaches in which, using machine learning techniques, controllers that emulate human operators are generated from examples of control traces; this process is also called "behavioural cloning". The paper reviews ML-based approaches to behavioural cloning, representative experiments, and an assessment of the results. Some recent work is presented with particular emphasis on understanding human tacit skill and generating explanations of how it works, including the extraction of the operator's subconscious sub-goals and the use of qualitative control strategies. We argue for qualitative problem representations and for decomposition of the machine learning problem involved.
Artificial Neural Networks (ANNs) are used in various fields, including the control and analysis of power systems. In its learning process, an ANN establishes the relationship between input variables by updating its weights, and it then provides a good response to nonidentical but similar inputs. This paper proposes the use of a neural network to control the on-load tap changers of two transformers operating in parallel to supply power to a local area. For simplicity, only two transformers are considered, although multiple transformers can be dealt with in a similar manner. A synthetic data set of tap-changer operation sequences was used to train a backpropagation network to decide automatically whether a transformer's on-load tap changer should raise, lower, or hold its position. Preliminary results show that a trained neural network can be successfully used for on-load tap-changing operation of transformers.
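A minimal sketch of this kind of three-way backpropagation classifier, with outputs for raise, lower, and hold (the features, thresholds, and synthetic data below are invented stand-ins for the paper's own synthetic tap-changer sequences):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Invented features: secondary voltage (p.u.) and load current (p.u.).
    X = rng.uniform([0.9, 0.2], [1.1, 1.0], size=(1000, 2))
    # Invented labelling rule standing in for the training targets:
    # low voltage -> raise (2), high voltage -> lower (0), else hold (1).
    y = np.where(X[:, 0] < 0.97, 2, np.where(X[:, 0] > 1.03, 0, 1))

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, y)

    action = {0: "lower", 1: "hold", 2: "raise"}
    print(action[int(clf.predict([[0.95, 0.6]])[0])])  # typically "raise"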