Target Relative Coordinate: Heading of a missile

Source publication
Article
Full-text available
The paper presents a unique utilization of swarm-based distributed problem-solving techniques to evolve effective battle strategies in a computer-simulated battle scenario in 3D space. We propose a new method to evolve complex and dynamic collective intelligence. Our method is based on a novel behavioral Gene-string representation for unit agen...
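The abstract is cut off before the representation is described. Purely as a generic illustration of evolving behavioral gene strings (not the paper's actual encoding), one can model a gene string as a fixed-length vector of behavior weights evolved by crossover and mutation; the gene layout, operators, and toy fitness below are all assumptions.

```python
# Generic gene-string evolution sketch; every field and operator here is an
# assumption, since the abstract is truncated before the actual encoding.
import random

GENE_LENGTH = 8   # e.g. weights for behaviors: attack, evade, flock, ...

def random_gene():
    return [random.uniform(-1.0, 1.0) for _ in range(GENE_LENGTH)]

def crossover(a, b):
    # Single-point crossover at a random cut position.
    cut = random.randrange(1, GENE_LENGTH)
    return a[:cut] + b[cut:]

def mutate(gene, rate=0.1):
    # Perturb each weight with small Gaussian noise at the given rate.
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in gene]

def evolve(population, fitness, generations=50):
    """Keep the best half each generation, refill with mutated offspring."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: len(population) // 2]
        population = elite + [mutate(crossover(random.choice(elite),
                                               random.choice(elite)))
                              for _ in range(len(population) - len(elite))]
    return max(population, key=fitness)

# Toy fitness: prefer genes whose weights sum toward a target value.
best = evolve([random_gene() for _ in range(40)],
              fitness=lambda g: -abs(sum(g) - 2.0))
print(best)
```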

Contexts in source publication

Context 1
... H B so that it represents the Range, Heading and Bearing of the missile with respect to the target. Using vector mathematics, these attributes can be computed as follows: The heading of an object is directly related to the pitch of the missile. In order to follow a target, a missile has to pitch so that the heading becomes close to the value 1 (Fig. 4). A negative heading means that the missile is moving away from the target; a positive heading means that the missile is getting closer to the target (Fig. 4). A heading is zero if the missile is moving tangentially with respect to the designated ...
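The excerpt stops short of the actual vector formulas, so the following is a minimal sketch under a common convention (an assumption, not the paper's stated definition): range R is the line-of-sight distance, heading H is the cosine of the angle between the missile's velocity and the line of sight (H = 1 head-on, H = 0 tangential, H < 0 moving away, matching the description of Fig. 4), and bearing B is taken as the signed horizontal angle between the two directions.

```python
# Minimal sketch of Range/Heading/Bearing; the bearing convention and the
# function name are assumptions, since the excerpt omits the exact formulas.
import numpy as np

def target_relative_coordinates(missile_pos, missile_vel, target_pos):
    los = np.asarray(target_pos, float) - np.asarray(missile_pos, float)
    rng = np.linalg.norm(los)                             # Range R
    v = np.asarray(missile_vel, float)
    heading = float(v @ los) / (np.linalg.norm(v) * rng)  # Heading H in [-1, 1]
    # Bearing B: signed horizontal angle from the velocity direction to the
    # line of sight (assumed convention), wrapped to (-pi, pi].
    bearing = np.arctan2(los[1], los[0]) - np.arctan2(v[1], v[0])
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi
    return rng, heading, bearing

# A missile flying along +x toward a target dead ahead: H should be 1.
print(target_relative_coordinates([0, 0, 0], [1, 0, 0], [100, 0, 0]))
```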

Citations

... In addition, we offer many practical insights and considerations that separate a real-world application from its theoretical counterpart. The content of this chapter is focused on solving a complex problem arising in the Battle Swarm simulation engine [2], using a combination of CG with an intelligent approach, thus introducing a new geometric intelligence concept. The Battle Swarm engine is constructed to study the relationship between tactical efficiency and formation of collaborative agents by utilizing evolutionary approaches [2]. ...
... The system is a strategic war game in which a battleship defends against a swarm of oncoming missiles (more details are provided in the next section). ...
... Engineers have also become increasingly interested in swarm behavior, since the resulting swarm intelligence can be applied in optimization (e.g. in telecommunication systems) [11], robotics [13][36][37], traffic patterns in transportation systems [34] and military applications [11][13]. The idea of using SI for battle tactics and war games is a relatively new and active area of research [2][11]. However, because of the complexity of these systems, they cannot be modeled efficiently using a conventional swarm simulation engine [1]. ...
Chapter
Full-text available
This chapter presents an application of a swarm intelligence technique to simulate the behavior of complex tactical formations of strategic missiles. The Layered Delaunay triangulation is employed to effectively resolve the collision optimization problem in this dynamic system.
... For a more formal definition and properties, refer to [3]. The Voronoi diagram and its dual structure, the Delaunay triangulation, have been used in a wide variety of applications such as collision detection [14], extraction of crust and skeleton [16], swarm intelligence optimization [2], cluster analysis [7], and mobile robot agent networks [17]. The Voronoi diagram is also a well-known roadmap in the path-planning literature, whose edges provide a maximum-clearance path among a set of disjoint polygonal obstacles. ...
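To illustrate the kind of Delaunay-based pruning mentioned above (collision detection among many agents), the sketch below tests only Delaunay-adjacent agent pairs for proximity instead of all O(n²) pairs. This is a generic simplification, not the chapter's Layered Delaunay method; `close_pairs` and `min_separation` are assumed names.

```python
# Prune collision checks with a Delaunay triangulation: only agents that
# share a triangulation edge are tested for proximity. Generic sketch, not
# the Layered Delaunay method from the cited chapter.
import numpy as np
from scipy.spatial import Delaunay

def close_pairs(positions, min_separation):
    """Return agent index pairs that are Delaunay neighbors and too close."""
    tri = Delaunay(positions)
    pairs = set()
    for simplex in tri.simplices:          # each triangle contributes 3 edges
        for a in simplex:
            for b in simplex:
                if a < b:
                    pairs.add((int(a), int(b)))
    return [(a, b) for a, b in pairs
            if np.linalg.norm(positions[a] - positions[b]) < min_separation]

rng = np.random.default_rng(0)
agents = rng.uniform(0, 10, size=(50, 2))
print(close_pairs(agents, min_separation=0.8))
```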
Article
Full-text available
Path planning remains one of the core problems in modern robotic applications, such as the design of autonomous vehicles and perceptive systems. The basic path-planning problem is concerned with finding a good-quality path from a source point to a destination point that does not result in collision with any obstacles. In this article, we chose the roadmap approach and utilized the Voronoi diagram to obtain a path that is a close approximation of the shortest path satisfying the required clearance value set by the user. The advantage of the proposed technique over alternative path-planning methods lies in its simplicity, versatility, and efficiency.
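As a rough illustration of the roadmap idea described in this abstract, the sketch below builds a Voronoi diagram over point obstacles, discards roadmap vertices that violate a user-set clearance, and runs Dijkstra over the remaining edges. It assumes SciPy and NetworkX and simplifies the article's setting (polygonal obstacles, path refinement) to point obstacles; `voronoi_roadmap_path` and its parameters are illustrative names, not the article's API.

```python
# Voronoi-roadmap planner sketch over point obstacles; a simplification of
# the cited article's method, which handles polygonal obstacles.
import numpy as np
import networkx as nx
from scipy.spatial import Voronoi, cKDTree

def voronoi_roadmap_path(obstacles, start, goal, clearance):
    """Plan a path along Voronoi edges keeping `clearance` from obstacles."""
    vor = Voronoi(obstacles)
    tree = cKDTree(obstacles)

    graph = nx.Graph()
    for (i, j) in vor.ridge_vertices:
        if i == -1 or j == -1:             # skip unbounded ridges
            continue
        p, q = vor.vertices[i], vor.vertices[j]
        # Keep the edge only if both endpoints respect the clearance.
        if tree.query(p)[0] >= clearance and tree.query(q)[0] >= clearance:
            graph.add_edge(i, j, weight=float(np.linalg.norm(p - q)))

    if graph.number_of_nodes() == 0:
        return None

    # Connect start and goal to their nearest usable roadmap vertices.
    def nearest(pt):
        return min(graph.nodes,
                   key=lambda v: np.linalg.norm(vor.vertices[v] - pt))
    s, g = nearest(np.asarray(start)), nearest(np.asarray(goal))

    try:
        idx_path = nx.dijkstra_path(graph, s, g)
    except nx.NetworkXNoPath:
        return None
    return [start] + [tuple(vor.vertices[v]) for v in idx_path] + [goal]

# Example: route between two corners of a field of random point obstacles.
rng = np.random.default_rng(1)
obstacles = rng.uniform(0, 10, size=(40, 2))
print(voronoi_roadmap_path(obstacles, (0.0, 0.0), (10.0, 10.0), clearance=0.5))
```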
Chapter
A basic task of machine learning and data mining is to automatically uncover patterns that reflect regularities in a data set. When dealing with a large database, especially when domain knowledge is not available or very weak, this can be a challenging task. The purpose of pattern discovery is to find non-random relations among events from data sets. For example, the "exclusive OR" (XOR) problem concerns 3 binary variables, A, B and C = A ⊕ B, i.e. C is true when either A or B, but not both, is true. Suppose that, not knowing this is the XOR problem, we would like to check whether the occurrence of the compound event [A=T, B=T, C=F] is just a random happening. If we could estimate its frequency of occurrences under the random assumption, then we know that it is not random if the observed frequency deviates significantly from that assumption. We refer to such a compound event as an event association pattern, or simply a pattern, if its frequency of occurrences significantly deviates from the default random assumption in the statistical sense. For instance, suppose that an XOR database contains 1000 samples and each primary event (e.g. [A=T]) occurs 500 times. The expected frequency of occurrences of the compound event [A=T, B=T, C=F] under the independence assumption is 0.5 × 0.5 × 0.5 × 1000 = 125. Suppose that its observed frequency is 250; we would like to see whether the difference between the observed and expected frequencies (i.e. 250 − 125) is significant enough to indicate that the compound event is not a random happening.

In statistics, a contingency table with the chi-squared statistic (Mills, 1955) is widely used to test the correlation between random variables. Instead of investigating variable correlations, pattern discovery shifts the traditional correlation analysis in statistics at the variable level to association analysis at the event level, offering an effective method to detect statistical associations among events. In the early 90's, this approach was established for second-order event associations (Chan & Wong, 1990). A higher-order pattern discovery algorithm was devised in the mid 90's for discrete-valued data sets (Wong & Yang, 1997). In our methods, patterns inherent in data are defined as statistically significant associations of two or more primary events of different attributes if they pass a statistical test for deviation significance based on residual analysis. The discovered high-order patterns can then be used for classification (Wang & Wong, 2003). With continuous data, events are defined as Borel sets and the pattern discovery process is formulated as an optimization problem which recursively partitions the sample space for the best set of significant events (patterns) in the form of high-dimensional intervals, from which probability density can be estimated by Gaussian kernel fit (Chau & Wong, 1999). Classification can then be achieved using Bayesian classifiers. For data with a mixture of discrete and continuous values (Wong & Yang, 2003), the latter are categorized based on a global optimization discretization algorithm (Liu, Wong & Yang, 2004). As demonstrated in numerous real-world and commercial applications (Yang, 2002), pattern discovery is an ideal tool to uncover subtle and useful patterns in a database.

In pattern discovery, three open problems are addressed. The first concerns learning where noise and uncertainty are present. In our method, noise is taken as inconsistent samples against statistically significant patterns. Missing attribute values are also considered as noise. Using standard statistical hypothesis testing to confirm statistical patterns from the candidates, this method is a less ad hoc approach to discovering patterns than most of its contemporaries. The second problem concerns the detection of polythetic patterns without relying on exhaustive search. Efficient systems for detecting monothetic patterns between two attributes exist (e.g. Chan & Wong, 1990). However, for detecting polythetic patterns, an exhaustive search is required (Han, 2001). In many problem domains, polythetic assessments of feature combinations (or higher-order relationship detection) are imperative for robust learning. Our method resolves this problem by directly constructing polythetic concepts while screening out non-informative pattern candidates, using statistics-based heuristics in the discovery process. The third problem concerns the representation of the detected patterns. Traditionally, if-then rules and graphs, including networks and trees, are the most popular ones. However, they have shortcomings when dealing with multilevel and multiple-order patterns due to the non-exhaustive and unpredictable hierarchical nature of the inherent patterns. We adopt the attributed hypergraph (AHG) (Wang & Wong, 1996) as the representation of the detected patterns. It is a data structure general enough to encode information at many levels of abstraction, yet simple enough to quantify the information content of its organized structure. It is able to encode both the qualitative and the quantitative characteristics and relations inherent in the data set.
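The arithmetic in the XOR example is easy to check in code. The sketch below reproduces the expected count 0.5 × 0.5 × 0.5 × 1000 = 125 and tests the observed count of 250 with a plain standardized residual; the cited papers use an adjusted residual, so the exact statistic and the 1.96 threshold here are simplifying assumptions.

```python
# Worked XOR example: expected frequency of [A=T, B=T, C=F] under
# independence vs. an observed count of 250, tested with a plain
# standardized residual (the cited work uses an adjusted residual).
import math

n = 1000
p = 0.5 * 0.5 * 0.5          # independence assumption for three binary events
expected = p * n             # 0.125 * 1000 = 125
observed = 250

# Values beyond ~1.96 deviate significantly from the random (independence)
# assumption at the 5% level.
residual = (observed - expected) / math.sqrt(expected)
print(expected, residual, abs(residual) > 1.96)   # 125.0, ~11.18, True
```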
Chapter
This chapter spans topics from such important areas as Artificial Intelligence, Computational Geometry and Biometric Technologies. The primary focus is on the proposed Adaptive Computation Paradigm and its applications to surface modeling and biometric processing. The availability of much more affordable storage and high-resolution image-capturing devices has contributed significantly over the past few years to accumulating very large datasets of collected data (such as GIS maps, biometric samples, videos etc.). On the other hand, it has also created significant challenges driven by the higher-than-ever volumes and complexity of the data, which can no longer be resolved through acquisition of more memory, faster processors or optimization of existing algorithms. These developments justified the need for radically new concepts for massive data storage, processing and visualization. To address this need, the current chapter presents an original methodology based on the paradigm of Adaptive Geometric Computing. The methodology enables storing complex data in a compact form, providing efficient access to it, preserving a high level of detail and visualizing dynamic changes in a smooth and continuous manner. The first part of the chapter discusses adaptive algorithms in real-time visualization, specifically in GIS (Geographic Information Systems) applications. Data structures such as Real-time Optimally Adaptive Mesh (ROAM) and Progressive Mesh (PM) are briefly surveyed. The adaptive method Adaptive Spatial Memory (ASM), developed by R. Apu and M. Gavrilova, is then introduced. This method allows fast and efficient visualization of complex data sets representing terrains, landscapes and Digital Elevation Models (DEM). Its advantages are briefly discussed. The second part of the chapter presents an application of the adaptive computation paradigm and evolutionary computing to missile simulation. As a result, patterns of complex behavior can be developed and analyzed. The final part of the chapter marries the concept of adaptive computation with topology-based techniques and discusses their application to the challenging area of biometric computing.