Fig 2 - uploaded by Göran Falkman
A bit genome representation of a small Petri net. 

Context in source publication

Context 1
... In this view, a GA is simply a methodology inspired by Mother Nature for performing that search. An advantage of GAs, however, is their ability to search many regions of the hypothesis space simultaneously [19]. In GAs, a population of candidate solutions is formed, where each individual is encoded as a set of genes that together form a genome (or chromosome). In the traditional view, genes are encoded as single bits, and a genome is thus a bit string that can be translated into a solution. More complex gene alphabets, such as natural numbers, can also be used. The population of candidate solutions is evolved over a number of generations, and in each generation, each individual is evaluated on the problem-solving task to establish its fitness. The fitness values are used as the basis for a selection and reproduction phase, in which individuals in the present population are selected to form the basis of the next generation of candidate solutions. There are many different selection mechanisms and genetic operators, such as elitism selection, which retains the best individuals; genetic crossover, which mixes the genomes of two parents to produce one or more offspring; mutation, which changes the value of some genes according to some distribution; and roulette-wheel selection, which selects individuals based on their contribution to the summed fitness of the population.

The task that we are evolving solutions to consists of processing events in order to recognize when certain patterns are instantiated in the data. We have a set of files consisting of events, as well as files listing all intentionally instantiated behavior patterns (patterns may also appear unintentionally). The GA runs for a number of generations, and in each generation, each individual in the population is passed the events in order to recognize situations. After having parsed all events, each individual is evaluated on the task by establishing a fitness value describing its performance. 
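The generational loop just described can be sketched as follows. This is a minimal illustration in Python, assuming a plain bit-string genome and a caller-supplied fitness function; all names and parameter values here are illustrative, not taken from the paper:

```python
import random

# Minimal GA skeleton: evaluate, select, recombine, mutate, repeat.
def evolve(fitness, genome_len=32, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                      # keep the best (elitism)
        while len(next_pop) < pop_size:
            # parents drawn from the fitter half of the population
            p1, p2 = random.sample(scored[:pop_size // 2], 2)
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(genome_len):            # per-gene bit-flip mutation
                if random.random() < 1.0 / genome_len:
                    child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

On a toy problem such as OneMax (fitness = number of 1-bits), this loop quickly drives the population toward the all-ones genome.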
To ensure that good solutions are not lost during evolution, elitism selection is used to clone 5% of the present population when creating a new population. The remaining 95% are created by using roulette-wheel selection to select two parents, which are combined through single-point crossover to give two offspring. Each gene in the genomes of the offspring also has a probability of α/(number of genes) of being mutated, where α is initially set to 2, after which it is varied between 2 and 12 based on a running average of the average fitness of the population: if the average is less than 0.0001, α is increased by 1, and if it is greater than 0.01, α is decreased by 1.

Traditionally, individuals of the first generation in a population are constructed by generating genomes with randomly initialized genes. In our case, a genome translates to a Petri net for recognition, and in order to be able to recognize anything at all, a valid Petri net needs to have: (1) at least one input place, (2) at least one match place, (3) no places being both input and match places, and (4) at least one valid path from an input place to a match place. Thus, instead of merely creating genomes randomly, we argue for generating random genomes until the population consists of valid Petri nets.

In the classical bit-string representation, genomes consist of genes that can be 0 or 1. For Boolean-valued problems it is straightforward to transform a genome into a problem solution. A Petri net for recognition is, however, symbolic in nature (as well as being a graph), which requires us to define two translation functions: one for creating a Petri net from a genome, and one for creating a genome from a Petri net. These functions must be able to translate every aspect of a Petri net to genes. Initially, we consider a genetic representation of static size, and to achieve this we must decide the number of transitions, places, and variables. 
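The selection and adaptive-mutation scheme above can be sketched as follows. Only the 5% elitism, roulette-wheel selection, single-point crossover, the α/(number of genes) mutation probability, and the 0.0001/0.01 thresholds with bounds 2..12 come from the text; the helper names and decomposition are our own:

```python
import random

def roulette_select(pop, fitnesses):
    # Pick an individual with probability proportional to its fitness.
    total = sum(fitnesses)
    r = random.uniform(0, total)
    acc = 0.0
    for ind, f in zip(pop, fitnesses):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def next_generation(pop, fitnesses, alpha):
    n = len(pop)
    ranked = [ind for _, ind in
              sorted(zip(fitnesses, pop), key=lambda t: t[0], reverse=True)]
    new_pop = ranked[:max(1, n // 20)]            # clone top 5% (elitism)
    num_genes = len(pop[0])
    p_mut = alpha / num_genes                     # per-gene mutation probability
    while len(new_pop) < n:
        p1 = roulette_select(pop, fitnesses)
        p2 = roulette_select(pop, fitnesses)
        cut = random.randrange(1, num_genes)      # single-point crossover
        for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
            child = [g ^ 1 if random.random() < p_mut else g for g in child]
            new_pop.append(child)
    return new_pop[:n]

def adapt_alpha(alpha, running_avg_fitness):
    # Raise alpha (more mutation) when fitness stagnates near zero,
    # lower it when the population is making progress.
    if running_avg_fitness < 0.0001:
        alpha = min(12, alpha + 1)
    elif running_avg_fitness > 0.01:
        alpha = max(2, alpha - 1)
    return alpha
```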
For each place, two genes are needed to represent its place type. For each transition we need to represent whether it is a conditional transition (1 gene), its conditional constraint (3 genes for a maximum of 5 constraint types), its constraint value (1 gene), and two optional constraint variables (2 genes per variable if there are at most 4 variables in the target concept). In addition, we must also be able to represent which places are used as input and which places receive output. We chose an approach similar to the connectivity matrices used in [12], [13]: in a transition we use two bits for each place to determine which places serve as input and output. This sums to 2M additional genes for each transition (where M is the number of places). Finally, we also need four bits to describe which variables are used to match a target concept (2 bits for variables gives 4 variables in total, whereas a target concept may consist of only, say, 3 variables). An example of the bit genome representation of a Petri net is given in Figure 2.

There are at least two potential drawbacks of using bit genomes for representing Petri nets. First, all gene subsets representing numbers have a capacity of 2^n, where n is the number of bits required to represent the number. Assume we have five distinct event types at our disposal when modeling situations. This requires three bits, resulting in a capacity for representing eight event types. A transition referring to any of the three "non-existing" event types will, however, never be activated, resulting in 3/8 invalid combinations for each transition. Secondly, Petri nets contain distinct substructures: places and transitions. These structures may be split during crossover without any preference. It can, however, be beneficial, with respect to evolution time, to keep distinct structural parts non-divisible. We therefore suggest using complex genes for the various parts of a Petri net. 
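Under these per-part gene counts, the total bit-genome length follows directly. The helper below is illustrative only; the paper gives the individual counts, not this function:

```python
# Total bit-genome length for the layout described above:
# 2 genes per place type, a fixed block plus a 2M-bit connectivity
# row per transition, and 4 bits for match-concept variable activation.
def genome_length(num_places, num_transitions):
    place_bits = 2 * num_places
    per_transition = (1                  # conditional-transition flag
                      + 3                # constraint type (up to 5 types)
                      + 1                # constraint value
                      + 2 * 2            # two optional variables, 2 bits each
                      + 2 * num_places)  # input/output connectivity row
    variable_bits = 4                    # match-concept variable activation
    return place_bits + num_transitions * per_transition + variable_bits
```

For instance, a net with 3 places and 2 transitions needs 6 + 2·(1+3+1+4+6) + 4 = 40 bits.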
We also suggest disconnecting edge activation from transitions and instead representing edges in a free connectivity matrix. This enables the graphical structure of a Petri net to be kept intact whilst the content of places and transitions changes. To address these issues we suggest using a set of complex genes for representation. Three gene types have been identified: bit genes represent single Boolean values, place genes represent the content of places, and transition genes represent the content of transitions. The content of places and transitions can in this way be kept within allowed limits. Furthermore, we also suggest grouping distinct structural parts into unique chromosomes. In this way we have one chromosome consisting of place genes, one chromosome consisting of transition genes, one chromosome consisting of bit genes (graph connectivity), and one chromosome consisting of bit genes (match concept variable activation). The chromosomes are all part of the genome, and Figure 3 illustrates the genome-chromosome-gene representation.

A genome-chromosome-gene representation, however, raises questions concerning how to perform genetic crossover. In this study we perform a chromosome-wise crossover, i.e., when two genomes are to be combined, a crossover point is selected in each chromosome (the same relative position has been used), and thus each chromosome of an offspring contains parts from both parents. Furthermore, the mutation rate needs to be calculated differently, since the number of genes will be fewer. Instead of having a mutation rate inversely proportional to the number of genes, a mutation rate inversely proportional to the total number of traits is used. Both the bit genome representation and the complex genome representation possibly suffer from two problems. 
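Chromosome-wise crossover at the same relative position in every chromosome can be sketched as follows. Here a genome is modeled simply as a list of chromosomes, each a list of genes; this structure is an assumption for illustration:

```python
import random

# One relative cut position is drawn and applied to every chromosome,
# so each offspring chromosome mixes material from both parents.
def chromosome_crossover(genome_a, genome_b):
    rel = random.random()  # same relative cut for all chromosomes
    child1, child2 = [], []
    for chrom_a, chrom_b in zip(genome_a, genome_b):
        cut = int(rel * len(chrom_a))
        child1.append(chrom_a[:cut] + chrom_b[cut:])
        child2.append(chrom_b[:cut] + chrom_a[cut:])
    return child1, child2
```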
First, the connectivity matrix that successfully represents a target concept will most likely be very sparse, since densely connected Petri nets will likely: (1) be complex with respect to time (which is not allowed), and (2) be too unrestrictive and not provide enough constraints on the domain. Most individuals in an initial population will, however, due to randomization, have approximately half of their edges activated. This can result in many generations being needed for the population to converge on a suitable edge activation. Secondly, the number of places and transitions needs to be decided manually. Perhaps we decide on too few places and transitions, resulting in the target concept not being separable, or perhaps we decide on too many, significantly increasing the time required to find the concept.

To address the first problem, we suggest using arc genes to model individual edges in the graph. Thus, instead of the third chromosome consisting of bit genes describing a connectivity matrix, it consists of a number of arc genes representing edges in the Petri net. We can thus model fewer edges, resulting in sparsely connected Petri nets. To address the second problem, we draw inspiration from Mayo and Beretta [10], who use a dynamic number of edges when evolving Petri nets. Instead of having a fixed number of places, transitions, and arcs, the evolutionary process evolves the graph structure as well as the content of nodes. To incorporate this, we suggest using chromosome mutation in addition to gene mutation. Chromosome mutation consists of two mutation operators: gene addition and gene removal, similar to [10]. The gene addition operator copies a randomly selected gene and puts it at the end of the chromosome. The gene removal operator removes a randomly selected gene from the chromosome. In our experiments, a chromosome mutation rate of 0.01 n_g has been used, where n_g is the number of genes in the chromosome being mutated. 
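The two chromosome-mutation operators can be sketched as follows. The 0.01·n_g trigger rate and the copy-to-end/remove semantics follow the text; everything else (function signature, equal chance of addition versus removal) is an illustrative assumption:

```python
import random

# Variable-size chromosome mutation: occasionally copy a randomly
# selected gene to the end of the chromosome, or remove one.
def mutate_chromosome(chromosome, rate_per_gene=0.01):
    chrom = list(chromosome)
    if random.random() < rate_per_gene * len(chrom):
        if random.random() < 0.5 and chrom:
            chrom.append(random.choice(chrom))       # gene addition
        elif len(chrom) > 1:
            chrom.pop(random.randrange(len(chrom)))  # gene removal
    return chrom
```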
The main objective when working with situation recognition is to achieve high performance on the recognition task. This means that we would like to recognize as many true situations as possible, whilst not classifying false situations as true. In other words, we want to have high recall and high precision ...
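The two performance measures named above can be computed as follows. This is an illustrative sketch in which recognized and true situations are represented simply as hashable identifiers:

```python
# Precision: of the situations an individual reports, how many are true.
# Recall: of the true situations, how many the individual finds.
def precision_recall(reported, true_situations):
    reported, true_situations = set(reported), set(true_situations)
    tp = len(reported & true_situations)
    precision = tp / len(reported) if reported else 0.0
    recall = tp / len(true_situations) if true_situations else 0.0
    return precision, recall
```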

Citations

... Thus, we specifically assess whether and how currently available SAW systems are capable of capturing and tracking the evolution of the inferred situations. Capturing evolution might be supported by explicit evolution models, for instance in the form of evolution templates (as suggested, e.g., in [20], [22]), preconditions, or evolution patterns. For tracking aspects, estimating the probable evolution paths (e.g., as in [20]) and criticality escalation (as has been recognized in [23]) are relevant. ...
Conference Paper
Full-text available
Situation awareness (SAW) denotes a human’s adequate interpretation of the observed environment, which is of prime relevance for human operators in control center applications (e.g., road and air traffic control). Since humans may lose their SAW due to information overload and time criticality, a series of intelligent systems have been proposed that should support human operators in gaining and maintaining SAW, whereby existing approaches have so far focused more on the gaining aspect. However, a comparative evaluation of the distinct approaches has not been the focus up to now, as has been recently acknowledged. Therefore, the present work attempts to fill this gap by providing a comparative evaluation of approaches for gaining and maintaining SAW, thereby focusing on the less studied aspect of support for maintaining SAW. Thus, this survey highlights open issues and directions of further research.
... During 2009, with the already mentioned numerous panels calling out the needs for HLIF, numerous papers are presented. Solutions are presented for HLIF L2 situation assessment [82, 83] and L3 threat assessment [84]. The scenario issues of context [85, 86] and culture [87] are addressed. ...
Data
Full-text available
High-Level Information Fusion (HLIF) utilizes techniques from Low-Level Information Fusion (LLIF) to support situation/impact assessment, user involvement, and mission and resource management (SUM). Given the unbounded analysis of situations, events, users, resources, and missions; it is obvious that uncertainty is manifested by the nature of application requirements. In this panel, we seek discussions on methods and techniques to intelligently assess the problem of HLIF uncertainty analysis to alleviate high-performance statistical computational optimizations, unrealizable mathematical assumptions, or rigorous modeling and problem scoping which lead to time delays, brittleness, and rigidity, respectively. Given the various methods of LLIF and the complexity of HLIF, an interest to the ISIF community is to utilize diverse methods (such as those from other communities) that bridge the LLIF-HLIF gap of uncertainty analysis. To get a qualified and diverse viewpoint, we present a summary of HLIF uncertainty processes towards developing a multisource ontology of uncertainty to support HLIF modeling, methods, and management and systems design.
... During 2009, with the already mentioned numerous panels calling out the needs for HLIF, numerous papers are presented. Solutions are presented for HLIF L2 situation assessment [82, 83] and L3 threat assessment [84]. The scenario issues of context [85, 86] and culture [87] are addressed. ...
Conference Paper
High-Level Information Fusion (HLIF) utilizes techniques from Low-Level Information Fusion (LLIF) to support situation/impact assessment, user involvement, and mission and resource management (SUM). Given the unbounded analysis of situations, events, users, resources, and missions, it is obvious that uncertainty is manifested by the nature of application requirements. In this panel, we seek discussions on methods and techniques to intelligently assess the problem of HLIF uncertainty analysis to alleviate high-performance statistical computational optimizations, unrealizable mathematical assumptions, or rigorous modeling and problem scoping, which lead to time delays, brittleness, and rigidity, respectively. Given the various methods of LLIF and the complexity of HLIF, an interest to the ISIF community is to utilize diverse methods (such as those from other communities) that bridge the LLIF-HLIF gap of uncertainty analysis. To get a qualified and diverse viewpoint, we present a summary of HLIF uncertainty processes towards developing a multisource ontology of uncertainty to support HLIF modeling, methods, and management and systems design. I. PANEL MOTIVATION High-level Information Fusion (HLIF) has been of considerable interest to the fusion community ever since the development of the fusion process models. The low-level versus high-level distinction was made evident in the seminal text on the subject by Waltz and Llinas, Multisensor Data Fusion, in Figure 1.1, "Elements of a basic data fusion system" [1]. The low-level functional processes support target classification, identification, and tracking, while high-level functional processes support situation, impact, and user fusion process refinement. LLIF concerns numerical data (e.g., locations, kinematics, and attribute target types). HLIF concerns abstract symbolic information (e.g., threat, intent, and goals). 
Research is needed in uncertainty analysis over modeling, representations, reasoning, cognition, management, hard-soft integration, and relevance of HLIF processes.
... In previous work we have looked at a Petri net based approach for recognizing situations of temporal and concurrent nature [5], [6]. This approach extends work by [7]– [9] in order to implicitly manage role assignment (which observed object represents which object in a modelled situation) and in order to manage the potentially large space of partial matches between modelled situations and the flow of information. ...
Article
Full-text available
Situation recognition is an important problem within the surveillance domain, which addresses the problem of recognizing a priori defined patterns of interesting situations that may be of concurrent and temporal nature, and which possibly are occurring in the present flow of data and information. There may be many viable approaches, with different properties, for addressing this problem; however, something they must have in common is good efficiency and high performance. In order to determine if a potential solution has these properties, it is a necessity to have access to test and development environments. In this paper we present DESIRER, a development environment for working with situation recognition, and for evaluating and comparing different approaches.
... During 2009, with the already mentioned numerous panels calling out the needs for HLIF, numerous papers are presented. Solutions are presented for HLIF L2 situation assessment [69, 70, 71] and L3 threat assessment [72]. The scenario issues of context [73, 74, 75] and culture [76] are addressed. ...
Conference Paper
The goal of the High-Level Information Fusion (HLIF) Panel Discussion is to present contemporary HLIF advances and developments to determine unsolved grand challenges and issues. The discussion will address the issues between low-level (signal processing and object state estimation and characterization) and high-level information fusion (control, situational understanding, and relationships to the environment). Specific areas of interest include modeling (situations, environments), representations (semantic, knowledge, and complex), systems design (scenario-based, user-based, distributed-agent) and evaluation (measures of effectiveness and empirical case studies). The goal is to address the contemporary operational and strategic issues in information fusion system design.
... In previous work, we have investigated a Petri net based approach for modelling and recognizing situations [5,6]. This approach extends previous work by [13,3,15], to manage the complete space of partial matches, and for automatically managing role assignment. ...
Article
Full-text available
Situation recognition is an important problem to solve for introducing new capabilities in surveillance applications. It is concerned with recognizing a priori defined situations of interest, which are characterized as being of temporal and concurrent nature. The purpose is to aid decision makers with focusing on information that is known to likely be important for them, given their goals. Besides the two most important problems: knowing what to recognize and being able to recognize it, there are three main problems coupled to real time recognition of situations. Computational complexity — we need to process data and information within bounded time. Tractability — human operators must be able to easily understand what is being modelled. Expressability — we must be able to express situations at suitable levels of abstraction. In this paper we attempt to lower the computational complexity of a Petri net based approach for situation recognition.
Conference Paper
Consumer video surveillance systems are now being used not only for security reasons but also for better understanding consumer behaviors. In this paper, we propose a new visual behavior analysis tool for consumer video surveillance systems. This tool can be embedded in consumer video systems to automatically detect and analyze unusual events. The proposed tool is developed using a special type of Gamma Markov chain for background modeling and Petri nets for object classification. We present some experimental results to show the effectiveness of the proposed system, which will lead to new visual behavior analysis tools for consumers.