Figure 5: Flocking rules examples

Citations

... Concerning heuristic models, Uriarte [17] defined a basic heuristic assuming that both armies continuously deal their starting amount of DPF to each other, until one of the armies is destroyed. Kovarsky and Buro [11] proposed a function that gives more importance to having multiple units with less HP than only one unit with full HP: Life Time Damage 2 (LTD2). ...
... Sustained is an extension of the model presented by Uriarte in [17]. It assumes that the amount of damage an army can deal does not decrease over time during combat (this is obviously an oversimplification, since armies might lose units during a combat, thus decreasing their damage-dealing capability), but it models which units can attack each other at a greater level of detail than TS-Lanchester². ...
... unit type); the target that should be killed first is eliminated from the combat state, and the HP of the survivors is updated (lines 16-25). The model keeps doing this until one army is completely annihilated or it cannot kill more units. ...
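
Taken together, these excerpts describe an iterative "sustained" combat simulation: both armies deal their full DPF for as long as any of their units live, fire is concentrated on one target at a time, and the unit that would die first is removed before the loop repeats. Below is a minimal sketch of that loop. The Unit class, the weakest-HP targeting rule, and all constants are illustrative assumptions, not the paper's actual implementation (which selects targets by unit type and models attack eligibility in more detail).

```python
from dataclasses import dataclass

@dataclass
class Unit:
    hp: float   # current hit points
    dpf: float  # damage per frame this unit deals

def sustained_combat(army_a, army_b):
    """Iteratively remove the unit that would die first, assuming each
    army's total DPF stays constant while any of its units are alive."""
    while army_a and army_b:
        dpf_a = sum(u.dpf for u in army_a)
        dpf_b = sum(u.dpf for u in army_b)
        if dpf_a == 0 and dpf_b == 0:
            break  # neither army can kill more units
        # Time (in frames) until each side's focused target dies; here the
        # "target that should be killed first" is approximated as the
        # enemy unit with the lowest HP.
        t_a = min(u.hp for u in army_a) / dpf_b if dpf_b else float("inf")
        t_b = min(u.hp for u in army_b) / dpf_a if dpf_a else float("inf")
        t = min(t_a, t_b)
        # Apply t frames of focused damage to each side's target, then
        # drop anything that died.
        for army, incoming in ((army_a, dpf_b), (army_b, dpf_a)):
            if incoming:
                min(army, key=lambda u: u.hp).hp -= incoming * t
        army_a[:] = [u for u in army_a if u.hp > 1e-9]
        army_b[:] = [u for u in army_b if u.hp > 1e-9]
    return army_a, army_b
```

For example, sustained_combat([Unit(40, 0.6) for _ in range(5)], [Unit(160, 0.8) for _ in range(2)]) returns the predicted survivors of five marine-like units against two zealot-like units (the stats are made up for illustration).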
Article
Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a forward model is not readily available. This paper presents three forward models for two-player attrition games, which we call "combat models", and shows how they can be used to simulate combat in RTS games. We also show how these combat models can be learned from replay data. We use StarCraft as our application domain. We report experiments comparing how well our combat models predict combat outcomes, and evaluating their impact when used for tactical decisions during a real game.
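
A forward model of the kind this abstract describes can back even very simple tactical decisions. As a hedged illustration, the hypothetical helper below reuses the sustained_combat sketch above to decide whether an engagement looks winnable; the surviving-HP comparison is an assumed evaluation criterion, not one taken from the paper.

```python
import copy

def should_engage(my_units, enemy_units, simulate=sustained_combat):
    """Run the forward model on copies of the combat state and engage
    only if we are predicted to finish with more total HP remaining."""
    mine, theirs = simulate(copy.deepcopy(my_units),
                            copy.deepcopy(enemy_units))
    return sum(u.hp for u in mine) > sum(u.hp for u in theirs)
```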
... Specifically, we propose the use of influence maps (a sister technique to potential fields) to achieve kiting. The approach presented in this paper has been evaluated in the context of StarCraft, and incorporated into the NOVA StarCraft bot (Uriarte 2011) for testing purposes. One of the main disadvantages of potential fields and influence maps is the need to perform parameter tuning. ...
Conference Paper
Full-text available
Influence maps have been successfully used to control the navigation of multiple units. In this paper, we apply the idea to the problem of simulating a kiting behavior (also known as "attack and flee") in the context of real-time strategy (RTS) games. We present our approach and evaluate it in the popular RTS game StarCraft, where we analyze the benefits that our approach brings to a StarCraft-playing bot.
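
To make the idea concrete, here is a minimal sketch of what an influence-map-driven kiting step could look like. The grid representation, the exponential distance decay, and the parameter values are all illustrative assumptions (and exactly the kind of parameters the citing excerpt notes must be tuned); this is not the paper's actual formulation.

```python
import math

def threat_map(width, height, enemies, decay=0.8):
    """Build a grid influence map: each enemy projects its damage output
    onto nearby tiles, decaying exponentially with distance."""
    grid = [[0.0] * width for _ in range(height)]
    for ex, ey, dps, rng in enemies:  # (tile x, tile y, damage, attack range)
        radius = int(rng) + 2  # project a bit beyond the weapon's range
        for y in range(max(0, ey - radius), min(height, ey + radius + 1)):
            for x in range(max(0, ex - radius), min(width, ex + radius + 1)):
                grid[y][x] += dps * decay ** math.hypot(x - ex, y - ey)
    return grid

def kite_step(unit_xy, grid):
    """One 'flee' step: move to the lowest-threat neighboring tile (or stay
    put if the current tile is safest); the unit fires whenever its weapon
    cooldown allows and flees in between."""
    x, y = unit_xy
    candidates = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if 0 <= y + dy < len(grid) and 0 <= x + dx < len(grid[0])]
    return min(candidates, key=lambda p: grid[p[1]][p[0]])
```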
... Strategy games have been used as an experimental testbed in several prior publications, and they appear to be a viable application domain for agent designs (11-21). The majority of the cited sources present different techniques for use in a bot for the popular real-time strategy game StarCraft, among them Bayesian networks and potential fields. ...
Article
Full-text available
This thesis explores the use of Bayesian models in multi-player video game AI, particularly real-time strategy (RTS) game AI. Video games are an in-between of real-world robotics and total simulations, as other players are neither simulated nor under our control. RTS games require strategic (technological, economical), tactical (spatial, temporal) and reactive (unit control) actions and decisions on the go. We used Bayesian modeling as an alternative to (boolean-valued) logic, able to cope with incompleteness of information and (thus) uncertainty. Indeed, incomplete specification of the possible behaviors in scripting, or incomplete specification of the possible states in planning/search, raises the need to deal with uncertainty. Machine learning helps reduce the complexity of fully specifying such models. We show that Bayesian programming can integrate all kinds of sources of uncertainty (hidden state, intention, stochasticity), through the realization of a fully robotic StarCraft player. Probability distributions are a means to convey the full extent of the information we have, and can represent, by turns: constraints, partial knowledge, state-space estimation and incompleteness in the model itself.

In the first part of this thesis, we review the current solutions to problems raised by multi-player game AI, by outlining the types of computational and cognitive complexities in the main gameplay types. From there, we sum up the transversal categories of problems, introducing how Bayesian modeling can deal with all of them. We then explain how to build a Bayesian program from domain knowledge and observations through a toy role-playing game example.

In the second part of the thesis, we detail our application of this approach to RTS AI, and the models that we built. For reactive behavior (micro-management), we present a real-time multi-agent decentralized controller inspired by sensory-motor fusion. We then show how to perform strategic and tactical adaptation to a dynamic opponent through opponent modeling and machine learning (both supervised and unsupervised) from highly skilled players' traces. These probabilistic player-based models can be applied both to the opponent for prediction, and to ourselves for decision-making, through different inputs. Finally, we explain our StarCraft robotic player architecture and give some technical implementation details.

Beyond the models and their implementations, our contributions are threefold: machine-learning-based plan recognition/opponent modeling using the structure of the domain knowledge, multi-scale decision-making under uncertainty, and integration of Bayesian models with a real-time control program.
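
The "sensory-motor fusion" controller mentioned above lends itself to a compact illustration. In the sketch below, each sensory input yields a likelihood over candidate movement directions and, assuming the inputs are conditionally independent given the direction, the fused posterior is their normalized product. The two "sensors" and all numbers are invented for illustration and do not come from the thesis.

```python
import numpy as np

def fuse_direction_beliefs(sensor_likelihoods):
    """Bayesian fusion sketch: multiply per-sensor likelihoods over the
    candidate directions and renormalize to obtain a posterior."""
    posterior = np.ones_like(sensor_likelihoods[0], dtype=float)
    for likelihood in sensor_likelihoods:
        posterior *= likelihood
    return posterior / posterior.sum()

# Illustrative use with 8 compass directions and two hypothetical inputs;
# higher values mean "this direction looks better" to that sensor.
threat    = np.array([0.9, 0.8, 0.5, 0.2, 0.1, 0.2, 0.5, 0.8])  # avoid danger
objective = np.array([0.1, 0.2, 0.5, 0.9, 0.7, 0.3, 0.2, 0.1])  # reach the goal
best_direction = int(np.argmax(fuse_direction_beliefs([threat, objective])))
```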