Figure - available via license: CC BY
A hypothetical solution space which has been pruned through the use of a heuristic.


Source publication
Article
The underlying goal of a competing agent in a discrete real-time strategy (RTS) game is to defeat an adversary. Strategic agents or participants must define an a priori plan to maneuver their resources in order to destroy the adversary and its resources, as well as to secure physical regions of the environment. This a priori plan can be gen...

Similar publications

Conference Paper
This paper investigates Monte Carlo Tree Search (MCTS) for the simultaneous move game Tron. MCTS has been successfully applied to many sequential games. We describe two different ways to model the simultaneous move game: as a standard sequential game and as a stacked matrix game. Several variants are presented to adapt MCTS to simulta...
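As background for the adaptations the paper describes, a minimal sketch of the standard UCT selection rule that sequential MCTS builds on (the function and data layout here are hypothetical illustrations, not the paper's implementation):

```python
import math

def uct_score(child_wins, child_visits, parent_visits, c=math.sqrt(2)):
    """UCT value for a child node: average reward plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    exploit = child_wins / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """Pick the (move, wins, visits) tuple with the highest UCT score."""
    return max(children, key=lambda ch: uct_score(ch[1], ch[2], parent_visits))
```

In a simultaneous-move game such as Tron, applying this rule directly requires choosing how to serialize the two players' moves, which is exactly the modeling question (sequential vs. stacked matrix) the paper examines.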

Citations

... In the past half decade, many researchers have taken up the call for AI research within the RTS context. A large portion of this work has been dedicated towards creating fully autonomous agents or army commanders that try to manage all aspects of the game [1,4,5,10,14,15,16]. Usually the intention is for the agent to compete against human players or other autonomous agents [4,5]. ...
... Weissgerber et al. [16] try to reduce the problem into a statistical analysis exercise. Their approach is to use information from past encounters to inform future decisions. ...
Article
So far, the main focus of AI research around RTS games has been on creating autonomous, virtual opponents to compete against human beings or other autonomous players. At the same time, popular commercial titles are increasing their emphasis on micro-management, neglecting development towards further autonomy. This paper proposes a simple reactive agent that is proficient in combat tasks. This agent serves as the basis for individual units, forming the backbone of a multiagent system. Built on this is a novel control scheme designed to aid a human player, deferring all strategic decisions to the controller. Finally, the paper shows the results of experiments designed to evaluate the effectiveness of the proposed control scheme.
Preprint
Current AI systems are designed to solve closed-world problems under the assumption that the underlying world remains more or less the same. However, when dealing with real-world problems, such assumptions can be invalid, as sudden and unexpected changes can occur. To effectively deploy AI-powered systems in the real world, AI systems should be able to deal with open-world novelty quickly. Inevitably, dealing with open-world novelty raises an important question of novelty difficulty. Knowing whether one novelty is harder to deal with than another can help researchers train their systems systematically. In addition, it can also serve as a measure of the performance of novelty-robust AI systems. In this paper, we propose to define the novelty reaction difficulty as the relative difficulty of performing the known task after the introduction of the novelty. We propose a universal method that can be applied to approximate this difficulty. We present approximations of the difficulty using our method and show how they align with the results of evaluating AI agents designed to deal with novelty.
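The abstract defines novelty reaction difficulty as a relative difficulty of performing the known task after the novelty appears. One simple way to illustrate such a relative measure (a hypothetical sketch only; the paper's actual approximation method is not given here) is the fractional performance drop between the pre- and post-novelty settings:

```python
def relative_difficulty(pre_novelty_score, post_novelty_score):
    """Fractional performance drop after a novelty is introduced.

    Hypothetical illustration of a relative-difficulty measure: 0 means
    no degradation, 1 means performance collapsed entirely.
    """
    if pre_novelty_score == 0:
        raise ValueError("pre-novelty score must be nonzero")
    return (pre_novelty_score - post_novelty_score) / pre_novelty_score
```

A larger value for one novelty than another would then indicate that it is harder to react to, which is the kind of ordering the paper uses to compare and train novelty-robust agents.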