Liang Li
Beijing Institute of Technology | BIT · School of Automation

About

22 Publications
1,208 Reads
262 Citations

Publications (22)
Chapter
This paper studies the targets-attackers-defender scenario on communication graphs with multiple targets, multiple attackers, and one defender who has interception capability. A graph-theoretic approach is employed to study the interactions among agents. For the targets-attackers-defender games, the Nash equilibrium is considered, and the control pol...
Article
A reconnaissance penetration game is a classic target-attacker-defender game. In this game, a reconnaissance UAV (namely, the attacker) tries to avoid the defender and reconnoiter a target as closely as possible, whereas the target tries to escape the attacker with the help of the defender. Practically, the defender is considered constrained to a certain territ...
Chapter
This paper presents a differential backstepping control strategy for the trajectory tracking of a quadrotor aircraft. We introduce a virtual Lyapunov function with a differential term to design the proposed control strategy, which eliminates the high-frequency chattering during the take-off maneuver and largely reduces the number of control param...
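For orientation, the following is a minimal backstepping sketch for a single translational channel of a quadrotor modeled as a double integrator. The gains, time step, and reference trajectory are assumptions chosen for illustration; the paper's controller additionally shapes the virtual Lyapunov function with a differential term to suppress chattering during take-off, which is not reproduced here.

```python
import numpy as np

# Standard integrator backstepping for one translational channel of a quadrotor,
# modeled as a double integrator x1' = x2, x2' = u.  Gains and the reference
# trajectory are illustrative assumptions, not values from the paper.
k1, k2, dt = 2.0, 4.0, 1e-3
x1, x2 = 0.0, 0.0

def ref(t):
    # assumed smooth take-off reference and its first two derivatives
    return 1.0 - np.cos(t), np.sin(t), np.cos(t)

for step in range(10000):
    t = step * dt
    xd, xd_dot, xd_ddot = ref(t)
    e1 = x1 - xd
    alpha = xd_dot - k1 * e1                  # virtual control for the position error
    e2 = x2 - alpha
    alpha_dot = xd_ddot - k1 * (x2 - xd_dot)
    u = alpha_dot - e1 - k2 * e2              # makes V = 0.5*e1^2 + 0.5*e2^2 decrease
    x1, x2 = x1 + dt * x2, x2 + dt * u        # Euler integration of the double integrator

print("tracking error:", abs(x1 - ref(10000 * dt)[0]))
```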
Article
Full-text available
This paper considers a pursuit-evasion game with multiple pursuers and a superior evader. A novel cooperative pursuit strategy is proposed to capture a faster evader while maintaining a formation. First, the admissible initial states, including the position distribution and the minimum number of pursuers required to ensure capture, are obtained based on the idea...
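A toy planar simulation of this setting is sketched below, assuming naive heading rules (the evader flees the pursuers' centroid; each pursuer tracks a fixed slot on a slowly tightening ring around the evader) with made-up speeds and initial positions. With such naive rules a strictly faster evader typically escapes, which is precisely the gap a cooperative strategy and an initial-condition analysis are meant to close; the code does not reproduce the paper's strategy.

```python
import numpy as np

# Toy simulation: several unit-speed pursuers versus a faster evader.
# Heading rules, speeds, and initial positions are assumptions for illustration only.
rng = np.random.default_rng(1)
n, v_p, v_e, dt = 6, 1.0, 1.2, 0.01
slot_angles = 2 * np.pi * np.arange(n) / n        # each pursuer keeps its own slot

evader = np.zeros(2)
pursuers = rng.uniform(-6, 6, size=(n, 2))
ring_r = 5.0

for step in range(30000):
    flee = evader - pursuers.mean(axis=0)                     # evader flees the centroid
    evader = evader + dt * v_e * flee / (np.linalg.norm(flee) + 1e-9)
    ring_r = max(0.5, ring_r - 0.02 * dt * v_p)               # slowly tighten the ring
    slots = evader + ring_r * np.stack([np.cos(slot_angles), np.sin(slot_angles)], axis=1)
    d = slots - pursuers
    pursuers = pursuers + dt * v_p * d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-9)
    if np.linalg.norm(pursuers - evader, axis=1).min() < 0.2:
        print("captured at step", step)
        break
else:
    print("no capture within the horizon")
```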
Article
Decision algorithms are one of the key areas of focus in cluster confrontation research. In this paper, a Targets-Attackers-Defenders (TADs) game that includes an attacking team with [Formula: see text] Attackers, a target team with [Formula: see text] Targets, and a defending team with [Formula: see text] Defenders is considered. In this game, the...
Article
Multi-player pursuit-evasion games are fascinating in both nature and the artificial world. In these games, the purpose of the pursuers is to capture the evader, who attempts to avoid being captured. This article solves pursuit-evasion games with communication constraints and obtains the pursuit and escape strategies of all players and the correspon...
Article
In this paper, the affine formation control problem for multi-agent systems with prescribed convergence time is investigated. Firstly, on the basis of a time-varying scaling function, a distributed continuous control algorithm is designed, under which a stationary affine formation of the nominal configuration can be achieved within a prescri...
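A minimal prescribed-time sketch follows, assuming single-integrator agents on a path graph, a simple offset-based formation, and the scaling function mu(t) = T/(T - t); all of these are assumptions, and the paper's algorithm handles general affine formations of a nominal configuration.

```python
import numpy as np

# Prescribed-time formation sketch for four single-integrator agents on a path graph.
# The scaling function, gains, and offsets are illustrative assumptions.
T = 1.0                                     # prescribed convergence time
k = 2.0                                     # control gain (assumed)
h = np.array([0.0, 1.0, 2.0, 3.0])          # desired formation offsets (assumed)
L = np.array([[ 1, -1,  0,  0],             # graph Laplacian of the path 1-2-3-4
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

def mu(t):
    return T / (T - t)                      # grows unboundedly as t -> T

x, dt = np.array([3.0, -1.0, 0.5, 4.0]), 1e-4
for step in range(int(0.95 * T / dt)):      # stop shortly before T to avoid the singularity
    t = step * dt
    u = -k * mu(t) * (L @ (x - h))          # scaled consensus on the formation error
    x = x + dt * u

print("formation error:", np.linalg.norm(L @ (x - h)))   # should be close to zero
```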
Article
Full-text available
In this paper, by utilizing a fixed-time control technique and a leader-follower control scheme, a three-dimensional (3-D) cooperative guidance law is developed to tackle the impact-angle-constrained simultaneous arrival problem. Firstly, in the line-of-sight (LOS) direction, a guidance command for the leader is put forward to make the impact time error conver...
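The scalar sketch below only illustrates the fixed-time convergence building block such guidance designs rely on, with assumed gains and exponents; it is not the 3-D cooperative guidance law itself.

```python
import numpy as np

# Fixed-time stabilization building block: a scalar error driven by
#   e' = -k1*|e|^a*sign(e) - k2*|e|^b*sign(e),  0 < a < 1 < b,
# settles within a time bound that does not depend on e(0).
# Gains and exponents here are assumed values, not those of the paper.
k1, k2, a, b, dt = 1.0, 1.0, 0.5, 1.5, 1e-4

def settle_time(e0, tol=1e-6):
    e, t = e0, 0.0
    while abs(e) > tol:
        e -= dt * (k1 * abs(e)**a + k2 * abs(e)**b) * np.sign(e)
        t += dt
    return t

# Theoretical upper bound: T <= 1/(k1*(1-a)) + 1/(k2*(b-1)) = 4.0 here,
# regardless of the initial error.
for e0 in (0.1, 10.0, 1e4):
    print(f"e0 = {e0:>8}: settles in {settle_time(e0):.3f} s (bound 4.0 s)")
```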
Article
Naturally, one wonders when and how the Attacker can chase the Target and when and how the Attacker can evade the Defender. In this paper, we first analyze the possibility of winning the game for the Attacker under the framework of games of kind. Then, we present three different stages and corresponding control strategies, as well as the conditions...
Article
Multi-player pursuit–evasion games are crucial for addressing the maneuver decision problem arising in the cooperative control of multi-agent systems. This work addresses a particular pursuit–evasion game with three players, Target, Attacker, and Defender. The Attacker aims to capture the Target, while avoiding being captured by the Defender and th...
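A toy kinematic simulation of the three-player interaction is given below, assuming simple pure-pursuit and pure-evasion headings with made-up speeds, initial positions, and capture radius; the paper instead derives the players' strategies and the conditions under which each side wins.

```python
import numpy as np

# Toy kinematics for the Target-Attacker-Defender game: the Attacker pursues the Target,
# the Defender pursues the Attacker, and the Target flees the Attacker.
# All numbers and heading rules are illustrative assumptions, not the paper's strategies.
def unit(v):
    return v / (np.linalg.norm(v) + 1e-9)

target, attacker, defender = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([5.0, 5.0])
v_t, v_a, v_d, dt, r = 0.8, 1.0, 1.1, 0.01, 0.3

for step in range(20000):
    d_at = unit(target - attacker)                       # direction from Attacker to Target
    target = target + dt * v_t * d_at                    # Target flees along that line
    attacker = attacker + dt * v_a * d_at                # Attacker pursues the Target
    defender = defender + dt * v_d * unit(attacker - defender)  # Defender pursues the Attacker
    if np.linalg.norm(attacker - target) < r:
        print("Attacker captures Target at t =", round(step * dt, 2)); break
    if np.linalg.norm(defender - attacker) < r:
        print("Defender intercepts Attacker at t =", round(step * dt, 2)); break
else:
    print("no capture within the horizon")
```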
Article
In this paper, a policy iteration-based Q-learning algorithm is proposed to solve infinite horizon linear nonzero-sum quadratic differential games with completely unknown dynamics. The Q-learning algorithm, which employs off-policy reinforcement learning (RL), can learn the Nash equilibrium and the corresponding value functions online, using the da...
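As a point of reference, here is a model-based policy-iteration sketch for a two-player linear-quadratic nonzero-sum game; all system and weight matrices are illustrative assumptions. The paper's contribution is the data-driven (off-policy Q-learning) version that does not require knowledge of the dynamics, whereas this sketch only shows the underlying policy-evaluation / policy-improvement fixed point.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative two-player LQ nonzero-sum game: dx/dt = A x + B1 u1 + B2 u2.
# All matrices are made-up examples, not taken from the paper.
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])      # open-loop stable, so K1 = K2 = 0 is admissible
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [0.5]])
Q1, Q2 = np.eye(2), 2.0 * np.eye(2)
R1, R2 = np.array([[1.0]]), np.array([[1.0]])

K1, K2 = np.zeros((1, 2)), np.zeros((1, 2))
for _ in range(50):
    Ac = A - B1 @ K1 - B2 @ K2                  # closed loop under the current policies
    # Policy evaluation: coupled Lyapunov equations Ac' Pi + Pi Ac + Qi + Ki' Ri Ki = 0
    P1 = solve_continuous_lyapunov(Ac.T, -(Q1 + K1.T @ R1 @ K1))
    P2 = solve_continuous_lyapunov(Ac.T, -(Q2 + K2.T @ R2 @ K2))
    # Policy improvement: ui = -Ki x with Ki = Ri^{-1} Bi' Pi
    K1_new = np.linalg.solve(R1, B1.T @ P1)
    K2_new = np.linalg.solve(R2, B2.T @ P2)
    if max(np.linalg.norm(K1_new - K1), np.linalg.norm(K2_new - K2)) < 1e-9:
        K1, K2 = K1_new, K2_new
        break
    K1, K2 = K1_new, K2_new

print("K1 =", K1, "\nK2 =", K2)                 # feedback gains at the fixed point
```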
