Citation: Roux, A.; Changey, S.; Weber, J.; Lauffenburger, J.-P. LSTM-Based Projectile Trajectory Estimation in a GNSS-Denied Environment. Sensors 2023, 23, 3025. https://doi.org/10.3390/s23063025
Academic Editors: Chee Kiat Seow, Henrik Hesse, Kai Wen, Soon Yim Tan and Yunjia Wang
Received: 6 February 2023
Revised: 27 February 2023
Accepted: 5 March 2023
Published: 10 March 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Sensors
Article
LSTM-Based Projectile Trajectory Estimation in a GNSS-Denied Environment †
Alicia Roux 1,2,*, Sébastien Changey 1, Jonathan Weber 2 and Jean-Philippe Lauffenburger 2
1 French-German Research Institute of Saint-Louis, 5 Rue du Général Cassagnou, 68300 Saint-Louis, France
2 Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), Université de Haute-Alsace, 2 Rue des Frères Lumière, 68100 Mulhouse, France
* Correspondence: alicia.roux@uha.fr
† Conference on Artificial Intelligence for Defense, DGA Maîtrise de l'Information, 16–17 November 2022, Rennes, France.
Abstract:
This paper presents a deep learning approach to estimate a projectile trajectory in a GNSS-denied environment. For this purpose, Long Short-Term Memories (LSTMs) are trained on projectile fire simulations. The network inputs are the embedded Inertial Measurement Unit (IMU) data, the magnetic field reference, flight parameters specific to the projectile and a time vector. This paper focuses on the influence of LSTM input data pre-processing, i.e., normalization and navigation frame rotation, both of which rescale 3D projectile data to similar variation ranges. In addition, the effect of the sensor error model on the estimation accuracy is analyzed. LSTM estimates are compared to a classical Dead-Reckoning algorithm, and the estimation accuracy is evaluated via multiple error criteria and the position errors at the impact point. Results, presented for a finned projectile, clearly show the Artificial Intelligence (AI) contribution, especially for the projectile position and velocity estimations. Indeed, the LSTM estimation errors are reduced compared to a classical navigation algorithm as well as to GNSS-guided finned projectiles.
Keywords: long short-term memory; projectile trajectory; navigation
1. Introduction
Projectile navigation is mainly based on IMU (Inertial Measurement Unit) and GNSS (Global Navigation Satellite System) measurements due to the high dynamic constraints imposed on projectiles and the low-cost sensor requirements. Classically, IMU and GNSS measurements are combined with Kalman filters to estimate a trajectory. The IMU measurements are integrated to predict the trajectory, which is then corrected by the GNSS receiver measurements [1–4]. Nevertheless, GNSS signals are not always available due to the environment configuration and are vulnerable to jamming and spoofing [5–7]. For this reason, users aim to exclude these measurements for trajectory estimation [8–10].
Moreover, Artificial Intelligence (AI) is increasingly used for defense applications such as surveillance, reconnaissance, tracking or navigation [11–15]. Indeed, AI is an interesting approach to correct model approximations, to limit the influence of incomplete or incorrect measurements or to determine complex models from system data. Therefore, this paper presents an AI-based projectile trajectory estimation method in a GNSS-denied environment using only the embedded IMU and pre-flight parameters specific to the ammunition.
Considering that a trajectory is a time series, AI provides interesting approaches for its estimation. Indeed, Recurrent Neural Networks (RNNs) are perfectly adapted to time series prediction, as illustrated in [16–18]. RNNs contain feedback loops, i.e., they memorize past data through hidden states to predict future data. However, the simplest form of RNN exhibits convergence issues during the training step, such as vanishing or exploding gradient problems. Therefore, another form of recurrent network is considered: the Long Short-Term Memory (LSTM) [19,20]. An LSTM includes a memory cell in addition to hidden states, in order to capture both long-term and short-term time dependencies.
This paper presents an AI-based solution to estimate a projectile trajectory in a GNSS-denied environment. In summary, the main contributions of this work are:
• to detail an LSTM-based approach to estimate projectile positions, velocities and Euler angles from the embedded IMU, the magnetic field reference, pre-flight parameters and a time vector;
• to present BALCO (BALlistic COde) [21], used to generate the dataset. This simulator provides true-to-life trajectories of several projectiles according to the ammunition parameters;
• to investigate different normalization forms of the LSTM input data in order to evaluate their contribution to the estimation accuracy. For this purpose, several LSTMs are trained with different input data normalizations;
• to study the impact of the local navigation frame rotation on the estimation accuracy. Rotating the local navigation frame during the training step yields similar variation ranges along the three axes, especially for the lateral position, which is extremely small compared to the two other axes. This method shares the same goals as normalization but without any information loss;
• to examine the influence of inertial sensor models on estimation accuracy. For this purpose, two inertial sensor error models are studied in order to evaluate their influence on LSTM predictions;
• to compare the LSTM accuracy to a Dead-Reckoning algorithm on finned mortar trajectories. Estimation methods are evaluated through error criteria based on the Root Mean Square Error and the impact point error.
The outline of the paper is as follows. Section 2 presents an introduction to projectile navigation, AI applications in the military field and the LSTM basics. Section 3 focuses on the projectile trajectory dataset and LSTM specifications. Section 4 presents the data pre-processing: the input data normalization and the local navigation frame rotation during training. Finally, Section 5 presents projectile trajectory estimation results by analyzing the influence of the local navigation frame rotation, input data normalization and sensor model on estimation accuracy.
2. Related Work
This part presents the traditional projectile navigation methods and the sensors used,
the applications of artificial intelligence for defense and the LSTM operating principle.
2.1. Model-Based Projectile Trajectory Estimation
Projectile navigation requires sensors able to withstand extreme conditions (acceleration shocks around 50,000 g along the longitudinal axis, high rotation rates around 15,000 rpm for a 155 mm shell) as well as to be relatively small and inexpensive due to the space and cost constraints imposed on projectiles [22–24]. For this purpose, projectile navigation mainly uses IMUs (Inertial Measurement Units), composed of accelerometers, gyrometers and magnetometers, as well as GNSS (Global Navigation Satellite System) data. On the one hand, the IMU measurement integration, performed at high frequency (∼1000 Hz), provides an accurate short-term projectile trajectory estimation but deviates in the long term due to sensor drift [1]. On the other hand, GNSS receivers provide accurate long-term position information at a significantly lower frequency (∼10 Hz) but can be easily spoofed and jammed [5–7]. Due to their evident complementarity, IMUs and GNSS are classically fused by different types of Kalman filters for trajectory estimation, as in [2–4].
One challenge with high-speed spinning projectiles is to accurately estimate attitude. Depending on projectile specifications, different methods can be considered, as in [25,26] by exploiting GNSS signals or in [9,22] by using accelerometers, gyrometers or magnetometers. Nevertheless, gyrometers are commonly omitted because, under cost limits, many of them saturate and do not survive the launch phase. Therefore, magnetometers, less expensive and able to withstand high accelerations, are often used [27–29] with different kinds of Kalman filters (Extended Kalman Filter, Adaptive Extended Kalman Filter, Mixed Extended-Unscented Filter).
As mentioned in the introduction (see Section 1), GNSS signals are easily spoofed and jammed. Therefore, more and more approaches are proposed to estimate a projectile trajectory in a GNSS-denied environment, using only inertial measurements, as in [8,30–32].
2.2. AI-Based Trajectory Estimation
AI methods are increasingly used in the military field, especially for:
• Surveillance and target recognition, where machine learning algorithms applied to computer vision detect, identify and track objects of interest. For example, the Maven project, presented by the US Department of Defense (DoD), focuses on automatic target identification and localization from images collected by aerial vehicles [11].
• Predictive maintenance, to establish the optimal time to change a part of a system, as the US Army does on F-16 aircraft [12].
• Military training, where AI is used in training simulation software to improve efficiency through various scenarios, such as AIMS (Artificial Intelligence for Military Simulation) [13].
• Analysis and decision support, to extract and deal with relevant elements in an information flow, to analyze a field or to predict events. The Defense Advanced Research Projects Agency (DARPA) aims to equip US Army helicopter pilots with augmented reality helmets to support them in operations [14].
• Cybersecurity, as military systems are strongly sensitive to cyberattacks leading to loss and theft of critical information. To this end, the DeepArmor program from SparkCognition uses AI to protect networks, computer programs and data from cyber threats and to detect and block attacks [15].
Although widely integrated in military development programs, AI is relatively uncommonly applied to projectile trajectory estimation, although some kinds of networks, such as recurrent networks, are perfectly adapted to this task.
2.2.1. Recurrent Neural Networks (RNNs)
Recurrent Neural Networks (RNNs) are a class of neural networks perfectly adapted to time series prediction [19,20], as they exploit feedback loops, i.e., an RNN cell output at the previous time is used as input at the current time.
As illustrated in Figure 1, an RNN exploits a time series x = [x_0, x_1, ..., x_τ] ∈ R^{τ×F} of length τ with F input features to predict an output y. Each prediction y_t is determined by one RNN cell from the current input x_t and the previous output h_{t−1}, also called hidden state, which memorizes the past information [19,20].
Figure 1. RNN layer overview, roll and unroll: many-to-many representation.
Vanilla RNN, the simplest RNN structure, suffers from gradient vanishing and explosion during the training step [33,34]. In the gradient vanishing case, backpropagation from the last layer to the first layer leads to a gradient reduction. The first layer weights are then no longer updated during training, and the Vanilla RNN does not learn any features. In the gradient explosion case, gradients become increasingly large, leading to huge weight updates and thus to Vanilla RNN divergence.
Moreover, to predict y_t at timestamp t, the Vanilla RNN uses only the input x_t at the current time and the hidden state h_{t−1} at the previous time, containing short-term past features. For this reason, the Vanilla RNN is ineffective at memorizing long-term past events. To overcome these issues, memory cells are added to the Vanilla RNN, forming the Long Short-Term Memory (LSTM) [19].
2.2.2. Long Short-Term Memory Cell
Based on the recurrent network overview presented above, this paragraph focuses only on the LSTM cell operating principle. An LSTM is composed of several cells to deal with short- and long-term memory. As shown in Figure 2, to predict y_t at timestamp t, an LSTM uses the input data x_t at the current time, the hidden state h_{t−1} at the previous time to memorize short-term past events, and the memory cell state c_{t−1} at the previous time to memorize long-term past events. An LSTM cell is composed of three gates:
– the forget gate filters, through a Sigmoid function σ, the data contained in the concatenation of x_t and h_{t−1}. Data are forgotten for values close to 0 and are memorized for values close to 1. The forget gate model is:

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)   (1)

– the input gate extracts relevant information from [h_{t−1}, x_t] by applying a Sigmoid σ and a Tanh function. The input gate is represented by the following:

i_t = σ(W_i · [h_{t−1}, x_t] + b_i),   C̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c).   (2)

The memory cell c_t is updated from the forget gate f_t, the input gate i_t and C̃_t, to memorize pertinent data:

c_t = f_t × c_{t−1} + i_t × C̃_t   (3)

– the output gate defines the next hidden state h_t containing information about previous inputs. The hidden state h_t is updated with the memory cell c_t normalized by a Tanh function and [h_{t−1}, x_t] normalized by a Sigmoid function:

h_t = σ(W_h · [h_{t−1}, x_t] + b_h) × tanh(c_t)   (4)

with W_(.) and b_(.) the different gate weights and biases.
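As a minimal illustration, Equations (1)–(4) can be implemented directly with NumPy. This is a sketch, not the paper's implementation; the weight shapes and the random initialization in the usage example are assumptions chosen for demonstration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step implementing Equations (1)-(4).

    W and b hold the gate weights/biases keyed by 'f', 'i', 'c', 'h';
    each W[k] has shape (hidden, hidden + features).
    """
    z = np.concatenate([h_prev, x_t])                   # [h_{t-1}, x_t]
    f_t = sigmoid(W['f'] @ z + b['f'])                  # forget gate, Eq. (1)
    i_t = sigmoid(W['i'] @ z + b['i'])                  # input gate, Eq. (2)
    c_tilde = np.tanh(W['c'] @ z + b['c'])              # candidate memory, Eq. (2)
    c_t = f_t * c_prev + i_t * c_tilde                  # memory update, Eq. (3)
    h_t = sigmoid(W['h'] @ z + b['h']) * np.tanh(c_t)   # output gate, Eq. (4)
    return h_t, c_t

# Usage with arbitrary random weights: hidden size 4, 16 input features
rng = np.random.default_rng(0)
H, F = 4, 16
W = {k: rng.normal(size=(H, H + F)) * 0.1 for k in 'fich'}
b = {k: np.zeros(H) for k in 'fich'}
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_cell_step(rng.normal(size=F), h, c, W, b)
```

Since h_t is the product of a Sigmoid output and a Tanh output, each hidden state component is bounded in (−1, 1).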
Figure 2. LSTM cell operating principle, composed of three gates.
Currently, few works have applied recurrent networks in the military context. They are commonly used for aircraft navigation [16,17], vehicle trajectory estimation [35], maritime route prediction [36] or human motion prediction [18,37]. It is, however, interesting to mention [38], which focused on projectile trajectory estimation based on LSTMs trained from incomplete and noisy radar measurements.
3. Problem Formulation
This part presents the projectile fire simulation dataset generated by BALCO (BALlistic
COde) [21] and the LSTM input data used to estimate projectile trajectories.
3.1. The Projectile Trajectory Dataset BALCO (BALlistic COde)
Results reported in this paper exploit a projectile fire dataset generated by BALCO [21,39], a high-fidelity projectile trajectory simulator based on equations of motion with six to seven degrees of freedom, discretized by a seventh-order Runge-Kutta method. BALCO enables the consideration of different Earth models (flat, spherical, ellipsoidal), different atmospheric models (standard atmosphere or user-defined) and different aerodynamic models (axisymmetric or non-axisymmetric projectiles, aerodynamic coefficients described in lookup tables or polynomials). BALCO accuracy is validated against the reference program PRODAS (Projectile Rocket Ordnance Design and Analysis System) by considering different projectile types, various initial conditions and different meteorological conditions.
In order to estimate projectile trajectories, three reference frames are considered:
– The local navigation frame n (black frame in Figure 3), tangent to the Earth and assumed fixed during the projectile flight.
– The body frame b (red frame in Figure 3), an ideal hypothetical frame placed exactly at the projectile gravity center, in which the IMU would have to be placed to provide perfect inertial measurements.
– The sensor frame s (green frame in Figure 3), rigidly fixed to the projectile and misaligned with the projectile gravity center, considered as the frame where the inertial measurements are performed.
Figure 3. Navigation frames (black: local navigation frame n; red: body frame b; green: sensor frame s) and flight parameters for a finned projectile (fin angle δf, initial velocity v0, barrel elevation angle α).
Results reported in this paper are applied to the estimation of a finned mortar trajectory. The finned projectile dataset, generated by BALCO, includes 5000 fire simulations, where each one includes:
• the inertial measurements in the body frame b and in the sensor frame s, i.e., gyrometer ω ∈ R^3, accelerometer a ∈ R^3 and magnetometer h ∈ R^3 measurements. Three kinds of inertial measurements are available:
– the Perfect IMU measurements, performed in the body frame b (red frame in Figure 3), in the ideal case where all three inertial sensors are perfectly aligned with the projectile gravity center and where no sensor error model is taken into account, providing ideal inertial measurements, i.e., without any noise or bias. These measurements are not exploited in this work but are necessary to generate realistic inertial data.
– the IMU measurements, performed in the sensor frame s (green frame in Figure 3): issued from the Perfect IMU measurements to which a sensor error model is added. This error model, specific to each sensor axis, includes a misalignment, a sensitivity factor, a bias and a noise (assumed zero-mean white Gaussian noise). Thus, this measurement accurately models an IMU embedded in a finned projectile.
– the IMU DYN measurements, performed in the sensor frame s (green frame in Figure 3): issued from the IMU measurements, to which a transfer function is added for each sensor. For each sensor, the IMU DYN measurements are modeled by:

y_sensor,IMU DYN = 1 / (1 + a·s + b·s²) · y_sensor,IMU   (5)

with y_sensor,IMU the IMU measurements of the considered sensor, y_sensor,IMU DYN the corresponding IMU DYN measurements, and a and b the coefficients of the sensor transfer function defined via BALCO. This sensor model captures the response of the three sensors over the operating range.
• the magnetic field reference h^n ∈ R^3 in the local navigation frame n, assumed constant during the projectile flight.
• the flight parameters, which are, in the case of a finned projectile, the fin angle δf to stabilize the projectile, the initial velocity v0 at barrel exit and the barrel elevation angle α, which is relatively large in order to obtain ballistic trajectories with short ranges.
• a time vector kΔt, where Δt is the IMU sampling period: Δt = 1 × 10^−3 s.
• the reference trajectory, i.e., the projectile position p ∈ R^3, velocity v ∈ R^3 and Euler angles Ψ ∈ R^3 in the local navigation frame n at the IMU frequency. This trajectory is used to evaluate the LSTM accuracy and to compute errors.
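The 2nd-order sensor dynamics of Equation (5) can be simulated with SciPy's linear-system tools. The coefficients a and b below are hypothetical placeholders (the actual values are defined via BALCO and not given here); the step input stands in for one sensor axis of the IMU signal.

```python
import numpy as np
from scipy import signal

# Hypothetical coefficients a, b of the 2nd-order sensor model, Eq. (5)
a, b = 2e-3, 1e-6
H = signal.TransferFunction([1.0], [b, a, 1.0])   # 1 / (1 + a*s + b*s^2)

t = np.arange(0, 0.05, 1e-3)          # IMU sampling period dt = 1 ms
y_imu = np.ones_like(t)               # step input on one sensor axis
t_out, y_dyn, _ = signal.lsim(H, U=y_imu, T=t)    # IMU DYN response
```

Since the transfer function has unit DC gain, the IMU DYN output settles to the IMU value; the model only shapes the transient response over the operating range.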
3.2. Data Characteristics and LSTM Requirements
The LSTM predictions at time t are obtained from three-dimensional input data of size (Batch_size, Seq_len, In_Features), with Batch_size the number of sequences considered, Seq_len the number of time steps in each sequence and In_Features the number of features describing each time step. The input features are In_Features = (M, P, T) ∈ R^16, as follows:
- M ∈ R^12: the inertial data, including the IMU or IMU DYN measurements in the sensor frame s and the reference magnetic field h^n ∈ R^3 in the local navigation frame n presented in Section 3.1,
- P ∈ R^3: the flight parameters. In the case of a finned projectile, the three flight parameters are the fin angle δf, the initial velocity v0 and the barrel elevation angle α,
- T ∈ R^1: the time vector, such that T = kΔt with k the considered time step and Δt the IMU sampling period.
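As an illustration, an input tensor with this layout can be assembled as follows. The sizes match those reported later (batch of 64, sequences of 20 timesteps); the zero-filled M and P arrays are placeholders for real simulation data, and the exact feature ordering is an assumption.

```python
import numpy as np

batch_size, seq_len = 64, 20   # sizes used for training in this paper
dt = 1e-3                      # IMU sampling period [s]

M = np.zeros((batch_size, seq_len, 12))   # inertial data + magnetic field reference
P = np.zeros((batch_size, seq_len, 3))    # fin angle, initial velocity, elevation angle
T = (np.arange(seq_len) * dt)[None, :, None].repeat(batch_size, axis=0)  # T = k*dt

X = np.concatenate([M, P, T], axis=-1)    # (Batch_size, Seq_len, In_Features=16)
```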
Various LSTMs are trained, differing in the output features learned. Indeed, according to the input data of size (Batch_size, Seq_len, In_Features), the LSTMs estimate a projectile trajectory modeled by a vector of size (Batch_size, Out_Features), where Out_Features represents the number of output features. Out_Features is 9 or 3, depending on the type of LSTM considered. Thus, the following notations are used:
- LSTM_ALL: trained to estimate 9 output features, which are the projectile position p ∈ R^3, velocity v ∈ R^3 and Euler angles Ψ ∈ R^3 in the navigation frame n.
- LSTM_POS: trained to estimate 3 output features, which are the projectile position p ∈ R^3 expressed in the navigation frame n.
- LSTM_VEL: trained to estimate 3 output features, which are the projectile velocity v ∈ R^3 expressed in the navigation frame n.
- LSTM_ANG: trained to estimate 3 output features, which are the projectile Euler angles Ψ ∈ R^3 in the navigation frame n.
4. LSTM Input Data Preprocessing
This section details the two input data pre-processing methods in order to study their influence on estimation accuracy. To manage the different projectile dynamics along the three navigation axes, two data preprocessing methods are investigated: the LSTM input data normalization and the local navigation frame rotation, both rescaling each component of a 3D quantity to similar variation ranges. To this end, the LSTMs presented in Section 3.2 are declined in 8 versions, reported in Table 1, to study the influence of the Min/Max MM(.) and Standard Deviation STD(.) normalizations and of the local navigation frame rotation on estimation accuracy.
Table 1. Version specifications: influence of the normalization and the local navigation frame rotation.

Name   Normalization              Rotation
V1     No                         No
V2     MM(T), MM(M), MM(P)        No
V3     MM(T, M, P)                No
V4     STD(T), STD(M), STD(P)     No
V5     STD(T, M, P)               No
V6     No                         Yes
V7     MM(T, M, P)                Yes
V8     STD(T, M, P)               Yes
4.1. LSTM Input Data Normalization
Network input data normalization is a data preprocessing approach to rescale the input data to similar variation ranges while preserving the same distribution and ratios as the original data. Input data normalization is used to prevent some input features from having a greater influence than others during training and to improve gradient backpropagation convergence. In other words, input data with different ranges can lead to lower network estimation performance: small input values have little influence during prediction, so the network weights are updated according to the large input values, which can lead to significant weight updates and therefore to slower network convergence or to convergence to a local minimum.
According to Table 1, two kinds of normalization are used:
• Min/Max normalization MM(.): x_MM = (x − x_min) / (x_max − x_min), with x_max and x_min the maximum and minimum of x.
• Standard Deviation normalization STD(.): x_STD = (x − µ_x) / σ_x, with x the quantity to normalize, µ_x = µ(x) its mean and σ_x = σ(x) its standard deviation. Thus, x_STD has a zero mean and a standard deviation of one.
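The two normalizations above can be sketched in a few lines of NumPy; this is a generic illustration, not the paper's code, and the sample vector is arbitrary.

```python
import numpy as np

def minmax_normalize(x, x_min, x_max):
    """Min/Max normalization MM(.): rescales x into [0, 1]."""
    return (x - x_min) / (x_max - x_min)

def std_normalize(x, mu, sigma):
    """Standard Deviation normalization STD(.): zero mean, unit std."""
    return (x - mu) / sigma

# Usage on an arbitrary sample
x = np.array([1.0, 2.0, 3.0, 4.0])
x_mm = minmax_normalize(x, x.min(), x.max())
x_std = std_normalize(x, x.mean(), x.std())
```

Both transforms are affine, so they preserve the distribution shape and the ratios between deviations, as stated above.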
In order to study the impact of input data normalization on estimation accuracy, versions V2 and V4 use normalization per feature, while versions V3, V5, V7 and V8 use normalization over all features. Moreover, the normalization factors x_max, x_min, µ_x and σ_x are computed before the training step on the training dataset, as follows:

x_max = (1/N_sim) Σ_{i=1}^{N_sim} max{χ_i},   x_min = (1/N_sim) Σ_{i=1}^{N_sim} min{χ_i}   (6)

µ_x = (1/N_sim) Σ_{i=1}^{N_sim} µ(χ_i),   σ_x = (1/N_sim) Σ_{i=1}^{N_sim} σ(χ_i)   (7)

with N_sim the number of simulations in the training dataset and χ_i the considered quantities of simulation i, which are χ_i = (M, P, T) for versions V3, V5, V7 and V8, and χ_i = M or χ_i = P or χ_i = T for versions V2 and V4.
4.2. Local Navigation Frame Rotation
The local navigation frame rotation aims to rotate the local navigation frame n by a fixed angle γ (local rotated navigation frame nγ), such that x_γ = R_γ·x, with x ∈ R^3 defined in n, x_γ ∈ R^3 expressed in nγ and R_γ ∈ SO(3) the transition matrix from the local navigation frame n to the local rotated navigation frame nγ, as follows:

R_γ = [cos(γ) −sin(γ) 0; sin(γ) cos(γ) 0; 0 0 1] · [cos(γ) 0 sin(γ); 0 1 0; −sin(γ) 0 cos(γ)] · [1 0 0; 0 cos(γ) −sin(γ); 0 sin(γ) cos(γ)].   (8)
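Reading Equation (8) left to right as the product of elementary rotations about the z-, y- and x-axes, R_γ can be built and checked to be a valid rotation matrix:

```python
import numpy as np

def rotation_gamma(gamma):
    """R_gamma = Rz(gamma) @ Ry(gamma) @ Rx(gamma), as in Eq. (8)."""
    c, s = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about z
    Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # rotation about y
    Rx = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # rotation about x
    return Rz @ Ry @ Rx

# Usage with an arbitrary angle (the paper's gamma value is not specified)
R = rotation_gamma(np.deg2rad(30.0))
```

As a product of three rotations, R_γ is orthogonal with determinant 1, i.e., it indeed belongs to SO(3).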
The navigation frame rotation modifies the three components of a quantity x ∈ R^3 expressed in the navigation frame n so that they have a similar order of amplitude, i.e., it provides similar variation ranges of a quantity along the three axes. This approach is used to ensure that the LSTMs adequately estimate a quantity with small magnitudes along one axis, even though the variations are considerably larger along the other two axes.
As illustrated in Figure 4, the variation range of the projectile position along the y-axis is significantly smaller than along the x- and z-axes: the position variation ranges along the x- and z-axes are around several kilometers, while the position along the y-axis varies by a few meters. The position expressed in the local rotated navigation frame nγ thus exhibits similar amplitudes along the three axes.
Figure 4. (a) Local navigation frame n and local rotated navigation frame nγ. (b) Projectile position in the local navigation frame n (blue dashed line) and in the local rotated navigation frame nγ (red solid line).
All quantities expressed in the local navigation frame are rotated, i.e., the projectile position p, velocity v and Euler angles Ψ. The angle γ is fixed for all trajectories in the dataset, is the same for the position, velocity and Euler angles, and is determined according to the data used in this paper, in particular to obtain similar magnitudes for the three position components, as for the velocity. During the training step, labels are expressed in the local rotated navigation frame nγ and the LSTMs predict trajectories in nγ. During the test step, the LSTMs estimate projectile trajectories in the local rotated navigation frame nγ, and the estimates are then moved back to the initial local navigation frame n.
5. Results and Analysis
This part of the paper reports LSTM results applied to finned projectiles. A first section
focuses on the influence of the normalization and the local navigation frame rotation on the
estimation accuracy for short training. A second section validates LSTMs on a large dataset
by focusing on the impact of the local navigation frame rotation and the IMU model.
The LSTMs' performances are evaluated in comparison to a classical navigation algorithm, i.e., a Dead-Reckoning. This algorithm integrates gyrometer ω and accelerometer a measurements to estimate, at each discrete time k:

R_k = R_{k−1} [ω_k Δt]_×,
v_k = v_{k−1} + (R_{k−1} a_k + g) Δt,
p_k = p_{k−1} + v_{k−1} Δt + (1/2)(R_{k−1} a_k + g) Δt²,   (9)

with R_k ∈ SO(3) the rotation matrix from the sensor frame s to the local navigation frame n, g ∈ R^3 the constant gravity vector, p_k ∈ R^3 and v_k ∈ R^3, respectively, the projectile position and velocity, and [.]_× the skew-symmetric matrix operator. This algorithm is generally used in the prediction step of Kalman filters to estimate the trajectory, as presented in [2–4,8,24,40].
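A minimal sketch of one Dead-Reckoning update follows. Two assumptions are made that the paper's compact notation leaves open: the attitude update is read as the first-order approximation R_k = R_{k−1}(I + [ω_k Δt]_×), and the frame n is taken NED-like with g = [0, 0, 9.81] so that a stationary accelerometer measures −g.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x such that skew(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def dead_reckoning_step(R, p, v, omega, a, dt,
                        g=np.array([0.0, 0.0, 9.81])):
    """One update of Eq. (9): integrate gyrometer omega and accelerometer a."""
    R_new = R @ (np.eye(3) + skew(omega * dt))   # first-order attitude update
    a_n = R @ a + g                              # specific force rotated to frame n
    p_new = p + v * dt + 0.5 * a_n * dt**2       # position, uses v_{k-1}
    v_new = v + a_n * dt                         # velocity
    return R_new, p_new, v_new

# Usage: stationary projectile whose accelerometer reads -g stays at rest
R, p, v = np.eye(3), np.zeros(3), np.zeros(3)
R, p, v = dead_reckoning_step(R, p, v, np.zeros(3),
                              np.array([0.0, 0.0, -9.81]), 1e-3)
```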
5.1. Impact of the Input Data Normalization and the Local Navigation Frame Rotation
This section reports the estimated trajectories of a finned projectile according to LSTM_ALL, LSTM_POS, LSTM_VEL and LSTM_ANG, each declined in the 8 versions V1–V8 presented in Table 1. The training parameters are summarized in Table 2.
Table 2. Training characteristics of LSTM_{ALL,POS,VEL,ANG},V1−8.

Dataset      Training dataset: 100 simulations (validation: 10 simulations)
             Test dataset: 20 simulations
Input data   Batch size: 64 (Seq_len: 20 timesteps)
             Input features: In_Features = (M, P, T) ∈ R^16 (with IMU measurements)
Training     Cost function: Mean Squared Error (MSE)
             Optimization algorithm: ADAM [41] (learning rate: 1 × 10^−4)
             LSTM layers: 2 (hidden units: 64–128)

Seq_len corresponds to 20 samples, representing a window of 0.02 s, as the sensor sampling period is Δt = 1 × 10^−3 s. This parameter is adjusted according to the input data used.
5.1.1. Qualitative Validation: One Finned Projectile Fire Simulation
Figure 5 focuses on the estimated positions and orientation for one projectile shot. For readability reasons, three estimation methods are first compared: the Dead-Reckoning (9), LSTM_ALL,V1 and LSTM_ALL,V6 (local navigation frame rotation).
Figure 5. Estimated projectile position [m] and Euler angles [rad] obtained by the Dead-Reckoning (green), LSTM_ALL,V1 (blue), LSTM_ALL,V6 (yellow) and the reference (red).
As shown in Figure 5, the positions estimated by the LSTMs are significantly more accurate than the Dead-Reckoning ones. Nevertheless, the LSTMs are only accurate in estimating the pitch and yaw angles. Errors in the roll angle estimation are due to the finned projectile rotation rate: the LSTMs fail to fully capture the roll angle dynamics. Moreover, the local navigation frame rotation improves the projectile position estimation but slightly degrades the pitch angle estimation.
5.1.2. Quantitative Evaluation: Analysis on the Whole Test Dataset
To validate the previous observations, LSTM_{ALL,POS,VEL,ANG},V1−8 are evaluated on the N_sim simulations of the test dataset according to two criteria based on the Root Mean Square Error (RMSE), defined as RMSE_x = sqrt((1/N) Σ_{k=1}^{N} (x_{k,ref} − x̂_k)²), with x̂ the estimate, x_ref the reference and N the number of samples for one simulation.
The two evaluation criteria are as follows:
• Success Rate C1: the number of simulations where the LSTM RMSE is strictly smaller than the Dead-Reckoning one.
• Error Rate C2: the percentage of LSTM error compared to the combined LSTM and Dead-Reckoning errors.

C1 = Σ_{k=1}^{N_sim} (RMSE_LSTM < RMSE_DR),   C2 = (100/N_sim) Σ_{k=1}^{N_sim} RMSE_LSTM / (RMSE_LSTM + RMSE_DR)   (10)
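The RMSE and the two criteria of Equation (10) can be computed as follows; this is a generic sketch operating on per-simulation RMSE arrays, with arbitrary toy values in the usage example.

```python
import numpy as np

def rmse(x_ref, x_hat):
    """Root Mean Square Error over the N samples of one simulation."""
    return np.sqrt(np.mean((np.asarray(x_ref) - np.asarray(x_hat)) ** 2))

def success_rate(rmse_lstm, rmse_dr):
    """C1: number of simulations where the LSTM RMSE is strictly smaller."""
    return int(np.sum(np.asarray(rmse_lstm) < np.asarray(rmse_dr)))

def error_rate(rmse_lstm, rmse_dr):
    """C2: mean percentage of LSTM error relative to LSTM + DR errors."""
    rmse_lstm, rmse_dr = np.asarray(rmse_lstm), np.asarray(rmse_dr)
    return 100.0 / len(rmse_lstm) * np.sum(rmse_lstm / (rmse_lstm + rmse_dr))

# Toy example with two simulations
c1 = success_rate([1.0, 1.0], [3.0, 1.0])
c2 = error_rate([1.0, 1.0], [3.0, 1.0])
```

By construction, C2 is below 50% whenever the LSTM errors are on average smaller than the Dead-Reckoning ones.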
The position, velocity and orientation success rates C1 and error rates C2 are presented in Figures 6–8. Figures 5–8 have been modified for better readability.
Figure 6. Position analysis: (a) Success Rate C1, (b) Error Rate C2 (in %) of LSTM_ALL,V1−V8 and LSTM_POS,V1−V8.
Position analysis results (see Figure 6): The LSTMs outperform the Dead-Reckoning for position estimation, especially along the y-axis. The normalizations affect the position estimates differently: STD(T), STD(M), STD(P) (V4) and MM(T, M, P) (V3) are not appropriate for this application. In addition, normalization leads to lower accuracy as it implies a loss of information. Finally, rotating the local navigation frame improves the accuracy according to C2 along the three axes.
Velocity analysis results (see Figure 7): As previously, the LSTMs clearly outperform the Dead-Reckoning for velocity estimation. The specialized networks LSTM_VEL are slightly better than LSTM_ALL. The STD normalization over all features (V5) exhibits the best results among the different normalization options investigated, especially for the velocity along the z-axis. Moreover, rotating the local navigation frame (V6) significantly improves the projectile velocity estimation along all three axes.
Figure 7. Velocity analysis: (a) Success Rate C1, (b) Error Rate C2 (in %) of LSTM_ALL,V1−V8 and LSTM_VEL,V1−V8.
Euler angles analysis results (see Figure 8): The LSTMs deteriorate the roll angle φ estimation compared to the Dead-Reckoning, but accurately estimate the yaw angle ψ. As previously, the STD(T, M, P) (V5) normalization of LSTM_ALL exhibits the best performance for the three Euler angle estimations, as does the local navigation frame rotation.
Figure 8. Orientation analysis: (a) Success Rate C1, (b) Error Rate C2 (in %) of LSTM_ALL,V1−V8 and LSTM_ANG,V1−V8.
In summary, the LSTM end-to-end estimation is particularly appropriate for projectile position and velocity estimation. Moreover, from this evaluation study, it can be concluded that specialized networks do not significantly improve the estimation accuracy and that the STD(T, M, P) (V5) normalization is more appropriate for estimating a projectile trajectory than the other normalizations. Finally, rotating the local navigation frame is an efficient method to optimize projectile position and velocity estimation.
Sensors 2023,23, 3025 12 of 18
5.2. Impact of Inertial Measurement Type and Local Navigation Frame Rotation on
Estimation Accuracy
The dataset presented in Section 3.1 contains two kinds of inertial readings: the IMU measurements used so far, and the IMU DYN measurements, in which the sensors are characterized by a 2nd-order model.
This section focuses on the impact of the inertial data and of the navigation frame rotation on the LSTM estimation accuracy. To this end, four LSTMs are trained to estimate the positions, velocities and orientations of a finned projectile: LSTM_IMU,V1 (no rotation), LSTM_IMU DYN,V1 (no rotation), LSTM_IMU,V6 (navigation frame rotation) and LSTM_IMU DYN,V6 (navigation frame rotation). The LSTM specifications are given in Table 3.
Table 3. Training characteristics of LSTM_IMU,V1, LSTM_IMU DYN,V1, LSTM_IMU,V6, LSTM_IMU DYN,V6.

Dataset: Training dataset: 4000 simulations (validation: 400 simulations); Test dataset: 400 simulations.
LSTM name: No normalization & no rotation: LSTM_IMU,V1 (with IMU measurements), LSTM_IMU DYN,V1 (with IMU DYN measurements); No normalization & rotation: LSTM_IMU,V6 (with IMU measurements), LSTM_IMU DYN,V6 (with IMU DYN measurements).
Input data: Batch size: 64 (sequence length: 20 timestamps); Input features: (M, P, T) ∈ R^16.
Training: Cost function: Mean Squared Error (MSE); Optimization algorithm: ADAM [41] (learning rate: 1 × 10^-4); LSTM layers: 2 (hidden units: 64-128).
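Under the hyperparameters of Table 3, the training setup could be sketched as follows in PyTorch. The stacked hidden sizes (64 then 128), the 9D output head (position, velocity, Euler angles) and all names are assumptions, since the paper does not publish its implementation:

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Two stacked LSTM layers (64 and 128 hidden units, per Table 3)
    mapping the 16 input features to a 9D state estimate."""
    def __init__(self, in_features=16, out_features=9):
        super().__init__()
        self.lstm1 = nn.LSTM(in_features, 64, batch_first=True)
        self.lstm2 = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, out_features)

    def forward(self, x):            # x: (batch, seq_len, 16)
        h, _ = self.lstm1(x)
        h, _ = self.lstm2(h)
        return self.head(h)          # (batch, seq_len, 9)

model = TrajectoryLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # ADAM, lr 1e-4
criterion = nn.MSELoss()                                   # MSE cost function

x = torch.randn(64, 20, 16)          # batch 64, sequence length 20
y = model(x)
print(y.shape)                       # torch.Size([64, 20, 9])
```

A training step would then be the usual `criterion(model(x), target)` followed by `loss.backward()` and `optimizer.step()`.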
5.2.1. Impact of the Local Navigation Frame Rotation and IMU Measurement
This section focuses on LSTM_IMU,V1 (no rotation), LSTM_IMU,V6 (rotation) and the Dead-Reckoning algorithm (9) for the position, velocity and Euler angles estimation. The network characteristics are presented in Table 3.
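The Dead-Reckoning baseline (9), defined earlier in the paper, integrates the inertial measurements directly. A strongly simplified 1D sketch (constant orientation, no gravity model) illustrates why it diverges over a full flight:

```python
def dead_reckoning_1d(accels, dt, p0=0.0, v0=0.0):
    """Integrate specific force twice: any accelerometer bias grows
    quadratically in the position estimate."""
    p, v = p0, v0
    trajectory = [p]
    for a in accels:
        v += a * dt
        p += v * dt
        trajectory.append(p)
    return trajectory

# A constant 0.1 m/s^2 bias alone yields roughly 5 m of drift after 10 s
biased = dead_reckoning_1d([0.1] * 1000, dt=0.01)
print(biased[-1])
```

This quadratic error growth is what the large red dispersions in the following figures reflect.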
Figure 9 presents the average error distributions ē and the corresponding standard deviations σ evaluated for the positions and Euler angles with the three estimation methods considered, such that

\bar{e} = \frac{1}{N} \sum_{k=1}^{N} \left( x_{ref} - \hat{x} \right), \quad (11)

\sigma = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} \left( [x_{ref} - \hat{x}] - \bar{e} \right)^{2} }, \quad (12)

where x_{ref} is the reference, \hat{x} the estimate and N the number of samples in the considered simulation.
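The statistics of (11) and (12) can be sketched as follows; the names are illustrative:

```python
import math

def error_stats(x_ref, x_hat):
    """Mean error (Eq. 11) and standard deviation of the error (Eq. 12)
    over the N samples of one simulation."""
    errors = [r - e for r, e in zip(x_ref, x_hat)]
    n = len(errors)
    e_bar = sum(errors) / n
    sigma = math.sqrt(sum((err - e_bar) ** 2 for err in errors) / n)
    return e_bar, sigma

ref = [0.0, 1.0, 2.0, 3.0]
est = [0.1, 1.1, 1.9, 3.1]
print(error_stats(ref, est))
```

A mean error ē far from zero indicates a biased estimator, which is how the centering of the histograms is read below.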
According to Figure 9, the Dead-Reckoning mean error dispersion (red) is very large compared to those of the LSTMs (green and blue). Thus, the LSTMs are well suited to estimating the finned projectile positions and velocities. Focusing on p_z, v_x and v_z, the LSTM_IMU,V1 average errors are not centered on zero, in contrast to LSTM_IMU,V6 (rotation). Thus, the local navigation frame rotation improves these estimates. As previously observed, the LSTMs fail to estimate the projectile roll angle even though the finned projectile rotation rate is low. Furthermore, the centering and dispersion of the angle errors indicate that the LSTMs struggle to estimate the projectile orientation, despite the yaw angle accuracy.
5.2.2. Impact of the Local Navigation Frame Rotation and IMU DYN Measurement
Figure 10 presents the average error ē (11) distributions for the positions, velocities and Euler angles estimated by LSTM_IMU DYN,V1 (no rotation), LSTM_IMU DYN,V6 (rotation) and the Dead-Reckoning (9).
Figure 9. Average position, velocity and orientation error histograms obtained by LSTM_IMU,V1 (blue), LSTM_IMU,V6 (green) and the Dead-Reckoning (red).
Figure 10. Average position, velocity and orientation error histograms obtained by LSTM_IMU DYN,V1 (blue), LSTM_IMU DYN,V6 (green) and the Dead-Reckoning (red).
According to Figure 10, the LSTMs accurately estimate the projectile position and velocity over a large dataset, despite the dynamic inertial data which, in contrast, cause the Dead-Reckoning to diverge. Furthermore, the centering of the error distributions allows the conclusion that the local navigation frame rotation improves the estimation of p_y, v_x and v_y. As in the previous orientation estimation results, both LSTMs fail to estimate the projectile roll angle φ.
5.2.3. Evaluation Metrics
The performances of LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMU DYN,V1, LSTM_IMU DYN,V6 and the Dead-Reckoning (9) are evaluated using two criteria computed for each simulation in the test dataset:
• Mean Absolute Error:

MAE = \frac{1}{N} \sum_{k=1}^{N} |x - \hat{x}| \quad (13)

with x the reference, \hat{x} the estimate and N the number of samples.
• SCORE: the number of simulations in the test dataset for which the considered method obtains the smallest RMSE.
The average of each criterion over the test dataset is evaluated as

C_{\chi} = \frac{1}{N_{sim}} \sum_{k=1}^{N_{sim}} \chi_{k} \quad (14)

with \chi the selected evaluation criterion and N_{sim} the number of simulations in the test dataset.
C_MAE criterion analysis: Figure 11 presents the MAE average C_MAE, evaluated on the whole test dataset for LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMU DYN,V1, LSTM_IMU DYN,V6 and the Dead-Reckoning.
The LSTMs accurately estimate the position and velocity, both with IMU and IMU DYN measurements, with position errors of around a few meters. This criterion confirms that the local navigation frame rotation improves the p_z, v_x and v_z estimation for the LSTMs trained with IMU measurements, and the p_y, v_x and v_y estimation for the LSTMs trained with IMU DYN measurements. As expected, the LSTMs are not suited to estimating the projectile roll angle, contrary to the pitch and yaw angles.
Figure 11. Position, velocity and orientation MAE average C_MAE obtained by LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMU DYN,V1, LSTM_IMU DYN,V6 and the Dead-Reckoning.
C_SCORE criterion analysis: Figure 12 presents the score C_SCORE, evaluated on the whole test dataset for LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMU DYN,V1, LSTM_IMU DYN,V6 and the Dead-Reckoning.
Figure 12. Position, velocity and orientation score obtained by LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMU DYN,V1, LSTM_IMU DYN,V6 and the Dead-Reckoning.
According to Figure 12, an LSTM is an accurate approach for estimating the projectile position and velocity in a GNSS-denied environment. However, an LSTM is not the optimal method for the orientation estimation. Furthermore, an LSTM is able to generalize the learned features over a large projectile fire dataset, as well as to learn and predict trajectories from different sensor models. Finally, the C_MAE and C_SCORE analyses confirm that the local navigation frame rotation is an appropriate method to optimize p_z, v_x and v_z for the LSTMs trained with IMU measurements, and p_y, v_x and v_y for the LSTMs trained with IMU DYN measurements.
5.2.4. Errors at Impact Point
This section focuses on the errors at the impact point, i.e., the position errors (p_x, p_y) at the final time of a shot. Figure 13a shows the impact point errors of the LSTMs with IMU and IMU DYN measurements and of the Dead-Reckoning. Figure 13b presents how the impact point errors are distributed over different error zones.
Whatever the inertial sensor model, the Dead-Reckoning impact point errors are greater than 100 m, contrary to those of the LSTMs. Focusing on the IMU measurements, the majority of the LSTM_IMU,V6 (rotation) impact point errors are below 5 m, whereas the LSTM_IMU,V1 (no rotation) errors remain below 20 m. Thus, the local navigation frame rotation strongly reduces the errors at the impact point. Focusing on the IMU DYN measurements, the local navigation frame rotation deteriorates the estimation accuracy: 266 simulations have impact point errors below 5 m with LSTM_IMU DYN,V1, while only 180 simulations reach this level with LSTM_IMU DYN,V6.
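This impact-point evaluation can be sketched as computing the horizontal miss distance at the final sample and binning it into error zones; the zone bounds below mirror the thresholds discussed above but are otherwise illustrative:

```python
import math

def impact_error(ref_xy, est_xy):
    """Horizontal miss distance (p_x, p_y) at the final time of a shot."""
    return math.hypot(ref_xy[0] - est_xy[0], ref_xy[1] - est_xy[1])

def bin_into_zones(errors, bounds=(5.0, 20.0, 100.0)):
    """Count simulations per error zone: <5 m, <20 m, <100 m, beyond."""
    counts = [0] * (len(bounds) + 1)
    for err in errors:
        for i, b in enumerate(bounds):
            if err < b:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts

errors = [impact_error((1000.0, 50.0), est)
          for est in [(1003.0, 51.0), (1012.0, 44.0), (1150.0, 50.0)]]
print(bin_into_zones(errors))  # [1, 1, 0, 1]
```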
Figure 13. (a) Errors at impact point obtained by LSTM_IMU,V1 and LSTM_IMU DYN,V1 (blue crosses), LSTM_IMU,V6 and LSTM_IMU DYN,V6 (green crosses) and the Dead-Reckoning (red dots). (b) Impact point error location.
In summary, an LSTM is an accurate approach to estimate the projectile position, as the errors are below ten meters in a GNSS-denied environment. These results are comparable to those obtained by commercial GNSS-guided mortars. Moreover, the local navigation frame rotation is useful with IMU measurements and minimizes the position errors.
6. Conclusions
This paper presents a deep learning approach to estimate a gun-fired projectile trajectory in a GNSS-denied environment. Long Short-Term Memories (LSTMs) are trained from the embedded IMU data, the magnetic field reference, flight parameters specific to the projectile (the fin angle, the initial velocity, the barrel elevation angle) and a time vector. The impact of three preprocessing methods is analyzed: the input data normalization, the local navigation frame rotation and the inertial sensor model. According to the reported results, the LSTMs accurately estimate the projectile positions and velocities compared to a conventional navigation algorithm, as the errors are around ten meters, similar to the GNSS-guided projectile accuracy. Nevertheless, the LSTMs struggle to estimate the projectile orientation, especially the highly dynamic roll angle. The input data normalization provides no significant benefit, while the local navigation frame rotation optimizes the position and velocity estimation. Moreover, the results show that the LSTMs generalize the learned features on large datasets independently of the inertial sensor model considered. Based on the results reported in this paper, the next step is to implement a deep Kalman filter by considering the LSTM predictions as observations.
Author Contributions:
Methodology, A.R.; Validation, A.R.; Investigation, A.R.; Writing—original
draft, A.R.; Supervision, S.C., J.W. and J.-P.L. All authors have read and agreed to the published
version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1.
Groves, P.D. Principles of GNSS, inertial, and multisensor integrated navigation systems. IEEE Aerosp. Electron. Syst. Mag.
2015
,
30, 26–27. [CrossRef]
2.
Zhao, H.; Li, Z. Ultra-tight GPS/IMU integration based long-range rocket projectile navigation. Def. Sci. J.
2016
,66, 64–70.
[CrossRef]
3.
Fairfax, L.D.; Fresconi, F.E. Loosely-coupled GPS/INS state estimation in precision projectiles. In Proceedings of the 2012
IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 620–624.
4.
Wells, L.L. The projectile GRAM SAASM for ERGM and Excalibur. In Proceedings of the IEEE 2000. Position Location and
Navigation Symposium (Cat. No. 00CH37062), San Diego, CA, USA, 13–16 March 2000; pp. 106–111.
5.
Duckworth, G.L.; Baranoski, E.J. Navigation in GNSS-denied environments: Signals of opportunity and beacons. In Proceedings
of the NATO Research and Technology Organization (RTO) Sensors and Technology Panel (SET) Symposium, 2007 ; pp. 1–14.
6.
Ruegamer, A.; Kowalewski, D. Jamming and spoofing of gnss signals–an underestimated risk?! Proc. Wisdom Ages Chall. Mod.
World 2015,3, 17–21.
7.
Schmidt, D.; Radke, K.; Camtepe, S.; Foo, E.; Ren, M. A survey and analysis of the GNSS spoofing threat and countermeasures.
ACM Comput. Surv. (CSUR) 2016,48, 1–31. [CrossRef]
8.
Roux, A.; Changey, S.; Weber, J.; Lauffenburger, J.P. Projectile trajectory estimation: Performance analysis of an Extended Kalman
Filter and an Imperfect Invariant Extended Kalman Filter. In Proceedings of the 2021 9th International Conference on Systems
and Control (ICSC), Caen, France, 24–26 November 2021; pp. 274–281.
9.
Combettes, C.; Changey, S.; Adam, R.; Pecheur, E. Attitude and velocity estimation of a projectile using low cost magne-
tometers and accelerometers. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS),
Monterey, CA, USA, 23–26 April 2018; pp. 650–657.
10.
Fiot, A.; Changey, S.; Petit, N. Attitude estimation for artillery shells using magnetometers and frequency detection of accelerom-
eters. Control. Eng. Pract. 2022,122, 105080. [CrossRef]
11. Tarraf, D.C.; Shelton, W.; Parker, E.; Alkire, B.; Carew, D.G.; Grana, J.; Levedahl, A.; Léveillé, J.; Mondschein, J.; Ryseff, J.; et al. The Department of Defense Posture for Artificial Intelligence: Assessment and Recommendations; Technical Report; Rand National Defense Research Institute: Santa Monica, CA, USA, 2019.
12. Zeldam, S. Automated Failure Diagnosis in Aviation Maintenance Using Explainable Artificial Intelligence (XAI). Master's Thesis, University of Twente, Enschede, The Netherlands, 2018.
13. Ustun, V.; Kumar, R.; Reilly, A.; Sajjadi, S.; Miller, A. Adaptive synthetic characters for military training. arXiv 2021, arXiv:2101.02185.
14. Svenmarck, P.; Luotsinen, L.; Nilsson, M.; Schubert, J. Possibilities and challenges for artificial intelligence in military applications. In Proceedings of the NATO Big Data and Artificial Intelligence for Military Decision Making Specialists' Meeting, Bordeaux, France, 30 June–1 July 2018; pp. 1–16.
15. Ventre, D. Artificial Intelligence, Cybersecurity and Cyber Defence; John Wiley & Sons: Hoboken, NJ, USA, 2020.
16. Shi, Z.; Xu, M.; Pan, Q.; Yan, B.; Zhang, H. LSTM-based flight trajectory prediction. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), 8–13 July 2018; pp. 1–8.
17. Gaiduchenko, N.E.; Gritsyk, P.A.; Malashko, Y.I. Multi-Step Ballistic Vehicle Trajectory Forecasting Using Deep Learning Models. In Proceedings of the 2020 International Conference Engineering and Telecommunication (En&T), Dolgoprudny, Russia, 25–26 November 2020; pp. 1–6.
18. Al-Molegi, A.; Jabreel, M.; Ghaleb, B. STF-RNN: Space time features-based recurrent neural network for predicting people next location. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–7.
19. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [CrossRef]
20. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270. [CrossRef] [PubMed]
21. Wey, P.; Corriveau, D.; Saitz, T.A.; de Ruijter, W.; Strömbäck, P. BALCO 6/7-DoF trajectory model. In Proceedings of the 29th International Symposium on Ballistics, Edinburgh, Scotland, 9–13 May 2016; Volume 1, pp. 151–162.
22. Liu, F.; Su, Z.; Zhao, H.; Li, Q.; Li, C. Attitude measurement for high-spinning projectile with a hollow MEMS IMU consisting of multiple accelerometers and gyros. Sensors 2019, 19, 1799. [CrossRef] [PubMed]
23. Rogers, J.; Costello, M. Design of a roll-stabilized mortar projectile with reciprocating canards. J. Guid. Control Dyn. 2010, 33, 1026–1034. [CrossRef]
24. Jardak, N.; Adam, R.; Changey, S. A Gyroless Algorithm with Multi-Hypothesis Initialization for Projectile Navigation. Sensors 2021, 21, 7487. [CrossRef] [PubMed]
25. Radi, A.; Zahran, S.; El-Sheimy, N. GNSS Only Reduced Navigation System Performance Evaluation for High-Speed Smart Projectile Attitudes Estimation. In Proceedings of the 2021 International Telecommunications Conference (ITC-Egypt), Alexandria, Egypt, 13–15 July 2021; pp. 1–4.
26. Aykenar, M.B.; Boz, I.C.; Soysal, G.; Efe, M. A Multiple Model Approach for Estimating Roll Rate of a Very Fast Spinning Artillery Rocket. In Proceedings of the 2020 IEEE 23rd International Conference on Information Fusion (FUSION), Rustenburg, South Africa, 6–9 July 2020; pp. 1–7.
27. Changey, S.; Pecheur, E.; Bernard, L.; Sommer, E.; Wey, P.; Berner, C. Real time estimation of projectile roll angle using magnetometers: In-flight experimental validation. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 371–376.
28. Zhao, H.; Su, Z. Real-time estimation of roll angle for trajectory correction projectile using radial magnetometers. IET Radar Sonar Navig. 2020, 14, 1559–1570. [CrossRef]
29. Changey, S.; Beauvois, D.; Fleck, V. A mixed extended-unscented filter for attitude estimation with magnetometer sensor. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006; p. 6.
30. Schmidt, G.T. Navigation sensors and systems in GNSS degraded and denied environments. Chin. J. Aeronaut. 2015, 28, 1–10. [CrossRef]
31. Fiot, A.; Changey, S.; Petit, N.C. Estimation of air velocity for a high velocity spinning projectile using transerse accelerometers. In Proceedings of the 2018 AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, USA, 8–12 January 2018; p. 1349.
32. Changey, S.; Fleck, V.; Beauvois, D. Projectile attitude and position determination using magnetometer sensor only. In Proceedings of the Intelligent Computing: Theory and Applications III; SPIE: Orlando, FL, USA, 2005; Volume 5803, pp. 49–58.
33. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef] [PubMed]
34. Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. Gradient flow in recurrent nets: The difficulty of learning long-term dependencies. In A Field Guide to Dynamical Recurrent Neural Networks; IEEE Press: Piscataway, NJ, USA, 2001.
35. Park, S.H.; Kim, B.; Kang, C.M.; Chung, C.C.; Choi, J.W. Sequence-to-sequence prediction of vehicle trajectory via LSTM encoder-decoder architecture. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1672–1678.
36. Sørensen, K.A.; Heiselberg, P.; Heiselberg, H. Probabilistic maritime trajectory prediction in complex scenarios using deep learning. Sensors 2022, 22, 2058. [CrossRef] [PubMed]
37. Barsoum, E.; Kender, J.; Liu, Z. HP-GAN: Probabilistic 3D human motion prediction via GAN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1418–1427.
38. Hou, L.-h.; Liu, H.-j. An end-to-end LSTM-MDN network for projectile trajectory prediction. In Proceedings of the Intelligence Science and Big Data Engineering, Big Data and Machine Learning: 9th International Conference, IScIDE 2019, Nanjing, China, 17–20 October 2019; pp. 114–125.
39. Corriveau, D. Validation of the NATO Armaments Ballistic Kernel for use in small-arms fire control systems. Def. Technol. 2017, 13, 188–199. [CrossRef]
40. Roux, A.; Changey, S.; Weber, J.; Lauffenburger, J.P. CNN-based invariant extended Kalman filter for projectile trajectory estimation using IMU only. In Proceedings of the 2021 International Conference on Control, Automation and Diagnosis (ICCAD), Grenoble, France, 3–5 November 2021; pp. 1–6.
41. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Disclaimer/Publisher’s Note:
The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.