
On some algorithms related to matrices with coefficients in a finite field and their computational complexity

Journal of Science and Technology on Information security, No 1.CS (21), 2024.
DOI: https://doi.org/10.54654/isj.v1i21.1032
This manuscript was received on May 18, 2024. It was commented on June 4, 2024 and accepted on June 27, 2024 by the first reviewer. It was commented on June 5, 2024 and accepted on June 5, 2024 by the second reviewer.
Pablo Freyre Arrozarena, Alejandro Freyre Echevarría, Ramses Rodríguez Aulet,
Ernesto Domínguez Fiallo, Samir Alzugaray Vizcaino
Abstract— In the literature survey, there exist several research papers devoted to the generation of non-singular matrices with coefficients in a finite field. One of these papers refers to the generation of such matrices through the multiplication of polynomials modulo a primitive polynomial. However, the complexity bound given for that algorithm is not accurate. Thus, in this paper, we conduct a new analysis of its complexity. We also remove the restriction of using a primitive polynomial to generate the matrix by using an arbitrary monic polynomial over a finite field whose independent term is distinct from zero.
Keywords— non-singular matrices, multiplication of polynomials, computational complexity.
I. INTRODUCTION
The problems of generating non-singular matrices, finding their inverses and multiplying two matrices are classical in linear algebra. All of these problems have complexity $O(n^3)$ when solved using naive algorithms [1]. However, there exist methods whose computational complexity is lower than this bound, such as the one presented in [2] for matrix multiplication, which runs in $O(n^{2.37286})$. For the case of generating non-singular matrices, it is known that on average it takes between 3 and 4 random selections to obtain a non-singular matrix over the finite field $\mathbb{F}_q$, which consumes about $3n^2\log_2 q$ to $4n^2\log_2 q$ random bits ($n^2\log_2 q$ bits each time) [2, 3]. In addition, the work of Randall [4] gives an efficient algorithm for the generation of a random non-singular matrix over $\mathbb{F}_q$. In the particular case of $\mathbb{F}_2$, the algorithm has a time complexity of $O(n^2 + M(n))$, where $M(n)$ is the computational complexity of multiplying two $n \times n$ matrices. Finally, one can generate non-singular matrices through the multiplication of polynomials modulo a primitive polynomial over a finite field, as presented in [5].
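For reference, the following Python sketch (our own illustration, not code from [3] or [4]) draws uniformly random matrices over a prime field and counts how many attempts are needed until a non-singular one appears; over $\mathbb{F}_2$ the success probability per attempt is roughly 0.29, which is consistent with the 3 to 4 expected selections mentioned above.

```python
import random

def rank_mod_p(M, p):
    """Rank of a matrix over F_p via Gaussian elimination."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)                 # modular inverse (Python 3.8+)
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def random_nonsingular(n, p, rng):
    """Rejection sampling: redraw the matrix until it is non-singular."""
    attempts = 0
    while True:
        attempts += 1
        A = [[rng.randrange(p) for _ in range(n)] for _ in range(n)]
        if rank_mod_p(A, p) == n:
            return A, attempts

rng = random.Random(0)
trials = 200
total = sum(random_nonsingular(8, 2, rng)[1] for _ in range(trials))
print("average attempts over F_2:", total / trials)   # close to 3.5 on average
```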
When working over a finite field $\mathbb{F}_q$ with $q = p^m$, $p$ a prime number and $m \ge 1$, the latter method is stated to have computational complexity $O(n^2 \log n \log\log n)$ [5], which we believe is not an accurate upper bound for the algorithm. Moreover, as the size of the matrix grows, it becomes difficult to execute the algorithm due to the cost of finding primitive polynomials of high degree.
Our contributions: This paper addresses the
complexity of the algorithms presented in [5],
showing that the upper bound presented by its authors is not accurate and providing the mathematical proofs that allow us to reduce the upper bound of these algorithms. We also remove the constraint of using primitive polynomials as a mandatory input parameter by substituting them with monic polynomials with a non-zero independent coefficient. Furthermore, we analyze special cases of the input parameters of the algorithms described later in this paper that allow us to reduce their computational complexity so that it is close to the state of the art of non-singular matrix generation.
Finally, we remark that the matrices obtained
through our algorithms can be used in key-
dependent encryption schemes where part of our
algorithm’s input parameters are derived from
the key. The output of our algorithms can also
be used for McEliece-type cryptosystems.
II. MATHEMATICAL PRELIMINARIES
Let $\mathbb{F}_q$ denote the finite field with $q$ elements, and let $G$ be an arbitrary group acting on an arbitrary set $\Omega$. We denote by $\alpha^{g}$ the action of $g \in G$ on $\alpha \in \Omega$. The stabilizer of the points $\alpha_1, \dots, \alpha_i \in \Omega$ in $G$ is denoted by $G_{\alpha_1,\dots,\alpha_i} = \{\, g \in G : \alpha_j^{\,g} = \alpha_j,\ 1 \le j \le i \,\}$. Let $B = (\beta_1, \dots, \beta_k)$ be a base for $G$; then the basic orbits are the sets of points $\Delta^{(i)} = \{\, \beta_i^{\,g} : g \in G_{\beta_1,\dots,\beta_{i-1}} \,\}$, $1 \le i \le k$.
For a given base $B$, the Schreier structure [6] is defined as the arrangement
$$\big[\, U_1 \mid U_2 \mid \dots \mid U_k \,\big]$$
in which, for $1 \le i \le k$:
- $G^{(i)} = G_{\beta_1,\dots,\beta_{i-1}}$, so that $G^{(1)} = G$;
- $\Delta^{(i)} = \{\, \beta_i^{\,g} : g \in G^{(i)} \,\}$ is the $i$-th basic orbit;
- $U_i \subseteq G^{(i)}$ contains, for each point of $\Delta^{(i)}$, exactly one element mapping $\beta_i$ to that point.

We call a right transversal for $G^{(i+1)}$ in $G^{(i)}$ a set $U_i = \{u_{i,1}, \dots, u_{i,r_i}\}$ of representatives of the right cosets of $G^{(i+1)}$ in $G^{(i)}$, where $u_{i,1} = e$ and $r_i$ is the index of $G^{(i+1)}$ in $G^{(i)}$. Every element $g \in G$ can be expressed in a unique way as a product of elements of the right transversals $U_i$ for $G^{(i+1)}$ in $G^{(i)}$, i.e.,
$$g = u_k\, u_{k-1} \cdots u_1, \qquad u_i \in U_i .$$
A random selection of the elements of $G$ can be achieved by randomly selecting the elements of each $U_i$ [6-9].
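As a toy illustration of this factorization (our own example with the symmetric group $S_3$ acting on three points, whereas the paper works with subgroups of $GL_n(\mathbb{F}_q)$), the sketch below verifies that every group element is reached exactly once by taking one representative from each transversal, so uniformly random choices of the representatives yield a uniformly random group element.

```python
from itertools import permutations
import random

# Permutations of {0, 1, 2} as tuples: g[i] is the image of the point i.
S3 = list(permutations(range(3)))

def compose(a, b):
    """Apply b first, then a."""
    return tuple(a[b[i]] for i in range(3))

# Stabilizer chain for the base (0, 1):
#   G(1) = S3,  G(2) = stabilizer of 0,  G(3) = stabilizer of 0 and 1 = {identity}.
U1 = [(0, 1, 2), (1, 0, 2), (2, 1, 0)]   # one representative per image of the point 0
U2 = [(0, 1, 2), (0, 2, 1)]              # one representative per image of the point 1 inside G(2)

# Unique factorization: every element of S3 is compose(u2, u1) for exactly one pair.
products = [compose(u2, u1) for u2 in U2 for u1 in U1]
assert sorted(products) == sorted(S3)

# Hence a uniformly random group element is obtained from random transversal picks.
rng = random.Random(1)
print("random element of S3:", compose(rng.choice(U2), rng.choice(U1)))
```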
Given a monic polynomial $f(x) = x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0 \in \mathbb{F}_q[x]$ of degree $n$, its companion matrix is
$$C_f = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{pmatrix},$$
where $a_0, a_1, \dots, a_{n-1}$ are the coefficients of $f(x)$. The order $e$ of the matrix $C_f$ is the smallest positive integer such that $f(x)$ divides $x^e - 1$. From [10] it is known that
$$C_f^{\,e} = I_n$$
is equivalent to
$$x^{e} \equiv 1 \pmod{f(x)}, \quad \text{i.e.,} \quad f(x) \mid x^{e} - 1 .$$
III. THE SCHREIER STRUCTURE USED IN [5]
Let us briefly discuss the Schreier structure analyzed in [5], which considers the base $B = (e_1, e_2, \dots, e_n)$ for the natural action of $GL_n(\mathbb{F}_q)$ on $\mathbb{F}_q^n$, where $e_i = (0, \dots, 0, 1, 0, \dots, 0)$ is the canonical vector with a 1 in position $i$.

For $1 \le i \le n$, the right transversal $U_i$ consists of matrices that stabilize all the elements of the base before $e_i$ and send $e_i$ to each point of the corresponding basic orbit. These matrices are built from block matrices of the form
$$M_i = \begin{pmatrix} I_{i-1} & 0 \\ 0 & C_i \end{pmatrix}, \qquad (1)$$
where $C_i$ is the companion matrix of a primitive polynomial $p_i(x) \in \mathbb{F}_q[x]$ of degree $n-i+1$, so that $M_i$ stabilizes all the elements of the base before $e_i$, together with matrices that coincide with the identity except in the $i$-th row.

Finally, we have that:
- $e_i$ is the canonical vector with a 1 at the $i$-th coordinate.
- The images of $e_i$ having a 1 at the $i$-th coordinate are obtained by placing arbitrary values of $\mathbb{F}_q$ in the coordinates before the $i$-th one.
- The remaining images of $e_i$ are obtained from the successive powers of $M_i$; since $p_i(x)$ is primitive, the powers of $C_i$ send $(1, 0, \dots, 0) \in \mathbb{F}_q^{\,n-i+1}$ to every non-zero vector of $\mathbb{F}_q^{\,n-i+1}$, and these images are again combined with arbitrary values of $\mathbb{F}_q$ in the first $i-1$ coordinates.

The explicit matrix forms are given in [5].
IV. ALGORITHMS WITH THE MULTIPLICATION OF TWO POLYNOMIALS MODULO A MONIC POLYNOMIAL WITH NON-ZERO INDEPENDENT COEFFICIENT
From the Schreier structure shown in Section III, the authors of [5] were able to define algorithms to generate a non-singular matrix $A$, to calculate its inverse $A^{-1}$ and to multiply two matrices $AB$ with $A, B \in GL_n(\mathbb{F}_q)$. However, these algorithms use primitive polynomials, and when the degree of these polynomials and/or the size of $\mathbb{F}_q$ grows, it is difficult to obtain such polynomials in order to carry on with the procedure of the algorithms. This drawback can be avoided by using arbitrary monic polynomials having their independent coefficient distinct from zero.
Theorem 1. Let $(e_1, e_2, \dots, e_n)$ be a basis for $\mathbb{F}_q^n$. The multiplication of a vector $v = (v_1, v_2, \dots, v_n)$ by an element of $U_i$, where $U_i$ is a right transversal for $G^{(i+1)}$ in $G^{(i)}$, $1 \le i \le n$, can be expressed by means of the multiplication of the polynomial associated with $v$ by a power of $x$ modulo an arbitrary monic polynomial $f_i(x) \in \mathbb{F}_q[x]$ of degree $n-i+1$ with non-zero independent coefficient, $f_i(0) \ne 0$, together with multiplications and additions of elements of $\mathbb{F}_q$, where the exponent employed does not exceed the order $e_i$ of $f_i(x)$.
Proof. To demonstrate the previous theorem, let us construct the following Schreier structure associated to the base $B = (e_1, \dots, e_n)$, analogous to the one described in Section III:
- at each level $i$, the block matrix of the form (1) is taken with $C_i$ equal to the companion matrix of the arbitrary monic polynomial $f_i(x) \in \mathbb{F}_q[x]$ of degree $n-i+1$ with $f_i(0) \ne 0$ and order $e_i$; this matrix stabilizes all the elements of the base before $e_i$;
- a second block matrix of the same form is built from the companion matrix of a primitive polynomial $p_i(x) \in \mathbb{F}_q[x]$ of degree $n-i+1$, as in Section III;
- the remaining factors of the transversal elements coincide with the identity except in the $i$-th row, where they have a 1 at the $i$-th coordinate and arbitrary values of $\mathbb{F}_q$ in the coordinates before it.

Since $f_i(0) \ne 0$, the matrix $C_i$ is non-singular, and multiplying a vector of $\mathbb{F}_q^{\,n-i+1}$ by a power $C_i^{\,k_i}$ amounts to multiplying the polynomial associated with that vector by $x^{k_i}$ modulo $f_i(x)$. Then, selecting the elements of the right transversal $U_i$ for $G^{(i+1)}$ in $G^{(i)}$ as products of these factors, the multiplication of a vector $v = (v_1, \dots, v_n)$ by such an element can be expressed as stated in the theorem.
Therefore, the proof is complete.
An example of the development of the
Schreier structure used to demonstrate Theorem
1 is shown in Appendix A.
Using the above reasoning, we obtain a method for generating a non-singular matrix based on the multiplication of polynomials modulo a monic polynomial with non-zero independent coefficient, as shown in the next algorithm.
Algorithm 1: Generate a matrix $A \in GL_n(\mathbb{F}_q)$
Require: $n$, $q$ and the arbitrary elements of $\mathbb{F}_q$ used at each level.
Require: Monic polynomials $f_1(x), \dots, f_n(x) \in \mathbb{F}_q[x]$ with non-zero independent coefficient, $\deg f_i(x) = n-i+1$.
Require: $k_1, \dots, k_n$, where $e_i$ is the order of $f_i(x)$.
Ensure: A nonsingular matrix $A \in GL_n(\mathbb{F}_q)$.
// Calculate the first row of A
(the coefficients of $x^{k_1} \bmod f_1(x)$ form the first row of $A$)
// Calculate the j-th row of A ($2 \le j \le n$)
for $j \leftarrow 2$ to $n$ do:
   (initialize the polynomial associated with $e_j$)
   for $i \leftarrow \ldots$ down to $\ldots$ do:
      (accumulate the contributions of level $i$: multiplications modulo $f_i(x)$ and multiplications and additions of elements of $\mathbb{F}_q$, as dictated by the structure)
   end for
   (multiply modulo the corresponding polynomial and store the resulting coefficients as the $j$-th row of $A$)
end for
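The full Algorithm 1 relies on the Schreier structure above; as a simplified, self-contained illustration of the same mechanism (not the algorithm itself), the sketch below builds the matrix of multiplication by a polynomial $g(x)$ in $\mathbb{F}_p[x]/(f(x))$ for a monic $f$ with non-zero independent coefficient, so that the matrix is obtained purely through polynomial multiplications; it is non-singular because $g(x)$ is a unit modulo the (irreducible, in this example) polynomial $f(x)$. The helpers polymulmod and mult_matrix are our own names.

```python
from itertools import product

p = 2
f = [1, 1, 1, 1, 1]        # f(x) = x^4 + x^3 + x^2 + x + 1, irreducible over F_2, f(0) != 0
n = len(f) - 1

def polymulmod(a, b):
    """(a * b) mod f over F_p; polynomials as coefficient lists, lowest degree first."""
    prod = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    fr = list(reversed(f))                      # coefficients of f, lowest degree first
    for k in range(2 * n - 1, n - 1, -1):       # eliminate the degrees 2n-1, ..., n
        c = prod[k]
        if c:
            for t in range(n + 1):
                prod[k - n + t] = (prod[k - n + t] - c * fr[t]) % p
    return prod[:n]

def mult_matrix(g):
    """Matrix of 'multiply by g(x)' on F_p[x]/(f(x)); row i holds x^i * g(x) mod f(x)."""
    return [polymulmod([0] * i + [1], g) for i in range(n)]

g = [1, 1, 0, 1]                                # g(x) = 1 + x + x^3, a unit modulo f
A = mult_matrix(g)

# Since f is irreducible, g has an inverse h modulo f; find it by brute force here.
one = [1] + [0] * (n - 1)
h = next(list(c) for c in product(range(p), repeat=n) if polymulmod(g, list(c)) == one)

# A is non-singular: the matrix of h(x) is its inverse.
B = mult_matrix(h)
identity = [[int(i == j) for j in range(n)] for i in range(n)]
AB = [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)] for i in range(n)]
assert AB == identity
print("rows of the non-singular matrix A:", A)
```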
It is worth remarking that using monic polynomials we cannot generate the whole group $GL_n(\mathbb{F}_q)$. The cardinality of the subset of $GL_n(\mathbb{F}_q)$ that can be generated by using these polynomials is given in the following proposition.
Proposition 1. The cardinality of the subset of $GL_n(\mathbb{F}_q)$ that can be generated through Algorithm 1 is equal to
$$q^{\,n(n-1)/2} \prod_{i=1}^{n} e_i ,$$
where $e_i$ is the order of $f_i(x)$. In particular, when every $f_i(x)$ is primitive, $e_i = q^{\,n-i+1} - 1$ and the whole group $GL_n(\mathbb{F}_q)$ is obtained.
Proof. The cardinality of this subset is a consequence of the construction of the Schreier structure used to demonstrate Theorem 1.
Analogously to the method presented in Algorithm 1, one can deduce the procedures to obtain the inverse of a matrix $A$ generated in this way and to multiply a vector by such a matrix $A$ or by its inverse $A^{-1}$. The pseudo-codes of these algorithms are presented in Appendix B.
V. ALGORITHMS FOR MATRICES WITH
VARIABLE RIGHT TRANSVERSAL
Let us consider that the arbitrary values of $\mathbb{F}_q$ used by Algorithm 1 will always be the same regardless of the remaining input parameters. If one fixes these values, the algorithm is still able to generate a non-singular matrix. In addition, let us enunciate the following proposition.
Proposition 2. Let $f_1(x), \dots, f_n(x) \in \mathbb{F}_q[x]$ and $g_1(x), \dots, g_n(x) \in \mathbb{F}_q[x]$, with $f_i(0) \ne 0$ and $g_i(0) \ne 0$, be the polynomials of two inputs of Algorithm 1, and let the remaining input parameters coincide for both inputs. If for some value of $i$ it is true that $f_i(x) \ne g_i(x)$, then the algorithm will produce two distinct matrices.
Proof. Let $f_1(x) \ne g_1(x)$ and let us assume that the first rows of both matrices are equal. Since the first row of a matrix generated using Algorithm 1 is obtained as the result of the multiplication modulo the first polynomial, two matrices produced by the method will have an equal first row if and only if $f_1(x) = g_1(x)$. Hence, when $f_1(x) \ne g_1(x)$, the first rows of both matrices are different.
For $i \ge 2$, let $f_i(x) \ne g_i(x)$, let the first $i-1$ rows of both matrices be equal, and let us assume that the $i$-th rows of the matrices are also equal. Since the $i$-th row is obtained from the fixed values and the multiplications modulo the corresponding polynomials, it must then be true that these values coincide on both matrices, as well as the polynomials $f_i(x)$ and $g_i(x)$. However, $f_i(x) \ne g_i(x)$; therefore the $i$-th rows of both matrices are different.
As a result of Proposition 2, we obtain the number of distinct matrices of $GL_n(\mathbb{F}_q)$ that can be generated using Algorithm 1 when the arbitrary values are fixed for every level, as stated in the following corollary.
Corollary 1. The number of distinct matrices of $GL_n(\mathbb{F}_q)$ that can be generated using Algorithm 1 when the arbitrary values are fixed for every level equals the number of admissible choices of the polynomials $f_1(x), \dots, f_n(x)$ and of the corresponding exponents $k_1, \dots, k_n$.
Finally, if we take into consideration the following set of input parameters:
- the arbitrary values of $\mathbb{F}_q$ fixed as in the preceding discussion;
- monic polynomials $f_i(x) \in \mathbb{F}_q[x]$ with non-zero independent coefficient whose order $e_i$ is even, taking the exponents $k_i = e_i/2$ (and $k_i = e_i$ whenever $e_i$ is odd);
then Algorithm 1 is able to generate an involutory non-singular matrix.
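As a minimal illustration of the involutory case in a single block (our own simplification; the construction above sets the parameters of every level of the structure), the sketch below takes a monic polynomial over $\mathbb{F}_2$ with non-zero constant term whose order $e$ is even and checks that the $e/2$-th power of its companion matrix is an involution.

```python
p = 2
f = [1, 1, 1, 1]             # f(x) = x^3 + x^2 + x + 1 = (x + 1)^3 over F_2, f(0) != 0
n = len(f) - 1
a = [c % p for c in reversed(f[1:])]          # a0, ..., a_{n-1}

C = [[0] * n for _ in range(n)]               # companion matrix, coefficient row at the bottom
for i in range(n - 1):
    C[i][i + 1] = 1
C[n - 1] = [(-x) % p for x in a]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)] for i in range(n)]

identity = [[int(i == j) for j in range(n)] for i in range(n)]

# Order of C: smallest e with C^e = I; for this f the order is even (here e = 4).
M, e = C, 1
while M != identity:
    M, e = matmul(M, C), e + 1
assert e % 2 == 0

# A = C^(e/2) squares to C^e = I, so A is involutory, and A != I since e is the order of C.
A = identity
for _ in range(e // 2):
    A = matmul(A, C)
assert A != identity and matmul(A, A) == identity
print("involutory matrix of size", n, "obtained from C^(e/2) with e =", e)
```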
VI. COMPUTATIONAL COMPLEXITY OF THE
ALGORITHMS
In the preceding section we discussed the definition of new algorithms for generating non-singular matrices as a result of the multiplication of polynomials modulo a monic polynomial with non-zero independent coefficient. The new algorithms avoid the use of primitive polynomials, which can be a drawback as the size of $\mathbb{F}_q$ or the parameter $n$ increases. As a result of this trade-off, the number of matrices that can be generated by these algorithms is reduced, as stated in Proposition 1. However, the use of monic polynomials instead of primitive ones does not have any implication for the time complexity of the algorithms, since we only substitute the primitive polynomials from the original version presented in [5] by monic polynomials of identical degree. Thus, any complexity analysis we conduct over the algorithms described in this paper can be extended to those presented in [5].
It is well known that the naive polynomial multiplication algorithm is not the optimal way to multiply two polynomials. There are several research papers where this topic is studied and, to the best of our knowledge, the best complexity bound for multiplying two polynomials of degree less than $n$ in $\mathbb{F}_q[x]$ is the one given in [11]. However, to show that the bound given in [5] is not tight, we will use the same complexity formula for the multiplication of polynomials that its authors used, which is equal to $O(n \log n \log\log n)$ according to [12].
In [5], the authors assume that every polynomial multiplication carried out by their methods has complexity $O(n \log n \log\log n)$, which is not true, since they work with polynomials of degree strictly less than $n$ to calculate the intermediate values necessary to construct each row of the matrix. In addition, let us consider the following:
- The polynomials used within the algorithm are of degree at most $n-1$, with coefficients in $\mathbb{F}_q$.
- The complexity of multiplying two elements of $\mathbb{F}_q$, denoted $c_q$, is the same regardless of the degree of the polynomial $f_i(x)$ of which they are coefficients, and it depends only on $b$, the size in bits of a field element.
- We denote by $M_q(i)$ the complexity of multiplying two polynomials modulo an $i$-degree polynomial (a small counting sketch after this list makes the dependence on $i$ explicit).
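The following sketch (our own, using the naive multiplication method purely to make the dependence on the modulus degree visible; the bounds in the text rely on the asymptotically faster algorithms of [11, 12]) counts the base-field multiplications spent by one multiplication modulo polynomials of increasing degree.

```python
p = 2
mults = 0                                     # counts multiplications of elements of F_p

def mulmod_count(a, b, f):
    """Naive (a * b) mod f over F_p, counting the base-field multiplications."""
    global mults
    n = len(f) - 1
    prod = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
            mults += 1
    fr = list(reversed(f))
    for k in range(2 * n - 1, n - 1, -1):      # reduce modulo f, highest degree first
        c = prod[k]
        if c:
            for t in range(n + 1):
                prod[k - n + t] = (prod[k - n + t] - c * fr[t]) % p
                mults += 1
    return prod[:n]

# The cost of one multiplication modulo an i-degree polynomial grows with i.
for i in (2, 4, 8, 16):
    f = [1] * (i + 1)                         # some monic modulus of degree i with f(0) = 1
    a = [1] * i                               # operands of degree i - 1
    b = [1] * i
    mults = 0
    mulmod_count(a, b, f)
    print("degree", i, "->", mults, "multiplications in F_p")
```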
Proposition 3. The computational complexity of Algorithm 1 is
$$O\left( \sum_{i=1}^{n} M_q(i) + n^2\, c_q \right).$$
Proof. To demonstrate the proposition, let us count the number of polynomial multiplications and finite field multiplications carried out by the algorithm at each step, according to the Schreier structure used to demonstrate Theorem 1. At each level the algorithm performs a bounded number of multiplications of two polynomials modulo an $i$-degree polynomial, $1 \le i \le n$, together with the multiplications of elements of $\mathbb{F}_q$ needed to place the arbitrary values of each row.

From this decomposition we obtain that the complexity of all the multiplications of two polynomials modulo an $i$-degree polynomial defined over $\mathbb{F}_q$, where $1 \le i \le n$, can be simply written as
$$O\left( \sum_{i=1}^{n} M_q(i) \right).$$
In addition, the amount of element multiplications in $\mathbb{F}_q$, whose computational complexity is $c_q$, grows quadratically with $n$, so their total cost is $O(n^2\, c_q)$. Then, it is straightforward to see that the computational complexity of the algorithm is
$$O\left( \sum_{i=1}^{n} M_q(i) + n^2\, c_q \right).$$
Therefore, the proof is complete.
In [5], the total computational complexity of multiplying two elements in $\mathbb{F}_q$ was deemed to be constant and was therefore removed from the complexity expression of the algorithm. However, as one can see in the proof of Proposition 3, the number of element multiplications in $\mathbb{F}_q$ increases quadratically as the value of $n$ does. Thus, one cannot simply discard this quantity when calculating the complexity of the algorithm. Moreover, as mentioned above, the authors of [5] asserted that every multiplication of polynomials modulo an $i$-degree polynomial has computational complexity $O(n \log n \log\log n)$, which we have shown is not the case when the modulus polynomial has degree strictly lower than $n$. This leads us to conclude that a more accurate upper bound for the algorithm presented in [5] for generating a non-singular matrix is given by the result of Proposition 3. The remainder of this section is dedicated to the complexity of the algorithms presented in Appendix B.
It is known that the inverse of a polynomial modulo an $i$-degree polynomial over $\mathbb{F}_q$ can be computed with complexity $O(M_q(i) \log i)$ by means of the extended Euclidean algorithm [13]. A sketch of this procedure is given below, and it yields the following proposition.
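A standard way to perform such an inversion is the extended Euclidean algorithm for polynomials; the self-contained sketch below (our own, over a prime field and assuming $\gcd(f, g) = 1$) computes $g(x)^{-1} \bmod f(x)$ and verifies it. Asymptotically fast variants of this procedure attain the quasi-linear bounds discussed in [13].

```python
p = 5

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polysub(a, b):
    m = max(len(a), len(b))
    a = a + [0] * (m - len(a))
    b = b + [0] * (m - len(b))
    return trim([(x - y) % p for x, y in zip(a, b)])

def polydivmod(a, b):
    """Quotient and remainder of a by b over F_p (coefficients lowest degree first)."""
    a, b = trim(a[:]), trim(b[:])
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b) and any(a):
        c = (a[-1] * inv) % p
        d = len(a) - len(b)
        q[d] = c
        for t in range(len(b)):
            a[d + t] = (a[d + t] - c * b[t]) % p
        a = trim(a)
    return q, a

def inverse_mod(g, f):
    """g(x)^(-1) mod f(x) via the extended Euclidean algorithm; requires gcd(f, g) = 1."""
    r0, r1 = trim(f[:]), trim(g[:])
    s0, s1 = [0], [1]                          # Bezout coefficients of g along the way
    while any(r1):
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, polysub(s0, polymul(q, s1))
    c = pow(r0[0], -1, p)                      # r0 is a non-zero constant (the gcd)
    inv = [(c * x) % p for x in s0]
    _, check = polydivmod(polymul(g, inv), f)
    assert check == [1]                        # g * inv = 1 modulo f
    return inv

f = [1, 0, 0, 1, 1]                            # f(x) = 1 + x^3 + x^4 over F_5, f(0) != 0
g = [2, 0, 1]                                  # g(x) = 2 + x^2
print("g^(-1) mod f:", inverse_mod(g, f))
```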
Proposition 4. The computational complexity of Algorithm 2 is the sum of three contributions: the cost of all the multiplications of two polynomials modulo an $i$-degree polynomial, which is $O\left(\sum_{i=1}^{n} M_q(i)\right)$; the cost of the multiplications of elements of $\mathbb{F}_q$, which is $O(n^2\, c_q)$; and the cost of the polynomial inversions performed to calculate each row of $A^{-1}$.
Proof. To prove the above proposition, we follow the same reasoning as for Proposition 3. Counting, for the calculation of each row, the polynomial multiplications and the finite field multiplications carried out by the algorithm, we obtain that the complexity of all the multiplications of two polynomials modulo an $i$-degree polynomial defined over $\mathbb{F}_q$, where $1 \le i \le n$, can be simply written as
$$O\left( \sum_{i=1}^{n} M_q(i) \right),$$
while the amount of element multiplications in $\mathbb{F}_q$, whose computational complexity is $c_q$, again grows quadratically with $n$, contributing $O(n^2\, c_q)$.

Later, for calculating the $i$-th row of the matrix, the algorithm performs a polynomial inversion modulo an $(n-i+1)$-degree polynomial, so the complexity of all the polynomial inversions is the sum of these inversion costs over $1 \le i \le n$. Then, it is straightforward to see that the computational complexity of Algorithm 2 is the sum of the three contributions stated in the proposition. Therefore, the proof is complete.
Although the result of Proposition 4 encompasses all the operations made within the algorithm, by applying the sum rule of algorithmic complexity we can discard the two rightmost members of the formula given in the proposition. Hence, we can summarize the complexity of Algorithm 2 in the following corollary.
Corollary 2. The computational complexity of Algorithm 2 is dominated by the leftmost members of the formula given in Proposition 4.
Finally, let us discuss the complexity of multiplying a row vector by a matrix $A$ or by its inverse. The next two propositions give the complexity bounds for these operations.
Proposition 5. The computational complexity of Algorithm 3 is given by the cost of one multiplication of polynomials modulo an $i$-degree polynomial for each level of the Schreier structure, that is $O\left(\sum_{i=1}^{n} M_q(i)\right)$, plus the cost of the corresponding multiplications of elements of $\mathbb{F}_q$.
Proof. Counting the polynomial multiplications and the field element multiplications performed by Algorithm 3 to calculate the result vector, it is easy to check that the total complexity of the multiplications of two polynomials modulo an $i$-degree polynomial defined over $\mathbb{F}_q$ is
$$O\left( \sum_{i=1}^{n} M_q(i) \right),$$
whereas the complexity of the multiplications of elements of $\mathbb{F}_q$ is proportional to the number of arbitrary values placed by the structure. Hence, the complexity of the algorithm is the sum of both quantities. Therefore, the proof is complete.
Proposition 6. The computational complexity of Algorithm 4 is that of Algorithm 3 plus the cost of the polynomial inversions performed by the algorithm.
Proof. The demonstration of the proposition is identical to the one for Proposition 5, but we also have to take into account the number of polynomial inversions made by the algorithm, one modulo each polynomial $f_i(x)$, whose total cost must be added to the complexity obtained for Algorithm 3. This completes the proof.
To this point we have shown several algorithms related to the generation of non-singular matrices with coefficients over $\mathbb{F}_q$ and to the multiplication of a row vector by these matrices, as well as discussed their computational complexity. We would like to remark that, although we do not improve the time complexity of the algorithm introduced in [4], our algorithms use at most $n$ random elements of $\mathbb{F}_q$, while the one presented in [4] uses $O(n^2)$ random field elements on average. Furthermore, in order to multiply a row vector by a matrix or by its inverse, we do not have to represent the matrix to make the calculations.
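To illustrate why the matrix never needs to be written down, and why the non-zero independent coefficient matters, the following sketch (our own simplified setting, where the matrix is implicitly "multiply by $x^k$ modulo $f$") applies the map to a row vector and then undoes it using the explicit inverse of $x$ modulo $f(x)$, which exists precisely because $f(0) \ne 0$; the helper names are hypothetical.

```python
p = 3
f = [1, 0, 2, 1]                 # f(x) = x^3 + 2x + 1 over F_3, coefficients from x^3 down to x^0
n = len(f) - 1
a = [c % p for c in reversed(f[1:])]   # a0, a1, ..., a_{n-1}, with a0 = f(0) != 0
k = 4                                  # the matrix is implicitly 'multiply by x^k mod f'

def times_x(v):
    """v(x) -> x * v(x) mod f(x): one step of the implicit matrix."""
    lead = v[-1]
    w = [0] + v[:-1]
    return [(w[j] - lead * a[j]) % p for j in range(n)]

# Inverse of x modulo f: since a0 != 0, x * h(x) = 1 mod f(x) with
# h(x) = -a0^(-1) * (a1 + a2*x + ... + a_{n-1}*x^(n-2) + x^(n-1)).
inv_a0 = pow(a[0], -1, p)
h = [(-inv_a0 * c) % p for c in (a[1:] + [1])]

def times_h(v):
    """v(x) -> h(x) * v(x) mod f(x): one step of the implicit inverse matrix."""
    out = [0] * n
    xi_h = h[:]                                # x^i * h(x) mod f(x), starting at i = 0
    for i, vi in enumerate(v):
        if vi:
            out = [(o + vi * c) % p for o, c in zip(out, xi_h)]
        xi_h = times_x(xi_h)
    return out

v = [1, 2, 0]                                  # the row vector, read as v(x) = 1 + 2x
w = v
for _ in range(k):                             # w = v * A, without ever forming A
    w = times_x(w)
u = w
for _ in range(k):                             # u = w * A^(-1), again without forming it
    u = times_h(u)
assert u == v
print("v:", v, " v*A:", w, " recovered:", u)
```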
VII. REDUCING THE COMPLEXITY OF
GENERATING A NON-SINGULAR MATRIX AND ITS
INVERSE
For the case of general matrices, there exist several research papers that propose methods to obtain their inverses [14-16]. Other algorithms focus on special types of matrices, such as positive definite matrices [17], tridiagonal matrices [18] and triangular matrices [19]. However, such algorithms have computational complexity $O(n^3)$. The first method with complexity strictly lower than $O(n^3)$ was proposed by Strassen in [20], with a running time of $O(n^{2.807})$, and, after several improvements to the exponent of $n$, Coppersmith and Winograd obtained a method capable of finding the inverse of a matrix in $O(n^{2.496})$ [21]. There exist methods whose running time is $O(n^2)$, such as the one proposed by Traub for the inversion of classical Vandermonde matrices [22], which has since been extended to many important cases beyond the classical Vandermonde case. Nonetheless, we did not find any reference where the generation of the inverse matrix takes fewer than $O(n^2)$ operations. Throughout this section we discuss some input configurations for the algorithms presented in this paper which allow us to reduce their computational complexity.
Proposition 7. Let the arbitrary values of $\mathbb{F}_q$ be equal to zero and let all the polynomials of the input, except the one of degree $n$, be used trivially. Then, the complexity of Algorithm 1 is $O(n\, M_q(n))$.
Proof. Notice that if the arbitrary values are zero, then no field element multiplications are carried out by Algorithm 1. Furthermore, for the levels that are used trivially, the corresponding multiplication reduces to the identity, given that the polynomial involved already has degree lower than that of the modulus, so no complexity is added by the multiplications modulo these polynomials. Later, to construct the $j$-th row it is only necessary to perform one multiplication modulo the degree-$n$ polynomial, whose complexity is $M_q(n)$. Hence, it is easy to check that the complexity of Algorithm 1 in this case is equal to $O(n\, M_q(n))$.
From the result of the above proposition one can obtain the following corollary.
Corollary 3. Under the configuration of Proposition 7, the complexity of Algorithm 1 is $O(n^2 \log n \log\log n)$.
Remark 1. It is worth noticing that, for matrix sizes of the order of thousands, the number of operations performed by this new configuration of Algorithm 1 is lower than that of the method presented by Coppersmith and Winograd [21].
Proposition 8. Let the arbitrary values of $\mathbb{F}_q$ be equal to zero and let the polynomials of two levels of the input be used non-trivially. Then, the complexity of Algorithm 1 remains bounded by the cost of the corresponding polynomial multiplications.
Proof. Using a reasoning analogous to the proof of Proposition 7, one can easily obtain that the complexity of the algorithm is given by the cost of the polynomial multiplications performed for the two non-trivial levels, from where the stated bound is straightforward. This completes the proof.
The results from Propositions 7 and 8 can be generalized in the following theorem.
Theorem 2. Let the arbitrary values of $\mathbb{F}_q$ be equal to zero and let the polynomials of $i$ levels of the input, $1 \le i \le n$, be used non-trivially. Then, the complexity of Algorithm 1 is bounded by the cost of the polynomial multiplications associated with those $i$ levels, with no contribution from multiplications of elements of $\mathbb{F}_q$.
Proof. It is the consequence of the combination of the proofs of Propositions 3, 7 and 8.
Notice that, as a result of Theorem 2, when $i = n$ the algorithm running time is roughly bounded by $O(n^2 \log n \log\log n)$, which corresponds to the bound given in [5]. Furthermore, the more levels we use, the greater is the subspace of matrices we can generate by means of our proposal.
In the next table we show a comparison of the result of Theorem 2 with other methods for obtaining non-singular matrices and their inverses in the literature.

Reference | Type of matrix | Upper bound for complexity
Theorem 2 | Arbitrary | $O(n^2 \log n \log\log n)$
Ref. [17] | Positive definite | $O(n^3)$
Ref. [18] | Tridiagonal | $O(n^3)$
Ref. [19] | Triangular | $O(n^3)$
Ref. [20] | Arbitrary | $O(n^{2.807})$
Ref. [21] | Arbitrary | $O(n^{2.496})$
Ref. [22] | Classical Vandermonde | $O(n^2)$
Once we have analyzed special input cases of the algorithm that obtain non-singular matrices with a reduced number of operations, let us show that the complexity of multiplying a row vector by such matrices is equivalent to that of a multiplication of polynomials modulo an $n$-degree polynomial over $\mathbb{F}_q$, as the identity sketched below suggests.
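In the simplified picture where the non-singular matrix is the matrix of multiplication by a fixed unit $g(x)$ in $\mathbb{F}_q[x]/(f(x))$ with $\deg f = n$ (an assumption of this sketch rather than the exact parameterization of Proposition 9), the equivalence can be written as follows.

```latex
% Identify the row vector v = (v_0, ..., v_{n-1}) with the polynomial v(x) = \sum_j v_j x^j.
% If the i-th row of A holds the coefficients of x^i g(x) mod f(x), then
\[
  vA \;=\; \text{coefficients of } \Big( \textstyle\sum_{j=0}^{n-1} v_j x^{\,j} \Big) g(x) \bmod f(x)
      \;=\; \text{coefficients of } v(x)\, g(x) \bmod f(x),
\]
% so computing vA costs one multiplication of polynomials modulo the n-degree polynomial f(x),
% that is, M_q(n) = O(n \log n \log\log n) by [12].
```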
Proposition 9. Let the input parameters of Algorithm 3 be chosen as in Proposition 7. Then, the complexity of Algorithm 3 is $M_q(n) = O(n \log n \log\log n)$.
Proof. It is worth noticing that, under this configuration, the input vector $v = (v_1, \dots, v_n)$ is modified only by the level associated with the degree-$n$ polynomial, given that the arbitrary values and the remaining exponents vanish. Hence, the result of the multiplication of $v$ by the non-singular matrix $A$ is equivalent to one multiplication of the polynomial associated with $v$ modulo the degree-$n$ polynomial, whose complexity is $M_q(n) = O(n \log n \log\log n)$. □
Following the reasoning of Propositions 7 and 8, which yields Theorem 2, we can also generalize the result of the above proposition as stated in the following corollary.
Corollary 4. Under the generalized configuration of Theorem 2 for some $1 \le i \le n$, the complexity of Algorithm 3 is bounded by the cost of the polynomial multiplications associated with the $i$ non-trivial levels.
In addition, we believe that prior knowledge of the factorization of the monic polynomials will also have an influence on the complexity of the algorithms, but this is a topic we will address in future research.
VIII. CONCLUSIONS
In this paper several methods for the construction of non-singular matrices were studied. We showed that one can substitute the primitive polynomials used in [5] by monic polynomials with non-zero independent coefficient and still generate non-singular matrices without affecting the complexity of the algorithm. In this fashion, we provided proofs of the complexity bounds of the methods presented here, which are more accurate than the ones in [5]. In addition, we analyzed some special input configurations which allow us to reduce the complexity of the algorithm that generates non-singular matrices to $O(n^2 \log n \log\log n)$ and of the algorithm that multiplies a row vector by a non-singular matrix to $O(n \log n \log\log n)$. Finally, we generalized these results to a family of input configurations of the algorithms in which only $i$ levels of the structure are used non-trivially, for some $1 \le i \le n$.
APPENDIX A. EXAMPLE OF THE PROPOSED SCHREIER STRUCTURE

We illustrate the Schreier structure used to demonstrate Theorem 1 for $n = 4$ over $\mathbb{F}_2$.

Let $A_1$ be the companion matrix of the arbitrary monic polynomial $f_1(x) = x^4 + x^3 + x^2 + x + 1 \in \mathbb{F}_2[x]$, which has order 5, and let $B_1$ be the companion matrix of a primitive polynomial $p_1(x) \in \mathbb{F}_2[x]$ of degree 4. The transversal $U_1$ is then formed by the matrices obtained from the powers of $A_1$ and $B_1$, one for each reachable image of $e_1$.

Let $A_2$ be the block matrix of the form (1) whose block $A_2'$ is the companion matrix of the polynomial $f_2(x) = x^3 + x^2 + x + 1$, which has order 4, and let $B_2$ be the block matrix of the form (1) whose block $B_2'$ is the companion matrix of a primitive polynomial $p_2(x) \in \mathbb{F}_2[x]$ of degree 3. The transversal $U_2$ is formed analogously, combining the images of $e_2$ with arbitrary values of $\mathbb{F}_2$ in the first coordinate.

Let $A_3$ be the block matrix of the form (1) whose block $A_3'$ is the companion matrix of the polynomial $f_3(x) = x^2 + x + 1$, which has order 3. Since the polynomial $f_3(x)$ is primitive, the matrix $B_3$ is not necessary, and the transversal $U_3$ is formed by the images of $e_3$ under the powers of $A_3$, combined with arbitrary values of $\mathbb{F}_2$ in the first two coordinates.

Finally, taking $f_4(x) = x + 1$, the transversal $U_4$ consists of the matrices whose fourth row has a 1 at the fourth coordinate and arbitrary values of $\mathbb{F}_2$ in the first three coordinates.
APPENDIX B. ALGORITHMS
Algorithm 2: Generate the matrix $A^{-1}$
Require: $n$, $q$ and the arbitrary elements of $\mathbb{F}_q$ used at each level.
Require: Monic polynomials $f_1(x), \dots, f_n(x) \in \mathbb{F}_q[x]$ with non-zero independent coefficient.
Require: $k_1, \dots, k_n$, where $e_i$ is the order of $f_i(x)$.
Ensure: The matrix $A^{-1}$.
// Calculate the j-th row of $A^{-1}$ ($1 \le j \le n$)
for $j \leftarrow 1$ to $n$ do:
   (invert the corresponding polynomial modulo $f_j(x)$)
   for $i \leftarrow \ldots$ down to $\ldots$ do:
      (undo the level-$i$ contribution: a multiplication modulo $f_i(x)$ and the corresponding operations with elements of $\mathbb{F}_q$)
   end for
   (store the resulting coefficients as the $j$-th row of $A^{-1}$)
end for

Algorithm 3: Multiply a vector $v$ by a matrix $A \in GL_n(\mathbb{F}_q)$
Require: $n$, $q$ and the arbitrary elements of $\mathbb{F}_q$ used at each level.
Require: Monic polynomials $f_1(x), \dots, f_n(x) \in \mathbb{F}_q[x]$ with non-zero independent coefficient.
Require: $k_1, \dots, k_n$, where $e_i$ is the order of $f_i(x)$.
Require: The vector $v = (v_1, \dots, v_n)$.
Ensure: The vector $w = vA$.
for $i \leftarrow \ldots$ down to $\ldots$ do:
   (apply the level-$i$ factor to the current vector: a multiplication modulo $f_i(x)$ and the corresponding operations with elements of $\mathbb{F}_q$)
end for
(final multiplication and placement of the resulting coefficients)
return $w = (w_1, \dots, w_n)$

Algorithm 4: Multiply a vector $v$ by the matrix $A^{-1}$
Require: $n$, $q$ and the arbitrary elements of $\mathbb{F}_q$ used at each level.
Require: Monic polynomials $f_1(x), \dots, f_n(x) \in \mathbb{F}_q[x]$ with non-zero independent coefficient.
Require: $k_1, \dots, k_n$, where $e_i$ is the order of $f_i(x)$.
Require: The vector $v = (v_1, \dots, v_n)$.
Ensure: The vector $w = vA^{-1}$.
(initial inversion and multiplication modulo the corresponding polynomial)
for $i \leftarrow \ldots$ to $\ldots$ do:
   (undo the level-$i$ factor: a polynomial inversion and multiplication modulo $f_i(x)$ and the corresponding operations with elements of $\mathbb{F}_q$)
end for
return $w = (w_1, \dots, w_n)$
REFERENCES
[1] Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C.: "Introduction to Algorithms". MIT Press, Massachusetts, 2022.
[2] Alman, J. and Williams, V. V.: "A refined laser method and faster matrix multiplication". In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 522-539, 2021.
[3] Li, Y.-X., Li, D.-X. and Wu, C.-K.: "How to generate a random nonsingular matrix in McEliece's public-key cryptosystem". In Proceedings Singapore ICCS/ISITA '92, pp. 268-269, IEEE, 1992.
[4] Randall, D.: "Efficient generation of random nonsingular matrices". Random Structures & Algorithms, Vol. 4(1), pp. 111-118, 1993.
[5] Freyre, P., Díaz, N. and Morgado, E.: "Fast algorithm for the multiplication of a row vector by a randomly selected matrix A". Journal of Discrete Mathematical Sciences and Cryptography, Vol. 12(5), pp. 533-549, 2009.
[6] Murray, S. H.: "The Schreier-Sims algorithm". Essay submitted to the Department of Mathematics of the Australian National University, 2003.
[7] Holt, D. F., Eick, B. and O'Brien, E. A.: "Handbook of Computational Group Theory". CRC Press, 2005.
[8] Cannon, J.: "A computational toolkit for finite permutation groups". In Proceedings of the Rutgers Group Theory Year, Vol. 1984, pp. 1-18, 1983.
[9] Green, J. A.: "Sets and Groups: A First Course in Algebra". Springer, 1988.
[10] Peterson, W. W. and Weldon, E. J.: "Error-Correcting Codes". MIT Press, Massachusetts, 1972.
[11] Harvey, D. and van der Hoeven, J.: "Faster polynomial multiplication over finite fields using cyclotomic coefficient rings". Journal of Complexity, Vol. 54, 101404, 2019.
[12] Cantor, D. G. and Kaltofen, E.: "On fast multiplication of polynomials over arbitrary algebras". Acta Informatica, Vol. 28(7), pp. 693-701, 1991.
[13] Brent, R. P. and Zimmermann, P.: "Modern Computer Arithmetic". Cambridge University Press, 2010.
[14] Althoen, S. C. and Mclaughlin, R.: "Gauss-Jordan reduction: A brief history". The American Mathematical Monthly, Vol. 94(2), pp. 130-142, 1987.
[15] Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P.: "Numerical Recipes: The Art of Scientific Computing". Cambridge University Press, 1992.
[16] Krishnamoorthy, A. and Menon, D.: "Matrix inversion using Cholesky decomposition". In 2013 Signal Processing: Algorithms, Architectures, Arrangements and Applications, pp. 70-72, IEEE, 2013.
[17] Vajargah, B. F.: "A way to obtain Monte Carlo matrix inversion with minimal error". Applied Mathematics and Computation, Vol. 191(1), pp. 225-233, 2007.
[18] Huang, Y. and McColl, W.: "Analytical inversion of general tridiagonal matrices". Journal of Physics A: Mathematical and General, Vol. 30(22), 1997.
[19] Ries, F., De Marco, T. and Guerrieri, R.: "Triangular matrix inversion on heterogeneous multicore systems". IEEE Transactions on Parallel and Distributed Systems, Vol. 23(1), pp. 177-184, 2011.
[20] Strassen, V.: "Gaussian elimination is not optimal". Numerische Mathematik, Vol. 13(4), pp. 354-356, 1969.
[21] Coppersmith, D. and Winograd, S.: "On the asymptotic complexity of matrix multiplication". SIAM Journal on Computing, Vol. 11(3), pp. 472-492, 1982.
[22] Traub, J. F.: "Associated polynomials and uniform methods for the solution of linear problems". SIAM Review, Vol. 8(3), pp. 277-301, 1966.
ABOUT THE AUTHORS
Pablo Freyre Arrozarena
Workplace: Institute of
Cryptography. University of Havana.
Email: pfreyre@matcom.uh.cu
Education: Graduated in Mathematics in 1988. Received his Doctor's degree in 1998.
Research interests: Currently works in the fields of
symmetric cryptography and post-quantum
cryptography.
Alejandro Freyre Echevarría
Workplace: Institute of
Cryptography. University of Havana.
Email: freyreealejandro@gmail.com
Education: Graduated in Computer Science in 2020. Received his Master's degree in 2023.
Research interests: Currently works
in the fields of symmetric cryptography, optimization
and post-quantum cryptography.
Ernesto Dominguez Fiallo
Workplace: Institute of
Cryptography. University of Havana.
Education: Graduated in Mathematics in 2015. Received his Master's degree in 2019.
Research interest: Currently works in
the field of post-quantum cryptography, specifically in
code-based cryptography.
Ramses Rodríguez Aulet
Workplace: Institute of
Cryptography. University of Havana.
Email: ramsesrusia@yahoo.com
Education: Graduated in Mathematics in 2017. Received his Master's degree in 2020.
Research interest: Currently works in the fields of
symmetric cryptography and post-quantum
cryptography.
Samir Alzugaray Vizcaino
Workplace: Institute of
Cryptography. University of Havana.
Education: Graduated in Mathematics in 2016.
Research interest: Currently works in
the fields of symmetric cryptography
and post-quantum cryptography.