TELKOMNIKA, Vol.10, No.1, March 2012, pp. 137~146
ISSN: 1693-6930
accredited by DGHE (DIKTI), Decree No: 51/Dikti/Kep/2010
Received August 15, 2011; Revised November 7, 2011; Accepted February 7, 2012
Classification and Numbering of Dental Radiographs
for an Automated Human Identification System
Anny Yuniarti*, Anindhita Sigit Nugroho, Bilqis Amaliah, Agus Zainal Arifin
Laboratory of Vision and Image Processing, Department of Informatics
Faculty of Information Technology, Institut Teknologi Sepuluh Nopember
Kampus ITS, Sukolilo, Surabaya 60111, Indonesia, Telp/fax 031-5939214/031-5913804
e-mail: anny@if.its.ac.id
Abstrak
Dental-based human identification is commonly used in forensics. In the case of a large-scale investigation, manual human identification requires a long time. In this paper, an automated human identification system using dental radiographs is developed. The developed system works in two main stages. The first stage is building a database containing labeled dental radiograph data. The second stage is searching the database to obtain the identification result. Both stages use a series of image processing, classification, and numbering processes to obtain dental radiograph patterns and features. First, preprocessing is performed, which includes image enhancement and binarization, single tooth extraction, and feature extraction. Next, a tooth classification process is carried out to classify the teeth into molars and premolars using the binary support vector machine (SVM) method. After that, tooth numbering is performed according to the molar and premolar pattern obtained in the previous stage. Experiments using 16 dental radiographs, consisting of 6 bitewing radiographs and 10 panoramic radiographs with a total of 119 tooth objects, show good accuracy values: 91.6% for the classification of teeth into molars and premolars and 81.51% for the tooth numbering process.
Kata kunci: forensics, human identification, dental radiographs, segmentation, tooth numbering system
Abstract
Dental based human identification is commonly used in forensics. In the case of a large scale investigation, manual identification needs a large amount of time. In this paper, we developed an automated human identification system based on dental radiographs. The developed system has two main stages. The first stage is to arrange a database consisting of labeled dental radiographs. The second stage is the searching process in the database in order to retrieve the identification result. Both stages use a number of image processing techniques, classification methods, and a numbering system in order to generate dental radiograph features and patterns. The first technique is preprocessing, which includes image enhancement and binarization, single tooth extraction, and feature extraction. Next, we performed a dental classification process which aims to classify each extracted tooth into molar or premolar using the binary support vector machine method. After that, a numbering process is executed in accordance with the molar and premolar pattern obtained in the previous process. Our experiments using 16 dental radiographs, consisting of 6 bitewing radiographs and 10 panoramic radiographs with 119 tooth objects in total, have shown good classification performance. The accuracy values of the dental pattern classification and the dental numbering system are 91.6% and 81.5%, respectively.
Keywords: forensics, human identification, dental radiographs, segmentation, dental numbering system
1. Introduction
Biometrics is an identification tool that has been broadly used in many applications. A biometric identification system is based on physical characteristics such as the face [1], fingerprint, palmprint [2], eyes (iris, retina), and DNA. However, many of those characteristics are only suitable for ante mortem (AM) identification, when the person to be identified is still alive. They cannot be used for postmortem (PM) identification, especially in the case of decay or severe body damage caused by fire or collision [3].
Teeth are parts of the human body that do not decay easily; located inside the mouth, they are more protected from decay after death. Therefore, teeth based identification is one of the reliable tools for PM identification.
On average, a human has 32 teeth; each tooth has five surfaces, meaning that inside a mouth there are 160 tooth surfaces with various conditions. If we use dental features as a tool of identification, manual matching of AM with PM data needs a large amount of time and some expertise. Therefore, a computer-aided identification system is needed.
In order to create an automated identification system, dental features on a dental radiograph need to be extracted and saved in a database. During identification, features of each tooth on the input are extracted and compared to those in the database. This matching process will take a long time to complete if we do not reduce the search space. In this paper, we reduce the search space by comparing the pattern and numbering of teeth only. This results in a list of matched dental and numbering patterns. Therefore, we can enhance the effectiveness of the identification system.
Figure 1 shows the international dental numbering system, which also shows the molar and premolar teeth. There are 32 teeth in adult people, sixteen teeth on each jaw. There are two jaws, the maxilla and the mandible. Each jaw is divided into two groups, left and right. Thus, each group consists of eight teeth comprised of two bicuspid, one cuspid, two premolar, and three molar teeth. In this research we only use molar and premolar teeth as part of the dental pattern, since molar and premolar teeth are usually stronger than the others.
The international dental numbering system numbers the teeth from 1 to 32, starting from the third molar in the right maxilla (#1) and going through the maxilla to the third molar in the left maxilla (#16). Next, the numbering is continued to the third molar in the left mandible (#17) and around the mandible until we reach the third molar in the right mandible (#32) [4].
There are three kinds of dental radiographs: bitewing, panoramic, and periapical. In the literature [3-5], bitewing images are usually used for identification. However, in this paper, we tested our method not only on bitewing radiographs, but also on panoramic radiographs. Bitewing radiographs have a wider space between the upper and lower jaws, whereas in panoramic radiographs the upper and lower jaws are closer.
Automated dental based identification consists of extracting dental features and the feature matching itself [5-7]. In this paper, the dental feature used for identifying a human is the arrangement of molar and premolar teeth and the numbering of each radiograph. In order to have this arrangement, we have to classify each tooth in a radiograph into molar or premolar. But first, we have to extract the teeth using several image processing techniques. The tooth separation is crucial to the system. Our tooth separation method has been able to correctly extract single teeth; only two of the sixteen images have not been correctly segmented, due to very high intensity in the lower jaw bone.
The rest of the paper is organized as follows. Section 2 gives an explanation of the method used in this research. Section 3 explains the results and analysis. In Section 4, we present the conclusion and future works.
Figure 1. A system of dental numbering in adults
2. Research Method
In this section, the proposed system design and its three main functions, namely pre-processing, teeth separation, and classification and numbering, are explained. All functions in the proposed system are implemented using Matlab 7.0.
There are two main phases in the proposed human identification system, as shown in Figure 2: the dental data recording phase and the identification phase. In the dental data recording phase, dental radiographs are processed. There are three main functions in this phase, namely preprocessing, teeth separation, and classification and numbering. The results of this phase are dental patterns, which are then recorded in a database along with the original
radiographs. The identification phase aims to determine to which data in the database a dental radiograph, called a query, belongs. The functions applied to the dental radiograph in the identification phase are similar to those applied to the radiographs in the recording phase. The result of the classification and numbering system in this phase is used as a search query which leads to an identification result.
Figure 2. Design of the proposed human identification system
In the pre-processing step, a digitized dental radiograph is loaded from the local hard disk. The dental radiograph can be bitewing or panoramic. For panoramic radiographs, we only take the molar and premolar part. Next, we perform image enhancement that aims to equalize the brightness level so that no pixel has a very high intensity level compared to its neighbors; this usually happens with dental fillings. Next, we perform contrast enhancement. Generally, dental radiographs have low contrast. In order to facilitate separating the teeth from the background, we increase the contrast using morphological operations, namely the top-hat and bottom-hat operators [8]. After that, we perform a local histogram equalization called Contrast-Limited Adaptive Histogram Equalization (CLAHE) [8].
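To make the enhancement step concrete, the following minimal Matlab sketch combines a top-hat/bottom-hat contrast stretch with CLAHE, assuming the Image Processing Toolbox; the file name, structuring-element radius, and clip limit are illustrative assumptions, not values reported in this paper.

% Illustrative enhancement sketch (Image Processing Toolbox required).
% File name, structuring-element radius, and clip limit are assumptions.
I = im2double(imread('bitewing.tif'));          % hypothetical input radiograph
se = strel('disk', 15);                         % assumed structuring-element radius
Itop = imtophat(I, se);                         % bright details (teeth)
Ibot = imbothat(I, se);                         % dark details (gaps, background)
Ienh = mat2gray(I + Itop - Ibot);               % morphological contrast stretch
Iclahe = adapthisteq(Ienh, 'ClipLimit', 0.02);  % CLAHE; clip limit assumed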
After pre-processing, the grayscale digital radiographs are converted into binary images using Otsu's thresholding method [8], followed by closing and opening operations to smooth the teeth contours and remove noise. Next, we perform a horizontal integral projection [9] followed by the spline method to separate the image into a maxilla image and a mandible image. Finally, we use the vertical integral projection method on each maxilla and mandible image independently to extract single tooth images.
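Continuing the previous sketch under the same assumptions, the fragment below outlines the binarization and jaw separation; for brevity it splits the jaws at the single deepest valley of the horizontal projection, whereas the paper traces the gap with a spline.

% Otsu thresholding, morphological smoothing, and integral projections.
level = graythresh(Iclahe);
BW = im2bw(Iclahe, level);                       % binary image: white = teeth
BW = imopen(imclose(BW, strel('disk', 3)), strel('disk', 3));  % sizes assumed
hproj = sum(BW, 2);                              % horizontal integral projection (row sums)
[minVal, gapRow] = min(hproj);                   % valley between maxilla and mandible
maxilla = BW(1:gapRow, :);
mandible = BW(gapRow+1:end, :);
vproj = sum(maxilla, 1);                         % vertical projection; valleys mark tooth boundaries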
The next process is dental feature extraction for each tooth. This step is used for classifying each tooth into the molar or premolar class. The dental features are the area of each tooth and the ratio of each tooth's width and height. After feature extraction, we classify each tooth into the molar or premolar class using the binary support vector machine (SVM) method. SVM is a well-known binary classification method. Given a set of training data, each marked as belonging to one of two classes, an SVM training method creates a model that predicts whether a new data point belongs to one class or the other [10]. An SVM model is a representation of all data as points in space with a clear gap that separates the data into two categories; the separating boundary is called a hyperplane, and the gap around it is built as wide as possible. New testing data are then mapped into the same space and predicted as members of a class based on which side of the hyperplane they fall on.
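As a sketch of this step, the fragment below computes the area and width/height ratio of one extracted tooth mask and trains a linear SVM. Note that fitcsvm belongs to the newer Statistics and Machine Learning Toolbox (the authors used Matlab 7.0), and toothMask, trainAreas, trainRatios, and trainLabels are placeholders rather than the paper's data.

% Per-tooth features: area and width/height ratio of the bounding box.
stats = regionprops(bwlabel(toothMask), 'Area', 'BoundingBox', 'Centroid');
area  = stats(1).Area;
bb    = stats(1).BoundingBox;                % [x y width height]
ratio = bb(3) / bb(4);                       % width / height

% Binary SVM on [area, ratio]; training arrays below are placeholders.
Xtrain = [trainAreas(:) trainRatios(:)];     % hypothetical labeled training set
Ytrain = trainLabels;                        % e.g. {'M'; 'P'; ...}
model = fitcsvm(Xtrain, Ytrain, 'KernelFunction', 'linear');
predictedClass = predict(model, [area ratio]);   % 'M' (molar) or 'P' (premolar)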
The molar-premolar pattern of each image is then refined using default patterns. There are two kinds of default patterns, as shown in Figure 3, namely a pattern for the right side of the teeth (Pattern 1) and a pattern for the left side (Pattern 2). In this paper, the first step of the numbering system is to find which default pattern has the highest similarity value with the tested pattern sequence. The similarity matrix is computed using a simplified Smith-Waterman algorithm [11] as in Equation (1). Let $T = t_1 t_2 \ldots t_n$ be a sequence of dental numbering, $P = p_1 p_2 \ldots p_m$ be a dental pattern, and $m \le n$. The similarity matrix $O = \{O_{ij}\}$ consists of the similarity degrees between the segment pairs $T_i$ and $P_j$.
$$O_{ij} = \begin{cases} O_{i-1,j-1} + 1, & \text{if } t_i = p_j \\ \max\{O_{i-1,j-1} - \frac{1}{3},\ 0\}, & \text{if } t_i \neq p_j \end{cases} \qquad (1)$$
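Read this way, Equation (1) admits a direct transcription; the Matlab function below is a straightforward implementation of that recurrence (match reward +1, mismatch penalty 1/3, scores clipped at zero) and is provided only as an illustration.

function O = similarityMatrix(T, P)
% Simplified Smith-Waterman similarity matrix of Equation (1).
% T and P are character vectors over {'M','P'}, e.g. T = 'MMMPPP', P = 'MMMPP'.
n = numel(T);  m = numel(P);
S = zeros(n + 1, m + 1);                      % padded zero row/column for O(i-1,j-1)
for i = 1:n
    for j = 1:m
        if T(i) == P(j)
            S(i+1, j+1) = S(i, j) + 1;            % match: t_i = p_j
        else
            S(i+1, j+1) = max(S(i, j) - 1/3, 0);  % mismatch: t_i ~= p_j
        end
    end
end
O = S(2:end, 2:end);                          % drop the padding
end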
Using Equation (1), we compute four similarity matrices, namely Omax1, Omax2, Oman1, and Oman2. Then the maximum value of Omax1 is added to that of Oman1 and compared to the sum of the maximum values of Omax2 and Oman2. If the sum of the maximum values of Omax1 and Oman1 is higher than that of Omax2 and Oman2, then we choose Pattern 1.
Otherwise, we choose Pattern 2. The next step is defining the position of the dental numbering based on the chosen default pattern. First, find the element of the similarity matrix $O_{kl}$ that has the maximum value, and set a column index and a row index based on the element's position, i.e. the column index = k and the row index = l. Then, label each tooth in the maxilla and mandible with the number in the default pattern numbering system, i.e. $p_{l-k+i}$, $1 \le i \le k$.
As an illustration, suppose that the patterns resulting from the SVM classification are molar-molar-molar-premolar-premolar (MMMPP) for the maxilla and MMPP for the mandible. Using Equation (1), the similarity matrices are as shown in Figure 4. Then find the maximum value of each matrix, i.e. Smax1 = 5, Smax2 = 3, Sman1 = 4, Sman2 = 2. Thus, the score of Pattern 1 is 9, while the score of Pattern 2 is 5. Therefore we choose Pattern 1 as the default pattern. Next, label each tooth using the teeth alignment method described above. For the maxilla sequence (MMMPP), k = 5 and l = 5, so the resulting number sequence is 1-2-3-4-5. For the mandible sequence (MMPP), k = 4 and l = 5, so the number sequence is 31-30-29-28.
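Using the similarityMatrix sketch given after Equation (1), this worked example can be reproduced; the default sequences and the l-k+1 indexing follow the description in the text.

% Reproducing the worked example (maxilla MMMPP, mandible MMPP).
def1 = 'MMMPPP';                          % Pattern 1 sequence (right side)
def2 = 'PPPMMM';                          % Pattern 2 sequence (left side)
Omax1 = similarityMatrix(def1, 'MMMPP');  Smax1 = max(Omax1(:));   % 5
Omax2 = similarityMatrix(def2, 'MMMPP');  Smax2 = max(Omax2(:));   % 3
Oman1 = similarityMatrix(def1, 'MMPP');   Sman1 = max(Oman1(:));   % 4
Oman2 = similarityMatrix(def2, 'MMPP');   Sman2 = max(Oman2(:));   % 2
% Pattern 1 wins: Smax1 + Sman1 = 9 versus Smax2 + Sman2 = 5.

% Numbering for the mandible: the maximum of Oman1 lies at row l = 5, column k = 4,
% so the teeth take the Pattern 1 mandible numbers at positions l-k+1 .. l.
mandNumbers = 32:-1:27;                   % Pattern 1 mandible numbering (Figure 3)
k = 4;  l = 5;
assigned = mandNumbers(l-k+1 : l);        % [31 30 29 28]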
Pattern 1
            M    M    M    P    P    P
Maxilla     1    2    3    4    5    6
Mandible    32   31   30   29   28   27

Pattern 2
            P    P    P    M    M    M
Maxilla     11   12   13   14   15   16
Mandible    22   21   20   19   18   17

Figure 3. Two default patterns of dental numbering
Figure 4. The similarity matrices between two default patterns and the pattern MMMPP-MMPP.
The last procedure in the proposed identification system is related to database access.
Final images after the classification and numbering process are stored in the database along
with dental data such as a unique serial number, name and age of the radiograph’s owner, date
of recording, molar-premolar pattern, numbering pattern, area and ratio features, and the file paths of
the original image, picture of the owner, and the classified image.
We use the molar-premolar pattern and the numbering in both the maxilla and mandible as the query of the identification process. The result of this kind of query may include more than one identified person. For further processing, the user may add the area and ratio features as part of the query. Using these features, the system will choose the data in the database that have equal area and ratio.
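The database interaction itself is not detailed beyond the fields listed above; as a purely hypothetical illustration, the fragment below mimics this two-step query with a Matlab struct array standing in for the MySQL table, using made-up field names and values.

% Hypothetical record layout mirroring the fields listed above.
db(1) = struct('serial', 1, 'name', 'subject A', 'age', 30, ...
    'pattern', 'MMMPP-MMPP', 'numbering', '1-2-3-4-5/31-30-29-28', ...
    'area', 8512, 'ratio', 0.83, 'imagePath', 'pan1_Left.tif');

% Step 1: narrow the search space by molar-premolar pattern and numbering.
queryPattern = 'MMMPP-MMPP';
queryNumbering = '1-2-3-4-5/31-30-29-28';
hits = db(strcmp({db.pattern}, queryPattern) & strcmp({db.numbering}, queryNumbering));

% Step 2 (optional): refine the candidate list with the area and ratio features.
hits = hits([hits.area] == 8512 & abs([hits.ratio] - 0.83) < 1e-3);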
3. Results and Discussion
We use 16 dental radiograph images, composed of 6 bitewing radiographs and 10 panoramic radiographs. Based on an expert's identification, there are 37 tooth objects in the 6 bitewing radiographs, whereas in the panoramic radiographs there are 82 tooth objects identified by the expert. Therefore, there are 119 tooth objects in total. Three samples of the system's input images are shown in Figure 5(a-c).
3.1. Pre-processing and Teeth Separation
In the first process, the input images are successfully enhanced, as shown in Figure 6. However, when a tooth object's intensity is so low that it approaches the background's intensity, or when the lower jaw bone's intensity is so high that it approaches the teeth's intensity, this process does not perform well; this occurred in three out of the sixteen images in our experiment.
In the binarization process, the enhanced images are successfully converted into binary images. Except for the three images having the intensity problem explained before, all binary images have the following properties: the white pixels of the binary images represent teeth objects, whereas non-hole black pixels represent the background. Sample outputs of the binarization process are shown in Figure 7.
Figure 5. Sample input to the system: (a) a bitewing radiograph (b) a left-cropped panoramic
radiograph (c) right-cropped panoramic radiograph
Figure 6. Results of enhancement process applied to three sample inputs as in Figure 5.
Figure 7. Results of binarization process applied to three enhanced images as in Figure 6(a-c).
The process after binarization is separating each radiograph into two parts, namely the maxilla and mandible parts. Our experiments show that using the horizontal integral projection followed by the spline method, we can split the radiograph into the two regions (maxilla and mandible) well (see Figure 8). Figures 8(a) and 8(d) are the two regions resulting from Figure 7(a). We can see that bitewing radiographs are easier to separate horizontally. Figures 8(b, e) and 8(c, f)
result from Figures 7(b) and (c), respectively. Here, we can successfully separate the binary image into maxilla and mandible images even though the upper and lower jaws are very close in panoramic radiographs.
Each region is processed further by applying the vertical integral projection followed by the spline method to separate the teeth region into single tooth regions. Overall, our method performs well in our experiments except for two images that have very high lower jaw bone intensity. In the case of a molar tooth that has double roots in the mandible, our method also performs well, because we only take the upper 3/5 part of the mandible; hence, pixel values of the tooth roots are not included in the computation of the teeth separation.
3.2. Classification and Numbering
In the classification process, we first consider a tooth object to be an isolated area having more than 6000 pixels. From each tooth object, we extracted its area, the ratio of its height and width, and its centroid. Based on these features, we classify each tooth into molar or premolar using the binary SVM method. As a comparison, we also implemented the classification using the k-nearest neighbor (kNN) method, a simpler method than SVM, with k = 9.
Based on our experiments, there is a significant difference between the accuracy of the SVM classification results and that of the kNN results. Using the SVM method, the total accuracy value reaches 89.07%, i.e. 106 out of 119 objects were correctly classified, whereas the average accuracy of the kNN method reaches 77.31%, i.e. 92 out of 119 objects were correctly classified.
Next, we applied the numbering system by marking each tooth with a number, and we also modified the classes using the standard numbering system in order to avoid abnormal molar and premolar patterns. As an example, if a classification process results in a pattern such as premolar-molar-premolar-premolar (P-M-P-P), then the pattern will be modified into M-M-P-P. This strategy has been able to improve the system's accuracy to 91.60%; hence, 109 out of 119 objects are now classified correctly. The implemented numbering system also performs well: 97 out of 119 objects are numbered correctly, which leads to a total accuracy value of 81.51%. Details of the classification and numbering accuracy values are shown in Table 1 and Table 2, respectively, and sample output images are shown in Figure 9. In Figure 9, the extracted teeth are marked using a yellow line and labeled with M for the molar class or P for the premolar class, followed by a number representing the numbering result.
Figure 8. Results of teeth separation applied to three binary images as in Figure 7(a-c);
Top row: maxilla regions. Bottom row: mandible regions.
Figure 9. Results of classification and numbering process applied to extracted teeth as
in Figure 8(a-f).
Table 1. The accuracy of molar-premolar classification
No  Filename          Classification Accuracy (%)
                      kNN      SVM      SVM followed by default pattern modification
1   bit1_Right.tif    100.00   100.00   100.00
2   bit2_Right.tif    71.42    100.00   100.00
3   bit3_Left.tif     80.00    80.00    80.00
4   bit4_Right.tif    71.42    100.00   100.00
5   bit5_Left.tif     57.14    71.42    85.71
6   bit6_Right.tif    33.33    66.67    66.67
7   pan1_Left.tif     100.00   100.00   100.00
8   pan1_Right.tif    87.50    100.00   100.00
9   pan25_Left.tif    88.89    88.89    88.89
10  pan25_Right.tif   100.00   100.00   100.00
11  pan34_Left.tif    75.00    100.00   100.00
12  pan34_Right.tif   71.42    100.00   100.00
13  pan50_Left.tif    50.00    75.00    75.00
14  pan50_Right.tif   55.56    55.56    66.67
15  pan70_Left.tif    87.50    100.00   100.00
16  pan70_Right.tif   100.00   87.50    100.00
    Total accuracy out of 119 tooth objects   77.31   89.07   91.60
Table 2. The accuracy of numbering using teeth alignment
No Filename Numbering Accuracy (%)
1 bit1_Right.tif 60.00
2 bit2_Right.tif 100.00
3 bit3_Left.tif 60.00
4 bit4_Right.tif 100.00
5 bit5_Left.tif 85.71
6 bit6_Right.tif 00.00
7 pan1_Left.tif 100.00
8 pan1_Right.tif 100.00
9 pan25_Left.tif 88.89
10 pan25_Right.tif 100.00
11 pan34_Left.tif 100.00
12 pan34_Right.tif 100.00
13 pan50_Left.tif 50.00
14 pan50_Right.tif 33.67
15 pan70_Left.tif 100.00
16 pan70_Right.tif 100.00
Total accuracy out of 119 tooth objects   81.51
3.3. Identification System
The proposed automated human identification system was implemented using MySQL
database server and Matlab 7.0. The system consists of four user interfaces. The first user
interface is used for classification and numbering of dental radiographs. Sample input and
output of the classification and numbering system are shown in Figure 10. In the system's user
interface, there are 6 buttons consisting of "Open Image" button to load an input image from
local disk, "Proceed" button to perform the proposed methods, "Save" button to store the
radiograph and its properties including dental pattern and numbering into the database,
"Search" button to find a match of current radiograph in the database based on its properties,
"Database" button to browse the database's contents, and "Exit" button to quit the application.
The second user interface aims to add a classified radiograph into the database. This
interface only appears after users click the “Save” button in the first user interface. In this
window, users may add additional information such as name, age, picture, and other
information. Figure 11 shows an example of the action.
The third window appears when users click the “Search” button in the first user interface. This window aims to find whether there are matching data in the database based on the resulting pattern and numbering. The search process may result in zero, one, or more than one identity, because we compare only the dental pattern, including the dental numbering. Figure 12 shows an example in which the system found exactly one matching result.
Figure 10. The classification and numbering system's user interface.
Figure 11. The user interface for saving new data into the database.
Figure 12. The user interface for the searching process. The right image is the query; the left image is the result.
The last user interface is for viewing and querying the database. In this window, users are able to view all data in the database and to execute queries based on pattern or numbering. Figure 13 illustrates the query “14-15-16”, which results in one found record.
Figure 13. The user interface for querying the database. Users are asked to enter the pattern or numbering in the Search text field.
4. Conclusion
The proposed system has been successfully implemented and is able to generate
dental pattern and numbering based on dental radiographs. In this paper, we have shown that
our method can be applied not only to bitewing radiographs, but also to panoramic radiographs.
The total accuracy value of dental pattern classification is 91.6% and the total accuracy of
dental numbering system is 81.5%. However, there are some images that cannot be segmented
correctly, due to low intensities of the tooth objects. This error propagates into the subsequent processes and
hence leads to incorrect classification and numbering. Therefore, the segmentation method still
needs further research.
Acknowledgment
The authors wish to acknowledge the Institute of Research and Public Services, Institut Teknologi Sepuluh Nopember (ITS), which financed this work through the letter of agreement for research implementation No. 781/I2.7/PM/2011, dated 1 April 2011. The dental radiographs used in this research were obtained from the Hiroshima University Hospital, Hiroshima, Japan.
References
[1] Muntasa A, Sirajudin IA, Purnomo MH. Appearance Global and Local Structure Fusion for Face Image
Recognition. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2011; 9(1): 125-132.
[2] Putra IKGD, Erdiawan. High Performance Palmprint Identification System Based On Two Dimensional
Gabor. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2010; 8(3): 309-318.
[3] Lin PL, Lai YH, Huang PW. An Effective Classification and Numbering System for Dental Bitewing
Radiographs using Teeth Region and Contour Information. Pattern Recognition Journal. 2010; 43:
1380-1392.
[4] Mahoor MH, Abdel-Mottaleb M. Classification and Numbering of Teeth in Dental Bitewing Images.
Pattern Recognition Journal. 2005; 38: 577-586.
[5] Abdel-Mottaleb M., Nomir O, Nasser DE, Fahmy G, Ammar H. Challenges of Developing an
Automated Dental Identification System. 64th IEEE Midwest Symposium on Circuits and Systems.
Cairo, Egypt. 2004.
[6] Ammar H, Abdel-Mottaleb M, Jain A. Automated Dental Identification System (ADIS). 8th Annual
International Conference on Digital Government Research: Bridging Discipline & Domains.
Philadelphia, Pennsylvania, USA. 2007; 228: 248-249.
[7] Samopa F. Tooth Shape Measurement on Dental Radiographs for Forensic Personal Identification.
Dissertation of Department of Information Engineering, Graduate School of Engineering, Hiroshima
University. Hiroshima, Japan; 2009.
[8] Gonzalez RC, Woods RE. Digital Image Processing Using MATLAB. New Jersey, USA: Pearson
Prentice-Hall, Pearson Education, Inc. 2004.
[9] Hermawati FA, Koesdijarto R. A Real-Time License Plate Detection System for Parking Access.
TELKOMNIKA Indonesian Journal of Electrical Engineering. 2010; 8(2): 97-106.
[10] Duda RO, Hart PE, Stork DG. Pattern Classification Second Edition. New York, USA: Wiley-
Interscience. 2001.
[11] Smith TF, Waterman MS. Identification of common molecular subsequences. Journal of Molecular
Biology. 1981; 147: 195–197.