Fig. 4: Basic shapes of line charts (available from: SN Applied Sciences)

Source publication
Article
In this paper, we propose a new convolutional neural network (CNN) architecture to build a multi-label classifier that categorizes line chart images according to their characteristics. The class labels are organized in the form of trend property (increasing or decreasing) and functional property (linear or exponential). In the proposed method, the...
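The abstract is truncated and gives no implementation details, but the setup it describes, one chart image with two independent binary properties (trend and functional form), can be sketched as a multi-label CNN. The network below is a hedged illustration only: the layer sizes, 128x128 grayscale input, and two-output sigmoid encoding are assumptions for the example, not the authors' architecture (training would typically use a binary cross-entropy loss per output).

```python
# Hedged sketch of a multi-label line chart classifier: two independent
# binary outputs, one for trend (increasing/decreasing) and one for
# functional form (linear/exponential). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LineChartClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 32 * 32, 2)   # two logits: [trend, functional form]

    def forward(self, x):                         # x: (batch, 1, 128, 128) grayscale chart
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))        # independent probability per property

model = LineChartClassifier()
probs = model(torch.rand(4, 1, 128, 128))         # e.g. probs[:, 0] > 0.5 -> "increasing"
```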

Context in source publication

Context 1
... method, separate datasets were generated for each label; in the LP method, the labels were transformed as "linearly increasing", "linearly decreasing", "exponentially increasing" and "exponentially decreasing". The basic shapes of these line chart types are shown in Fig. 4. A linear chart has a straight-line shape in its graphical representation: it increases or decreases at a constant rate of change. A linear function, when visualized, forms a linear chart and has the mathematical form indicated in Eq. 3. If the constant a is positive, the function is called "linearly increasing"; ...
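Eq. 3 itself is not reproduced in this excerpt. As a hedged reading based on the surrounding description, the linear and exponential chart shapes are commonly written as below; the symbols a, b, and c are illustrative assumptions, not necessarily the paper's notation.

```latex
% Assumed general forms (Eq. 3 is not reproduced in the excerpt above):
% linear chart:      f(x) = a x + b   : "linearly increasing" if a > 0, decreasing if a < 0
% exponential chart: g(x) = c e^{a x} : increasing if a > 0, decreasing if a < 0 (for c > 0)
\begin{align}
  f(x) &= a x + b \\
  g(x) &= c\, e^{a x}
\end{align}
```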

Similar publications

Chapter
Cylindrical algebraic decomposition (CAD) is a fundamental tool in computational real algebraic geometry. Previous studies have shown that machine learning (ML) based approaches may outperform traditional heuristic ones at selecting the best variable ordering when the number of variables n ≤ 4. One main challenge for handling the general case...

Citations

... - Precision: High precision indicates accurate identification of "Lie" instances, while low precision suggests more false positives [21].
- Recall: High recall reflects the model's effectiveness in capturing "Lie" instances, whereas low recall implies missed "Lie" instances [22].
- F1-Score: Balances precision and recall, with a higher F1 score indicating a better balance. ...
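For reference, the three metrics quoted above can be computed directly. The sketch below uses scikit-learn on made-up binary labels (1 = "Lie", 0 = "Truth"); the values are illustrative only, not data from the cited study.

```python
# Illustrative only: precision, recall, and F1 for a binary "Lie"/"Truth" task.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground truth: 1 = "Lie", 0 = "Truth"
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions

print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1:       ", f1_score(y_true, y_pred))         # harmonic mean of the two
```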
Article
This article details a study on enhancing deception detection accuracy by using Hybrid Deep Neural Network (HDNN) models. The research, focusing on fear-related micro-expressions, utilizes a diverse dataset of responses to high-stakes questions. It analyzes facial action units (AUs) and pupil size variations through data preprocessing and feature extraction. The HDNN model outperforms the traditional Convolutional Neural Network (CNN) with a 91% accuracy rate. The findings’ implications for security, law enforcement, psychology, and behavioral treatments are discussed. Ethical considerations of deception detection technology deployment and future research directions, including cross-cultural studies, real-world assessments, ethical guidelines, studies on emotional expression dynamics, “explainable AI” development, and multimodal data integration, are also explored. The study contributes to deception detection knowledge and highlights the potential of machine learning techniques, especially HDNN, in improving decision-making and security in high-stakes situations.
... Although using both (textual and graphical) sources of information is best practice, it is not strictly necessary when deciding on the chart type. Authors and scientific papers that report making the decision based only on graphical information are Bajić et al. [4,6], Image & graphic reader [32], Beagle [10], Chart-Text [15], and others [11,[24][25][26][27][28][33][34][35][36][37][38][39][40][41][42][43]. ...
... The two most popular methods for edge detection in chart images are Canny and thinning. The scientific papers and authors that report using edge detection are Reverse-Engineering Visualizations [5], Zhou and Tan [30,31], Image & graphic reader [32], ReVision [18], Mishchenko and Vassilieva [54][55][56], ChartSense [21], Chart Decoder [44], Chart-Text [15], Visualizing for the Non-Visual [16], and others [3,19,25,27,33,36,42,45,52]. ...
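As a point of reference, a minimal Canny edge-detection pass over a chart image with OpenCV might look like the sketch below; the file name and threshold values are placeholders and this is not tied to any specific cited system.

```python
# Minimal sketch of Canny edge detection on a chart image with OpenCV.
# File name and thresholds are placeholders, not taken from the cited works.
import cv2

img = cv2.imread("line_chart.png", cv2.IMREAD_GRAYSCALE)  # load as grayscale
blurred = cv2.GaussianBlur(img, (5, 5), 0)                 # suppress noise first
edges = cv2.Canny(blurred, 50, 150)                        # hysteresis thresholds
cv2.imwrite("line_chart_edges.png", edges)
```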
Article
This paper presents a complete review of the different approaches across all components of chart image detection and classification to date. A set of 89 scientific papers is collected, analyzed, and sorted into four categories: chart-type classification, chart text processing, chart data extraction, and chart description generation. Detailed information about the problem formulation and the research field is provided, along with an overview of the methods used in each category. Each paper's contribution is noted, including the essential information for authors in this research field. Finally, a comparison is made between the reported results, the state-of-the-art methods in each category are described, and a research direction is given. We have also analyzed the open challenges that still exist and require researchers' attention.
... Label powerset transforms MDC problems into multi-class classification problems by treating each unique combination of class labels in the training set as a new single label [43]. As a result of this process, each instance in the training set has only one target attribute with one class label. ...
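As a concrete illustration of this transformation (the dictionary keys below are invented for the example; the resulting combined labels mirror the four used in the source publication):

```python
# Illustrative label powerset transformation: every distinct combination of the
# two original labels becomes one class of a single-target problem.
samples = [
    {"trend": "increasing", "form": "linear"},
    {"trend": "decreasing", "form": "exponential"},
    {"trend": "increasing", "form": "exponential"},
]

powerset_labels = [f'{s["form"]}ly {s["trend"]}' for s in samples]
print(powerset_labels)
# ['linearly increasing', 'exponentially decreasing', 'exponentially increasing']
```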
... However, when line charts are published as images, the raw data is lost. Recovering the underlying information of line charts would improve the performance of existing chart classification and question-answering systems such as [9,10,14]. It is trivial to identify the maximum value in a line chart given the raw data, but not so given only an image (Figure 1). ...
Preprint
Line Chart Data Extraction is a natural extension of Optical Character Recognition where the objective is to recover the underlying numerical information a chart image represents. Some recent works such as ChartOCR approach this problem using multi-stage networks combining OCR models with object detection frameworks. However, most of the existing datasets and models are based on "clean" images such as screenshots that drastically differ from camera photos. In addition, creating domain-specific new datasets requires extensive labeling which can be time-consuming. Our main contributions are as follows: we propose a synthetic data generation framework and a one-stage model that outputs text labels, mark coordinates, and perspective estimation simultaneously. We collected two datasets consisting of real camera photos for evaluation. Results show that our model trained only on synthetic data can be applied to real photos without any fine-tuning and is feasible for real-world application.
... Chagas et al. [28] compared the listed methods and showed that CNNs outperform them by roughly 20%. Publications by Bajić and Job [11], Kosemen and Birant [29], Ishihara et al. [30], and Dadhich et al. [31] use custom CNN architectures for chart-type classification. CNNs can also be used out-of-the-box; some are available as pre-trained models. ...
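The "out-of-the-box" route mentioned here usually amounts to taking a pre-trained backbone and swapping its final layer. The sketch below shows one plausible way to do that with torchvision; the choice of ResNet-18, the frozen backbone, and the seven chart classes are assumptions for illustration, not a method reported by the cited works.

```python
# Sketch: reuse a pre-trained CNN and replace only its classification head
# for chart-type classification. ResNet-18 and 7 classes are assumptions.
import torch.nn as nn
from torchvision import models

NUM_CHART_CLASSES = 7                                   # e.g. bar, line, pie, ...

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in backbone.parameters():                     # freeze pre-trained weights
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CHART_CLASSES)  # new head
```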
Article
Charts are often used for the graphical representation of tabular data. Due to their vast expansion across various fields, it is necessary to develop computer algorithms that can easily retrieve and process information from chart images in a helpful way. Convolutional neural networks (CNNs) have succeeded in various image processing and classification tasks. Nevertheless, training neural networks successfully, in terms of both result accuracy and computational requirements, demands careful construction of the network layers and the network's parameters. We propose a novel Shallow Convolutional Neural Network (SCNN) architecture for chart-type classification and image generation. We validate the proposed network by using it in three different models. The first use case is a traditional SCNN classifier, where the model achieves an average classification accuracy of 97.14%. The second use case consists of two previously introduced SCNN-based models in parallel, with the same configuration and shared weights, whose parameters are mirrored and updated in both models. This model achieves an average classification accuracy of 100%. The third use case consists of two distinct models, a generator and a discriminator, which are trained simultaneously using an adversarial process. The generated chart images are visually plausible and comparable with the originals. Extensive experimental analysis and evaluation is provided for the classification task with seven chart classes. The results show that the proposed SCNN is a powerful tool for chart image classification and generation, comparable with Deep Convolutional Neural Networks (DCNNs) but with higher efficiency and reduced computational time and space complexity.
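The second use case, two models in parallel with the same configuration and shared weights, can be illustrated in a framework-agnostic way: if both branches reuse the same module instance, they automatically share (and jointly update) one set of parameters. The tiny network below is a hedged sketch of that idea, not the published SCNN definition; layer sizes are assumptions.

```python
# Hedged sketch of "two branches with shared weights": reusing one module
# instance means both forward passes use and update the same parameters.
import torch
import torch.nn as nn

shared_cnn = nn.Sequential(                     # one instance = shared weights
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 7),              # 7 chart classes, as in the paper
)

x1 = torch.rand(2, 1, 64, 64)
x2 = torch.rand(2, 1, 64, 64)
out1, out2 = shared_cnn(x1), shared_cnn(x2)     # same parameters in both passes
```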
... In this work, the CNN model is used as a feature extractor, unlike in conventional approaches. The work of Kosemen and Birant (2020) focuses on classification within the line chart category using Label Powerset (LP-CNN) and Binary Relevance (BR-CNN) methods. DocFigure (Jobin et al., 2019) used a combination of Fisher-vector-encoded features and features computed from a fully connected layer. ...
Article
Non-textual images like charts and tables are unlike natural images in various aspects, including high inter-class similarities, low intra-class similarities, substantial proportions of textual components, and lower resolutions. This paper proposes a novel Multi-Dilated Context Aggregation based Dense Network (MDCADNet), addressing the need for multi-resolution modeling and larger receptive fields in the non-textual component classification task. MDCADNet uses a densely connected convolutional network for feature map computation as the front end, followed by a multi-dilated Backend Context Module (BCM). The proposed BCM generates multi-scale features and provides systematic context aggregation of both low- and high-level feature maps through its densely connected layers. Additionally, the controlled multi-dilation scheme offers a wider scale range for better prediction performance. A thorough quantitative evaluation has been performed on seven benchmark datasets to demonstrate the generalization capability of MDCADNet. Experimental results show that MDCADNet performs consistently better than state-of-the-art models across all datasets.
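The core idea of multi-dilated context aggregation, parallel convolutions with increasing dilation rates that enlarge the receptive field while preserving resolution, can be sketched as below. This is a hedged illustration of the general technique, not the published MDCADNet/BCM definition; channel counts and dilation rates are assumptions.

```python
# Hedged illustration of a multi-dilated block: parallel 3x3 convolutions with
# dilation rates 1, 2, 4 see progressively larger context at the same resolution,
# and a 1x1 convolution fuses the concatenated multi-scale features.
import torch
import torch.nn as nn

class MultiDilatedBlock(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        # Concatenate the multi-scale context, then fuse back to `channels`.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiDilatedBlock(channels=16)
y = block(torch.rand(1, 16, 32, 32))   # spatial size preserved: (1, 16, 32, 32)
```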
Article
Identifying and understanding risk perceptions—“how bad are the harms” to humans or to what they value that people see as potentially or actually arising from entities or events—has been critical for risk analysis, both for its own sake, and for expected associations between risk perceptions and subsequent outcomes, such as risky or protective behavior, or support for hazard management policies. Cross-sectional surveys have been the dominant method for identifying and understanding risk perceptions, yielding valuable data. However, cross-sectional surveys are unable to probe the dynamics of risk perceptions over time, which is critical to do while living in a dynamically hazardous world and to build causal understandings. Building upon earlier longitudinal panel studies of Americans’ Ebola and Zika risk perceptions using multi-level modeling to assess temporal changes in these views and inter-individual factors affecting them, we examined patterns in Americans’ COVID-19 risk perceptions in six waves across 14 months. The findings suggest that, in general, risk perceptions increased from February 2020 to April 2021, but with varying trends across different risk perception measures (personal, collective, affective, affect, severity, and duration). Factors in baseline risk perceptions (Wave 1) and inter-individual differences across waves differed even more: baseline ratings were associated with how immediate the threat is (temporal distance) and how likely the threat would affect people like oneself (social distance), and following the United States news about the pandemic. Inter-individual trend differences were shaped most by temporal distance, whether local coronavirus infections were accelerating their upward trend, and subjective knowledge about viral transmission. Associations of subjective knowledge and risk trend with risk perceptions could change signs (e.g. from positive to negative) over time. These findings hold theoretical implications for risk perception dynamics and taxonomies, and research design implications for studying risk perception dynamics and their comparison across hazards.