Figure - uploaded by Abbas Ahmadi
Different sets of parameters based on Taguchi parameter tuning


Source publication
Article
Full-text available
Purpose: Query-based summarization approaches might not be able to provide summaries compatible with the user’s information need, as they mostly rely on a limited source of information, usually represented as a single query by the user. This issue becomes even more challenging when dealing with scientific documents, as they contain more specific sub...

Context in source publication

Context 1
... We assume four parameters at three levels for Taguchi tuning, carried out in Minitab software. According to the Taguchi experimental design table, nine experiments are required to tune these four parameters at three levels. The resulting parameter sets are shown in Table 2 and Figure 4. The values are averaged over 30 different runs for credibility. ...
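The nine-run design mentioned above can be sketched with the standard L9(3^4) Taguchi orthogonal array. The parameter names and candidate values below are hypothetical placeholders for illustration, not the actual parameters tuned in the paper:

```python
# Standard L9(3^4) Taguchi orthogonal array: 9 runs cover four
# factors at three levels (0, 1, 2), with each level appearing
# exactly three times in every column, instead of 3**4 = 81
# full-factorial runs.
L9 = [
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
]

def experiments(levels):
    """Map the L9 array onto concrete parameter settings.

    `levels`: dict of exactly four parameters, each mapped to its
    three candidate values [low, mid, high].
    """
    names = list(levels)
    assert len(names) == 4 and all(len(v) == 3 for v in levels.values())
    return [{n: levels[n][row[i]] for i, n in enumerate(names)} for row in L9]

# Hypothetical parameter names and ranges, for illustration only.
settings = experiments({
    "alpha":    [0.1, 0.5, 0.9],
    "beta":     [1, 2, 3],
    "pop_size": [50, 100, 200],
    "iters":    [100, 500, 1000],
})
print(len(settings))  # 9
```

Averaging an objective value over repeated runs of each of the nine settings (30 in the paper) then lets the best level of each factor be picked from the column means.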

Similar publications

Preprint
Full-text available
Joint extraction of entities and relations from unstructured texts is a crucial task in information extraction. Recent methods achieve considerable performance but still suffer from some inherent limitations, such as redundancy of relation prediction, poor generalization of span-based extraction and inefficiency. In this paper, we decompose this ta...
Preprint
Full-text available
The Hessian of a neural network captures parameter interactions through second-order derivatives of the loss. It is a fundamental object of study, closely tied to various problems in deep learning, including model design, optimization, and generalization. Most prior work has been empirical, typically focusing on low-rank approximations and heuristi...
Article
Full-text available
The dynamics of hexapods (Stewart platforms) has been extensively studied for several decades. In this problem, the equations of motion are usually constructed using the basic theorems of mechanics. Lagrange equations of the second kind are often constructed for the same purpose. In the present paper, a new form of dynamic equations is considered....
Article
Full-text available
It is desirable to maintain high accuracy and runtime efficiency at the same time in lane detection. However, due to the long and thin properties of lanes, extracting features with both strong discrimination and perception abilities needs a huge amount of calculation, which seriously slows down the running speed. Therefore, we design a more efficie...
Article
Full-text available
ABSTRACT This paper examined the relationship between overloaded curriculum, excessive daily academic activities and the learning effectiveness of Junior secondary school students. The study was guided by two specific objectives and two null hypotheses. The population of the study comprised all JSS 3 students in public secondary schools in Uyo Educ...

Citations

... Prior work in human-AI summarization has mainly focused on direct model guidance (e.g., utilizing concepts/queries [2-4, 17, 54, 59], typing [56], ratings [6,20], gaze [6]) or simply post-editing (e.g., [36,44]). A few works have supported summarization based on user-selected points from the source [4,53,54,57,60], but the points are ranked or clustered by the system without the same narrative structure as the source document. Another work proposed but did not implement or evaluate model guidance through in-context content selection [21]. ...
... There are several works on human-LLM summarization of long documents [21,60] and some for scientific long documents in particular [4,58,59]. Long documents provide a unique challenge for human-LLM summarization, as there is a great deal of information for the human to digest. ...
... Long et al. examine how LLM scaffolding can help people generate relatable hooks for complex scientific topics [37], and Kim et al. investigate how science writers can generate extended metaphors for scientific ideas with the help of an LLM [31]. The couple of works that have explored human-AI scientific summarization do not allow for in-context selection of source content but rely on selection of concepts [59] or out-of-context sentences extracted based on queries [4]. ...
Preprint
Full-text available
Research-paper blog posts help scientists disseminate their work to a larger audience, but translating papers into this format requires substantial additional effort. Blog post creation is not simply transforming a long-form article into a short output, as studied in most prior work on human-AI summarization. In contrast, blog posts are typically full-length articles that require a combination of strategic planning grounded in the source document, well-organized drafting, and thoughtful revisions. Can tools powered by large language models (LLMs) assist scientists in writing research-paper blog posts? To investigate this question, we conducted a formative study (N=6) to understand the main challenges of writing such blog posts with an LLM: high interaction costs for 1) reviewing and utilizing the paper content and 2) recurrent sub-tasks of generating and modifying the long-form output. To address these challenges, we developed Papers-to-Posts, an LLM-powered tool that implements a new Plan-Draft-Revise workflow, which 1) leverages an LLM to generate bullet points from the full paper to help users find and select content to include (Plan) and 2) provides default yet customizable LLM instructions for generating and modifying text (Draft, Revise). Through a within-subjects lab study (N=20) and between-subjects deployment study (N=37 blog posts, 26 participants) in which participants wrote blog posts about their papers, we compared Papers-to-Posts to a strong baseline tool that provides an LLM-generated draft and access to free-form LLM prompting. Results show that Papers-to-Posts helped researchers to 1) write significantly more satisfying blog posts and make significantly more changes to their blog posts in a fixed amount of time without a significant change in cognitive load (lab) and 2) make more changes to their blog posts for a fixed number of writing actions (deployment).
... A novel approach that exploits the user's opinion in two stages was introduced in [18]. First, the query is refined by user-selected keywords, key phrases, and sentences extracted from the document collection. ...
Chapter
Full-text available
When writing an academic paper, researchers often spend considerable time reviewing and summarizing papers to extract relevant citations and data to compose the Introduction and Related Work sections. To address this problem, we propose QuOTeS, an interactive system designed to retrieve sentences related to a summary of the research from a collection of potential references and hence assist in the composition of new papers. QuOTeS integrates techniques from Query-Focused Extractive Summarization and High-Recall Information Retrieval to provide Interactive Query-Focused Summarization of scientific documents. To measure the performance of our system, we carried out a comprehensive user study where participants uploaded papers related to their research and evaluated the system in terms of its usability and the quality of the summaries it produces. The results show that QuOTeS provides a positive user experience and consistently provides query-focused summaries that are relevant, concise, and complete. We share the code of our system and the novel Query-Focused Summarization dataset collected during our experiments at https://github.com/jarobyte91/quotes.
... Most of the queries are complex real-world questions related to the input text documents [22]. Despite the relevance and significance of query-based text summarization, sentence extraction for query-based summaries has not been fully explored [23]. Our main objective is to create a redundancy-free query-based text summary. ...
... Interactive query-based approach: For the summarization of scientific documents, an interactive query-based approach is proposed by Bayatmakou et al. [23]. Initially, the query is refined by user-selected keywords or key phrases. ...
Article
Full-text available
This paper presents a query-based extractive text summarization approach using a sense-oriented semantic relatedness measure. Finding query-relevant sentences requires measuring the semantic relatedness between the query and the input text sentences, which in turn requires knowing the exact sense of the words in both. Word sense disambiguation (WSD) finds the actual meaning of a word according to its sentence context. We propose a WSD technique for extracting query-relevant sentences, which is used to compute a sense-oriented semantic relatedness score between the query and each input sentence; a feature-based method is presented to compute this score. The proposed summarization method then clusters sentences by relevant, redundancy-free features. Since a cluster of same-featured sentences is likely to contain redundant sentences, a redundancy removal method is proposed to obtain redundancy-free, query-relevant sentences that form an information-rich summary. We evaluate the proposed WSD technique against existing methods on the Senseval and SemEval datasets, and the proposed sense-oriented sentence semantic relatedness score on the Li et al. dataset. We compare the proposed query-based extractive summarization method with methods that participated in the Document Understanding Conference as well as with current methods. The evaluation and comparison show that the proposed method outperforms many existing and recent approaches.
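The rank-then-deduplicate pipeline this abstract describes can be roughly illustrated as follows. This is a minimal sketch that substitutes plain bag-of-words cosine similarity for the paper's sense-oriented, WSD-based relatedness measure; the function names and thresholds are hypothetical:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase bag-of-words tokenization (stand-in for sense tagging)."""
    return re.findall(r"[a-z]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def summarize(query, sentences, k=2, redundancy_threshold=0.5):
    """Rank sentences by similarity to the query, then greedily keep
    the top k while skipping near-duplicates of already-kept sentences."""
    q = Counter(tokenize(query))
    ranked = sorted(sentences,
                    key=lambda s: cosine(q, Counter(tokenize(s))),
                    reverse=True)
    summary = []
    for s in ranked:
        if len(summary) >= k:
            break
        vec = Counter(tokenize(s))
        if all(cosine(vec, Counter(tokenize(t))) < redundancy_threshold
               for t in summary):
            summary.append(s)
    return summary
```

The redundancy filter mirrors the paper's goal of a redundancy-free summary: a candidate too similar to an already-selected sentence is dropped rather than included twice.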
Article
Background: An abundance of rapidly accumulating scientific evidence presents novel opportunities for researchers and practitioners alike, yet such advantages are often overshadowed by the resource demands of finding and aggregating a continually expanding body of scientific information. Data extraction activities associated with evidence synthesis have been described as time-consuming to the point of critically limiting the usefulness of research. Across social science disciplines, the use of automation technologies for timely and accurate knowledge synthesis can enhance research translation value, better inform key policy development, and expand the current understanding of human interactions, organizations, and systems. Ongoing developments surrounding automation are highly concentrated in research for evidence-based medicine, with limited evidence surrounding tools and techniques applied outside of the clinical research community. The goal of the present study is to extend the automation knowledge base by synthesizing current trends in the application of extraction technologies for key data elements of interest to social scientists.
Methods: We report the baseline results of a living systematic review of automated data extraction techniques supporting systematic reviews and meta-analyses in the social sciences. This review follows PRISMA standards for reporting systematic reviews.
Results: The baseline review of social science research yielded 23 relevant studies.
Conclusions: In automating information extraction for systematic reviews and meta-analyses, social science research falls short compared to clinical research, which focuses on automatic processing of information related to the PICO framework. With a few exceptions, most tools were either in their infancy and not accessible to applied researchers, were domain specific, or required substantial manual coding of articles before automation could occur. Additionally, few solutions considered extraction of data from tables, which is where the key data elements that social and behavioral scientists analyze reside.
Conference Paper
Full-text available
Summarization technologies have been used increasingly in recent decades. They are an important part of emerging topics in computer science and engineering, such as Natural Language Processing (NLP). Several methods have been used to obtain good summary results. There are two types of document summaries: single-document summaries, which aim to extract new and relevant summary information from a single document, and multi-document summaries, which extract information from multiple documents. This study surveys the systematic literature of previous studies to identify the widely used base methods and datasets. Data was collected from Scopus publication sources from 2019 until 2022 Q2 for analysis. The researchers follow systematic literature review guidelines, using Kitchenham and Charters as the basis for the review design. Forty-eight articles were obtained after filtering by several criteria, including exclusion and quality assessment.