Fig 1 - uploaded by Alejandro Rago
Example use case diagram

Source publication
Article
Full-text available
Quality-attribute requirements describe constraints on the development and behavior of a software system, and their satisfaction is key for the success of a software project. Detecting and analyzing quality attributes in early development stages provides insights for system design, reduces risks, and ultimately improves the developers’ understandin...

Contexts in source publication

Context 1
... recurring concept in requirements is called early aspect (or requirement-level aspect), and it is said to crosscut specific requirements. For example, let's assume that the system functionality is partitioned into a set of use cases, as illustrated in Figure 1. The system in this diagram is an excerpt from the Health Watcher System (HWS) [18]. ...
Context 2
... general, the relationship between early aspects and quality attributes is one-to-many, but this relationship is not mandatory. In the example of Figure 1, the Persistence aspect is mainly associated with a modifiability quality attribute. Alternatively, the same aspect could point to other quality attributes such as security or performance. ...
Context 3
... descriptor essentially represents a particular system behavior, so that a set (of descriptors) contains semantically related behaviors. For instance, the Persistence aspect in Figure 1 was actually suggested by SAET after processing the accompanying use cases. This aspect has descriptors such as: <retrieves, list>, <saves, data>, <search, repository>, <store, information>, <rolled-back, changes>, <updated, complaint>, among others. ...
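The descriptor structure described above can be sketched in a few lines of Python. This is an illustrative model only, not the authors' implementation: an early aspect represented as a set of <verb, object> pairs, with a hypothetical helper to test whether two aspects share behaviors.

```python
# Illustrative sketch (not SAET's actual implementation): an early aspect
# modeled as a set of <verb, object> descriptors, using the Persistence
# descriptors listed in the text.
persistence_descriptors = {
    ("retrieves", "list"),
    ("saves", "data"),
    ("search", "repository"),
    ("store", "information"),
    ("rolled-back", "changes"),
    ("updated", "complaint"),
}

def shares_behavior(aspect_a, aspect_b):
    """Two descriptor sets are related if they share any <verb, object> pair."""
    return bool(aspect_a & aspect_b)
```

Representing descriptors as tuples in a set makes "semantically related behaviors" a simple set-intersection check, which is enough to convey the idea.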
Context 4
... define a token as a basic unit of text with several attributes of the form <attribute, value>, for example: <kind, use-case>, <weight, 1>, <occurrences, 4>, etc. Table 1 describes the attributes associated with a token. Thus, the list of tokens contains the selected words. To clarify the processing of the filters, let's consider the use cases of Figure 1, as detailed in Table 2. Additionally, we know that the system has modifiability and availability requirements, as described in Table 3. ...
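The token structure above lends itself to a small sketch. The field names below are assumptions for illustration, not the authors' exact schema; only the <attribute, value> examples come from the text.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a token as described in the text: a basic unit of
# text carrying <attribute, value> pairs such as <kind, use-case>,
# <weight, 1>, <occurrences, 4>.
@dataclass
class Token:
    text: str
    attributes: dict = field(default_factory=dict)

# Example token built from the attribute values mentioned in the text:
save = Token("save", {"kind": "use-case", "weight": 1, "occurrences": 4})
```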
Context 5
... the HWS application provides on-line access for users to register complaints, read health notices, and make queries regarding health issues. There are two relevant actors in this system: employees and citizens (remember Figure 1). An employee can record, update, delete, print, search, and change the records, which are stored in the system repository. ...
Context 6
... performs better when using only early aspects as input, we used the results of that configuration to perform the cross-checking analysis. The rationale of our analyses is sketched in Figures 9 and 10, for HWS and CRS respectively (see Tables 8 and 9). ...

Similar publications

Article
Full-text available
Bibliome mining is a powerful technique for large-scale information extraction from textual data and connecting between biological entities as well as functional hypotheses. Currently, most bibliome mining is used for some specific studies involving genes and proteins; however, much less efforts have been focused on metabolites. In addition to appl...

Citations

... In this paper, we are particularly interested in modeling use cases to infer specific security concerns found in use case steps semantically, similarly to approaches in Wouters et al. (2000), Rago et al. (2013) and Yahya et al. (2014), without manually inspecting the use case specification first. Our research builds upon prior work by addressing security concerns found at the individual interaction steps of a use case through the web application's user interface and navigational structure. ...
... Incorporating RUCM, OWL, and SPARQL for security analysis requires adoption by engineers. However, work, such as Rago et al. (2013), has shown the application of OWL to model use cases to find candidate concerns. Additionally, SPARQL is an effective way to retrieve and manipulate data stored in RDF format, where SELECT and WHERE clauses are similar to how SQL query statements are written to retrieve information from relational databases. ...
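The SQL analogy made above can be illustrated with a minimal, pure-Python sketch of SPARQL-style pattern matching over (subject, predicate, object) triples. The triples, variable names, and the `select` helper below are invented for illustration; a real system would store RDF and run an actual SPARQL engine.

```python
# Toy triple store: invented (subject, predicate, object) facts about
# use case steps and their security concerns.
triples = [
    ("Step1", "hasAction", "save"),
    ("Step1", "hasConcern", "Persistence"),
    ("Step2", "hasAction", "login"),
    ("Step2", "hasConcern", "Security"),
]

def select(pattern, data):
    """Return bindings for '?'-prefixed variables, like a SPARQL WHERE clause."""
    results = []
    for triple in data:
        # A triple matches if every pattern term is a variable or an exact value.
        if all(term.startswith("?") or term == value
               for term, value in zip(pattern, triple)):
            results.append({term: value
                            for term, value in zip(pattern, triple)
                            if term.startswith("?")})
    return results

# Analogous to: SELECT ?step WHERE { ?step hasConcern "Security" }
matches = select(("?step", "hasConcern", "Security"), triples)
```

The SELECT/WHERE shape mirrors SQL's projection and filtering, which is the point the citing authors make about SPARQL's accessibility.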
... Our work aims to improve the consistency of security requirements identified from the use cases. Our work differs from the work of Rago et al. (2013), Yahya et al. (2014), and Couto et al. (2014) in that it: (1) places constraints on specific features of the use cases to have predefined security-concern concepts; ...
Article
Full-text available
Identifying security concerns is a security activity that can be integrated into the requirements development phase. However, it has been shown that manually identifying concerns is a time-consuming and challenging task. The software engineering community has utilized natural language processing and query systems to automatically find part of the requirement specification with a specific concern. This research presents an ontology-based recommender system to suggest security concerns based on use case semantic rules and build on recent studies to find concerns in use cases. Our approach is to model use cases for interface design and map specific parts of use cases to the Application Security Verification Standard (ASVS) based on security concerns at the interaction steps of use cases. We conducted two evaluations, where we generated use case models from Restricted Use Case Modeling (RUCM) descriptions and then used semantic rules to infer where a specific security concern is in the use case models. These evaluations show that the recommender achieves up to 100% precision and recall for modeling use cases and recommending security concerns when the use case steps strictly adhere to rules for RUCM use cases. Otherwise, the modeling precision and recall will have arbitrary values, thus affecting the precision and recall for the recommended security concerns. As the main contribution, our approach can address security concerns for ASVS at the level of use case interaction steps.
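The abstract above reports results in terms of precision and recall. As a reminder, these are the standard definitions computed from true positives (tp), false positives (fp), and false negatives (fn); the counts in the example are illustrative, not taken from the study.

```python
# Standard precision/recall definitions used to evaluate recommenders.
def precision(tp, fp):
    """Fraction of recommended items that were correct."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """Fraction of correct items that were recommended."""
    return tp / (tp + fn) if tp + fn else 0.0

# e.g. 8 concerns correctly recommended, 2 spurious, 0 missed:
p, r = precision(8, 2), recall(8, 0)  # p = 0.8, r = 1.0
```

A recommender reaching 100% precision and recall, as reported for strictly RUCM-conformant steps, corresponds to fp = 0 and fn = 0.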
... Automatic analysis  [44]  2012  DePaul07, EU procurement
Tool                [95]  2013  DePaul07, CCHIT, iTrust, openEMR, DUA, RFP
Automatic analysis  [85]  2013  HWS, CRS (OSS projects)
Tool                [86]  2014  CCHIT, WorldVista
Automatic analysis  [91]  2014  DePaul07 ...
... We found one paper on early aspect mining [85]. The study evaluated a tool to detect quality requirements aspects in a requirements document. ...
Article
Full-text available
Quality requirements deal with how well a product should perform the intended functionality, such as start-up time and learnability. Researchers argue they are important and at the same time studies indicate there are deficiencies in practice. Our goal is to review the state of evidence for quality requirements. We want to understand the empirical research on quality requirements topics as well as evaluations of quality requirements solutions. We used a hybrid method for our systematic literature review. We defined a start set based on two literature reviews combined with a keyword-based search from selected publication venues. We snowballed based on the start set. We screened 530 papers and included 84 papers in our review. Case study method is the most common (43), followed by surveys (15) and tests (13). We found no replication studies. The two most commonly studied themes are (1) differentiating characteristics of quality requirements compared to other types of requirements, (2) the importance and prevalence of quality requirements. Quality models, QUPER, and the NFR method are evaluated in several studies, with positive indications. Goal modeling is the only modeling approach evaluated. However, all studies are small scale and long-term costs and impact are not studied. We conclude that more research is needed as empirical research on quality requirements is not increasing at the same rate as software engineering research in general. We see a gap between research and practice. The solutions proposed are usually evaluated in an academic context and surveys on quality requirements in industry indicate unsystematic handling of quality requirements.
... We found one paper on early aspect mining [SR58]. The study evaluated a tool to detect quality requirements aspects in a requirements document. ...
... Overview of the test studies. The datasets are described in Table 7.
[SR67]  2013  DePaul07, CCHIT, iTrust, openEMR, DUA, RFP
Automatic analysis  [SR58]  2013  HWS, CRS (OSS projects) ...
Preprint
Full-text available
Quality requirements deal with how well a product should perform the intended functionality, such as start-up time and learnability. Researchers argue they are important and at the same time studies indicate there are deficiencies in practice. Our goal is to review the state of evidence for quality requirements. We want to understand the empirical research on quality requirements topics as well as evaluations of quality requirements solutions. We used a hybrid method for our systematic literature review. We defined a start set based on two literature reviews combined with a keyword-based search from selected publication venues. We snowballed based on the start set. We screened 530 papers and included 84 papers in our review. Case study method is the most common (43), followed by surveys (15) and tests (13). We found no replication studies. The two most commonly studied themes are (1) differentiating characteristics of quality requirements compared to other types of requirements, (2) the importance and prevalence of quality requirements. Quality models, QUPER, and the NFR method are evaluated in several studies, with positive indications. Goal modeling is the only modeling approach evaluated. However, all studies are small scale and long-term costs and impact are not studied. We conclude that more research is needed as empirical research on quality requirements is not increasing at the same rate as software engineering research in general. We see a gap between research and practice. The solutions proposed are usually evaluated in an academic context and surveys on quality requirements in industry indicate unsystematic handling of quality requirements.
... Some of these concerns, for instance, are the precursors of key design decisions to be made by architects [4]. Typical examples of crosscutting concerns are performance, security, distribution, among others, which tend to be understated in textual use cases [5]. The analysis of concerns can help to improve the architecture [6] or enable the validation of quality-attribute requirements [7]. ...
... CCCs at the requirements level cut across the description of the problem domain, and often have a broad impact on issues of scoping, prioritization, and architectural design. Requirements are traditionally focused on functionality, providing little help to capture other kinds of concerns (e.g., synchronization, logging, profiling, security, among others) [5]. Often, CCCs encompass non-functional (or quality-attribute) considerations. ...
... More recently, the authors have developed the REAssistant tool [5,12]. REAssistant supports the extraction of semantic information from textual use cases in order to uncover latent CCCs. ...
Article
Full-text available
Software requirements are often described in natural language because they are useful to communicate and validate. Due to their focus on particular facets of a system, this kind of specifications tends to keep relevant concerns (also known as early aspects) from the analysts’ view. These concerns are known as crosscutting concerns because they appear scattered among documents. Concern mining tools can help analysts to uncover concerns latent in the text and bring them to their attention. Nonetheless, analysts are responsible for vetting tool-generated solutions, because the detection of concerns is currently far from perfect. In this article, we empirically investigate the role of analysts in the concern vetting process, which has been little studied in the literature. In particular, we report on the behavior and performance of 55 subjects in three case-studies working with solutions produced by two different tools, assessed in terms of binary classification measures. We discovered that analysts can improve “bad” solutions to a great extent, but performed significantly better with “good” solutions. We also noticed that the vetting time is not a decisive factor to their final accuracy. Finally, we observed that subjects working with solutions substantially different from those of existing tools (better recall) can also achieve a good performance.
... To support this task, RE researchers have been proposing automatic or semi-automatic approaches for identifying NFRs in requirements documents (Kayed, Hirzalla, Samhan, & Alfayoumi, 2009; Ko, Park, Seo, & Choi, 2007; Rago, Marcos, & Diaz-Pace, 2013; Sharma, Ramnani, & Sengupta, 2014; Vlas & Robinson, 2011). In recent years, machine learning (ML) algorithms have been integrated into these approaches with promising results being reported (Cleland-Huang et al., 2007; Knauss, Houmb, Schneider, Islam, & Jürjens, 2011; Mahmoud, 2015; Mahmoud & Williams, 2016). ...
Article
Full-text available
Context Recent developments in requirements engineering (RE) methods have seen a surge in using machine-learning (ML) algorithms to solve some difficult RE problems. One such problem is identification and classification of non-functional requirements (NFRs) in requirements documents. ML-based approaches to this problem have shown to produce promising results, better than those produced by traditional natural language processing (NLP) approaches. Yet, a systematic understanding of these ML approaches is still lacking. Method This article reports on a systematic review of 24 ML-based approaches for identifying and classifying NFRs. Directed by three research questions, this article aims to understand what ML algorithms are used in these approaches, how these algorithms work and how they are evaluated. Results (1) 16 different ML algorithms are found in these approaches; of which supervised learning algorithms are most popular. (2) All 24 approaches have followed a standard process in identifying and classifying NFRs. (3) Precision and recall are the most used matrices to measure the performance of these approaches. Finding The review finds that while ML-based approaches have the potential in the classification and identification of NFRs, they face some open challenges that will affect their performance and practical application. Impact The review calls for the close collaboration between RE and ML researchers, to address open challenges facing the development of real-world ML systems. Significance The use of ML in RE opens up exciting opportunities to develop novel expert and intelligent systems to support RE tasks and processes. This implies that RE is being transformed into an application of modern expert systems.
... Assessing the quality of use case specifications is a common concern, as they are used in several phases of the development process; moreover, quality-attribute information is scattered across several documents. Rago et al. [28] presented a semi-automated approach to identify latent quality attributes, and implemented tool support using NLP techniques to determine quality-attribute information from the use case specification. Fantechi et al. [19] presented the application of use case analysis techniques based on a linguistic approach to detect NL issues within the use case specification. ...
Conference Paper
Use case modeling refers to the process of identifying scenarios written in some natural language text, particularly to capture interactions between the system and associated actors. Several approaches have been proposed to maintain the synergy of use cases with other software models, but no systematic transformation approach is available to extract use case scenarios from the textual requirements specification. In this paper, we propose a systematic transformation approach that automatically extracts various use case elements from textual problem specifications. The approach uses Natural Language (NL) parser to identify Parts-Of-Speech (POS) tags, Type Dependencies (TDs) and semantic roles from the input text specification to populate use case elements. It further makes use of the questionnaire-based approach to develop the remaining unpopulated parts of the use case template. The paper demonstrates the applicability of the proposed approach by applying both industry and research-level case studies. The results highlight that the generated output is correct, consistent, non-redundant and complete, and helpful to use case developers in further analysis and documentation.
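The extraction idea described in the abstract above can be conveyed with a toy sketch. A real pipeline would use an NL parser to obtain POS tags, type dependencies, and semantic roles; here the tags are supplied by hand, and the mapping heuristic (first noun as actor, first verb as action, last distinct noun as object) is an assumption for illustration only.

```python
# Toy sketch of populating use case elements from POS-tagged text.
# Tags follow the common NOUN/VERB/DET convention; the heuristic is
# deliberately crude and only illustrates the mapping step.
def extract_step(tagged):
    """Map a tagged sentence to a crude (actor, action, object) triple."""
    actor = next((w for w, t in tagged if t == "NOUN"), None)
    action = next((w for w, t in tagged if t == "VERB"), None)
    obj = next((w for w, t in reversed(tagged) if t == "NOUN" and w != actor), None)
    return actor, action, obj

step = extract_step([("employee", "NOUN"), ("updates", "VERB"),
                     ("the", "DET"), ("record", "NOUN")])
```

In the paper's approach, type dependencies and semantic roles replace this positional heuristic, which is what makes the extracted elements reliable.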
... To support this task, RE researchers have been proposing automatic or semi-automatic approaches for identifying NFRs in requirements documents (Kayed, Hirzalla, Samhan, & Alfayoumi, 2009; Ko, Park, Seo, & Choi, 2007; Rago, Marcos, & Diaz-Pace, 2013; Sharma, Ramnani, & Sengupta, 2014; Vlas & Robinson, 2011). In recent years, machine learning (ML) algorithms have been integrated into these approaches with promising results being reported (Cleland-Huang et al., 2007; Knauss, Houmb, Schneider, Islam, & Jürjens, 2011; Mahmoud, 2015; Mahmoud & Williams, 2016). ...
... The key idea is to exploit the relations between concepts in order to improve classification models. However, in our experience with automated processing of RE documents (e.g., use case specifications) (Rago et al. 2013, 2016c), the semantic enrichment of those documents with concepts from general-purpose knowledge sources (Mahmoud and Carver 2015) might be counterproductive, because they tend to introduce noise that hinders the construction of good classification models (Egozi et al. 2011). In practice, requirements are written with a "special" lexicon for the sake of stakeholders' communication (Kamalrudin et al. 2011), using specific terminology to convey interactions between an actor and the system and limiting the number of words to express such behaviors (e.g., write, store and save mean the same in a use case specification but not in a dictionary). ...
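The write/store/save observation above amounts to a domain-specific lexicon that collapses synonymous verbs into one domain action. The mapping and action labels below are invented for illustration; they are not the authors' actual lexicon.

```python
# Illustrative domain lexicon: in use case text, "write", "store" and
# "save" convey the same domain action, unlike in a general dictionary.
DOMAIN_ACTIONS = {
    "write": "PERSIST", "store": "PERSIST", "save": "PERSIST",
    "retrieve": "QUERY", "search": "QUERY",
}

def normalize(verb):
    """Map a verb to its canonical domain action (fallback: the verb itself)."""
    return DOMAIN_ACTIONS.get(verb.lower(), verb.upper())

# "store" and "save" normalize to the same action, so a classifier sees
# one feature instead of two near-synonyms.
```

This kind of controlled normalization avoids the noise that general-purpose enrichments (e.g., encyclopedic concepts) were observed to introduce.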
Article
Full-text available
Engineering activities often produce considerable documentation as a by-product of the development process. Due to their complexity, technical analysts can benefit from text processing techniques able to identify concepts of interest and analyze deficiencies of the documents in an automated fashion. In practice, text sentences from the documentation are usually transformed to a vector space model, which is suitable for traditional machine learning classifiers. However, such transformations suffer from problems of synonyms and ambiguity that cause classification mistakes. For alleviating these problems, there has been a growing interest in the semantic enrichment of text. Unfortunately, using general-purpose thesaurus and encyclopedias to enrich technical documents belonging to a given domain (e.g. requirements engineering) often introduces noise and does not improve classification. In this work, we aim at boosting text classification by exploiting information about semantic roles. We have explored this approach when building a multi-label classifier for identifying special concepts, called domain actions, in textual software requirements. After evaluating various combinations of semantic roles and text classification algorithms, we found that this kind of semantically-enriched data leads to improvements of up to 18% in both precision and recall, when compared to non-enriched data. Our enrichment strategy based on semantic roles also allowed classifiers to reach acceptable accuracy levels with small training sets. Moreover, semantic roles outperformed Wikipedia- and WordNET-based enrichments, which failed to boost requirements classification with several techniques. These results drove the development of two requirements tools, which we successfully applied in the processing of textual use cases.
... Analyzing specifically the included papers (underlined) in the intersection of the pair Blue-Green (Fig. 4), we can observe that the Green team included papers not completely related to the research question (misinterpretation of the research question). Two of the included papers, (Preiss et al. 2001) and (Rago et al. 2013), were not about use case quality attributes, but about quality characteristics expected of a software system according to its requirements specifications (controversial understanding of the research topic). The first paper intends to use these features as the basis for software development, while the second one intends to extract them from the specifications through mining. ...
Article
Full-text available
The evidence-based software engineering approach advocates the use of evidence from empirical studies to support the decisions on the adoption of software technologies by practitioners in the software industry. To this end, many guidelines have been proposed to contribute to the execution and repeatability of literature reviews, and to the confidence of their results, especially regarding systematic literature reviews (SLR). To investigate similarities and differences, and to characterize the challenges and pitfalls of the planning and generated results of SLR research protocols dealing with the same research question and performed by similar teams of novice researchers in the context of the software engineering field. We qualitatively compared (using Jaccard and Kappa coefficients) and evaluated (using DARE) same goal SLR research protocols and outcomes undertaken by similar research teams. Seven similar SLR protocols regarding quality attributes for use cases executed in 2010 and 2012 enabled us to observe unexpected differences in their planning and execution. Even when the participants reached some agreement in the planning, the outcomes were different. The research protocols and reports allowed us to observe six challenges contributing to the divergences in the results: researchers’ inexperience in the topic, researchers’ inexperience in the method, lack of clearness and completeness of the papers, lack of a common terminology regarding the problem domain, lack of research verification procedures, and lack of commitment to the SLR. According to our findings, it is not possible to rely on results of SLRs performed by novices. Also, similarities at a starting or intermediate step during different SLR executions may not directly translate to the next steps, since non-explicit information might entail differences in the outcomes, hampering the repeatability and confidence of the SLR process and results. 
Although we do have expectations that the presence and follow-up of a senior researcher can contribute to increasing SLRs’ repeatability, this conclusion can only be drawn upon the existence of additional studies on this topic. Yet, systematic planning, transparency of decisions and verification procedures are key factors to guarantee the reliability of SLRs.
... UC specification can be a time-consuming and error-prone activity, since specifications are usually written in natural language [17]. Moreover, software engineering professionals and students face difficulties during UC specification, such as difficulty in specifying the steps of the flows in UCs and difficulty in organizing the information in the UC specification, among others [1] [5] [13]. ...
Conference Paper
Full-text available
Context: Use Cases (UCs) have become an important artifact for the specification of software requirements. However, there are several difficulties that prevent students and software engineers from specifying UCs correctly. Objective: To explore and understand the difficulties in specifying UCs, a qualitative study was carried out. Method: Semi-structured interviews with students were used to identify these difficulties. Data analysis was conducted using procedures from the Grounded Theory (GT) method. Results: A model was proposed that presents the difficulties found in UC specification. These difficulties were classified into the categories: abstraction of requirements, identification and description of flows, and the relationship between business rules and flows in the UC specification. Conclusions: The model serves as a basis for future research in the area, as well as for suggesting practices to improve students' teaching/learning process in UC specification.