The maps of the different statistical analysis methods. 

Source publication
Article
Each humanities researcher has his or her own way of dealing with data (collection, coding, analysis, and interpretation). These specific ways of working are not shared: each researcher reinvents his or her own method while analyzing data, without the benefit of previous experience. Nevertheless, developing and sharing these methods should be useful to the research...

Context in source publication

Context 1
... In this section we briefly describe the selected statistical analysis methods and their corresponding process models represented with Map. Figure 2 presents the five different methods with their specific intentions and strategies.

The Principal Component Analysis (PCA) (Pearson 1901) (Hotelling 1933) is a factor analysis technique. It takes as input a table with n rows and p columns and aims to reduce the dimensionality of the table by deriving a smaller set of new variables that concentrate most of the information. The method produces a scatter plot of the cloud of individuals, fitted using a correlation matrix, together with an inertia calculation that measures the dispersion of the cloud.

The Correspondence Analysis (CA) (Benzécri 1982) is similar to PCA, but while PCA applies to continuous numeric variables, CA applies to two categorical variables. The method is used to study the link between two qualitative variables. Data normalization is performed using a chi-square test (in particular through correlation analysis), and a contingency table is established as a homogeneous, comprehensive table. The average profiles (centroids) make it possible to determine whether, and how, a class of individuals differs from the general population.

The Multiple Correspondence Analysis (MCA) (Benzécri 1973) is similar to a CA applied to more than two variables. The data are normalized into a binary table: the rows represent the individuals, whereas the columns represent the variables. This table is then transformed into a complete disjunctive table, in which the rows still represent individuals but the columns represent the terms (each term being connected to a variable). Once the number of variables increases, the data are represented as a hyper-contingency table (called a Burt table). The similarity between individuals is determined by the number of terms they have in common: two terms are similar if they are present, or absent, in many of the same individuals. The profile calculation is also used to obtain the totals of the rows and columns, as in CA. Each profile forms a cloud of points that is then projected into a different space. The projection onto a single plane highlights a series of orthogonal directions and allows studying the projections of the two clouds, choosing the number of projection axes, and studying the values representing the inertia of each ...
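To make the PCA step above concrete, here is a minimal sketch (ours, not the source article's code) that takes a numeric table X with n rows and p columns, diagonalizes its correlation matrix, projects the cloud of individuals onto the new axes, and reports the inertia (share of the total dispersion) carried by each axis. The function name `pca_correlation` and the random test table are illustrative assumptions.

```python
import numpy as np

def pca_correlation(X):
    """Minimal PCA on the correlation matrix of an n x p numeric table.

    Returns the coordinates of the individuals on the new axes and the
    inertia (share of the total dispersion) explained by each axis.
    """
    # Standardize each variable (mean 0, standard deviation 1) so that
    # the analysis is based on the correlation matrix, as in the text.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.corrcoef(Z, rowvar=False)

    # Eigendecomposition: eigenvectors define the new axes, eigenvalues
    # measure the dispersion (inertia) of the cloud along each axis.
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]       # axes sorted by decreasing inertia
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    coords = Z @ eigvecs                    # scatter-plot coordinates of the individuals
    inertia = eigvals / eigvals.sum()       # share of information per axis
    return coords, inertia

# Hypothetical usage on a random 100 x 5 table:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
coords, inertia = pca_correlation(X)
print("inertia per axis:", np.round(inertia, 3))
```

The CA pre-processing described above can be sketched in the same spirit (again an illustration on assumed data, not the authors' implementation): building the row profiles from a contingency table, computing the average profile (centroid) of the general population, and running the chi-square independence test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3 x 4 contingency table crossing two categorical variables.
N = np.array([[20,  5, 10, 15],
              [10, 25,  5, 10],
              [ 5, 10, 30, 55]])

row_profiles = N / N.sum(axis=1, keepdims=True)  # each row rescaled to sum to 1
centroid = N.sum(axis=0) / N.sum()               # average profile of the population

chi2, p_value, dof, expected = chi2_contingency(N)
print("centroid:", np.round(centroid, 2))
print("chi-square = %.1f, p = %.4f" % (chi2, p_value))
```

For MCA, the complete disjunctive table mentioned above is simply a one-hot encoding of each categorical variable (e.g. `pandas.get_dummies` on a data frame of individuals), after which the same profile and projection machinery applies.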

Citations

... The great originality of this approach lies in the way the components are organized, which facilitates their selection and reuse. Each family is defined for a specific domain, and the approach has been applied successfully in several domains such as decision making (Kornyshova, 2011), requirements definition (Deneckere et al., 2011), data analysis in the humanities and social sciences (Ammar et al., 2014), service co-creation (Ralyté, 2013), agile projects (Deneckere, 2015), etc. The approach is inspired by the concept of software product lines (Clements, 2001) (Weiss, 1999). Product line engineering relies on developing software products through mass customization. ...
... hy Process) (Saaty, 1980), outranking methods (Roy, 1996), weighting methods (Keeney, 1999), and fuzzy methods (Fuller et al., 1996). Gordijn & Akkermans, 2001), i* (Yu, 1995), certain creative-thinking techniques (Michalko, 2006, 2011), or some innovation games (Gray et al., 2010) (Hohmann, 2009). Ammar et al., 2014). Researchers in the humanities and social sciences each have their own way of working with data (collection, coding, analysis, and interpretation). These ways of working are highly heterogeneous and call for a single, shared representation. Moreover, many data analyses are carried out with statistical analysis methods, to find correla ...
Article
Enterprise Architecture (EA) frameworks have proven their efficiency in improving enterprise functioning by providing a global vision of an organization aligned with its strategy. However, these frameworks are often heavyweight and are not used in their entirety, as they are not completely suitable for the situation at hand. To tackle this problem, we suggest a component-based vision of EA frameworks. Our goal is to identify a set of EA components that can be used independently of each other, and to apply them depending on the context. This approach could be used to implement a whole EA method in an organization, to provide a progressive integration of different components, or to improve an existing EA by adding the lacking components. We call our proposal the SEA (Situational Enterprise Architecture) approach. In this paper, we propose a model to formalize EA components and illustrate this model with the TOGAF framework.