Three scenarios showing the definition of per-type scales and the operation of scaling interactive sliders.

Source publication
Conference Paper
Full-text available
The performance of parallel and distributed applications is highly dependent on the characteristics of the execution environment. In such environments, the network topology and its characteristics determine how fast data can be transmitted and placed in the resources. These are key phenomena for understanding the behavior of such applications and possibly im...

Context in source publication

Context 1
... implementation automatically defines the initial scaling so that the maximum size is the same for all objects. Figure 4 illustrates how the automatic scaling of our implementation works. The values depicted within the geometric shapes in the figure are in Megaflops for hosts (squares) and in Megabits/second for links (diamonds). ...
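The per-type scaling described above can be sketched as follows: each object type (hosts measured in Megaflops, links in Megabits/second) gets its own scale factor so that the largest object of every type is drawn at the same maximum glyph size, and an interactive slider then multiplies that scale per type. This is a minimal illustration under assumed names and a hypothetical target size, not the tool's actual code.

```python
# Minimal sketch of per-type automatic scaling (hypothetical names).
# Each object type gets its own scale factor so that the largest
# object of every type is drawn with the same maximum glyph size.

MAX_GLYPH_SIZE = 40.0  # pixels; shared target size for all types (assumed)

def per_type_scales(objects):
    """objects: list of (type, value) pairs, e.g. ('host', 1500.0)
    in Megaflops or ('link', 100.0) in Megabits/second."""
    max_by_type = {}
    for obj_type, value in objects:
        max_by_type[obj_type] = max(max_by_type.get(obj_type, 0.0), value)
    # One scale per type: value -> glyph size in [0, MAX_GLYPH_SIZE].
    return {t: MAX_GLYPH_SIZE / m for t, m in max_by_type.items() if m > 0}

def glyph_size(obj_type, value, scales, slider=1.0):
    """slider is the interactive per-type multiplier (1.0 = automatic)."""
    return value * scales[obj_type] * slider

objects = [('host', 1500.0), ('host', 750.0), ('link', 100.0), ('link', 10.0)]
scales = per_type_scales(objects)
print(glyph_size('host', 1500.0, scales))  # 40.0: the largest host
print(glyph_size('link', 100.0, scales))   # 40.0: the largest link, same size
```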

Similar publications

Article
Full-text available
We introduce a high-performance cost-effective network topology called Slim Fly that approaches the theoretically optimal network diameter. Slim Fly is based on graphs that approximate the solution to the degree-diameter problem. We analyze Slim Fly and compare it to both traditional and state-of-the-art networks. Our analysis shows that Slim Fly h...
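The degree-diameter trade-off that Slim Fly targets can be checked empirically on small instances; a minimal sketch, assuming an adjacency-list input and a connected graph, computes a topology's diameter by BFS from every node.

```python
# Sketch: compute a network topology's diameter by BFS from every node.
# Feasible only for small graphs; assumes the graph is connected.
from collections import deque

def diameter(adj):
    """adj: dict mapping node -> iterable of neighbors (undirected)."""
    best = 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# A 6-node ring has diameter 3.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(diameter(ring))  # 3
```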
Article
Full-text available
System expandability is becoming a major concern for highly parallel computers and data centers, because their number of nodes gradually increases year by year. In this context, we propose a low-degree topology and its floor layout in which a cabinet or node set can be added by connecting short cables to a single existing cabinet. Our graph ana...

Citations

... Flow maps have also been proposed, where discrete spatiotemporal data is represented as a continuous function (KDE), and flow maps are extracted using a 3D gravity model for temporal trends (considered as movement flows) [20]. Scalability-centric approaches (e.g., VIVA [36]) have suggested the use of timeline visualizations, in association with network data, to construct node glyphs for representing resource flow characteristics in large-scale distributed systems. ...
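The first step of the flow-map pipeline mentioned above, turning discrete spatiotemporal points into a continuous function via KDE, can be sketched as below; the gravity-model extraction is not shown, and the grid, bandwidth, and sample points are assumptions.

```python
# Sketch: turning discrete event locations into a continuous density
# with a Gaussian KDE, the first step of the flow-map pipeline above
# (the 3D gravity-model extraction is not shown).
import numpy as np

def gaussian_kde_grid(points, bandwidth, grid_x, grid_y):
    """points: (n, 2) array of event coordinates; returns density on a grid."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    grid = np.stack([xx.ravel(), yy.ravel()], axis=1)  # (m, 2) cells
    # Squared distances between every grid cell and every event point.
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
    return density.reshape(xx.shape) / (len(points) * 2 * np.pi * bandwidth ** 2)

events = np.array([[0.2, 0.3], [0.25, 0.35], [0.8, 0.7]])  # assumed points
grid = np.linspace(0.0, 1.0, 50)
density = gaussian_kde_grid(events, bandwidth=0.1, grid_x=grid, grid_y=grid)
print(density.shape)  # (50, 50): continuous estimate of event intensity
```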
Preprint
Full-text available
The electrical power grid is a critical infrastructure, with disruptions in transmission having severe repercussions on daily activities, across multiple sectors. To identify, prevent, and mitigate such events, power grids are being refurbished as 'smart' systems that include the widespread deployment of GPS-enabled phasor measurement units (PMUs). PMUs provide fast, precise, and time-synchronized measurements of voltage and current, enabling real-time wide-area monitoring and control. However, the potential benefits of PMUs, for analyzing grid events like abnormal power oscillations and load fluctuations, are hindered by the fact that these sensors produce large, concurrent volumes of noisy data. In this paper, we describe working with power grid engineers to investigate how this problem can be addressed from a visual analytics perspective. As a result, we have developed PMU Tracker, an event localization tool that supports power grid operators in visually analyzing and identifying power grid events and tracking their propagation through the power grid's network. As a part of the PMU Tracker interface, we develop a novel visualization technique which we term an epicentric cluster dendrogram, which allows operators to analyze the effects of an event as it propagates outwards from a source location. We robustly validate PMU Tracker with: (1) a usage scenario demonstrating how PMU Tracker can be used to analyze anomalous grid events, and (2) case studies with power grid operators using a real-world interconnection dataset. Our results indicate that PMU Tracker effectively supports the analysis of power grid events; we also demonstrate and discuss how PMU Tracker's visual analytics approach can be generalized to other domains composed of time-varying networks with epicentric event characteristics.
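The grouping that an epicentric layout conveys can be approximated with a simple binning of anomaly arrival times around the source; this is an assumed simplification of the dendrogram described above, with hypothetical PMU names, times, and bin width.

```python
# Sketch: order PMUs by when an anomaly reaches them and bin them into
# "rings" around the epicenter, the grouping an epicentric layout shows.
# Arrival times and bin width are hypothetical.

def epicentric_rings(arrival_times, bin_width=0.05):
    """arrival_times: dict PMU -> seconds after the event started."""
    rings = {}
    for pmu, t in sorted(arrival_times.items(), key=lambda kv: kv[1]):
        rings.setdefault(int(t // bin_width), []).append(pmu)
    return rings

times = {'pmu_src': 0.00, 'pmu_a': 0.03, 'pmu_b': 0.04, 'pmu_c': 0.11}
print(epicentric_rings(times))
# {0: ['pmu_src', 'pmu_a', 'pmu_b'], 2: ['pmu_c']}
```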
... An implementation of this can be found in ParaGraph. An extended version is implemented in Triva [49,50]. The relative position of the entities represents the real topology of the network, and a temporal animation mechanism makes it possible to analyze how communication volumes vary over time. ...
... Their surface area is then determined from these values. The "Scalable Topology-based Visualization" [49,50] of the same tool represents the topology and the communications by means of a graph, in which the nodes are the resources and the edges the communications. An example is given in Figure 3.6. ...
... Zooming is a feature employed by a large number of analysis tools in their spatiotemporal representations [5,6,7,30,44,45,48,49,50,52,80,100]. In the case of Ocelotl, it is not a graphical zoom, that is, one magnifying the graphical objects or disaggregating the visual aggregates, but a zoom on the data: the temporal or spatiotemporal aggregation process is reiterated on a sub-part of the trace. ...
Article
Full-text available
Trace visualization techniques are commonly used by developers to understand, debug, and optimize their applications. Most analysis tools contain spatiotemporal representations, composed of a timeline and the resources involved in the application's execution. These techniques make it possible to link the dynamics of the application to its structure or topology. However, they suffer from scalability issues and are incapable of providing overviews for the analysis of huge traces that weigh at least several Gigabytes and contain over a million events. This is caused by screen-size constraints, the performance required for efficient interaction, and the analyst's perceptual and cognitive limitations. Overviews are nevertheless necessary to provide an entry point to the analysis, as recommended by Shneiderman's mantra, "Overview first, zoom and filter, then details-on-demand", a guideline that helps in designing a visual analysis method. To address this situation, we elaborate in this thesis several scalable visualization-based analysis methods. They represent the application behavior over both the temporal and spatiotemporal dimensions, and integrate all the steps of Shneiderman's mantra, in particular by providing the analyst with a synthetic view of the trace. These methods rely on an aggregation technique that reduces the representation complexity while keeping the maximum amount of information; both complexity and information content are expressed using measures from information theory. We determine which parts of the system to aggregate by satisfying a trade-off between these measures, whose respective weights are adjusted by the user in order to choose a level of detail. Solving this trade-off reveals the behavioral heterogeneity of the entities that compose the analyzed system, which helps to find anomalies in embedded multimedia applications and in parallel applications running on a computing grid. We have implemented these techniques in Ocelotl, an analysis tool developed during this thesis and designed to analyze traces containing up to several billion events. Ocelotl also proposes effective interactions that fit a top-down analysis strategy, such as synchronizing our aggregated view with more detailed representations, in order to find the sources of the anomalies.
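The complexity/information trade-off described in this abstract can be illustrated on a single metric over consecutive time slices: partitions into contiguous aggregates are scored as p times the complexity reduction minus (1 - p) times the information lost, and a dynamic program picks the best one. This is a hedged sketch under assumed measures (parts removed as gain, a count-weighted divergence to the segment mean as loss), not the measures actually used by Ocelotl.

```python
# Sketch of the complexity/information trade-off on one metric over
# consecutive time slices; measures and names are assumptions.
import math

def loss(values):
    """Information lost by replacing a segment with its mean:
    count-weighted divergence to the segment mean (always >= 0)."""
    total, n = sum(values), len(values)
    if total == 0:
        return 0.0
    return sum(v * math.log(v * n / total) for v in values if v > 0)

def gain(values):
    """Complexity reduction: merged slices minus one, i.e. parts removed."""
    return len(values) - 1

def best_partition(values, p):
    """Dynamic program over contiguous partitions maximizing
    p * gain - (1 - p) * loss; p in [0, 1] is the analyst's weight."""
    n = len(values)
    best = [0.0] * (n + 1)          # best[i]: score of values[:i]
    cut = [0] * (n + 1)
    for i in range(1, n + 1):
        best[i] = -math.inf
        for j in range(i):          # last aggregate is values[j:i]
            seg = values[j:i]
            score = best[j] + p * gain(seg) - (1 - p) * loss(seg)
            if score > best[i]:
                best[i], cut[i] = score, j
    parts, i = [], n
    while i > 0:
        parts.append((cut[i], i))
        i = cut[i]
    return list(reversed(parts))

activity = [5, 5, 5, 40, 42, 41, 5, 5]
print(best_partition(activity, p=0.5))
# [(0, 3), (3, 6), (6, 8)]: homogeneous runs merge, the burst stays separate
```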
... An implementation of this can be found in ParaGraph. An extended version is implemented in Triva [49,50]. The relative position of the entities represents the real topology of the network, and a temporal animation mechanism makes it possible to analyze how communication volumes vary over time. ...
... Zooming is a feature employed by a large number of analysis tools in their spatiotemporal representations [5,6,7,30,44,45,48,49,50,52,80,100]. In the case of Ocelotl, it is not a graphical zoom, that is, one magnifying the graphical objects or disaggregating the visual aggregates, but a zoom on the data: the temporal or spatiotemporal aggregation process is reiterated on a sub-part of the trace. ...
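The zoom on the data described above amounts to re-running the aggregation on a sub-part of the trace; reusing the best_partition sketch given earlier, with hypothetical slice bounds:

```python
# Data zoom as re-aggregation, reusing the best_partition sketch above:
# instead of magnifying glyphs, the aggregation process is re-run on a
# sub-part of the trace (slice bounds and p are hypothetical).
sub_trace = activity[3:6]                # analyst selects the burst
print(best_partition(sub_trace, p=0.3))  # [(0, 3)]: the burst is uniform
```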
... The "Scalable Topology-based Visualization" of Triva [49,50], showing the topology of the application, the nodes, the links, and their utilization rates. ...
Thesis
Trace visualization techniques are frequently used by developers to understand, debug, and optimize their applications. Most analysis tools rely on spatiotemporal representations, which involve a time axis and a representation of the resources, and link the dynamics of the application to its structure or topology. However, these representations do not address the scalability problem satisfactorily. Faced with traces on the order of a Gigabyte containing more than a million events, they prove incapable of presenting an overview of the trace, because of the limitations imposed by screen size, the performance required for good interaction, and the cognitive and perceptual limits of the analyst, who cannot cope with an overly complex representation. This overview is nevertheless necessary, since it constitutes an entry point to the analysis; it is the first step of Shneiderman's mantra, "Overview first, zoom and filter, then details-on-demand", a principle that helps in designing a visual analysis method. Faced with this observation, we develop in this thesis two analysis methods, one temporal and the other spatiotemporal, based on visualization. Each integrates all the steps of Shneiderman's mantra, including the overview, while ensuring scalability. These methods rely on an aggregation technique that reduces the complexity of the representation while preserving as much information as possible. To do so, we associate with these two concepts measures derived from information theory. The parts of the system are aggregated so as to satisfy a trade-off between these two measures, whose respective weights are adjusted by the analyst in order to choose a level of detail. Resolving this trade-off discriminates the heterogeneity of the behavior of the entities composing the system under analysis. This allows us to detect anomalies in traces of embedded multimedia applications, or of parallel computing applications running on a grid. We have implemented these techniques in a tool, Ocelotl, whose design choices ensure scalability for traces of several billion events. We also provide effective interaction, notably by synchronizing our visualization method with more detailed representations, to allow a top-down analysis down to the source of the anomalies.
... The use of graph visualization for the presentation of communication graphs is widespread, see for instance a 1992 survey [11]. More recent work has applied a variety of graph drawing methods to automatically generate layouts for various graphs associated with concurrent and distributed programs [4], [14], [18]. ...
Article
This paper describes a prototype visualization system for concurrent and distributed applications programmed using Erlang, providing two levels of granularity of view. Both visualizations are animated to show the dynamics of aspects of the computation.
Article
Full-text available
The electrical power grid is a critical infrastructure, with disruptions in transmission having severe repercussions on daily activities, across multiple sectors. To identify, prevent, and mitigate such events, power grids are being refurbished as ‘smart’ systems that include the widespread deployment of GPS-enabled phasor measurement units (PMUs). PMUs provide fast, precise, and time-synchronized measurements of voltage and current, enabling real-time wide-area monitoring and control. However, the potential benefits of PMUs, for analyzing grid events like abnormal power oscillations and load fluctuations, are hindered by the fact that these sensors produce large, concurrent volumes of noisy data. In this paper, we describe working with power grid engineers to investigate how this problem can be addressed from a visual analytics perspective. As a result, we have developed PMU Tracker, an event localization tool that supports power grid operators in visually analyzing and identifying power grid events and tracking their propagation through the power grid's network. As a part of the PMU Tracker interface, we develop a novel visualization technique which we term an epicentric cluster dendrogram, which allows operators to analyze the effects of an event as it propagates outwards from a source location. We robustly validate PMU Tracker with: (1) a usage scenario demonstrating how PMU Tracker can be used to analyze anomalous grid events, and (2) case studies with power grid operators using a real-world interconnection dataset. Our results indicate that PMU Tracker effectively supports the analysis of power grid events; we also demonstrate and discuss how PMU Tracker's visual analytics approach can be generalized to other domains composed of time-varying networks with epicentric event characteristics.
Article
Full-text available
The analysis of large-scale systems faces syntactic and semantic difficulties: How to observe millions of distributed and asynchronous entities? How to interpret the disorder that results from the microscopic observation of such entities? How to produce and handle relevant abstractions for the systems' macroscopic analysis? Faced with the failure of the analytic approach, the concept of epistemic emergence, related to the nature of knowledge, allows us to define an alternative strategy. This strategy is motivated by the observation that scientific activity relies on abstraction processes that provide macroscopic descriptions to tackle the systems' complexity. This thesis is more specifically interested in the production of spatial and temporal abstractions through data aggregation. In order to generate scalable representations, two essential aspects of the aggregation process must be controlled. Firstly, the complexity and the information content of macroscopic representations should be jointly optimized in order to preserve the relevant details for the observer, while minimizing the cost of the analysis. We propose several measures of quality (internal criteria) to evaluate, compare, and select representations depending on the context and the objectives of the analysis. Secondly, in order to preserve their explanatory power, the generated abstractions should be consistent with the background knowledge exploited by the observer for the analysis. We propose to exploit the systems' organisational, structural, and topological properties (external criteria) to constrain the aggregation process and to generate syntactically and semantically consistent representations. Consequently, the automation of the aggregation process requires solving a constrained optimization problem. We propose a generic algorithm that adapts to the criteria expressed by the observer, and we show that the complexity of this optimization problem depends directly on these criteria. The macroscopic approach supported by this thesis is evaluated on two classes of systems. Firstly, the aggregation process is applied to the visualisation of large-scale distributed applications for performance analysis. It allows the detection of anomalies at several scales in the execution traces and the explanation of these anomalies according to the system's syntactic properties. Secondly, the process is applied to the aggregation of news for the analysis of international relations. The geographical and temporal aggregation of media attention allows the definition of semantically consistent macroscopic events for the analysis of the international system. Furthermore, we believe that the approach and the tools presented in this thesis can be extended to a wider class of application domains.
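The external criteria described here can be illustrated by restricting aggregation to the subtrees of a resource hierarchy: a node is collapsed into a single part only when the p-weighted quality favors it. A minimal sketch, mirroring the measures of the earlier time-slice example; the hierarchy, values, and names are assumptions.

```python
# Sketch of aggregation constrained by an external criterion: only the
# subtrees of a given hierarchy (e.g. grid -> site -> machine) are
# admissible aggregates. All names and measures are assumptions.
import math

def kl_to_mean(values):
    """Count-weighted divergence of a group of leaves to their mean."""
    total, n = sum(values), len(values)
    if total == 0 or n == 0:
        return 0.0
    return sum(v * math.log(v * n / total) for v in values if v > 0)

def aggregate_tree(tree, values, node, p):
    """tree: dict node -> children; values: dict leaf -> activity.
    Returns (parts, leaf values under node), aggregated bottom-up."""
    children = tree.get(node, [])
    if not children:                       # a leaf is its own part
        return [node], [values[node]]
    parts, leaves = [], []
    for child in children:
        c_parts, c_leaves = aggregate_tree(tree, values, child, p)
        parts += c_parts
        leaves += c_leaves
    # Collapse this subtree into a single part if the trade-off favors it.
    if p * (len(parts) - 1) - (1 - p) * kl_to_mean(leaves) > 0:
        return [node], leaves
    return parts, leaves

tree = {'grid': ['site_a', 'site_b'], 'site_a': ['a1', 'a2'], 'site_b': ['b1', 'b2']}
values = {'a1': 10, 'a2': 11, 'b1': 10, 'b2': 90}
print(aggregate_tree(tree, values, 'grid', p=0.4)[0])
# ['site_a', 'b1', 'b2']: the homogeneous site collapses, the other stays
```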