Article

Processing: Programming for the media arts

Authors: Casey Reas, Ben Fry

Abstract

Processing is a programming language and environment built for the media arts communities. It is created to teach fundamentals of computer programming within the media arts context and to serve as a software sketchbook. It is used by students, artists, designers, architects, and researchers for learning, prototyping, and production. This essay discusses the ideas underlying the software and presents its relationship to open source software and the idea of software literacy. Additionally, Processing is discussed in relation to education and online communities.
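For readers who have not seen Processing code, the sketch below is a minimal illustrative example (not taken from the article) of the setup()/draw() structure that every Processing sketch is built around, and which makes it practical as a "software sketchbook."

```java
// A minimal Processing sketch (illustrative only): setup() runs once,
// draw() runs continuously, so quick visual experiments take a few lines.
void setup() {
  size(400, 400);      // open a 400x400 pixel canvas
  background(255);     // white background
}

void draw() {
  stroke(0, 40);                    // translucent black outline
  ellipse(mouseX, mouseY, 30, 30);  // draw a circle wherever the mouse is
}
```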


... The Processing programming language was designed to support exactly this kind of creative work [40,119,120]; it was developed by designers and meant to be more accessible than its closest counterpart, Java. Processing has since been leveraged in university-level offerings of "CS Principles" and CS0-style courses, both as an approachable text-based language and as a means to create art [4,137]. ...
... The course is taught entirely in Processing, aside from a Blockly-based warm-up assignment. I chose Processing because it was originally developed by artists and designers to enable the creation of creative work by those with minimal programming background [120]. To scaffold our journey through learning to write Processing code, all students are required to purchase Learning Processing, and it serves as the course textbook [123]. ...
Thesis
Full-text available
The field of computer science has long been plagued by issues of diversity – in particular, attracting and retaining those historically marginalized in computing contexts. This is a great loss to the field, to the future of innovation, and to society. Perhaps most importantly, it is an incalculable loss to those populations excluded from pursuing a passion for computing in the first place. This dissertation chronicles a collection of projects aimed at broadening perceptions of computing, who is participating in computing, and what kinds of artifacts are created with computing. These projects leverage extensive fieldwork in the educational domains of computational craft and open source contribution; they entail (1) course design at the college level and (2) tool and curriculum design for a more open-ended audience of hobbyists and educators. The contribution of this dissertation is documentation of these design processes, along with my subsequent reflections, recommendations, and analysis. First, I share my experience designing two courses developed while on faculty at Berea College: Craft of Computing, which aims to attract a diversity of first- and second-year students to computing, and Open Source Software Engineering, which seeks to retain a diversity of upperclassmen through graduation and into computing careers beyond. Second, I revisit my own prior work in e-textiles tool/curriculum design, sharing long-term impact analysis for the LilyTiny sewable microcontroller and accompanying workshop guide. Evidence so far suggests that my forays into college course design successfully piqued students' interest in new domains, while positively influencing their confidence, identity, and sense of belonging. Analysis of the LilyTiny and accompanying workshop curriculum is also promising; it shows that an inexpensive and stable tool, coupled with freely available instructional resources, can indeed achieve widespread adoption in a market suggestive of novice and educational use.
... Our approach combined creative coding (i.e., programming as a creative medium) (Reas and Fry, 2006) with open-ended learning (i.e., giving students greater agency in shaping their learning trajectories) (Hannafin, 1995) and student-centered learning (i.e., letting students play an active role in teaching) (De Volder et al., 1985). Specifically, we wanted to frame programming skills around small open-ended "making" activities and then invite students to create these activities for their peers. ...
... "Creative coding" is a computing pedagogy that offers some solutions to these problems. In creative coding approaches, programming is presented as a medium for creative (often visual) expression (Reas and Fry, 2006), providing a simple means for highly abstract concepts to be represented visually. This can often lead to the "flow" of a complex program-a common sticking point for students-being clearer and more easy to manipulate. ...
Article
Full-text available
This article reports on a three and a half year design-led project investigating the use of open-ended learning to teach programming to students of interaction design. Our hypothesis is that a more open-ended approach to teaching programming, characterized by both creativity and self-reflection, would improve learning outcomes among our cohort of aspiring HCI practitioners. The objective of our design-led action research was to determine how to effectively embed open-endedness, student-led teaching, and self-reflection into an online programming class. Each of these notions has been studied separately before, but there is a dearth of published work into their actual design and implementation in practice. In service of that objective we present our contribution in two parts: a qualitatively-derived understanding of student attitudes toward open-ended blended learning, as well as a matching set of design principles for future open-ended HCI education. The project was motivated by a search for better educational outcomes, both in terms of student coding self-efficacy and quantitative metrics of cohort performance (e.g., failure rates). The first year programming course within our interaction design-focussed Bachelors program has had the highest failure rate of any core unit for over a decade. Unfortunately, the COVID-19 pandemic confounded any year-to-year quantitative comparison of the learning efficacy of our successive prototypes. There is simply no way to fairly compare the experiences of pre-pandemic and pandemic-affected student cohorts. However, the experience of teaching this material in face-to-face, fully online, and hybrid modalities throughout the pandemic has aided our qualitative exploration of why open-ended learning helps some students but seems to harm others. Through three sets of student interviews, platform data, and insights gained from both the instructional and platform design process, we show that open-ended learning can empower students, but can also exacerbate fears and anxieties around inadequacy and failure. Through seven semesters of iterating on our designs, interviewing students and reflecting on our interventions, we've developed a set of classroom-validated design principles for teaching programming to HCI students without strong computational backgrounds.
... The Processing programming language was designed to support exactly this kind of creative work [20], [21], [22]; it was developed by designers and meant to be more accessible than its closest counterpart, Java. Processing has since been leveraged in university-level offerings of "CS Principles" and CS0-style courses, both as an approachable text-based language and as a means to create art [23], [24]. ...
... The course is taught entirely in Processing, aside from a Blockly-based warm-up assignment. We chose Processing because it was originally developed by artists and designers to enable the creation of creative work by those with minimal programming background [20]. To scaffold our journey through learning to write Processing code, all students are required to purchase Learning Processing, and it serves as the course textbook [34]. ...
... There are also several articles describing how creative coding has been used in art-related teaching, or why it should be part of art education (Greenberg, Kumar & Xu, 2012; Knochel & Patton, 2015; Peppler & Kafai, 2009). When talking about creative coding, programming is often referred to as a new material whose purpose is to create something expressive rather than functional (Artut, 2017; Knochel & Patton, 2015; Maeda, 2000; Reas & Fry, 2006). ...
... As Antonsen states in his interview, programming can be seen as a material to create with. As mentioned before, programming in Art and crafts can be understood as a material whose purpose is to create something expressive rather than functional (Artut, 2017; Knochel & Patton, 2015; Maeda, 2000; Reas & Fry, 2006). Antonsen illustrates that it is important that pupils get to know the particularities of each material. ...
Conference Paper
Full-text available
The purpose of this study was to look at whether programming should be introduced as a new material in Art and crafts education in Norway. Programming has here been linked to creative coding, where it is classified as a new creative material whose purpose is to create something expressive rather than functional. The empirical data was gathered through four semi-structured interviews. The participants were chosen through purposive sampling, based on their knowledge of programming in either art or in Art and crafts education. The results showed that pupils should be taught the particularities of each material, also when it comes to programming. Programming as a material was also described as relevant, accessible, and important for pupils' everyday lives. However, the field's attitudes also show resistance to ICT, and teachers are more focused on developing pupils' skills in traditional materials, craftsmanship, and tactile experiences. Thus, it may be seen as a contradiction to introduce programming as a material, especially when creating digital expressions. At the same time, programming in Art and crafts has potential as a tool for in-depth learning and creative problem solving.
... To implement the interaction tracker component, we used the EyesWeb software [7], which is a platform for designing and implementing real-time multimodal interactive systems. We implemented the rendering engine using the Processing software [32], an open-source creative coding tool for interactive and visual arts. Figure 3 summarizes the interactive system design. ...
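As a rough sketch of the kind of rendering layer described in this excerpt (not the cited system's actual code), a Processing program can map incoming interaction parameters to visuals; here the mouse position stands in for the data an external tracker such as EyesWeb would stream in.

```java
// Illustrative sketch only: mouse position stands in for tracker input.
float diameter = 20;

void setup() {
  size(600, 400);
  noStroke();
}

void draw() {
  background(0);
  // Map horizontal position to a color level and vertical position to size.
  float level = map(mouseX, 0, width, 0, 255);
  diameter = map(mouseY, 0, height, 10, 200);
  fill(level, 180, 255 - level);
  ellipse(width / 2, height / 2, diameter, diameter);
}
```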
... With an Arduino sketch, a calibration procedure is initiated for each experiment, setting the applied force to zero before strapping a test subject in. Several software applications have been developed to perform different experiments in both Processing [26] and PsychoPy (v3.0) [27]. The software applications control the motor by setting motor steps. ...
Article
Full-text available
Electroactive textile (EAT) has the potential to apply pressure stimuli to the skin, e.g. in the form of a squeeze on the arm. To present a perceivable haptic sensation we need to know the perception threshold for such stimuli. We designed a set-up based on motorized ribbons around the arm with five different widths (range 3 - 49 mm) for psychophysical studies. We investigated the perception threshold of force pressure and ribbon reduction in two studies, using two methods (PSI and 1up/3down staircase), comparing sex, the left and right arm, the lower and upper arm, and stimulated surface area with a total of 57 participants. We found that larger stimulation surfaces require less pressure to reach the perception threshold (0.151 N per cm² for 3 mm width, 0.00972 N per cm² for 49 mm width on the lower arm). This indicates a spatial summation effect for these pressure stimuli. We did not find significant differences in perception threshold between the left and right arm or between the upper and lower arm. Between male and female participants we found significant differences for two conditions (10 mm and 25 mm) in Experiment 1, but we could not reproduce this in Experiment 2.
... Examples of this include, but are not limited to: SBART2.4, an interactive tool that utilizes user feedback to control the creation of artifacts (Unemi, 2002); Processing, a programming environment using code-based approaches to generate art (Reas & Fry, 2006). In these examples, artists establish general rules for the system while also allowing the computer system to make some decisions. ...
... • Chapters 15-20 are breadth material. The first four are case studies of specific issues in design through specific projects such as Processing [37], Twine [14], Flow-Matic [42], Inform [32], and Penrose [44]. The last two chapters are breadth topics in the theory of programming languages. ...
Preprint
Full-text available
This paper is a companion to the author's open-access textbook, "Human-Centered Programming Languages." Beyond the contributions of the textbook itself, this paper contributes a set of textbook design principles for overcoming those limitations and an analysis of students' stated needs and preferences drawn from anonymous course report data for three courses, the last of which was based on notes that became the basis of the textbook. The textbook is intended to be multipurpose, with its primary audiences being undergraduate and master's-level elective courses on programming languages within computer science, but with significant opportunity for cross-use in disciplines ranging from human-computer interaction and software engineering to gender studies and disability studies. The book is intended to be language-agnostic, but the course in which it will be used first is Rust-based.
... CroP is a data visualization tool developed in Java using the Processing library [127], designed to represent and analyze multivariate data, in particular relational and temporal data. While it is able to process generic datasets, there is additional support for biological datasets, such as the integration of an external database for cross-referencing gene properties, allowing it to be used to explore and discover patterns across PPI networks and gene expression time-series. ...
Thesis
Full-text available
Data visualization has been shown to be an important tool in knowledge discovery, being used alongside data analysis to identify and highlight patterns, trends and outliers, aiding users in decision-making. The need for analyzing unstructured and increasingly larger datasets has led to the continued emergence of visualization tools that seek to provide methods that facilitate the exploration and analysis of such datasets. Many fields of study still face the challenges inherent to the analysis of complex multidimensional datasets, such as the field of computational biology, whose research on infectious diseases must contend with large protein-protein interaction networks with thousands of genes that vary in expression values over time. Throughout this thesis, we explore the visualization of multivariate data through CroP, a data visualization tool with a coordinated multiple views framework that allows users to adapt the workspace to different problems through flexible panels. While CroP is able to process generic relational, temporal and multivariate quantitative data, it also presents methods directed at the analysis of biological data. This data can be represented through various layouts and functionalities that not only highlight relationships between different variables, but also dig down into discovered patterns in order to better understand their sources and their effects. In particular, we can highlight the exploration of time-series through our dynamic and parameter-based implementation of layouts that bend timelines to visually represent how datasets behave over time. The implemented models and methods are demonstrated through experiments with diverse multivariate datasets, with a focus on gene expression time-series datasets, and complemented with a discussion on how these contributed to the creation of comprehensible visualizations, facilitated data analysis, and promoted pattern discovery. We also validate CroP through model and interface tests performed with participants from both the fields of information visualization and computational biology. As we present our research and a discussion of its results, we can highlight the following contributions: an analysis of the available range of visualization models and tools for multivariate datasets, as well as modern data analysis methods that can be used cooperatively to explore such datasets; a coordinated multiple views framework with a modular workspace that can be adapted to the analysis of varied problems; dynamic visualization models that explore the representation of complex multivariate datasets, combined with modern data analysis methods to highlight and analyze significant events and patterns; a visualization tool that incorporates the developed framework, visualization models and data analysis methods into a platform that can be used by different types of users.
... All the mesocosms were equipped with a microprocessor-based (Arduino Nano) controller, temperature sensors, and a logging system. An open-source user interface controller program was developed in Java using the Processing 3 environment [24]. System communication was based on the UDP protocol over the Ethernet network, which was the fastest method. ...
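For illustration only, a Processing sketch can talk to such a controller over UDP using the standard java.net classes; the address, port, and message format below are hypothetical and not those of the cited mesocosm system.

```java
// Illustrative only: sending a short control message over UDP from Processing.
import java.net.DatagramSocket;
import java.net.DatagramPacket;
import java.net.InetAddress;

DatagramSocket socket;

void setup() {
  size(200, 200);
  try {
    socket = new DatagramSocket();
  } catch (Exception e) {
    println("Could not open socket: " + e);
  }
}

void draw() {
  background(50);
}

// Send a (hypothetical) set-point whenever a key is pressed.
void keyPressed() {
  if (socket == null) return;
  String msg = "SET_TEMP 25.0";   // hypothetical command format
  try {
    byte[] data = msg.getBytes();
    DatagramPacket packet = new DatagramPacket(
        data, data.length, InetAddress.getByName("192.168.1.50"), 8888);
    socket.send(packet);
  } catch (Exception e) {
    println("Send failed: " + e);
  }
}
```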
Article
Full-text available
Salinization of freshwater ecosystems is one of the major challenges imposed largely by climate change and excessive water abstraction for irrigated crop farming. Understanding how aquatic ecosystems respond to salinization is essential for mitigation and adaptation to the changing climate, especially in arid landscapes. Field observations provide invaluable data for this purpose, but they rarely include sufficient spatial and temporal domains; however, experimental approaches are the key to elucidating complex ecosystem responses to salinization. We established similar experimental mesocosm facilities in two different climate zones in Turkey, specifically designed to simulate the effects of salinization and climate change on shallow lake ecosystems. These facilities were used for two case-study experiments: (1) a salinity gradient experiment consisting of 16 salinity levels (range: 0–50 g/L); and (2) a heatwave experiment where two different temperature regimes (no heatwave and +6 °C for two weeks) were crossed with two salinity levels (4 and 40 g/L) with four replicates in each treatment. The experiments lasted 8 and 2 months, respectively, and the experimental mesocosms were monitored frequently. Both experiments demonstrated a significant role of salinization modulated by climate on the structure and function of lake ecosystems. Here, we present the design of the mesocosm facilities, show the basic results for both experiments and provide recommendations for the best practices for mesocosm experiments conducted under saline/hypersaline conditions.
... Both tasks were performed on a 15" laptop operated with a mouse. The experimental tasks were implemented in the Processing environment (Reas & Fry, 2006). ...
Conference Paper
Full-text available
Sense of agency is the feeling that I am the one who caused a given action. In this study we examined the effect of low-frequency (1 Hz) and high-frequency (10 Hz, 20 Hz) rTMS stimulation of the right inferior parietal lobule on the sense of agency and the potential role of gamma oscillations in this process. Participants (N=16) underwent rTMS stimulation, followed by a sense-of-agency task involving moving a cursor under external manipulation of the feedback. Application of 20 Hz rTMS stimulation led to reduced accuracy in distinguishing one's own actions from external ones. This effect was driven by a decreased capacity to recognize external manipulation of the movement.
... Due to the limitations of BCI2000 in modifying the display as required in this experiment, the Processing software was used to simulate an ATC scenario [30]. Processing is graphics software written in Java; it was synchronized in time with BCI2000 through a UDP port (using the BCI2000 "watches" tool) and received the time instant at which the target stimulus was presented in BCI2000. ...
Article
Full-text available
An event-related potential (ERP)-based brain–computer interface (BCI) can be used to monitor a user’s cognitive state during a surveillance task in a situational awareness context. The present study explores the use of an ERP-BCI for detecting new planes in an air traffic controller (ATC). Two experiments were conducted to evaluate the impact of different visual factors on target detection. Experiment 1 validated the type of stimulus used and the effect of not knowing its appearance location in an ERP-BCI scenario. Experiment 2 evaluated the effect of the size of the target stimulus appearance area and the stimulus salience in an ATC scenario. The main results demonstrate that the size of the plane appearance area had a negative impact on the detection performance and on the amplitude of the P300 component. Future studies should address this issue to improve the performance of an ATC in stimulus detection using an ERP-BCI.
... CroP is a data visualization tool developed in Java using the Processing library [63], designed to represent and analyze multivariate data, in particular relational and temporal data. While it is able to process generic datasets, there is additional support for biological datasets, such as the integration of an external database for cross-referencing gene properties, allowing it to be used to explore PPI networks and gene expression time-series. ...
Article
Full-text available
Many fields of study still face the challenges inherent to the analysis of complex multidimensional datasets, such as the field of computational biology, whose research on infectious diseases must contend with large protein-protein interaction networks with thousands of genes that vary in expression values over time. In this paper, we explore the visualization of multivariate data through CroP, a data visualization tool with a coordinated multiple views framework where users can adapt the workspace to different problems through flexible panels. In particular, we focus on the visualization of relational and temporal data, the latter being represented through layouts that distort timelines to represent the fluctuations of values across complex datasets, creating visualizations that highlight significant events and patterns. Moreover, CroP provides various layouts and functionalities to not only highlight relationships between different variables, but also dig down into discovered patterns in order to better understand their sources and their effects. These methods are demonstrated through multiple experiments with diverse multivariate datasets, with a focus on gene expression time-series datasets. In addition to a discussion of our results, we also validate CroP through model and interface tests performed with participants from both the fields of information visualization and computational biology.
... RealityFlow's goals were to facilitate collaboration and to support multiplatform editing of XR content, which has most recently been achieved through Rec Room's authoring tools. However, none of these tools have gained the popularity or widespread use of projects such as Processing [24] or Scratch [16]. ...
... Researcher Reas points out that "software is a tool that controls the flow of bits that orbit the air and surface of our planet. Understanding software and its impact on culture is the foundation for understanding and contributing to modern society" (Reas, 2006). Based on the above, it should be noted that the demand for the integration of digital technologies in all areas of modern education, including the visual arts, is growing. ...
Article
Full-text available
Today digital technologies find their place in all spheres of society, including in the education system. In particular, the use of digital technologies in organizing and conducting fine arts lessons will not only improve the qualifications of teachers in working with technologies and new methods but will also play an important role in shaping students' competencies in accordance with modern requirements. It should be noted that digital technologies are effective only if they are used purposefully and effectively. This article describes the general classification of digital technologies and their use in organizing art classes, depending on the form and purpose of the lesson.
... This allows Arduino devices to be controlled remotely via the internet, with minimal coding knowledge required to implement the system. Similarly, a basic Processing (Reas and Fry, 2006) implementation is provided, allowing the accessible coding environment to be used to build custom graphical interfaces for controlling electronic boards, without needing to worry about how to implement communication protocols. ...
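As a minimal sketch of this idea (not Funken's actual interface), Processing's bundled serial library can send plain-text commands to a microcontroller; the port index, baud rate, and command strings below are assumptions.

```java
// Illustrative sketch: sending simple text commands to a board over serial.
import processing.serial.*;

Serial port;

void setup() {
  size(300, 150);
  printArray(Serial.list());                    // list available serial ports
  port = new Serial(this, Serial.list()[0], 115200);  // assumed port/baud
}

void draw() {
  background(mousePressed ? color(0, 200, 0) : color(60));
}

void mousePressed() {
  port.write("LED ON\n");                       // hypothetical command
}

void mouseReleased() {
  port.write("LED OFF\n");
}
```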
Conference Paper
Full-text available
In order to offer a novel approach towards the development of interactive projects in architecture and design, as well as their tight integration in existing CAAD toolchains, this paper presents Funken, an open-source toolkit that handles serial communication for microcontrollers, aimed at simplifying the integration process between CAAD tools and interactive devices, and allowing fast implementation of human-readable, user-specific communication protocols on the fly. Funken's details and implementation are presented, as well as custom-developed interfaces to Grasshopper, NodeJS and Processing. Funken is designed for building systems that allow users to implement their own custom-defined logic, without imposing predetermined behaviors. Within teaching, it encapsulates the complexity of microcontroller programming while still allowing complex behaviors to be implemented through simple interfaces. The ability to integrate Funken into a variety of CAD and media design frameworks makes it possible to add interactive functionality to a wide range of projects.
... Programming and coding are becoming increasingly an inherent component of design education, and are being exploited in several aspects of the design process, including algorithmic thinking, form finding, generative design, creativity, and optimization (Reas and McWilliams, 2010; Burry, 2013; Caetano, 2020; Leitao et al., 2016; Terzidis, 2006). Processing, the open-source programming language originally developed for the electronic and visual arts and design communities (Reas and Fry, 2006), has been recently explored in architectural teaching and research, especially in the area of shape studies (Ahlquist and Menges, 2012), opening up unique possibilities for architects with respect to creative design exploration, formation, and interactivity. Recent studies on the design exploration of geometric patterns and shapes with respect to tangible cultural heritage demonstrate the potential of computational methods through programming in terms of both analysis and form generation (Agirbas, 2017; Barrios and Alani, 2015), especially as relates to the challenges associated with understanding, simulating, and reconstructing the conditions and rules under which these patterns were created traditionally by the original craftsmen and artists. ...
Conference Paper
Full-text available
Coding and visual programming are becoming an important component of design education, with focus on algorithmic thinking, form finding, and generative design. Programming languages such as Processing are becoming increasingly explored in the area of shape studies in architecture, thus opening unique possibilities for creative design exploration. Most pedagogical approaches that integrate coding in the exploration of heritage-inspired geometric patterns focus on shape grammars and rule-based design. This exploratory paper further examines the potential of traditional geometric patterns as sources of inspiration for interactivity in architectural design. We discuss the process and outcomes of an undergraduate architectural computing course, where students implement visual programming using Processing to develop interactive architecture prototypes based on elements of cultural heritage. Results demonstrated a variety of abstraction and translation strategies for both tangible and intangible heritage inspirations, and generation of emergent concepts for diverse architectural prototypes including urban grids, movable structures, and responsive façades.
... They were simply asked to produce a binary series of 300 elements. We gathered the responses from the participants using custom software written in Processing (Reas & Fry, 2006). The procedure was run on a MacBook Pro 13.3-inch Early 2015 with OS X 10.11. ...
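A response-collection program of this general kind can be written in a few lines of Processing; the sketch below is a hypothetical reconstruction, not the authors' custom software.

```java
// Illustrative only: record a binary series of 300 responses from the
// '0' and '1' keys and save them to a text file when complete.
StringList responses = new StringList();
int targetLength = 300;

void setup() {
  size(400, 200);
  textAlign(CENTER, CENTER);
  textSize(18);
}

void draw() {
  background(255);
  fill(0);
  text("Press 0 or 1  (" + responses.size() + "/" + targetLength + ")",
       width / 2, height / 2);
}

void keyPressed() {
  if ((key == '0' || key == '1') && responses.size() < targetLength) {
    responses.append(str(key));
    if (responses.size() == targetLength) {
      saveStrings("series.txt", responses.array());  // one response per line
    }
  }
}
```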
Article
Full-text available
Many psychological studies have shown that human-generated sequences are hardly ever random in the strict mathematical sense. However, what remains an open question is the degree to which this (in)ability varies between people and is affected by contextual factors. Herein, we investigated this problem. In two studies, we used a modern, robust measure of randomness based on algorithmic information theory to assess human-generated series. In Study 1 (), in a factorial design with task description as a between-subjects variable, we tested the effects of context and mental fatigue on human-generated randomness. In Study 2 (), in online research, in experimental design, we further investigated the effect of mental fatigue on the randomness of human-generated series and the relationship between the need for cognition (NFC) and the ability to produce random-like series. Results of Study 1 show that the activation of the ability to produce random-like series depends on the relevance of the contextual cues (), whether they activate known representations of a random series generator and consequently help to avoid the production of trivial sequences. Our findings from both studies on the effect of mental fatigue (Study 1 – ; Study 2 – ) and cognitive motivation () demonstrate that regardless of the context or task's novelty people quickly lose interest in the random series generation. Therefore, their performance decreases over time. However, people high in the NFC can maintain the cognitive motivation for a longer period and consequently on average generate more random series. In general, our results suggest that when contextual cues and intrinsic constraints are in optimal interaction people can temporarily escape the structured and trivial patterns and produce more random-like sequences.
... The year 2021 marks the twentieth anniversary of the Processing project, an open-source, Java-based programming framework initiated by Casey Reas and Ben Fry [62]. It is designed with the goal of enabling artists and designers to more easily employ computer programming for visual media and creative output [180]. Over the years, Processing and other similar creative programming frameworks have become vastly popular, with their user bases no longer limited to art and design practitioners. ...
Preprint
Full-text available
The metaverse, an enormous virtual-physical cyberspace, has brought unprecedented opportunities for artists to blend every corner of our physical surroundings with digital creativity. This article conducts a comprehensive survey on computational arts, in which seven critical topics are relevant to the metaverse, describing novel artworks in blended virtual-physical realities. The topics first cover the building elements of the metaverse, e.g., virtual scenes and characters and auditory and textual elements. Next, the survey reflects on several remarkable types of novel creation in the expanded horizons of metaverse cyberspace, such as immersive arts, robotic arts, and other user-centric approaches fuelling contemporary creative output. Finally, we propose several research agendas: democratising computational arts, digital privacy and safety for metaverse artists, ownership recognition for digital artworks, technological challenges, and so on. The survey also serves as introductory material for artists and metaverse technologists to begin creations in the realm of surrealistic cyberspace.
... The methods presented in this paper were integrated into CroP [6], a data visualization tool created in Java using the Processing library [25]. CroP employs a multiple coordinated views layout to visualize user-provided datasets at different levels of detail through various visualization models contained within flexible panels, including relational networks, tabular visualizations, linear graphs, and an implementation of time curves (Fig. 1). ...
Article
Visualization has been shown to be a valuable tool in the analysis of large and complex temporal datasets, aided by the emergence of new models such as Time Curves, which distorts timelines to position time points based on their similarity with each other, reflecting changes in the data over time. In this paper, we further explore time-series functionally and aesthetically by presenting an interactive and parameter-based implementation of the Time Curves model, complemented with the addition of supporting visualizations and data analysis methods. In our implementation we introduce Time Paths, a force-directed layout that can dynamically transform the original model to not only smooth the transitions between time points, but also reduce visual noise in favor of portraying overall patterns. The proposed addition of visual elements to the model includes temporal glyphs and a supporting timeline graph, which help discover and better understand temporal patterns across complex datasets. Through interactive exploration, we demonstrate how these methods can be used to analyze and identify the main agents at the source of significant instances in three biological datasets. These methods are presented within CroP, a data visualization tool with coordinated multiple views aimed at the analysis of biological datasets.
... GUI modules are created with Java Swing and Abstract Window Toolkit (AWT) components for their cross-platform reliability. The primary movement animation is implemented using Processing [23], an open-source Java library that abstracts tedious computer graphics commands away and provides an intuitive, user-friendly development environment. In the implementation of DynamoVis, Processing handles all graphical commands (trajectory shapes, annotations, and legend elements) on a geographic basemap. ...
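As an illustration of how Processing handles such graphical commands (not DynamoVis code), the sketch below draws a made-up trajectory as a polyline with point markers; a real tool would first project longitude/latitude onto the basemap's pixel space.

```java
// Illustrative only: a movement trajectory rendered from coordinate pairs.
float[][] track = {
  {50, 300}, {120, 260}, {200, 270}, {280, 180}, {350, 150}, {430, 90}
};

void setup() {
  size(500, 350);
  background(235);

  // Trajectory line
  stroke(30, 90, 200);
  strokeWeight(2);
  noFill();
  beginShape();
  for (float[] p : track) {
    vertex(p[0], p[1]);
  }
  endShape();

  // Time-ordered point markers
  fill(200, 60, 60);
  noStroke();
  for (float[] p : track) {
    ellipse(p[0], p[1], 8, 8);
  }
}
```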
Article
Full-text available
Background This paper introduces DynamoVis version 1.0, an open-source software developed to design, record and export custom animations and multivariate visualizations from movement data, enabling visual exploration and communication of patterns that capture the associations between animals' movement and the internal and external factors affecting it. Proper representation of these dependencies, grounded in cartographic principles and intuitive visual forms, can facilitate scientific discovery, decision-making, collaborations, and foster understanding of movement. Results DynamoVis offers a visualization platform that is accessible and easily usable for scientists and the general public without a need for prior experience with data visualization or programming. The intuitive design focuses on a simple interface to apply cartographic techniques, giving ecologists of all backgrounds the power to visualize and communicate complex movement patterns. Conclusions DynamoVis 1.0 offers a flexible platform to quickly and easily visualize and animate animal tracks to uncover hidden patterns captured in the data, and explore the effects of internal and external factors on their movement path choices and motion capacities. Hence, DynamoVis can be used as a powerful communicative and hypothesis generation tool for scientific discovery and decision-making through visual reasoning. The visual products can be used as a research and pedagogical tool in movement ecology.
... Visualization authoring libraries and the interfaces they provide can be categorized by their level of abstraction [25,50,84]. On one end of the spectrum, users are provided with graphical elements (e.g., rectangles, circles, and lines) that need to be composed from the bottom up to construct visualizations (e.g., Processing [64] and D3 [7]). This approach is the most expressive way to create a wide variety of visualizations but, at the same time, makes the construction process time-consuming and laborious. ...
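A toy example of this bottom-up style (in Processing rather than D3, and with made-up data) is shown below: the chart is assembled directly from rectangles and a line rather than declared through a higher-level grammar.

```java
// Illustrative only: a bar chart composed from graphical primitives.
int[] values = {12, 30, 22, 8, 27};

void setup() {
  size(400, 300);
  background(255);

  int barWidth = 60;
  int gap = 15;
  int baseline = height - 40;

  stroke(0);
  line(30, baseline, width - 20, baseline);   // x-axis

  fill(80, 130, 220);
  noStroke();
  for (int i = 0; i < values.length; i++) {
    int x = 40 + i * (barWidth + gap);
    int h = values[i] * 6;                    // crude scaling
    rect(x, baseline - h, barWidth, h);       // one rectangle per datum
  }
}
```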
Article
Full-text available
The combination of diverse data types and analysis tasks in genomics has resulted in the development of a wide range of visualization techniques and tools. However, most existing tools are tailored to a specific problem or data type and offer limited customization, making it challenging to optimize visualizations for new analysis tasks or datasets. To address this challenge, we designed Gosling, a grammar for interactive and scalable genomics data visualization. Gosling balances expressiveness for comprehensive multi-scale genomics data visualizations with accessibility for domain scientists. Our accompanying JavaScript toolkit called Gosling.js provides scalable and interactive rendering. Gosling.js is built on top of an existing platform for web-based genomics data visualization to further simplify the visualization of common genomics data formats. We demonstrate the expressiveness of the grammar through a variety of real-world examples. Furthermore, we show how Gosling supports the design of novel genomics visualizations. An online editor and examples of Gosling.js, its source code, and documentation are available at https://gosling.js.org.
... "Understanding software and its impact on culture is the foundation for understanding and contributing to modern society." [3] International best practice shows that, to develop pupils' technology-related competencies and future-oriented knowledge, corresponding requirements have been placed on the content of fine arts education. For example, in Finland pupils become acquainted with architectural photography, landscape art, materials processing, architecture, mobile technologies, and robotics design (PS) in their fine arts lessons. ...
Article
Full-text available
Digital technology can only be effective if it is used purposefully and effectively. This article describes the general classification of digital technologies and the structuring of appropriate accommodations for the formation and application of technological competencies in students through the use of digital technologies in Fine Arts classes.
... The custom 3/4D LLAMA visualisation software we developed (github.com/jameslefevre/visualiser-4D-microscopy-analysis) is built on the Processing 3 environment and language [28], which provides a powerful framework for interactive visualisation; this is the only component of the system which is not based on ImageJ. It should be noted that the visualiser is not directly part of the analysis pipeline, and following our modular approach it may be ignored, or alternative tools used. ...
Article
Full-text available
Background With recent advances in microscopy, recordings of cell behaviour can result in terabyte-size datasets. The lattice light sheet microscope (LLSM) images cells at high speed and high 3D resolution, accumulating data at 100 frames/second over hours, presenting a major challenge for interrogating these datasets. The surfaces of vertebrate cells can rapidly deform to create projections that interact with the microenvironment. Such surface projections include spike-like filopodia and wave-like ruffles on the surface of macrophages as they engage in immune surveillance. LLSM imaging has provided new insights into the complex surface behaviours of immune cells, including revealing new types of ruffles. However, full use of these data requires systematic and quantitative analysis of thousands of projections over hundreds of time steps, and an effective system for analysis of individual structures at this scale requires efficient and robust methods with minimal user intervention. Results We present LLAMA, a platform to enable systematic analysis of terabyte-scale 4D microscopy datasets. We use a machine learning method for semantic segmentation, followed by a robust and configurable object separation and tracking algorithm, generating detailed object level statistics. Our system is designed to run on high-performance computing to achieve high throughput, with outputs suitable for visualisation and statistical analysis. Advanced visualisation is a key element of LLAMA: we provide a specialised tool which supports interactive quality control, optimisation, and output visualisation processes to complement the processing pipeline. LLAMA is demonstrated in an analysis of macrophage surface projections, in which it is used to i) discriminate ruffles induced by lipopolysaccharide (LPS) and macrophage colony stimulating factor (CSF-1) and ii) determine the autonomy of ruffle morphologies. Conclusions LLAMA provides an effective open source tool for running a cell microscopy analysis pipeline based on semantic segmentation, object analysis and tracking. Detailed numerical and visual outputs enable effective statistical analysis, identifying distinct patterns of increased activity under the two interventions considered in our example analysis. Our system provides the capacity to screen large datasets for specific structural configurations. LLAMA identified distinct features of LPS and CSF-1 induced ruffles and it identified a continuity of behaviour between tent pole ruffling, wave-like ruffling and filopodia deployment.
... The Ultrasonic.h library was used to convert the US signal into a metric distance value. All information gathered from the US, PS, and rotary encoder was transmitted at 40 Hz through the Mega 2560 board's USB port and recorded in a text file (.txt) on a laptop using the free Processing software (Reas and Fry, 2006). In addition to the data generated by the sensors, the coordinates generated by a global navigation satellite system (GNSS) model GR-3 (Topcon, Tokyo, Japan), with an accuracy of 0.017 m with real-time kinematic (RTK) differential correction in a dynamic condition, were recorded in the text file. ...
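For illustration, logging incoming serial lines to a text file takes only a short Processing sketch; the port, baud rate, and record format below are assumptions rather than the cited setup.

```java
// Illustrative sketch: append newline-terminated sensor lines from a serial
// port to a text file, with a timestamp per record.
import processing.serial.*;

Serial port;
PrintWriter log;

void setup() {
  size(200, 100);
  port = new Serial(this, Serial.list()[0], 115200);  // assumed port/baud
  port.bufferUntil('\n');                 // fire serialEvent once per line
  log = createWriter("sensors.txt");
}

void draw() {
  background(40);
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line != null) {
    log.println(millis() + "," + line);   // timestamp each record
    log.flush();
  }
}

void keyPressed() {
  log.flush();
  log.close();                            // close the file before exiting
  exit();
}
```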
Article
There is a need for new precision agriculture approaches that allow real-time intervention within sugarcane rows to reduce costs and minimize negative environmental impacts. Therefore, our goal was to test an alternative system for detection within rows of sugarcane plants. The objective was to determine the errors of an alternative system to detect targets of different sizes at different travel speeds. A system with a photoelectric sensor, ultrasonic sensor, and encoder was developed to detect and map the plants within the sugarcane row. The use of sensors separately and simultaneously for plant detection was compared. To improve the accuracy of plant detection, decision tree (DT), random forest (RF), and support vector machine (SVM) models were tested. The three machine learning (ML) models used data generated by the photoelectric and ultrasonic sensors along with the displacement sensor. The models were compared in terms of their precision in detecting plants within sugarcane rows. The approach with the two sensors and the DT model had the best precision (>90%) in plant detection. The sensors have the ability to detect 91% of the total plants (recall = 0.91). The travel speed influenced the performance of the sensors in detecting the targets, especially at 2 m s⁻¹.
... Human involvement is limited to associating a musical note with a color, which is applied through historical context, and to giving parameters to a function that randomizes the strokes generated on a virtual canvas. Here we present a process to convert piano compositions into computer-generated paintings using the Processing programming language [7], which is designed for artists with a focus on computer programming in a visual context. We show the results of extracting colors from famous musical compositions and paintings generated using our proposed system. ...
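A minimal sketch in this spirit (not the authors' system, and without Maryon's actual scale values) maps the 12 pitch classes to 12 hues and lays down random strokes in the mapped color:

```java
// Illustrative only: pitch classes -> hues, painted as random strokes.
int[] melody = {0, 4, 7, 2, 9, 5, 11, 0};   // made-up pitch classes

void setup() {
  size(500, 500);
  colorMode(HSB, 360, 100, 100);
  background(0, 0, 100);                    // white canvas
  noStroke();
}

void draw() {
  // Pick a note at random and lay down a translucent stroke in its hue.
  int pitchClass = melody[int(random(melody.length))];
  float hue = pitchClass * 30;              // 12 classes x 30 degrees
  fill(hue, 70, 90, 40);
  float x = random(width);
  float y = random(height);
  rect(x, y, random(20, 120), random(4, 12));   // a brush "stroke"
}
```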
Article
The authors present an automated, rule-based system to convert piano compositions into paintings. Using a color-note association scale presented by Edward Maryon in his book Marcotone (1919), which correlates the 12-tone scale with 12 hues of the color circle, the authors present a simple approach for extracting the colors associated with each note played in a piano composition. The authors also describe the color extraction and art generation process in detail, as well as the process to create 'moving art,' which imitates the progression of a musical piece in real time. They share and discuss artworks generated for four famous piano compositions.
... Students are asked to quickly review the lecture notes. ... Game development using Processing: we researched several programming languages and application domains and eventually chose Processing (Reas & Fry, 2006), an open-source library and programming environment used to code within the context of the visual arts. Processing uses simplified Java syntax to create visual, interactive media. ...
Article
Full-text available
With the increased reliance on technology, computer programming has emerged as an essential skill that is interesting to many audiences beyond merely computer scientists. As a result, many students from various disciplines take first-year computer science courses. This led to classrooms with a lot of diversity in student motivation, backgrounds, learning needs, and educational levels. Teaching the same material to such a diverse group is challenging. The aim of this paper is two-fold. Firstly, we present a flipped-based approach that benefits from the mixed-ability nature of first-year programming courses rather than considering it as a burden. Secondly, we present a study that evaluates the extent to which the proposed approach enhances student learning in such a mixed-ability environment. The study was conducted in a first-year course at the University of British Columbia – Okanagan, and it was based on three components: 1) a survey of 25 Likert items (n = 46), 2) class average grade and pass rate over 6 years (n = 42 + 38 + 56 + 79 + 90 + 74), and 3) student ratings of the course over 5 years (n = 42 + 38 + 56 + 79 + 90). Findings of the survey indicate an overall positive students' impression with no significant difference in the opinions of various student populations. Analyzing the course grades, pass rates, and student ratings confirmed the survey findings and showed an overall improvement in grades, pass rates, and student satisfaction.
... In the early 2000s there were some proprietary platforms such as BasicStamp (Barragan, 2004), which consists of an open-source controller board that uses a simple interface based on the open-source Processing software (Reas and Fry, 2006) to program that controller board. The aim of this proposal is none other than to bring to a non-technical audience, specifically digital artists, the possibility of using electronic sensors and actuators to enrich their artistic creations. ...
Thesis
Full-text available
We start from the idea that access to technology reduces social differences, and that access to knowledge about how to create technology and how to understand it empowers us as citizens. The present study rests on the premise that understanding software creation processes not only extends our skills and possibilities, but also contributes to generating structures and ways of reasoning that can be useful in other contexts. In recent years, different organizations have been promoting the implementation of computer programming activities, such as educational robotics, creative computing, or mobile device programming, in primary and secondary schools. In many cases the curriculum is modified to include this type of activity in different subjects. These educational actions sometimes pursue different objectives and are sometimes implemented without much reflection on the original objective the proposals pursue, or possibly guided by an educational trend. This study first offers a reflection on the historical perspective of the different proposals, attempting to contextualize the current moment and thus avoid falling into cycles of hype around technological application. At the same time, it experimentally analyzes the impact that this type of activity has on students' executive functions. Thus, the main objective (O1) is to determine whether there is a relationship between the development of executive functions and creative computing activities among secondary school students. The process of generating a quantification instrument to measure the effect of these educational actions is also documented. From a methodological point of view, this study provides a scalable and replicable format for analyzing the evolution of part of the executive functions of the students involved in this type of activity, from a positivist and experimental standpoint. The study was carried out with 74 third-year secondary school students at the Ins Miquel Biada school in Mataró. In this process, care was taken with both the teaching materials produced and the processes of application and integration into school logistics, so that the study can be easily replicated and scaled. The final results demonstrate the existence of a relationship between the evolution of executive functions and educational actions related to creative computing. At the same time, the resulting analysis shows interesting data on students' degree of motivation and on differences in results by sex. This opens the door both to replicating the study itself in order to evaluate similar educational actions, and to exploring the application of this type of action with the aim of improving the skills of students with learning difficulties or motivation problems.
Article
Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events.
Preprint
Full-text available
The impact of temperature on growth is typically considered under heat- or cold-shock conditions that elicit specific regulation. In between, cellular growth rate varies according to the Arrhenius law of thermodynamics. Here, we use growth-rate dynamics during transitions between temperatures to discover how this behavior arises and what determines the temperature sensitivity of growth. Using a device that enables single-cell tracking across a wide range of temperatures, we show that bacteria exhibit a highly conserved, slow response to temperature upshifts with a time scale of ~1.5 doublings at the higher temperature, regardless of initial/final temperature or nutrient source. We rule out transcriptional, translational, and membrane reconfiguration as potential mechanisms. Instead, we demonstrate that an autocatalytic enzyme network incorporating temperature-sensitive Michaelis-Menten kinetics recapitulates all temperature-shift dynamics, reveals that import dictates steady-state Arrhenius growth behavior, and successfully predicts alterations in the upshift response observed under simple-sugar or low-nutrient conditions or in fungi. These findings indicate that metabolome rearrangement dictates how temperature affects microbial growth.
Chapter
We showcase visualizations created for art periods of Dalí, van Gogh, and Picasso by leveraging deep neural embedding models like word2vec to represent color features. First, the embedding vectors are generated for every color used in the artworks of these painters. Next, t-distributed Stochastic Neighbor Embedding (t-SNE) is applied to generate a two-dimensional visualization of the color space. Colors used in close proximity on the canvas are observed as compact clusters in the visualizations. These visualizations are termed fingerprints, as they uniquely depict each art period of a painter by highlighting the color palette used in their works. The authors further provide commentary on the artists' art periods and how the fingerprints showcase their artistic evolution. Keywords: Art periods, Neural embeddings, Word embeddings, t-SNE
Article
This study investigated the ability to produce accurate multiphase flow profiles simulating the response of producing reservoirs, using generative deep learning (GDL) methods. Historical production data from numerical simulators were used to train a variational autoencoder (VAE) algorithm that was then used to predict the output of new wells in unseen locations. This work describes a procedure in which data analysis techniques can be applied to existing historical production profiles to gain insight into field-level reservoir flow behavior. The procedure includes clustering, dimensionality reduction, correlation, in addition to novel interpretation methodologies that synthesize the results from reservoir simulation output, characterizing flow conditions. The insight was then used to build and select samples to train a VAE algorithm that reproduces the multiphase reservoir behavior for unseen operational conditions with high accuracy. Furthermore, using deep feature space interpolation, the trained algorithm can be used to further generate new predictions of the reservoir response under operational conditions for which we do not have previous examples in the training data set. It is found that VAE can be used as a robust multiphase flow simulator. Applying the methodology to the problem of determining multiphase production rate from new producing wells in undrilled locations showed positive results. The methodology was tested successfully in predicting multiphase production under different scenarios including multiwell channelized and heterogeneous reservoirs. Comparison with other shallow supervised algorithms demonstrated improvements realized by the proposed methodology. The study developed a novel methodology to interpret both data and GDL algorithms, geared toward improving reservoir management. The method was able to predict the performance of new wells in previously undrilled locations, potentially without using a reservoir simulator.
Conference Paper
This study investigated the ability to produce accurate multiphase flow profiles simulating the response of producing reservoirs, using Generative Deep Learning (GDL) methods. Historical production data from numerical simulators were used to train a GDL model that was then used to predict the output of new wells in unseen locations. This work describes a procedure in which data analysis techniques are used to gain insight into reservoir flow behavior at a field level based on existing historical data. The procedure includes clustering, dimensionality reduction, correlation, in addition to novel interpretation methodologies that synthesize the results from reservoir simulation output, characterizing flow conditions. The insight was then used to build and train a GDL algorithm that reproduces the multiphase reservoir behavior for unseen operational conditions with high accuracy. The trained algorithm can be used to further generate new predictions of the reservoir response under operational conditions for which we do not have previous examples in the training data set. We found that the GDL algorithm can be used as a robust multiphase flow simulator. In addition, we showed that the physics of flow can be captured and manipulated in the GDL latent space after training to reproduce different physical effects that did not exist in the original training data set. Applying the methodology to the problem of determining multiphase production rate from new producing wells in undrilled locations showed positive results. The methodology was tested successfully in predicting multiphase production under different scenarios including multiwell channelized and heterogeneous reservoirs. Comparison with other shallow supervised algorithms demonstrated improvements realized by the proposed methodology, compared to existing methods. The study developed a novel methodology to interpret both data and GDL algorithms, geared towards improving reservoir management. The method was able to predict the performance of new wells in previously undrilled locations without using a reservoir simulator.
Chapter
In collaborative visual analytics sessions, participants analyze data and cooperate toward a shared vision. These decision-making processes are challenging and time-consuming. In this chapter, we introduce a system for facilitating decision-making in exploratory and collaborative visual analytics sessions. Our system comprises an assistant analytical agent, a multi-display wall and a framework for interactive visual analytics. The assistant agent understands participants' ongoing conversations and exhibits information about the data on the displays. The displays are also used to manifest the current state of the session. In addition, the agent answers the participants' questions, either regarding the data or open-domain ones, and preserves the productivity and efficiency of the session by confirming that the participants do not deviate from the session's goal. Meanwhile, our visual analytics medium makes data tangible, and hence more comprehensible and natural to operate with. The results of our qualitative study indicate that the proposed system fosters productive multi-user decision-making processes.