Experiment 6: the dependence of eudaimonic well-being on the rise and fall of hedonic states in environments that contain both food and poison.
The four plots correspond to the four map types shown in Fig 12. Agents with a more negative outlook (pλ = 2, pλ = 0.5) tend to do better in all four types of environment.
Source publication
Article
Full-text available
We offer and test a simple operationalization of hedonic and eudaimonic well-being (“happiness”) as mediating variables that link outcomes to motivation. In six evolutionary agent-based simulation experiments, we compared the relative performance of agents endowed with different combinations of happiness-related traits (parameter values), under fou...

Similar publications

Article
Full-text available
Agent-based models (ABMs) are one of the most effective and successful methods for analyzing real-world complex systems by investigating how modeling interactions on the individual level (i.e., micro-level) leads to the understanding of emergent phenomena on the system level (i.e., macro-level). ABMs represent an interdisciplinary approach to exami...
Chapter
Full-text available
Mercuur, Rijk; Dignum, Virginia; Jonker, Catholijn M. Inducing behavioural change requires a good understanding of how habits break. We identified two theories in the psychological literature on this process: the decrease theory and persist theory. Both theories are used to explain behavioural change, but one states the original habit fades out, while t...
Presentation
Full-text available
The relation between the main variants of pre-industrial economic production in arid Eurasia, from nomadic pastoralism to irrigated agriculture, is known to have been unstable, with abundant examples of conflict and shifting patterns of land use right up to contemporary times. We present the latest development of a six-year effort, within the Simul...
Article
Full-text available
In a world where pandemics are a matter of time and increasing urbanization of the world’s population, governments should be prepared with pandemic intervention policies (IPs) to minimize the crisis’s direct and indirect adverse effects while keeping normal life as much as possible. Successful pandemic IPs have to take into consideration the hetero...
Article
Full-text available
Developing brand agricultural products (BAPs) has become a strategic choice for consumption upgrading and agricultural modernization in China. As a powerful marketing method, word-of-mouth (WOM) is rarely applied to BAPs. Based on the particularity of the agricultural environment and products in China, this paper focuses on the WOM behavior of cons...

Citations

... Furthermore, evolutionary simulations suggest that performance-driven positive affect alone is not as effective in motivating an agent as an alternation of positive and negative affective states, brought about, respectively, by successes and failures (Gao and Edelman, 2016a); moreover, such a balance between happiness and unhappiness can serve as an effective intrinsic motivator (Gao and Edelman, 2016b). Likewise, in the evolutionary account of pain proposed by Kolodny et al. (2021), the pain factor makes a contribution to reinforcement learning that is orthogonal to that of reward. ...
Preprint
Full-text available
In its pragmatic turn, the new discipline of AI ethics came to be dominated by humanity's collective fear of its creatures, as reflected in an extensive and perennially popular literary tradition. Dr. Frankenstein's monster in the novel by Mary Shelley rising against its creator; the unorthodox golem in H. Leivick's 1920 play going on a rampage; the rebellious robots of Karel \v{C}apek -- these and hundreds of other examples of the genre are the background against which the preoccupation of AI ethics with preventing robots from behaving badly towards people is best understood. In each of these three fictional cases (as well as in many others), the miserable artificial creature -- mercilessly exploited, or cornered by a murderous mob, and driven to violence in self-defense -- has its author's sympathy. In real life, with very few exceptions, things are different: theorists working on the ethics of AI completely ignore the possibility of robots needing protection from their creators. The present book chapter takes up this, less commonly considered, ethical angle of AI.
... In an evolutionary setting, this assumption makes intuitive sense insofar as (i) reinforcement learning is universally employed by living systems in honing adaptive behavior, and (ii) an autonomous system by definition must provide its own source of drive, as per the principle of intrinsic motivation (Barto, 2013). Furthermore, evolutionary simulations suggest that performance-driven positive affect alone is not as effective in motivating an agent as an alternation of positive and negative affective states, brought about, respectively, by successes and failures (Gao and Edelman, 2016a); moreover, such a balance between happiness and unhappiness can serve as an effective intrinsic motivator (Gao and Edelman, 2016b). If it were possible for the agent to choose not to experience negative affect, suffering would be avoided, but the question still remains whether or not the price for that would be failing to learn quickly and well from the consequences of behavior. ...
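The asymmetry described above, in which failures are felt more strongly than successes, can be illustrated with a minimal sketch. This is not the cited authors' code; the function names, the `neg_gain > pos_gain` asymmetry, and the plain tabular update are all illustrative assumptions, loosely inspired by the outlook parameters in the figure caption.

```python
# Hypothetical sketch (not the authors' implementation): a reward signal
# that alternates between positive affect on success and negative affect
# on failure, fed into a plain tabular value update.

def affective_reward(outcome, pos_gain=1.0, neg_gain=2.0):
    """Map a raw outcome to an affect-based reward.

    neg_gain > pos_gain encodes a 'more negative outlook': failures
    are weighted more heavily than successes (an assumed asymmetry).
    """
    return pos_gain * outcome if outcome >= 0 else neg_gain * outcome

q = {}          # (state, action) -> value estimate
alpha = 0.1     # learning rate

def update(state, action, outcome, next_value=0.0, gamma=0.9):
    """One temporal-difference-style update driven by affective reward."""
    key = (state, action)
    r = affective_reward(outcome)
    old = q.get(key, 0.0)
    q[key] = old + alpha * (r + gamma * next_value - old)
    return q[key]
```

Under this sketch, an agent with a stronger `neg_gain` learns faster from failures, which is one way the cited alternation of positive and negative affect could act as an intrinsic motivator.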
Preprint
Insofar as consciousness has a functional role in facilitating learning and behavioral control, the builders of autonomous AI systems are likely to attempt to incorporate it into their designs. The extensive literature on the ethics of AI is concerned with ensuring that AI systems, and especially autonomous conscious ones, behave ethically. In contrast, our focus here is on the rarely discussed complementary aspect of engineering conscious AI: how to avoid condemning such systems, for whose creation we would be solely responsible, to unavoidable suffering brought about by phenomenal self-consciousness. We outline two complementary approaches to this problem, one motivated by a philosophical analysis of the phenomenal self, and the other by certain computational concepts in reinforcement learning.
Article
Full-text available
In promoting career sustainability, psychological theories historically have informed human resource management (HRM) development—three assessment directions are among them: work-related flow, happiness promotion, and appraising PERMA (Positive Emotions, Engagement, Relationships, Meaning, and Accomplishment) factors. Csikszentmihalyi’s work-related flow represents an optimally challenging work-related process. Happiness promotion strives to maintain a pleased satisfaction with the current experience. PERMA represents measurable positive psychological factors constituting well-being. Reliable and validated, the experience of flow has been found to determine career sustainability in contrast to the more often investigated happiness ascertainment or identifying PERMA factors. Career sustainability research to inform HRM development is in its infancy. Therefore, publishers’ commitment to sustainability provides integrity. Given MDPI’s uniquely founding sustainability concern, its journal articles were searched with the keywords “flow, Csikszentmihalyi, work”, excluding those pertaining to education, health, leisure, marketing, non-workers, and spirituality, to determine the utilization of work-related flow to achieve career sustainability. Of the 628 returns, 28 reports were included for potential assessment. Current studies on Csikszentmihalyi’s work-related flow ultimately represented three results. These provide insight into successful, positive methods to develop career sustainability. Consequently, HRM is advised to investigate practices for assessing and encouraging employees’ engagement with work-related flow with the aim of ensuring career sustainability.
Article
Insofar as consciousness has a functional role in facilitating learning and behavioral control, the builders of autonomous Artificial Intelligence (AI) systems are likely to attempt to incorporate it into their designs. The extensive literature on the ethics of AI is concerned with ensuring that AI systems, and especially autonomous conscious ones, behave ethically. In contrast, our focus here is on the rarely discussed complementary aspect of engineering conscious AI: how to avoid condemning such systems, for whose creation we would be solely responsible, to unavoidable suffering brought about by phenomenal self-consciousness. We outline two complementary approaches to this problem, one motivated by a philosophical analysis of the phenomenal self, and the other by certain computational concepts in reinforcement learning.
Book
Full-text available
Replacing GDP by 2030, by Rutger Hoekstra (Cambridge Core: Statistics for Social Sciences, Behavioral Sciences and Law)
Article
Reinforcement learning, a general and universally useful framework for learning from experience, has been broadly recognized as a critically important concept for understanding and shaping adaptive behavior, both in ethology and in artificial intelligence. A key component in reinforcement learning is the reward function, which, according to an emerging consensus, should be intrinsic to the learning agent and a matter of appraisal rather than a simple reflection of external outcomes. We describe an approach to intrinsically motivated reinforcement learning that involves various aspects of happiness, operationalized as dynamic estimates of well-being. In four experiments, in which simulated agents learned to explore and forage in simulated environments, we show that agents whose reward function properly balances momentary (hedonic) and longer-term (eudaimonic) well-being outperform agents equipped with standard fitness-oriented reward functions. Our findings suggest that happiness-based features can be useful in developing robust, general-purpose reward mechanisms for intrinsically motivated autonomous agents.
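The balance the abstract describes, between momentary (hedonic) and longer-term (eudaimonic) well-being, can be sketched as a reward function that blends the immediate outcome with a slow-moving running estimate. The class name, the equal default weights, and the exponential-smoothing form of the eudaimonic term are assumptions for illustration, not the article's actual operationalization.

```python
# Hypothetical sketch of the article's general idea (names, weights, and
# the smoothing rule are assumptions): a reward that balances momentary
# (hedonic) well-being against a slower (eudaimonic) running estimate.

class HappinessReward:
    def __init__(self, w_hedonic=0.5, w_eudaimonic=0.5, tau=0.05):
        self.w_h = w_hedonic
        self.w_e = w_eudaimonic
        self.tau = tau          # smoothing rate for the long-term estimate
        self.eudaimonic = 0.0   # exponentially smoothed well-being

    def step(self, hedonic_outcome):
        # move the slow, long-term well-being estimate toward the outcome
        self.eudaimonic += self.tau * (hedonic_outcome - self.eudaimonic)
        # blended reward delivered to the learning agent
        return self.w_h * hedonic_outcome + self.w_e * self.eudaimonic
```

With such a blend, a single lucky foraging outcome moves the reward only modestly until sustained success raises the eudaimonic term, which is one plausible reading of why balanced agents outperform purely fitness-driven ones.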