Figure 2: Compositionality of Prototypes in Conceptual Spaces

Source publication
Article
During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [35]) adopt a classical symbolic approach, some (e.g. LEABRA [48]) are based on a purely connectionist model, while others (e.g. CLARION [59]) a...

Context in source publication

Context 1
... As previously stated, if we represent a concept as a convex area in a suitable Conceptual Space, then the degree of typicality of a certain individual can be measured as the distance of the corresponding point from the center of the area. The conjunction of two concepts is represented as the intersection of the two corresponding areas, as in Fig. 2. According to the conceptual space approach, Pina should presumably turn out to be very close to the center of polka dot zebra (i.e. to the intersection between zebra and polka dot thing). In other words, she should turn out to be a very typical polka dot zebra, despite being very eccentric on both the concepts zebra and polka dot ...
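
To make the geometry of this example concrete, here is a minimal Python sketch in which concepts are modelled as axis-aligned box regions (a deliberately crude stand-in for the general convex regions of the theory), conjunction is region intersection, and typicality decays with distance from a region's center. The class name, coordinates, and decay constant are all illustrative assumptions, not taken from the source paper.

```python
import numpy as np

class BoxConcept:
    """A concept as an axis-aligned box region of a conceptual space;
    a crude but convenient stand-in for a general convex region."""

    def __init__(self, lo, hi):
        self.lo = np.asarray(lo, dtype=float)
        self.hi = np.asarray(hi, dtype=float)

    @property
    def center(self):
        return (self.lo + self.hi) / 2.0

    def conjoin(self, other):
        """Conjunction of two concepts = intersection of their regions."""
        lo = np.maximum(self.lo, other.lo)
        hi = np.minimum(self.hi, other.hi)
        return None if np.any(lo > hi) else BoxConcept(lo, hi)

    def typicality(self, point, c=1.0):
        """Typicality decays exponentially with distance from the center."""
        d = np.linalg.norm(np.asarray(point, dtype=float) - self.center)
        return float(np.exp(-c * d))

# Pina is eccentric w.r.t. both 'zebra' and 'polka dot thing', yet lands
# near the center of their intersection: a very typical polka dot zebra.
zebra = BoxConcept([0.0, 0.0], [6.0, 2.0])
polka_dot_thing = BoxConcept([4.0, 1.0], [10.0, 5.0])
polka_dot_zebra = zebra.conjoin(polka_dot_thing)
pina = [5.0, 1.5]
print(zebra.typicality(pina))            # low: far from the zebra center
print(polka_dot_thing.typicality(pina))  # low: far from that center too
print(polka_dot_zebra.typicality(pina))  # 1.0: at the intersection's center
```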

Citations

... Gärdenfors' approach of a conceptual framework, where concepts express properties across multiple dimensions that evolve in time and are ascribed to different domains [13], can be adapted to an artificial setup. To do so, a knowledge engineer must provide a finite initial set of features, a task that does not scale as the number of features increases [20]. Alternatively, DNNs could be used to extract pertinent features. ...
... We are not entering into the debate on how representations are formed, nor do we aim to assess any of the claims regarding concept formation in humans [32]. We acknowledge that the dependency on input data and the complexity of creating useful conceptual spaces with sufficient quality dimensions for characterising abstract concepts, thus bridging sub-symbolic and symbolic representations, are unsolved problems in cognitive architectures [12,20], an issue that becomes even more challenging when such dimensions are derived from a latent space with undefined semantics. Our interest lies exclusively in describing the necessary concept space and interplaying structures that could aid an artificial agent in transferring knowledge across contexts and tasks. ...
Chapter
Inspired by cognitive theories of creativity, this paper introduces a computational model (AIGenC) that lays down the necessary components to enable artificial agents to learn, use and generate transferable representations. Unlike machine representations, which rely exclusively on raw sensory data, biological representations incorporate relational and associative information that embeds a rich and structured concept space. The AIGenC model proposes a hierarchical graph architecture with various levels and types of representations procured by the different components. The first component, Concept Processing, extracts objects and affordances from sensory input and encodes them into a concept space. The resulting representations are stored in a dual memory system and enriched with goal-directed and temporal information acquired through reinforcement learning, creating a higher level of abstraction. Two additional and complementary components work in parallel to detect and recover relevant concepts through a matching process and to create new ones, respectively, in a process akin to cognitive Reflective Reasoning and Blending. If Reflective Reasoning fails to offer a suitable solution, a blending operation creates new concepts by combining past information. We discuss the model's capability to yield better out-of-distribution generalisation in artificial agents, thus advancing toward Artificial General Intelligence.
... Aside from bearing advantages for relating the contents of experience and belief in probabilistic learning accounts, the model bears other advantages. For instance, it captures the gradual nature of perception, sensorimotor intentions, and conceptual thought (see [56,57] for examples). ...
Article
The notions of psychological similarity and probabilistic learning are key posits in cognitive, computational, and developmental psychology and in machine learning. However, their explanatory relationship is rarely made explicit within and across these research fields. This opinionated review critically evaluates how these notions can mutually inform each other within computational cognitive science. Using probabilistic models of concept learning as a case study, I argue that two notions of psychological similarity offer important normative constraints to guide modelers’ interpretations of representational primitives. In particular, the two notions furnish probabilistic models of cognition with meaningful interpretations of what the associated subjective probabilities in the model represent and how they attach to experiences from which the agent learns. Similarity representations thereby provide probabilistic models with cognitive, as opposed to purely mathematical, content.
... raspberry and berry) then its region is included in that of the other. This aspect of conceptual spaces makes them particularly appealing from a knowledge representation perspective, as they can thus offer a bridge between symbolic knowledge and vector space encodings [2,3,4]. ...
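
On the assumption that concepts are convex regions, subsumption (e.g. raspberry being a kind of berry) reduces to a region-inclusion test. A hypothetical sketch using scipy's convex hulls; the exemplar coordinates are invented for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def region_subsumes(outer_points, inner_points, tol=1e-9):
    """True if the convex region spanned by outer_points contains the
    region spanned by inner_points, i.e. the inner concept is subsumed."""
    hull = ConvexHull(outer_points)
    # hull.equations rows are [normal | offset]; interior points satisfy
    # normal @ x + offset <= 0 for every facet of the hull
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    return bool(np.all(A @ np.asarray(inner_points, dtype=float).T
                       + b[:, None] <= tol))

# toy 2-D quality space with made-up exemplar coordinates
berry = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 8.0], [0.0, 8.0]])
raspberry = np.array([[2.0, 2.0], [4.0, 2.0], [3.0, 4.0]])
print(region_subsumes(berry, raspberry))  # True: raspberry inside berry
```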
Preprint
Distilling knowledge from Large Language Models (LLMs) has emerged as a promising strategy for populating knowledge bases with factual knowledge. The aim of this paper is to explore the feasibility of similarly using LLMs for learning cognitively plausible representations of concepts, focusing in particular on the framework of conceptual spaces. Such representations allow us to compare concepts along particular quality dimensions, e.g. in terms of their size, colour or shape. Learning conceptual spaces is known to be challenging, among others because many of the features that need to be captured are rarely expressed in text (e.g. shape), a problem which is exacerbated by reporting bias. In this paper, we explore to what extent recent LLMs are able to overcome these barriers. To this end, we introduce a new dataset with three types of probing questions. Our results provide evidence that ChatGPT has access to a rich conceptual structure, which allows it to make connections between unrelated concepts (e.g. the fact that limousines and crocodiles have a similar shape). On the other hand, we also find that the model sometimes falls back on shallow heuristics. Compared to ChatGPT, GPT-4 makes fewer mistakes, although the difference in performance is generally small.
... In [9], the authors claim that Conceptual Spaces can be used as a lingua franca for the different levels of representation. With Conceptual Spaces, it becomes easy to unify and generalize many aspects of symbolic, diagrammatic and subsymbolic approaches and to integrate them on common ground. ...
Chapter
We demonstrate new comparative reasoning abilities of NARS, a formal model of intelligence, which enable the asymmetric comparison of perceivable quantifiable attributes of objects using relations. These new abilities are implemented by extending NAL with additional inference rules. We demonstrate the new capabilities in a bottle-picking experiment on a mobile robot running ONA, an implementation of NARS.
Keywords: Non-Axiomatic Logic, Comparative Relation, Comparative Reasoning, Inference Rules, Visual object comparison, NARS
... Cognitive modelling of operators using cognitive architectures [3,35] is a useful building block for modelling AI cognitive assistants aware of task representations, capable of anomalous behaviour detection, and cognitive load estimation as an input to autonomy engagement. A subsymbolic yet interpretable knowledge representation framework would arguably make a promising step towards modelling the declarative module (e.g., [39,40]), which is yet to be demonstrated in industrial environments involving human-AI teaming. ...
Preprint
Trustworthiness of artificially intelligent agents is vital for the acceptance of human-machine teaming in industrial manufacturing environments. Predictable behaviours and explainable (and understandable) rationale allow humans collaborating with (and building) these agents to understand their motivations and therefore validate decisions that are made. To that aim, we make use of Gärdenfors's cognitively inspired Conceptual Space framework to represent the agent's knowledge using concepts as convex regions in a space spanned by inherently comprehensible quality dimensions. A simple typicality quantification model is built on top of it to determine fuzzy category membership and classify instances interpretably. We apply it on a use case from the manufacturing domain, using objects' physical properties obtained from cobots' onboard sensors and utilisation properties from crowdsourced commonsense knowledge available at public knowledge bases. Such flexible knowledge representation based on property decomposition allows for data-efficient representation learning of typically highly specialist or specific manufacturing artefacts. In such a setting, traditional data-driven (e.g., computer vision-based) classification approaches would struggle due to training data scarcity. This allows for comprehensibility of an AI agent's acquired knowledge by the human collaborator, thus contributing to trustworthiness. We situate our approach within an existing explainability framework specifying explanation desiderata. We provide arguments for our system's applicability and appropriateness for different roles of human agents collaborating with the AI system throughout its design, validation, and operation.
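
The abstract mentions "a simple typicality quantification model" for fuzzy category membership but does not spell it out; one plausible minimal form, assuming Shepard-style exponential decay over salience-weighted distances from concept prototypes (the dimension names and values below are invented):

```python
import numpy as np

def membership(point, prototype, salience, c=1.0):
    """Fuzzy category membership as an exponentially decaying function
    of salience-weighted distance from the concept's prototype."""
    diff = np.asarray(point, dtype=float) - np.asarray(prototype, dtype=float)
    return float(np.exp(-c * np.sqrt(np.sum(np.asarray(salience) * diff ** 2))))

def classify(point, prototypes, salience):
    """Return the best category plus the full graded-membership profile."""
    scores = {name: membership(point, proto, salience)
              for name, proto in prototypes.items()}
    return max(scores, key=scores.get), scores

# hypothetical quality dimensions: (weight_kg, graspability, rigidity)
prototypes = {"bolt": [0.1, 0.9, 1.0], "panel": [4.0, 0.3, 0.8]}
label, scores = classify([0.2, 0.8, 0.9], prototypes, salience=[1.0, 1.0, 1.0])
print(label, scores)  # 'bolt' wins, but memberships stay graded, not 0/1
```

The graded scores, rather than a hard decision boundary, are what would make such a classification inspectable by a human collaborator.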
... To "explain natural cognitive processes, their geometrical structure should be determined by psychophysical measurements which determine the structure of how our perceptions are represented" [10]. Besides, these cognitive representations [11] must be constantly reevaluated on the basis of the new data [12] to defend against deception. This is the way to understand human's ability to make broad judgments and the possibility to understand the essence of things. ...
... Since there already exist computational models of conceptual spaces (Adams & Raubal, 2009; Chella et al., 2001; Gärdenfors, 2014; Lieto, 2021), these models could be extended by implementing the search procedures proposed here in order to account for different types of analogical reasoning. In addition, conceptual spaces can work as an interface between propositional models and subsymbolic models of cognition (Lieto et al., 2017). This opens up the possibility that our approach can be adapted in algorithms based on neural networks (see Jani & Levine, 2000). ...
Article
Full-text available
In this paper, we outline a comprehensive approach to composed analogies based on the theory of conceptual spaces. Our algorithmic model understands analogy as a search procedure and builds upon the idea that analogical similarity depends on a conceptual phenomenon called 'dimensional salience.' We distinguish between category-based, property-based, event-based, and part-whole analogies, and propose computationally oriented methods for explicating them in terms of conceptual spaces.
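
To illustrate 'dimensional salience': which analogue counts as most similar can flip when salience reweights the quality dimensions. A minimal sketch under assumed Shepard-style similarity, with invented coordinates rather than anything from the paper:

```python
import numpy as np

def similarity(a, b, salience, c=1.0):
    """Similarity under a salience profile: up-weighted quality
    dimensions dominate the comparison."""
    d = np.sqrt(np.sum(np.asarray(salience)
                       * (np.asarray(a, float) - np.asarray(b, float)) ** 2))
    return float(np.exp(-c * d))

# toy points on hypothetical (size, ferocity) quality dimensions
tiger = [9.0, 9.0]
candidates = {"housecat": [2.0, 3.0], "sofa": [8.0, 0.0]}
for label, salience in [("size salient", [1.0, 0.1]),
                        ("ferocity salient", [0.1, 1.0])]:
    best = max(candidates,
               key=lambda n: similarity(tiger, candidates[n], salience))
    print(label, "->", best)  # size: sofa; ferocity: housecat
```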
... Unlike Franklin et al. (2020) and Kelly et al. (2020), for example, the account is not committed to the distributed representation of stored items in hyperdimensional spaces of the kind posited in vector symbolic architectures. This is despite the employment of similarity spaces, which may constitute a lingua franca of cognitive architectures (Lieto et al., 2017). In contrast, the account is committed to the claim that episodic models, constructed at recall, do not only represent but functionally recapitulate an event's structure. ...
Article
This paper offers a modeling account of episodic representation. I argue that the episodic system constructs mental models: representations that preserve the spatiotemporal structure of represented domains. In prototypical cases, these domains are events: occurrences taken by subjects to have characteristic structures, dynamics and relatively determinate beginnings and ends. Due to their simplicity and manipulability, mental event models can be used in a variety of cognitive contexts: in remembering the personal past, but also in future-oriented and counterfactual imagination. As structural representations, they allow surrogative reasoning, supporting inferences about their constituents which can be used in reasoning about the represented events.
... In the 300-dimensional word2vec space, the process semantic axes are therefore not specific in their variance properties. They were identified with a different method, based on the notion of semantic prototypes (Lieto et al., 2017). ...
Preprint
The paper describes a model of subjective goal-oriented semantics extending the standard "view-from-nowhere" approach. Generalization is achieved by using a spherical vector structure, essentially supplementing the classical bit with a circular dimension that organizes contexts according to their subjective causal ordering. This structure, known in quantum theory as the qubit, is shown to be a universal representation of contextual, situated meaning at the core of human cognition. The subjective semantic dimension, inferred from fundamental oscillation dynamics, is discretized into six process-stage prototypes expressed in common language. The predicted process-semantic map of natural language terms is confirmed by the open-source word2vec data.
... In [13], we implement a holographic declarative memory consisting of many individual holographic vectors. Each holographic vector represents a distinct concept, collectively serving as the basis vectors for the agent's conceptual space [20]. Our model accounts for human performance on a wide range of tasks, including recall, probability judgement, and decision-making [13], as well as for how humans learn the meaning and part of speech of words from experience [14]. ...
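
The holographic declarative memory cited here rests on holographic vector binding; below is a minimal sketch of the core bind/unbind operations under the standard Plate-style HRR (circular convolution) scheme, with illustrative dimensionality and seed:

```python
import numpy as np

def bind(a, b):
    """Holographic binding via circular convolution, computed in the
    Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    """Approximate inverse of bind: circular correlation with the cue."""
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

rng = np.random.default_rng(0)
dim = 1024
role, filler, other_role, other_filler = (
    rng.normal(0.0, 1.0 / np.sqrt(dim), dim) for _ in range(4))

# a single memory trace superposing two role-filler associations
memory = bind(role, filler) + bind(other_role, other_filler)
recovered = unbind(memory, role)
print(np.dot(recovered, filler))  # close to 1: the filler is retrievable
```

Retrieval is approximate; in practice the noisy result would be cleaned up against the stored concept vectors.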
Preprint
In this article, we present a cognitive architecture that is built from powerful yet simple neural models. Specifically, we describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory. The proposed system lays the groundwork for developing agents that learn continually from diverse tasks and that model human performance at larger scales than is possible with existing cognitive architectures.