Conference Paper

One-Shot Ontogenetic Learning in Biomedical Datastreams

Abstract

Recent technological advances in the biological and physical sciences have enabled the generation of the large datasets required to apply deep neural networks. Despite the demonstrable success of these methods in tasks such as image classification, machine translation, and query answering, their widespread adoption in biomedical research has been tempered by issues inherent in modeling complex biological systems that are not readily addressed by traditional gradient-based neural networks. We consider the problem of unsupervised, general-purpose learning in biological sequence data, where variable-order temporal dependencies, multi-dimensionality, and uncertainty in both model structure and data are the norm. To model and learn these dependencies in an intuitive and holistic manner, we employ the data abstraction of Simplicial Grammar within a Bayesian learning framework. We demonstrate that this framework can quickly encode and integrate new information and perform prediction tasks without extensive, iterative training.
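To make the abstract's central claim concrete, the sketch below illustrates the general idea of one-shot, count-based Bayesian encoding of sequence data with variable-order context dependencies, where "learning" is a single pass of posterior updates rather than iterative gradient training. This is only a minimal illustration under assumed simplifications: it uses a back-off Dirichlet-multinomial predictor, not the paper's Simplicial Grammar, and all names here (VariableOrderBayesPredictor, observe, predict) are hypothetical.

```python
# Minimal sketch (assumption: NOT the paper's Simplicial Grammar implementation) of
# one-shot Bayesian sequence learning: a single pass over the data accumulates
# sufficient statistics for contexts of every order; prediction is a posterior
# predictive query, with no iterative training loop.

from collections import defaultdict


class VariableOrderBayesPredictor:
    """Dirichlet-multinomial next-symbol predictor over contexts of length 0..max_order."""

    def __init__(self, alphabet, max_order=3, alpha=1.0):
        self.alphabet = list(alphabet)
        self.max_order = max_order
        self.alpha = alpha                      # symmetric Dirichlet prior pseudo-count
        # counts[context][symbol] = times `symbol` followed `context` in observed data
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        """One-shot update: a single pass adds counts for every context order."""
        for i, sym in enumerate(sequence):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                context = tuple(sequence[i - k:i])
                self.counts[context][sym] += 1

    def predict(self, history):
        """Posterior predictive for the next symbol, backing off to the longest
        observed context that matches the end of `history`."""
        history = tuple(history)
        history = history[max(0, len(history) - self.max_order):]
        for k in range(len(history), -1, -1):
            context = history[len(history) - k:]
            table = self.counts.get(context)
            if table:
                total = sum(table.values())
                denom = total + self.alpha * len(self.alphabet)
                return {s: (table.get(s, 0) + self.alpha) / denom for s in self.alphabet}
        # Nothing observed yet: fall back to the uniform prior.
        return {s: 1.0 / len(self.alphabet) for s in self.alphabet}


# Example: encode one short nucleotide sequence, then query a prediction immediately.
model = VariableOrderBayesPredictor(alphabet="ACGT", max_order=3)
model.observe("ACGTACGTACGA")
print(model.predict("ACG"))   # 'T' is the most probable next symbol after one pass
```

The point of the sketch is the workflow, not the model class: new observations are integrated by incrementing counts (a conjugate posterior update), so encoding new information and making predictions are both immediate, which is the behavior the abstract attributes to the Simplicial Grammar framework.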