Grounding relations in perceptual routines

Humans have an exceptional ability to notice relations between different entities and to transfer their relational knowledge across a variety of situations. For example, adults can easily discriminate a pair of identical items from a pair of different items, whatever those items are. It is still unclear how such relational concepts are learned and grounded in lower-level cognitive architectures. Here, we explore whether many relational concepts can be grounded in simple perception-action routines that allow humans to extract relational information that is invariant to the particular entities being compared. We are conducting behavioral experiments to determine the role of active perception in human relational abilities and to characterize the particular perceptual strategies that people use to infer visual relations. In parallel, we are experimenting with connectionist models with and without active perception to analyze the strategies that artificial agents develop to solve analogous visual relational tasks.

Open-ended evolution of communication

Humans have developed a great variety of complex communicative systems (languages) without any centralized assistance. Accordingly, the evolution of human communication has often been modeled as the result of distributed learning among agents that are reinforced for successfully transmitting information to one another. These models, however, face two major challenges: 1) even in the most successful cases, the agents develop only a very small number of communicative conventions, whereas humans have successfully agreed upon thousands of words; 2) after groups of artificial agents converge on a set of communicative conventions, they have no incentive to improve or expand it, whereas the development of human languages is open-ended. Here, I explore whether these two challenges could be resolved by dynamically changing the problem that the agents are learning to solve with communication. I hypothesize that a communicative problem that starts small and gradually increases in difficulty as the agents agree upon new communicative conventions is essential for achieving tractable evolution of rich communicative systems in decentralized multi-agent systems.
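
The hypothesis can be illustrated with a toy sketch (not the model from the paper): two tabular agents play a Lewis-style signaling game, and the set of meanings they must communicate about grows whenever their recent accuracy crosses a threshold. All names and parameters here (epsilon, learning rate, the 200-round accuracy window) are illustrative assumptions.

```python
import random

random.seed(0)

class Agent:
    """Tabular sender/receiver with epsilon-greedy action selection."""
    def __init__(self, eps=0.05):
        self.q = {}        # (state, action) -> estimated value
        self.eps = eps

    def act(self, state, actions):
        if random.random() < self.eps:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, lr=0.2):
        key = (state, action)
        self.q[key] = self.q.get(key, 0.0) + lr * (reward - self.q.get(key, 0.0))

def run_curriculum(rounds=20000, start_meanings=2, max_meanings=8, threshold=0.85):
    """Grow the set of meanings whenever the pair communicates reliably."""
    sender, receiver = Agent(), Agent()
    meanings = list(range(start_meanings))
    signals = list(range(max_meanings))
    recent = []
    for _ in range(rounds):
        m = random.choice(meanings)
        s = sender.act(m, signals)           # sender maps meaning -> signal
        guess = receiver.act(s, meanings)    # receiver maps signal -> meaning
        r = 1.0 if guess == m else 0.0
        sender.learn(m, s, r)
        receiver.learn(s, guess, r)
        recent.append(r)
        if len(recent) > 200:
            recent.pop(0)
        # expand the problem once current conventions are reliable
        if (len(recent) == 200 and sum(recent) / 200 >= threshold
                and len(meanings) < max_meanings):
            meanings.append(len(meanings))
            recent.clear()
    return len(meanings), sum(recent) / max(len(recent), 1)
```

In this sketch, conventions established on the small meaning set remain useful after each expansion, so the agents always face a tractable next step rather than the full problem at once.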

Dubova, M. Growing Opportunities to Grow: Toward Open-Ended Multi-Agent Communication Learning. (submitted)

Categorical perception meets El Greco

It has commonly been assumed that categorical perception effects uniformly affect how we perceive different items in the visual field. Here, we adapt the color-matching experimental paradigm to test this assumption. Spoiler: we found that only some of the objects in the visual field are biased by categorical color associations. We suspect that eye movements determine whether the perception of a given item is affected by categorical biases.

Collaborator: Rob Goldstone

Dubova, M. & Goldstone, R. Categories Color Perception Variably in a Simultaneous Matching Task. (submitted)

Grounded Communicative AI

Here, I review the main insights of Embodied, Embedded, Enactive, and Extended cognition research to identify the aspects of naturalistic learning conditions that play causal roles in human language development. I then use this analysis to propose a list of concrete, implementable components for building “grounded” artificial communicative intelligence. These components include embodying machines in a perception-action cycle, equipping agents with active exploration mechanisms so they can build their own curriculum, allowing agents to gradually develop motor abilities to promote piecemeal language development, and endowing agents with adaptive feedback from their physical and social environments.

Dubova, M. Building Human-Like Communicative Intelligence: a Grounded Perspective. (submitted)

Language games

We are studying how shared communicative systems can emerge and develop in populations of independently adapting reinforcement learning agents. We start with simulations of a "minimal-assumptions" multi-agent model (with respect to pre-built constraints, architecture, and supervision) and then add potentially helpful components one by one to probe the necessary and sufficient conditions for the emergence of communicative patterns. To isolate the properties of communicative systems affected by our interventions, we have developed a set of metrics for analyzing multi-agent communication.
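
The project's own metrics are not reproduced here, but one standard measure of convention formation is the mutual information between intended meanings and emitted signals, estimated from interaction logs: it is high when agents use signals systematically and near zero when signaling is random or degenerate. A minimal sketch (function name is my own):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(meaning; signal) in bits from logged (meaning, signal) pairs."""
    n = len(pairs)
    joint = Counter(pairs)                   # empirical joint distribution
    m_marg = Counter(m for m, _ in pairs)    # meaning marginal
    s_marg = Counter(s for _, s in pairs)    # signal marginal
    mi = 0.0
    for (m, s), c in joint.items():
        p_ms = c / n
        # p(m,s) * log2( p(m,s) / (p(m) * p(s)) )
        mi += p_ms * log2(p_ms * n * n / (m_marg[m] * s_marg[s]))
    return mi
```

A perfect one-to-one convention over k meanings yields log2(k) bits, while a population that always emits the same signal yields 0.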

Collaborators: Arseny Moskvichev, Rob Goldstone

Dubova, M., Moskvichev, A., & Goldstone, R. (2020). Reinforcement Communication Learning in Different Social Network Structures. ICML 2020 1st Language and Reinforcement Learning Workshop. (see the 5-minute video presentation from ICML LaReL)

Dubova, M., & Moskvichev, A. (2020). Effects of supervision, population size, and self-play on multi-agent reinforcement learning to communicate. Artificial Life Conference Proceedings (pp. 678-686).

Categorical perception

We are studying how unsupervised and task-dependent perceptual learning mechanisms support adaptive concept learning in humans and artificial neural networks. We formalize multi-task perceptual learning with Bayesian models and with a convolutional beta-VAE neural network trained both to reconstruct and to categorize perceptual inputs. We conduct behavioral experiments to compare model predictions with human learning.
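
As a rough illustration of the composite objective such a multi-task model optimizes, here is a numpy sketch of a beta-VAE loss augmented with a categorization term. The specific losses and weights (`beta`, `task_weight`) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def multitask_vae_loss(x, x_hat, mu, log_var, logits, label,
                       beta=4.0, task_weight=1.0):
    """Composite objective: reconstruction + beta-weighted KL + categorization."""
    # pixel reconstruction error (unsupervised term)
    recon = np.mean((x - x_hat) ** 2)
    # KL( N(mu, exp(log_var)) || N(0, I) ) for a diagonal Gaussian posterior
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    # cross-entropy of the category head (task-dependent term)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    ce = -np.log(probs[label])
    return recon + beta * kl + task_weight * ce
```

Raising `beta` pressures the latent code toward the unsupervised prior, while raising `task_weight` pressures it toward category-diagnostic features, which is one way to frame the tension the project investigates.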

Collaborator: Rob Goldstone

Preprint: Dubova, M. & Goldstone, R. (2020). The Influences of Category Learning on Perceptual Reconstructions (pending minor revisions at Cognitive Science).

Poster (presented at the 61st Annual Meeting of the Psychonomic Society)

Adaptation aftereffects

We conducted several experimental studies to identify the factors that determine whether visual adaptation produces an assimilative or a contrastive aftereffect. Drawing on these data, we developed a probabilistic model of a potential common mechanism underlying adaptation aftereffects in opposite directions. We formalized the alterations of perception that occur after short-term adaptation as the result of Bayesian inference based on learning the perceptual structure of the stimuli.
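
The basic Bayesian ingredient of such a model is a conjugate Gaussian update in which the percept is read off as the posterior mean. This sketch shows only that building block; the model in the paper additionally involves learning the category structure of the stimuli, which this simple update does not capture:

```python
def gaussian_posterior(prior_mu, prior_var, obs, obs_var):
    """Conjugate Gaussian update: percept = posterior mean over the stimulus."""
    w = prior_var / (prior_var + obs_var)            # weight on the observation
    post_mu = prior_mu + w * (obs - prior_mu)        # shrinks obs toward the prior
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mu, post_var
```

With a prior whose mean has drifted toward the adaptor during exposure, the posterior mean for a test stimulus is pulled toward the adaptor (an assimilative shift); producing contrastive shifts requires the richer learned stimulus structure mentioned above.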

Collaborator: Arseny Moskvichev

Dubova, M., & Moskvichev, A. (2019). Adaptation Aftereffects as a Result of Bayesian Categorization. Proceedings of the 41st Annual Meeting of the Cognitive Science Society (pp. 1669-1675).

Semantic similarity detection

We set out to capture between-sentence similarity with a combination of different metrics, ranging from simple word overlap to distances between sentence-embedding representations. The work grew beyond its initial scope when we obtained unrealistically high performance scores and realized that the evaluation metric that had been used to select algorithms for many years was biased. In the end, we not only developed a new semantic similarity detection method but also proposed a new evaluation framework for the task.
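
As a simple illustration of blending a surface-overlap metric with an embedding-space metric (not the method from the papers; function names and the blending weight are my own), consider:

```python
import numpy as np

def jaccard(a, b):
    """Token-overlap similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def combined_similarity(s1, s2, emb1, emb2, alpha=0.5):
    """Blend surface overlap with embedding-space similarity."""
    return alpha * jaccard(s1, s2) + (1 - alpha) * cosine(emb1, emb2)
```

The surface term catches near-verbatim overlap that embeddings can smooth over, while the embedding term catches paraphrases with little word overlap, which is why combining the two families of metrics can help.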

Collaborator: Anton Belyy

Belyy, A., Dubova, M., & Nekrasov, D. (2018). Improved evaluation framework for complex plagiarism detection. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 157-162).

Belyy, A. V., & Dubova, M. A. (2018). Framework for Russian plagiarism detection using sentence embedding similarity and negative sampling. Dialogue, 1.