IoB Middleware
Overview

IoB Middleware aims to develop technology for sharing information extracted from brain activity directly with computers or with other brains. Specifically, we will create AI methods that extract sensory and thought information from brain activity, develop the mathematical theory needed to share the extracted information, and verify the technology through animal experiments.

Direct brain-to-brain communication requires two technologies: one to decipher the neural codes used by individual brains, and one to transmit the deciphered codes from brain to brain. However, the neural code is likely to differ from person to person, so the codes must be translated between brains. Likewise, controlling devices with brain activity requires translating between the brain's code and the device's. Such information-translation technology between different systems is one of the core technologies for the success of this project, and the main goal of this research item is to develop the mathematical theory underlying it and to engineer its implementation.
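
As a toy illustration of what such code translation could look like, the sketch below (with entirely made-up data) assumes two 2-D "neural codes" for the same stimuli are related by an unknown linear map, and recovers that map from paired observations by ordinary least squares. This is a minimal sketch of the idea, not the project's actual method.

```python
# Hypothetical sketch: estimating a linear "code translator" W that maps
# brain A's 2-D neural code to brain B's, from paired observations of the
# same stimuli, by solving the normal equations (A^T A) W = A^T B.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(x, y):
    yt = transpose(y)
    return [[sum(a * b for a, b in zip(row, col)) for col in yt] for row in x]

def solve2(m, v):
    """Solve a 2x2 linear system m @ x = v by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def fit_translator(codes_a, codes_b):
    """Least-squares estimate of W such that codes_a @ W ~ codes_b."""
    ata = matmul(transpose(codes_a), codes_a)   # 2x2 Gram matrix
    atb = matmul(transpose(codes_a), codes_b)   # 2x2 right-hand side
    cols = [solve2(ata, [atb[0][j], atb[1][j]]) for j in range(2)]
    return transpose(cols)                       # columns -> matrix W

# Paired codes for four shared stimuli; ground-truth map is [[2, 0], [1, 1]].
codes_a = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
true_w = [[2.0, 0.0], [1.0, 1.0]]
codes_b = matmul(codes_a, true_w)

w_hat = fit_translator(codes_a, codes_b)
```

Because the toy data are noiseless and the stimulus set spans the code space, the fitted map recovers the ground truth exactly; with real recordings, noise and nonlinearity make this the starting point rather than the answer.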

Member

  • SASAI Shuntaro, Ph.D.

    Team Leader
    Neurotechnology R&D Unit, Araya Inc. 

  • HAYASHI Ryusuke, Ph.D.

    Senior Researcher
    Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST)  

    To develop IoB technology that reads neural information from the brain for avatar control and brain-to-brain communication, we need a mathematical basis for standardizing the representation of conceptual information, which differs among brains and AI models. In this research, we will focus mainly on conceptual information extracted from image data and aim to develop a mathematical infrastructure that realizes a common representation of such information. Furthermore, we will record neural data from the visual cortex and experimentally verify the developed technology. Through this research, we will contribute to new technologies that expand human abilities.
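
One standard way to ask whether two systems share conceptual structure despite using different coordinates is to compare their pairwise-distance patterns (representational similarity analysis). The sketch below uses hypothetical toy embeddings, not data from this project:

```python
# Hypothetical sketch of representational similarity analysis (RSA): two
# systems (say, a brain and an AI model) encode the same stimuli in
# different coordinate systems, yet their pairwise-distance patterns agree,
# one common signature of shared conceptual structure.

from math import dist, sqrt

def rdm(embeddings):
    """Upper-triangle pairwise distances (representational dissimilarity)."""
    n = len(embeddings)
    return [dist(embeddings[i], embeddings[j])
            for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# System 2 is system 1 rotated by 90 degrees and doubled in scale:
# different coordinates, identical relational geometry.
sys1 = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.5, 0.5]]
sys2 = [[0.0, 2.0], [-2.0, 0.0], [0.0, -2.0], [-1.0, 1.0]]

similarity = pearson(rdm(sys1), rdm(sys2))
```

Here the correlation is exactly 1 because the second space is a similarity transform of the first; a common-representation infrastructure would aim to make such correspondences explicit and usable, not merely to detect them.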

  • OIZUMI Masafumi, Ph.D.

    Associate Professor
    Graduate School of Arts and Sciences,
    The University of Tokyo

    In this research project, using methods from control theory and nonequilibrium statistical mechanics, we will (1) estimate mental fatigue from brain activity, and (2) estimate the optimal input needed to achieve specific brain-state transitions, a requirement for direct brain-to-brain communication. More specifically, we will develop a theoretical framework for quantifying the transition cost between brain states in stochastic (non)linear neural systems, and will approach both mental-fatigue estimation and optimal control input to the brain from the perspective of transition cost and efficiency.
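
As a minimal illustration of transition cost in the linear case (a toy scalar system, not the project's actual models), the minimum-energy input driving x[t+1] = a*x[t] + b*u[t] to a target state has a closed form via the controllability Gramian:

```python
# Hypothetical toy sketch: for scalar linear dynamics x[t+1] = a*x[t] + b*u[t],
# the minimum-energy input sequence reaching a target in T steps is
# u[t] = b * a^(T-1-t) * (target - a^T * x0) / G, with Gramian
# G = sum_k b^2 * a^(2k). The residual energy sum(u^2) is one simple
# notion of the transition cost between the two states.

def min_energy_input(a, b, x0, target, steps):
    gramian = sum(b * b * a ** (2 * k) for k in range(steps))
    gap = target - a ** steps * x0   # what free dynamics alone won't close
    return [b * a ** (steps - 1 - t) * gap / gramian for t in range(steps)]

def simulate(a, b, x0, inputs):
    x = x0
    for u in inputs:
        x = a * x + b * u
    return x

a, b, x0, target, steps = 0.9, 1.0, 0.0, 1.0, 5
u = min_energy_input(a, b, x0, target, steps)
final = simulate(a, b, x0, u)
cost = sum(ui * ui for ui in u)   # transition cost of this state change
```

The project's framework generalizes well beyond this: stochastic and nonlinear dynamics replace the scalar recursion, but the same question, what input reaches a target brain state at what cost, is being asked.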

  • Kai ARULKUMARAN, Ph.D.

    Team Leader
    Neurotechnology R&D Unit, Araya Inc.

    In this research project, we will develop AI systems that act autonomously to fulfill the goals of a user, which may be specified in forms such as natural language or via a BMI. To do so, we will focus on goal-based reinforcement learning agents, which learn to perform tasks through interaction with their environment and reward signals. The agents will also need continual learning capabilities in order to adapt to changes after deployment. Furthermore, we will develop multi-agent systems that can infer the goals of other agents, such as humans, in order to work on collaborative tasks. The agents can take physical or virtual forms, allowing applications in domains from robotics to virtual avatars.
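
A goal-based agent of this kind can be sketched, in heavily simplified form, as tabular Q-learning conditioned on a goal. The corridor environment below is a made-up toy, illustrative only:

```python
# Hypothetical minimal sketch of a goal-conditioned agent: tabular
# Q-learning on a 5-cell corridor, where the reward depends on a goal cell
# given to the agent, so one learned table serves every possible goal.

import random

N, ACTIONS = 5, (-1, 1)          # corridor cells 0..4; move left or right
Q = {}                           # (state, goal, action) -> value

def q(s, g, a):
    return Q.get((s, g, a), 0.0)

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2):
    rng = random.Random(0)
    for _ in range(episodes):
        s, g = rng.randrange(N), rng.randrange(N)   # random start and goal
        for _ in range(20):
            if s == g:
                break
            a = rng.choice(ACTIONS) if rng.random() < eps else \
                max(ACTIONS, key=lambda x: q(s, g, x))
            s2 = min(N - 1, max(0, s + a))
            r = 1.0 if s2 == g else 0.0
            best_next = 0.0 if s2 == g else max(q(s2, g, x) for x in ACTIONS)
            Q[(s, g, a)] = q(s, g, a) + alpha * (r + gamma * best_next - q(s, g, a))
            s = s2

def rollout(s, g, limit=10):
    """Follow the greedy policy toward goal g; True if the goal is reached."""
    for _ in range(limit):
        if s == g:
            return True
        a = max(ACTIONS, key=lambda x: q(s, g, x))
        s = min(N - 1, max(0, s + a))
    return s == g

train()
```

After training, the greedy policy reaches any requested goal cell from any start; the goal plays the role that a natural-language instruction or BMI-derived intention would play in the full system.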

  • REKIMOTO Jun,Ph.D.

    Professor / Deputy Director
    Interfaculty Initiative in Information Studies, The University of Tokyo / Sony Computer Science Laboratories Inc. 

    In this research, we will develop technology to decode speech intentions from invasive, non-invasive, and non-contact measurements. We will address both deciphering the intended content of speech from these signals and inputting that content to another person's brain. In addition, using the mental-state extraction technology to be developed, we aim to convert an expert's sensory understanding into data and feed it back to the learner, supporting the acquisition of skills that are difficult to communicate verbally. This research is expected to enable new means of interaction between humans and computers.
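
As a deliberately simplified illustration of the decoding step (not the project's actual method), a nearest-centroid classifier can assign a short signal window to an intended word:

```python
# Hypothetical sketch of a minimal intention decoder: nearest-centroid
# classification of short signal windows. Real speech decoding uses far
# richer features and models; this shows only the basic decode step,
# and all "signals" below are made up.

from math import dist

def centroids(labeled_windows):
    """Average the training windows of each intention class."""
    out = {}
    for label, windows in labeled_windows.items():
        out[label] = [sum(col) / len(windows) for col in zip(*windows)]
    return out

def decode(window, cents):
    """Return the class whose centroid is closest to the window."""
    return min(cents, key=lambda label: dist(window, cents[label]))

# Made-up 4-sample "signal windows" for two imagined words.
train = {
    "yes": [[1.0, 0.9, 1.1, 1.0], [0.9, 1.0, 1.0, 1.1]],
    "no":  [[-1.0, -1.1, -0.9, -1.0], [-1.1, -0.9, -1.0, -1.0]],
}
cents = centroids(train)
prediction = decode([0.8, 1.2, 0.9, 1.0], cents)
```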

  • KOIKE Hideki, Dr.

    Professor
    School of Computing, Tokyo Institute of Technology

    In this research, we will develop a method for skill acquisition that takes brain information into account in addition to physical information, in contrast to conventional skill-acquisition research that relies on physical information alone. First, we will use deep learning to predict near-future posture from physical and brain information measured during a specific movement. Second, we will build a spatio-temporally distorted skill-acquisition environment using virtual reality. We will then identify the optimal spatio-temporal distortion parameters based on the measured brain states.
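
As a toy stand-in for the posture-prediction idea (the research itself uses deep learning on combined physical and brain signals), even a linear autoregressive model fitted to a past movement trace can extrapolate one step into the near future:

```python
# Hypothetical sketch: fit x[t+1] ~ c1*x[t] + c2*x[t-1] by least squares on
# an observed 1-D "joint-angle" trace, then predict the next sample. The
# trace is a made-up sinusoid, which an AR(2) model captures exactly
# because sin satisfies x[t+1] = 2*cos(w)*x[t] - x[t-1].

from math import cos, sin, pi

def fit_ar2(trace):
    """Least-squares fit of x[t+1] = c1*x[t] + c2*x[t-1]."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(trace) - 1):
        x1, x2, y = trace[t], trace[t - 1], trace[t + 1]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * y;  r2 += x2 * y
    det = s11 * s22 - s12 * s12            # 2x2 normal equations
    return ((s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

w = 2 * pi / 20                            # oscillation frequency
trace = [sin(w * t) for t in range(40)]    # observed movement so far
c1, c2 = fit_ar2(trace)
predicted = c1 * trace[-1] + c2 * trace[-2]   # near-future posture estimate
actual = sin(w * 40)
```

A deep network plays the same role as the fitted coefficients here, but can additionally condition the prediction on brain signals and handle movements with no simple closed form.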