The potential of passive brain-computer interfaces (BCIs) to revolutionize human-computer interaction is immense. By interpreting our naturally occurring mental states, they enable computers to respond to us in entirely new ways.
Using passive BCIs, a trusted device can know how we perceive, judge, and understand the world around us—and get all this information directly from the human brain.
This technology, which translates brain activity into interpretable mental states, also has benefits that go beyond human-computer interaction. While our own computers can learn to “understand” us based on our brain activity, a passive BCI-based artificial intelligence can not only learn to understand us, but also perform “mental feats” that were previously reserved for humans.
We introduced the term neuroadaptive to refer to such technology: technology that acquires implicit input through a passive BCI and uses it in any number of ways to enable new possibilities for human-computer symbiosis.
At Zander Labs, we are at the forefront of exploring these possibilities, constantly pushing the boundaries of what we currently believe is achievable.
To further explore these innovations, we focus on three key areas: neuro-enabling technology, next-generation EEG sensors, and interdisciplinary expertise:
We build the tools that make a neuroadaptive world possible. Our cutting-edge neuro-enabling technology bridges the gap between the human brain and digital systems.
By harnessing the power of passive brain-computer interfaces, we enable a unique, seamless flow of information between users and computers for more intuitive and efficient communication.
It all starts with the best signal. Our hardware enables continuous, comfortable and unobtrusive measurements of brain activity over long periods of time. We are working on a complete data acquisition and decoding pipeline in an easy-to-use package.
Science to shift paradigms. Our team includes experts from various fields, including neuroscience, artificial intelligence, machine learning, data science, physics, psychology and more. This interdisciplinary approach enables us to develop comprehensive solutions that are evidence-based and technologically advanced.
Our expertise lies in the subtle integration of the human mind with technology through passive brain-computer interfaces.
A passive brain-computer interface system can read and interpret the spontaneous, ever-present brain activity of humans. This brain activity reflects aspects of naturally occurring cognitive and affective mental states—including those the person may not be aware of. This is possible because a living brain is never silent: there is a constant flow of neural activity responding to a sight, a smell, a memory or a fleeting thought—to any and all aspects of the human experience.
A passive BCI uses algorithms that have been developed and trained to recognize specific patterns in brain activity and reveal their cognitive and affective components.
The term “passive” refers to the fact that the user makes no effort to actively or explicitly communicate with a computer. Instead, the system remains in the background, “listening” to the user’s brain activity and adapting accordingly while the user concentrates on other tasks. The result is an implicit communication channel that requires no effort, volition or awareness to function—a paradigm shift in human-computer interaction.
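The pattern-recognition step described above can be pictured with a minimal sketch. Everything here is an illustrative assumption: the feature vectors are synthetic stand-ins for preprocessed EEG band-power features, and the nearest-centroid classifier is a deliberately simple placeholder for the trained decoders mentioned in the text.

```python
import numpy as np

# Illustrative sketch only: a nearest-centroid classifier mapping synthetic
# "EEG feature vectors" to mental-state labels. A real passive BCI would
# operate on artifact-cleaned, preprocessed EEG epochs.

rng = np.random.default_rng(0)

def train(features, labels):
    """Learn one centroid (mean feature vector) per mental-state label."""
    return {lab: features[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def decode(model, epoch):
    """Assign a new epoch the label of its nearest centroid."""
    return min(model, key=lambda lab: np.linalg.norm(epoch - model[lab]))

# Synthetic training data: two mental states with different feature means.
low  = rng.normal(0.0, 0.3, size=(50, 8))   # e.g. "low workload"
high = rng.normal(1.0, 0.3, size=(50, 8))   # e.g. "high workload"
X = np.vstack([low, high])
y = np.array(["low"] * 50 + ["high"] * 50)

model = train(X, y)
print(decode(model, rng.normal(1.0, 0.3, size=8)))  # likely "high"
```

The key property this toy example shares with a real passive BCI is that classification happens continuously in the background, on whatever activity the brain produces, without any deliberate act of communication by the user.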
The acquisition of brain activity remains the first step in any brain-computer interface. Our Mobile EEG Suite offers a user-friendly, unobtrusive and high-quality portable solution that enables real-time monitoring as well as secure, on-device analysis of brain activity in different environments. This mobile solution offers flexibility and convenience, enabling accurate data collection and neuroadaptive applications outside the traditional lab setting.
Current BCI technologies are often limited by the need for individual calibration of classifiers for specific users or electrode configurations, leading to a lack of transferability and scalability. Our research aims to revolutionize this aspect by developing universal classifiers that provide a plug-and-play BCI experience, regardless of the person, headset or environment.
To achieve this, four thousand datasets are currently being collected. Based on this data, we will develop a range of classifiers, each optimized for a specific mental state, such as workload, surprise, happiness or stress.
Since our universal classifiers will offer a plug-and-play experience, our technology can decode any number of mental processes in parallel. By combining a number of individual mental processes into a Multi-Dimensional Mental State (MDMS), we provide a more comprehensive representation of human mental activity, enabling a more accurate and in-depth understanding of user interactions.
Different combinations of classifiers lead to specialized MDMS. We are currently investigating which mental processes are most relevant for human-computer interaction in general. Our first comprehensive MDMS will be optimized to promote a truly natural and intuitive interaction between humans and technology.
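One way to picture a Multi-Dimensional Mental State is as a named vector of per-process classifier outputs. The sketch below assumes toy stand-in classifiers and a simple dictionary representation; the real decoders and the exact way their outputs are combined are not specified in this text.

```python
import numpy as np

# Illustrative sketch of an MDMS: several independently decoded mental
# processes, read from the same EEG epoch, combined into one state vector.

def mdms(epoch, classifiers):
    """Run every per-process classifier on the same epoch and collect
    the resulting values into one named mental-state vector."""
    return {name: clf(epoch) for name, clf in classifiers.items()}

# Toy stand-in "classifiers": each maps a feature vector to a value in [0, 1].
classifiers = {
    "workload": lambda e: float(np.clip(e.mean(), 0, 1)),
    "surprise": lambda e: float(np.clip(e.std(), 0, 1)),
    "stress":   lambda e: float(np.clip(e.max() - e.min(), 0, 1)),
}

epoch = np.array([0.2, 0.4, 0.3, 0.5])
state = mdms(epoch, classifiers)
print(state)  # one reading per decoded mental process
```

Because each dimension is decoded independently, a specialized MDMS is simply a different selection of classifiers passed into the same combination step.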
Our MDMS approach offers unique, comprehensive insights into human intelligence—ready for real-time communication with technology. We are designing novel AI learning processes that incorporate this cognitive data to continuously refine and improve system performance.
Our neuroadaptive AI solutions use decoded brain activity at different stages of learning and deployment and enable novel AI systems to learn faster, perform better, and behave in a way that is based on the kind of understanding of the world that can only be obtained from direct access to the human experience.
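To illustrate how decoded brain activity could feed into a learning process, here is a hedged sketch in which a simulated "satisfaction" decoding acts as an implicit reward for simple value learning. The decoder, the reward scheme and the learning rule are all illustrative assumptions, not Zander Labs' actual pipeline.

```python
import random

# Illustrative sketch: a decoded mental state used as implicit feedback.
# The agent picks actions; a simulated passive-BCI signal indicating user
# satisfaction is fed back as reward, with no explicit input from the user.

random.seed(1)
values = {"option_a": 0.0, "option_b": 0.0}  # learned action values
alpha = 0.2                                   # learning rate

def decoded_satisfaction(action):
    """Stand-in for a passive BCI decoding: this simulated user
    (unknowingly) prefers option_b."""
    base = 0.8 if action == "option_b" else 0.2
    return base + random.uniform(-0.1, 0.1)

for step in range(200):
    # epsilon-greedy choice between the two options
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # the decoded mental state acts as an implicit reward signal
    reward = decoded_satisfaction(action)
    values[action] += alpha * (reward - values[action])

print(max(values, key=values.get))  # the learned preference
```

The point of the sketch is the feedback loop itself: the system adapts to a signal the user never deliberately sent, which is what distinguishes neuroadaptive learning from learning driven by explicit ratings or commands.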
We're focused on a groundbreaking mission: to use real-time access to the human mind to enable computers to understand us, learn from us, and make sense of the world around us. This requires the seamless integration of the human mind with advanced technology.
Our path to this ambitious goal led to the project Neuroadaptivity for Autonomous Systems (NAFAS), an endeavor funded by the German federal agency Agentur für Innovation in der Cybersicherheit (“Innovation for Cybersecurity”) that seeks to revolutionize human-computer interaction and artificial intelligence through neuroadaptive technology.
With an investment of 30 million euros, this project will significantly advance neuroadaptive technology, a field with immense potential for the future of human-computer interaction and the development of human-compatible AI.