|
|
--- |
|
|
license: cc-by-nc-sa-4.0 |
|
|
language: |
|
|
- en |
|
|
- tr |
|
|
tags: |
|
|
- ai |
|
|
- brain |
|
|
- eeg |
|
|
- neuroscience |
|
|
- deeplearning |
|
|
- mind |
|
|
- bci |
|
|
- text |
|
|
- ieeg |
|
|
- emg |
|
|
- sentence |
|
|
- number |
|
|
- mind-to-text |
|
|
- dl |
|
|
- artificial-intelligence |
|
|
- first-of-world |
|
|
- eeg-to-text |
|
|
pipeline_tag: text-generation |
|
|
library_name: keras
|
|
--- |
|
|
|
|
|
# bai-64 Mind | EEG-to-Text Model [BETA] 🧠✍️
|
|
|
|
|
Classify imagined speech commands from EEG brain signals using deep learning. |
|
|
|
|
|
 |
|
|
 |
|
|
 |
|
|
|
|
|
## Overview |
|
|
|
|
|
This project enables Brain-Computer Interface (BCI) applications by decoding imagined directional commands ("Up", "Down", "Left", "Right") from EEG brain signals. Users think about a direction without speaking, and the system predicts their intended command. |
|
|
|
|
|
## Quick Start |
|
|
|
|
|
### Installation |
|
|
|
|
|
```bash |
|
|
pip install -r requirements.txt |
|
|
``` |
|
|
|
|
|
### Basic Usage |
|
|
|
|
|
```python |
|
|
import numpy as np |
|
|
from tensorflow import keras |
|
|
|
|
|
# Load pre-trained model |
|
|
model = keras.models.load_model('path/to/your/model.h5') |
|
|
|
|
|
# Your EEG data (1 second, 64 channels, 250 Hz sampling) |
|
|
eeg_data = np.random.randn(250, 64) # Replace with real EEG |
|
|
|
|
|
# Make prediction |
|
|
prediction = model.predict(eeg_data.reshape(1, 250, 64)) |
|
|
classes = ['Up', 'Down', 'Left', 'Right'] |
|
|
predicted_command = classes[np.argmax(prediction)] |
|
|
|
|
|
print(f"Predicted command: {predicted_command}") |
|
|
print(f"Confidence: {np.max(prediction):.3f}") |
|
|
``` |
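The exact preprocessing applied during training is not documented here; a common first step for EEG classifiers is per-channel standardization of each trial. A minimal sketch, assuming z-scoring is appropriate for this model (the function name is illustrative, not part of the released code):

```python
import numpy as np

def zscore_per_channel(eeg: np.ndarray) -> np.ndarray:
    """Standardize each of the 64 channels of a (250, 64) EEG trial."""
    mean = eeg.mean(axis=0, keepdims=True)  # per-channel mean over time
    std = eeg.std(axis=0, keepdims=True)    # per-channel standard deviation
    return (eeg - mean) / (std + 1e-8)      # epsilon avoids division by zero

trial = np.random.randn(250, 64)
normalized = zscore_per_channel(trial)
print(normalized.shape)  # (250, 64)
```

Feed `normalized.reshape(1, 250, 64)` to `model.predict` exactly as in the snippet above.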
|
|
|
|
|
## Real-Time BCI Application |
|
|
|
|
|
```python |
|
|
from analysis import InnerSpeechAnalyzer |
|
|
|
|
|
# Initialize predictor |
|
|
analyzer = InnerSpeechAnalyzer('path/to/your/model.h5') |
|
|
predictor = analyzer.create_real_time_predictor() |
|
|
|
|
|
# Real-time loop |
|
|
while True: |
|
|
eeg_data = capture_eeg_signal() # Your EEG acquisition function |
|
|
command, confidence = predictor.predict_thought(eeg_data) |
|
|
|
|
|
if confidence > 0.8: |
|
|
execute_command(command) # Your command execution |
|
|
print(f"Executing: {command}") |
|
|
``` |
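Raw per-window predictions can flicker between classes. One common stabilizer, sketched below, is a majority vote over the last few predictions that only releases a command once it clearly dominates (this `CommandSmoother` class is an illustrative assumption, not part of `InnerSpeechAnalyzer`):

```python
from collections import Counter, deque
from typing import Optional

class CommandSmoother:
    """Return a command only when it wins a majority vote over recent predictions."""

    def __init__(self, window: int = 5, min_agreement: int = 4):
        self.history = deque(maxlen=window)   # sliding window of predictions
        self.min_agreement = min_agreement    # votes needed to act

    def update(self, command: str) -> Optional[str]:
        self.history.append(command)
        winner, count = Counter(self.history).most_common(1)[0]
        return winner if count >= self.min_agreement else None

smoother = CommandSmoother()
stable = None
for cmd in ["Up", "Up", "Left", "Up", "Up"]:
    stable = smoother.update(cmd)
print(stable)  # "Up": 4 of the last 5 predictions agree
```

Combined with the confidence threshold above, this trades a little latency for far fewer spurious commands.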
|
|
|
|
|
## Hardware Requirements |
|
|
|
|
|
### EEG Device |
|
|
- **Channels**: 64 (10-20 system)
|
|
- **Sampling Rate**: 250+ Hz |
|
|
- **Impedance**: < 5 kΩ
|
|
- **Bandwidth**: 0.5-100 Hz |
|
|
|
|
|
### Recommended Devices |
|
|
- OpenBCI Cyton + Daisy (16 channels; the model expects 64, so a 64-channel setup is recommended)


- Emotiv EPOC X (14 channels; requires adapting to the model's 64-channel input)


- g.tec g.USBamp (professional-grade; supports 64 channels)
|
|
|
|
|
## Applications |
|
|
|
|
|
- 🦽 **Assistive Technology**: Control for paralyzed patients


- 🎮 **Gaming**: Mind-controlled games and VR


- 🤖 **Robotics**: Brain-controlled robot navigation


- 💻 **Silent Computing**: Hands-free computer control


- 🧪 **Research**: Neuroscience and BCI studies
|
|
|
|
|
## Data Format |
|
|
|
|
|
Your EEG data should be: |
|
|
- **Shape**: (250, 64) per trial |
|
|
- **Duration**: 1 second recording |
|
|
- **Channels**: 64 EEG electrodes |
|
|
- **Sampling**: 250 Hz |
|
|
- **Classes**: ["Up", "Down", "Left", "Right"] |
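Given those specifications, a trial can be shape-checked and band-pass filtered before inference. A sketch using `scipy` (already in the dependency list); the filter order and the assumption that the model accepts a filtered `float32` trial are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250          # sampling rate in Hz
N_SAMPLES = 250   # 1 second of data
N_CHANNELS = 64   # electrode count

def prepare_trial(raw: np.ndarray, low: float = 0.5, high: float = 100.0) -> np.ndarray:
    """Validate the (250, 64) shape and band-pass filter each channel."""
    if raw.shape != (N_SAMPLES, N_CHANNELS):
        raise ValueError(f"expected {(N_SAMPLES, N_CHANNELS)}, got {raw.shape}")
    b, a = butter(4, [low, high], btype="band", fs=FS)  # 0.5-100 Hz band
    return filtfilt(b, a, raw, axis=0).astype(np.float32)

trial = prepare_trial(np.random.randn(250, 64))
print(trial.shape, trial.dtype)  # (250, 64) float32
```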
|
|
|
|
|
## Features |
|
|
|
|
|
✅ **Ready-to-use** pre-trained model


✅ **Real-time prediction** for BCI applications


✅ **Custom training** with your own EEG data


✅ **Multiple architectures** (CNN-LSTM, Transformer)


✅ **EEG preprocessing** pipeline included


✅ **Cross-platform** support (Windows, macOS, Linux)
|
|
|
|
|
## Dependencies |
|
|
|
|
|
```bash |
|
|
tensorflow>=2.8.0,<3.0.0 |
|
|
scikit-learn>=1.0.0 |
|
|
numpy>=1.21.0 |
|
|
scipy>=1.7.0 |
|
|
pandas>=1.3.0 |
|
|
mne>=1.0.0 |
|
|
matplotlib>=3.5.0 |
|
|
seaborn>=0.11.0 |
|
|
``` |
|
|
|
|
|
## Example Use Cases |
|
|
|
|
|
### Wheelchair Control |
|
|
```python |
|
|
# User thinks "Up" → wheelchair moves forward


# User thinks "Left" → wheelchair turns left
|
|
``` |
|
|
|
|
|
### Smart Home |
|
|
```python |
|
|
# User thinks "up" → lights turn on


# User thinks "down" → lights turn off
|
|
``` |
|
|
|
|
|
### Gaming |
|
|
```python |
|
|
# User thinks "right" → character moves right
|
|
# Mental commands for game control |
|
|
``` |
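All three use cases share the same shape: map each predicted class to an action. A minimal dispatch sketch (the actuator functions are hypothetical placeholders for your own hardware or game API):

```python
# Hypothetical actuators -- replace with real hardware or game calls.
def move_forward(): return "moving forward"
def move_back():    return "moving back"
def turn_left():    return "turning left"
def turn_right():   return "turning right"

# One entry per model class.
COMMAND_ACTIONS = {
    "Up": move_forward,
    "Down": move_back,
    "Left": turn_left,
    "Right": turn_right,
}

def execute_command(command: str) -> str:
    """Run the action mapped to a predicted command; ignore unknown labels."""
    action = COMMAND_ACTIONS.get(command)
    return action() if action is not None else "ignored"

print(execute_command("Left"))   # turning left
print(execute_command("Blink"))  # ignored
```

Swapping the dictionary values is all it takes to retarget the same model from a wheelchair to lights or a game character.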
|
|
|
|
|
## Support |
|
|
|
|
|
- **Web Site**: [Neurazum](https://neurazum.com) |
|
|
- **Email**: [[email protected]](mailto:[email protected]) |
|
|
|
|
|
## Note |
|
|
|
|
|
**This project is in the *BETA* phase. Use it at your own risk. Because the model is still in development, low accuracy rates may be observed. In addition, since the data belongs to <span style="color: #ff8d26; "><b>Neurazum</b></span>, the function structure may change in future models.**
|
|
|
|
|
## License |
|
|
|
|
|
CC-BY-NC-SA 4.0 - see [LICENSE](https://creativecommons.org/licenses/by-nc-sa/4.0/) file for details. |
|
|
|
|
|
### Acknowledgments |
|
|
|
|
|
1. Neurazum's own dataset was used; this dataset is closed source.
|
|
2. Nieto, N., Peterson, V., Rufiner, H. L., Kamienkowski, J. E., & Spies, R. (2021). |
|
|
"Thinking out loud, an open access EEG-based BCI dataset for inner speech recognition." |
|
|
bioRxiv. https://doi.org/10.1101/2021.04.19.440473 |
|
|
|
|
|
--- |
|
|
|
|
|
*Enable mind-controlled technology with EEG! 🚀*
|
|
|
|
|
<span style="color: #ff8d26; "><b>Neurazum</b> AI Department</span> |