KrishnaswamyLab/NeuroMuse

# NeuroMuse Creativity

A deep learning framework for analyzing brain states during musical creativity from fNIRS (functional near-infrared spectroscopy) data. The project implements several neural architectures to classify improvisation versus scale-playing states and explores the neural dynamics of creative musical expression.

## Overview

This project analyzes fNIRS brain-imaging data collected during a musical creativity task in which musicians alternated between playing scales (non-creative) and improvising (creative). The framework implements and compares several deep learning approaches:

- **Baseline Models:** MLP classifiers on raw fNIRS signals and on scattering coefficients
- **Temporal Models:** RNN/LSTM architectures that capture temporal dynamics
- **Topological Models:** persistent homology and Vietoris-Rips complexes
- **Geometric Models:** manifold-curvature analysis
- **Combined Architecture:** a multi-modal approach that fuses features from the models above
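
As a concrete illustration of the simplest entry in this list, here is a minimal sketch of an MLP baseline for the binary improv-vs-scale classification. The layer sizes and the 48-feature input are illustrative assumptions (48 matches the channel count described below), not the repository's actual configuration in `SCRIPTS/models.py`:

```python
import torch
import torch.nn as nn

class BaselineMLP(nn.Module):
    """Hypothetical MLP baseline: raw fNIRS sample -> 2 class logits."""

    def __init__(self, n_features=48, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # two classes: improvisation vs. scale
        )

    def forward(self, x):
        return self.net(x)

model = BaselineMLP()
logits = model(torch.randn(8, 48))  # a batch of 8 single-timepoint samples
print(logits.shape)  # torch.Size([8, 2])
```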

## Important: Scattering Coefficients Data

The file `DATA/SCATTERING_COEFFICIENTS/combined_scattering_data.csv` exceeds GitHub's 100 MB file-size limit and is therefore not included in the repository.

### Generating the Scattering Coefficients

To generate this file locally:

1. Open the notebook `NOTEBOOKS/00_data_preparation.ipynb`
2. Set `GENERATE_COEFFICIENTS = True` (line 6)
3. Run the notebook to generate the scattering coefficients

The generated file is saved to `DATA/SCATTERING_COEFFICIENTS/combined_scattering_data.csv`.
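
Once generated, the coefficients can be loaded with pandas. The column names below are illustrative assumptions; the real schema is whatever `NOTEBOOKS/00_data_preparation.ipynb` writes. A tiny in-memory stand-in is parsed here instead of the real file:

```python
import io
import pandas as pd

# Path where the notebook writes the real file:
csv_path = "DATA/SCATTERING_COEFFICIENTS/combined_scattering_data.csv"

# Illustrative stand-in with hypothetical columns:
sample = io.StringIO(
    "subject,label,coef_0,coef_1\n"
    "1,improv,0.12,0.34\n"
    "1,scale,0.05,0.21\n"
)
df = pd.read_csv(sample)  # for the real data: pd.read_csv(csv_path)
print(df.shape)  # (2, 4)
```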

## Project Structure

```
.
├── DATA/
│   ├── RAW_FNIRS_DATA/           # Raw fNIRS recordings (48 channels)
│   ├── PREPARED_FNIRS_DATA/      # Preprocessed fNIRS data
│   ├── POS_DATA/                 # 3D channel coordinates (.pos files)
│   ├── SCATTERING_COEFFICIENTS/  # Scattering transform features
│   └── fnirs_data_pipeline.py    # Data processing utilities
│
├── NOTEBOOKS/
│   ├── 00_data_preparation.ipynb    # Data preprocessing & scattering generation
│   ├── 01_baseline_models.ipynb     # MLP baseline experiments
│   ├── 02_thesis_model.ipynb        # Main thesis implementation
│   ├── 03_topological_model.ipynb   # Topological data analysis
│   ├── 04_rnn_model.ipynb           # Temporal sequence modeling
│   ├── 05_curvature.ipynb           # Geometric manifold analysis
│   ├── 06_visualization.ipynb       # Results visualization
│   └── 07_combined_model.ipynb      # Multi-modal fusion architecture
│
├── SCRIPTS/
│   ├── config.py                    # Global configuration & parameters
│   ├── dataprep.py                  # Data loading & preprocessing
│   ├── models.py                    # Core neural network architectures
│   ├── attention.py                 # Attention mechanisms
│   ├── train.py                     # Training loops & evaluation
│   ├── scattering.py                # Scattering transform implementation
│   ├── xyzcoords.py                 # 3D coordinate processing
│   ├── rnn_model.py                 # RNN/LSTM implementations
│   ├── rnn_training.py              # RNN-specific training
│   ├── topological_model.py         # Topological neural networks
│   ├── topological_training.py      # Topological training utilities
│   ├── curvature_models.py          # Curvature-based architectures
│   ├── curvature_training.py        # Geometric training methods
│   ├── combined_model_v2.py         # Multi-modal fusion network
│   ├── combined_training_v2.py      # Combined model training
│   ├── baseline_interval_model.py   # Interval-based baseline
│   ├── baseline_interval_training.py # Interval model training
│   ├── cross_validation_experiments.py # Cross-validation framework
│   ├── attention_visualization.py   # Attention weight analysis
│   ├── trajectory_visualization.py  # Latent space trajectories
│   └── comparison_visualization.py  # Model comparison plots
│
├── RESULTS/                      # Generated during experiments
│   ├── cross_validation_results/
│   ├── attention_results/
│   ├── latent_space_visualization_results/
│   ├── trajectory_comparison_results/
│   ├── final_visualization_results/
│   └── model_checkpoints/
│
└── README.md
```

## Data Description

### fNIRS Data

- **Channels:** 48 (24 per hemisphere)
- **Sampling rate:** 10 Hz
- **Duration:** ~13 minutes per subject
- **Subjects:** 17 professional musicians
- **Total samples:** 133,450 (7,850 per subject)
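
The numbers above imply a per-subject recording of shape (time, channels). A minimal sketch, assuming the data is stacked into a single array (the actual loading code lives in `SCRIPTS/dataprep.py` and `DATA/fnirs_data_pipeline.py`; the array layout here is an assumption):

```python
import numpy as np

N_SUBJECTS = 17
SAMPLES_PER_SUBJECT = 7850  # ~13 min at 10 Hz
N_CHANNELS = 48

# Synthetic placeholder with the shape implied by the Data Description:
data = np.zeros((N_SUBJECTS, SAMPLES_PER_SUBJECT, N_CHANNELS))
print(N_SUBJECTS * SAMPLES_PER_SUBJECT)  # 133450 total samples
```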

### Task Paradigm

The experimental protocol follows Tachibana et al.'s paradigm with alternating blocks:

- **Rest periods:** baseline brain activity
- **Scale playing:** non-creative, structured musical performance
- **Improvisation:** creative, spontaneous musical generation
- **Sham:** control condition

Each rest/task block lasts 400 samples (40 seconds at 10 Hz), and a session has the following structure:

1. Pre-task baseline (250 samples)
2. Alternating Rest → Task blocks (400 samples each)
3. Four repetitions each of the Improv and Scale conditions
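
The timing above can be turned into per-sample labels. This is a sketch of an idealized baseline + alternating rest/task sequence; the actual block order (including where the sham blocks fall) is an assumption, not the protocol's exact schedule:

```python
FS_HZ = 10            # sampling rate from the Data Description
BLOCK_SAMPLES = 400   # 40 s per block at 10 Hz
BASELINE_SAMPLES = 250

def block_labels(task_order):
    """Per-sample labels: baseline, then a rest block before each task block."""
    labels = ["baseline"] * BASELINE_SAMPLES
    for task in task_order:
        labels += ["rest"] * BLOCK_SAMPLES
        labels += [task] * BLOCK_SAMPLES
    return labels

# 4 repetitions each of improv and scale (interleaving order assumed):
labels = block_labels(["improv", "scale"] * 4)
print(len(labels))  # 250 + 16 * 400 = 6650
```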

## Dependencies

The project requires the following Python packages:

- `torch` (PyTorch deep learning)
- `numpy` (numerical computing)
- `pandas` (data manipulation)
- `scikit-learn` (ML utilities)
- `matplotlib` (plotting)
- `seaborn` (statistical visualization)
- `torch-topological` (topological layers)
- `tqdm` (progress bars)
- `IPython` (notebook support)
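
These can be installed with pip. The names below are the PyPI names I would expect for the listed packages (the exact name `torch-topological` on PyPI is an assumption); the repository does not ship a requirements file, so pin versions yourself if you need reproducibility:

```shell
pip install torch numpy pandas scikit-learn matplotlib seaborn torch-topological tqdm ipython
```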
