pyprf_motion

Population receptive field analysis for motion-sensitive early- and mid-level visual cortex.

This is an extension of the pyprf package. Compared to pyprf, pyprf_motion offers stimuli that were specifically optimized to elicit responses from motion-sensitive areas. On the analysis side, pyprf_motion adds several features required by this type of stimulation (model positions defined in polar coordinates, sub-TR temporal resolution for model creation, and cross-validation for model fitting), at the cost of some speed and flexibility. GPU processing is currently not supported.
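
Model positions in polar coordinates are pairs of eccentricity and polar angle rather than Cartesian x/y positions. A minimal numpy sketch of the convention (an illustration, not the package's internal code):

import numpy as np

# pRF model positions as eccentricity (deg) and polar angle (deg)
ecc = np.array([1.0, 2.0, 4.0])
pol = np.deg2rad(np.array([0.0, 90.0, 180.0]))

# equivalent Cartesian positions in degrees of visual angle
x = ecc * np.cos(pol)
y = ecc * np.sin(pol)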

Installation

pyprf_motion can be installed in one of two ways:

Option A: install via pip

pip install pyprf_motion

Option B: install from github repository

  1. (Optional) Create and activate a conda environment:
conda create -n env_pyprf_motion python=2.7
source activate env_pyprf_motion
conda install pip
  2. Clone the repository:
git clone https://github.com/MSchnei/pyprf_motion.git
  3. Install numpy, e.g. by running:
pip install numpy
  4. Install pyprf_motion with pip:
pip install /path/to/cloned/pyprf_motion
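
Whichever option you choose, a quick import in Python serves as a minimal sanity check that the installation succeeded (a convenience check, not an official test):

import pyprf_motion
print(pyprf_motion.__file__)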

Dependencies

Python 2.7

Package       Tested version
NumPy         1.14.0
SciPy         1.0.0
NiBabel       2.2.1
Cython        0.27.1
TensorFlow    1.4.0
scikit-learn  0.19.1
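
To compare your environment against these tested versions, you can print the installed versions from Python (a convenience sketch; note that scikit-learn is imported as sklearn):

import numpy, scipy, nibabel, sklearn

for mod in (numpy, scipy, nibabel, sklearn):
    print(mod.__name__ + ' ' + mod.__version__)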

How to use

1. Present stimuli and record fMRI data

The PsychoPy scripts in the stimulus_presentation folder can be used to map motion-sensitive visual areas (especially area hMT+) using the pRF framework.

  1. Specify your desired parameters in the config file.

  2. Run the createTexMasks.py file to generate the relevant masks and textures. These will be saved as numpy arrays in .npz format in a folder called MaskTextures in the parent directory.

  3. Run the createCond.py file to generate the condition order. Condition and target presentation orders will be saved as numpy arrays in .npz format in a folder called Conditions in the parent directory (see the sketch after this list for how to inspect these archives).

  4. Run the stimulus presentation file motLoc.py in PsychoPy.
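
The .npz archives written in steps 2 and 3 can be inspected with numpy. A minimal sketch, assuming a hypothetical file name (the actual names depend on your configuration):

import numpy as np

# hypothetical file name; adapt to what createTexMasks.py produced
arch = np.load('MaskTextures/masks.npz')
for key in arch.files:
    print(key + ': ' + str(arch[key].shape))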

2. Prepare spatial and temporal information for experiment as arrays

  1. Run prepro_get_spat_info.py in the prepro folder to obtain an array with the spatial information of the experiment. This should result in a 3d numpy array with shape [pixel x pixel x nr of spatial aperture conditions] that represents images of the spatial apertures stacked on top of each other.

  2. Run prepro_get_temp_info.py in the prepro folder to obtain an array with the temporal information of the experiment. This should result in a 2d numpy array with shape [nr of volumes across all runs x 3]. The first column represents unique identifiers of spatial aperture conditions. The second column represents onset times and the third durations (both in s).
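
A minimal sanity check for the two arrays, assuming hypothetical output file names:

import numpy as np

# hypothetical file names; adapt to what the prepro scripts produced
aprt = np.load('arySptExpInf.npy')  # [pixel x pixel x aperture conditions]
temp = np.load('aryTmpExpInf.npy')  # [volumes across all runs x 3]

assert aprt.ndim == 3
assert temp.ndim == 2 and temp.shape[1] == 3
print('apertures %s, events %s' % (aprt.shape, temp.shape))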

3. Prepare the input data

The input data should be motion-corrected, high-pass filtered and (optionally) distortion-corrected. If desired, spatial and temporal smoothing can be applied. The prepro folder contains auxiliary scripts for some of these steps.
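
As an illustration of the optional spatial smoothing step (a sketch using nibabel and scipy directly, not the package's own scripts; file names and FWHM are assumptions):

import nibabel as nib
import numpy as np
from scipy.ndimage import gaussian_filter

img = nib.load('func_moco_hpf.nii.gz')  # hypothetical input file
data = img.get_data().astype(np.float32)

# convert a smoothing FWHM in mm to a per-axis sigma in voxel units
fwhm_mm = 2.0
sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
sigma_vox = sigma_mm / np.asarray(img.header.get_zooms()[:3])

# smooth each volume in space only; the time axis stays untouched
smoothed = np.stack([gaussian_filter(data[..., t], sigma=sigma_vox)
                     for t in range(data.shape[-1])], axis=-1)
nib.save(nib.Nifti1Image(smoothed, img.affine, img.header),
         'func_smoothed.nii.gz')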

4. Adjust the csv file

Adjust the information in the config_default.csv file in the Analysis folder so that it matches your experiment and data. It is recommended to make a separate copy of the csv file for every subject, for example as sketched below.
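
One way to create such a copy (a convenience sketch, not part of pyprf_motion; the subject label is hypothetical):

import shutil

# subject-specific copy of the default config; edit the copy afterwards
shutil.copy('Analysis/config_default.csv', 'Analysis/config_sub-01.csv')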

5. Run pyprf_motion

Open a terminal and run

pyprf_motion -config path/to/custom_config.csv
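
For several subjects, the same call can be scripted, for example (the config paths are hypothetical):

import subprocess

for cfg in ['Analysis/config_sub-01.csv', 'Analysis/config_sub-02.csv']:
    subprocess.check_call(['pyprf_motion', '-config', cfg])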

License

The project is licensed under GNU General Public License Version 3.