Merged
135 commits
498344b
use new sample_data
qian-chu Dec 19, 2025
e132c3e
Format code with isort and ruff
github-actions[bot] Dec 19, 2025
4dd4bfb
Update sample_data.py
qian-chu Dec 19, 2025
94f67aa
Update sample_data.py
qian-chu Dec 19, 2025
e4da3ef
Merge branch 'main' into new_epochs
qian-chu Dec 21, 2025
7f367f2
Format code with isort and ruff
github-actions[bot] Dec 21, 2025
e0c77ee
Refactor epochs/events API and improve doc consistency
qian-chu Dec 22, 2025
b6e23e4
Format code with isort and ruff
github-actions[bot] Dec 22, 2025
6a9bac5
update reference
qian-chu Dec 22, 2025
2b0432f
update annot
qian-chu Dec 22, 2025
9106967
Format code with isort and ruff
github-actions[bot] Dec 22, 2025
b7becbc
Refactor Epochs class and update plotting for new API
qian-chu Dec 28, 2025
de8f53e
Format code with isort and ruff
github-actions[bot] Dec 28, 2025
726468c
Improve epoching docs and Dataset handling for native data
qian-chu Dec 29, 2025
7e28a5d
Format code with isort and ruff
github-actions[bot] Dec 29, 2025
fb227bb
update
qian-chu Jan 7, 2026
31c5b5c
adjust workflow
qian-chu Jan 7, 2026
b361cf3
Update main.yml
qian-chu Jan 7, 2026
822631c
Update pyneon/epochs.py
qian-chu Jan 15, 2026
5d12e0d
updated baseine correction, respecting circular columns
JGHartel Jan 15, 2026
25ea0d2
Merge branch 'new_epochs' of https://github.com/ncc-brain/PyNeon into…
JGHartel Jan 15, 2026
32cb0f0
Refactor and expand stream and event tests, minor fixes
qian-chu Jan 15, 2026
90bd91f
initial UNTESTED commit for upsampling homographies
JGHartel Jan 15, 2026
92a5467
Merge branch 'new_epochs' of https://github.com/ncc-brain/PyNeon into…
JGHartel Jan 15, 2026
cb55353
Update main.yml
qian-chu Jan 15, 2026
04cfda8
Merge branch 'new_epochs' of https://github.com/ncc-brain/PyNeon into…
JGHartel Jan 15, 2026
6ee60c6
upsampled homographies
JGHartel Jan 16, 2026
898eced
allowing apriltag detection window and additional setting to homograp…
JGHartel Jan 16, 2026
147eea5
more tests
qian-chu Jan 16, 2026
9770282
update (with UTC -> Unix typo corr)
qian-chu Jan 16, 2026
0af3ba7
Update epochs.py
qian-chu Jan 16, 2026
10904cd
Refactor AprilTag detection to use flat column format
qian-chu Jan 21, 2026
ce569f7
Refactor AprilTag detection to generic marker detection
qian-chu Jan 21, 2026
b0d6298
Update marker detection and data types, require OpenCV 4.7+
qian-chu Jan 22, 2026
525edee
Update detect_marker.py
qian-chu Jan 22, 2026
49946f3
Rename video processing modules and update tutorial output
qian-chu Jan 22, 2026
ec7f160
Refactor camera pose estimation and update imports
qian-chu Jan 22, 2026
0ee3099
Update video.py
qian-chu Jan 22, 2026
bb8aefa
Refactor marker detection and pose estimation APIs
qian-chu Jan 22, 2026
709b79f
Update surface_mapping.ipynb
qian-chu Jan 23, 2026
143df22
Refactor marker detection to use named corners
qian-chu Jan 23, 2026
7c99c91
Refactor marker detection and layout handling
qian-chu Jan 23, 2026
e035d84
Refactor marker and frame indexing to 'frame index'
qian-chu Jan 25, 2026
905a02e
Update sample data references and improve cloud tutorial
qian-chu Jan 27, 2026
8c54bde
Add detector_parameters to marker detection functions
qian-chu Jan 29, 2026
f7fcf83
Refactor homography computation API and docs
qian-chu Jan 29, 2026
46f54f6
Improve marker mapping docs and add find_homographies export
qian-chu Jan 29, 2026
fcf12fb
Refactor docstrings to use shared snippets and add marker layout plot…
qian-chu Jan 30, 2026
50e3a78
Refactor crop methods to use 'sample' instead of 'row'
qian-chu Feb 1, 2026
55de0c2
Refactor surface mapping and interpolation APIs
qian-chu Feb 1, 2026
1665da9
Improve interpolation gap handling and overlays
qian-chu Feb 2, 2026
6ef67fb
Add type hints, fix epochs, improve video overlays
qian-chu Feb 3, 2026
8cb8db8
Extract homography util and add Events method
qian-chu Feb 4, 2026
d06994d
Update surface_mapping.ipynb
qian-chu Feb 6, 2026
8b46804
Update short_aruco.ipynb
qian-chu Feb 6, 2026
528ef48
Improve docstrings and add Events.id_name
qian-chu Feb 9, 2026
9cbaadd
Fix frame sampling and refine marker overlay
qian-chu Feb 9, 2026
b71bab7
Update vis.py
qian-chu Feb 9, 2026
5fd2025
corrected saving of video derivatives in derivative folder
JGHartel Feb 10, 2026
e7b399e
correction of seeking on variable frame rate video
JGHartel Feb 10, 2026
2f66af9
improved plot_marker_layout
JGHartel Feb 10, 2026
2a77bfa
Unify detection visuals and streams
JGHartel Feb 10, 2026
ee7a53d
refactor: standardize detection outputs and rename screen to surface
JGHartel Feb 11, 2026
7208420
consolidated definition of detection_columns as a list of ints
JGHartel Feb 11, 2026
d233712
Corrected doc decorators and made use of them in detect_surface
JGHartel Feb 11, 2026
16bf158
passed changes in column definition to vis
JGHartel Feb 11, 2026
594d713
refactored homography.py to work with unified detections format
JGHartel Feb 11, 2026
55a5987
small correction on tutorial
JGHartel Feb 11, 2026
9be49a7
ruff format (added whitespace)
JGHartel Feb 11, 2026
aae53a2
removed references to set frame, gave surface an optional time_window…
JGHartel Feb 11, 2026
a1807db
refactored undistortion
JGHartel Feb 11, 2026
72c6392
made undistort optional for marker and surface detection
JGHartel Feb 11, 2026
e819bf9
removed/corrected remaining references to set(frame)
JGHartel Feb 11, 2026
340a0ec
ruff format
JGHartel Feb 11, 2026
80b7c7e
applied a tiny patch: area ratios in detect_surface need to be modifi…
JGHartel Feb 11, 2026
a4293f7
Move video visualization to vis/video module
qian-chu Feb 12, 2026
8d13948
Unify output_path and frame/read APIs
qian-chu Feb 12, 2026
d32e430
recursive call of read_frame_at to grab from the start
JGHartel Feb 14, 2026
b3b5bf6
refactored overlay_detections
JGHartel Feb 14, 2026
89726b5
Add reprs and improve video/stream/export handling
qian-chu Feb 16, 2026
256977e
Merge branch 'main' into new_epochs
qian-chu Feb 16, 2026
0bbe83e
update surface mapping tutorial
qian-chu Feb 16, 2026
4ca2354
Update README.md
qian-chu Feb 16, 2026
6d84742
update tutorial
qian-chu Feb 17, 2026
041b20a
Update main.yml
qian-chu Feb 17, 2026
6998583
Update sample dataset links and README
qian-chu Feb 18, 2026
349efad
Remove video utilities from Recording
qian-chu Feb 18, 2026
e6c2308
Add Eye-BIDS export and refine BIDS metadata
qian-chu Feb 18, 2026
d480725
Refine BIDS metadata, fix concat and notebook
qian-chu Feb 19, 2026
62958e5
Add OpenCV cleanup fixtures and format fixes
qian-chu Feb 19, 2026
981ab1e
Add save/load CSV for Streams and Events
qian-chu Feb 20, 2026
c0128e2
Handle empty Epochs & use pandas string dtypes
qian-chu Feb 20, 2026
94f1b73
attempt to cleanup opencv after pytest
qian-chu Feb 20, 2026
69ed8c0
Update main.yml
qian-chu Feb 20, 2026
6bd9f49
Close Recording video handles on cleanup
qian-chu Feb 20, 2026
19d26af
Update export_cloud.py
qian-chu Feb 20, 2026
fd48d5b
Update main.yml
qian-chu Feb 20, 2026
7b33aeb
Revert "Update main.yml"
qian-chu Feb 20, 2026
7527b98
test if video is the critical test
qian-chu Feb 20, 2026
421b840
Revert "test if video is the critical test"
qian-chu Feb 20, 2026
8aec20b
Update main.yml
qian-chu Feb 20, 2026
0e4a0e6
Update main.yml
qian-chu Feb 20, 2026
dd229de
safer changes to video closing
JGHartel Feb 21, 2026
d558304
slight changes to video object reset and delete
JGHartel Feb 21, 2026
71ac913
move typgeguard import
JGHartel Feb 21, 2026
97c333b
changes to delete method
JGHartel Feb 21, 2026
131a42e
removed del on recording
JGHartel Feb 21, 2026
9338fe1
changed to use composition instead of inheritance from cv2
JGHartel Feb 21, 2026
af95be2
Fix video frame handling; update notebooks
qian-chu Feb 23, 2026
df1aa1a
Add inplace sync option; update BIDS docs and examples
qian-chu Feb 23, 2026
7782f13
Refactor Recording loaders, events and utils
qian-chu Feb 23, 2026
408b9bc
Update test_events.py
qian-chu Feb 23, 2026
c9fb205
Add IMU channel map and export tests
qian-chu Feb 23, 2026
198ce22
Refine BIDS export handling and expand tests
qian-chu Feb 23, 2026
ee046e2
Update test_export.py
qian-chu Feb 23, 2026
8f24aa6
Update export_bids.py
qian-chu Feb 23, 2026
2bc87be
Cleanup utils and expand Video API/docs
qian-chu Feb 24, 2026
c513304
Update video.py
qian-chu Feb 24, 2026
c91aad4
Refactor video detection API and docs
qian-chu Feb 24, 2026
9f29e98
Revert "Refactor video detection API and docs"
qian-chu Feb 24, 2026
de71549
Update recording.py
qian-chu Feb 24, 2026
3cc35f1
Rename export_eye_bids to export_eye_tracking_bids
qian-chu Feb 24, 2026
248c5fc
Merge branch 'tidy-up' into new_epochs
qian-chu Feb 24, 2026
063805b
Update test_export.py
qian-chu Feb 24, 2026
b8f646b
Rename quaternion/pupil BIDS fields
qian-chu Feb 25, 2026
f5b1465
Update homography.py
qian-chu Feb 25, 2026
d4e786b
Improve BIDS export metadata & prefix inference
qian-chu Feb 25, 2026
09760f8
first (untested) layout class
JGHartel Feb 27, 2026
8815032
corrected find_homography api
JGHartel Feb 27, 2026
4e93530
Select single surface result and simplify output
qian-chu Feb 27, 2026
42ce7be
Add DataFrame validators and refactor video/utils
qian-chu Feb 27, 2026
4aaa6be
Rename surface detection to contour detection
qian-chu Feb 27, 2026
b0fc139
Delete buggy fallback
qian-chu Feb 28, 2026
0b4c6b3
improve docstr and format
qian-chu Feb 28, 2026
3cbf85b
Update doc_decorators.py
qian-chu Feb 28, 2026
48 changes: 44 additions & 4 deletions .github/workflows/main.yml
@@ -1,14 +1,20 @@
name: PyNeon CI

on: [push, pull_request]
on:
push:
branches: ["main"]
pull_request:
branches: ["main"]

jobs:
ruff-format:
format:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # to be able to push changes

- name: Set up Python
uses: actions/setup-python@v4
@@ -26,15 +32,50 @@ jobs:
run: ruff format .

- name: Commit changes if any
if: github.event_name == 'push' # only push changes on push events
run: |
git config --local user.email "github-actions[bot]@users.noreply.github.com"
git config --local user.name "github-actions[bot]"
git add .
git commit -m "Format code with isort and ruff" || echo "No changes to commit"
git push

tests:
runs-on: ${{ matrix.os }}
env:
PYTHONFAULTHANDLER: "1"
needs: format # waits until formatting is done
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
python-version: ["3.10", "3.11", "3.12", "3.13"]

steps:
- uses: actions/checkout@v4

- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}

- name: Install test dependencies
run: pip install .[dev]

- name: Run tests
if: matrix.os != 'windows-latest'
run: pytest tests -p no:cacheprovider -p no:faulthandler -p no:unraisableexception

- name: Run tests (Windows, disable MSMF)
if: matrix.os == 'windows-latest'
env:
OPENCV_VIDEOIO_PRIORITY_MSMF: "0"
run: pytest tests -p no:cacheprovider -p no:faulthandler -p no:unraisableexception

build-docs:
runs-on: ubuntu-latest
needs: format # waits until formatting is done
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
steps:
- uses: actions/checkout@v4

@@ -44,7 +85,7 @@ jobs:
python-version: "3.13"

- name: Install Pandoc
run: sudo apt-get install pandoc
run: sudo apt-get install -y pandoc

- name: Install docs dependencies
run: pip install .[doc]
@@ -65,7 +106,6 @@

- name: Deploy (GitHub Pages)
uses: peaceiris/actions-gh-pages@v3
if: github.ref == 'refs/heads/main'
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: build/html
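The new `tests` job fans out over a `strategy.matrix` of three OSes and four Python versions, with `fail-fast: false` so one failing combination does not cancel the rest. A quick sketch of that expansion (job labels are illustrative, not GitHub's exact naming):

```python
from itertools import product

# Mirror of the workflow's matrix: 3 OSes x 4 Python versions
oses = ["ubuntu-latest", "windows-latest", "macos-latest"]
python_versions = ["3.10", "3.11", "3.12", "3.13"]

jobs = [f"tests ({os_}, {py})" for os_, py in product(oses, python_versions)]
print(len(jobs))  # 12 independent jobs; fail-fast is off, so one failure does not cancel the rest
```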
25 changes: 0 additions & 25 deletions .github/workflows/tests.yml

This file was deleted.

1 change: 1 addition & 0 deletions .gitignore
@@ -1,6 +1,7 @@
# Test data
data/
tests/outputs/
source/tutorials/export/

# Ruff
.ruff_cache/
9 changes: 7 additions & 2 deletions README.md
@@ -1,5 +1,6 @@
![GitHub License](https://img.shields.io/github/license/ncc-brain/PyNeon?style=plastic)
![Website](https://img.shields.io/website?url=https%3A%2F%2Fncc-brain.github.io%2FPyNeon%2F&up_message=online&style=plastic&label=Documentation)
[![PyNeon CI](https://github.com/ncc-brain/PyNeon/actions/workflows/main.yml/badge.svg)](https://github.com/ncc-brain/PyNeon/actions/workflows/main.yml)

# PyNeon

@@ -14,14 +15,18 @@ PyNeon supports both **native** (data stored in the companion device) and [**Pup

Documentation for PyNeon is available at <https://ncc-brain.github.io/PyNeon/> which includes detailed references for classes and functions, as well as step-by-step tutorials presented as Jupyter notebooks.

We also created a few sample datasets containing short Neon recordings for testing and tutorial purposes. These datasets can be found on [OSF](https://doi.org/10.17605/OSF.IO/3N85H). We also provide a utility function `get_sample_data()` to download these sample datasets directly from PyNeon.

## Key Features

- [(Tutorial)](https://ncc-brain.github.io/PyNeon/tutorials/read_recording.html) Easy API for reading in datasets, recordings, or individual modalities of data.
- Easy API for reading in datasets, recordings, or individual modalities of data.
- [Tutorial](https://ncc-brain.github.io/PyNeon/tutorials/read_recording_cloud.html) for reading data in Pupil Cloud format
- [Tutorial](https://ncc-brain.github.io/PyNeon/tutorials/read_recording_native.html) for reading data in native format
- [(Tutorial)](https://ncc-brain.github.io/PyNeon/tutorials/interpolate_and_concat.html) Various preprocessing functions, including data cropping, interpolation,
concatenation, etc.
- [(Tutorial)](https://ncc-brain.github.io/PyNeon/tutorials/pupil_size_and_epoching.html) Flexible epoching of data for trial-based analysis.
- [(Tutorial)](https://ncc-brain.github.io/PyNeon/tutorials/video.html) Methods for working with scene video, including scanpath estimation and AprilTags-based mapping.
- [(Tutorial)](https://ncc-brain.github.io/PyNeon/tutorials/export_to_bids.html) Exportation to [Motion-BIDS](https://www.nature.com/articles/s41597-024-03559-8) (and forthcoming Eye-Tracking-BIDS) format for interoperability across the cognitive neuroscience community.
- [(Tutorial)](https://ncc-brain.github.io/PyNeon/tutorials/export_to_bids.html) Exportation to [Motion-BIDS](https://doi.org/10.1038/s41597-024-03559-8) and [Eye-Tracking-BIDS](https://doi.org/10.64898/2026.02.03.703514) formats for interoperability across the cognitive neuroscience community.

## Installation

15 changes: 7 additions & 8 deletions pyneon/__init__.py
@@ -1,17 +1,14 @@
# ruff: noqa: E402
__version__ = "0.0.1"

from typeguard import install_import_hook

install_import_hook("pyneon")

from .dataset import Dataset
from .epochs import Epochs, construct_times_df, events_to_times_df
from .epochs import Epochs, construct_epochs_info, events_to_epochs_info
from .events import Events
from .recording import Recording
from .stream import Stream
from .utils import *
from .video import Video
from .video import Video, find_homographies
from .vis import plot_marker_layout

__all__ = [
"Dataset",
@@ -20,6 +17,8 @@
"Events",
"Epochs",
"Video",
"construct_times_df",
"events_to_times_df",
"plot_marker_layout",
"find_homographies",
"construct_epochs_info",
"events_to_epochs_info",
]
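The updated `__all__` list above controls what `from pyneon import *` exposes. A minimal standalone illustration of that gating, using an in-memory module (the module and names here are invented, not part of PyNeon):

```python
import sys
import types

# Build a tiny module in-memory to mimic how __all__ gates star-imports
mod = types.ModuleType("mini_pyneon")
exec(
    "Dataset = object\n"
    "_private_helper = object\n"
    "__all__ = ['Dataset']\n",
    mod.__dict__,
)
sys.modules["mini_pyneon"] = mod

ns = {}
exec("from mini_pyneon import *", ns)
# Star-import only pulls names listed in __all__
print("Dataset" in ns, "_private_helper" in ns)  # True False
```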
115 changes: 72 additions & 43 deletions pyneon/dataset.py
@@ -8,9 +8,10 @@

class Dataset:
"""
Holder for multiple recordings. It reads from a directory containing a multiple
recordings downloaded from Pupil Cloud with the **Timeseries CSV** or
**Timeseries CSV and Scene Video** option. For example, a dataset with 2 recordings
Container for multiple recordings. Reads from a directory containing multiple
recordings.

For example, a dataset with 2 recordings downloaded from Pupil Cloud
would have the following folder structure:

.. code-block:: text
@@ -28,48 +29,76 @@ class Dataset:
├── enrichment_info.txt
└── sections.csv

Individual recordings will be read into :class:`pyneon.Recording` objects based on
``sections.csv``. They are accessible through the ``recordings`` attribute.
Or a dataset with multiple native recordings:

.. code-block:: text

dataset_dir/
├── recording_dir_1/
│ ├── info.json
│ ├── blinks ps1.raw
| ├── blinks ps1.time
| ├── blinks.dtype
| └── ...
└── recording_dir_2/
├── info.json
├── blinks ps1.raw
├── blinks ps1.time
├── blinks.dtype
└── ...

Individual recordings will be read into :class:`Recording` instances
(based on ``sections.csv``, if available) and accessible through the
``recordings`` attribute.

Parameters
----------
dataset_dir : str or pathlib.Path
Path to the directory containing the dataset.
custom : bool, optional
Whether to expect a custom dataset structure. If ``False``, the dataset
is expected to follow the standard Pupil Cloud dataset structure with a
``sections.csv`` file. If True, every directory in ``dataset_dir`` is
considered a recording directory, and the ``sections`` attribute is
constructed from the ``info`` of recordings found.
Defaults to ``False``.

Attributes
----------
dataset_dir : pathlib.Path
Path to the directory containing the dataset.
recordings : list of Recording
List of :class:`pyneon.Recording` objects for each recording in the dataset.
List of :class:`Recording` instances for each recording in the dataset.
sections : pandas.DataFrame
DataFrame containing the sections of the dataset.

Examples
--------
>>> from pyneon import Dataset
>>> dataset = Dataset("path/to/dataset")
>>> print(dataset)

Dataset | 2 recordings

>>> rec = dataset.recordings[0]
>>> print(rec)

Data format: cloud
Recording ID: 56fcec49-d660-4d67-b5ed-ba8a083a448a
Wearer ID: 028e4c69-f333-4751-af8c-84a09af079f5
Wearer name: Pilot
Recording start time: 2025-12-18 17:13:49.460000
Recording duration: 8235000000 ns (8.235 s)
"""

def __init__(self, dataset_dir: str | Path, custom: bool = False):
def __init__(self, dataset_dir: str | Path):
dataset_dir = Path(dataset_dir)
if not dataset_dir.is_dir():
raise FileNotFoundError(f"Directory not found: {dataset_dir}")

self.dataset_dir = dataset_dir
self.recordings = list()
self.dataset_dir: Path = dataset_dir
self.recordings: list[Recording] = list()

if not custom:
sections_path = dataset_dir.joinpath("sections.csv")
if not sections_path.is_file():
raise FileNotFoundError(f"sections.csv not found in {dataset_dir}")
self.sections = pd.read_csv(sections_path)
sections_path = dataset_dir / "sections.csv"

if sections_path.is_file():
self.sections = pd.read_csv(sections_path)
recording_ids = self.sections["recording id"]

# Assert if recording IDs are correct
for rec_id in recording_ids:
rec_id_start = rec_id.split("-")[0]
rec_dir = [
@@ -104,44 +133,44 @@ def __init__(self, dataset_dir: str | Path, custom: bool = False):
RuntimeWarning,
)

# Rebuild a `sections` DataFrame from the Recording objects
# Rebuild a `sections` DataFrame from the Recording instances
sections = []
for i, rec in enumerate(self.recordings):
for rec in self.recordings:
sections.append(
{
"section id": i,
"section id": None,
"recording id": rec.recording_id,
"recording name": rec.recording_id,
"wearer id": rec.info["wearer_id"],
"wearer name": rec.info["wearer_name"],
"recording name": None,
"wearer id": rec.info.get("wearer_id", None),
"wearer name": rec.info.get("wearer_name", None),
"section start time [ns]": rec.start_time,
"section end time [ns]": rec.start_time + rec.info["duration"],
"section end time [ns]": rec.start_time
+ rec.info.get("duration", 0),
}
)

self.sections = pd.DataFrame(sections)

def __repr__(self):
"""Return a string representation of the Dataset.

Returns
-------
str
Summary showing the number of recordings.
"""
return f"Dataset | {len(self.recordings)} recordings"

def __len__(self):
"""Return the number of recordings in the dataset.

Returns
-------
int
Number of recordings.
"""
return len(self.recordings)

def __getitem__(self, index: int) -> Recording:
"""Get a Recording by index."""
return self.recordings[index]

def load_enrichment(self, enrichment_dir: str | Path):
"""
Load enrichment information from an enrichment directory. The directory must
contain an enrichment_info.txt file. Enrichment data will be parsed for each
recording ID and added to Recording object in the dataset.

The method is currently being developed and is not yet implemented.

Parameters
----------
enrichment_dir : str or pathlib.Path
Path to the directory containing the enrichment information.
"""
raise NotImplementedError("Enrichment loading is not yet implemented.")
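The fallback branch in `__init__` above rebuilds the `sections` DataFrame from each recording's `info` dict, using `.get()` so missing metadata degrades to `None`/`0` instead of raising. A standalone sketch with mock recordings (field names follow the diff; the mock data is invented):

```python
import pandas as pd

# Mock recordings, as might be parsed from native info.json files
recordings = [
    {"recording_id": "rec-1", "start_time": 1_700_000_000_000_000_000,
     "info": {"wearer_id": "w1", "wearer_name": "Pilot", "duration": 8_235_000_000}},
    {"recording_id": "rec-2", "start_time": 1_700_000_100_000_000_000,
     "info": {}},  # missing metadata: .get() falls back gracefully
]

sections = []
for rec in recordings:
    info = rec["info"]
    sections.append({
        "section id": None,
        "recording id": rec["recording_id"],
        "recording name": None,
        "wearer id": info.get("wearer_id", None),
        "wearer name": info.get("wearer_name", None),
        "section start time [ns]": rec["start_time"],
        "section end time [ns]": rec["start_time"] + info.get("duration", 0),
    })

sections_df = pd.DataFrame(sections)
# Second recording has no wearer metadata, so its fields are missing
print(sections_df["wearer id"].isna().tolist())  # [False, True]
```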