Roadmaps

This page outlines current development priorities and aims to guide core developers and to encourage community contributions. It is a living document and will be updated as the project evolves.

The roadmaps are not meant to limit movement's features; we are open to suggestions and contributions. Join our Zulip chat to share your ideas. We will take community feedback into account when planning future releases.

Long-term vision

The following features are being considered for the first stable release, v1.0.

  • Import/Export motion tracks from/to diverse formats. We aim to interoperate with leading tools for animal tracking and behaviour classification, and to enable conversions between their formats.

  • Standardise the representation of motion tracks. We represent tracks as xarray data structures to allow for labelled dimensions and performant processing (see the first sketch after this list).

  • Interactively visualise motion tracks. We are experimenting with napari as a visualisation and GUI framework.

  • Clean motion tracks, including, but not limited to, handling missing values, filtering, smoothing, and resampling (see the cleaning sketch after this list).

  • Derive kinematic variables such as velocity, acceleration, and joint angles, focusing on those prevalent in neuroscience and ethology (see the kinematics sketch after this list).

  • Integrate spatial data about the animal’s environment for combined analysis with motion tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.

  • Define and transform coordinate systems. Coordinates can be relative to the camera, the environment, or the animal itself (egocentric); see the coordinate-transform sketch after this list.

  • Provide common metrics for specialised applications, such as gait analysis, pupillometry, spatial navigation, and social interactions.

  • Integrate with neurophysiological data analysis tools. We eventually aim to facilitate combined analysis of motion and neural data.
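
To make the xarray representation concrete, below is a minimal sketch of how pose tracks could be laid out as an xarray.Dataset. The dimension and variable names (time, individuals, keypoints, space, position, confidence) are illustrative assumptions for this sketch, not a finalised schema.

```python
import numpy as np
import xarray as xr

# Illustrative shape: 100 frames, 2 individuals, 3 keypoints, 2D space.
# All names below are assumptions for this sketch, not a finalised schema.
rng = np.random.default_rng(0)

ds = xr.Dataset(
    data_vars={
        "position": (
            ["time", "individuals", "keypoints", "space"],
            rng.normal(size=(100, 2, 3, 2)),
        ),
        "confidence": (
            ["time", "individuals", "keypoints"],
            rng.uniform(size=(100, 2, 3)),
        ),
    },
    coords={
        "time": np.arange(100) / 30.0,  # seconds, assuming 30 fps
        "individuals": ["mouse_0", "mouse_1"],
        "keypoints": ["snout", "centre", "tail_base"],
        "space": ["x", "y"],
    },
)

# Labelled dimensions allow expressive, index-free selection:
snout_x = ds["position"].sel(individuals="mouse_0", keypoints="snout", space="x")
```

Because dimensions are labelled, downstream operations can be written against names rather than axis positions, which is what keeps the following sketches compact.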
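
Cleaning could build directly on xarray's existing machinery. A minimal sketch, reusing ds from the sketch above; the confidence threshold and window size are arbitrary values for illustration.

```python
# Mask low-confidence detections, interpolate the gaps, then smooth.
# Threshold and window size are arbitrary values for illustration.
position = ds["position"].where(ds["confidence"] > 0.5)
position = position.interpolate_na(dim="time", method="linear")
position_smooth = position.rolling(time=5, center=True).mean()
```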
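
For kinematics, differentiation along the labelled time coordinate is a natural starting point. Another sketch against the same ds; `differentiate` is standard xarray, while movement's eventual API for these quantities may differ.

```python
# Successive derivatives along `time`; since the time coordinate is in
# seconds, the results come out in units per second (and per second squared).
velocity = ds["position"].differentiate("time")
acceleration = velocity.differentiate("time")
speed = (velocity**2).sum("space") ** 0.5  # per-keypoint speed magnitude
```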
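
Finally, a sketch of the translation step of an egocentric coordinate transform, assuming a `centre` keypoint as in the Dataset above. A full egocentric transform would also rotate into the animal's heading frame; this sketch shows only the translation.

```python
# Translate coordinates so each animal's `centre` keypoint is the origin.
# A full egocentric transform would also rotate by the heading angle;
# this sketch shows only the translation step.
body_centre = ds["position"].sel(keypoints="centre").drop_vars("keypoints")
egocentric = ds["position"] - body_centre
```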

Short-term milestone - v0.1

We plan to release v0.1 of movement in early 2025, providing a minimal set of features to demonstrate the project’s potential and to gather feedback from users. At minimum, it should include:

  • Ability to import pose tracks from DeepLabCut, SLEAP and LightningPose into a common xarray.Dataset structure (see the loading sketch after this list).

  • At least one function for cleaning the pose tracks.

  • Ability to compute velocity and acceleration from pose tracks.

  • Public website with documentation.

  • Package released on PyPI.

  • Package released on conda-forge.

  • Ability to visualise pose tracks using napari. We aim to represent pose tracks as napari layers, overlaid on video frames (see the napari sketch after this list).
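
To illustrate the kind of loading interface we have in mind, here is a hypothetical usage sketch. The module and function names (movement.io.load_poses, from_dlc_file, from_sleap_file) and the fps argument are assumptions about an API still under design, not a finalised interface.

```python
# Hypothetical loading API -- all names are illustrative, not final.
from movement.io import load_poses

# Each loader would return the same common xarray.Dataset structure,
# regardless of which tool produced the tracks.
ds_dlc = load_poses.from_dlc_file("session1_DLC.h5", fps=30)
ds_sleap = load_poses.from_sleap_file("session1.analysis.h5", fps=30)
```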
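
And a sketch of the napari visualisation, flattening one animal's tracks into the (track_id, t, y, x) rows expected by napari's built-in Tracks layer. The conversion assumes the Dataset layout (ds) sketched in the long-term section, and is an illustration rather than movement's eventual API.

```python
import napari
import numpy as np

# Flatten one animal's pose tracks into napari's Tracks format:
# one row per point, with columns (track_id, t, y, x).
# Assumes the `ds` layout sketched in the long-term section.
pos = ds["position"].sel(individuals="mouse_0")  # dims: time, keypoints, space
n_time = pos.sizes["time"]
rows = []
for track_id, keypoint in enumerate(pos["keypoints"].values):
    xy = pos.sel(keypoints=keypoint).values  # shape (n_time, 2), columns (x, y)
    rows.append(
        np.column_stack(
            [np.full(n_time, track_id), np.arange(n_time), xy[:, 1], xy[:, 0]]
        )
    )

viewer = napari.Viewer()
# viewer.add_image(video_frames)  # the video would be overlaid here
viewer.add_tracks(np.concatenate(rows), name="pose tracks")
napari.run()
```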