In this repository we provide the code and annotation GUI for DeepAction, a MATLAB toolbox for the automatic annotation of animal behavior in video, described in the accompanying preprint. Our method extracts features from video and uses them to train a bidirectional LSTM classifier, which, in addition to predicting behavior, generates a confidence score for each predicted label. These confidence scores allow ambiguous annotations to be selectively reviewed and corrected while unnecessary review is omitted.
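As a rough illustration of the classifier architecture (this is a generic Deep Learning Toolbox sketch, not DeepAction's actual implementation; the layer sizes and class count are arbitrary placeholders):

```matlab
% Illustrative sketch only -- sizes are placeholders, not DeepAction's.
numFeatures = 128;   % per-frame feature dimension (assumed)
numHidden   = 64;    % BiLSTM hidden units (assumed)
numClasses  = 8;     % number of behavior labels (assumed)

layers = [
    sequenceInputLayer(numFeatures)
    bilstmLayer(numHidden, 'OutputMode', 'sequence')  % bidirectional LSTM
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

% After training with trainNetwork, the softmax posteriors give a simple
% per-frame confidence measure:
%   [labels, posteriors] = classify(net, features);
%   confidence = max(posteriors, [], 1);   % highest posterior per frame
```

The softmax-posterior confidence here is only a conceptual stand-in; the preprint describes the toolbox's actual confidence score.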
Included in this repository are:
- The code for the workflow
- The MATLAB app GUIs
- Project method, configuration file, and GUI documentation
- Two demonstration projects covering the entire DeepAction pipeline
Contents:
- Getting started
- Example projects
- Key folders & files
- Release notes
All that is required to begin using DeepAction is to add the toolbox folder to your MATLAB search path, where `toolbox_folder` is the path to the toolbox repository. Running `savepath` afterward saves the current search path, so the toolbox doesn't need to be re-added each time a new MATLAB instance is opened; don't include it if this is not desirable.
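A minimal sketch of these commands, assuming `toolbox_folder` holds the path to the cloned repository (using the standard MATLAB functions `addpath`, `genpath`, and `savepath`):

```matlab
% Add the DeepAction toolbox and all of its subfolders to the search path.
% Replace the path below with the location of your local copy.
toolbox_folder = '/path/to/DeepAction';
addpath(genpath(toolbox_folder))

% Optional: persist the search path across MATLAB sessions.
savepath
```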
To demonstrate the toolbox, we run the workflow below using the home-cage dataset from Jhuang et al. (see references). These demonstration projects are designed to familiarize users with the toolbox while minimizing the time- and computationally-intensive components of the workflow (i.e., generating temporal frames and spatiotemporal features). Scripts can be found in the `examples` folder of this repository.
To demonstrate different facets of the toolbox, we split the demonstration into two projects. In the first, we show how to create a project, extract features, and launch the annotator. The data for this project are a series of short clips (5 minutes each), selected to decrease the time required to generate frames and features. In the second project, we guide users through training and evaluating the classifier and confidence-based review, as well as launching the confidence-review annotator. For this project we provide pre-extracted features, as well as the corresponding annotations, for all the videos in the home-cage dataset.
The example data can be found via the Google Drive link here. The contents of the example folder are as follows:
- The subfolder `project_1_videos` contains the demonstration videos that will be used in Project 1.
- The subfolder `project_2` contains the annotations, spatial/temporal features, and dimensionality reduction model for the entire home-cage dataset, which is used in Project 2.
- The subfolder `example_annotations` contains the files used in the mini-demonstration for converting existing annotations into a DeepAction-importable format.
In this project we provide a small number of short video clips from the home-cage dataset, and demonstrate steps 1-5 of the workflow below. The file to run this project is `demo_project_1.mlx` in the `examples` folder of this repository. In this script, users:
- Initialize a new project
- Import a set of 5 shortened videos from the home-cage dataset
- Extract the spatial and temporal frames from the project videos
- Use the spatial and temporal frames to create spatial and temporal features
- Create a dimensionality reduction model
- Launch the annotator to familiarize themselves with the annotation process
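The spatial frame-extraction step can be pictured with plain MATLAB. This is a generic sketch using `VideoReader`, not the toolbox's own functions, and the file name is a placeholder:

```matlab
% Generic sketch of spatial frame extraction (not DeepAction's API).
video_file = 'project_1_videos/clip_01.mp4';   % placeholder name
v = VideoReader(video_file);

if ~exist('frames', 'dir')
    mkdir frames
end

frame_idx = 0;
while hasFrame(v)
    frame = readFrame(v);                       % H x W x 3 uint8 spatial frame
    frame_idx = frame_idx + 1;
    imwrite(frame, sprintf('frames/spatial_%06d.jpg', frame_idx));
end
```

Temporal frames are produced analogously from optical flow (via the Dual TVL1 Optical Flow dependency listed in the references) rather than raw pixels.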
In the second project, we provide annotations as well as the spatial and temporal features needed to train the classifier, and guide users through the processes of training and evaluating the classifier and running the confidence-based review. The file to run this project is `demo_project_2.mlx` in the `examples` folder. Here, users:
- Load spatiotemporal features using the provided dimensionality reduction model
- Split annotated clips into training, validation, and test sets
- Train and evaluate the classifier
- Create confidence scores for each clip
- Launch the annotator to explore the confidence-based review GUI
- Export annotations
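Conceptually, the clip-splitting step is an ordinary random partition of the annotated clips. A generic MATLAB sketch (the fractions and variable names are placeholders, not the toolbox's API):

```matlab
% Generic sketch of a train/validation/test split over annotated clips
% (placeholder fractions and names -- not DeepAction's actual API).
num_clips = 12;
idx = randperm(num_clips);             % shuffle clip indices

n_train = round(0.70 * num_clips);     % assumed 70/15/15 split
n_val   = round(0.15 * num_clips);

train_idx = idx(1 : n_train);
val_idx   = idx(n_train+1 : n_train+n_val);
test_idx  = idx(n_train+n_val+1 : end);
```

Splitting at the clip level (rather than the frame level) keeps frames from the same video out of both the training and test sets.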
In addition, we provide a demonstration of how to import annotations from a `.csv` file into a new DeepAction project. The code to run this demonstration can be found in the `FormatAnnotations.mlx` file.
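As a rough idea of what such a conversion involves, the sketch below reads a bout-style CSV into per-frame labels. The column names (`start_frame`, `stop_frame`, `behavior`) are hypothetical, not DeepAction's required format:

```matlab
% Hypothetical CSV layout: one row per bout, with start/stop frame indices
% and a behavior label. Column names are illustrative only.
T = readtable('example_annotations/video_01.csv');

num_frames = max(T.stop_frame);                 % assumed column name
labels = repmat("none", num_frames, 1);         % default label per frame

for i = 1:height(T)
    labels(T.start_frame(i) : T.stop_frame(i)) = string(T.behavior{i});
end
```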
The project folder is organized as follows:
- `./project_folder/config.txt` - configuration file (see here)
- `./project_folder/annotations` - annotations for each video
- `./project_folder/videos` - raw video data imported into the project
- `./project_folder/frames` - spatial and temporal frames corresponding to the video data
- `./project_folder/features` - spatial and temporal video features
- `./project_folder/rica_model` - dimensionality reduction models (only one model needs to be created for each stream/camera/dimensionality combination)
- Home-cage dataset - dataset used in the demonstration projects. Also see: Jhuang, H., Garrote, E., Yu, X., Khilnani, V., Poggio, T., Steele, A. D., & Serre, T. (2010). Automated home-cage behavioural phenotyping of mice. Nature Communications, 1(1), 1-10.
- Piotr's toolbox - used for reading/writing `.seq` files (a version of which is included in this release)
- Dual TVL1 Optical Flow - used to estimate TV-L1 optical flow and create temporal frames
- CRIM13 dataset - used in the preprint (but not the example projects)
- EZGif.com - used to create GIF files from video
As this is the initial release, we expect users may run into issues with the program. With this in mind, annotations are backed up each time the annotator is opened, so if there is data loss, a mistake, or a bug when using the annotator (or other components of the workflow that access annotation files), prior annotations can be restored from file. Please raise any issues on the issues page of the GitHub repository, and/or contact the author in the case of major problems. In the near future, the "to-do" items are:
- Releasing a multiple-camera example project and improving multiple-camera usability.
- Improved method documentation! (The main methods are covered in functions.md, but full details and in-code documentation are still incomplete.)
- Improving the annotator video viewer to reduce lag.
- ... and quite a few other miscellaneous items
If you're interested in contributing, please reach out!
This project is licensed under the MIT License - see the LICENSE file for details.
Carl Harris (2022). DeepAction (https://github.com/carlwharris/DeepAction), GitHub.
Harris, Carl, et al. DeepAction: A MATLAB Toolbox for Automated Classification of Animal Behavior in Video. Cold Spring Harbor Laboratory, June 2022, doi:10.1101/2022.06.20.496909.
Platform compatibility: Windows, macOS, Linux