Double Pendulum Chaotic

A double pendulum is a pendulum with another pendulum attached to its end. Despite being a simple physical system, it exhibits rich dynamical behavior with strong sensitivity to initial conditions and to noise in the environment (motion of the air in the room, sound vibrations, vibration of the table due to coupling with the pendulum, etc.). These influences at any given time affect the future trajectory in a way that grows increasingly significant over time, making it a chaotic system.

Videos of the double pendulum were taken using a high-speed Phantom Miro EX2 camera. The camera's fast global shutter enabled us to capture undistorted frames, with a short exposure time to avoid any motion blur. To make the extraction of the arm positions easier, a matte black background was used, and the three datums were marked with red, green and blue fiducial markers. The markers were printed so that their diameter exactly matches that of the pendulum datums, which made their alignment easier. A powerful LED floodlight with a custom DC power supply (to avoid flicker) was used to illuminate the pendulum, compensating for the short frame exposure time. The camera was placed 2 meters from the pendulum, with the axis of the objective aligned with the first pendulum datum. The pendulum was launched by hand, and the camera was motion triggered. Our dataset was generated from 21 individual runs of the pendulum. Each recorded sequence lasted around 40 s and consisted of around 17,500 frames.

We implemented a program to extract the positions of the markers from the video. The video frames were first upscaled 5 times in order to take advantage of subpixel positional resolution. We used scikit-image to draw the fiducial marker templates. These templates were then matched against each frame using OpenCV's cross-correlation-based template matching, and the detected markers were finally distinguished by their color.
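To illustrate the matching step, below is a minimal NumPy-only sketch of normalized cross-correlation template matching on a synthetic frame. The actual pipeline uses OpenCV's `matchTemplate`; the function and variable names here are illustrative, not the dataset authors' code.

```python
import numpy as np

def match_template(frame, template):
    """Return (row, col) of the best normalized cross-correlation match.
    Brute-force sketch of what OpenCV's matchTemplate computes far faster."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            w = frame[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            score = (w * t).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic example: a disk-shaped marker template placed in an empty frame.
yy, xx = np.mgrid[:7, :7]
template = ((yy - 3) ** 2 + (xx - 3) ** 2 <= 9).astype(float)
frame = np.zeros((40, 40))
frame[10:17, 20:27] = template
```

On this synthetic frame, `match_template(frame, template)` recovers the marker's top-left corner at `(10, 20)`; on the 5x-upscaled frames, the integer peak location corresponds to subpixel precision in the original resolution.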

The proposed challenge is to predict the next 200 consecutive time steps from the preceding 4 consecutive time steps. For that purpose we preprocessed the original 21 sequences as described below.

We extracted 5% of the data as validation and test sets, such that those sequences were spread homogeneously over the data and over the runtime of the pendulum runs. To avoid strong correlations between the training and the validation/test sets, we discarded 200 time steps before and after each extracted sequence. This resulted in 123 non-overlapping sequences: 39 training sequences of varying length (from 637 to 16,850 time steps) and 84 validation/test sequences of 204 time steps each. In the latter case the first 4 steps represent the inputs (i), and the next 200 steps correspond to the targets (t). Finally, we randomized the order of all files.
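The splitting procedure can be sketched as follows. This is a minimal illustration of the rule described above (204-step validation/test windows with a 200-step margin discarded on each side); `split_run` and its arguments are hypothetical names, not the dataset authors' code.

```python
def split_run(n_steps, val_starts, val_len=204, margin=200):
    """Split one pendulum run of n_steps frames into training segments and
    validation/test windows. val_starts are the start indices of the
    extracted 204-step windows; `margin` steps before and after each
    window are discarded from training. Returns two lists of
    (start, end) half-open ranges."""
    val_windows = [(s, s + val_len) for s in sorted(val_starts)]
    train_segments, cursor = [], 0
    for s, e in val_windows:
        seg_end = max(cursor, s - margin)
        if seg_end > cursor:
            train_segments.append((cursor, seg_end))  # keep frames before the gap
        cursor = max(cursor, e + margin)              # skip window plus trailing gap
    if cursor < n_steps:
        train_segments.append((cursor, n_steps))
    return train_segments, val_windows
```

For example, a 1000-step run with one validation window starting at step 400 yields training segments `(0, 200)` and `(804, 1000)` and the validation window `(400, 604)`.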

We supplement the original images with two additional representations: marker positions and arm angles. Marker positions are three (x, y) pairs giving the image coordinates of the three markers (each value is multiplied by 5, matching the upscaled frames). Arm angles are the sines and cosines of two angles α and β, where α is the angle between the rightward horizontal image axis and the first arm, and β is the angle between the first and second arms.
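Given the three marker coordinates, the angle representation can be computed roughly as follows. This is a sketch under stated assumptions: it takes the red marker as the fixed pivot, green as the middle joint and blue as the tip (the actual marker-to-datum assignment is not specified here), and it uses raw image coordinates, where y grows downward.

```python
import math

def arm_angles(red, green, blue):
    """Return (sin α, cos α, sin β, cos β) from three (x, y) marker positions.
    α: angle between the rightward horizontal axis and the first arm;
    β: angle between the first and second arms.
    Assumes red = pivot, green = middle joint, blue = tip."""
    ax, ay = green[0] - red[0], green[1] - red[1]    # first arm vector
    bx, by = blue[0] - green[0], blue[1] - green[1]  # second arm vector
    alpha = math.atan2(ay, ax)
    beta = math.atan2(by, bx) - alpha
    return math.sin(alpha), math.cos(alpha), math.sin(beta), math.cos(beta)
```

Encoding each angle as a (sin, cos) pair avoids the 2π discontinuity that a raw angle would introduce into a regression target.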

It is worth noting that one can combine different representations for inputs and targets/predictions, which regulates the difficulty of the challenge. In particular, using raw images as both inputs and targets appears to be the most complex task, whereas using arm angles as both inputs and targets reduces the task to classic multiple-input multiple-output time-series prediction.

The data files contain:
A. original videos (uncut)
– CSV files with the marker positions
– h264-compressed video
B. train/test dataset (4 input frames, 200 prediction frames), generated from the original videos
– CSV files with the marker positions
– h264-compressed video

The 6 columns in the CSV-formatted annotation files correspond to (x_red, y_red), (x_green, y_green), (x_blue, y_blue).
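A minimal sketch for reading such an annotation file and undoing the 5x coordinate upscaling described above. It assumes the files have no header row (an assumption, not confirmed by the text); `read_marker_rows` is an illustrative name.

```python
import csv
import io

def read_marker_rows(f):
    """Yield ((x_r, y_r), (x_g, y_g), (x_b, y_b)) per frame, in original
    pixel units (the stored coordinates are at 5x upscale, so divide by 5).
    Assumes a headerless 6-column CSV file object."""
    for row in csv.reader(f):
        v = [float(x) / 5.0 for x in row]  # undo the 5x upscaling
        yield (v[0], v[1]), (v[2], v[3]), (v[4], v[5])

# Usage on an in-memory sample row:
sample = io.StringIO("100,200,150,250,175,300\n")
rows = list(read_marker_rows(sample))
```

Here the red marker of the sample row comes back as `(20.0, 40.0)` in original-resolution pixels.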
Please see the research publication for more information.

Dataset Metadata

License: CDLA-Sharing
Domain: Time Series
Number of Records: 21 videos (378,099 annotated frames)
Size: 600 MB
Originally Published: December 15, 2018


@inproceedings{asseman2018learning,
  title={Learning beyond simulated physics},
  author={Asseman, Alexis and Kornuta, Tomasz and Ozcan, Ahmet},
  booktitle={Modeling and Decision-making in the Spatiotemporal Domain Workshop, Neural Information Processing Systems},
}