Introduction

This page presents a head movement dataset of users watching 360-degree videos with a Head-Mounted Display (HMD), along with all the tools the community needs to reproduce or extend the dataset.

If you use this dataset, please cite our paper: Xavier Corbillon, Francesca De Simone, and Gwendal Simon. 360-Degree Video Head Movement Dataset. In Proceedings of the ACM Multimedia Systems Conference (MMSys), 2017.

Downloads

Download the test software (122.02 KB) as it was on 10 March 2017!

If you want to try the latest version of the software, you can find it on GitHub: https://github.com/xmar/360Degree_Head_Movement_Dataset

Download the videos (2.71 GB) used to generate the first dataset.

Download the first dataset (55.85 MB): 59 users (20% women) aged 6 to 62, with an average age of 34. 61% of the users had never used an HMD before.

Community Based Dataset

This section contains the different datasets uploaded by the community.

If you select multiple datasets, you will download a merged archive containing all the selected datasets.

Uploader Name    | Uploader Institution | Uploader Contacts                     | Notes | Dataset Size | Dataset Status
Xavier Corbillon | IMT Atlantique       | xavier.corbillon[at]imt-atlantique.fr |       | 55.85 MB     | validated

Download selection (if more than one archive is selected, the download may take some time to start)

Upload a new dataset (not available yet, but you can contact us directly)

Videos

For the first dataset, we used seven videos, including two training videos.

We selected popular YouTube 360 videos that cover the different use cases of 360-degree videos.

YouTube Id  | Name          | Description                                                                        | Start Offset
2bpICIClAIg | Elephants     | Fixed camera with one favored direction.                                           | 15 s
7IWp875pCxQ | Rhinos        | Fixed camera with no favored direction.                                            | 15 s
2OzlksZBTiA | Diving        | Slowly moving camera with no clear horizon and no favored direction.               | 40 s
8lsB-P8nGSM | Rollercoaster | Camera fixed in front of a moving roller-coaster car; one favored gaze direction.  | 65 s
CIw8R8thnm8 | Timelapse     | Frequent scene cuts; clear horizon with many fast-moving objects and people.       | 0 s
s-AJRFQuAtE | Venice        | Virtual reconstruction of Venice with a flying camera.                             | 0 s
sJxiPiAaB4k | Paris         | Guided tour of Paris; static camera with some smooth scene cuts.                   | 0 s

Table 1: Description of the representative YouTube 360-degree videos used to harvest our dataset. Elephants and Rhinos are the training videos.

The following videos show, in the equirectangular planar projection space, the probability for a specific pixel to be inside the viewport of a user. The probabilities were computed from the first dataset: the whiter a pixel, the higher the probability.

[Viewport probability videos: Diving, Roller-coaster, Timelapse, Venice, Paris]
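
As an illustration, here is a minimal sketch of how such a map can be computed with NumPy from per-sample viewing directions (which can be derived from the logged quaternions described below). The function name, the grid resolution, and the approximation of the viewport as a circular cap of half-angle 50 degrees are our assumptions, not part of the dataset tools.

import numpy as np

def viewport_probability_map(directions, width=200, height=100,
                             half_angle_deg=50.0):
    # directions: (N, 3) array of unit viewing direction vectors
    # Unit sphere direction for every pixel of the equirectangular grid
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    pixels = np.stack([np.cos(lat) * np.cos(lon),
                       np.cos(lat) * np.sin(lon),
                       np.sin(lat)], axis=-1)
    # A pixel counts as inside the viewport when its angular distance
    # to the viewing direction is below the cap half-angle
    cos_threshold = np.cos(np.radians(half_angle_deg))
    hits = np.zeros((height, width))
    for d in directions:
        hits += (pixels @ d) >= cos_threshold
    return hits / len(directions)  # per-pixel probability in [0, 1]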

Dataset Structure


Figure 1: Dataset folder structure

Each dataset uses the same folder hierarchy, illustrated in Figure 1. The root folder is the results folder. Inside it you may find a file named .private_existingUsers.txt; this file should never be exported, as it contains the real name of each user. There is one folder per user, named uid-X, with X a unique identifier for each user. The file formAnswers.txt contains the answers to the short questionnaire given to the user before starting the test scenario.

A user may run the test scenario multiple times; each run creates a new folder named testY, with Y the test number. Inside each test folder, the file testInfo.txt lists the videos displayed to the user, one video per line and in display order, together with the MD5 sum of each video. For each displayed video there is one file named videoId.txt that contains the configuration file of our OSVR video player and head movement logger. Each displayed video also generates a folder named videoId that contains a file named videoId_0.txt: the head movement log for this video and this user.
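
As an illustration, the following Python sketch (ours, not part of the released scripts) walks this hierarchy and yields every head movement log file:

from pathlib import Path

def iter_head_logs(results="results"):
    # Yields (user id, test id, video id, log path) for every log found
    for user_dir in sorted(Path(results).glob("uid-*")):
        # results/uid-X/formAnswers.txt holds the questionnaire answers
        for test_dir in sorted(user_dir.glob("test*")):
            # results/uid-X/testY/testInfo.txt lists the displayed videos
            for video_dir in sorted(p for p in test_dir.iterdir() if p.is_dir()):
                log = video_dir / (video_dir.name + "_0.txt")
                if log.exists():
                    yield user_dir.name, test_dir.name, video_dir.name, log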

Conventions Used to Record the Head Positions

Figure 2: Choice for the stationary reference frame (O, i, j, k) and for the rotating reference frame (O, i′, j′, k′)

We considered rotational head movements and ignored translational movements. To measure the head position we chose the following conventions, illustrated in Figure 2:

The reference position (i.e., the (O, i, j, k) basis) is set at boot time by the HMD. k is always vertical, but i and j can change each time the HMD restarts. Between two reboots, the reference position never changes.

Using the software described in the Software section, we captured every variation of the head position during a viewing session, with respect to the reference position. The head position variation is described by the rotation R that transforms (O, i, j, k) into (O, i′, j′, k′). There are many ways to characterize a rotation in R3: we use the unit Hamilton quaternion representation.

According to Euler’s rotation theorem, any rotation or sequence of rotations of a three-dimensional coordinate system with fixed origin is equivalent to a single rotation around an axis, represented by a unit vector v = (x, y, z) = x i + y j + z k in R3, and by a given angle θ, using the right-hand rule. This axis-angle representation of R can be expressed by four scalars defining the unit quaternion q = (q0, q1, q2, q3) = (cos(θ/2), sin(θ/2) v).
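
As an illustration, here is a minimal Python sketch of this axis-angle to quaternion mapping, together with the rotation of a vector by a unit quaternion (the helper names are ours):

import numpy as np

def quaternion_from_axis_angle(v, theta):
    # q = (cos(theta/2), sin(theta/2) v), with v normalized to a unit axis
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * v))

def rotate(q, p):
    # Rotate vector p by unit quaternion q = (q0, qv), i.e. compute q p q*
    q0, qv = q[0], q[1:]
    return p + 2 * np.cross(qv, np.cross(qv, p) + q0 * p)

# Example: a 90-degree rotation around k maps i onto j
q = quaternion_from_axis_angle([0.0, 0.0, 1.0], np.pi / 2)
print(rotate(q, np.array([1.0, 0.0, 0.0])))  # ~ [0. 1. 0.]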

We chose the quaternion representation because (i) it is compact (four scalars instead of the nine required by the 3x3 matrix representation), (ii) quaternions are not subject to the well-known gimbal lock issue of the Euler angles representation, and (iii) the quaternion representation of rotations is less sensitive than the matrix representation to the rounding errors introduced when scalars are approximated by floating-point numbers.

Head Position Log Structure

timestamp frameId q0 q1 q2 q3
Figure 3: Log file example

Figure 3 shows the structure of each line of a head position log file. The first value of the line is the timestamp, a float, relative to the start of the display of the video. The timestamp of the first line should be zero; if it is not, its value should be subtracted from each timestamp in the file. The second value is the id of the video frame displayed to the user when the position was sampled. It is an integer: the first displayed frame always has frame id 0, and this value increases by one for each frame. The four next values are the float values q0, q1, q2, and q3 of the Hamilton quaternion q = (q0, q1 i + q2 j + q3 k) that represents the rotation of the user's head compared to the reference frame.
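
As an example, here is a small Python function (ours, not part of the released scripts) that parses such a log file, assuming whitespace-separated values as in Figure 3, and re-bases the timestamps on the first sample as described above:

def read_head_log(path):
    # Returns a list of (timestamp, frame id, (q0, q1, q2, q3)) tuples
    samples = []
    with open(path) as log_file:
        for line in log_file:
            ts, frame, q0, q1, q2, q3 = line.split()
            samples.append((float(ts), int(frame),
                            (float(q0), float(q1), float(q2), float(q3))))
    t0 = samples[0][0]  # re-base timestamps so the first sample is at 0
    return [(ts - t0, frame, q) for ts, frame, q in samples]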

Software Structure

Regarding the structure of our software and how to compile and use it, please read the README.rst files in the different folders.

Use the Software

The scripts presented in this section can be found in the tar.gz archive from the Downloads section or in our GitHub repository.

Requirements

To run a test scenario and harvest new users' head positions, a fully working OSVR environment must be installed on your machine (this is not required if you only want to use the post-processing scripts). Our OSVR video player must be compiled (cf. README.rst) and the OSVR_Server must be running.

A Python 3.6 interpreter and the virtualenv Python package should be installed on your system. An empty virtualenv folder named .env should be created before running any script. The scripts will install and update all the needed Python packages inside this virtualenv.

The cmake package and a modern C++ compiler should also be installed.

Configuration file

Most scripts assume a configuration file named config.ini exists in the same folder as the scripts.

The configuration file used to run our tests is shown below.

[AppConfig]
resultFolder = results
pathToOsvrClientPlayer = ../build/OSVRClientTest
portForInterprocessCommunication=5542
;Section name of the video used for the training (empty for none)
trainingVideo = Elephant, Rhino
;List of Section name (comma separated) of the video used in the test
videoConfigList = Diving, Rollercoaster, Timelapse, Venise, Paris
;supported log levels: DEBUG, INFO, WARNING, ERROR
consoleLogLevel= DEBUG
fileLogLevel=DEBUG

[Elephant]
path=videos/2bpICIClAIg.webm
id=Elephant-training-2bpICIClAIg
nbMaxFrames=2100
bufferSize=250
startOffsetInSecond=15

[Rhino]
path=videos/7IWp875pCxQ.webm
id=Rhino-training-7IWp875pCxQ
nbMaxFrames=2100
bufferSize=250
startOffsetInSecond=15

[Diving]
path=videos/2OzlksZBTiA.mkv
id=Diving-2OzlksZBTiA
nbMaxFrames=2100
bufferSize=250
startOffsetInSecond=40

[Rollercoaster]
path=videos/8lsB-P8nGSM.mkv
id=Rollercoaster-8lsB-P8nGSM
nbMaxFrames=2100
;nbMaxFrames=1000
bufferSize=250
startOffsetInSecond=65

[Timelapse]
path=videos/CIw8R8thnm8.mkv
id=Timelapse-CIw8R8thnm8
nbMaxFrames=2100
bufferSize=250
startOffsetInSecond=0

[Venise]
path=videos/s-AJRFQuAtE.mkv
id=Venise-s-AJRFQuAtE
nbMaxFrames=1800
bufferSize=250
startOffsetInSecond=0

[Paris]
path=videos/sJxiPiAaB4k.mkv
id=Paris-sJxiPiAaB4k
nbMaxFrames=4200
bufferSize=250
startOffsetInSecond=0
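
As an illustration, this file can be read with the configparser module from the Python standard library; a minimal sketch (the released scripts may parse it differently):

import configparser

config = configparser.ConfigParser()
config.read("config.ini")

app = config["AppConfig"]
training = [n.strip() for n in app["trainingVideo"].split(",") if n.strip()]
videos = [n.strip() for n in app["videoConfigList"].split(",")]
for name in training + videos:
    # Each video has its own section, named in trainingVideo/videoConfigList
    section = config[name]
    print(name, section["path"],
          section.getint("nbMaxFrames"),
          section.getint("startOffsetInSecond"))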

Start a New Test Scenario

If you want to generate a new dataset, you need to run the startTestManager.sh script. It reads the config.ini file, gets the videos described in it, and creates a results folder if one does not exist yet.

You need an OSVR server running with the configuration file that corresponds to your head-mounted device. On our side we used the OSVR HDK 2 device.

Run the Post-processing Script

To post-process the dataset, run the startPostProcessing.sh script. It reads the results folder and creates a statistics folder inside it; all the post-processing results are stored in this statistics folder.

Export your Dataset into an Archive File

If you want to export your dataset, you can use the ExportResults.py script. It reads the results folder and generates a tar.gz archive containing all the uid-X folders. It exports neither the .private_existingUsers.txt file, which contains the users' private information, nor the statistics folder.
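
A minimal sketch of what this export step amounts to (the actual ExportResults.py may differ):

import tarfile

EXCLUDED = {".private_existingUsers.txt", "statistics"}

def export_results(results="results", archive="dataset.tar.gz"):
    def keep(tarinfo):
        # Drop the private user list and the post-processing output
        if EXCLUDED & set(tarinfo.name.split("/")):
            return None
        return tarinfo

    with tarfile.open(archive, "w:gz") as tar:
        tar.add(results, filter=keep)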

If you want to share your dataset with the community, you can upload it on this website (or contact us so we can add it to this website). Before doing so, please check that you only include your own data and no users from other datasets.

Contacts

If you have any issue compiling or using the software, or using the datasets, please contact us at this e-mail address: xavier.corbillon[at]imt-atlantique.fr