# Evaluation

The official scores of each entry are computed on our server, but for transparency we publish all evaluation scripts here. Together with the ground-truth solutions that are available for a small subset of scenes in each challenge, they can be used to test algorithms without an official submission. Furthermore, they should be used to verify that all submitted files are valid and can be parsed by the evaluation tools.


The scripts are part of the toolbox and are available as a single [download](/toolbox).


Each challenge has its own evaluation script, but they use a common interface. They are meant to be used as command line tools, which take filenames for the solution and reference as parameters and output the individual scores. Example:

```
python Geometry.py GroundTruth/StanfordBunny.obj MySolutions/StanfordBunny.obj
```

In case of a successful comparison, the output may look like:

```json
{
    "status": "success",
    "precision": 0.003560282451840327,
    "completeness": 0.015540456325554744
}
```

The output is JSON-formatted, making it easy for both humans and machines to read.
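Since the output is plain JSON, it can be consumed directly from another program. A minimal sketch, using the standard `json` module (the literal output string here is copied from the example above):

```python
import json

# Example output of an evaluation script, as shown above
output = """{
    "status": "success",
    "precision": 0.003560282451840327,
    "completeness": 0.015540456325554744
}"""

scores = json.loads(output)
if scores["status"] == "success":
    # Individual scores are available as ordinary floats
    print(scores["precision"], scores["completeness"])
```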

The requirements for the input files are documented on the individual challenge pages. It is also very helpful to have a look at the provided ground-truth data.
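To validate a whole set of solution files before submitting, the evaluation scripts can be driven from Python. This is only a sketch: the script name, the `.obj` extension, and the directory layout are taken from the example above and are assumptions that need to be adapted to the challenge at hand.

```python
import json
import subprocess
import sys
from pathlib import Path

def build_command(script, reference, solution):
    # Command line as documented: the script takes the reference
    # (ground truth) and the solution file as parameters.
    return [sys.executable, str(script), str(reference), str(solution)]

def evaluate_all(script, gt_dir, solution_dir):
    # Run the evaluation script for every solution that has a matching
    # ground-truth file and collect the JSON-formatted scores per scene.
    results = {}
    for reference in sorted(Path(gt_dir).glob("*.obj")):
        solution = Path(solution_dir) / reference.name
        if not solution.exists():
            continue  # no solution submitted for this scene
        proc = subprocess.run(build_command(script, reference, solution),
                              capture_output=True, text=True, check=True)
        results[reference.name] = json.loads(proc.stdout)
    return results
```

Running `evaluate_all("Geometry.py", "GroundTruth", "MySolutions")` then yields a dictionary mapping each scene name to its parsed scores, which makes it easy to spot files the tools cannot parse before making an official submission.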