#375 Add scripts to benchmark data-based synthesis to CIF benchmarks.

I've added multiple files:

  • benchmark__base.tooldef contains the logic to execute data-based synthesis with various settings, collect various statistics, and write them to a file and to stdout.
  • benchmark__settings.tooldef allows configuring the settings to benchmark. By default, it executes data-based synthesis once, with default settings. The comments explain how to configure it differently.
  • benchmark_*.tooldef scripts, one per benchmark model, allow benchmarking that specific model; a sketch of their shape follows this list. Having multiple scripts allows running them in parallel, though this requires disabling the 'Auto Terminate' option of the 'Applications' view. Running too many of them in parallel may affect the timing metrics. However, the maximum number of BDD nodes and the number of BDD operations are also collected, and these are more reproducible anyway.
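
To give an idea of the shape of such a per-model script, here is a minimal ToolDef sketch. The model name, file names, and option values are illustrative assumptions, not the actual contents of this MR; only the `from "lib:cif" import *;` import and the `cifdatasynth` tool are standard CIF ToolDef usage:

```
// Hypothetical per-model benchmark script (names and options are assumptions).
from "lib:cif" import *;

// Run data-based synthesis on one benchmark model, once per settings variant.
for opts in ["--forward-reach=false", "--forward-reach=true"]:
    cifdatasynth("some_model.cif " + opts + " -o some_model.ctrl.cif");
end
```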

These scripts are meant to benchmark different settings on different benchmark models, comparing the settings against each other, for instance to compare different variable ordering algorithms. They are explicitly not meant to benchmark the performance of CIF data-based synthesis against other synthesis tools: collecting the platform-independent metrics makes synthesis much slower, and other tools likely don't collect them. A fair comparison with other tools requires a completely different approach, which is out of scope here.
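
For example, comparing variable ordering algorithms could come down to a list of settings variants along these lines. This is a hedged sketch: the exact option names and values are assumptions about the cifdatasynth command line, and benchmark__settings.tooldef may express this differently:

```
// Hypothetical settings variants for comparing BDD variable orderings.
list string variants = [
    "--var-order=model",  // keep the model's declaration order
    "--var-order=sorted", // sort variables by name
    "--var-order=random"  // random order, as a baseline
];
```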

Closes #375
