
# aidge_benchmark

A collection of performance benchmarks and export tools for AI inference workloads using the AIDGE framework. Provides model creation, randomized or constant‐value inputs, timed inference, result comparison, and C++ code export for CPU/GPU/NPU evaluation.


## Features

* **Model Suite**: MLP, MNIST CNN, MobileNet, ResNet-18, ResNet-50, SqueezeNet
* **Input Generation**: Randomized or all-ones tensors with reproducible seeding
* **Benchmarking**: Measure model creation time, forward-pass latency, and export time
* **Comparison**: Compare native vs. exported inference results
* **Export**: Generate standalone C++ inference code (with a timing harness or an output-compare harness)

## Table of Contents

  1. Installation
  2. Quickstart
  3. run_model CLI
  4. Available Models
  5. Examples
  6. Development & Contributing
  7. License

## Installation

### Dependencies

Aidge:

* `aidge_core`: 0.6.1
* `aidge_backend_cpu`: 0.6.1
* `aidge_export_cpp`: 0.3.1

Optional, only needed to compare against TVM: `tvm` 0.19.0
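To sanity-check that the expected versions are installed, a small stdlib-only helper can query package metadata. This is a convenience sketch, not part of the project; the distribution names are assumed to match the module names above:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Distribution names assumed from the module names listed above.
for pkg, expected in [("aidge_core", "0.6.1"),
                      ("aidge_backend_cpu", "0.6.1"),
                      ("aidge_export_cpp", "0.3.1")]:
    print(f"{pkg}: found={installed_version(pkg)}, expected={expected}")
```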

1. **Clone the repo**

   ```bash
   git clone https://gitlab.eclipse.org/gallasko/aidge_benchmark.git
   cd aidge_benchmark
   ```

2. **Install Python dependencies**

   ```bash
   pip install -r requirements.txt
   ```

   > Requires Python 3.8+

3. **(Optional) Install in editable mode**

   ```bash
   pip install -e .
   ```

---

## Quickstart

Run the built‑in CLI command:

```bash
python run_model [options]
```

This will:

1. Create the selected model and scheduler
2. Generate an input tensor (random or ones)
3. Optionally save the initial model graph (`.mmd`)
4. Run N forwards and print timing
5. Export to C++ code and print export timing
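The forward-pass timing in step 4 can be sketched with a plain `time.perf_counter` loop. This is an illustration, not the tool's actual implementation; `forward` is a hypothetical stand-in for one scheduler run:

```python
import time

def mean_latency(forward, n=1):
    """Call forward() n times and return the mean wall-clock latency in seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        forward()
    return (time.perf_counter() - t0) / n

# Stand-in workload in place of a real inference call.
print(f"mean latency: {mean_latency(lambda: sum(range(100_000)), n=5):.6f} s")
```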

---

## `run_model` CLI

```
Usage: python run_model [options]

Options:
  -m, --selectedModel   Model to benchmark (default: mlp)
                        Choices: mlp, mnist, mobilenet, resnet18, resnet50, squeezenet

  -r, --randomize       Randomize input data? (bool, default: False)
  -s, --seed            Seed for random data (int, default: 0)

  -c, --compare         Generate a “compare” C++ harness (bool, default: False)
  -t, --time            Generate a timing C++ harness (bool, default: False)

  -f, --nbForward       Number of forward passes to time (int, default: 1)
  -e, --nbExport        Number of export iterations to time (int, default: 1)

  -d, --saveModel       Save initial model graph as `.mmd` (bool, default: False)
  -p, --printOutput     Print last-layer output values to console (bool, default: False)

  --help                Show this help message and exit
```

---

### What each flag does

* **`-m/--selectedModel`**
  Choose which network architecture to load:

  * `mlp`
  * `mnist`
  * `mobilenet`
  * `resnet18`
  * `resnet50`
  * `squeezenet`

* **`-r/--randomize`** & **`-s/--seed`**
  When randomized, inputs are drawn from U(-1, 1) with the given seed. Otherwise all values are 1.

* **`-c/--compare`**
  Exports a C++ “compare” harness that checks native vs. exported inference outputs.

* **`-t/--time`**
  Exports a C++ “timing” harness (runs inference N× and reports average latency).

* **`-f/--nbForward`** & **`-e/--nbExport`**
  How many times to repeat forward‐pass timing and export timing.

* **`-d/--saveModel`**
  Saves the uncompiled model graph to `initial_graph.mmd` and generates a Markdown description of it.

* **`-p/--printOutput`**
  After the last forward pass, prints the first 10 elements of the output tensor to stdout.
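The seeded-input behavior of `-r`/`-s` can be reproduced with NumPy. This is a sketch of the described behavior, not the tool's actual code; `make_input` is a hypothetical helper:

```python
import numpy as np

def make_input(shape, randomize=False, seed=0):
    """Build an input tensor: seeded U(-1, 1) values if randomize, else all ones."""
    if randomize:
        rng = np.random.default_rng(seed)
        return rng.uniform(-1.0, 1.0, size=shape).astype(np.float32)
    return np.ones(shape, dtype=np.float32)

x = make_input((1, 3, 224, 224), randomize=True, seed=42)
```

Because the generator is seeded, the same `-s` value always yields the same input tensor, which is what makes native-vs-exported comparisons reproducible.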

---

## Available Models & Input Shapes

| Model          | Input Shape        |
| -------------- | ------------------ |
| **mlp**        | `[1, 1, 28, 28]`   |
| **mnist**      | `[1, 3, 32, 32]`   |
| **mobilenet**  | `[1, 3, 224, 224]` |
| **resnet18**   | `[1, 3, 224, 224]` |
| **resnet50**   | `[1, 3, 224, 224]` |
| **squeezenet** | `[1, 3, 224, 224]` |
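In scripts that wrap `run_model`, the table above can be captured as a simple lookup. This mapping is a convenience for callers, not part of the package's API:

```python
# Input shape per model, as listed in the table above.
INPUT_SHAPES = {
    "mlp":        (1, 1, 28, 28),
    "mnist":      (1, 3, 32, 32),
    "mobilenet":  (1, 3, 224, 224),
    "resnet18":   (1, 3, 224, 224),
    "resnet50":   (1, 3, 224, 224),
    "squeezenet": (1, 3, 224, 224),
}

print(INPUT_SHAPES["resnet50"])  # (1, 3, 224, 224)
```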

---

## Examples

1. **Time 10 forward passes on ResNet‑50 with randomized inputs**

   ```bash
   python run_model -m resnet50 -r True -s 42 -f 10 -t True
   ```

2. **Export SqueezeNet compare harness, run only export**

   ```bash
   python run_model -m squeezenet -e 1 -c True
   ```

3. **Save the initial MLP graph and print its output**

   ```bash
   python run_model -m mlp -d True -p True
   ```
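To sweep every model with the same settings, the CLI can be driven from a small shell loop. Shown here as a dry run that prints each command; drop the leading `echo` to actually execute them:

```bash
# Dry run: print the benchmark command for each model.
for m in mlp mnist mobilenet resnet18 resnet50 squeezenet; do
  echo python run_model -m "$m" -r True -s 0 -f 5 -t True
done
```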

---

## Development & Contributing

Contributions welcome! Please:

1. Fork this repo
2. Create a branch: `git checkout -b feature/your-feature`
3. Add or update benchmark scripts under `benchmarks/`
4. Update `requirements.txt` if adding new dependencies
5. Submit a Merge Request describing your changes

See [CONTRIBUTING.md](docs/CONTRIBUTING.md) for detailed guidelines (coming soon).

---

## License

This project is licensed under the **Eclipse Public License 2.0**. See [LICENSE](LICENSE) for details.