Compare revisions

Commits on Source (48), with 1424 additions and 118 deletions.
@@ -8,12 +8,13 @@ stages:
- deploy
include:
- project: eclipse/oniro-core/oniro
ref: kirkstone
file:
- .oniro-ci/dco.yaml
- .oniro-ci/reuse.yaml
- .oniro-ci/build-generic.yaml
- project: eclipse/oniro-core/oniro
ref: kirkstone
file:
- .oniro-ci/dco.yaml
- .oniro-ci/reuse.yaml
- project: 'eclipsefdn/it/releng/gitlab-ci-templates'
file: '/jobs/eca.gitlab-ci.yml'
dco:
extends: .dco
@@ -26,50 +27,226 @@ reuse:
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
# Customize the .workspace job to set the path of the git repository to deviate
# from what the git-repo manifest prepares. This effectively allows testing
# incoming changes that match the repository holding this CI pipeline.
.workspace:
eca:
extends: .eca
# Naming scheme for variables is defined as follows:
#
# CI_ONIRO_*:
# Oniro specific variables used during CI/CD process. Variables in this group
# should use meaningful names and avoid abbreviations, if possible.
#
# CI_ONIRO_MANIFEST_REPO_{URL,REV}:
# URL and revision of the manifest repository.
#
# CI_ONIRO_MANIFEST_MIRROR_REPO_{URL,REV,DIR}:
# URL, revision and directory path of the mirror of manifest repository.
# This repository is used to speed up construction of repo workspace.
#
# CI_ONIRO_REPO_WORKSPACE_DIR:
# Directory path of repo workspace.
#
# CI_*:
# Third party variables used during CI/CD process, defined by GitLab.
# Variables in this group are defined by GitLab and retain their original
# name.
#
# GIT_STRATEGY, CACHE_COMPRESSION_LEVEL:
# Part of GitLab interface.
.oniro-repo-workspace:
interruptible: true
image:
name: registry.ostc-eu.org/ostc/oniro/bitbake-builder:latest
variables:
CI_ONIRO_GIT_REPO_PATH: docs
# URL and branch or revision of the oniro.git repository which contains a
# repo manifest file.
#
# The revision should be bumped during the major release of Oniro but both
# variables can be changed to CI_PROJECT_URL/CI_COMMIT_SHA when testing
# changes landing to oniro.git.
CI_ONIRO_MANIFEST_REPO_URL: https://gitlab.eclipse.org/eclipse/oniro-core/oniro.git
CI_ONIRO_MANIFEST_REPO_REV: kirkstone
# URL and branch used with repo "repo --mirror" to speed up workspace
# construction.
#
# Those are distinct from CI_ONIRO_MANIFEST_REPO_{URL,REV} because the
# former variables can be redirected to CI_PROJECT_URL and CI_COMMIT_SHA,
# while those two stay fixed.
#
# The revision should _only_ be bumped during the major release of Oniro.
CI_ONIRO_MANIFEST_MIRROR_REPO_URL: https://gitlab.eclipse.org/eclipse/oniro-core/oniro.git
CI_ONIRO_MANIFEST_MIRROR_REPO_REV: kirkstone
# Directory where repo mirror is constructed. This location is covered by
# GitLab cache system, and will be reused between pipelines of the same
# project. Note that the .cache directory name is special.
CI_ONIRO_MANIFEST_MIRROR_REPO_DIR: $CI_PROJECT_DIR/.cache/repo-mirror
# XML snippet to inject as a "local manifest" for repo. Those allow arbitrary
# modifications to the project structure to happen before "repo sync" is used
# to construct the workspace.
#
# The default interpreter for the local manifest is plain "echo". For some
# more complex cases, where inline shell is required, use "eval" instead
# and put a "cat" command into the local manifest, coupled with a here-doc
# value.
CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST: ""
CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST_INTERPRETER: echo
# Directory where repo workspace is constructed.
CI_ONIRO_REPO_WORKSPACE_DIR: $CI_PROJECT_DIR/.tmp/repo-workspace
# Use fastest cache compression algorithm, as bulk of the cache is
# already-compressed git history.
CACHE_COMPRESSION_LEVEL: fastest
# Ask GitLab _not_ to check out the git repository associated with the
# project. Checking it out would be pointless, since we use repo, not pure
# git, to construct the workspace. Because oniro is self-referential (the
# manifest refers to the repository that contains the manifest), custom
# logic is required to behave correctly in scenarios that modify oniro.git
# in any way (e.g. a branch, a pull request or a merge train).
GIT_STRATEGY: none
cache:
- key:
prefix: repo-mirror-$CI_ONIRO_MANIFEST_MIRROR_REPO_REV
files:
- default.xml
paths:
- $CI_ONIRO_MANIFEST_MIRROR_REPO_DIR
before_script:
# Define helper functions to generate GitLab fold markers.
- |
# Disable all the bitbake jobs, since we are not building any code here.
.bitbake-workspace:
rules:
- when: never
function gl_section_open() {
printf '\e[0K''section_start'':%s:%s\r\e[0K%s\n' "$(date +%s)" "$1" "$2"
}
function gl_section_open_collapsed() {
printf '\e[0K''section_start'':%s:%s[collapsed=true]\r\e[0K%s\n' "$(date +%s)" "$1" "$2"
}
function gl_section_close() {
printf '\e[0K''section_end'':%s:%s\r\e[0K\n' "$(date +%s)" "$1"
}
# Query system information. This is mostly useful for forensics, when
# something goes wrong and access to basic information of this type can
# help to uncover the problem.
- gl_section_open_collapsed system_info "Querying system information"
- id
- uname -a
- cat /etc/os-release
- free -m
- lscpu
- env | grep -E '^CI_ONIRO' | sort
- gl_section_close system_info
# Set up Git with bot identity. Eclipse ECA check allows this user to
# create and send commits.
- gl_section_open_collapsed setup_git "Setting up git"
- git config --global --add safe.directory "$CI_PROJECT_DIR"
- git config --global user.name "Oniro Core Project Bot"
- git config --global user.email "oniro-core-bot@eclipse.org"
- gl_section_close setup_git
# Since GIT_STRATEGY is set to 'none', the GitLab runner does not perform
# any cleanup operations on CI_PROJECT_DIR. As a consequence, repo can
# observe junk brought in by previous executions on the same runner, and
# get confused. Perform manual cleanup by removing all top-level items,
# other than .cache, where the cache items are strategically located,
# before proceeding.
- gl_section_open_collapsed cleanup_project_dir "Clean-up project directory"
- find "$CI_PROJECT_DIR" -mindepth 1 -maxdepth 1 ! -name .cache -exec rm -rf {} \;
- ls -la "$CI_PROJECT_DIR"
- gl_section_close cleanup_project_dir
# Create and update a mirror for repo, using the semi-fixed manifest mirror
# repo URL and revision. Since this is cached, the "repo init" part is
# rarely executed (see the test command below), and only the forced
# synchronization is executed.
#
# Note that the location of the mirror is stored in GitLab cache using the
# repo revision as cache key, allowing multiple releases to co-exist
# efficiently.
- gl_section_open_collapsed repo_mirror_setup "Setting up repo mirror"
- mkdir -p "$CI_ONIRO_MANIFEST_MIRROR_REPO_DIR"
- pushd "$CI_ONIRO_MANIFEST_MIRROR_REPO_DIR"
- echo "Initializing repository mirror from $CI_ONIRO_MANIFEST_MIRROR_REPO_URL and $CI_ONIRO_MANIFEST_MIRROR_REPO_REV"
- test ! -e .repo && repo init --mirror --manifest-url "$CI_ONIRO_MANIFEST_MIRROR_REPO_URL" --manifest-branch "$CI_ONIRO_MANIFEST_MIRROR_REPO_REV" --no-clone-bundle
- echo "Synchronizing repository mirror"
- repo sync --force-sync || ( rm -rf .repo && repo init --mirror --manifest-url "$CI_ONIRO_MANIFEST_MIRROR_REPO_URL" --manifest-branch "$CI_ONIRO_MANIFEST_MIRROR_REPO_REV" --no-clone-bundle && repo sync)
- gl_section_close repo_mirror_setup
# Create a repo workspace using the mirror as reference. This is fairly
# efficient, as repo will hardlink files (assuming they live on the same
# filesystem) and avoid the bulk of the network traffic.
- gl_section_open_collapsed repo_workspace_setup "Setting up repo workspace"
- rm -rf "$CI_ONIRO_REPO_WORKSPACE_DIR" && mkdir -p "$CI_ONIRO_REPO_WORKSPACE_DIR"
- pushd "$CI_ONIRO_REPO_WORKSPACE_DIR"
- echo "Initializing repository workspace from $CI_ONIRO_MANIFEST_REPO_URL and $CI_ONIRO_MANIFEST_REPO_REV"
- repo init --reference "$CI_ONIRO_MANIFEST_MIRROR_REPO_DIR" --manifest-url "$CI_ONIRO_MANIFEST_REPO_URL" --manifest-branch "$CI_ONIRO_MANIFEST_REPO_REV" --no-clone-bundle
- mkdir -p "${CI_ONIRO_REPO_WORKSPACE_DIR}/.repo/local_manifests"
- test -n "${CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST:-}" && "$CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST_INTERPRETER" "$CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST" | tee "${CI_ONIRO_REPO_WORKSPACE_DIR}/.repo/local_manifests/local.xml"
- echo "Synchronizing repository workspace"
- repo sync --force-sync
- gl_section_close repo_workspace_setup
# Define a build-docs job that extends both the .workspace, for the general
# workspace setup, and .build-docs, for the documentation build logic. The
# script first assembles the workspace and then proceeds to build the
# documentation.
#
# The job extends more than one parent, with the order being relevant for,
# among others, the "rules" section.
build-docs:
extends: [.workspace, .build-docs]
variables:
CI_ONIRO_INSTANCE_SIZE: s3.large.2
extends: [.oniro-repo-workspace]
interruptible: true
image:
name: registry.ostc-eu.org/ostc/oniro/docs-builder:latest
script:
- !reference [.workspace, script]
- !reference [.build-docs, script]
# Artifacts are relative to CI_PROJECT_DIR so we need to place the built
# docs there.
- mv "$SCRATCH_DIR"/docs/build/ "$CI_PROJECT_DIR" || true
- make -C docs
- mv docs/build "$CI_PROJECT_DIR"
artifacts:
paths:
- build
variables:
# When the workspace is created, substitute the "docs" repository that is
# described by the manifest with the project being tested. This works for
# forks and branches but not for merge requests. For those, see the rule
# below.
CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST: >
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<!-- remove original docs project entry -->
<remove-project name="oniro-core/docs.git" />
<!-- add remote representing the project -->
<remote name="oniro-override" fetch="${CI_PROJECT_URL}/../" />
<!-- add docs at the exact version we are testing -->
<project name="${CI_PROJECT_NAME}" path="docs" remote="oniro-override" revision="${CI_COMMIT_SHA}" />
</manifest>
rules:
# Build the docs when a merge request is created.
# During the merge request, substitute the "docs" repository that is
# described by the manifest with the project that is the source of the
# merge request. This does not test the merged result but is the next best
# thing we can do right now.
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
variables:
CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST_INTERPRETER: eval
CI_ONIRO_REPO_WORKSPACE_LOCAL_MANIFEST: |
cat <<__EOM__
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<!-- remove original docs project entry -->
<remove-project name="oniro-core/docs.git" />
<!-- add remote representing the project -->
<remote name="oniro-override" fetch="${CI_MERGE_REQUEST_SOURCE_PROJECT_URL}/../" />
<!-- add docs at the exact version we are testing -->
<project name="$(basename "$CI_MERGE_REQUEST_SOURCE_PROJECT_PATH")" path="docs" remote="oniro-override" revision="${CI_COMMIT_SHA}" />
</manifest>
__EOM__
# Or build when changes land on the default branch.
- if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
deploy:
extends: .workspace
extends: .oniro-repo-workspace
image:
name: registry.ostc-eu.org/ostc/oniro/docs-builder:latest
stage: deploy
script:
- !reference [.workspace, script]
# We are in the root of the git-repo workspace.
# We are in the root of the git-repo workspace. Because
# .oniro-repo-workspace uses GIT_STRATEGY=none, the workspace is not
# cleaned automatically.
- rm -rf aggregated
- git clone https://user:$CI_ONIRO_AGGREGATED_DOCS_TOKEN@gitlab.eclipse.org/eclipse/oniro-core/oniro-readthedocs-aggregated.git aggregated
- find aggregated -maxdepth 1 -not -path aggregated/.git -not -path aggregated -exec rm -rvf {} \;
- tar -c --dereference -C docs --exclude ./.git --exclude ./.gitlab-ci.yml . | tar -x -C aggregated
@@ -4,17 +4,29 @@ SPDX-FileCopyrightText: Huawei Inc.
SPDX-License-Identifier: CC-BY-4.0
-->
- [Gitlab Contributions](#gitlab-contributions)
- [Overview](#overview)
- [Commit Guidelines](#commit-guidelines)
- [Contributions to Documentation](#contributions-to-documentation)
- [REUSE Compliance](#reuse-compliance)
- [SPDX Information and REUSE Standard](#spdx-information-and-reuse-standard)
- [SPDX Header Example](#spdx-header-example)
- [Substantial Contributions](#substantial-contributions)
- [DCO sign-off](#dco-sign-off)
- [Overview](#overview-1)
- [Developer Certificate of Origin](#docs_dco)
- <a href="#eclipse-contributor-agreement" id="toc-eclipse-contributor-agreement">Eclipse Contributor Agreement</a>
- <a href="#gitlab-contributions" id="toc-gitlab-contributions">Gitlab Contributions</a>
- <a href="#overview" id="toc-overview">Overview</a>
- <a href="#git-setup" id="toc-git-setup">Git setup</a>
- <a href="#commit-guidelines" id="toc-commit-guidelines">Commit Guidelines</a>
- <a href="#contributions-to-documentation" id="toc-contributions-to-documentation">Contributions to Documentation</a>
- <a href="#creating-merge-requests" id="toc-creating-merge-requests">Creating merge requests</a>
- <a href="#reuse-compliance" id="toc-reuse-compliance">REUSE Compliance</a>
- <a href="#spdx-information-and-reuse-standard" id="toc-spdx-information-and-reuse-standard">SPDX Information and REUSE Standard</a>
- <a href="#spdx-header-example" id="toc-spdx-header-example">SPDX Header Example</a>
- <a href="#dep5-files-paragraph-examples" id="toc-dep5-files-paragraph-examples">DEP5 "Files" Paragraph Examples</a>
- <a href="#substantial-contributions" id="toc-substantial-contributions">Substantial Contributions</a>
- <a href="#dco-sign-off" id="toc-dco-sign-off">DCO sign-off</a>
- <a href="#overview-1" id="toc-overview-1">Overview</a>
- <a href="#docs_dco" id="toc-docs_dco">Developer Certificate of Origin</a>
# Eclipse Contributor Agreement
Before your contribution can be accepted by the project team, contributors must electronically sign the [Eclipse Contributor Agreement (ECA)](http://www.eclipse.org/legal/ECA.php).
Commits must have a Signed-off-by field in the footer indicating that the author is aware of the terms by which the contribution has been provided to the project. Also, an associated Eclipse Foundation account needs to be in place with a signed Eclipse Contributor Agreement on file. These requirements are enforced by the Eclipse Foundation infrastructure tooling.
For more information, please see the [Eclipse Committer Handbook](https://www.eclipse.org/projects/handbook/#resources-commit).
# Gitlab Contributions
@@ -22,6 +34,19 @@ SPDX-License-Identifier: CC-BY-4.0
Oniro Project handles contributions as [merge requests](https://docs.gitlab.com/ee/user/project/merge_requests/) to the relevant repositories that are part of the Oniro Project [GitLab instance](https://gitlab.eclipse.org/eclipse/oniro-core). The flow is the classic fork-based merge request: once you have an account, you can fork any repository, create a branch with the proposed changes, and raise a merge request against the forked repository. More general information can be found in GitLab's documentation as part of ["Merge requests workflow"](https://docs.gitlab.com/ee/development/contributing/merge_request_workflow.html).
## Git setup
Clone your fork locally, enter its directory and set:
``` bash
$ git config --local user.email <your_eclipse_account_email>
$ git config --local user.name <your_eclipse_full_name>
```
To push and pull over HTTPS with Git using your account, you must set a password or [a Personal Access Token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).
If you want to push or pull repositories using SSH, you have to [add an SSH key](https://docs.gitlab.com/ee/user/ssh.html) to your profile.
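If you prefer SSH, a minimal sketch of switching an existing clone over (the fork path is a placeholder for your own namespace):

``` bash
# Inspect the current remote first
$ git remote -v
# Point origin at the SSH form of your fork's URL (placeholder path)
$ git remote set-url origin git@gitlab.eclipse.org:<your_username>/docs.git
```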
## Commit Guidelines
<div class="note">
@@ -40,39 +65,39 @@ At its core, contributing to the Oniro Project means *wrapping* your wor
To achieve this, we maintain the following commit guidelines:
- Each commit should be able to stand by itself, providing a building block as part of the MR.
  - A good balance of granularity with scoped commits helps to handle backports (e.g. cherry-picks) and also improves the ability to review smaller chunks of code, commit by commit.
- Changes that were added on top of changes introduced in the MR should be squashed into the initial commit.
  - For example, an MR that introduced a new build system recipe and, as a separate commit, fixed a build error in the initial recipe. The latter commit should be squashed into the initial commit.
  - For example, an MR introducing a new docs chapter and also adding, as a separate commit, some typo fixes. The latter commits should be squashed into the initial commit.
  - There is a small set of exceptions to this rule. All these exceptions gravitate around the case where, even if an MR provides multiple commits in the same scope (for example, to the same build recipe), each of the commits has a very specific purpose.
    - For example, a line formatting change followed by a chapter addition change in the same documentation file.
    - Also, it can be the case of two functional changes that are building blocks in the same scope.
    - Another example where commits are not to be squashed is when one commit moves code and another modifies the code in the new location.
- Make sure you clean your code of trailing whitespace/tabs and that each file ends with a new line.
- Avoid *merge* commits as part of your MR. Your commits should be rebased on top of the *HEAD* of the destination branch.
As mentioned above, *git log* becomes informally part of the documentation of the product. Maintaining consistency in its format and content improves debugging, auditing, and general code browsing. To achieve this, we also require the following commit message guidelines:
- The *subject* line (the first line) needs to have the following format: `scope: Title limited to 80 characters`.
  - Use the imperative mood in the *subject* line for the *title*.
  - The *scope* prefix (including the colon and the following whitespace) is optional but most of the time highly recommended. For example, fixing an issue for a specific build recipe would use the recipe name as the *scope*.
  - The *title* (the part after the *scope*) starts with a capital letter.
  - The entire *subject* line shouldn't exceed 80 characters (the same text wrapping rule as for the commit body).
- The commit *body* is separated by an empty line from the *subject* line.
  - The commit *body* is optional but highly recommended. Provide a clear, descriptive text block that accounts for all the changes introduced by a specific commit.
  - The commit *body* must not contain more than 80 characters per line.
- The commit message has the commit message *trailers* separated by a new line from the *body*.
  - Each commit requires at least a *Signed-off-by* trailer line. See more as part of the `/contributing/dco` document.
  - All *trailer* lines are to be provided as part of the same text block - no empty lines in between the *trailers*.
Additional commit message notes:
- Avoid using special characters anywhere in the commit message.
- Be succinct but descriptive.
- Have at least one *trailer* as part of each commit: *Signed-off-by*.
- You can automatically let `git` add the *Signed-off-by* line by taking advantage of its `-s` argument.
- Whenever in doubt, check the existing log of the file (`<FILE>`) you are about to change, using something similar to: `git log <FILE>`.
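For instance, a minimal sketch of committing with an automatic *Signed-off-by* trailer (the file and message are hypothetical):

``` bash
# Stage a change (hypothetical file)
$ git add docs/chapter.rst
# -s adds the Signed-off-by trailer taken from user.name/user.email
$ git commit -s -m "docs: Add troubleshooting chapter"
# Inspect the result, including the trailer
$ git show --stat HEAD
```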
Example of a full git message:
@@ -95,23 +120,36 @@ In terms of file format, the project unifies its documentation as `ReStructuredT
As a rule of thumb, anything that ends up compiled into the project documentation must maintain the ReStructuredText file format. Text files that are not meant to be compiled as part of the project's documentation can be written in [Markdown](https://daringfireball.net/projects/markdown/). For example, a repository `README` file can be written in Markdown as it doesn't end up compiled into the project-wide documentation.
### Creating merge requests
Once your changes have been pushed to your fork, you are ready to prepare a merge request.
1. Open your repository in a web browser.
2. Create a merge request by clicking `Merge Requests` in the left toolbar and pressing `New merge request`. Add an explanatory description and create the merge request. Alternatively, you can open the website of your fork. You should see a message that you pushed your branch to the repository. In the same section you can press `Create merge request`.
3. Before merging, the merge request has to be reviewed and approved by Oniro Project repository maintainers. Read the review and add any required changes to your merge request.
4. After you polish your merge request, the maintainers will run the pipelines, which check that your changes do not break the project, and approve them. If everything is correct, your work is merged to the main project. Remember that each commit of the merge request should be a minimal, self-contained building block.
# REUSE Compliance
## SPDX Information and REUSE Standard
All projects and files for a hosted project **MUST** be [REUSE](https://reuse.software/) compliant. REUSE requires SPDX information for each file, rules for which are as follows:
- Any new file must have a SPDX header (copyright and license).
- For files that don't support headers (for example binaries, patches etc.) an associated `.license` file must be included with the relevant SPDX information.
- Do not add Copyright Year as part of the SPDX header information.
- The general rule of thumb for the license of a patch file is to use the license of the component for which the patch applies.
- When modifying a file through this contribution process, you may (but don't have to) claim copyright by adding a copyright line.
- Never alter copyright statements made by others, but only add your own.
Some files will make an exception to the above rules as described below:
- Files for which copyright is not claimed and for which this information was not trivial to fetch (for example backporting patches, importing build recipes etc. when upstream doesn't provide the SPDX information in the first place)
- license files (for example `common-licenses` in bitbake layers)
- for files copyrighted by the project's contributors (**"First Party Files"**):
  - any new file MUST have a SPDX header (copyright and license);
  - for files that don't support headers (for example binaries, patches etc.) an associated `.license` file MUST be included with the relevant SPDX information;
  - do not add Copyright Year as part of the SPDX header information;
  - the general rule for patch files is to use the MIT license and *not* the license of the component to which the patch applies - the latter solution would be error-prone and hard to manage and maintain in the long run, and there may be difficult-to-handle cases (what if the patch modifies multiple files in the same component - e.g. gcc - which are subject to different licenses?);
  - when modifying a file through this contribution process, you may (but don't have to) claim copyright by adding a copyright line;
  - you MUST NOT alter copyright statements made by others, but only add your own;
- for files copyrighted by third parties and just added to the project by contributors, e.g. files copied from other projects or back-ported patches (**"Third Party Files"**):
  - if upstream files already have SPDX headers, they MUST be left unchanged;
  - if upstream files do *not* have SPDX headers:
    - the exact upstream provenance (repo, revision, path) MUST be identified;
    - you MUST NOT add SPDX headers to Third Party Files;
    - copyright and license information, as well as upstream provenance information (in the "Comment" section), MUST be stored in `.reuse/dep5` following the [Debian dep5 specification](https://dep-team.pages.debian.net/deps/dep5/) (see examples below);
    - you MUST NOT use wildcards (\*) in dep5 "Files" paragraphs even if Debian specs allow it: it may lead to unnoticed errors or inconsistencies in case of future file additions that may be covered by wildcard expressions even if they have a different license;
  - in case of doubts or problems in finding the correct license and copyright information for Third Party Files, contributors may ask the project's Legal Team on the project mailing list <oniro-dev@eclipse.org>;
### SPDX Header Example
@@ -125,9 +163,29 @@ Make sure all of your submitted new files have a licensing statement in the head
*/
```
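For a file that cannot carry a header, a sketch of the accompanying `.license` file (the image name is a placeholder; the header mirrors the example above):

``` bash
$ cat > logo.png.license <<'EOF'
SPDX-FileCopyrightText: Huawei Inc.

SPDX-License-Identifier: Apache-2.0
EOF
```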
### DEP5 "Files" Paragraph Examples
``` text
Files: meta-oniro-staging/recipes-containers/buildah/buildah_git.bb
Copyright: OpenEmbedded Contributors
License: MIT
Comment: Recipe file for buildah copied from meta-virtualization project at
https://git.yoctoproject.org/meta-virtualization,
recipes-containers/buildah.
README file of meta-virtualization project states:
"All metadata is MIT licensed unless otherwise stated."

Files: meta-oniro-staging/recipes-devtools/ninja/ninja/0001-feat-support-cpu-limit-by-cgroups-on-linux.patch
Copyright: Google Inc.
License: Apache-2.0
Comment: Patch for ninja backported from Ninja project at
https://github.com/ninja-build/ninja, commit 540be33
Copyright text left as found in the header of the patched file.
```
### Substantial Contributions
Therefore, if your contribution is only a patch directly applied to an existing file, then you are not required to do anything. If your contribution is an entire new project, or a substantial, copyrighted contribution, you **MUST** make sure that you do that following the [IP Policy](https://booting.oniroproject.org/distro/governance/ip-policy) and that you comply with REUSE standard to include the licensing information where they are required.
Therefore, if your contribution is only a patch directly applied to an existing file, then you are not required to do anything. If your contribution is an entire new project, or a substantial, copyrighted contribution, you **MUST** make sure that you do that following the [IP Policy](https://git.ostc-eu.org/oss-compliance/ip-policy/) and that you comply with REUSE standard to include the licensing information where they are required.
# DCO sign-off
@@ -148,8 +148,6 @@ Test devices are connected to their local LAVA worker and managed by LAVA server.
* `How to deploy wic image on Raspberry Pi in LAVA <https://forum.ostc-eu.org/t/how-to-deploy-wic-image-on-raspberry-pi-in-lava/228>`_
* `Adding Arduino Nano 33 BLE Board to LAVA Lab <https://forum.ostc-eu.org/t/adding-arduino-nano-33-ble-board-to-lava-lab/215>`_
* `Adding Nitrogen Board to LAVA Lab <https://forum.ostc-eu.org/t/adding-nitrogen-board-to-lava-lab/192>`_
* `Adding Avenger96 Board to LAVA Lab <https://forum.ostc-eu.org/t/adding-avenger96-board-to-lava-lab/46>`_
References
----------
@@ -25,7 +25,7 @@ project = 'Oniro Project'
copyright = '2022'
author = 'Oniro Project'
version = '2.0.0-alpha'
version = '2.0.0'
release = version
# -- General configuration ---------------------------------------------------
@@ -8,9 +8,8 @@
-- tool. This is needed when converting a set of reST documents to a markdown
-- one where directives like `toctree` and `contents` are not supported and end
-- up translated literally.
function Div(el)
local class = el["c"][1][2][1]
if class == "toctree" or class == "contents" then
function Div(div)
if div.classes:includes('toctree') or div.classes:includes('contents') then
return {}
else
return nil
@@ -100,6 +100,7 @@ log "Converting to markdown..."
pandoc -s --toc --markdown-headings=atx --wrap=none -t gfm \
--lua-filter="$SCRIPT_PATH/CONTRIBUTING.lua" \
"$SCRIPT_PATH/../definitions.rst" \
"$SCRIPT_PATH/eca.rst" \
"$SCRIPT_PATH/gitlab.rst" \
"$SCRIPT_PATH/reuse.rst" \
"$SCRIPT_PATH/dco.rst" \
.. SPDX-FileCopyrightText: Huawei Inc.
..
.. SPDX-License-Identifier: CC-BY-4.0
.. include:: ../definitions.rst
Eclipse Contributor Agreement
#############################
Before your contribution can be accepted by the project team, contributors must
electronically sign the
`Eclipse Contributor Agreement (ECA) <http://www.eclipse.org/legal/ECA.php>`_.
Commits must have a Signed-off-by field in the footer indicating that the
author is aware of the terms by which the contribution has been provided to the
project. Also, an associated Eclipse Foundation account needs to be in place
with a signed Eclipse Contributor Agreement on file. These requirements
are enforced by the Eclipse Foundation infrastructure tooling.
For more information, please see the
`Eclipse Committer Handbook <https://www.eclipse.org/projects/handbook/#resources-commit>`_.
@@ -21,6 +21,24 @@ with proposed changes and raise a merge request against the forked repository.
More general information can be found in GitLab's documentation as part of
`"Merge requests workflow" <https://docs.gitlab.com/ee/development/contributing/merge_request_workflow.html>`_.
Git setup
*********
Clone your fork locally, enter its directory and set:
.. code-block:: bash
$ git config --local user.email <your_eclipse_account_email>
$ git config --local user.name <your_eclipse_full_name>
To push and pull over HTTPS with Git using your account, you must set a password
or `a Personal Access Token <https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html>`_.

If you want to push or pull repositories using SSH, you have to
`add an SSH key <https://docs.gitlab.com/ee/user/ssh.html>`_ to your profile.
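If you do not have an SSH key yet, a minimal sketch of generating one and
testing the connection (the email is a placeholder):

.. code-block:: bash

   $ ssh-keygen -t ed25519 -C "<your_eclipse_account_email>"
   $ ssh -T git@gitlab.eclipse.org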
Commit Guidelines
*****************
@@ -145,3 +163,21 @@ is to maintain the RestructuredText file format. Text files that are not meant t
as part of the project's documentation can be written in `Markdown <https://daringfireball.net/projects/markdown/>`_.
For example, a repository ``README`` file can be written in Markdown as it
doesn't end up compiled in the project-wide documentation.
Creating merge requests
-----------------------
Once your changes have been pushed to your fork, you are ready to prepare a merge request.
1. Open your repository in a web browser.
#. Create a merge request by clicking ``Merge Requests`` in the left toolbar
   and pressing ``New merge request``. Add an explanatory description and create the merge request.
   Alternatively, you can open the website of your fork. You should see a message that you
   pushed your branch to the repository. In the same section you can press ``Create merge request``.
#. Before merging, the merge request has to be reviewed and approved by |main_project_name|
   repository maintainers. Read the review and add any required changes to your merge request.
#. After you polish your merge request, the maintainers will run the pipelines, which check
   that your changes do not break the project, and approve them. If everything is correct, your work
   is merged to the main project. Remember that each commit of the merge request should be a minimal,
   self-contained building block.
@@ -16,6 +16,8 @@ requirements.
.. toctree::
:maxdepth: 1
quick-start-contribution-onboarding
eca
gitlab
reuse
dco
.. SPDX-FileCopyrightText: Huawei Inc.
..
.. SPDX-License-Identifier: CC-BY-4.0
.. include:: ../definitions.rst
Quick start contribution guide for new developers
#################################################
This page provides a quick start contribution guide for new developers who would
like to join the |main_project_name|.
.. contents::
:depth: 2
Setting up
**********
Creating an account on Eclipse
------------------------------
Head to the
`Eclipse foundation website <https://accounts.eclipse.org/user/register?destination=user/login>`_
and set up an account by entering your:
- Email
- Username
- Full name
- Organization
- Password
- Country
Then read and check the box to agree to the Terms of Use, Privacy Policy and Code of Conduct.
When you complete that, follow the instructions sent to your email to activate the account.
Signing the ECA
---------------
In order to contribute to the |main_project_name| you need to sign the
`Eclipse Contributor Agreement <https://accounts.eclipse.org/user/eca>`_,
which describes the terms under which you can contribute to the project.
By signing the ECA, you confirm that you have the legal right to submit your
code to the project. You also grant a license to your contributions to Eclipse
and specified users; however, you still own your contributions.
EF Gitlab Account Setup
-----------------------
Now you can go to `the Oniro Gitlab <https://gitlab.eclipse.org/eclipse/oniro-core/oniro>`_.
You should use the account that was created in the previous step to log in.
For further information, go to the :doc:`Gitlab section <gitlab>`.
@@ -17,17 +17,26 @@ All projects and files for a hosted project **MUST** be `REUSE <https://reuse.s
compliant. REUSE requires SPDX information for each file, rules for which are
as follows:
* Any new file must have a SPDX header (copyright and license).
* For files that don't support headers (for example binaries, patches etc.) an associated ``.license`` file must be included with the relevant SPDX information.
* Do not add Copyright Year as part of the SPDX header information.
* The general rule of thumb for the license of a patch file is to use the license of the component for which the patch applies.
* When modifying a file through this contribution process, you may (but don't have to) claim copyright by adding a copyright line.
* Never alter copyright statements made by others, but only add your own.
* for files copyrighted by the project's contributors (**"First Party Files"**):
Some files will make an exception to the above rules as described below:
  * any new file MUST have a SPDX header (copyright and license);
  * for files that don't support headers (for example binaries, patches etc.) an associated ``.license`` file MUST be included with the relevant SPDX information;
  * do not add Copyright Year as part of the SPDX header information;
  * the general rule for patch files is to use the MIT license and *not* the license of the component to which the patch applies - the latter solution would be error-prone and hard to manage and maintain in the long run, and there may be difficult-to-handle cases (what if the patch modifies multiple files in the same component - e.g. gcc - which are subject to different licenses?);
  * when modifying a file through this contribution process, you may (but don't have to) claim copyright by adding a copyright line;
  * you MUST NOT alter copyright statements made by others, but only add your own;
* for files copyrighted by third parties and just added to the project by contributors, e.g. files copied from other projects or back-ported patches (**"Third Party Files"**):
  * if upstream files already have SPDX headers, they MUST be left unchanged;
  * if upstream files do *not* have SPDX headers:
    * the exact upstream provenance (repo, revision, path) MUST be identified;
    * you MUST NOT add SPDX headers to Third Party Files;
    * copyright and license information, as well as upstream provenance information (in the "Comment" section), MUST be stored in ``.reuse/dep5`` following the `Debian dep5 specification <https://dep-team.pages.debian.net/deps/dep5/>`_ (see examples below);
    * you MUST NOT use wildcards (\*) in dep5 "Files" paragraphs even if Debian specs allow it: it may lead to unnoticed errors or inconsistencies in case of future file additions that may be covered by wildcard expressions even if they have a different license;
  * in case of doubts or problems in finding the correct license and copyright information for Third Party Files, contributors may ask the project's Legal Team on the project mailing list oniro-dev@eclipse.org;
* Files for which copyright is not claimed and for which this information was not trivial to fetch (for example backporting patches, importing build recipes etc. when upstream doesn't provide the SPDX information in the first place)
* license files (for example ``common-licenses`` in bitbake layers)
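Before submitting, you can check compliance locally with the ``reuse`` tool,
for example:

.. code-block:: bash

   $ pip install reuse   # or install it from your distribution
   $ reuse lint          # reports files with missing or inconsistent SPDX info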
SPDX Header Example
-------------------
@@ -42,7 +51,30 @@ Make sure all of your submitted new files have a licensing statement in the head
* SPDX-License-Identifier: Apache-2.0
*/
DEP5 "Files" Paragraph Examples
-------------------------------
.. code-block:: text
Files: meta-oniro-staging/recipes-containers/buildah/buildah_git.bb
Copyright: OpenEmbedded Contributors
License: MIT
Comment: Recipe file for buildah copied from meta-virtualization project at
https://git.yoctoproject.org/meta-virtualization,
recipes-containers/buildah.
README file of meta-virtualization project states:
"All metadata is MIT licensed unless otherwise stated."

Files: meta-oniro-staging/recipes-devtools/ninja/ninja/0001-feat-support-cpu-limit-by-cgroups-on-linux.patch
Copyright: Google Inc.
License: Apache-2.0
Comment: Patch for ninja backported from Ninja project at
https://github.com/ninja-build/ninja, commit 540be33
Copyright text left as found in the header of the patched file.
Substantial Contributions
-------------------------
Therefore, if your contribution is only a patch directly applied to an existing file, then you are not required to do anything. If your contribution is an entire new project, or a substantial, copyrighted contribution, you **MUST** make sure that you do that following the `IP Policy <https://booting.oniroproject.org/distro/governance/ip-policy>`_ and that you comply with REUSE standard to include the licensing information where they are required.
Therefore, if your contribution is only a patch directly applied to an existing file, then you are not required to do anything. If your contribution is an entire new project, or a substantial, copyrighted contribution, you **MUST** make sure that you do that following the `IP Policy <https://git.ostc-eu.org/oss-compliance/ip-policy/>`_ and that you comply with REUSE standard to include the licensing information where they are required.
@@ -14,7 +14,7 @@ Contributing to Projects not Maintained by |main_project_name| Team
Overview
********
In order to comply with :ref:`Upstream first<sec-upstream>` rule and Open Source licenses requirements, |main_project_name| developers collaborate with several upstream projects to submit fixes, improvements, bug reports, problem investigation results etc. Contribution must be made in accordance with upstream project policy using the tooling upstream project prefers such as mailing list, github/gitlab pull/merge requests, etc.
In order to comply with "upstream first" rule and Open Source licenses requirements, |main_project_name| developers collaborate with several upstream projects to submit fixes, improvements, bug reports, problem investigation results etc. Contribution must be made in accordance with upstream project policy using the tooling upstream project prefers such as mailing list, github/gitlab pull/merge requests, etc.
.. _sec_upstream_contrib_signoff:
@@ -66,17 +66,37 @@ daily life.
oniro/hardware-support/index
building-project-documentation
.. toctree::
:caption: OS Features
:maxdepth: 2
ota
.. toctree::
:caption: Supported Technologies
:maxdepth: 2
oniro/supported-technologies/openthread
oniro/supported-technologies/matter
oniro/supported-technologies/containers
oniro/supported-technologies/modbus
oniro/supported-technologies/lvgl
oniro/supported-technologies/ledge
.. toctree::
:caption: Supported Toolchains
:maxdepth: 2
oniro/toolchains
.. toctree::
:caption: Troubleshoot
:maxdepth: 2
oniro/fallback-devices-support
oniro/debug-mode
oniro/update-tool
oniro/default-password
.. toctree::
:caption: Contribute
@@ -95,8 +115,8 @@ daily life.
.. toctree::
:caption: Policies and Compliance
:maxdepth: 2
ip-policy/index
Intellectual Property Policy <https://www.eclipse.org/org/documents/Eclipse_IP_Policy.pdf>
security/index
.. toctree::
.. SPDX-FileCopyrightText: Huawei Inc.
..
.. SPDX-License-Identifier: CC-BY-4.0
Intellectual Property Compliance Policy
#######################################
.. toctree::
:maxdepth: 1
ip-policy_policy/index
ip-policy_implementation-guidelines/index
../../ip-policy/implementation_guidelines/source/
\ No newline at end of file
../../ip-policy/policy/source/
\ No newline at end of file
.. SPDX-FileCopyrightText: Huawei Inc.
..
.. SPDX-License-Identifier: CC-BY-4.0
.. include:: definitions.rst
Over The Air (OTA) Updates
==========================
|main_project_name| provides support for updating Linux devices in the field.
With certain modifications, derivative projects can prepare and distribute
periodic updates to bring in up-to-date security patches, new features and
capabilities.
This document is meant to be read top-to-bottom, with an increasing level of
details presented at each stage. It starts with an overview of two supported
distribution flows and their suggested deployment scenarios:
* centrally managed with HawkBit
* anonymously updated with NetOTA
Moreover, it describes the architecture of the OTA stack on Linux devices and
how mutable persistent data is arranged. Lastly, a detailed architecture of the
on-device stack is described.
That final chapter is meant to assist developers in porting the stack to new
boards or debugging unexpected issues.
Important Considerations
------------------------
This chapter contains specific advice to the implementer of the update system.
|main_project_name| provides some good defaults and a starting point, but any
complete product must tune and adjust a number of elements.
Failure to understand and correctly implement the following advice can cause
significant failure in the field. When in doubt, re-test and re-check.
Partitions
..........
|main_project_name| devices use an A/B model with two immutable system
partitions and separate partitions for boot, application data, system data and
immutable device data. The roles of these partitions were determined at the
design stage, and the partitions should be used in accordance with that intent.
OS, Not Apps
............
The update stack is designed to update the operating system, not applications.
Applications *may* be embedded into the operating system image but ideally
*should* be delivered as separate entities, for example, as system containers.
That de-couples their life-cycle and upgrade frequency from that of the base
system.
The sizes of the A/B partitions are fixed during the lifetime of the device.
Care should be taken to plan ahead, so that their sizes are not a constraining
factor during the evolution of the system software. This is also related to any
applications that may be bundled in the system image.
Each update requires a system reboot. In case of failure (total or partial),
another reboot is performed for the rollback operation. In contrast, some
application update stacks may be able to achieve zero-downtime updates.
Plan your updates so that the least downtime and interruption occur for the
users of your product.
Certificate Validity
....................
The update payload, also known as the *update bundle*, is verified against a known
public key or keyring contained inside the system image. The validity of the
keyring should be such that a device that was taken from long-term storage,
without receiving any intermediate updates, may successfully validate the
signature.
A conservative number of **ten years** is recommended. After the baked-in public
key expires, the device needs to be re-programmed externally, possibly involving
product recall.
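As an illustration only, a self-signed signing certificate with roughly ten
years of validity could be created as follows (names and key type are
placeholders, not the project's actual signing material):

.. code-block:: bash

   # 3650 days is approximately ten years
   openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
       -subj "/O=Example/CN=ota-signing" \
       -keyout ota-signing.key.pem -out ota-signing.cert.pem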
Space Requirements
..................
An update involves downloading the complete copy of the system partition. The
device must either use the application data partition (which should have enough
storage for typical use-cases) or must combine having enough memory in
RAM-based file system *and* use small enough images to ensure that the copy may
be fully downloaded. The choice of temporary storage for the download image
depends on the update model and will be discussed below.
Each update involves writing the system image to one of the available A/B slots.
Care should be taken to design the system with enough write endurance to support
updates during the entire lifetime of the product.
Time Requirements
.................
Update frequency incurs proportional load on the update server. A large enough
fleet of devices merely *checking* for an update can take down any single
server. To alleviate this, product design should balance update frequency (in
some cases it can be controlled remotely post-deployment) and spread the load
over time. It is strongly advisable to evenly distribute update checks with a
random element. If any potential updates must occur at a specific local time
(e.g. between three and four AM), then the system must be correctly configured
to observe the correct time zone. The update server can be scaled horizontally,
to an extent. At least for NetOTA, care was taken to allow efficiency at scale,
with stateless operation and no need for a traditional database. Any number of
geographically distributed replicas, behind load balancers and geo-routing, can
withstand an arbitrarily large load. The update server (both HawkBit and
NetOTA) separates meta-data from file storage, allowing network traffic to be
offloaded to optimized CDN solutions.
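For example, a client-side sketch that spreads polling over a one-hour window
(the update-check command is a placeholder):

.. code-block:: bash

   # Sleep between 0 and 3599 seconds so a large fleet does not poll the
   # server at the same instant.
   sleep "$(( RANDOM % 3600 ))"
   /usr/bin/check-for-update   # placeholder for the actual client invocation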
Partitions And Data
-------------------
The system image, as built by the Yocto recipe ``oniro-image-base`` (or a
derivative), has a corresponding update bundle, defined by the Yocto recipe
``oniro-bundle-base``.
The full disk image is meant to be programmed once, typically during manufacturing
or during initial one-time setup. The update bundle image is meant to be
uploaded to the upgrade server and downloaded to individual devices in the
field.
The disk is partitioned into the following partitions:
- boot (FAT)
- sys-a (squashfs)
- sys-b (squashfs or empty)
- devdata (ext4, ro)
- sysdata (ext4)
- appdata (ext4)
The update stack interacts with the boot partition, the sys-a and sys-b
partitions and the sysdata partition. The remaining partitions may be used by
other parts of the system, but are not directly affected by anything that
happens during the system (base OS) update process.
Boot And Update Process
-----------------------
The platform-specific boot loader chooses one of the system partitions, either
A or B, and boots into it. On EFI systems the kernel is loaded from the system
partition. Other boot loaders may need to load the kernel from the boot
partition. An appropriate redundancy scheme is used, to allow more than one
kernel to co-exist.
During early initialization of userspace, the immutable system partition
mounted at `/` is augmented with bind mounts to other partitions. In general,
application data (e.g. containers and other large data sets) is meant to live
on the application data partition, which does not use the A/B update model.
A certain small amount of data, such as system configuration and identity,
including the state of the update stack, is kept in the system-data partition.
Applications that are compiled into the image need overrides for their Yocto
recipes to allow them to persist state. In other words, stateful recipes
describing packages ending up in the system image need recipe overrides that
take advantage of the writable bbclass. This is handled with the ``WRITABLES``
system, which is documented in the associated bbclass for now.
When an update is initiated, a complete system image is downloaded to temporary
storage. The image is cryptographically verified against the RAUC signing key
or key-chain. Compatibility is checked against the RAUC ``COMPATIBLE`` string.
For more information about RAUC please refer to the official `RAUC
documentation <https://rauc.readthedocs.io/en/latest/>`_.
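For example, a downloaded bundle can be inspected, and its signature checked,
with the RAUC command line (paths are placeholders):

.. code-block:: bash

   rauc info --keyring=/etc/rauc/keyring.pem /tmp/update-bundle.raucb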
Valid and compatible images are mounted in a private mount namespace to reveal
the update image contained inside. That namespace is observed by the ``rauc``
process and the ``sysota-rauc-install-handler`` process. On |main_project_name|
systems, the update payload is a single squashfs image called ``system``. The
system image is then copied to the inactive slot, for example, when *slot A* is
active, then the image is copied to *slot B*. Platform-specific logic is then
used to configure the boot system to boot into the newly written slot **once**.
This acts as a safety mechanism, ensuring that power loss anywhere during the
update process has the effect of reverting back to the known-good image. After
the image is written, a platform-specific post-install hook schedules the
device to reboot, perhaps in a special way to ensure the boot-once constraint.
During boot-up, the platform firmware or the GRUB EFI application detects the boot-once
mode and uses the inactive slot for the remainder of the boot process. This
tests, in one go, the new kernel, kernel modules and any userspace
applications. On successful boot, late userspace takes the decision to commit
the update transaction. A committed transaction atomically swaps the
active-inactive role of the two system partitions.
If a failure, for example power loss or an unexpected software error,
prevents reaching the commit stage, then the update commit will not happen.
Depending on the nature of the failure, the device may restart automatically, or
may need to be restarted externally. It is recommended to equip and configure a
hardware watchdog to avoid the need for manual recovery during this critical
step, while ensuring the watchdog doesn't result in a reboot loop.
Once restarted, the known-good slot is booted into automatically and the upgrade
is aborted. Temporary data saved during the update process is removed, so that
it does not accumulate in the boot partition.
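The state of both slots can be inspected on a device, for example with:

.. code-block:: bash

   rauc status --detailed   # shows the active slot and per-slot boot status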
Supported Update Servers
------------------------
|main_project_name| supports two update servers: **HawkBit** and **NetOTA**.
HawkBit is a mature solution and recommended for scenarios where devices are
managed centrally by a single authority. The device manufacturer may sell
white-label boxes, deferring all management to the integrator or reseller. The
integrator must deploy, operate and maintain a HawkBit installation for the
lifetime of the product. All devices deployed in the field must be explicitly
provisioned with location and credentials before updates can be distributed.
For more information please refer to the official `HawkBit documentation
<https://www.eclipse.org/hawkbit/>`_.
NetOTA is still under development, but is recommended for scenarios where no
central authority manages devices, but the device manufacturer or vendor still
maintains the software over time, releasing updates that devices may install at
any time. The manufacturer may pre-provision all devices with the location of
the update server, the name of the image and a default update channel (for
example, the latest stable release). Depending on the product's user interface and other
integration requirements, end users may trigger the update process manually or
the device may automatically attempt to update from time to time.
HawkBit Update Server
---------------------
Eclipse HawkBit can be used to manage any number of devices of diverse types.
Devices periodically contact HawkBit over HTTPS to check if an update is
available. Whoever operates the HawkBit server has total control of the
software deployed onto the devices.
HawkBit is most suited to environments where a single authority operates a
number of devices and wants to exert precise control over the update process.
Each device is separately addressable, although mass-updates (roll-outs) are
also supported.
This mode requires explicit provisioning. A configuration file for
``rauc-hawkbit-updater`` needs to be prepared for each device and installed
either during manufacturing or on-premises. |main_project_name| does not
currently offer any support for provisioning; this part is left to the integrator.
HawkBit supports several types of authentication between itself and devices in
the field. Both per-device authentication token and shared gateway tokens are
supported. Control over polling frequency is also available. HawkBit offers
advanced features for tracking and reporting devices, although not all of them
are supported by the ``rauc-hawkbit-updater`` client.
HawkBit is a complex piece of software with vast documentation. Refer to
https://www.eclipse.org/hawkbit/ for details. Small deployments, especially
useful for evaluation, can use the ``hawkbit`` snap package for a quick local
setup. The snap package is not optimized for a high number of users or
high-availability, so larger deployments are encouraged to learn about HawkBit
architecture and deploy a scalable installation across multiple machines.
Deploying HawkBit
.................
In order to evaluate HawkBit, it is best to use the ``hawkbit`` snap package.
The package offers several stability levels expressed as distinct snap tracks.
Installation instructions can be found on the `hawkbit snap information page
<https://snapcraft.io/hawkbit>`_.
The *stable* track offers HawkBit 0.2.5 and is not recommended for deployment
due to old age, number of open bugs and missing features. The *beta* track
offers HawkBit 0.3.0M7 and is recommended for evaluation. The *edge* track
offers a periodic build of the latest upstream HawkBit. This version is
annotated with the git commit hash and a sequential number counted since the
most recent tag.
**Warning**: HawkBit 0.2.5 does not offer updates to 0.3.0. This is an upstream
issue caused by faulty database migration.
Once the ``hawkbit`` snap is installed, run ``snap info hawkbit`` and read the
description explaining the available configuration options. Those are managed
through the snap configuration system. The name of the administrative account
can be set with ``snap set hawkbit username=user``. The password of the
administrative user can be similarly set with ``snap set hawkbit
password=secret``. By default HawkBit listens on ``localhost``, port ``8080``,
and is meant to be exposed by a reverse HTTP proxy. Evaluation installations
can use the insecure HTTP protocol directly and skip setting up the proxy. To
use HawkBit for evaluation, set the listen address to ``0.0.0.0`` or ``::``,
so that the service is reachable from all network interfaces. This can be done
with ``snap set hawkbit address=0.0.0.0``.
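
As an illustration, a minimal evaluation setup could look as follows (the
account name and password below are placeholders)::

   # Install HawkBit from the beta track, recommended for evaluation.
   sudo snap install hawkbit --channel=beta
   # Set the administrative credentials and listen on all interfaces.
   sudo snap set hawkbit username=admin password=secret address=0.0.0.0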
Once HawkBit is installed, it should be configured in one of several ways. The
primary deciding factor is how devices authenticate to HawkBit. The full
documentation is beyond the scope of this document, but for simple deployments
we recommend either the *per-device authentication token*, in which case
HawkBit has to be told about the presence of every distinct device, or the
*gateway authentication token*, in which case a single secret is shared among
all the devices and they all authenticate to the HawkBit server with it. This
configuration is exposed under the **System Config** menu, available from the
sidebar on the left.
In either mode, any number of devices can be created under the **Deployment**
menu. In HawkBit nomenclature, a device is called a *target*. Targets may be
clustered into target types, which aid in maintaining a heterogeneous fleet
more easily. Each target has a *controller ID*, which is a unique string
identifying the device in the system. In some authentication modes, devices
need to be provisioned not only with the URL of the HawkBit server, but also
with their *controller ID* and *security token*. Mass deployments can be
performed using bulk upload or using the management API.
The |main_project_name| project created a command line tool for working with
portions of the HawkBit management APIs. This tool is called ``hawkbitctl``
and is available both as a snap package and as a container on DockerHub
(``zyga/hawkbitctl``). To install ``hawkbitctl`` as a snap, see the
`hawkbitctl snap information page <https://snapcraft.io/hawkbitctl>`_. Refer
to the documentation of ``hawkbitctl`` to see how to use it to create devices
with given controller IDs and security tokens.
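
For example, assuming snap support is available on the management host::

   sudo snap install hawkbitctl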
Provisioning Devices For HawkBit
................................
SysOTA does not yet contain a native HawkBit client, so it leverages the
``rauc-hawkbit-updater`` program for this role. That program reads the
configuration file ``/etc/rauc-hawkbit-updater/config.conf``, which must be
owned by the ``rauc-hawkbit`` user, connects to the given HawkBit server,
authenticates using either a device or gateway token, and then listens for
events. |main_project_name| images contain a sample configuration file in
``/usr/share/rauc-hawkbit-updater/example.conf`` which can be used as a quick
reference.
At minimum, the following settings must be configured:
- The ``target_name`` field must be set to the *controller ID* of the target
  created in HawkBit. The values may be generated separately; for example, the
  manufacturing process may generate a batch of identifiers and save them in a
  CSV file to be imported into HawkBit later.
- The ``auth_token`` field must be set to the per-device authentication token.
  If gateway authentication is used, then ``gateway_token`` must be set
  instead. Similarly, the tokens may be generated in batches during
  manufacturing and stored along with the controller IDs in a CSV file.
- The ``hawkbit_server`` field must be set to the domain name or IP address of
  your HawkBit server. Domain names are recommended, but toy deployments may
  use local IP addresses as well.
Once the file is created and has the right ownership, you can start the
``rauc-hawkbit-updater.service`` systemd unit to verify that the client can
connect and authenticate correctly.
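
As a minimal sketch, assuming the keyfile layout of the bundled
``example.conf`` (including its ``ssl`` switch, shown here for plain-HTTP
evaluation servers) and placeholder values throughout, provisioning a single
device could look like this::

   # Write a minimal configuration; consult example.conf for all options.
   sudo tee /etc/rauc-hawkbit-updater/config.conf > /dev/null <<'EOF'
   [client]
   hawkbit_server = hawkbit.example.com:8080
   ssl            = false
   target_name    = device-0001
   auth_token     = example-device-token
   EOF
   # The file must be owned by the rauc-hawkbit user.
   sudo chown rauc-hawkbit /etc/rauc-hawkbit-updater/config.conf
   # Start the client to verify that it connects and authenticates.
   sudo systemctl start rauc-hawkbit-updater.service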
Working With HawkBit
....................
HawkBit offers both a web dashboard and a comprehensive set of REST APIs
covering all aspects of the management story. During exploration and
evaluation, it is recommended to use the graphical user interface. As the
workflow solidifies, switching to the REST APIs and automation is encouraged.
The general data model related to updates is as follows:
- the *update bundle* is expressed as an *artifact* on a *software module*
- the *software module* is added as an element of a *distribution set*
- the *distribution set* is assigned to a *target* for deployment
The |main_project_name| project has created the ``hawkbitctl`` utility, a tool
that easily creates the required scaffolding and uploads the bundle to the
server. However, the tool does not cover the entire API surface yet, and you
may find that specific functionality is missing. In such cases, custom
solutions may be used as a stop-gap measure, for example scripts using
``curl``.
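
For illustration, a single target could be created with plain ``curl`` against
the management API; the host, credentials, controller ID and token below are
placeholders::

   # Create one target with a chosen controller ID and security token.
   curl -u admin:secret \
        -H 'Content-Type: application/json' \
        -X POST http://hawkbit.example.com:8080/rest/v1/targets \
        -d '[{"controllerId": "device-0001",
              "name": "device-0001",
              "securityToken": "example-device-token"}]'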
HawkBit has one more essential piece of complexity: the type system, where
*targets* (devices), *software modules* and *distribution sets* have
corresponding type entities: *target types*, *software module types* and
*distribution set types*. The type system constrains which combinations are
valid and prevents mistakes. Devices of the same type should refer to a
*target type*, which in turn refers to a compatible *distribution set type*,
which finally refers to a compatible *software module type*. This allows an
actual update bundle to be placed in a new software module of the right
*type*, which in the end allows HawkBit to prevent assigning or rolling out
incorrect software to a specific device.
When using the graphical user interface, be aware that some operations are
only available as drag-and-drop interactions. This specifically applies to the
act of binding a *software module* to a *distribution set* and the act of
assigning a *distribution set* to a *target*.
Operators working with HawkBit are strongly encouraged to read the extensive
upstream documentation to understand the finer details of the data model,
specifically around the cardinality of relations.
Updating With HawkBit
.....................
The basic checklist for updating with HawkBit, assuming the update server is
deployed and the devices are provisioned, is as follows:
- Build the bundle recipe, for example ``oniro-bundle-base``. Products should
maintain a pair of recipes, one for the bundle and one for the whole image.
All the examples that refer to the base recipes here should be understood as
references to the actual recipe names used by the product.
- Collect the ``*.raucb`` (RAUC bundle) file from the Yocto deploy directory.
- Perform any QA process deemed necessary. This should at least involve
  copying the bundle to a real device and updating manually with
  ``rauc install``. It is recommended to test the software for at least a few
  days, to detect problems such as memory leaks that would not crash outright
  but may crash and cause issues after the update transaction is committed.
- Create a new *software module* with a unique combination of name and
  version, and a reference to an appropriate *software module type* created
  out-of-band, which describes RAUC update bundles for a specific class of
  devices.
- Upload the bundle as an artifact to the software module created earlier.
- Create a new *distribution set* with a unique combination of name and
  version, and a reference to an appropriate *distribution set type* created
  out-of-band, which describes a distribution that contains the software
  module type with the RAUC update bundle.
- Bind the *software module* to the *distribution set* (by drag-and-drop).
At this stage, the update is uploaded and can be rolled out or assigned to
individual devices. Once a device is asked to update, it will download and
install the bundle. Basic information about the process is relayed from the
device to HawkBit and can be seen in per-device action history.
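
The same flow can be sketched over the management API with ``curl``, assuming
a *software module type* with the key ``os`` was created out-of-band and using
placeholder host, credentials and version values::

   # Create a software module with a unique name/version combination.
   curl -u admin:secret \
        -H 'Content-Type: application/json' \
        -X POST http://hawkbit.example.com:8080/rest/v1/softwaremodules \
        -d '[{"name": "oniro-bundle-base", "version": "1.0.0", "type": "os"}]'
   # Upload the RAUC bundle as an artifact; the module ID (here 1) comes
   # from the response of the previous call.
   curl -u admin:secret \
        -X POST http://hawkbit.example.com:8080/rest/v1/softwaremodules/1/artifacts \
        -F 'file=@oniro-bundle-base.raucb'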
When testing updates with a small number of devices, the distribution set may
be dragged and dropped onto the device to commence the update for that specific
device.
NetOTA Update Server
--------------------
The NetOTA project can be used to distribute software to diverse devices from
one or more servers. Devices periodically contact NetOTA over HTTPS to check
if an update is available. Whoever operates the NetOTA server chooses the
composition and number of available system images, and devices can be
configured to follow a specific image name and stability level.

Unlike in the HawkBit model, the central server has no control over the
devices. Instead, whoever controls an individual device chooses the server,
the image name and the stability level, and then follows along at the pace
determined by the device.
This mode requires minimal provisioning: either install a configuration file,
or use the ``sysotactl`` utility to set the name of the package, the stability
level and the URL of the update server. In addition, a systemd timer or an
equivalent userspace agent must periodically call the ``sysotactl update``
command or the corresponding D-Bus API.
NetOTA is beta-quality software. It can be used and has documentation
sufficient for deployment, but it was not tested daily during the development
of the |main_project_name| release. This mode is documented for completeness,
since it complements the centrally managed HawkBit mode.
For more information about deploying NetOTA, creating an update repository and
uploading software to said repository, please refer to the `upstream
documentation <https://gitlab.com/zygoon/netota>`_.
Deploying NetOTA
................
To deploy NetOTA for evaluation, it is best to use the ``netota`` snap
package. The package offers several stability levels expressed as distinct snap
tracks. Installation instructions can be found on the `netota snap information
page <https://snapcraft.io/netota>`_.
The *stable* track offers NetOTA 0.3.2 and is recommended for deployment. The
*edge* track offers automatic builds from the continuous integration system.
This version is annotated with the git commit hash and a sequential number
counted since the most recent tag.
Once the ``netota`` snap is installed, run ``snap info netota`` and read the
description explaining the available configuration options. Those options are
managed through the snap configuration system. By default NetOTA listens on
``localhost``, port ``8000``, and is meant to be exposed by a reverse HTTP
proxy. Evaluation installations can use the insecure HTTP protocol directly
and skip setting up the proxy. To use NetOTA for evaluation, set the listen
address to ``0.0.0.0:8000`` or ``[::]:8000``, so that the service is reachable
from all network interfaces. This can be done with ``snap set netota
address=0.0.0.0:8000``.
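
For example, an evaluation instance could be set up as follows::

   sudo snap install netota
   sudo snap set netota address=0.0.0.0:8000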
NetOTA does not offer any graphical dashboard and is configured by placing
files in the file system. The snap package uses the directory
``/var/snap/netota/common/repository`` as the root of the data set. Upon
installation, an ``example`` package is copied there. It can be used to
understand the data structure used by NetOTA. Evaluation deployments can edit
the data in place with a text editor. Production deployments are advised to
use a git repository to track deployment operations. Updates to the repository
do not need to be atomic. The systemd service ``snap.netota.netotad.service``
can be restarted to re-scan the file system structure and present updated
information over the REST APIs used by devices in the field. Alternatively,
the ``SIGHUP`` signal may be sent to the ``netotad`` process for the same
effect, without any observable downtime.
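
For example, assuming the snap's services are managed through systemd::

   # Re-scan the repository by restarting the service (brief downtime)...
   sudo systemctl restart snap.netota.netotad.service
   # ...or reload in place by sending SIGHUP to netotad (no downtime).
   sudo systemctl kill --signal=SIGHUP snap.netota.netotad.service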
Provisioning Devices For NetOTA
...............................
SysOTA contains a native NetOTA client and maintains all associated state and
configuration. The configuration is exposed as a D-Bus API and is meant to be
consumed by custom device agents developed for a particular solution. The
D-Bus API can control the URL of the NetOTA server and the package name and
stream name to follow, as well as perform an update and monitor its progress.
For convenience, the same APIs are exposed as the command line tool
``sysotactl``. The tool has built-in help. By default, the status of the
current configuration and state is displayed. Use ``sysotactl set-server URL``
to set the URL of the NetOTA deployment, ``sysotactl set-package`` to set the
name of the package containing the system image for your product, and
``sysotactl set-stream`` to set the name of the stream of the package to
subscribe to.
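
For example, with a placeholder server URL and the ``example`` package shipped
with the ``netota`` snap (the ``stable`` stream name is likewise a
placeholder)::

   sysotactl set-server https://netota.example.com
   sysotactl set-package example
   sysotactl set-stream stable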
Using the command ``sysotactl streams``, you can discover the set of streams
available for your package. Streams allow a fleet of devices to follow
different versions of the same package. This can be useful for canary testing,
major version upgrades, or hot-fixing a particular device experiencing an
issue, without having to upgrade all devices at the same time.
Using the command ``sysotactl update`` you can trigger an update. Updated
software is downloaded and installed automatically. D-Bus signals are sent
throughout the process, allowing any user interface present on the device to
display appropriate information.
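
For example::

   # Discover the streams available for the configured package.
   sysotactl streams
   # Check for, download and install an update.
   sysotactl update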
The same configuration can be provided by editing the SysOTA configuration
file ``/etc/sysota/sysotad.conf``. See the ``sysotad.conf`` manual page for
details.
Updating With NetOTA
....................
The basic checklist for updating with NetOTA, assuming the update server is
deployed and the devices are provisioned, is as follows:
- Build the bundle recipe, for example ``oniro-bundle-base``. Products should
maintain a pair of recipes, one for the bundle and one for the whole image.
All the examples that refer to the base recipes here should be understood as
references to the actual recipe names used by the product.
- Collect the ``*.raucb`` (RAUC bundle) file from the Yocto deploy directory.
- Perform any QA process deemed necessary. This should at least involve
  copying the bundle to a real device and updating manually with
  ``rauc install``. It is recommended to test the software for at least a few
  days, to detect problems such as memory leaks that would not crash outright
  but may crash and cause issues after the update transaction is committed.
- Choose which stream to publish the bundle to. You can create additional
  streams at will by touching a ``foo.stream`` file; make sure to create the
  corresponding ``foo.stream.d`` directory as well. This creates the stream
  ``foo`` (see the sketch after this list). If you choose an existing stream,
  remember that all the *archives* present in that stream must have exactly
  the same version. This means you may need to perform additional builds if
  the package is built for more than one architecture or ``MACHINE`` value.
- Create a new file with the extension ``.archive`` that describes the newly
  built bundle. This process is somewhat involved, as several pieces of
  information need to be provided. The archive file should be placed in the
  ``.stream.d`` directory of the stream you selected earlier. The archive
  must contain at least one ``[Download]`` section with a ``URL=`` entry
  pointing to an HTTP server that hosts the file. For local deployments you
  can use any web server you have available. In larger deployments you may
  choose to use a content delivery network provider to offer
  high-availability services for your fleet.
- If you are doing this for the first time, make sure to read the upstream
  documentation of the NetOTA project and consult the sample repository
  created by the ``netota`` snap package on first install. Ideally, keep your
  changes in a git repository, so that you can both track changes and revert
  to a previous state.
- Restart the NetOTA service or send ``SIGHUP`` to the ``netotad`` process.
  Note that if the new repository is not consistent in any way, an error
  message will be logged and the service will refuse to start up (if you
  chose to restart the service) or will keep serving the old content (if you
  chose to send the signal).
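
The following sketch ties the steps above together; the package directory, the
stream name, the archive file name and the download URL are all placeholders,
and real archive files need additional fields documented upstream::

   # Assuming the layout of the sample repository created by the snap.
   cd /var/snap/netota/common/repository/example
   # Create a new stream called "stable" together with its directory.
   touch stable.stream
   mkdir -p stable.stream.d
   # Describe the newly built bundle; at minimum a [Download] section
   # with a URL= entry pointing at an HTTP server hosting the file.
   cat > stable.stream.d/oniro-bundle-base-1.0.0.archive <<'EOF'
   [Download]
   URL=https://cdn.example.com/oniro-bundle-base-1.0.0.raucb
   EOF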
At this stage, the server will offer updates to devices when they ask. You can
perform the update manually with ``sysotactl update``, or if you have a custom
device agent, you may instruct it to perform the corresponding D-Bus call.
Limitations
-----------
The |main_project_name| update stack is by no means perfect. Knowing its
current weaknesses can help plan ahead. We tried to balance the design so that
no weakness is *fatal*, and so that the remaining gaps can be closed with
updates in the field.
Firmware Updates
................
Firmware is not updated by SysOTA. Product designers should consider
integrating ``fwupd`` and obtaining firmware from LVFS. The safety of updating
firmware in the field is difficult to assess. For EFI-capable systems, being
able to at least update the EFI firmware is strongly recommended.
CPU Microcode
.............
CPU microcode may be updated by the EFI firmware and by the early boot process.
At present, CPU microcode is not updated by the early boot process. This is
tracked as https://gitlab.eclipse.org/eclipse/oniro-core/oniro/-/issues/508.
GRUB Application Update
.......................
Updating the OS does not currently update the EFI application containing GRUB.
This is tracked as
https://gitlab.eclipse.org/eclipse/oniro-core/sysota/-/issues/8.
GRUB Script Update
..................
Updating the OS does not currently update the GRUB script. This is tracked as
https://gitlab.eclipse.org/eclipse/oniro-core/oniro/-/issues/523.
able to choose from Java, Extensible Markup Language (XML), C/C++, JavaScript
(JS), Cascading Style Sheets (CSS), and HarmonyOS Markup Language (HML) to
develop applications for |main_project_name|.
OpenHarmony Compatibility
-------------------------
|main_project_name| will be OpenHarmony compatible. It will include the
services and frameworks required by the OpenHarmony specification, and provide
the APIs required by that specification, so that you can develop products that
interoperate with other OpenHarmony products and can be certified as
OpenHarmony compatible.

OpenHarmony compatibility will enable the required OpenHarmony features in the
kernel layer, system services layer and framework layer, allowing the addition
of OpenHarmony compatible applications.

Due to the inherent modularity of the Oniro build system (OpenEmbedded),
individual projects will be able to pick and choose from the OpenHarmony
compatibility features, allowing products to be built with just the parts that
are needed.
Technical Architecture
----------------------
.. SPDX-FileCopyrightText: Huawei Inc.
..
.. SPDX-License-Identifier: CC-BY-4.0
.. include:: ../../../definitions.rst
2.0.0
#####
**Release timeframe**: 2022-03-07 .. 2022-11-30
**Release Artefacts Download Area**: https://download.eclipse.org/oniro-core/releases/2.0.0/
**Release Tags GPG Public Key**: https://download.eclipse.org/oniro-core/releases/2.0.0/oniro-2.0.0_gpg_key.asc [*]_
.. toctree::
:maxdepth: 1
release_notes
requirements
test_report
ip_compliance_note
security_report
.. [*] All repositories released as part of 2.0.0 have associated 2.0.0 git
   tags that have been signed during the Eclipse Foundation release process.
   You can use this GPG public key to verify all these signatures.
.. SPDX-FileCopyrightText: Alberto Pianon <pianon@array.eu> and Carlo Piana <piana@array.eu>
..
.. SPDX-License-Identifier: CC-BY-4.0
.. _2_0_0_IPComplianceNote:
IP Compliance Note
==================
Since the very beginning, a Continuous Compliance `toolchain`_ and `process`_
have been developed and integrated into the Oniro project development so that
source components used to generate Oniro binary images are continuously scanned
by open-source tools like `Fossology`_ and `Scancode`_, and reviewed by
Software Audit Experts and IP Lawyers [*]_.
For detailed information about the why and the how of such a process, please
refer to the Oniro Compliance Toolchain’s `official documentation`_. Sources
and documentation for custom components of the toolchain (`tinfoilhat`_,
`aliens4friends`_, `dashboard`_, `pipelines`_) can be found in their respective
repositories.
*TL;DR*: we put ourselves in your shoes, those of a device maker willing to
use Oniro to develop its products. We simulated the IP compliance work you
would have to do (on third-party components fetched by Yocto recipes) to build
your firmware image(s) and spot possible legal risks and issues. In the true
open-source spirit, every time we found an issue with a particular upstream
component, we raised that issue upstream, and most of the time upstream
developers solved it for you.
As of Oniro’s 2.0.0 GA Release, there are just a few issues left that we
cannot address. These relate to proprietary firmware/drivers for hardware
support and some patent-covered technologies. These issues require your
attention and possibly action on your part, e.g. obtaining a patent license.
We briefly explain them here.
The overall status of audit activities can be monitored through a `dedicated
dashboard`_, which gets updated after every commit to Oniro's main repository.
The dashboard also shows CVE information (collected at the time of the
commit), which can be filtered by target machine, image and individual
component.
All repositories included in the Oniro 2.0.0 Release are `REUSE compliant
<https://reuse.software/spec/>`_. This means that copyright and license
metadata for every source file are made available within each repository in a
standard machine-readable format, and that at any time one can generate an
SPDX SBoM [*]_ for such repositories with the `REUSE tool
<https://github.com/fsfe/reuse-tool>`_ by simply running the ``reuse spdx``
command. REUSE-generated SPDX files for all released repositories are
available as part of the `release artefacts download area
<https://download.eclipse.org/oniro-core/releases/oniro-v2.0.0_spdx_sbom.tar.gz>`_.
Last but not least, we provide reference SPDX SBoMs of the source packages
used to build the oniro-image-base and zephyr-philosophers images for a
selection of supported target machines (qemu, raspberrypi4,
arduino-nano-33ble), generated by the continuous compliance pipelines. They
are provided as a convenience only, with no express or implied warranty about
the accuracy and completeness of the information contained therein (see the
disclaimers below):
============================= ====== ============ =================== ===================
SBoM kernel toolchain(s) machine(s) image
============================= ====== ============ =================== ===================
`linux-qemu`_ linux gcc,clang qemu\* oniro-image-base
`linux-raspberrypi4`_ linux gcc,clang raspberrypi4-64 oniro-image-base
`zephyr-qemu`_ zephyr gcc qemu\* zephyr-philosophers
`zephyr-arduino-nano-33-ble`_ zephyr gcc arduino-nano-33-ble zephyr-philosophers
============================= ====== ============ =================== ===================
.. _linux-qemu: https://download.eclipse.org/oniro-core/releases/2.0.0/oniro-v2.0.0_linux-qemu_images_spdx_sbom.zip
.. _linux-raspberrypi4: https://download.eclipse.org/oniro-core/releases/2.0.0/oniro-v2.0.0_linux-raspberrypi4_images_spdx_sbom.zip
.. _zephyr-qemu: https://download.eclipse.org/oniro-core/releases/2.0.0/oniro-v2.0.0_zephyr-qemu_images_spdx_sbom.zip
.. _zephyr-arduino-nano-33-ble: https://download.eclipse.org/oniro-core/releases/2.0.0/oniro-v2.0.0_zephyr-arduino-nano-33-ble_images_spdx_sbom.zip
*Disclaimer#1*: This is not legal advice. This note is provided just as a
convenience for you, to suggest some critical areas in which you should seek
legal advice if you want to develop real-world products based on Oniro. It is
not meant to be complete nor to substitute internal due-diligence activities
you need to perform before marketing your products.
*Disclaimer#2*: This note covers only source components used to generate
supported Oniro images (oniro-image-base and zephyr-philosophers) for supported
target machines (qemux86-64, qemux86, qemuarm-efi, qemuarm64-efi,
raspberrypi4-64, seco-intel-b68, seco-px30-d23, seco-imx8mm-c61-2gb,
seco-imx8mm-c61-4gb, qemu-cortex-m3, nrf52840dk-nrf52840, arduino-nano-33-ble).
*Disclaimer#3*: “supported”, when *referring to a board*, means that the board
is officially targeted as a potential platform where an Oniro image can be
installed for any purpose; when *referring to an image*, it means that the
image targeting a supported board receives thorough testing and specific
attention during development. It does NOT mean that either will receive
support services, nor that any member of the Oniro Working Group or of the
Eclipse Foundation will provide any warranty whatsoever.
Solved Issues
-------------
- There was a proprietary software font accidentally included in
  zephyr-philosophers; we opened the issue upstream
  (https://github.com/zephyrproject-rtos/zephyr/issues/48111), it was solved
  (https://github.com/zephyrproject-rtos/zephyr/pull/49103), and the fix was
  backported to Oniro
  (https://gitlab.eclipse.org/eclipse/oniro-core/meta-zephyr/-/commit/0f36ae849d59da08e445af83f711a1c0108dd3bf).
- A similar issue was found in the Harfbuzz component; it was raised upstream
  (https://github.com/harfbuzz/harfbuzz/issues/3845), fixed
  (https://github.com/harfbuzz/harfbuzz/pull/3846), and the fix was backported
  to Oniro
  (https://gitlab.eclipse.org/eclipse/oniro-core/oniro/-/commit/fbb4bc229b287fa293439ee0adbb0d830764b2d8).
- Many binary files were found in zephyr-philosophers without corresponding
  sources and with no clear license information; we opened the issue upstream
  (https://gitlab.eclipse.org/eclipse/oniro-core/meta-zephyr/-/commit/0f36ae849d59da08e445af83f711a1c0108dd3bf),
  which was then fixed
  (https://github.com/zephyrproject-rtos/zephyr/pull/47181), and the fix was
  backported to Oniro
  (https://gitlab.eclipse.org/eclipse/oniro-core/meta-zephyr/-/commit/a00d1c4f1aad8b0ea5b9f904966c0bd8a48d8d80).
- Some proprietary license headers, granting neither redistribution nor any
  other rights without written permission from Intel, were found in some
  source files of the Intel-Media-SDK component; we opened the issue upstream
  (https://github.com/Intel-Media-SDK/MediaSDK/issues/2937) and it turned out
  to be an oversight that occurred when open-sourcing the component; it was
  then fixed (https://github.com/Intel-Media-SDK/MediaSDK/pull/2939) and the
  fix was backported to Oniro
  (https://gitlab.eclipse.org/eclipse/oniro-core/oniro/-/commit/d5ee837d90903d91a1ff358ebfe985d28925484e).
- A similar issue was found in the Intel-Media-Driver component; it was raised
  upstream (https://github.com/intel/media-driver/issues/1460), fixed
  (https://github.com/intel/media-driver/pull/1465), and the fix was
  backported to Oniro
  (https://gitlab.eclipse.org/eclipse/oniro-core/oniro/-/commit/b56de944568c8e348cb8265c59d7cfd52a0831b9).
Warnings for Downstream Users: Hardware Support
-----------------------------------------------
Linux
~~~~~
IMX Firmware
^^^^^^^^^^^^
A couple of supported target boards (seco-imx8mm-c61-2gb and
seco-imx8mm-c61-4gb) require Freescale i.MX firmware for the VPU and SDMA, as
well as firmware for the 8M Mini family to train the memory interface on the
SoC and DRAM during initialization. These firmware files require acceptance of
a `EULA`_ by the user (you). Such acceptance may be provided by setting a
specific variable (``ACCEPT_FSL_EULA = "1"``) in your configuration file
(please refer to Oniro’s technical documentation). You should carefully read
that `EULA`_ to check whether you are actually in a position to accept it and
whether you can fulfill all of its conditions. If needed, seek legal advice.
Linux-firmware
^^^^^^^^^^^^^^
The third-party components ``linux-firmware`` and ``linux-firmware-rpidistro``
contain many sub-components (mainly firmware BLOBs) for specific hardware
support, coming from different hardware vendors.
Almost all firmware vendor licenses restrict firmware usage to the vendor's
own specific device(s).
Some of them (apparently) contain further restrictions, stating that the
binary file is licensed *“for use with [vendor] devices, but not as a part of
the Linux kernel or in any other form which would require these files
themselves to be covered by the terms of the GNU General Public License”*. Our
understanding is that such a restriction is either redundant or useless. Apart
from some debatable and contested corner cases, there is no way in which a
firmware blob may become part of the Linux kernel and therefore be covered by
the GNU General Public License, so the above provision seems redundant. But
even if someone claimed that a proprietary firmware requires such low-level
interaction with the kernel that the firmware must be deemed a derivative work
of the kernel itself, such (alleged) non-compliance with the GPL could not be
avoided or excluded by a vendor license clause, so the above provision would
be useless. In either case, you should seek legal advice before using the
affected firmware files.
================================= ================================================== ======================== ============================
Source Device/driver File(s) License found in
================================= ================================================== ======================== ============================
`linux-firmware-20220913.tar.xz`_ Conexant Cx23100/101/102 USB broadcast A/V decoder v4l-cx231xx-avcore-01.fw WHENCE
`linux-firmware-20220913.tar.xz`_ meson-vdec - Amlogic video decoder meson/vdec/\* LICENSE.amlogic_vdec, WHENCE
`linux-firmware-20220913.tar.xz`_ lt9611uxc - Lontium DSI to HDMI bridge lt9611uxc_fw.bin LICENSE.Lontium, WHENCE
================================= ================================================== ======================== ============================
Some other firmware files are covered by proprietary licenses that contain
termination clauses providing that either party may terminate the license at
any time without cause. These may work as killswitches: the vendor may
terminate your license at any time without any reason, so your devices,
including already distributed ones, may lose, say, Bluetooth or Wi-Fi support.
You should seek legal advice (and possibly negotiate a different license with
the vendor) if you need to use the affected firmware files:
========================================================== ====================== ======== ================
Source Device/driver File(s) License found in
========================================================== ====================== ======== ================
[git://github.com/murata-wireless/cyw-fmac-fw@ba140e42] Murata Wi-Fi/Bluetooth cyfmac\* LICENCE, README
[git://github.com/murata-wireless/cyw-fmac-nvram@8710e74e] Murata Wi-Fi/Bluetooth cyfmac\* LICENCE.cypress
[git://github.com/murata-wireless/cyw-bt-patch@9d040c25] Broadcom BCM43455 Wifi \*.hcd LICENCE.cypress
========================================================== ====================== ======== ================
Some other firmware files (for NVIDIA hardware, which is not included in any
of Oniro’s supported boards) have been expressly excluded from installation
because they come with a proprietary license with an unclear “open source
exception”. See `issue #834`_ in the Oniro main repo for further details.
Some other firmware files are covered by a limited patent license. If you need
to use them, you should check whether you fulfill the conditions of such a
license.
================================= ========================= ============================= ======================
Source Device/driver File(s) License found in
================================= ========================= ============================= ======================
`linux-firmware-20220913.tar.xz`_ WiLink4 chips WLAN driver ti-connectivity/wl1251-fw.bin LICENCE.wl1251, WHENCE
================================= ========================= ============================= ======================
Finally, some licenses have unclear wording about use and redistribution. If
you need to use firmware covered by such licenses, you should check carefully
and possibly seek legal advice.
================================= ===================================================== ================================== =======================
Source Device/driver File(s) License found in
================================= ===================================================== ================================== =======================
`linux-firmware-20220913.tar.xz`_ WiLink4 chips WLAN driver ti-connectivity/wl1251-fw.bin LICENCE.wl1251, WHENCE
`linux-firmware-20220913.tar.xz`_ Marvell Libertas 802.11b/g cards libertas/\*.bin, mrvk/\*.bin LICENCE.Marvell, WHENCE
`linux-firmware-20220913.tar.xz`_ Marvell mac80211 driver for 80211ac cards mwlwifi/\*.bin LICENCE.Marvell, WHENCE
`linux-firmware-20220913.tar.xz`_ Marvell CPT driver mrvl/cpt01/\* LICENCE.Marvell, WHENCE
`linux-firmware-20220913.tar.xz`_ Marvell driver for Prestera family ASIC devices mrvl/prestera/\*.img LICENCE.Marvell, WHENCE
`linux-firmware-20220913.tar.xz`_ wave5 - Chips&Media, Inc. video codec driver cnm/wave521c_j721s2_codec_fw.bin LICENCE.cnm, WHENCE
`linux-firmware-20220913.tar.xz`_ Broadcom 802.11n fullmac wireless LAN driver brcm/brcmfmac/\*, cypress/cyfmac\* LICENCE.cypress, WHENCE
`linux-firmware-20220913.tar.xz`_ BCM-0bb4-0306 Cypress Bluetooth firmware for HTC Vive brcm/BCM-0bb4-0306.hcd LICENCE.cypress, WHENCE
================================= ===================================================== ================================== =======================
Zephyr
~~~~~~
The third-party repository ‘`zephyr-philosophers`_’ fetched by the
zephyr-philosophers recipe contains many sub-components for specific hardware
support, coming from different hardware vendors. Some of them have specific
proprietary license conditions (e.g. software components supporting Atmel SAM
L21, Altera Nios II and Cypress/Infineon PSoC6) but are not used to generate
Oniro images, so they are not covered here. Should you need to add support for
such hardware boards, which are not officially supported by Oniro, you should
carefully check the hardware vendor's license conditions.
Warnings for Downstream Users: Patents
--------------------------------------
“Dropbear” component documentation contains a patent and trademark notice:

   The author (Tom St Denis) is not a patent lawyer so this section is not to
   be treated as legal advice. To the best of the author’s knowledge, the only
   patent-related issues within the library are the RC5 and RC6 symmetric
   block cyphers. They can be removed from a build by simply commenting out
   the two appropriate lines in ``tomcrypt_custom.h``. The rest of the cyphers
   and hashes are patent-free or under patents that have since expired.

   The RC2 and RC4 symmetric cyphers are not under patents but are under
   trademark regulations. This means you can use the cyphers you just can’t
   advertise that you are doing so.
To the best of our knowledge, the patents on the RC5 and RC6 symmetric block
cyphers have also expired, but you should seek legal advice to check whether
there are still active patents covering these technologies.
.. [*]
Carlo Piana and Alberto Pianon from Array (Legal); Rahul Mohan G. and
Vaishali Avhad from NOI Techpark (Audit)
.. [*] SBoM is short for Software Bill of Materials, the full and detailed
   list of upstream components. SPDX is short for Software Package Data
   Exchange, an `ISO standard <https://spdx.github.io/spdx-spec>`_ to
   communicate information about software in a machine-readable form.
.. _toolchain: https://projects.eclipse.org/projects/oniro.oniro-compliancetoolchain
.. _process: https://gitlab.eclipse.org/eclipse/oniro-compliancetoolchain/toolchain/docs/-/tree/main/audit_workflow
.. _Fossology: https://www.fossology.org
.. _Scancode: https://nexb.com/scancode
.. _official documentation: https://gitlab.eclipse.org/eclipse/oniro-compliancetoolchain/toolchain/docs
.. _tinfoilhat: https://gitlab.eclipse.org/eclipse/oniro-compliancetoolchain/toolchain/tinfoilhat
.. _aliens4friends: https://gitlab.eclipse.org/eclipse/oniro-compliancetoolchain/toolchain/aliens4friends
.. _dashboard: https://gitlab.eclipse.org/eclipse/oniro-compliancetoolchain/toolchain/dashboard
.. _pipelines: https://gitlab.eclipse.org/eclipse/oniro-compliancetoolchain/toolchain/pipelines
.. _EULA: https://git.yoctoproject.org/meta-freescale/tree/EULA
.. _linux-firmware-20220913.tar.xz: https://cdn.kernel.org/pub/linux/kernel/firmware/linux-firmware-20220913.tar.xz
.. _issue #834: https://gitlab.eclipse.org/eclipse/oniro-core/oniro/-/issues/834
.. _zephyr-philosophers: https://github.com/zephyrproject-rtos/zephyr
.. _dedicated dashboard: https://sca.software.bz.it/?json=https://gitlab.eclipse.org/eclipse/oniro-compliancetoolchain/mirrors/oniro-goofy/-/jobs/artifacts/kirkstone/raw/report.harvest.json?job=harvest