The core engine of Backend.AI uses many open-source components, and it is itself developed as an open-source project. If you find any inconvenience or bugs while using Backend.AI, enterprise customers can use the customer and technical support channels for issue tracking, but you can also contribute to the open-source project directly. There are two ways to contribute: describing an issue or improvement idea in detail by creating an issue, and fixing the code yourself and submitting a pull request. In this post, we introduce several things to keep in mind so you can communicate with the development team more effectively and quickly during the contribution process.
GitHub Repository Introduction
As seen in the previous post, "Backend.AI Open Source Contribution Guide," Backend.AI was originally developed as a meta-repository plus several sub-component repositories. However, starting with version 22.06, Backend.AI has moved to a mono-repository using Pants.
The transition to this development workflow has helped establish a more convenient development environment by resolving the package compatibility issues that frequently occurred across the individual components.
Pants is a fast, scalable, and user-friendly build system.
First of all, if you want to raise an issue, the first place to look is the Backend.AI repository. The repository named Backend.AI integrates several packages using Pants; it is not just a project-management repository but also contains the code that actually implements the functionality. All issues related to the Backend.AI servers and Client SDKs are managed here, and links to other projects are provided through the README.
When creating a new issue, two basic templates are provided: a bug report and a feature request. It is not strictly necessary to follow these templates, but considering the complexity of Backend.AI and its varied usage environments, following them makes it easier to share the context needed for problem analysis.
Introduction to Mono-repository
From version 22.06, Backend.AI has changed to a mono-repository using Pants. A mono-repository is a single repository whose source code integrates the basic dependencies, data models, features, tooling, and processes of multiple projects; it combines projects that were previously maintained separately into one.
Introduction to Pants
Backend.AI uses Pants as its build system. For more information on Pants, please check the following link: Pants - Getting started.
Backend.AI Component Relationships
Figure 1 is a diagram that shows the relationship between major components of Backend.AI.
Figure 2 is a diagram that shows the Mono-repo structure of Backend.AI, including the source code location and execution commands of components, as well as external components (e.g., webui).
Most of the Backend.AI components are managed in the Backend.AI repository, and their source code is located under the src/ai/backend/ subdirectory.
Briefly, the directories of the major components are as follows:
- src/ai/backend/manager (Manager): The core service that monitors the computing resources of the entire cluster and provides APIs for user authentication and session execution.
- src/ai/backend/agent (Agent): A service installed on compute nodes that manages and controls containers.
- src/ai/backend/common (Common): A library that collects functions and data types commonly used by several server-side components.
- src/ai/backend/client (Client SDK for Python): A library that provides the official command-line interface and API wrapper functions and classes for Python.
- src/ai/backend/storage (Storage Proxy): A service that allows web browsers or Client SDKs to perform large-scale input/output directly against network storage.
- src/ai/backend/web (Web Server): An HTTP service that provides routing for the Web UI, the SPA (single-page app) implementation, and web-session-based user authentication.
Web and JavaScript-related components have repository and package names of the form backend.ai-xxx-yyy.
The following are the source repositories of the external components:
- backend.ai-client-js (Client SDK for JavaScript): A library that provides API wrapper functions and classes for JavaScript environments.
- backend.ai-webui (Web UI & Desktop App): A web-component-based implementation of the UI that users actually interact with. It also supports building desktop apps based on Electron, and includes a local version of the app proxy that allows users to connect directly to application ports running inside containers.
Backend.AI Version Management Method
Backend.AI releases a major release every six months (March and September each year) and provides post-release support for about a year.
Therefore, version numbers follow the CalVer format of YY.0M.micro (e.g., 20.09.14, 21.03.8).
However, due to version-number normalization in the Python packaging system, wheel package versions use the same numbers without zero padding in the month part (e.g., 20.9.14, 21.3.8).
Some sub-components whose update cycles differ from the main release cycle follow the general SemVer format instead.
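To illustrate the zero-padding difference described above, the following shell snippet (a hypothetical helper, not part of Backend.AI) converts a CalVer release tag into its normalized wheel-version form:

```shell
# Hypothetical helper (not part of Backend.AI): strip the zero padding from
# the month part of a YY.0M.micro CalVer tag, yielding the normalized
# YY.M.micro form used by Python wheel packages.
normalize_calver() {
  printf '%s\n' "$1" | sed -E 's/^([0-9]+)\.0([0-9])\./\1.\2./'
}

normalize_calver 20.09.14   # -> 20.9.14
normalize_calver 21.03.8    # -> 21.3.8
```

Months without a leading zero (e.g., a hypothetical 21.12.0) pass through unchanged, since only the `.0M.` pattern is rewritten.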
Essential Packages to Install before Development
Before installing Backend.AI, you must first install Python, pyenv, Docker, Docker Compose v2, and Rust.
When installing Backend.AI with the scripts/install-dev.sh script in the repository, it checks whether Python, pyenv, Docker, Rust, and other required packages are installed. If they are not, install the necessary packages as follows.
Please install Python 3 using your system's package manager. Afterwards, install pyenv and pyenv-virtualenv:
$ curl https://pyenv.run | bash
Then install Rust following the instructions at this link: Rust - Getting started.
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
$ source $HOME/.cargo/env
After that, you need to install Docker and Docker Compose v2.
macOS
For macOS, installing Docker Desktop for Mac automatically installs Docker and Docker Compose v2.
Linux environments such as Ubuntu, Debian, CentOS, Fedora Core, and Windows 10 WSL v2 environment
For Ubuntu, Debian, CentOS, Fedora Core, and other Linux environments, you can use the following script to automatically install Docker and Docker Compose v2:
$ sudo curl -fsSL https://get.docker.io | bash
After installing Docker, if you run it without sudo, you may encounter an access permission error for unix:///var/run/docker.sock:
$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
If you encounter such a permission issue, use the following commands to set the permissions:
$ sudo usermod -aG docker $(whoami)
$ sudo chmod 666 /var/run/docker.sock
$ sudo chown root:docker /var/run/docker.sock
For distributions other than the above:
To install on a Linux distribution other than those listed above, install the Docker package using your distribution's package manager, and then install Docker Compose v2 as a CLI plugin by referring to the following link: Install Docker Compose CLI plugin. The following is an example of installing Docker and Docker Compose v2 as a plugin on openSUSE.
- Installing Docker package on openSUSE
$ sudo zypper install docker
- Installing Docker Compose v2 as a plugin after installing the Docker package
In this case, you need to install it with root privileges.
$ sudo mkdir -p /usr/local/lib/docker
$ sudo mkdir -p /usr/local/lib/docker/cli-plugins
$ sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/lib/docker/cli-plugins/docker-compose
$ sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
You can check whether Docker Compose v2 is installed by running the following command:
$ sudo docker compose version
Docker Compose version v2.6.0
Installing Development Environment
To make actual code contributions, beyond fixing typos or contributing to documentation, you need to set up a development environment, since code changes must be tested by running them directly. Backend.AI has a structure in which multiple components work together, so merely cloning one repository and creating a Python virtual environment with an editable install1 is not enough. At minimum, you must set up and run the manager, agent, storage-proxy, webserver, and wsproxy to see the GUI, and install the client SDK separately for the CLI environment.
In addition, Redis, PostgreSQL, and etcd servers need to be run to manage the manager and communicate with the agent.
If the required packages mentioned above are installed, you can install the various components of Backend.AI using the scripts/install-dev.sh
script in the repository.
What the script does is as follows:
- Check whether packages such as pyenv, Docker, Rust, and Python are installed and provide instructions for installation if needed.
- Install the various components in their respective directories as editable installs, including components required by other components, such as accelerator-cuda.
- Set the default port for each component to communicate with each other and add fixtures, including example authentication keys, to the database/etcd.
- Create and run PostgreSQL, Redis, and etcd services using Docker Compose with the name "halfstack."
If the install-dev script runs successfully, it prints the commands for running the service daemons such as the manager and agent, along with example account information. Follow the instructions and use a terminal multiplexer such as tmux or screen, or your terminal app's multi-tab feature, to run the service daemons in separate shells. Once the hello-world example works, you are ready to develop and test Backend.AI.
Currently, this method supports only Intel (amd64/x86_64) and ARM (Armv8/v9) based macOS environments, Linux distributions with Docker Compose available such as Ubuntu/Debian/CentOS/Fedora, and the WSL v2 environment of Windows 10.
For WSL, it operates on the WSL v2 environment, and it is recommended to install Docker directly on WSL instead of using Windows Docker. Some additional settings are required in WSL, as shown in this link.
If you have installed Windows Docker, remove the "Enable integration with my default WSL distro" option in Settings > Resources > WSL Integration before running Backend.AI.
If you encounter the following error when running the Backend.AI agent:
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
You can fix it by taking the following measures and then running the Agent:
unset DOCKER_HOST
unset DOCKER_TLS_VERIFY
unset DOCKER_CERT_PATH
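To spot leftover variables like these before unsetting them (for instance, remnants of a Windows Docker installation), a small hypothetical helper can list everything Docker-related in the current environment:

```shell
# Hypothetical helper (for illustration only): print any DOCKER_* variables
# currently set in the environment, so you can see what needs to be unset
# before running the Backend.AI agent.
list_docker_env() {
  env | grep '^DOCKER_' || echo "no DOCKER_* variables set"
}

list_docker_env
```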
Usually, when using the install-dev script for the first time, it often fails due to various errors or failed pre-checks, requiring you to start over. In that case, the scripts/delete-dev.sh script makes it easy to clean up the partial installation before retrying.
Installing and Removing Backend.AI
With the install-dev and delete-dev scripts, you can freely install and remove Backend.AI.
First, clone the Backend.AI repository:
$ git clone https://github.com/lablup/backend.ai
Then install Backend.AI:
$ cd backend.ai
$ ./scripts/install-dev.sh
When the installation completes, be sure to read the resulting messages shown on the screen.
To remove Backend.AI, run the scripts/delete-dev.sh script from the location where you cloned the repository:
$ cd backend.ai
$ ./scripts/delete-dev.sh
Installing the Open-Source Version CUDA Plugin or the CUDA Mock Plugin
If you have installed Backend.AI as described above, it runs with CPU-only support by default.
If you want to use a GPU acceleration environment with the open-source version CUDA plugin or run a virtual GPU environment with the CUDA mock plugin, you need to install the following plugins:
- Installing the open-source CUDA plugin
To run a CUDA GPU acceleration environment, you must install the CUDA plugin when installing Backend.AI. When running the install-dev script, pass the --enable-cuda option to install the open-source CUDA plugin. For compatibility between specific Backend.AI release versions and CUDA plugin versions, refer to src/ai/backend/accelerator/cuda_open/README.md.
Before installing the open-source CUDA plugin, you must install nvidia-docker for the Backend.AI agent to run. If you install the open-source CUDA plugin without nvidia-docker, the agent will not run, and the following messages will be displayed when it is executed:
2022-07-18 14:15:14.796 INFO ai.backend.accelerator.cuda [51766] CUDA acceleration is enabled.
2022-07-18 14:15:14.811 WARNING ai.backend.accelerator.cuda [51766] nvidia-docker is not installed.
2022-07-18 14:15:14.811 INFO ai.backend.accelerator.cuda [51766] CUDA acceleration is disabled.
For how to install nvidia-docker, refer to NVIDIA's Install Guide - Docker.
Installing the CUDA mock plugin
To set up and run a virtual CUDA GPU acceleration environment, you must install the CUDA mock plugin when installing Backend.AI. When running the install-dev script, pass the --enable-cuda-mock option to install it.
Installing webui
If you want to install webui together with Backend.AI, add the --editable-webui option when running the install-dev script.
For more information on webui, please refer to the backend.ai-webui (Web UI & Desktop App) repository.
To build and run webui, you need to have the latest node and npm packages installed. To install webui with the install-dev script, check that node and npm are installed on your system, and then run the script with the --editable-webui option.
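The presence check for these tools can be sketched as a small shell function; the helper below is a made-up illustration, not part of the install-dev script:

```shell
# Hypothetical helper: report whether a required build tool is on PATH,
# so you can verify node and npm before passing --editable-webui.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

check_tool node
check_tool npm
```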
Things to know before contributing
As with most projects managed with a distributed version control system, to contribute to Backend.AI you must base your work on the latest commit of the main branch of the original remote repository and resolve any conflicts before requesting a review. Therefore, if you have forked the original repository, your fork must be kept synchronized with it.
It is helpful to refer to the following terminology before proceeding with the instructions.
- Upstream: The original Backend.AI repository. All major commit contents are reflected.
- Origin: The Backend.AI repository copied to "my" account through GitHub. (Note: upstream != origin)
- Local working copy: A forked repository downloaded to your local machine
Git branch notation
- main: the main branch of the current local working copy
- origin/main: the main branch of the repository (origin) from which the local working copy was cloned
- upstream/main: the main branch of the separately added upstream remote repository
Workflow concept
- origin/main is created when you fork.
- When you clone the forked repository, main is created on your working computer.
- Create a new topic branch from main and work on it.
- When you push this topic branch to origin and create a PR, GitHub automatically points it at the original repository you forked from.
- To synchronize your work with changes to the original repository's main, follow the procedure below.
- Step 1: Add the original remote repository with the name upstream.
$ git remote add upstream https://github.com/lablup/backend.ai
- Step 2: Fetch the latest commits of the original remote repository's main branch into the local working copy.
$ git fetch upstream
- Step 3: Merge the fetched changes from upstream/main into your local main branch to bring it up to date.
$ git switch main && git merge --ff upstream/main
- Step 4: Push the changes made in steps 1-3 to origin (the forked repository you created).
$ git push origin main
Now upstream/main and origin/main are synchronized through your local main branch.
- Step 5: Update your topic branch with the latest changes.
$ git switch topic
$ git merge main
If you tangle the histories of origin/main and upstream/main in step 5, recovery can become very difficult. In addition, the CI tools used by Backend.AI test the differences between upstream/main and origin/topic by finding their common ancestor commit, so if you reuse the main name for a topic branch, these tools will not work properly. It is best to always use a new name when creating a new branch.
Guidelines for Writing a Pull Request
To submit a PR for actual bug fixes or feature implementations, you first need to push your work to GitHub. There are several ways to do this, but the following steps are recommended:
- Fork the repository on the GitHub repository page. (If you have direct commit permissions, it is recommended to create a branch directly without forking.)
- In the local working copy, use git remote to point to the forked repository.
  - Conventionally, the upstream repository should be named upstream, and the newly forked repository should be named origin.
  - If you ran install-dev on a clone of the original repository instead of forking first, the original repository will be named origin, so you need to rename the remote.
- Create a new branch.
  - If it's a bug fix, prefix the branch name with fix/; if it's a feature addition or improvement, prefix it with feature/, and summarize the topic in kebab-case (e.g., feature/additional-cluster-env-vars, fix/memory-leak-in-stats). Other prefixes such as docs/ and refactor/ are also used.
  - Modifying the main branch directly and creating a PR from it is also possible, but it is more cumbersome because you would need to rebase or merge every time you synchronize with the upstream repository during the PR review and revision period. With a separate branch, you can rebase and merge whenever you want.
- Commit the changes to the branch.
  - The commit message should follow the conventional commit style where possible. Use title prefixes such as fix:, feat:, refactor:, docs:, and release:. Backend.AI additionally uses setup: for dependency-related commits and repo: for gitignore updates or repository directory-structure changes. You may also scope the affected component in parentheses (e.g., fix(scripts/install-dev): Update for v21.03 release).
  - The commit message should be written in English.
- Push the branch and create a PR.
  - For a PR linked to a separate issue, include the issue number in the PR body. If you write, for example, #401, GitHub automatically links it to that issue in the repository.
  - There are no specific requirements for the PR body, but it is good to include what problem you solved, what approach you took, what tools or libraries you used, and why you made those choices.
  - The PR title and body can be written in English or Korean.
  - When you create a PR, various automated testing tools run. In particular, you must sign the CLA (contributor license agreement) before the review can proceed.
  - You must pass the basic coding-style and coding-rule checks for each language (e.g., flake8 and mypy for Python code).
  - In repositories that have a changes directory and a towncrier check, once the PR is created and receives its number, create a file named changes/<PR number>.<modification type> and write a one-line English sentence summarizing the changes using Markdown syntax. This file can serve as the PR body if the changes are relatively simple or if there is a separate issue. The modification types are fix, feature, breaking, misc, deprecation, and doc, and any project-specific differences are defined in each repository's pyproject.toml.
- Proceed with the review process.
- Generally, the reviewer will tidy up the commit log and merge it in squash-merge form.
- Therefore, during the review process, feel free to create small modification commits whenever you think of them without worrying about the burden of creating many of them.
It's even better to use tools like GitHub CLI, SourceTree, and GitKraken in conjunction with git commands.
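The branch naming convention described in the steps above (a type prefix plus a kebab-case topic) can be sketched as a tiny helper; the function name here is made up for illustration:

```shell
# Hypothetical helper: build a topic branch name from a type prefix
# (fix, feature, docs, refactor, ...) and a free-form description,
# lowercasing the description and converting spaces to hyphens.
make_branch_name() {
  kind="$1"; shift
  printf '%s/%s\n' "$kind" "$(printf '%s' "$*" | tr 'A-Z ' 'a-z-')"
}

make_branch_name fix "Memory leak in stats"              # -> fix/memory-leak-in-stats
make_branch_name feature "Additional cluster env vars"   # -> feature/additional-cluster-env-vars
```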
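Similarly, the commit-title prefixes listed above can be checked mechanically. The following is an illustrative sketch, not an official Backend.AI hook:

```shell
# Hypothetical pre-commit check: verify that a commit title begins with one
# of the prefixes used in Backend.AI's commit style, optionally scoped with
# parentheses, e.g. "fix(scripts/install-dev): ...".
check_commit_title() {
  case "$1" in
    fix:*|feat:*|refactor:*|docs:*|release:*|setup:*|repo:*) echo ok ;;
    fix\(*\):*|feat\(*\):*|refactor\(*\):*|docs\(*\):*|release\(*\):*|setup\(*\):*|repo\(*\):*) echo ok ;;
    *) echo "missing conventional prefix" ;;
  esac
}

check_commit_title "fix(scripts/install-dev): Update for v21.03 release"   # -> ok
check_commit_title "update stuff"   # -> missing conventional prefix
```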
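Finally, creating a towncrier news fragment as described above amounts to writing a one-line file. The PR number 1234, the fragment type, and the message below are placeholders for illustration; a throwaway directory stands in for a real checkout:

```shell
# Illustrative sketch: create a changes/<PR number>.<type> news fragment.
# In a real checkout you would run this at the repository root once the PR
# number is assigned; 1234, "fix", and the sentence are all placeholders.
repo=$(mktemp -d)
mkdir -p "$repo/changes"
pr=1234
kind=fix
printf '%s\n' "Fix a memory leak in the statistics collector." > "$repo/changes/$pr.$kind"
cat "$repo/changes/$pr.$kind"
```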
Summary
So far, we've looked at the overall component structure and repository structure of Backend.AI, how to install the development environment, and how to write pull requests. I hope this guide will help you get closer to the Backend.AI source code.
- Editable install: installs a package in a way that allows it to be edited locally, with the changes taking effect immediately.↩