# Deploying Bigtrace on a single machine

NOTE: This doc is designed for administrators of Bigtrace services, NOT Bigtrace users. It is also aimed at non-Googlers; Googlers should look at `go/bigtrace` instead.

There are multiple ways to deploy Bigtrace on a single machine:

1. Running the Orchestrator and Worker executables manually
2. docker-compose
3. minikube

NOTE: Options 1 and 2 are intended for development purposes and are not recommended for production. For production deployments, follow the instructions in [Deploying Bigtrace on Kubernetes](deploying-bigtrace-on-kubernetes) instead.

## Prerequisites
To build Bigtrace you must first follow the [Quickstart setup and building](/docs/contributing/getting-started.md#quickstart) steps, but run `tools/install-build-deps --grpc` to install the dependencies required by Bigtrace and gRPC.

## Running the Orchestrator and Worker executables manually
To run Bigtrace locally, first build the Orchestrator and Worker executables, then run them as follows:

19### Building the Orchestrator and Worker executables
20```bash
21tools/ninja -C out/[BUILD] orchestrator_main
22tools/ninja -C out/[BUILD] worker_main
23```
24
### Running the Orchestrator and Worker executables
Run the Orchestrator and Worker executables using command-line arguments:

```bash
./out/[BUILD]/orchestrator_main [args]
./out/[BUILD]/worker_main [args]
```

### Example
The following creates a service with an Orchestrator and three Workers, which can be interacted with locally through the Python API.
```bash
tools/ninja -C out/linux_clang_release orchestrator_main
tools/ninja -C out/linux_clang_release worker_main

./out/linux_clang_release/orchestrator_main -w "127.0.0.1" -p "5052" -n "3"
./out/linux_clang_release/worker_main --socket="127.0.0.1:5052"
./out/linux_clang_release/worker_main --socket="127.0.0.1:5053"
./out/linux_clang_release/worker_main --socket="127.0.0.1:5054"
```
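
In the example above, the Orchestrator flags appear to describe the worker pool (`-w` host, `-p` starting port, `-n` worker count), with the three Workers listening on consecutive ports from 5052. As a sketch of that relationship, the socket list implied by the flags can be enumerated like this (the helper function is purely illustrative, not part of Bigtrace):

```bash
# Illustrative helper only: list the worker sockets implied by the example's
# -w/-p/-n flags, assuming consecutive ports as shown above.
worker_sockets() {
  local host="$1" first_port="$2" count="$3"
  for ((i = 0; i < count; i++)); do
    echo "${host}:$((first_port + i))"
  done
}

worker_sockets 127.0.0.1 5052 3
```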

## docker-compose
To allow testing of gRPC without the overhead of Kubernetes, docker-compose can be used. It builds the Dockerfiles in `infra/bigtrace/docker` and creates containerised instances of the Orchestrator and the specified set of Worker replicas.

```bash
cd infra/bigtrace/docker
docker compose up
# OR if using the docker compose standalone binary
docker-compose up
```

This will build and start the Workers (three by default) and the Orchestrator, as specified in `compose.yaml`.

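The `compose.yaml` shipped in `infra/bigtrace/docker` is the source of truth. Purely as an illustration of the shape such a file takes, a minimal sketch might look like the following (the service names, Dockerfile names, and port mapping here are hypothetical, not copied from the real file):

```yaml
# Hypothetical sketch - the real file is infra/bigtrace/docker/compose.yaml.
services:
  orchestrator:
    build:
      dockerfile: orchestrator.Dockerfile   # hypothetical Dockerfile name
    ports:
      - "5051:5051"                         # Orchestrator port used elsewhere in this doc
  worker:
    build:
      dockerfile: worker.Dockerfile         # hypothetical Dockerfile name
    deploy:
      replicas: 3                           # matches the default of 3 Workers
```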
## minikube
A minikube cluster can be used to emulate the Kubernetes cluster setup on a local machine. It can be created with the script `tools/setup_minikube_cluster.sh`.

This starts a minikube cluster, builds the Orchestrator and Worker images, and deploys them on the cluster. The service can then be reached through a client such as the Python API, using `$(minikube ip):5051` as the Orchestrator service address.

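For example, once the script has completed, the Orchestrator address is simply the cluster IP joined with port 5051. The IP below is a stand-in for whatever `minikube ip` prints on your machine; it will likely differ:

```bash
# On a real machine this would be: CLUSTER_IP="$(minikube ip)"
CLUSTER_IP="192.168.49.2"                  # stand-in value for illustration
ORCHESTRATOR_ADDR="${CLUSTER_IP}:5051"     # Orchestrator service address
echo "${ORCHESTRATOR_ADDR}"
```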
61