5-Minute Quick Start

Get up and running with the Synthesized platform in just 5 minutes. This tutorial will have you masking your first database using Docker Compose.

Quick Overview

Time Required

5-10 minutes

Difficulty

Beginner

Prerequisites

Docker or Podman installed

What You’ll Build

This guide helps you set up a local demo of the Synthesized platform - our data transformation and management solution.

You’ll be able to:

  • Explore realistic synthetic test data

  • See data masking and anonymization in action

  • Try out data subsetting and sampling features

  • Experience the web interface for managing data workflows

Demo Environment Only - This setup is not for production. It uses hard-coded JWT secrets and demo defaults.

For secure, scalable deployments, see Docker Compose Production Setup or Kubernetes Production Setup.

Prerequisites

You’ll need either Docker or Podman installed on your machine.

  • Podman (Linux)

  1. Install Podman and Podman Compose

  2. Verify installation:

    podman --version
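
    If you installed the podman-compose wrapper as a separate package (an assumption; newer Podman also ships a podman compose plugin), you can check it the same way:

    podman-compose --version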

RHEL with SELinux - If you’re using RHEL with SELinux enabled, you’ll need to allow unprivileged processes to bind ports 389 and 80:

sudo semanage port -a -t http_port_t -p tcp 80
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
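
To double-check that the sysctl change is active, you can print the current value (a quick verification step, not part of the official setup):

# Should report 80 after the change
sysctl net.ipv4.ip_unprivileged_port_start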

Quick Start Installation

Download and unzip the latest governor-compose.zip, which contains the Docker Compose scripts and demo database dumps.
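
A typical way to unpack the archive from the command line looks like this (the file and directory names are assumptions; adjust them to match your download):

# Unpack the compose scripts and demo database dumps
unzip governor-compose.zip -d governor-compose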

Navigate to the unzipped directory and run:

Docker

# Pull the required images
docker compose pull

# Start all services
docker compose up

Wait for all services to start (approximately 1-2 minutes). You’ll see log messages indicating services are ready.

Use docker compose up -d to run in the background (detached mode).
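
If you run it detached, two standard Docker Compose commands are handy for confirming that everything has come up (service names depend on the bundled compose file):

# List the containers and their state
docker compose ps

# Follow the logs until the services report they are ready
docker compose logs -f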

Podman

When using Podman and Podman Compose instead of Docker, make the following changes before starting the services.

Relabel volume mounts (RHEL with SELinux only) - in docker-compose.yml, add the :Z SELinux relabeling option to all volume mounts:

volumes:
  - "./initdb/create_governor_db.sql:/docker-entrypoint-initdb.d/1.sql:Z"
  - "./initdb/create_governor_schema.sql:/docker-entrypoint-initdb.d/2.sql:Z"

Change the UI service port mapping to avoid privileged ports:

ports:
  - "8080:80"

If you have already performed the SELinux configuration above (allowing unprivileged use of port 80), changing the port mapping is not necessary and you can keep 80:80.

Then pull the images and start the services:

# Pull the required images
podman-compose pull

# Start all services
podman-compose up

Open the UI by going to http://localhost:8080 (the port can be changed in the docker compose file).

Use the predefined credentials to log in: test@synthesized.io / Qq12345_.

Step 1: Validate the Data Source

The Synthesized demo environment comes pre-packaged with an input and an output database.

  1. Navigate to Data Sources - Click "Data Sources" in the left sidebar. You should see the two pre-configured data sources.

  2. Check the Connection Details - Review the connection settings for both the Input Source and the Output Source.

  3. Test Connection - Click "Test Connection" to verify the settings

  4. Save - Click "Save" to store the data source configuration

The demo environment includes a pre-configured PostgreSQL database with sample data. You can use these same steps to connect to your own databases.
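
If you later point the platform at your own PostgreSQL instance, the data source is described by the usual host, port, database, and credential values; expressed as a JDBC-style URL (purely illustrative here, the UI may present these as separate fields) it would look like:

jdbc:postgresql://mydb.example.internal:5432/sales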

Step 2: Run Your First Masking Job

Now you’re ready to transform data using workflows.

  1. Navigate to Workflows - Click on "Workflows" in the left sidebar

  2. Select a Sample Workflow - You’ll see pre-configured demo workflows. For this quickstart, we’ll use the "Demo Masking Workflow".

  3. Run the Workflow - Click "Run" on the "Demo Masking Workflow"

  4. Monitor Progress - Watch the workflow execute in real-time

    The demo workflow will:

    • Connect to the source PostgreSQL database

    • Mask sensitive columns (emails, names, addresses)

    • Write masked data to the output database

    • Preserve referential integrity and data types
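
Because keys are preserved, joins should still resolve on the masked copy. Once the run completes and you are connected to the output database (Step 3 shows how), a quick check could look like this, assuming the demo dump follows the standard pagila schema:

-- Orphaned rentals would indicate broken foreign keys; expect a count of zero
SELECT COUNT(*) AS orphaned_rentals
FROM rental r
LEFT JOIN customer c ON c.customer_id = r.customer_id
WHERE c.customer_id IS NULL;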

Step 3: Verify the Results

After the workflow completes, verify the transformed data through a database client.

  1. Navigate to the Database Client

  2. Connect to the Input and Output Sources - Connect using the credentials within Synthesized.

    Connection details for demo source and target databases (host transformed_db, port 5432) are valid inside the internal Docker Compose network. To access the databases from the host machine, use localhost and port 5433.
  3. Browse Tables - Select the customer table

  4. View Masked Data - See how sensitive data has been replaced with realistic synthetic values

You’ll see realistic but fake data that maintains the format and patterns of the original data while protecting privacy.

CLI Access - You can also query the output database directly:

docker exec -it governor-compose-output-db-1 psql -U postgres -d pagila
SELECT first_name, last_name, email
FROM customer
LIMIT 5;
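
If you have a PostgreSQL client installed on the host, you can also connect through the published port instead of docker exec (5433 is the demo default mentioned above; the user and database names are taken from the demo dump):

psql -h localhost -p 5433 -U postgres -d pagila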

What You Just Did

Congratulations! You’ve successfully:

  • ✅ Set up the Synthesized platform with Docker Compose

  • ✅ Run a data masking workflow

  • ✅ Masked sensitive information in a PostgreSQL database

  • ✅ Verified the masked results

Optional Configuration

Setting up a volume for the PostgreSQL database

The Synthesized platform uses a PostgreSQL database to store its configuration and state. By default, the data is stored inside the Docker container, which means it will be lost if the container is removed.

To enable data persistence, mount a folder from your local filesystem as a volume:

  1. Uncomment the line: - "<Governor DB host directory path>:/var/lib/postgresql/data"

  2. Replace <Governor DB host directory path> with an absolute path on your machine where you’d like to store the database files

This ensures that database data will be persisted between restarts of the container or Docker daemon.
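
After editing, the relevant part of the database service definition might look roughly like this (the host path is only an illustration; the exact placement depends on the compose file shipped in governor-compose.zip):

volumes:
  - "/opt/synthesized/governor-db:/var/lib/postgresql/data"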

You can do the same for the transformed database by replacing <transformed DB host directory path>.

Setting up a volume for RocksDB

The Synthesized platform relies on the RocksDB embedded key-value store to speed up its performance. However, keeping the RocksDB folder inside the container can lead to problems with both performance and space restrictions.

To set up a volume on your local filesystem:

  1. Uncomment the volumes section in docker-compose.yml

  2. Uncomment the - <RocksDB host directory path>:/app/rocksdb line and substitute the path to your RocksDB folder

  3. Make sure this folder is accessible to Docker, for example with chmod 777 <RocksDB host directory path>

Setting up a volume for log files

If you want to store log files locally instead of keeping them in the container:

  1. Uncomment the volumes section in docker-compose.yml

  2. Uncomment the - <logs host directory path>:/app/logs line and substitute the path to your logs folder

  3. Make sure this folder is accessible to Docker, for example with chmod 777 <logs host directory path>
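
With both optional mounts enabled, the relevant volumes entries might look roughly like this (host paths are placeholders; keep each mount in its own folder, as noted below):

volumes:
  - "/opt/synthesized/rocksdb:/app/rocksdb"
  - "/opt/synthesized/logs:/app/logs"

And the permission step on the host:

mkdir -p /opt/synthesized/rocksdb /opt/synthesized/logs
chmod 777 /opt/synthesized/rocksdb /opt/synthesized/logs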

You must use separate folders on the host machine for each of the following paths: <Governor DB host directory path>, <transformed DB host directory path>, <RocksDB host directory path>, <logs host directory path>.

Next Steps

Now that you’ve seen the Synthesized platform in action, explore more capabilities:

Learn Core Concepts

Explore Advanced Features

Troubleshooting

Port 8080 already in use

# Stop the conflicting service or change the port in docker-compose.yml:
ports:
- "8081:80" # Use port 8081 instead

Permission errors (Linux/macOS)

sudo docker compose up
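
On Linux, a longer-term alternative to prefixing every command with sudo is the standard Docker post-install step of adding your user to the docker group (log out and back in afterwards):

sudo usermod -aG docker $USER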

Services Won’t Start

If Docker Compose fails to start:

  1. Check Docker is running: docker ps

  2. Verify ports 8080 and 5432-5433 are available

  3. Increase Docker memory limit to 8GB in Docker Desktop settings

Check if all containers are running:

docker compose ps

# View detailed logs:
docker compose logs

See Installation Troubleshooting for more help.

Can’t Access UI

If you can’t access http://localhost:8080:

  1. Check all containers are running: docker compose ps

  2. View logs: docker compose logs backend

  3. Verify firewall settings aren’t blocking port 8080

Can’t connect to databases

The demo databases run inside Docker and are accessible at:

  • From host machine: localhost:5433

  • From within containers: transformed_db:5432

Workflow Fails

If the masking workflow fails:

  1. Check database connectivity in the workflow configuration

  2. Review workflow logs in the UI

  3. Verify the source database has data

See Common Issues for more solutions.

Cleaning Up

To stop and remove all containers:

docker compose down

To also remove volumes (this deletes all data):

docker compose down -v

Get Help