Configuration
Configure your local instance using the .env files.
To configure the API service / backend of UGC Guard, edit the /api/.env file.
To configure the frontend, edit the /app/ugc-guard/.env file.
Setting up the SMTP service
INFO
UGC Guard requires a working SMTP setup to send emails. Account creation is not possible without it.
For testing, tools like MailSlurper can simulate an SMTP service.
To configure SMTP, open the /api/.env file and update the following parameters:
```yaml
smtp:
  host: "mail.example.com"
  port: 25
  use_tls: false
  username: "example_mail_user"
  password: "FAKE_PASSWORD"
  from_email: "example_ugc@example.com"
```

Note: Indentation is important.
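To sanity-check these values before pointing UGC Guard at them, you can compose and send a test mail with Python's standard library. This sketch is not part of UGC Guard; `SMTP_CFG`, `build_test_message`, and `send_test_email` are illustrative names, and the values mirror the example block above:

```python
import smtplib
from email.message import EmailMessage

# Mirrors the example smtp block above; replace with your own values.
SMTP_CFG = {
    "host": "mail.example.com",
    "port": 25,
    "use_tls": False,
    "username": "example_mail_user",
    "password": "FAKE_PASSWORD",
    "from_email": "example_ugc@example.com",
}

def build_test_message(cfg, to_addr):
    """Compose the test email; kept separate so it can be checked offline."""
    msg = EmailMessage()
    msg["From"] = cfg["from_email"]
    msg["To"] = to_addr
    msg["Subject"] = "UGC Guard SMTP test"
    msg.set_content("If you can read this, the SMTP settings work.")
    return msg

def send_test_email(cfg, to_addr):
    """Connect, optionally upgrade to TLS, authenticate, and send."""
    msg = build_test_message(cfg, to_addr)
    with smtplib.SMTP(cfg["host"], cfg["port"]) as server:
        if cfg["use_tls"]:
            server.starttls()
        if cfg["username"]:
            server.login(cfg["username"], cfg["password"])
        server.send_message(msg)

# send_test_email(SMTP_CFG, "you@example.com")  # uncomment to actually send
```

With MailSlurper running locally, pointing `host`/`port` at it lets you see the test mail in its web UI without a real mail server.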
Parameter details:
host: SMTP server address
port: SMTP server port
use_tls: Set to true if your SMTP server requires TLS
username: SMTP account username
password: SMTP account password
from_email: Email address used for sending emails. If this differs from the actual account, emails may be flagged as spam.

Setting up the URLs
In the example .env file, the URLs are set to localhost. This is useful when trying out UGC Guard locally. To host UGC Guard elsewhere, you need to change the URLs.
```yaml
frontend_url: "http://localhost:5173"
api_url: "http://localhost:8099"
```

Change these values to the URLs where you host the frontend (frontend_url) and the backend (api_url).
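A quick sanity check on these values can catch common mistakes before deployment. The sketch below is hypothetical (`check_url` is not a UGC Guard function, and the rules are common-sense assumptions, not documented behavior):

```python
from urllib.parse import urlsplit

def check_url(name, url):
    """Return a list of warnings for a configured URL (hypothetical helper)."""
    parts = urlsplit(url)
    warnings = []
    if parts.scheme not in ("http", "https"):
        warnings.append(f"{name}: unsupported scheme {parts.scheme!r}")
    elif parts.scheme == "http" and parts.hostname not in ("localhost", "127.0.0.1"):
        warnings.append(f"{name}: plain http on a non-local host; prefer https")
    return warnings

# The localhost defaults pass cleanly; a public http URL draws a warning.
assert check_url("frontend_url", "http://localhost:5173") == []
assert check_url("api_url", "http://example.com:8099") != []
```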
Setting the secret key
The secret key is used to sign the JWT tokens used for authentication. Keep your secret key unique and secure: whoever holds the secret key has access to your instance.
Generate a new secret key using the following command:
```bash
openssl rand -hex 32
```

Then change the secret key in /api/.env:
```yaml
secret_key: "SECRET KEY" # openssl rand -hex 32
```

INFO
Changing the secret key invalidates all issued access tokens; every user will have to log in again.
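Why leaking or rotating the key matters can be sketched with the standard library's hmac module. UGC Guard's actual JWT handling is internal; this only illustrates the signing principle — anyone holding the key can mint valid tokens, and signatures made under an old key fail under a new one:

```python
import hashlib
import hmac
import secrets

def sign(payload: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 signature over the payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, key: bytes) -> bool:
    """Constant-time check that the signature matches the payload and key."""
    return hmac.compare_digest(sign(payload, key), signature)

old_key = secrets.token_bytes(32)  # what `openssl rand -hex 32` gives you
token_sig = sign(b'{"sub": "alice"}', old_key)
assert verify(b'{"sub": "alice"}', token_sig, old_key)       # valid under the old key

new_key = secrets.token_bytes(32)
assert not verify(b'{"sub": "alice"}', token_sig, new_key)   # rotation invalidates it
```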
Setting up AI services
INFO
UGC Guard uses LangChain to support multiple LLM providers.
Add AI services to UGC Guard by configuring them in the /api/.env file.
Currently supported AI services:
- openai
- ollama
- google_genai
- openai_free (Omni)
- deepseek
- mistralai
Example configuration for two AI services:
```yaml
ai:
  omni:
    enabled: true
    name: "Omni"
    description: "Content moderation model by ChatGPT, free to use"
    api_key: "OPEN AI API KEY"
    model: "omni-moderation-latest"
    temperature: 0.7
    max_tokens: 1000
    type: "openai_free"
    url: "https://api.openai.com/v1/"
  ollama:
    name: "ollama"
    description: "Ollama self-hosted."
    enabled: true
    api_key: "Ollama API KEY"
    model: "llama-3.2"
    temperature: 0.7
    max_tokens: 1000
    type: "openai"
    url: "https://api.openai.com/v1/"
    logo: "https://example.com"
```

Cost calculation
UGC Guard can track model usage costs. Set these flags per model:
```yaml
costs_per_factor_input_tokens: 1.1
factor_tokens: 1000000
costs_per_factor_output_tokens: 4.4
```

In this example, 1.1 € is charged per 1 million input tokens, and 4.4 € per 1 million output tokens.
If not set, costs default to 0.0 €.
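Assuming the three flags combine as their names suggest (this formula is our reading of the example, not confirmed internals), the cost of a request works out as:

```python
def usage_cost(input_tokens, output_tokens,
               costs_per_factor_input_tokens=1.1,
               costs_per_factor_output_tokens=4.4,
               factor_tokens=1_000_000):
    """Cost in EUR for one request, as implied by the example flags above."""
    return (input_tokens / factor_tokens * costs_per_factor_input_tokens
            + output_tokens / factor_tokens * costs_per_factor_output_tokens)

# 500k input + 100k output tokens with the example rates:
# 0.5 * 1.1 + 0.1 * 4.4 = 0.55 + 0.44 ≈ 0.99 €
cost = usage_cost(500_000, 100_000)
```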
Registration Settings
WARNING
Disabling registration also prevents new users from signing up via Identity Providers.
Control whether new users can register:
```yaml
allow_registration: true
```

Defaults to true.
Organization Creation
WARNING
If you disable this before creating the first organization, you will not be able to use UGC Guard.
Allow or prevent users from creating new organizations:
```yaml
allow_new_organizations: true
```

Defaults to true.
Scheduler & Worker
The scheduler handles periodic tasks (CRON jobs), such as cleaning up leftover files.
The worker takes asynchronous tasks from the task queue and performs them. Both require Redis.
Add this to /api/.env:
```yaml
redis:
  url: "redis://redis:6379/0"
```

Then configure the scheduler:
```yaml
scheduler:
  enabled: true
  timezone: "Europe/Berlin"
```

Task times are hardcoded in the UGC Guard source code.
Changing the timezone will adjust scheduled tasks to run in the specified timezone, not the system timezone.
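The effect of the timezone setting can be sketched with Python's zoneinfo. The 03:00 task time below is a made-up example (the real task times are hardcoded in the source); the point is that the run time is pinned to the configured zone's wall clock, not the host's:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

TZ = ZoneInfo("Europe/Berlin")  # the value of scheduler.timezone above

def next_run(now: datetime, at: time) -> datetime:
    """Next occurrence of a daily task time, on the configured zone's wall clock."""
    candidate = datetime.combine(now.date(), at, tzinfo=TZ)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

# A task set for 03:00 runs at 03:00 Berlin time, whatever the host clock says:
now = datetime(2024, 6, 1, 12, 0, tzinfo=TZ)
run = next_run(now, time(3, 0))
assert run == datetime(2024, 6, 2, 3, 0, tzinfo=TZ)
```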
Proxying Media Files from insecure locations
To allow proxying media from URLs that are not from secure locations (e.g. localhost or any http:// URL), set the following flag to true:
```yaml
allow_http_file_proxy: true
```

This defaults to false.
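The gate this flag controls can be sketched roughly as follows. `may_proxy` and its rules are assumptions for illustration, not UGC Guard's actual code — the idea is simply that https sources always pass, while plain http sources pass only when the flag is enabled:

```python
from urllib.parse import urlsplit

def may_proxy(url: str, allow_http_file_proxy: bool = False) -> bool:
    """Illustrative gate: https always allowed, http only behind the flag."""
    scheme = urlsplit(url).scheme
    if scheme == "https":
        return True
    return scheme == "http" and allow_http_file_proxy

assert may_proxy("https://cdn.example.com/video.mp4")
assert not may_proxy("http://localhost:9000/video.mp4")
assert may_proxy("http://localhost:9000/video.mp4", allow_http_file_proxy=True)
```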