# Configure ODD Platform

## Configuration approaches

There are two ways to configure the Platform:

* **Environment variables** are used for simple entries
* Configuring via **YAML** comes in handy when you need to define a complex configuration block (e.g., OAuth2 authentication or logging levels).

<details>

<summary>YAML entries vs. environment variables</summary>

Here is an example of a YAML configuration block, followed by the equivalent environment variables.

YAML:

```yaml
spring:
    datasource:
        url: URL
        username: USERNAME
        password: PASSWORD
    custom-datasource:
        url: URL
        username: USERNAME
        password: PASSWORD
```

To configure the Platform using environment variables instead, flatten each YAML path: join the nesting levels with underscores and uppercase the words, like so:

* `SPRING_DATASOURCE_URL=URL`
* `SPRING_DATASOURCE_USERNAME=USERNAME`
* `SPRING_DATASOURCE_PASSWORD=PASSWORD`
* `SPRING_CUSTOM_DATASOURCE_URL=URL`
* `SPRING_CUSTOM_DATASOURCE_USERNAME=USERNAME`
* `SPRING_CUSTOM_DATASOURCE_PASSWORD=PASSWORD`
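This mapping can be sketched as a one-line rule (a hypothetical helper for illustration, not part of the Platform): dots and hyphens in the property path become underscores, and the result is uppercased. Note that Spring's relaxed binding also accepts other spellings.

```python
def to_env_var(property_path):
    """Convert a dotted Spring property path to the environment-variable
    form used in the example above: dots and hyphens become underscores,
    and the whole name is uppercased. Illustrative helper only."""
    return property_path.replace(".", "_").replace("-", "_").upper()

print(to_env_var("spring.datasource.url"))              # SPRING_DATASOURCE_URL
print(to_env_var("spring.custom-datasource.username"))  # SPRING_CUSTOM_DATASOURCE_USERNAME
```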

</details>

## Connect your database

ODD Platform uses PostgreSQL, and PostgreSQL only, for all of its features. Define these variables to connect ODD Platform to the database:

* `spring.datasource.url`: [JDBC string](https://jdbc.postgresql.org/documentation/80/connect.html) of your PostgreSQL database. Default value is `jdbc:postgresql://127.0.0.1:5432/odd-platform`
* `spring.datasource.username`: your PostgreSQL user's name. Default value is `odd-platform`
* `spring.datasource.password`: your PostgreSQL user's password. Default value is `odd-platform-password`

These variables are optional (by default they take the same values as `spring.datasource`) and are used to connect to the PostgreSQL database that stores Lookup Tables:

* `spring.custom-datasource.url`: [JDBC string](https://jdbc.postgresql.org/documentation/80/connect.html) of the PostgreSQL database where Lookup Tables are stored. Default value is `jdbc:postgresql://127.0.0.1:5432/odd-platform`. Note: you can specify any {database\_host}, {database\_port} or {database\_name}, but the schema where Lookup Tables are stored is always `lookup_tables_schema`.
* `spring.custom-datasource.username`: your PostgreSQL user's name for custom-datasource. Default value is `odd-platform`
* `spring.custom-datasource.password`: your PostgreSQL user's password for custom-datasource. Default value is `odd-platform-password`

With these variables set, your database connection block would look like this:

{% tabs %}
{% tab title="YAML" %}

```yaml
spring:
    datasource:
        url: jdbc:postgresql://{database_host}:{database_port}/{database_name}
        username: {database_username}
        password: {database_password}
    # [OPTIONAL]
    custom-datasource:
        url: jdbc:postgresql://{database_host}:{database_port}/{database_name}
        username: {database_username}
        password: {database_password}
```

{% endtab %}

{% tab title="Environment variables" %}

```
SPRING_DATASOURCE_URL=jdbc:postgresql://{database_host}:{database_port}/{database_name}
SPRING_DATASOURCE_USERNAME={database_username}
SPRING_DATASOURCE_PASSWORD={database_password}
# [OPTIONAL]
SPRING_CUSTOM_DATASOURCE_URL=jdbc:postgresql://{database_host}:{database_port}/{database_name}
SPRING_CUSTOM_DATASOURCE_USERNAME={database_username}
SPRING_CUSTOM_DATASOURCE_PASSWORD={database_password}
```

{% endtab %}
{% endtabs %}
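If a connection fails at startup, a quick sanity check is to split the JDBC string into its components and verify the host, port, and database name. A minimal sketch (the helper name is illustrative, not part of the Platform):

```python
from urllib.parse import urlparse

def parse_jdbc_postgres_url(jdbc_url):
    """Split a PostgreSQL JDBC URL (jdbc:postgresql://host:port/db)
    into its components. Illustrative helper only."""
    if not jdbc_url.startswith("jdbc:"):
        raise ValueError("not a JDBC URL")
    # Strip the jdbc: prefix; the remainder is a regular URL.
    parsed = urlparse(jdbc_url[len("jdbc:"):])
    return {
        "host": parsed.hostname,
        "port": parsed.port,
        "database": parsed.path.lstrip("/"),
    }

print(parse_jdbc_postgres_url("jdbc:postgresql://127.0.0.1:5432/odd-platform"))
```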

## Security

Please follow the [Enable security](https://docs.opendatadiscovery.org/configuration-and-deployment/enable-security) section for enabling security in ODD Platform.

## Select session provider

ODD Platform can keep users' sessions in several places: in memory, in the underlying PostgreSQL database, or in Redis. Set the session provider via the `session.provider` variable with one of the following values:

* `IN_MEMORY`: Local in-memory storage. ODD Platform defaults to this value
* `INTERNAL_POSTGRESQL`: Underlying PostgreSQL database
* `REDIS`: [Redis data-store](https://redis.io/).

{% hint style="info" %}
If you'd like to run only one instance of ODD Platform and can tolerate users being logged out each time the Platform restarts, the best choice is **`IN_MEMORY`**

\
If you already have Redis in your infrastructure or are willing to install it, the best choice is **`REDIS`**

\
Otherwise **`INTERNAL_POSTGRESQL`** is the best pick
{% endhint %}

{% tabs %}
{% tab title="YAML" %}
**In memory (default)**

```yaml
session:
    provider: IN_MEMORY
```

**Internal PostgreSQL**

```yaml
session:
    provider: INTERNAL_POSTGRESQL
```

**Redis**

To connect to Redis, the following variables need to be defined:

* `spring.redis.host`: Redis host
* `spring.redis.port`: Redis port
* `spring.redis.username`: Redis user's name
* `spring.redis.password`: Redis user's password
* `spring.redis.database`: Redis database index

YAML for the Redis session provider:

```yaml
spring:
    redis:
        host: {redis_host}
        port: {redis_port}
        username: {redis_username}
        password: {redis_password}
        database: {redis_database}
session:
    provider: REDIS
```

{% endtab %}

{% tab title="Environment Variables" %}
**In memory (default)**

* `SESSION_PROVIDER=IN_MEMORY`

**Internal PostgreSQL**

* `SESSION_PROVIDER=INTERNAL_POSTGRESQL`

**Redis**

To connect to Redis, the following variables need to be defined:

* `spring.redis.host`: Redis host
* `spring.redis.port`: Redis port
* `spring.redis.username`: Redis user's name
* `spring.redis.password`: Redis user's password
* `spring.redis.database`: Redis database index

Environment variables for Redis session provider:

```
SESSION_PROVIDER=REDIS
SPRING_REDIS_HOST={redis_host}
SPRING_REDIS_PORT={redis_port}
SPRING_REDIS_USERNAME={redis_username}
SPRING_REDIS_PASSWORD={redis_password}
SPRING_REDIS_DATABASE={redis_database}
```

{% endtab %}
{% endtabs %}
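The rules above can be sanity-checked before deployment with a small validation sketch (a hypothetical helper, not the Platform's own code): the provider must be one of the three values, and `REDIS` requires the `spring.redis.*` entries.

```python
VALID_PROVIDERS = {"IN_MEMORY", "INTERNAL_POSTGRESQL", "REDIS"}
REDIS_KEYS = ("host", "port", "username", "password", "database")

def validate_session_config(config):
    """Return a list of problems with a session configuration dict
    (an empty list means the config is OK). Illustrative helper
    mirroring the rules described above."""
    problems = []
    provider = config.get("session", {}).get("provider", "IN_MEMORY")
    if provider not in VALID_PROVIDERS:
        problems.append("unknown session.provider: " + provider)
    if provider == "REDIS":
        redis = config.get("spring", {}).get("redis", {})
        for key in REDIS_KEYS:
            if key not in redis:
                problems.append("missing spring.redis." + key)
    return problems
```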

### Session lifetime (`spring.session.timeout`)

Spring Session's timeout controls how long an authenticated session remains valid between requests. ODD Platform's shipped default is `-1`, which means **sessions never expire**.

{% hint style="warning" %}
**`spring.session.timeout: -1` means sessions never expire.** A user who logs in once remains authenticated until their session record is explicitly invalidated (logout, cache eviction, or — for `IN_MEMORY` — platform restart). For any deployment that is internet-facing or serves multiple users, set `spring.session.timeout` to a finite duration so stolen cookies and forgotten sessions eventually lapse.
{% endhint %}

* `spring.session.timeout`: session idle timeout. Duration string (for example `30m`, `8h`, `1d`). Defaults to `-1` (no timeout). Applies to all three providers (`IN_MEMORY`, `INTERNAL_POSTGRESQL`, `REDIS`).

{% tabs %}
{% tab title="YAML" %}

```yaml
spring:
    session:
        timeout: 30m
```

{% endtab %}

{% tab title="Environment variables" %}

```
SPRING_SESSION_TIMEOUT=30m
```

{% endtab %}
{% endtabs %}
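The duration strings above can be converted to seconds with a few lines, which is handy when comparing a proposed timeout against a security policy. A sketch covering only the simple single-unit forms (Spring accepts more formats than this):

```python
import re

UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def timeout_seconds(value):
    """Convert a simple duration string ('30m', '8h', '1d') to seconds.
    Returns None for '-1', meaning sessions never expire.
    Illustrative only; Spring's duration parsing is richer than this."""
    if value == "-1":
        return None
    match = re.fullmatch(r"(\d+)([smhd])", value)
    if match is None:
        raise ValueError("unsupported duration: " + value)
    return int(match.group(1)) * UNIT_SECONDS[match.group(2)]
```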

## Enable Metrics

ODD Platform can represent some of the metadata it ingests as time-series charts — for example, row counts on a MySQL table or the on-disk size of a Redshift database. Metrics handling splits into two independent concerns that share the `metrics.*` config namespace but do different jobs:

* **Storage** (`metrics.storage`) — the storage tier the platform uses for ingested metrics. This selects where the platform **writes** metric points as they arrive from collectors **and** where it **reads them back** when rendering UI charts. Both directions hit the same backend — you cannot write to one and read from another.
* **Export** (`metrics.export.*`) — where the platform **pushes metrics out** as OpenTelemetry telemetry, for long-term retention and dashboarding in your observability stack.

Configure the two independently; it is valid (and common) to run with `INTERNAL_POSTGRES` storage and no OTLP export, or with `PROMETHEUS` storage and OTLP export disabled, or any other combination.

### Metric storage backend

`metrics.storage` selects the storage tier for metric writes and reads:

* `INTERNAL_POSTGRES` (default) — metrics are **written to and read from** the ODD Platform's own PostgreSQL database (`metric_series` / `metric_point` tables). Zero additional infrastructure; suitable for most single-cluster deployments.
* `PROMETHEUS` — metrics are **remote-written to** an external Prometheus instance (via the [Prometheus remote-write protocol](https://prometheus.io/docs/specs/remote_write_spec/) at `/api/v1/write`, using Snappy-compressed Protobuf-encoded write requests) **and queried from** the same instance (via the [instant-query API](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries) at `/api/v1/query`). Suitable when you already run Prometheus for observability and want to avoid storing duplicate metric data in ODD's PostgreSQL.

`metrics.prometheus-host` is the base URL of the Prometheus instance and is only consulted when `metrics.storage=PROMETHEUS`. Both `/api/v1/write` and `/api/v1/query` are called on this single host. Defaults to `http://localhost:9090`.

{% hint style="warning" %}
**`metrics.storage=PROMETHEUS` requires `metrics.prometheus-host` to be set.** The platform validates this at startup — if `metrics.prometheus-host` is empty (or unset) while `metrics.storage=PROMETHEUS`, ODD Platform fails to start with `IllegalStateException: Prometheus host is not defined`. Set it to the Prometheus base URL (for example `http://prometheus:9090`) in the same configuration change that flips the storage backend.
{% endhint %}

{% hint style="warning" %}
**The Prometheus instance must accept remote-write AND queries on the same endpoint.** ODD Platform does not support splitting read and write paths across different hosts.

* **Prometheus server flag** — `--web.enable-remote-write-receiver` must be enabled on the Prometheus process. It is **disabled by default** in Prometheus v2.33+; without it, every ODD Platform metric write returns `404 Not Found` and is silently dropped. The ingestion API still returns `200` to the collector because the remote-write happens downstream of the HTTP acknowledgement, so collector logs will not surface the failure — the symptom is empty charts in the UI.
* **Endpoint must support both paths** — `POST /api/v1/write` (for writes) and `GET /api/v1/query` (for reads) must both resolve to the same Prometheus-compatible host.
* **Read-only Prometheus-compatible backends do not work.** A [Thanos](https://thanos.io/) querier, [Mimir](https://grafana.com/oss/mimir/) in query-only mode, or any other backend that exposes `/api/v1/query` but rejects `/api/v1/write` cannot be used as a `metrics.storage=PROMETHEUS` target. Point `metrics.prometheus-host` at the write-accepting Prometheus instance itself (or at a Mimir distributor that terminates both paths).
{% endhint %}

{% tabs %}
{% tab title="YAML" %}

```yaml
metrics:
    storage: PROMETHEUS        # INTERNAL_POSTGRES (default) or PROMETHEUS
    prometheus-host: http://prometheus:9090
```

{% endtab %}

{% tab title="Environment variables" %}

```
METRICS_STORAGE=PROMETHEUS
METRICS_PROMETHEUS_HOST=http://prometheus:9090
```

{% endtab %}
{% endtabs %}
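The startup validation described in the first warning can be modeled in a few lines (a sketch of the behavior, not the Platform's actual code):

```python
def validate_metrics_config(storage, prometheus_host):
    """Model of the startup check described above: choosing PROMETHEUS
    storage without a Prometheus host fails fast. Illustrative only."""
    if storage == "PROMETHEUS" and not prometheus_host:
        raise ValueError("Prometheus host is not defined")
```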

### Metric export to OTLP

Independent of where metrics are stored, ODD Platform can push metrics as OpenTelemetry telemetry to an [OTLP collector](https://opentelemetry.io/docs/collector/). Downstream you can forward that stream to [Prometheus](https://prometheus.io/), [New Relic](https://newrelic.com/), or any backend that accepts [OTLP exporters](https://aws-otel.github.io/docs/components/otlp-exporter).

* `metrics.export.enabled`: must be set to `true` to build and wire the OTLP exporter bean. Defaults to `false`.
* `metrics.export.otlp-endpoint`: OTLP collector endpoint (gRPC). Defaults to `http://localhost:4317`.

{% tabs %}
{% tab title="YAML" %}

```yaml
metrics:
    export:
        enabled: true
        otlp-endpoint: {otlp-endpoint-url}
```

{% endtab %}

{% tab title="Environment variables" %}

```
METRICS_EXPORT_ENABLED=true
METRICS_EXPORT_OTLP_ENDPOINT={otlp-endpoint-url}
```

{% endtab %}
{% endtabs %}

## Enable Alert Notifications

Any alert created inside the Platform can be sent via a generic webhook, a [Slack incoming webhook](https://api.slack.com/messaging/webhooks), and/or email (via [Google SMTP](https://support.google.com/a/answer/176600?hl=en), [AWS SMTP](https://repost.aws/knowledge-center/ses-set-up-connect-smtp), etc.). Such notifications contain information such as:

1. Name of the entity the alert was raised on
2. Data source and namespace of the entity
3. Owners of the entity
4. Possibly affected entities

ODD Platform uses the PostgreSQL logical replication mechanism so that a notification can be delivered even if a network lag occurs or the Platform crashes. To enable this functionality, the underlying PostgreSQL database needs to be configured as well.

### PostgreSQL Configuration

The PostgreSQL database must be [configured](https://www.postgresql.org/docs/current/config-setting.html) to leverage the Platform's replication mechanism, and the database user must be granted replication permissions.

#### Database settings

To configure the database, add the following entries to the `postgresql.conf` file:

```
max_wal_senders = 1
wal_keep_size = 16
wal_level = logical
max_replication_slots = 1
```

Alternatively, if the replication mechanism is already configured, just increment the `max_wal_senders` and `max_replication_slots` values.
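To sanity-check a set of `postgresql.conf` values against these requirements before restarting the database, a small helper can be sketched (hypothetical, not shipped with the Platform):

```python
def check_replication_settings(settings):
    """Check postgresql.conf values against the requirements above.
    `settings` maps parameter names to their (string) values.
    Returns a list of problems; an empty list means the settings pass.
    Illustrative helper only."""
    problems = []
    if settings.get("wal_level") != "logical":
        problems.append("wal_level must be 'logical'")
    if int(settings.get("max_wal_senders", 0)) < 1:
        problems.append("max_wal_senders must be >= 1")
    if int(settings.get("max_replication_slots", 0)) < 1:
        problems.append("max_replication_slots must be >= 1")
    return problems
```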

#### Database user permissions

The ODD Platform database user must be granted replication permissions:

```sql
ALTER ROLE {database_username} WITH REPLICATION;
```

{% hint style="info" %}
User permissions and database configuration may vary from one managed/cloud provider to another.

For instance, in AWS RDS, PostgreSQL instances are managed services where certain aspects of replication management are automated to minimize the risk of misconfiguration. Due to this managed nature, some settings are either not exposed or are altered differently compared to a standard PostgreSQL setup. To enable notifications in such an environment, follow these steps (only the differences are mentioned):

1. Set the `rds.logical_replication` parameter in your database instance's Parameter Group to `1`, instead of directly modifying the `wal_level` parameter.
2. Ensure the ODD user connecting to the database has the `rds_replication` role. The Master username of the database typically already has this role by default. If you use a different username, assign the role with `GRANT rds_replication TO {your_database_username};`
3. If you set `max_wal_senders` to 5 (the minimal value mentioned in the Parameter Group) and keep getting messages like "The parameter max\_wal\_senders was set to a value incompatible with replication. It has been adjusted from 5 to 55" in the database instance's event list, adjust the parameter in the Parameter Group from 5 to the mentioned value to stop RDS from changing it automatically.
{% endhint %}

### ODD Platform configuration

The following variables need to be defined:

* `notifications.enabled`: must be set to `true`. Defaults to `false`
* `notifications.message.downstream-entities-depth`: limits how many levels of the lineage graph are traversed when fetching affected data entities. Defaults to `1`
* `notifications.wal.advisory-lock-id`: ODD Platform uses [PostgreSQL advisory lock](https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS) in order to make sure that in a case of horizontal scaling only one instance of the Platform processes alert messages. This setting defines advisory lock id. Defaults to `100`
* `notifications.wal.replication-slot-name`: PostgreSQL replication slot name; the slot is created if it doesn't exist yet. Defaults to `odd_platform_replication_slot`
* `notifications.wal.publication-name`: PostgreSQL publication name; the publication is created if it doesn't exist yet. Defaults to `odd_platform_publication_alert`
* `notifications.receivers.slack.url`: [Slack incoming webhook](https://api.slack.com/messaging/webhooks) URL. The clickable links rendered inside Slack messages use `odd.platform-base-url` (see below) — there is **no** `notifications.receivers.slack.*` base-URL setting.
* `notifications.receivers.webhook.url`: Generic webhook URL
* `notifications.receivers.email.host`: the SMTP server.
* `notifications.receivers.email.port`: the port used for the email protocol (SMTP, IMAP, or POP3)
* `notifications.receivers.email.protocol`: the email protocol (e.g., SMTP, SMTPS, IMAP, IMAPS, POP3, POP3S)
* `notifications.receivers.email.smtp.auth`: a boolean value (true or false) indicating whether the SMTP server requires authentication
* `notifications.receivers.email.smtp.starttls`: a boolean indicating whether to use STARTTLS, a security protocol that upgrades an unencrypted connection to an encrypted one
* `notifications.receivers.email.password`: the password used for email authentication
* `notifications.receivers.email.sender`: the email address sending the notifications
* `notifications.receivers.email.notification.emails`: the list of recipients for the email notifications
* `odd.platform-base-url`: ODD Platform URL used to build clickable links inside alert notifications. **Shared** by every sender (Slack, email, webhook) — set it once. Defaults to `http://localhost:8080`, which produces unreachable links for anyone outside the host machine; set it to your real deployment URL (for example `https://odd.your-domain.com`) in any non-local environment.

ODD Platform configuration would look like this:

{% tabs %}
{% tab title="YAML" %}

```yaml
notifications:
  enabled: true
  message:
    downstream-entities-depth: {downstream_entities_depth_to_fetch}
  wal:
    advisory-lock-id: {postgresql_advisory_lock_id}
    replication-slot-name: {postgresql_replication_slot_name}
    publication-name: {postgresql_publication_name}
  receivers:
    slack:
      url: {slack_incoming_webhook_url}
    webhook:
      url: {webhook_url}
    email: 
      host: {host} 
      port: {port}
      protocol: {protocol}  # SMTP, SMTPS, IMAP, IMAPS, POP3, POP3S 
      smtp: 
        auth: true # Set to true if SMTP server requires authentication 
        starttls: true # Set to true to enable STARTTLS 
      password:  {email_password}
      sender: {sender_email} 
      notification: 
        emails: {1@mail.com,2@mail.com}   
odd:
  platform-base-url: {platform_url}
```

{% endtab %}

{% tab title="Environment variables" %}

```
NOTIFICATIONS_ENABLED=true
NOTIFICATIONS_MESSAGE_DOWNSTREAM-ENTITIES-DEPTH={downstream_entities_depth_to_fetch}
NOTIFICATIONS_WAL_ADVISORY-LOCK-ID={postgresql_advisory_lock_id}
NOTIFICATIONS_WAL_REPLICATION-SLOT-NAME={postgresql_replication_slot_name}
NOTIFICATIONS_WAL_PUBLICATION-NAME={postgresql_publication_name}
NOTIFICATIONS_RECEIVERS_SLACK_URL={slack_incoming_webhook_url}
NOTIFICATIONS_RECEIVERS_WEBHOOK_URL={webhook_url}
NOTIFICATIONS_RECEIVERS_EMAIL_HOST={host}
NOTIFICATIONS_RECEIVERS_EMAIL_PORT={port}
NOTIFICATIONS_RECEIVERS_EMAIL_PROTOCOL={protocol} # SMTP, SMTPS, IMAP, IMAPS, POP3, POP3S
NOTIFICATIONS_RECEIVERS_EMAIL_SMTP_AUTH=true      # Set to true if SMTP server requires authentication
NOTIFICATIONS_RECEIVERS_EMAIL_SMTP_STARTTLS=true  # Set to true to enable STARTTLS
NOTIFICATIONS_RECEIVERS_EMAIL_PASSWORD={email_password}
NOTIFICATIONS_RECEIVERS_EMAIL_SENDER={sender_email}
NOTIFICATIONS_RECEIVERS_EMAIL_NOTIFICATION_EMAILS={1@mail.com,2@mail.com}
ODD_PLATFORM-BASE-URL={platform_url}
```

{% endtab %}
{% endtabs %}

### Example: Gmail SMTP

A minimal, working configuration for Gmail's SMTP over STARTTLS. Gmail requires an [**app password**](https://support.google.com/accounts/answer/185833) (generated from your Google account with 2-Step Verification enabled) — your regular account password will not work.

{% tabs %}
{% tab title="YAML" %}

```yaml
notifications:
  enabled: true
  wal:
    advisory-lock-id: 100
    replication-slot-name: odd_platform_replication_slot
    publication-name: odd_platform_publication_alert
  receivers:
    email:
      host: smtp.gmail.com
      port: 587
      protocol: SMTP
      smtp:
        auth: true
        starttls: true
      sender: odd-alerts@your-domain.com
      password: {gmail_app_password}
      notification:
        emails: ops@your-domain.com,data-team@your-domain.com
odd:
  platform-base-url: https://odd.your-domain.com
```

{% endtab %}

{% tab title="Environment variables" %}

```
NOTIFICATIONS_ENABLED=true
NOTIFICATIONS_WAL_ADVISORY-LOCK-ID=100
NOTIFICATIONS_WAL_REPLICATION-SLOT-NAME=odd_platform_replication_slot
NOTIFICATIONS_WAL_PUBLICATION-NAME=odd_platform_publication_alert
NOTIFICATIONS_RECEIVERS_EMAIL_HOST=smtp.gmail.com
NOTIFICATIONS_RECEIVERS_EMAIL_PORT=587
NOTIFICATIONS_RECEIVERS_EMAIL_PROTOCOL=SMTP
NOTIFICATIONS_RECEIVERS_EMAIL_SMTP_AUTH=true
NOTIFICATIONS_RECEIVERS_EMAIL_SMTP_STARTTLS=true
NOTIFICATIONS_RECEIVERS_EMAIL_SENDER=odd-alerts@your-domain.com
NOTIFICATIONS_RECEIVERS_EMAIL_PASSWORD={gmail_app_password}
NOTIFICATIONS_RECEIVERS_EMAIL_NOTIFICATION_EMAILS=ops@your-domain.com,data-team@your-domain.com
ODD_PLATFORM-BASE-URL=https://odd.your-domain.com
```

{% endtab %}
{% endtabs %}

### Known limitations

ODD Platform builds its `JavaMailSender` with only the keys documented above. The JavaMail session inherits defaults for every other SMTP parameter, and several of those defaults are operator-hostile in production deployments. None of the following is currently exposed as an ODD configuration key — where a workaround exists it is noted, but the limitations are real and should drive your choice of SMTP relay.

{% hint style="warning" %}
**SMTP timeouts are unset — an unreachable SMTP server will hang notification delivery.** The JavaMail defaults for `mail.smtp.connectiontimeout`, `mail.smtp.timeout` (read), and `mail.smtp.writetimeout` are **infinite**. If the configured SMTP host is unreachable, slow, or stalls mid-response, the notification thread blocks until the TCP stack eventually tears the connection down — there is no application-level timeout to cut it short. Use an SMTP relay you control (or a trusted managed service) and monitor its availability separately from ODD Platform.
{% endhint %}

{% hint style="warning" %}
**Only STARTTLS is supported — implicit-TLS ports (e.g. Gmail port 465, many corporate relays) will not work.** ODD Platform exposes `notifications.receivers.email.smtp.starttls` but does not expose `mail.smtp.ssl.enable`, which is the JavaMail flag required to open an implicit-TLS connection. If your SMTP server only accepts connections on an implicit-TLS port, you must front it with a STARTTLS-capable relay (port 587 is the common choice). Gmail over port 587 with STARTTLS (the example above) works; Gmail over port 465 does not.
{% endhint %}

{% hint style="warning" %}
**Self-signed or internal-CA SMTP certificates require a JVM-level workaround.** `mail.smtp.ssl.trust` is not exposed as an ODD configuration key. If your SMTP relay presents a certificate signed by a private CA, the connection will fail certificate validation unless you either (a) add the CA to the JVM truststore of the ODD Platform container (`$JAVA_HOME/lib/security/cacerts` or a `-Djavax.net.ssl.trustStore=...` override) before starting the process, or (b) use an SMTP relay with a publicly-trusted certificate. There is no configuration-file path to this.
{% endhint %}

{% hint style="warning" %}
**Non-ASCII subjects and bodies may be mangled.** The MIME message is built without an explicit charset, so JavaMail falls back to the JVM default. Containers that do not set `file.encoding` or `LANG` explicitly can end up with `US-ASCII` defaults, which corrupt non-Latin alert content. If your alert text includes non-ASCII characters, set `JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8` on the ODD Platform container.
{% endhint %}

{% hint style="danger" %}
**Silent partial delivery: if one recipient fails, subsequent recipients are skipped.** `EmailNotificationSender` iterates over the recipient list in `notifications.receivers.email.notification.emails` and calls the SMTP server once per recipient. If recipient N fails (bad address, mailbox full, server-side policy rejection), the exception is wrapped as a `RuntimeException` and the loop terminates — recipients N+1, N+2, … **never receive the alert**. There is no retry and no partial-failure metric. Keep the recipient list short, use distribution lists on the SMTP side for fan-out, and validate addresses before adding them to the list.
{% endhint %}
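The partial-delivery failure mode can be illustrated with a stripped-down model of the per-recipient loop (illustrative only, not the Platform's actual `EmailNotificationSender` code): one exception aborts the loop, so later recipients are silently skipped.

```python
def send_to_all(recipients, send_one):
    """Model of the per-recipient delivery loop described above:
    an exception raised for one recipient propagates and aborts the
    loop, so every later recipient is skipped. Illustrative only."""
    for recipient in recipients:
        send_one(recipient)  # a failure here stops delivery entirely
```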

### Cleaning up

{% hint style="danger" %}
ODD Platform **doesn't clean up** the replication slot it has created. If you need to disable the Alert Notifications functionality, perform the following steps along with disabling the feature on the ODD Platform side.
{% endhint %}

To remove the replication slot and publication, run these SQL queries against the database:

* ```sql
  SELECT pg_drop_replication_slot('<>');
  ```

  where `<>` is the name of the replication slot defined in the ODD Platform. Default is `odd_platform_replication_slot`
* ```sql
  DROP PUBLICATION IF EXISTS <>;
  ```

  where `<>` is the name of the publication defined in the ODD Platform. Default is `odd_platform_publication_alert`

## Prometheus AlertManager Integration

In addition to raising alerts internally (failed jobs, data-quality tests, schema changes, distribution anomalies — see the [Alerting](https://docs.opendatadiscovery.org/features#alerting) feature), ODD Platform exposes an **inbound webhook** that accepts Prometheus [AlertManager](https://prometheus.io/docs/alerting/latest/alertmanager/) notifications. Each inbound alert becomes a **Distribution Anomaly** alert on the referenced data entity, visible in the Alerts section and on the entity's page.

### Endpoint

```
POST /ingestion/alert/alertmanager
```

Response: `204 No Content` on success. The endpoint consumes the AlertManager webhook body and always returns empty.

### Payload shape

The platform accepts a subset of the [AlertManager webhook schema](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config) — specifically `alerts[].labels`, `alerts[].generatorURL`, and `alerts[].startsAt`. Other top-level AlertManager fields (`version`, `status`, `receiver`, `groupLabels`, `commonLabels`, …) are accepted and ignored.

```json
{
  "alerts": [
    {
      "labels": {
        "entity_oddrn": "//postgresql/host/pg-host/databases/shop/schemas/public/tables/orders",
        "alertname": "OrdersRowCountDropped"
      },
      "generatorURL": "https://prometheus.example.com/graph?g0.expr=...",
      "startsAt": "2026-04-24T12:34:56"
    }
  ]
}
```

{% hint style="warning" %}
**The `entity_oddrn` label is required for the alert to route to a data entity.** ODD Platform reads `alerts[].labels["entity_oddrn"]` to determine which data entity the alert belongs to. An alert submitted without this label is stored with an empty owner, will not appear on any entity's page, and is effectively orphaned. Configure your AlertManager route or your alerting rules to include the target entity's ODDRN as a label.
{% endhint %}
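A payload like the one above can be posted from any HTTP client; here is a sketch using only the Python standard library (the in-cluster hostname is a placeholder to adjust for your deployment):

```python
import json
import urllib.request

# Hypothetical in-cluster URL; adjust to your deployment.
ODD_ENDPOINT = "http://odd-platform:8080/ingestion/alert/alertmanager"

payload = {
    "alerts": [
        {
            "labels": {
                # Required: the ODDRN of the target data entity.
                "entity_oddrn": "//postgresql/host/pg-host/databases/shop/schemas/public/tables/orders",
                "alertname": "OrdersRowCountDropped",
            },
            "generatorURL": "https://prometheus.example.com/graph",
            "startsAt": "2026-04-24T12:34:56",
        }
    ]
}

request = urllib.request.Request(
    ODD_ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment to actually send; a 204 No Content response means success.
# urllib.request.urlopen(request)
```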

### Example AlertManager receiver configuration

A minimal `alertmanager.yml` receiver forwarding every alert to ODD Platform:

```yaml
route:
  group_by: ['job']
  group_wait: 1s
  group_interval: 5m
  repeat_interval: 12h
  receiver: odd-platform
receivers:
  - name: odd-platform
    webhook_configs:
      - url: 'http://odd-platform:8080/ingestion/alert/alertmanager'
```

The reference example shipped with the platform is at [`docker/examples/config/alertmanager.yaml`](https://github.com/opendatadiscovery/odd-platform/blob/main/docker/examples/config/alertmanager.yaml) in the odd-platform repo. To make an alert route to a specific entity, attach `entity_oddrn` as a label in your Prometheus alerting rules — for example:

```yaml
groups:
  - name: orders
    rules:
      - alert: OrdersRowCountDropped
        expr: row_count{table="orders"} < 1000
        labels:
          entity_oddrn: "//postgresql/host/pg-host/databases/shop/schemas/public/tables/orders"
        annotations:
          summary: "Orders table row count dropped below 1000"
```

### Authentication

{% hint style="danger" %}
**The AlertManager webhook endpoint is not authenticated.** ODD Platform whitelists the entire `/ingestion/**` namespace in Spring Security, and the ingestion auth filter controlled by `auth.ingestion.filter.enabled` only guards `/ingestion/entities` (POST) — it does **not** cover `/ingestion/alert/alertmanager`. Anyone with network reach to the platform can POST arbitrary AlertManager-shaped payloads and create alerts on any data entity whose ODDRN they can guess. Toggling `auth.ingestion.filter.enabled` has no effect on this endpoint.
{% endhint %}

Because no application-level authentication is enforced on this endpoint today, protect it at the perimeter. Any of these approaches works:

* **Network segmentation** — expose ODD Platform only on a private network or VPN; in Kubernetes, keep AlertManager and the platform in the same cluster and use a NetworkPolicy so only the AlertManager pod can reach `/ingestion/alert/alertmanager`.
* **Reverse proxy with its own authentication** — put an authenticating proxy in front of ODD Platform (for example, nginx with `auth_request` delegating to an SSO sidecar, or Envoy with `ext_authz`) and require AlertManager to present a proxy-validated credential on every webhook call.
* **mTLS termination** — require client certificates on `/ingestion/alert/alertmanager` at the ingress or load balancer layer, and issue a certificate only to the AlertManager pod.

A platform-side fix to extend the ingestion auth filter to cover this endpoint is tracked upstream. Until it ships, apply one of the perimeter controls above for any deployment where the platform's network is not fully trusted.

For the broader ingestion-auth model — what `auth.ingestion.filter.enabled` does cover and how ingestion API keys are provisioned for `/ingestion/entities` — see [Enable security](https://docs.opendatadiscovery.org/configuration-and-deployment/enable-security) and [Server-to-server (S2S) API keys](https://docs.opendatadiscovery.org/configuration-and-deployment/enable-security/authentication/s2s).

## Enable Data Collaboration

The data collaboration feature allows users to initiate a discussion about a specific data entity in a messenger directly from the ODD Platform. Thread replies are tracked and saved by ODD Platform, allowing users to retrieve a conversation's context and decisions in one place.

At the moment ODD Platform supports only Slack as a target messenger. It uses Slack APIs to send messages and the [Slack Events API](https://api.slack.com/apis/connections/events-api) to receive thread replies.

### Creating Slack application

Go to the [Slack apps](https://api.slack.com/apps) website and click on `Create New App -> From an app manifest`

<figure><img src="https://3630572601-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGZJ5RdCEQxq16TnRL3Tq%2Fuploads%2Fgit-blob-11d270ac3d6302cd98594943c746ba17fe42033b%2Fimage.png?alt=media" alt=""><figcaption><p>Creating an app</p></figcaption></figure>

Select a workspace you want to add an application to and click `Next`

<figure><img src="https://3630572601-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGZJ5RdCEQxq16TnRL3Tq%2Fuploads%2Fgit-blob-f2496b2b43d1be3b005c76bdc2dec8a147c0de1b%2Fimage.png?alt=media" alt=""><figcaption><p>Selecting a workspace to install application to</p></figcaption></figure>

Enter the following manifest into the YAML section, replacing `<ODD_PLATFORM_BASE_URL>` with the URL of your ODD Platform deployment, and click `Next`

```yaml
display_information:
  name: ODD Data Collaboration
features:
  bot_user:
    display_name: ODD Data Collaboration
    always_online: false
oauth_config:
  scopes:
    bot:
      - channels:history
      - channels:read
      - chat:write
      - users:read
      - incoming-webhook
settings:
  event_subscriptions:
    request_url: https://<ODD_PLATFORM_BASE_URL>/api/slack/events
    bot_events:
      - message.channels
  org_deploy_enabled: false
  socket_mode_enabled: false
  token_rotation_enabled: false
```

<figure><img src="https://3630572601-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGZJ5RdCEQxq16TnRL3Tq%2Fuploads%2Fgit-blob-5c190d5dec118a72d886c9ac64257ed87d99f053%2Fimage.png?alt=media" alt=""><figcaption><p>Inserting a YAML manifest</p></figcaption></figure>

Review your application's scopes and permissions and click `Create`

<figure><img src="https://3630572601-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGZJ5RdCEQxq16TnRL3Tq%2Fuploads%2Fgit-blob-d513c0e26fa9390e25eaba0ce60a440d75d34b67%2Fimage.png?alt=media" alt=""><figcaption><p>Reviewing scopes and permissions</p></figcaption></figure>

Follow the Slack instructions to install the application into your workspace, and you should be good to go.

### ODD Platform configuration

The following variables need to be defined:

* `datacollaboration.enabled`: must be set to `true`. Defaults to `false`
* `datacollaboration.receive-event-advisory-lock-id`: PostgreSQL advisory lock id for the job that translates events from messengers into messages. Defaults to `110`
* `datacollaboration.sender-message-advisory-lock-id`: PostgreSQL advisory lock id for the job that sends messages created in the Platform to messengers. Defaults to `120`
* `datacollaboration.message-partition-period`: time interval in days for a message table partition in PostgreSQL. Defaults to `30`
* `datacollaboration.sending-messages-retry-count`: how many times the Platform will attempt to send a message to the provider. Cannot be less than zero. Defaults to `3`
* `datacollaboration.slack-oauth-token`: Slack application OAuth token used for communicating with Slack. Can be retrieved in the `OAuth & Permissions` section of a Slack application.\\

  <figure><img src="https://3630572601-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGZJ5RdCEQxq16TnRL3Tq%2Fuploads%2Fgit-blob-dd98ef4f4d854f1d47f5ff5f560925874faa2bad%2Fimage.png?alt=media" alt=""><figcaption><p>Retrieving OAuth Token</p></figcaption></figure>

{% tabs %}
{% tab title="YAML" %}

```yaml
datacollaboration:
  receive-event-advisory-lock-id: {receive_event_advisory_lock_id}
  sender-message-advisory-lock-id: {sender_message_advisory_lock_id}
  message-partition-period: {message_partition_period}
  sending-messages-retry-count: {sending_messages_retry_count}
  enabled: true
  slack-oauth-token: {slack_oauth_token}

odd:
  platform-base-url: {platform_url}
```

{% endtab %}

{% tab title="Environment variables" %}

```
DATACOLLABORATION_ENABLED=true
DATACOLLABORATION_RECEIVE_EVENT_ADVISORY_LOCK_ID={receive_event_advisory_lock_id}
DATACOLLABORATION_SENDER_MESSAGE_ADVISORY_LOCK_ID={sender_message_advisory_lock_id}
DATACOLLABORATION_MESSAGE_PARTITION_PERIOD={message_partition_period}
DATACOLLABORATION_SENDING_MESSAGES_RETRY_COUNT={sending_messages_retry_count}
DATACOLLABORATION_SLACK_OAUTH_TOKEN={slack_oauth_token}
ODD_PLATFORM_BASE_URL={odd_platform_base_url}
```

{% endtab %}
{% endtabs %}

## Housekeeping Settings Configuration

ODD Platform runs a background **housekeeping job** that permanently deletes stale data on a schedule. The job fires every **15 minutes**, is guarded by a ShedLock so only one platform instance runs it at a time in a multi-instance deployment, and iterates through three cleanup tasks: resolved alerts, search-facet history, and soft-deleted data entities.

### Configuration keys

* `housekeeping.enabled`: enables the background job. Defaults to `true`. See the caveat below before disabling.
* `housekeeping.ttl.resolved_alerts_days`: how many days an alert in `RESOLVED_AUTOMATICALLY` status is kept after its status-update timestamp before the housekeeping job permanently deletes it (alongside its chunk records). Integer, days. Defaults to `30`. **Note:** the retention window is intended to apply to both `RESOLVED` (manual) and `RESOLVED_AUTOMATICALLY` (system) states, but a known platform bug currently exempts manual resolutions from the retention check — manual `RESOLVED` alerts are hard-deleted on the next housekeeping run regardless of this value. See [Auto-cleanup of resolved alerts](https://docs.opendatadiscovery.org/features#auto-cleanup-of-resolved-alerts) on the Features page for the operator-side workaround.
* `housekeeping.ttl.search_facets_days`: how many days a saved search-facet entry is kept past its `last_accessed_at` timestamp before being deleted. Integer, days. Defaults to `30`.
* `housekeeping.ttl.data_entity_delete_days`: how many days a data entity with status `DELETED` is kept after its status-update timestamp. After this, the entity and its cascading related rows — metadata values, ownerships, lineage, tags, terms, alerts, messages, metrics, attachments, task runs, group relations, and (for datasets) dataset structure and enum values — are permanently deleted. Integer, days. Defaults to `30`.

{% hint style="warning" %}
**Disabling housekeeping (`housekeeping.enabled: false`) stops all three cleanup jobs.** Resolved alerts, search-facet history, and soft-deleted data entities will accumulate indefinitely and the PostgreSQL database will grow without bound. Leave the job enabled in production; disable only for debugging or offline migrations, and re-enable (or run a manual cleanup) afterwards.
{% endhint %}

{% tabs %}
{% tab title="YAML" %}

```yaml
housekeeping:
  enabled: true
  ttl:
    resolved_alerts_days: 30
    search_facets_days: 30
    data_entity_delete_days: 30
```

{% endtab %}

{% tab title="Environment variables" %}

```
HOUSEKEEPING_ENABLED=true
HOUSEKEEPING_TTL_RESOLVED_ALERTS_DAYS=30
HOUSEKEEPING_TTL_SEARCH_FACETS_DAYS=30
HOUSEKEEPING_TTL_DATA_ENTITY_DELETE_DAYS=30
```

{% endtab %}
{% endtabs %}
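The retention rule behind all three TTL keys is the same timestamp comparison. Here is a minimal Python sketch of the check (illustrative only; the actual deletion runs inside the platform's housekeeping job):

```python
from datetime import datetime, timedelta, timezone

def is_expired(status_updated_at, ttl_days, now=None):
    """True when a record is past its housekeeping TTL and eligible for deletion."""
    now = now or datetime.now(timezone.utc)
    return status_updated_at + timedelta(days=ttl_days) < now

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
resolved_at = datetime(2024, 5, 1, tzinfo=timezone.utc)   # alert resolved 60 days ago
print(is_expired(resolved_at, 30, now))                   # past the 30-day default -> True
```

With the default of `30`, anything whose status changed more than 30 days ago is picked up by the next 15-minute housekeeping run.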

## Platform-level settings (`odd.*`)

The `odd.*` namespace groups three platform-wide settings that do not belong to any subsystem: stale-metadata detection, the optional Prometheus tenant label, and the Activity-feed partitioning period. A fourth key in the same namespace, `odd.platform-base-url`, is documented above in [Enable Alert Notifications](#enable-alert-notifications) because its only consumers are the notification senders.

### Detecting stale metadata

Stale metadata is metadata that has not been refreshed from its source for longer than an operator-defined window. This typically happens when a collector is paused, deactivated, or failing to reach the source system. When the platform judges an entity to be stale, the UI surfaces it with a "Stale" indicator so users can distinguish data whose freshness is uncertain from actively-maintained metadata.

* `odd.data-entity-stale-period`: number of days after the entity's last successful ingestion before it is labeled "Stale" in the UI and API. Integer, days. Defaults to `7`.

Operators running collectors on schedules longer than a week should raise this value to match the collector cadence — otherwise entities that were ingested successfully will be flagged stale between runs.

{% tabs %}
{% tab title="YAML" %}

```yaml
odd:
  data-entity-stale-period: 7 # days
```

{% endtab %}

{% tab title="Environment variables" %}

```
ODD_DATA_ENTITY_STALE_PERIOD=7
```

{% endtab %}
{% endtabs %}
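To see why the cadence advice above matters, here is a minimal sketch of the staleness rule (an illustration of the documented behavior, not the platform's actual code):

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_ingested_at, stale_period_days, now):
    """True when the last successful ingestion is older than the stale period."""
    return (now - last_ingested_at) > timedelta(days=stale_period_days)

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
last_run = now - timedelta(days=10)       # collector runs on a 10-day schedule
print(is_stale(last_run, 7, now))         # default 7-day window -> flagged stale: True
print(is_stale(last_run, 14, now))        # window raised past the cadence -> False
```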

### Prometheus tenant label (`odd.tenant-id`)

When [`metrics.storage`](#metric-storage-backend) is set to `PROMETHEUS`, the platform appends `tenant_id={value}` as a label on every Prometheus instant query it issues. This lets a single shared Prometheus instance serve metric data for multiple ODD Platform deployments without their metric series colliding — each deployment queries only its own tenant-labeled series.

* `odd.tenant-id`: tenant identifier appended as a Prometheus query label. String, no default (empty means no label is applied, and the Prometheus query returns series across all tenants). Ignored when `metrics.storage=INTERNAL_POSTGRES`.

{% tabs %}
{% tab title="YAML" %}

```yaml
odd:
  tenant-id: my-odd-deployment
```

{% endtab %}

{% tab title="Environment variables" %}

```
ODD_TENANT_ID=my-odd-deployment
```

{% endtab %}
{% endtabs %}
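The effect on a query can be sketched as a label matcher appended to the metric selector. The metric name below is made up for illustration:

```python
def with_tenant_label(selector, tenant_id):
    """Append a tenant_id matcher to a bare PromQL metric selector (sketch)."""
    if not tenant_id:
        return selector       # empty tenant id: no label, series from all tenants match
    return f'{selector}{{tenant_id="{tenant_id}"}}'

print(with_tenant_label("odd_dataset_row_count", "my-odd-deployment"))
# odd_dataset_row_count{tenant_id="my-odd-deployment"}
```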

### Activity-feed partitioning (`odd.activity.partition-period`)

The ODD Platform `activity` table is range-partitioned on a rolling date window; `odd.activity.partition-period` sets the partition width in days. The default creates a new partition every 30 days, which is appropriate for most deployments. Operators running high-volume deployments (millions of activity events per day) can tune this downward to narrow partitions — smaller partitions speed up vacuum and partition-prune operations on the activity feed.

* `odd.activity.partition-period`: partition width in days for the `activity` table. Integer, days. Defaults to `30`.

{% tabs %}
{% tab title="YAML" %}

```yaml
odd:
  activity:
    partition-period: 30
```

{% endtab %}

{% tab title="Environment variables" %}

```
ODD_ACTIVITY_PARTITION_PERIOD=30
```

{% endtab %}
{% endtabs %}
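A quick way to reason about this setting is to lay out the `[from, to)` ranges a given period produces. A Python sketch (the start date and period are examples, not platform values):

```python
from datetime import date, timedelta

def partition_bounds(start, period_days, n):
    """First n [from, to) date ranges for a table partitioned every period_days days."""
    return [(start + timedelta(days=i * period_days),
             start + timedelta(days=(i + 1) * period_days))
            for i in range(n)]

for lo, hi in partition_bounds(date(2024, 1, 1), 30, 3):
    print(lo, "->", hi)       # three consecutive 30-day partitions
```

Halving the period doubles the partition count over the same window, so each partition covers half as many activity rows, which is what speeds up vacuum and partition pruning.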

## Attachment Storage Configuration

ODD Platform allows users to attach files and links to data entities from the UI — see the [Data Entity Attachments](https://docs.opendatadiscovery.org/features#id-6fbe) section in Features for the user-facing flow. This section covers the operator-facing configuration for **where** those uploaded files are stored.

{% hint style="danger" %}
**The default `LOCAL` storage mode is ephemeral.** Attachments are written to `/tmp/odd/attachments` inside the ODD Platform container filesystem. Any container or pod restart — routine deployment, node drain, crash, Kubernetes eviction — permanently deletes all uploaded files.

**Use `REMOTE` (S3 / MinIO) storage for any Kubernetes or Docker deployment where users will actually upload attachments.** `LOCAL` mode is suitable only for single-host evaluations or local development where losing attachments on restart is acceptable.
{% endhint %}

### Configuration keys

* `attachment.storage`: storage backend. One of `LOCAL` or `REMOTE`. Defaults to `LOCAL`.
* `attachment.max-file-size`: maximum size per uploaded file, in **megabytes**. Defaults to `20`. See the hint below if raising this above 20 MB.
* `attachment.local.path`: filesystem directory where attachments are written when `storage=LOCAL`. Defaults to `/tmp/odd/attachments` (ephemeral — see warning above).
* `attachment.remote.url`: S3-compatible endpoint URL when `storage=REMOTE` (for example `https://s3.us-east-1.amazonaws.com` for AWS S3 or `http://minio:9000` for a MinIO service). See the **Known limitations (REMOTE mode)** subsection below before choosing your endpoint — in particular the `us-east-1` restriction for AWS S3 and the chunked-upload staging behavior.
* `attachment.remote.access-key`: access key for the S3-compatible bucket.
* `attachment.remote.secret-key`: secret key for the S3-compatible bucket.
* `attachment.remote.bucket`: bucket name used to store attachment objects. The bucket must already exist — ODD Platform does not create it.
* `spring.codec.max-in-memory-size`: platform-wide cap on the in-memory buffer Spring WebFlux uses when reading a request body. Defaults to `20MB`. This is the transport-layer ceiling — `attachment.max-file-size` cannot effectively exceed it. Accepts a size string (`20MB`, `100MB`, `1GB`).

{% hint style="warning" %}
**`attachment.max-file-size` must not exceed `spring.codec.max-in-memory-size`.** Both ship with the same `20 MB` default, so the attachment cap is effective out of the box. If you raise `attachment.max-file-size` to allow larger uploads — for example `100 MB` — you must raise `spring.codec.max-in-memory-size` to at least the same value, otherwise uploads above `20 MB` fail at the WebFlux codec layer with `DataBufferLimitException` before the attachment validation runs.
{% endhint %}
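The consistency rule in the hint above can be checked mechanically. A hedged sketch of parsing Spring-style size strings and comparing the two caps (the parser covers only the suffixes mentioned on this page):

```python
import re

UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_size(size):
    """Parse a Spring-style size string such as '20MB' into bytes."""
    m = re.fullmatch(r"(\d+)(KB|MB|GB)", size.strip().upper())
    if not m:
        raise ValueError(f"unrecognised size string: {size!r}")
    return int(m.group(1)) * UNITS[m.group(2)]

def attachment_cap_effective(max_file_size_mb, codec_max):
    """True when attachment.max-file-size fits under spring.codec.max-in-memory-size."""
    return max_file_size_mb * 1024 ** 2 <= parse_size(codec_max)

print(attachment_cap_effective(20, "20MB"))    # shipped defaults agree -> True
print(attachment_cap_effective(100, "20MB"))   # cap raised, codec ceiling not -> False
```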

### Example: REMOTE storage with S3-compatible backend (MinIO or AWS S3)

{% tabs %}
{% tab title="YAML" %}

```yaml
attachment:
  storage: REMOTE
  max-file-size: 50 # mb
  remote:
    url: {s3_endpoint_url}
    access-key: {access_key}
    secret-key: {secret_key}
    bucket: {bucket_name}
```

{% endtab %}

{% tab title="Environment variables" %}

```
ATTACHMENT_STORAGE=REMOTE
ATTACHMENT_MAX_FILE_SIZE=50
ATTACHMENT_REMOTE_URL={s3_endpoint_url}
ATTACHMENT_REMOTE_ACCESS_KEY={access_key}
ATTACHMENT_REMOTE_SECRET_KEY={secret_key}
ATTACHMENT_REMOTE_BUCKET={bucket_name}
```

{% endtab %}
{% endtabs %}

### Known limitations (REMOTE mode)

ODD Platform builds its `MinioAsyncClient` with only the endpoint and credentials documented above. The MinIO Java SDK inherits defaults for every other parameter, and the attachment-upload code path carries a small amount of additional behavior that is not configurable. None of the following is currently exposed as an ODD configuration key — plan your deployment around these limits rather than assuming a config flag will fix them.

{% hint style="warning" %}
**AWS S3 region pinned to `us-east-1`.** The attachment client is built without an explicit region, so it uses the MinIO Java SDK's default region (`us-east-1`) for request signing. Against **AWS S3 this means only buckets in `us-east-1` work** — buckets in any other region fail signature validation with errors such as `AuthorizationHeaderMalformed` or `PermanentRedirect`. If you need AWS S3 in another region, either host your bucket in `us-east-1` or use a MinIO server in front of it. Self-hosted MinIO and most other S3-compatible services ignore the region header and are unaffected.
{% endhint %}

{% hint style="warning" %}
**HTTP client timeouts are the MinIO SDK defaults (\~5 minutes), not configurable.** ODD Platform does not supply a custom `OkHttpClient` to the MinIO builder, so the SDK's built-in defaults apply: roughly a 5-minute read/write timeout. A single large upload whose end-to-end wall time (network transfer + S3 ingest) exceeds that limit fails with a socket-timeout error even though the content was being streamed successfully. If your users upload near the `attachment.max-file-size` limit over a slow link, keep `attachment.max-file-size` below the size a typical upload can complete inside 5 minutes at your network's real throughput.
{% endhint %}
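To size `attachment.max-file-size` against that 5-minute ceiling, a back-of-the-envelope helper can be useful (the 20% safety margin is an assumption for illustration, not a platform constant):

```python
def max_upload_mb(throughput_mbit_s, timeout_s=300, safety=0.8):
    """Largest upload (MB) that can finish inside the SDK timeout at a given link speed."""
    return throughput_mbit_s / 8 * timeout_s * safety

print(round(max_upload_mb(10)))    # 10 Mbit/s uplink, 5-minute timeout -> ~300 MB
```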

{% hint style="danger" %}
**Chunked uploads are assembled on the container's local filesystem before they are sent to `REMOTE` storage — a mid-upload container restart loses the staged chunks.** The UI splits large files into chunks and uploads each chunk individually; the platform writes each chunk to a local directory (the same directory family that backs `attachment.local.path`) and reassembles the full file there before streaming it to the S3-compatible backend. **This is true even when `attachment.storage=REMOTE`.** If the ODD Platform container is restarted, evicted, or rescheduled during an in-flight chunked upload, the local directory is wiped and the partial upload is unrecoverable — the user must re-upload from scratch. In Kubernetes deployments, either mount a persistent volume at the chunk-staging directory or limit the maximum upload size so single-request uploads are the norm. The `LOCAL`-mode ephemeral warning above applies to chunk-staging in `REMOTE` mode as well.
{% endhint %}

{% hint style="warning" %}
**No retry on transient S3 / MinIO errors.** Put, get, and remove operations against the bucket do not retry on transient failures — a single 503 from S3, a connection reset from the network, or a short MinIO outage surfaces as a failed operation with no automatic recovery. If your alerting pipeline treats attachment failures as user-impacting errors, add retry at the infrastructure layer (for example an S3-proxy sidecar with retry) rather than expecting the platform to paper over it.
{% endhint %}
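If you cannot place a retrying proxy in front of the bucket, the same idea can live in whatever automation calls the attachment API. A minimal exponential-backoff sketch (the helper name and the simulated failure are illustrative, not part of any platform API):

```python
import time

def with_retries(op, attempts=3, base_delay=0.2):
    """Call op(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise          # out of attempts: surface the failure
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky_put():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated transient 503")
    return "stored"

print(with_retries(flaky_put))     # succeeds on the third attempt -> stored
```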

### Example: LOCAL storage (single-host / local evaluation only)

{% tabs %}
{% tab title="YAML" %}

```yaml
attachment:
  storage: LOCAL
  max-file-size: 20 # mb
  local:
    path: /var/lib/odd/attachments
```

{% endtab %}

{% tab title="Environment variables" %}

```
ATTACHMENT_STORAGE=LOCAL
ATTACHMENT_MAX_FILE_SIZE=20
ATTACHMENT_LOCAL_PATH=/var/lib/odd/attachments
```

{% endtab %}
{% endtabs %}

If you keep `LOCAL` mode, override `attachment.local.path` to a persistent volume mount rather than the default `/tmp/odd/attachments`, and confirm the volume is actually persistent across restarts in your deployment topology.

## Logging Settings Configuration

Logs provide detailed information about errors in the application, helping users quickly identify and fix problems. Setting up logging is recommended for operational excellence, system reliability, and effective monitoring and troubleshooting.\
Here is a code snippet for setting up logs in the ODD Platform:

{% tabs %}
{% tab title="YAML" %}

```yaml
logging:
  level:
    org.springframework.transaction.interceptor: info
    org.jooq.tools.LoggerListener: info
    io.r2dbc.postgresql.QUERY: info
    io.r2dbc.postgresql.PARAM: info
    org.opendatadiscovery.oddplatform.notification: info
    org.opendatadiscovery.oddplatform.housekeeping: info
    org.opendatadiscovery.oddplatform.partition: info
    org.opendatadiscovery.oddplatform.datacollaboration: info
    org.opendatadiscovery.oddplatform.service.ingestion: info
```

{% endtab %}

{% tab title="Environment Variables" %}

```
LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_TRANSACTION_INTERCEPTOR=info
LOGGING_LEVEL_ORG_JOOQ_TOOLS_LOGGERLISTENER=info
LOGGING_LEVEL_IO_R2DBC_POSTGRESQL_QUERY=info
LOGGING_LEVEL_IO_R2DBC_POSTGRESQL_PARAM=info
LOGGING_LEVEL_ORG_OPENDATADISCOVERY_ODDPLATFORM_NOTIFICATION=info
LOGGING_LEVEL_ORG_OPENDATADISCOVERY_ODDPLATFORM_HOUSEKEEPING=info
LOGGING_LEVEL_ORG_OPENDATADISCOVERY_ODDPLATFORM_PARTITION=info
LOGGING_LEVEL_ORG_OPENDATADISCOVERY_ODDPLATFORM_DATACOLLABORATION=info
LOGGING_LEVEL_ORG_OPENDATADISCOVERY_ODDPLATFORM_SERVICE_INGESTION=info
```

{% endtab %}
{% endtabs %}

Setting the logging level to `info` lets you see useful messages about the platform's operation without being overwhelmed by detail (as with `trace` or `debug`) or missing important issues (as with `warn` and above).\
However, feel free to adjust the logging level to get more or less information based on your specific requirements.
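Every YAML key on this page maps to its environment-variable form the same way: dots and dashes become underscores and the result is uppercased. A sketch of that convention (matching the examples on this page, not Spring's full relaxed-binding specification):

```python
def to_env_var(property_key):
    """Map a dotted property key to the env-var form used throughout this page."""
    return property_key.replace(".", "_").replace("-", "_").upper()

print(to_env_var("logging.level.io.r2dbc.postgresql.QUERY"))
# LOGGING_LEVEL_IO_R2DBC_POSTGRESQL_QUERY
print(to_env_var("datacollaboration.sending-messages-retry-count"))
# DATACOLLABORATION_SENDING_MESSAGES_RETRY_COUNT
```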

## Machine-to-Machine (M2M) Tokens Configuration

ODD Platform supports a static API-key authentication mode for non-UI callers (CI/CD jobs, ingestion pipelines, automation scripts) — also referred to as Machine-to-Machine (M2M) tokens. It is **disabled by default**.

For the full configuration keys, the header contract, the curl example, and security considerations (token rotation, HTTPS, blast radius), see [Server-to-server (S2S) authentication](https://docs.opendatadiscovery.org/configuration-and-deployment/enable-security/authentication/s2s).

