
PostgreSQL (Edge)

Hevo Edge supports the following variations of PostgreSQL as a Source:

- Amazon Aurora PostgreSQL
- Amazon RDS PostgreSQL
- Azure Database for PostgreSQL
- Generic PostgreSQL
- Google Cloud PostgreSQL

Supported Configurations

| Supportability Category | Supported Values |
|---|---|
| Database versions | 10 - 17 |
| Maximum row size | 4 MB per row |
| Connection limit per database | No limit |
| Transport Layer Security (TLS) | 1.2 and 1.3 |
| Server encoding | UTF-8 |

Supported Features

| Feature Name | Supported |
|---|---|
| Capture deletes | Yes |
| Custom data (user-configured tables & fields) | COMPOSITE, ENUM, and DOMAIN data types |
| Data blocking (skip objects and fields) | Yes |
| Resync (objects and Pipelines) | Yes |
| API configurable | Yes |
| Connecting through a private link | Connections via AWS PrivateLink are allowed. Subscription to a business plan is required. |

Supported Instance Types

| Instance Type | Logical Replication (Primary Instance) | Logical Replication (Standby Instance) |
|---|---|---|
| Amazon Aurora PostgreSQL | Yes (versions 11 - 17) | No |
| Amazon RDS PostgreSQL | Yes (versions 11 - 17) | Yes (version 16) |
| Azure Database for PostgreSQL | Yes (versions 10 - 17) | No |
| Generic PostgreSQL | Yes (versions 10 - 17) | No |
| Google Cloud PostgreSQL | Yes: Enterprise (versions 10 - 17), Enterprise Plus (versions 12 - 17) | No |

Handling Source Partitioned Tables

Hevo Edge supports loading data ingested from partitioned tables in PostgreSQL versions 10 to 17. The way data is loaded varies across versions because PostgreSQL handles partitioning differently across these versions. The table below shows how each version group handles partitioning and how this affects logical replication:

| PostgreSQL Versions | Partitioning Behavior | Logical Replication Behavior |
|---|---|---|
| 10 to 12 | Partitioning operations use internal logic. | Each partition is treated as a separate table. |
| 13 to 17 | Partitioning is managed through the publish_via_partition_root parameter in publications. | Partition changes are tracked through the partition table name or the parent table name, depending on the parameter value. |

Note: For versions 13 and above, a publication is created by default with publish_via_partition_root set to FALSE. Refer to the Create a publication for your database tables section in the respective Source documentation to create one with the value set to TRUE.
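On PostgreSQL 13 and above, the parameter can be set when the publication is created. A minimal sketch, assuming a hypothetical publication name (hevo_publication) and table name (orders):

```sql
-- Publish changes to a partitioned table under the parent (root)
-- table's name. Publication and table names here are placeholders.
CREATE PUBLICATION hevo_publication FOR TABLE orders
    WITH (publish_via_partition_root = true);

-- Verify the setting; pubviaroot should be 't' (true).
SELECT pubname, pubviaroot FROM pg_publication;
```

An existing publication can also be updated in place with ALTER PUBLICATION ... SET (publish_via_partition_root = true).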

The following table explains Edge’s behavior for loading data ingested from partitioned tables based on the value of the publish_via_partition_root parameter:

| PostgreSQL Version | Value of publish_via_partition_root | Hevo Behavior |
|---|---|---|
| 10 - 12 | Not applicable | Data from the Source table partitions is loaded into separate tables at the Destination. |
| 13 - 17 | TRUE | Data from the partitioned Source table is loaded into a single Destination table. |
| 13 - 17 | FALSE | Data from the Source table partitions is loaded into individual tables at the Destination. |
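To illustrate the two behaviors, consider a hypothetical range-partitioned table; the table and partition names below are only examples:

```sql
-- Parent table partitioned by range on order_date (placeholder names).
CREATE TABLE orders (
    order_id   bigint NOT NULL,
    order_date date   NOT NULL
) PARTITION BY RANGE (order_date);

-- Two leaf partitions.
CREATE TABLE orders_2024 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
CREATE TABLE orders_2025 PARTITION OF orders
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
```

With publish_via_partition_root set to TRUE, changes to orders_2024 and orders_2025 are published under the parent name orders and land in a single Destination table; with FALSE, they are published under the partition names and land in separate Destination tables.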

Handling TOAST Data

PostgreSQL stores large column values efficiently using The Oversized-Attribute Storage Technique (TOAST). Rows are stored in fixed-size pages (8 KB by default); when a row grows too large to fit in a page, PostgreSQL compresses its large column values and may move them into a separate TOAST table. Columns managed in this way are referred to as TOASTed columns.

Hevo Edge identifies TOASTed columns in the ingested data and replicates data from these columns into your Destination tables using a merge operation. This operation updates existing records and adds new ones.

Note: In Edge, Hevo does not replicate data from TOASTed columns if your Pipeline loads data in Append mode. In this mode, existing records are not updated; only new records are added.
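To see whether a table has a backing TOAST table and which of its columns can be TOASTed, you can query the system catalogs. A sketch, with a placeholder table name:

```sql
-- The TOAST table backing a given table, if one exists
-- ('orders' is a placeholder).
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'orders';

-- Per-column storage strategy: 'p' (plain) prevents TOASTing;
-- 'x' (extended), 'e' (external), and 'm' (main) allow it.
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'orders'::regclass
  AND attnum > 0;
```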


Resolving Data Loss in Paused Pipelines

For log-based Edge Pipelines created with any variant of the PostgreSQL Source, the data to be replicated is identified from the write-ahead log (WAL) through the publications created on the database tables. Pausing or disabling such a Pipeline may therefore lead to data loss, because the required WAL segments may be deleted before the Pipeline resumes. A WAL segment can be deleted when its retention period expires or when the server runs low on storage space and removes older log files.
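While a Pipeline is paused, one way to gauge this risk is to check how much WAL each replication slot on the Source database is retaining. A sketch (slot names depend on your setup):

```sql
-- WAL retained for each replication slot; a large or steadily
-- growing value while the Pipeline is paused signals that segments
-- may be removed once retention limits are hit.
SELECT slot_name,
       active,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;
```

On PostgreSQL 13 and above, the max_slot_wal_keep_size parameter caps how much WAL a slot may retain; once the cap is exceeded, the slot is invalidated and the retained segments become eligible for removal.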

If you notice data loss in your Edge Pipeline after re-enabling it, resync the Pipeline. The Resync Pipeline action restarts the historical load for all active objects in the Pipeline, thus recovering any lost data.

Note: The re-ingested data does not count towards your quota consumption and is not billed.

Last updated on Dec 22, 2025
