Generic Oracle (Edge) Setup Guide

Oracle database is a widely used Relational Database Management System (RDBMS) designed to support high-performance data processing across multiple platforms, including Windows Server, Unix, and various GNU/Linux distributions. Oracle is used to store and manage both small and large datasets with optimal speed, efficiency, and security.

Oracle databases can be deployed using different configurations depending on availability, scalability, and workload requirements. Hevo supports ingesting data from databases deployed using the following configurations and replicating it to a Destination of your choice:

  • Single Instance Database: A traditional Oracle deployment in which the database runs on a single server. This configuration is typically used for small to medium workloads or environments that do not require high availability through clustering. Hevo connects directly to the Oracle database and replicates data changes from redo logs using LogMiner.

  • Real Application Clusters (RAC): A clustered Oracle deployment in which multiple database nodes run the same database and share a common storage system. This configuration provides high availability and scalability by distributing workloads across multiple nodes. If one node becomes unavailable, the database remains accessible through the remaining nodes. Hevo connects to the Oracle database using the configured service name, which routes connections to the available RAC nodes, and captures data changes from the database redo logs using LogMiner.

  • Multitenant Architecture (CDB/PDB): An Oracle deployment model that allows multiple logical databases to run within a single Oracle instance. In this model:

    • A Container Database (CDB) acts as the primary database container.

    • One or more Pluggable Databases (PDBs) reside within the CDB. Each PDB behaves as an independent logical database containing its own schemas, tables, and data.

    Multitenant databases can run either as a single-instance deployment or as part of a Real Application Clusters (RAC) configuration. Hevo connects to the specific PDB service to capture data from that pluggable database.

    Note: A Pipeline can replicate data from only one Pluggable Database (PDB). If your Container Database (CDB) contains multiple PDBs and you need to ingest data from more than one PDB, create a separate Pipeline for each PDB.
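The deployment models above differ mainly in which service a client connects to. The sketch below builds EZConnect-style descriptors (`host:port/service_name`) for each model; all host names, ports, and service names are hypothetical placeholders, not values Hevo prescribes.

```python
# Sketch: EZConnect-style connection descriptors for the deployment models
# described above. Hosts and service names are illustrative assumptions.

def ezconnect(host: str, port: int, service_name: str) -> str:
    """Return an EZConnect descriptor of the form host:port/service_name."""
    return f"{host}:{port}/{service_name}"

# Single-instance database: connect to the database service directly.
single = ezconnect("oracle-host.example.com", 1521, "ORCLDB")

# RAC: connect via the SCAN address; the service name routes the session
# to an available node.
rac = ezconnect("rac-scan.example.com", 1521, "SALES_SVC")

# Multitenant: connect to the service of the specific PDB, not the CDB root.
pdb = ezconnect("cdb-host.example.com", 1521, "PDB1_SVC")

print(single)  # oracle-host.example.com:1521/ORCLDB
```

In the multitenant case, note that the descriptor names the PDB's own service; this matches the one-PDB-per-Pipeline rule above, since each Pipeline connects to exactly one PDB service.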


Data Type Mapping

Hevo maps the Oracle Source data type internally to a unified data type, referred to as the Hevo Data Type, in the table below. This data type is used to represent the Source data from all supported data types in a lossless manner.

The following table lists the supported Oracle data types and the corresponding Hevo data type to which they are mapped:

Oracle Data Type → Hevo Data Type

  • BIT → BOOLEAN

  • NUMBER, FLOAT, DECIMAL, DOUBLE → DECIMAL

  • CHAR, VARCHAR, VARCHAR2, NCHAR, NVARCHAR, ROWID, INTERVAL_DAY_TO_SECOND, INTERVAL_YEAR_TO_MONTH → VARCHAR

  • TIMESTAMPTZ, TIMESTAMP_LOCALTZ → TIMESTAMPTZ (Format: YYYY-MM-DDTHH:mm:ss.SSSSSSSSSZ)

  • DATE, TIMESTAMP → TIMESTAMP

Read ANSI, DB2, and SQL/DS Data Types to learn how Oracle converts ANSI data types to its own supported types.

Note: Hevo replicates values in TIMESTAMPTZ and TIMESTAMP_LOCALTZ fields in Coordinated Universal Time (UTC). Due to this, you may observe a time difference if the original values of these fields use a different time zone. For example, if a value in a TIMESTAMPTZ column is in US Eastern Time (UTC-4) and is recorded as 2024-05-01 10:00:00 -04:00, Hevo converts it to UTC and replicates it as 2024-05-01 14:00:00 +00:00 in the Destination column.
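The UTC normalization described in the note can be reproduced with standard datetime arithmetic. This is a minimal sketch of the conversion, using the example value from the note; it is not Hevo's internal implementation.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the UTC normalization described above: a TIMESTAMPTZ value
# recorded in US Eastern Time (UTC-4) is converted to UTC before it is
# written to the Destination.
source_value = datetime(2024, 5, 1, 10, 0, 0,
                        tzinfo=timezone(timedelta(hours=-4)))

replicated_value = source_value.astimezone(timezone.utc)
print(replicated_value.isoformat())  # 2024-05-01T14:00:00+00:00
```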


Handling of NUMBER Data Type

In Oracle, NUMBER is a data type that stores fixed-point or floating-point numbers. To preserve the integrity of the input, the NUMBER data type is specified in the format NUMBER(p,s), where p is the precision and s is the scale. Precision (p) refers to the maximum number of significant digits a number can have. Significant digits are the digits in a number that carry meaningful information, excluding leading and trailing zeros. Scale (s) refers to the number of digits to the right of the decimal point. Read Numeric Data Types to know more about how Oracle handles the NUMBER data type.

When the scale is less than or equal to zero, Hevo calculates the width of a stored value using the formula Width = Precision - Scale. Width refers to the total number of digits required to store a number, considering both the integer (digits before the decimal point) and fractional (digits after the decimal point) parts.

The NUMBER data types are mapped to the following Hevo data types based on the calculated width:

Width → Hevo Data Type

  • Width < 5 → SHORT

  • Width < 10 → INT

  • Width < 19 → LONG

  • Width ≥ 19 → DECIMAL

Note: If precision and scale are not defined for the NUMBER data type, Destination-specific default maximum values for precision and scale are used.
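The width rule above can be sketched as a small mapping function. The function name is illustrative, not part of Hevo's API, and it covers only the case the documentation specifies (scale less than or equal to zero).

```python
# Sketch of the width-based NUMBER mapping described above.
# hevo_type_for_number is a hypothetical name, not a Hevo API.

def hevo_type_for_number(precision: int, scale: int) -> str:
    """Map an Oracle NUMBER(p, s) with scale <= 0 to a Hevo data type."""
    if scale > 0:
        raise ValueError("the width rule applies only when scale <= 0")
    width = precision - scale  # e.g. NUMBER(10, -2) -> width 12
    if width < 5:
        return "SHORT"
    if width < 10:
        return "INT"
    if width < 19:
        return "LONG"
    return "DECIMAL"

print(hevo_type_for_number(4, 0))    # SHORT
print(hevo_type_for_number(10, -2))  # LONG  (width = 10 - (-2) = 12)
print(hevo_type_for_number(20, 0))   # DECIMAL
```

A negative scale widens the stored value, which is why NUMBER(10, -2) lands in LONG rather than INT in the example above.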


Handling of Unsupported Data Types

If any of the Source objects contain data types that are not supported by Hevo, the corresponding fields are marked as unsupported during object configuration in the Pipeline.

At this time, the following Oracle data types are not supported by Hevo:

  • CLOB
  • NCLOB
  • BLOB
  • UROWID
  • LONG
  • RAW
  • LONG RAW
  • XMLTYPE
  • BFILE
  • Any other data type not listed in the tables above.

Source Considerations

  • Oracle redo logs do not support user-defined data types. Therefore, fields with such data types are not captured in the log and are lost during replication.

  • Hevo uses Oracle LogMiner to read redo logs to capture changes made to a database. When LogMiner reads a log entry, it refers to the Oracle data dictionary to understand and translate the logs. However, as the data dictionary is not included in the redo logs by default, LogMiner uses the version of the dictionary available at the time of reading the logs. As a result, if the schema of any active object in the Pipeline changes after the redo log was generated, Hevo cannot track these schema changes, potentially leading to data loss. However, schema changes to skipped tables do not impact the Pipeline.

    Note: If your Source database schema changes frequently and you want Hevo to write the Oracle data dictionary to the redo logs to track these changes, contact Hevo Support.

  • Oracle LogMiner does not support tables or columns with names longer than 30 characters.

    As a result, Hevo cannot ingest data from objects whose names exceed this limit or that contain columns with names exceeding 30 characters. These objects are marked as Inaccessible during object configuration in the Pipeline.

    To ingest data from these objects, rename the tables and columns to ensure the names are within the supported limit.
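A pre-flight check for the 30-character limit described above can be sketched as follows. The function name and the example table and column names are hypothetical.

```python
# Sketch: flag objects that LogMiner cannot read because a table or
# column name exceeds 30 characters. Names here are illustrative.
MAX_IDENTIFIER_LENGTH = 30

def inaccessible(table_name: str, column_names: list[str]) -> bool:
    """Return True if any identifier exceeds the LogMiner name limit."""
    names = [table_name, *column_names]
    return any(len(name) > MAX_IDENTIFIER_LENGTH for name in names)

print(inaccessible("ORDERS", ["ORDER_ID", "CUSTOMER_ID"]))           # False
print(inaccessible("ORDERS", ["A_VERY_LONG_COLUMN_NAME_EXCEEDING_LIMIT"]))  # True
```

Running such a check before configuring objects in the Pipeline makes it easy to spot which tables and columns need renaming.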

  • Oracle does not capture the negative sign for timestamps representing Before Common Era (BCE) dates in the redo logs. As a result, Hevo replicates negative timestamp values in DATE, TIMESTAMP, TIMESTAMPTZ, and TIMESTAMP_LOCALTZ fields as positive timestamps in the Destination columns. For example, a value of 2024-01-24 12:34:56 BC in a TIMESTAMP column in the Source table is replicated as 2024-01-24 12:34:56 in the Destination column.


Limitations

  • Hevo does not set the metadata column __hevo__marked_deleted to True for data deleted from the Source table using the TRUNCATE command. This action could result in a data mismatch between the Source and Destination tables.

  • If a table is created in the Source using the CREATE TABLE AS SELECT (CTAS) statement after the Pipeline is created, Hevo cannot automatically capture its schema. This happens because CTAS tables do not generate the metadata that Hevo uses to detect new objects. As a result, data ingestion for the corresponding object fails, and the object is marked as Skipped in the Pipeline. To ingest data for this object, you must resync it.

Last updated on Mar 16, 2026
