
Data Loading

The data ingested from the Source is loaded to the Destination warehouse at each run of your Pipeline. If your Events quota gets exhausted, the Events in the Pipeline are held by Hevo until you purchase additional Events, upon which they are replayed. Read the data replication section of your Source to know its replication strategy.

By default, Hevo retains in the Destination tables any primary keys that are defined in the Source data.

You can load data both with and without primary keys:


Data without Primary Keys

If primary keys are not present in the Destination tables, Hevo directly appends the data to the target tables. While this can result in duplicate Events in the Destination, the data loading process adds no resource overhead.
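
As a rough illustration, an append-only load amounts to a plain batch INSERT with no key checks. The sketch below assumes a generic DB-API 2.0 connection to the Destination warehouse and a hypothetical orders table; none of these names come from Hevo's product.

```python
# Minimal sketch of an append-only load, assuming a DB-API 2.0 connection
# (e.g., psycopg2 for Amazon Redshift). The table and column layout are
# hypothetical and not part of Hevo's actual implementation.

def append_events(conn, events):
    """Insert every ingested Event as a new row; duplicates are not checked."""
    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO orders (order_id, status, __hevo_ingested_at)"
            " VALUES (%s, %s, %s)",
            [(e["order_id"], e["status"], e["ingested_at"]) for e in events],
        )
    conn.commit()  # each run of the Pipeline simply appends its batch
```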


Data with Primary Keys

If primary keys are present in the Source data but cannot be enforced in the Destination warehouse, as is the case with Google BigQuery, Amazon Redshift, and Snowflake, then uniqueness of the data cannot be ensured by default. Hevo circumvents this lack of primary key enforcement and guarantees that no duplicate data is loaded to or exists in the Destination tables by the following steps (a rough sketch of the resulting keep-latest logic appears after the note below):

  • Adding temporary Hevo-internal meta columns to the tables to identify eligible Events.

  • Using specific queries to cleanse the data of any duplicate and stale Events.

  • Adding metadata information to each Event to uniquely identify its ingestion and loading time.

Note: These steps utilize your Destination system’s resources, in terms of CPU usage for running the queries and additional storage for the duration of the data processing.
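
Hevo’s actual cleansing queries are internal, but the general keep-latest pattern they implement can be sketched in plain Python: within a batch, keep only the newest Event per primary key, and drop any Event whose ingestion timestamp is older than the row already in the Destination. The id and ingested_at field names below are assumptions for illustration, not Hevo’s meta columns.

```python
# Illustrative keep-latest deduplication: a sketch of the pattern described
# above, not Hevo's internal queries.

def deduplicate_batch(events):
    """Keep only the most recently ingested Event per primary key."""
    latest = {}
    for event in events:  # events: list of dicts with "id" and "ingested_at"
        key = event["id"]
        if key not in latest or event["ingested_at"] > latest[key]["ingested_at"]:
            latest[key] = event
    return list(latest.values())

def drop_stale(events, destination):
    """Discard Events older than the row the Destination already holds.

    `destination` maps primary key -> currently loaded ingestion timestamp.
    """
    return [
        e for e in events
        if e["id"] not in destination or e["ingested_at"] > destination[e["id"]]
    ]
```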


Additions to the Destination Schema

Irrespective of the type of data, Hevo adds the following columns to the Destination tables as part of the data loading process (the is_row_deleted and deleted_timestamp columns apply only to Pipelines with a Stripe Source):

  • __hevo_ingested_at: A timestamp applied to each Event during ingestion. This timestamp helps verify that the ingested Event is more current than what already exists in the Destination. For example, by the time a failed Event is resolved and replayed, a more recent Event may already have been loaded to the Destination; by comparing the ingestion timestamps, the stale record can be discarded from the ingested data. The timestamp is also retained in the Destination table.

  • __hevo_loaded_at: A timestamp indicating when data was inserted, updated, or deleted (delete flag updated) in the Destination table. The difference between __hevo_ingested_at and __hevo_loaded_at measures the total time taken by Hevo to process the respective Event and can be used to identify latency.

  • __hevo__marked_deleted: A column that logically represents a deleted Event. When an Event is deleted in the Source, it is not physically deleted from the Destination table during the data loading process. Instead, the logical column, __hevo__marked_deleted, is set to True for it.

  • is_row_deleted: A column that logically represents a data record deleted in the Stripe Source. Hevo ingests the deleted Event, sets is_row_deleted to True for it, and loads it to the Destination table.

  • deleted_timestamp: A timestamp indicating when the data was deleted in the Stripe Source. You can use the value of deleted_timestamp to identify the Stripe event from the events object that deleted the data.
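
These columns are also useful in downstream queries, for example, to measure load latency or to exclude soft-deleted rows. The following is a minimal sketch assuming a hypothetical table my_table and a DB-API connection; only the column names come from the list above, and the timestamp arithmetic varies by warehouse.

```python
# Hypothetical downstream query using Hevo's metadata columns. The table
# name, connection, and DATEDIFF syntax (Redshift/Snowflake style) are
# assumptions, not part of Hevo's documentation.

LATENCY_QUERY = """
    SELECT AVG(DATEDIFF(second, __hevo_ingested_at, __hevo_loaded_at))
    FROM my_table
    WHERE __hevo__marked_deleted = FALSE  -- skip soft-deleted rows
"""

def average_load_latency_seconds(conn):
    """Return the mean ingestion-to-load latency, in seconds."""
    with conn.cursor() as cur:
        cur.execute(LATENCY_QUERY)
        return cur.fetchone()[0]
```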

Note: Hevo also adds the internal columns __he__msg_seq_id and __hevo__consumption_id to the ingested data to help with the deduplication process; these columns are removed before the final step of loading data into the Destination tables.

